IRC log of webrtc on 2020-03-30

Timestamps are in UTC.

15:03:18 [RRSAgent]
RRSAgent has joined #webrtc
15:03:18 [RRSAgent]
logging to https://www.w3.org/2020/03/30-webrtc-irc
15:03:29 [dom]
Meeting: WebRTC Virtual Interim
15:03:35 [dom]
Chair: Bernard, Jan-Ivar, Harald
15:03:40 [dom]
Scribe: dom
15:03:43 [hta]
hta has joined #webrtc
15:03:56 [dom]
Topic: WebRTC Features at risk
15:04:05 [dom]
Bernard: a few unimplemented features not yet marked at risk
15:04:09 [dom]
... 3 issues filed related to that
15:04:24 [dom]
... first one is Issue 2496 - the voiceActivityFlag exposed in SSRC, not implemented anywhere
15:04:32 [dom]
... any disagreement to marking it at risk?
15:04:36 [dom]
Henrik: SGTM
15:05:01 [dom]
Bernard: we have one unimplemented MTI per issue 2497, partialFramesLost
15:05:10 [dom]
... should we remove it from the MTI list?
15:06:08 [dom]
Jan-Ivar: no objection to unmark that one; will we get implementations for the other ones?
15:07:05 [dom]
Henrik: they need to be moved from one dictionary to the other - they've been implemented, they just need to be moved into a different object
15:08:01 [dom]
JIB: it's not clear to us yet how easy it will be to implement in Firefox; pointers to upstream webrtc.org hooks would help
15:08:13 [dom]
RESOLVED: remove MTI marker on partialFramesLost
15:08:40 [dom]
Bernard: last one is multiple DTLS certificates, not implemented anywhere
15:09:12 [dom]
HTA: the goal was to help support signed certificates, which is completely unspecified
15:10:15 [dom]
Dom: so if we remove support for it, the idea would be to say the spec only uses the first certificate in the list
15:11:02 [dom]
TimP: wasn't the background of this support for multiple kinds of certificates?
15:11:17 [dom]
Bernard: with full support for DTLS 1.2, that's no longer relevant
15:11:46 [dom]
Bernard: I'm hearing consensus on all of these
15:11:53 [dom]
Topic: ISSUE-2495 When is negotiation complete?
15:12:14 [dom]
JIB: this emerged while writing tests for WPT, but is applicable beyond testing
15:12:48 [dom]
... "Perfect negotiation" is the pattern we recommend in the spec that helps abstract away the negotiation from the rest of the application logic
15:14:06 [dom]
... having a negotiationended event would help avoid glare, simplify the logic
15:14:26 [dom]
... the obvious approach to detect the end of negotiation is racy
15:16:04 [jianjunz]
jianjunz has joined #webrtc
15:16:32 [dom]
... there are workarounds: action-specific spin-tests (while loops)
15:16:46 [dom]
... but that's bad, leading to timeouts
15:17:23 [dom]
... I've also tried another workaround by dispatching my own negotiated event at the exact right time
15:17:40 [dom]
... this is slightly better, but we can still miss cases
15:18:30 [dom]
... can we do better? I have 3 proposals
15:18:47 [dom]
... fire a negotiationcomplete from SRD(answer) if renegotiation isn't needed
15:19:29 [dom]
... one downside is that subsequent actions may delay the event if further negotiation is needed in some edge cases
15:20:22 [dom]
... Proposal B is a boolean attribute for negotiationneeded - needs careful implementation in relation to the negotiationneeded event
15:20:29 [dom]
... it's also delayed by subsequent actions
15:20:57 [dom]
... Proposal C: an attribute exposing a promise for negotiationcomplete
15:21:45 [dom]
... it's better because it's not delayed by subsequent actions (by replacing promises as new negotiations get started)
15:22:21 [dom]
Henrik: compared to proposal A?
15:23:01 [dom]
JIB: imagine you call addTransceiver-1 & addTransceiver-2, you have to wait until addTransceiver-2 before the event fires (which you don't in proposal C)
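The promise-replacement idea behind Proposal C can be sketched outside the browser. This is a hypothetical illustration, not the spec's API: a tracker replaces its promise whenever a new negotiation starts, so code can compare promise identity to tell whether the negotiation it was waiting on has been superseded.

```javascript
// Hypothetical sketch of Proposal C (names are illustrative, not spec API):
// the PC would expose a promise that is replaced whenever a new
// negotiation starts, and resolved when negotiation completes.
class NegotiationTracker {
  constructor() {
    this._start();
  }
  _start() {
    this.negotiationComplete = new Promise((resolve) => {
      this._resolve = resolve;
    });
  }
  // Would be driven by negotiationneeded: replace the promise so callers
  // wait for the *current* negotiation rather than a stale one.
  onNegotiationNeeded() {
    this._start();
  }
  // Would be driven from SRD(answer) when no further negotiation is needed.
  onNegotiationComplete() {
    this._resolve();
  }
}

// Comparing promise identity reveals whether a new negotiation superseded
// the one you captured earlier.
const tracker = new NegotiationTracker();
const before = tracker.negotiationComplete;
tracker.onNegotiationNeeded(); // a new negotiation starts
console.log(before === tracker.negotiationComplete); // false: a new "train"
```

This is why Proposal C is not delayed by subsequent actions: each action simply swaps in a fresh promise instead of pushing back a single event.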
15:23:39 [dom]
Henrik: you can build your own logic if you care about partial negotiations - what you want to know in general is "am I done or not"?
15:23:59 [dom]
HTA: I question the question - why should I care if negotiation is complete?
15:24:28 [dom]
... what you have here is indeed a problem, but what the app cares about is whether the transceiver is connected to a live channel or not
15:24:42 [dom]
... you don't have this problem with datachannels since you have an onopen event
15:25:06 [dom]
... if we want to solve this at all (I would prefer not adding any API at this point), I think we should look at a signal on the transceiver availability
15:25:23 [dom]
Bernard: don't you get that from our existing states, e.g. via the transports?
15:25:38 [dom]
Harald: we have it with currentDirection, but without an event, it has to be polled
15:26:37 [dom]
JIB: I think apps do need to know whether the transceiver is ready or not, and having that done with a timeout is not great
15:26:52 [dom]
HTA: what I'm saying is what matters is the readiness of the transceiver, not the state of the negotiation
15:27:05 [dom]
... if we want to add anything here, it should be a directionchange event to the transceiver
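Absent a directionchange event, the polling Harald mentions looks roughly like this. The transceiver here is a stand-in object (the real one would come from `pc.addTransceiver()`); the helper name is invented for illustration.

```javascript
// Stand-in for an RTCRtpTransceiver; in a real app this would come from
// pc.addTransceiver() and currentDirection would be set by negotiation.
const transceiver = { currentDirection: null };

// Poll currentDirection until it reaches the wanted value -- the workaround
// apps need today because there is no directionchange event.
function waitForDirection(t, wanted, intervalMs = 10) {
  return new Promise((resolve) => {
    const id = setInterval(() => {
      if (t.currentDirection === wanted) {
        clearInterval(id);
        resolve();
      }
    }, intervalMs);
  });
}

// Simulate negotiation completing a bit later.
setTimeout(() => { transceiver.currentDirection = "sendrecv"; }, 30);

waitForDirection(transceiver, "sendrecv").then(() => {
  console.log("transceiver is live:", transceiver.currentDirection);
});
```

A directionchange event on the transceiver would let the `setInterval` loop be replaced by a single listener.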
15:27:26 [dom]
TimP: it could be done with proposal C which indicates "what" is complete (i.e. which transceiver is ready)
15:27:38 [dom]
... otherwise, I agree you want to know what it is you got
15:27:49 [dom]
JIB: you would get that via JS closure
15:30:33 [dom]
Henrik: I think this is a "nice-to-have" - useful for testing & debugging; but I think it's a problem that can be solved with the existing API
15:30:53 [dom]
JIB: I don't think this can be polyfilled, given that negotiationneeded is now queued
15:31:53 [dom]
... negotiationneeded can be queued behind other operations in the PC
15:32:39 [dom]
Henrik: you can detect this for each of your negotiated states by observing which changes are actually reflected (with different logic for each type of negotiation)
15:32:50 [dom]
... this would be nicer, but I don't think it's needed
15:33:04 [dom]
JIB: you mentioned setStreams - it cannot be observed locally
15:33:58 [dom]
... another advantage of the promise is that it lets you determine if you're still on the same "negotiation train" by comparing promises
15:35:00 [dom]
Youenn: it would be interesting to see if libraries built on top of PC are implementing that pattern
15:35:12 [dom]
... this might be a good way to determine its appeal
15:35:37 [dom]
Henrik: it would be great for debugging for sure, esp in the age of perfect negotiation
15:37:32 [dom]
Youenn: so let's wait to see what apps adopting perfect negotiation do before committing to this
15:37:51 [dom]
Conclusion: keep for post 1.0 (in webrtc-extension?)
15:38:06 [dom]
Topic: ISSUE 2502 When are effects of in-parallel stuff surfaced?
15:40:36 [dom]
Henrik: the signaling/transceiver states defined in JSEP and in the API can't be the same except at the cost of racy behavior
15:41:19 [dom]
... which means the requirements imposed by JSEP on these states create ill-defined / inconsistent behaviors
15:42:21 [dom]
... Proposals to address this: Proposal A: we make addTrack dependent only on WebRTC states, not JSEP states
15:42:32 [dom]
... this is probably what the spec says, not what implementations do
15:43:20 [dom]
... Proposal B: we make addTrack depend on a "JSEP transceiver", but would be racy and create implementation specific behaviors
15:43:38 [dom]
JIB: I agree there is a race in JSEP
15:43:53 [dom]
... JSEP was written without thinking about threads at all
15:45:02 [dom]
... the problem is not really about whether we're in a JS thread or not
15:45:08 [dom]
... we have to make copies of things
15:45:32 [dom]
Henrik: my mental model is that WebRTC JS shallow objects refer to JSEP objects
15:46:03 [dom]
... the only problem is with addTrack because of recycling of transceivers
15:47:23 [dom]
JIB: the hygienic thing would be to copy state off from JSEP when looking at transceivers. Is that proposal A?
15:47:30 [dom]
Henrik: it's implicit in proposal A
15:48:59 [dom]
JIB: the only problem with that, per your example on slide 17, is that it would leave a hole e.g. in the context of perfect negotiation
15:49:55 [dom]
Henrik: I think that's a better alternative than starting to meddle with internal JSEP objects
15:50:18 [dom]
... the hole here is that if you're unlucky, you need another round of negotiation
15:51:09 [dom]
... and in that situation, you would be in a racy scenario in the first place
15:51:24 [dom]
HTA: the code of slide 17 is not compatible with perfect negotiation
15:51:46 [dom]
Henrik: I think proposal A is the only sane approach
15:52:08 [dom]
HTA: this sounds ready for a pull request
15:52:42 [dom]
JIB: I think the spec is currently racy given "JSEP in parallel" so it's more than an informative change
15:53:38 [dom]
RESOLVED: getTransceivers() SHALL NOT be racy
15:53:47 [dom]
Topic: Media Capture and Streams
15:53:54 [dom]
SubTopic: Issue 671 new audio acquisition
15:53:59 [dom]
s/Topci/Topic/
15:54:10 [dom]
Sam: Sam Dallstream, engineer at Microsoft
15:54:19 [dom]
... this is a feature request / issue on the spec
15:54:43 [dom]
... as the spec stands today, it is hard to differentiate streams meant for speech recognition vs communication
15:55:14 [dom]
... the current implementations are geared towards communication, which sometimes is at odds with the needs of speech recognition
15:55:30 [dom]
... e.g. in comms, adding noise can be good, but it hurts with speech recognition
15:56:14 [dom]
... slide 22 shows the differences of needs between the two usages, extracted from ETSI specs
15:56:23 [dom]
s/22/22 and 23
15:57:23 [dom]
... the first proposal to address this would be a new constraint (e.g. "category") that allows specifying "default", "raw", "communication", "speechRecognition"
15:57:45 [dom]
... it translates well to existing platforms: Windows, iOS, Android have similar categories
15:58:18 [dom]
... the problem is that it competes with content-hint in a confusing way - content-hint is centered around default constraints AND provides hints to consumers of streams
15:58:38 [dom]
... whereas this one is setting optimization on the stream itself (e.g. levels of echo canceling)
15:59:05 [dom]
... A second proposal is to modify the constraints to make them a bit more specific, and add a new hint to content-hint
15:59:22 [dom]
... the advantage is that it fits the current content-hint draft, with more developer freedom
15:59:47 [dom]
... but it may be hard to implement
16:00:22 [dom]
... Would like to hear if there is consensus on the need, and get a sense of the direction to fulfill it
16:01:08 [dom]
Henrik: for clarification, for echoCancellation, it's not turning it off, it's tweaking it for speech recognition
16:01:27 [dom]
Sam: right - right now echoCancellation is a boolean (on or off)
16:02:01 [dom]
HTA: but then how does it fit well with the existing model?
16:02:36 [dom]
Sam: I meant it's easier for API consumers, but you're right it conflicts with other constraints
16:03:01 [dom]
Bernard: this is not about device selection here
16:03:13 [dom]
JIB: indeed, most of this is done in software land in any case
16:03:23 [dom]
Henrik: right, here it's more about disable/enabling feature
16:04:16 [dom]
JIB: what's the use case that can't be done by gUM-ing & turning off echoCancellation, autoGainControl, ambientNoise?
16:04:31 [dom]
Bernard: it's not on & off
16:05:00 [dom]
TimP: e.g. in speech interactions, you don't want the voice AI to hear itself
16:05:39 [dom]
Sam: Alexa right now turns off everything and then adds their own optimization for speech recognition
16:06:03 [dom]
... so this can already be done, but the idea is to allow built-in optimizations so that not everyone has to do their own thing
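What "turning off everything" means in practice is requesting an unprocessed track by disabling the standard audio processing constraints, then layering app-specific speech processing on top. A minimal sketch (constraint names are the standard MediaTrackConstraints ones; the gUM call is shown commented since it only exists in a browser):

```javascript
// Request an unprocessed ("raw") audio track by disabling the standard
// processing constraints, so the app can apply its own speech-oriented
// processing downstream.
const rawAudioConstraints = {
  audio: {
    echoCancellation: false,
    autoGainControl: false,
    noiseSuppression: false,
  },
};

// In a browser:
// const stream = await navigator.mediaDevices.getUserMedia(rawAudioConstraints);
console.log(Object.keys(rawAudioConstraints.audio).length); // 3
```

The proposals in this discussion would replace this opt-out-of-everything pattern with a built-in optimization target, so not every app has to ship its own processing.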
16:06:20 [dom]
Youenn: do systems provide multiple echo cancellers?
16:06:31 [dom]
... I don't think you can do that in MacOS
16:06:43 [dom]
Sam: that's why the second proposal isn't as straightforward
16:07:04 [dom]
s/Mac/i
16:08:09 [dom]
Henrik: the advantage of these categories is that they're vague enough that implementations can adjust depending on what the underlying platforms provide
16:08:19 [dom]
... but then it's not clear exactly what the hint does
16:08:35 [dom]
HTA: I would expect a lot of variability across platforms in any case
16:08:47 [dom]
Henrik: as is the case for echoCancellation: true
16:09:01 [dom]
HTA: indeed (as the multiple changes of the impl in Chrome show)
16:09:38 [dom]
Henrik: it sounds like it is hard-enough to describe, implementation-specific enough that it should be a hint
16:10:08 [dom]
JIB: I think it's fair to say that the audio constraints have been targeted at the communications use case
16:10:26 [dom]
... not sure how much commitment there is for the purpose of speech recognition
16:11:07 [dom]
Sam: right
16:12:08 [dom]
Henrik: with interop in mind, echoCancellation: true worked because everyone did their best job at solving it, not by doing the same thing
16:12:50 [dom]
... to get that done with this new category, we would need the same level of commitment and interest from browser vendors
16:13:20 [dom]
... the alternative is turning everything off and doing post process in WebAudio/WASM
16:13:45 [dom]
TimP: another category here, beyond communications and speech recognition, is broadcast
16:14:09 [dom]
... it shouldn't be a two-state switch
16:14:27 [dom]
JIB: is there anything here that couldn't be solved with WebAudio / AudioWorklets?
16:14:43 [dom]
Sam: I would need to take another look at that one
16:14:53 [dom]
HTA: you would still need a "raw" mode
16:15:17 [dom]
Youenn: maybe also look at existing open source implementations of ambient noise suppression and whether they share some rough parameters
16:17:28 [dom]
Sam: it sounds like we're leaning towards the 2nd proposal
16:17:45 [dom]
Dom: maybe first also determine what can be done in user land already with Web Audio / Web Assembly
16:18:01 [dom]
... if this is already doable there, then maybe we should gain experience with libraries first
16:18:26 [dom]
HTA: given we already have a collection of hints in content-hint that have been found useful, it's kind of easy to add it there
16:19:28 [dom]
Bernard: would this apply up to gUM?
16:19:33 [dom]
HTA: yes, that's already how it works
16:19:55 [dom]
JIB: if we're thinking of adding a new hint, we may need new constraints specific to speech recognition
16:21:13 [dom]
[discussion around feature detection for content-hints]
16:21:59 [dom]
SubTopic: ISSUE 639 Enforcing user gesture for gUM
16:22:21 [dom]
Youenn: powerful APIs are nowadays bound to user gesture
16:22:37 [dom]
... if we were designing gUM today, it would be as well
16:22:44 [dom]
... but that's not Web compatible to change now
16:23:01 [dom]
... can we create the conditions to push Web apps to migrate to that model?
16:24:06 [dom]
... PR 666 proposes to require user gesture to grant access without a prompt
16:24:29 [dom]
... I've looked at a few Web sites; whereby.com works with the restrictions on
16:24:36 [dom]
... it wouldn't work in Hangout or Meet
16:25:37 [dom]
... Interested in feedback on the approach, and in help with outreach to webrtc app developers
16:26:36 [dom]
Youenn: the end goal would be that calling gUM without user gesture should be rejected
16:26:57 [dom]
... user gesture is currently an implementation-dependent heuristic - this is being worked on
16:27:17 [dom]
Henrik: I think we would need it to be better defined
16:27:31 [dom]
... it is also linked to 'user-chooses'
16:27:56 [dom]
Youenn: the situation is very similar to getDisplayMedia where Safari applies the user gesture restriction
16:28:14 [dom]
... it could be the same with gUM
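The gating being proposed can be sketched with the browser pieces passed in as parameters, so the logic is testable outside a page. In a real page, `isActive` would be `navigator.userActivation.isActive` and `getUserMedia` would be `navigator.mediaDevices.getUserMedia`; the function name is invented for illustration.

```javascript
// Sketch of the proposed restriction: without transient user activation,
// getUserMedia would reject instead of prompting. Dependencies are
// injected so this runs outside a browser.
async function gatedGetUserMedia(constraints, { isActive, getUserMedia }) {
  if (!isActive()) {
    // Matches how other powerful APIs signal a missing user gesture.
    throw new DOMException(
      "getUserMedia requires a user gesture",
      "NotAllowedError"
    );
  }
  return getUserMedia(constraints);
}

// Example with stubs standing in for the browser:
const stubs = {
  isActive: () => false, // e.g. gUM called on page load, no gesture yet
  getUserMedia: async () => "stream",
};
gatedGetUserMedia({ audio: true }, stubs).catch((e) => console.log(e.name));
```

Under this sketch, a page calling gUM on load (as Webex and Hangouts reportedly do) would get a `NotAllowedError`, which is exactly the compatibility concern raised below.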
16:28:33 [dom]
JIB: I like the direction of this; we could describe it as privacy & security issue
16:28:49 [dom]
... with feature-policy, there is a privacy escalation problem through navigation
16:29:36 [dom]
... jsfiddle allowed all feature policies, so from my site I could have navigated to my jsfiddle, gotten privileged there before navigating back with an iframe
16:29:45 [dom]
... so that sounds like an important fix
16:29:54 [dom]
... the prompting fallback sounds interesting
16:30:20 [dom]
... denying on page load might be harder to reach
16:30:43 [dom]
... it's not clear that same-origin navigation should be blocked
16:31:01 [dom]
Youenn: user gesture definition is still a heuristic, these could fit into that implementation freedom
16:31:11 [dom]
HTA: how much legitimate usage would we break?
16:31:39 [dom]
... before progressing this, we should have a deployed browser with a counter to detect with/without user gesture
16:32:21 [dom]
Youenn: Webex and Hangout call it on pageload, so that would make the counter very high
16:32:34 [dom]
HTA: so will someone get data?
16:32:50 [dom]
Youenn: I don't think Safari can do this; would be happy if someone can do this
16:32:56 [dom]
... I can reach to top Web site developers
16:33:11 [dom]
HTA: would anyone at Mozilla be interested in collecting this data?
16:33:39 [dom]
JIB: based on our user gesture algorithm? I'll look, but can't quite commit resources to this at the moment
16:34:07 [dom]
Conclusion: more info needed
16:34:49 [dom]
Topic: Next meeting
16:35:00 [dom]
HTA: probably in April / May
16:35:03 [dom]
RRSAgent, draft minutes
16:35:03 [RRSAgent]
I have made the request to generate https://www.w3.org/2020/03/30-webrtc-minutes.html dom
16:35:25 [dom]
RRSAgent, draft minutes v2
16:35:25 [RRSAgent]
I have made the request to generate https://www.w3.org/2020/03/30-webrtc-minutes.html dom
16:35:28 [dom]
RRSAgent, make log public
16:37:01 [dom]
i/Topic: WebRTC/Topic: WebRTC/
16:37:13 [dom]
s/Topic: WebRTC F/SubTopic: WebRTC F
16:37:32 [dom]
s/Topic: Issue-2495/SubTopic: Issue-2495
16:37:45 [dom]
s/Topic: Issue 2502/SubTopic: Issue 2502
16:37:46 [dom]
RRSAgent, make log public
16:41:01 [dom]
Present: Harald, Bernard, Jan-Ivar, Youenn, TimPanton, DomHM, SamDallstream, Florent, Henrik, Jianjun
16:41:07 [dom]
RRSAgent, draft minutes v2
16:41:07 [RRSAgent]
I have made the request to generate https://www.w3.org/2020/03/30-webrtc-minutes.html dom
18:57:33 [hta]
hta has left #webrtc