15:00:11 RRSAgent has joined #webrtc
15:00:15 logging to https://www.w3.org/2025/10/21-webrtc-irc
15:00:15 Zakim has joined #webrtc
15:00:15 RRSAgent, make log public
15:00:16 Meeting: WebRTC October 2025 meeting
15:00:16 Agenda: https://www.w3.org/2011/04/webrtc/wiki/October_21_2025
15:00:16 Slideset: https://docs.google.com/presentation/d/1ZLJKdwl_nGSVOLi7VZml9_rN0UGg0eMehE8vI0bUYC0/
15:00:16 Chairs: Guido, Jan-Ivar, Youenn
15:00:58 Present+ Dom, Youenn, Henrik, Jan-Ivar, Kacper_Wasniowski
15:01:21 Present+ Sameer
15:01:31 Present+ TimP
15:01:37 Present+ Guido
15:02:18 Present+ Bartosz_Habrjski
15:02:29 Recording is starting
15:02:40 Present+ Carine
15:03:03 Present+ Harald
15:04:49 Topic: -> https://github.com/w3c/webrtc-pc/issues/3077 Should the remote track mute in response to replaceTrack(null)?
15:04:49 [slide 10]
15:05:41 [slide 11]
15:07:13 [slide 12]
15:07:40 Jan-Ivar: I agree with your reading of the spec
15:07:51 ... track.enabled is for the Web site, track.muted for the user agent
15:08:10 ... media only flows if both are turned on
15:08:23 ... the mute is a signal from the UA to say "this is why you're not seeing frames"
15:08:35 [support for this view from Youenn and Harald]
15:08:40 [slide 13]
15:09:54 [support for proposal from Jan-Ivar, Youenn, Harald]
15:10:01 RESOLVED: Proceed with the two proposals presented in the slides
15:10:21 Topic: -> https://github.com/w3c/webrtc-extensions/pull/243 WebRTC-extensions receiver.on[c/s]srcchange event
15:10:21 [slide 15]
15:11:58 Present+ SunShin
15:12:10 [slide 16]
15:14:02 [slide 17]
15:14:25 Jan-Ivar: ssrc change based on decode vs RTP?
15:15:17 Henrik: they're based on the last decoded; you want to get the closest to reception time to take into account the jitter buffer, e.g. if you want to adjust the volume
15:16:03 Jan-Ivar: so these events let apps avoid polling. Can we make the timing more explicit in the PR?
15:16:08 ... otherwise supportive
15:17:19 ... timeline on fixing unmute?
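[Editor's aside: the enabled/muted model Jan-Ivar describes under the replaceTrack(null) topic above can be sketched as follows. This is a minimal illustration, not spec text; the helper name is invented, and the object shape mirrors only the two MediaStreamTrack flags discussed.]

```javascript
// Sketch of the model from issue 3077: frames flow only when both the
// site-controlled `enabled` flag and the UA-controlled `muted` flag allow it.
// `describeTrackState` is a hypothetical helper, not part of any spec.
function describeTrackState(track) {
  if (!track.enabled) return "disabled-by-site"; // the Web site turned it off
  if (track.muted) return "muted-by-ua";         // the UA's "why you're not seeing frames" signal
  return "flowing";                              // both flags permit media
}
```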
15:17:36 Henrik: we had to revert due to a bug, but it's still on track to be fixed
15:17:58 RESOLVED: Merge the PR with clarification on decode
15:18:00 Present+ PeterT
15:18:13 Topic: -> https://github.com/w3c/webrtc-extensions/issues/244 WebRTC-extensions 5G network slicing
15:18:13 [slide 20]
15:21:55 [slide 21]
15:22:22 Peter: is there any risk to the UA enabling this and something going wrong with the app?
15:22:48 Youenn: it's a trade-off - using 5G network slices for latency might reduce your bandwidth (although it's usually not the case)
15:23:10 ... there might be slices for preserving energy, so if the UA is doing it wrong, it might have downsides
15:23:25 ... but in general, for typical webrtc apps, this should be fine
15:23:46 Peter: having the app able to opt in and out is one thing; what the default should be is another question
15:23:59 Youenn: my assumption is that for PeerConnection, the default should be opt-in
15:24:34 TimP: this doesn't tie in with our experience of how carriers are delivering it to most customers
15:24:43 ... network slices are on demand and cost money
15:25:09 ... I'm not sure we can come to a good conclusion yet
15:25:33 Youenn: 5G network slices can be used in many different contexts - I'm focusing on the much narrower set of things exposed in iOS (and probably Android)
15:26:09 Jan-Ivar: I'm in support of keeping the UA in control and letting web apps declare their preference in terms of low-latency needs
15:26:26 Youenn: I tend to agree on a hint approach
15:26:53 ... not sure we should tie this to 5G vs "best low latency possible" (where the UA would pick a 5G network slice if available)
15:27:15 ...
WebTransport has a congestionControl attribute to guide the UA
15:28:15 -> https://developer.android.com/develop/connectivity/5g/use-network-slicing 5G network slices on Android
15:28:47 Harald: to make 5G network slices usable for the Web app, there needs to be visibility on which slices are available under what constraints, or let the UA deal with it based on a declaration of needs from the app
15:29:04 ... we've had this discussion about control ownership between app and UA any number of times
15:29:43 ... big apps tend to want and need control, and so do browsers
15:29:59 ... given the pace of 5G rollout, I don't think we're in a hurry
15:30:25 [slide 22]
15:32:30 [slide 23]
15:35:06 Henrik: if you change which 5G network slice you use, does that change the ICE candidate you need to use?
15:35:52 Youenn: I don't think so; typically, for iOS, it's at the time you instantiate the connection (the UDP socket) that you'll need to indicate that this particular connection should use a low-latency slice - and it will remain like this for the rest of the connection
15:36:10 Henrik: if you're changing from low-latency to bandwidth, would you need an ICE restart?
15:36:29 Youenn: I'm thinking of an immutable configuration here
15:37:08 Jan-Ivar: an important use case for WebTransport is MOQ - so not all WebTransport is low latency
15:37:29 ... I support a hint; wouldn't want the app to learn about a slice being used
15:38:09 Youenn: so 1 or 3
15:38:27 ... low-latency could be the default, with an opt-out
15:38:54 Jan-Ivar: but if it's a scarce resource, opt-in might be better
15:39:34 ... we could use a 3-value enum ("default", opt-in value, opt-out value)
15:40:06 TimP: when we tried slicing last year, it got you a completely different IP address (although that might have changed since then)
15:40:18 ... so I agree it would be hard to make it a dynamic setting
15:40:43 ...
a concern with the enum is that you might have different needs for uplink and downlink (which slices in theory can support)
15:41:29 Youenn: worth digging into this - if you have more details on uplink/downlink settings, that'd be useful; with an enum, we can add values over time
15:42:30 Peter: having an enum to say "I really really care about latency" feels a bit awkward given that the whole stack is built for latency
15:43:34 ... especially if it's just a synonym for enabling network slicing
15:43:42 Youenn: this could enable other optimizations later on
15:44:24 SunShin: NVIDIA is interested in taking advantage of 5G network slicing; we've enabled this on our Android client and would like to see it expanded to the Web client
15:45:06 Harald: to match TimP's point, should this be moved to Receiver/Sender instead of the PC?
15:46:40 Youenn: I'm hearing interest in a hint-based API, possibly with 3 values ("default", "low-latency", "not-low-latency") and a separation between uplink and downlink; I can come back with a concrete proposal along these lines
15:46:55 RESOLVED: Craft a PR to webrtc-extensions to reflect discussion
15:47:00 RRSAgent, draft minutes
15:47:01 I have made the request to generate https://www.w3.org/2025/10/21-webrtc-minutes.html dom
15:47:10 Topic: -> https://github.com/w3c/webrtc-encoded-transform/issues/214 SFrame processing model
15:47:10 [slide 26]
15:50:39 [slide 27]
15:52:32 Jan-Ivar: how do you get SFrameOptions to the ScriptTransform?
15:53:10 Youenn: it could be a type to the options, or an additional argument - something we can bikeshed on
15:53:34 Henrik: if a=sframe is not present but you wanted to use SFrame, you would renegotiate on the receiver side
15:53:44 ...
if it is present, is the SFrameTransform created for you?
15:54:04 Youenn: you will need to create it yourself (or a ScriptTransform), otherwise the packets will be dropped as they can't be decrypted
15:54:40 Henrik: so until the transform is set up, there is a race condition where the first few frames can be dropped because they can't be decrypted yet
15:55:49 Youenn: on the sender, if you start with no transform, you need to renegotiate; until the negotiation goes back to stable, the packets won't be able to flow - there will be delay in the switch
15:56:30 Henrik: [realizing there may not be a race condition after all]
15:56:52 ... re a=sframe
15:57:17 Youenn: if A sends a=sframe, and B doesn't support it, B will respond without the a=sframe, and A will understand B doesn't support it
15:57:44 Jan-Ivar: how will that be exposed to the app?
15:57:59 Youenn: the UA will reject the m-line
15:58:14 Jan-Ivar: should we open the possibility for the app to fall back to no-sframe?
15:58:33 Youenn: this could be exposed with a reason why the m-line was rejected
15:58:49 Henrik: this isn't exposed with existing m-line rejections today
15:59:04 Present+ BrianBaldino
15:59:31 Youenn: the web app will have to react to the logic of rejection implemented by the UA
16:00:11 Brian: that's the trade-off of locking the sframe association to the m-line (which avoids the race condition Henrik was worrying about)
16:00:38 Youenn: yes, that rigidity helps avoid situations where e.g.
the app would think sframe is set up when it actually wasn't
16:01:55 Brian: we should document the recommended way to support a fallback scenario (where the app prefers sframe but is happy to go without it) by providing two m-lines, with and without sframe
16:03:12 Henrik: if B doesn't support sframe, it might end up with a receiver where nothing comes in - which is probably fine
16:03:52 Youenn: if we see web apps needing to parse SDP to understand sframe rejection, this might suggest we need an API to surface it
16:04:04 Henrik: maybe you can detect it through a stopped transceiver?
16:04:11 Youenn: right, but there could be multiple reasons
16:05:01 Kacper: how should SFrame packet vs frame be negotiated? In SFrameOptions? In the Transceiver?
16:05:16 Youenn: the SFrameTransform object would have it set with an options object
16:05:41 ... for ScriptTransform, there are ways to make it work, but we haven't received requests to support per-packet in ScriptTransform so far
16:05:49 Harald: the SDP rule is "ignore what you don't understand"
16:06:29 ... if you want to offer "communicate in the clear or in sframe", you have to send an offer with sframe, receive an answer where it's removed, and then turn off sframe on that transceiver
16:07:23 ... (in most cases, falling back to non-encrypted would at least require going back to the user, and so probably a different PC)
16:08:07 Youenn: one situation that we'll need to consider is starting with no sframe, rolling back, then switching to sframe - we might need to allow switching the transform to null
16:08:46 Jan-Ivar: not sure why we would not support SPacket with ScriptTransform
16:09:42 Youenn: on the receiver side, you receive an SFrameChunk, either several RTP packets concatenated or a single packet, based on the payload
16:10:03 ... if you're receiving per packet, on the receiver side you could receive an encoded videoframe
16:10:47 ...
the ScriptTransform would decode it; it would then feed it to its depacketizer until it has a full frame, which can then be passed to the writablestream
16:11:18 Jan-Ivar: why would the JS even see the encryption? why can't the UA decrypt it for me, whether at the packet or frame level?
16:11:35 Youenn: if we do that, we need to expose key management to ScriptTransform
16:12:11 ... my thinking is that ScriptTransform would be used e.g. for crypto suites not supported by the UA
16:12:32 ... it is possible to add SPacket support to ScriptTransform, but it requires new API on the sender side
16:13:16 ... you need to tell the SFrame packetizer where to split chunks (either enqueuing several frames, or providing delimiters to the packetizer) - it's feasible, but it will require changes either to the API or the processing model
16:13:29 ... since nobody has requested it, I think we can leave it for later
16:14:27 Jan-Ivar: maybe so, but we still need to clarify what gets exposed on the receiver side - in particular that it would need to go through the SFrameDecrypter (which isn't clear on the slide) rather than be done transparently by the UA
16:14:42 Youenn: we could change "type" to "packetizationFormat" to clarify that
16:16:07 Henrik: making sframe fail fast sounds good; but the answerer may still think the m-section exists until the next negotiation
16:16:19 ... the fallback scenario would not need a negotiation
16:16:39 Youenn: for simplicity's sake, rejection seemed easier; otherwise, this needs additional API surface
16:16:59 ... we could add it later
16:17:06 [slide 28]
16:17:12 RRSAgent, draft minutes
16:17:13 I have made the request to generate https://www.w3.org/2025/10/21-webrtc-minutes.html dom
16:19:53 Harald: is there any use case for migration? this would avoid the whole rollback discussion
16:20:17 ... this would also help with the timing of the transform object
16:20:21 Henrik: +1
16:21:15 ...
I think it should be a prerequisite that the sframe transform is set when you do the offer and the answer
16:21:50 ... if we don't have to support migrations, we avoid the problems with rollback, but also race conditions during negotiation
16:22:28 Youenn: I like this, it would be simpler; the transform setter would throw if this wasn't negotiated
16:23:55 Brian: we would be interested in supporting both no-sframe-to-sframe scenarios and vice versa
16:24:51 Youenn: sframe comes with new payload types (based on Jonathan's feedback) which can help with disambiguating m-lines
16:26:46 ... I think we should start with the simpler model Harald suggested, and extend later if we see real benefits for migration
16:27:22 [slide 29]
16:28:34 Jan-Ivar: in my mind, SFrame or SPacket isn't really a matter of use case, it's only the underlying technology; I'll file an issue to follow up on this
16:28:59 Youenn: we could have a different setter, but that seems more complex
16:31:11 Youenn: I'll update the PR with the feedback, with a more constrained model; we'll discuss it in the editors meeting and see if it needs to come back to the WG, but I hear overall consensus on the direction
16:31:24 RESOLVED: Update the Pull Request to align with the feedback at the meeting
16:31:56 Topic: -> https://github.com/w3c/mediacapture-main/issues/1058 Media Capture: Clarify what "system default" means
16:31:56 [slide 33]
16:39:08 Henrik: do we need the app to care about browser vs OS default?
16:40:13 Jan-Ivar: e.g. if you change the system default microphone in system settings in macOS, you'll get a devicechange event in Safari and Firefox with a changed order in enumerated devices
16:40:36 ... this allows web sites to learn about user choices at the OS level, but this isn't behaving that way with the picker approach in Chrome
16:42:22 Youenn: for microphone, there is an OS default - in that case, FF's behavior follows my reading of the spec
16:42:42 ...
there is no OS default for cameras, so that leaves us in a bit of limbo with an undefined behavior
16:43:16 ... it's a real issue - some web sites get a track and, if they don't get the device id they expect, they re-ask with the first enumerated device
16:43:33 ... I'm not sure how to handle the camera case
16:44:22 Jan-Ivar: that's an issue we have with Teams (we're working on it with them); Firefox has had that behavior for years, so most web sites should be OK
16:44:33 ... aligning with Firefox should help increase web compat
16:45:21 Guido: I agree with Youenn that if we interpret system default as OS default, Chrome has a bug for microphones here
16:45:51 ... the intent from the Permissions team was to show the device chosen by the user as "more default" (featured more prominently)
16:46:27 ... I agree we should fix it for microphones; for camera, since there is no system default, we can't assume there is one, and apps shouldn't assume there is one, which is why the "default" entry is useful
16:47:37 Jan-Ivar: there is often a "primary" camera that it would be good to list first
16:47:50 Guido: but "primary" might also apply to the one chosen by the user
16:49:54 ... we would have to define it, and it's not obvious what this would be
16:50:16 Youenn: +1 to filing an issue on camera, informed by what Chrome is doing for primary
16:50:53 Jan-Ivar: what about support for the devicechange event on a change to the system default?
16:53:12 Youenn: that would benefit from understanding the underlying approach to picking the primary camera in Chrome - e.g.
if several pages are capturing, this may trigger a devicechange event
16:55:31 RESOLVED: Confirm the spec is as expected for mics (and Chrome needs a fix); camera needs more discussion on interop in a dedicated spec issue
16:55:54 Topic: -> https://github.com/w3c/mediacapture-extensions/issues/164 Detect speech on muted microphone
16:55:54 [slide 34]
16:58:30 Youenn: there is a solution to that in the MediaSession API, with the voice activity media session action, whose handler is called when voice activity is detected and the mic is muted
16:58:35 ... this is already implemented in Safari
16:58:46 ... it's not at the track or getUserMedia level
16:59:22 ... that allows muting the microphone while letting the web site use mediasession to request unmute
16:59:43 Guido: I like the direction, and it's good to hear media session has it (would need to check support for multiple mics)
17:00:30 ... I'm not completely sure having that will suffice to get web sites to not keep the mic open - one use case is to keep the audio processing model working correctly with recent audio
17:00:36 ... e.g. to avoid echo
17:01:32 ... even if it is done in the browser, it needs to be warmed up to do the cancelling properly
17:02:01 Jan-Ivar: good info; in any case, I'll take a look at Media Session
17:02:37 Henrik: there may also be apps doing more advanced voice detection than what the UA would provide with such a mechanism
17:02:58 ... but I agree the discrepancy between app UI and device signals is creepy
17:03:01 RRSAgent, draft minutes
17:03:02 I have made the request to generate https://www.w3.org/2025/10/21-webrtc-minutes.html dom
18:03:46 Zakim has left #webrtc
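[Editor's aside: the MediaSession approach Youenn describes for the muted-microphone topic could look roughly like the sketch below. It assumes the action name is "voiceactivity" as in the Media Session proposal; the function name and the unmute-prompt callback are invented for illustration. In a page this would be called with navigator.mediaSession; here the session is a parameter so the sketch is self-contained.]

```javascript
// Hedged sketch: register a handler the UA calls when it detects speech while
// the microphone is muted at the UA/OS level. "voiceactivity" is assumed to be
// the action name from the Media Session proposal Safari implements.
function installSpeechWhileMutedHandler(mediaSession, showUnmutePrompt) {
  try {
    mediaSession.setActionHandler("voiceactivity", () => {
      // The app can only prompt the user to unmute (e.g. via its own UI);
      // the capture stays muted until the user acts.
      showUnmutePrompt();
    });
    return true;
  } catch (e) {
    // UAs reject unknown action names, so treat that as "not supported".
    return false;
  }
}
```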