14:57:59 RRSAgent has joined #webrtc
14:58:03 logging to https://www.w3.org/2023/01/17-webrtc-irc
14:58:25 Zakim has joined #webrtc
15:58:43 Meeting: WebRTC January 2023 meeting
15:58:43 Agenda: https://www.w3.org/2011/04/webrtc/wiki/January_17_2023
15:58:43 Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Jan/att-0003/WEBRTCWG-2023-01-17.pdf
15:58:43 Chairs: HTA, Jan-Ivar, Bernard
16:02:44 Present+ Henrik, Varun, Cullen, Dom, PatrickRockhill, Youenn, PeterThatcher, TimPanton, MikeEnglish
16:02:57 Present+ Elad
16:03:02 Present+ Harald
16:03:17 Present+ Carine
16:04:01 Present+ TovePetersson
16:04:05 Present+ Jan-Ivar
16:04:14 Present+ BenWagner
16:04:25 Present+ Florent
16:04:44 Present+ TonyHerre
16:06:12 Recording is starting
16:09:29 Topic: Call for Consensus (CfC) Status
16:09:30 [slide 8]
16:09:59 Harald: we've seen support on the low latency use cases - seeing consensus
16:10:15 ... on Face Detection, there is an objection from Bernard - we'll have to review it and come back
16:10:41 Youenn: resolving the issues related to the low latency use cases would be needed to declare consensus
16:10:47 Harald: I think we can merge and iterate
16:11:13 Youenn: it's already in the document - I would prefer we remove the "no consensus" notice once these issues are resolved
16:11:25 Harald: please mark this as an objection on the list then
16:11:44 ... more CfCs expected
16:12:16 Topic: -> https://github.com/w3c/webrtc-pc/ WebRTC-pc
16:12:16 Subtopic: Issue #2795 Missing URL in RTCIceCandidateInit
16:12:16 [slide 12]
16:13:01 Youenn: we decided to remove the url from RTCPeerConnectionIceEvent
16:13:15 ... the dictionary to create that event has a candidate field and a URL field
16:13:21 ... that second field should probably be removed
16:13:59 ... usually, events can be shimmed - not for the ICE event, since you can't create an IceCandidate with an undefined URL
16:14:26 ... do we want to change this?
16:14:45 ... two questions then: removing URL from IceEventInit, adding URL to the constructor
16:15:07 Harald: the URL is useful to identify which candidates come from which servers
16:15:19 ... a constructor that can't create all values is problematic for testing
16:15:35 ... I would like to see that IceCandidate can take a URL to generate those candidates
16:16:06 Jan-Ivar: no strong opinion; but the fact that the constructor doesn't have a parameter doesn't prevent it being added to the object
16:16:20 Youenn: right, but this leaves edge cases where this wouldn't work as expected
16:16:24 RRSAgent, draft minutes
16:16:25 I have made the request to generate https://www.w3.org/2023/01/17-webrtc-minutes.html dom
16:17:07 Henrik: no strong opinion, but also finds it strange that one of these things can't be constructed
16:17:38 Present+ Bernard
16:17:55 RESOLVED: mild preference to add url to the IceCandidate constructor and consensus to remove url from IceEventInit
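For illustration only, a sketch of what the resolution above could enable, assuming a hypothetical url member is added to RTCIceCandidateInit (not current webrtc-pc spec text):

  // Hypothetical: RTCIceCandidateInit gains a url member, so test code and
  // shims can construct candidates that still carry their ICE server origin.
  const candidate = new RTCIceCandidate({
    candidate: "candidate:1 1 udp 2122260223 192.0.2.1 54400 typ host",
    sdpMid: "0",
    sdpMLineIndex: 0,
    url: "stun:stun.example.net:3478"  // hypothetical member under discussion
  });
  // A shimmed icecandidate event would then no longer need a url field in
  // the event init dictionary:
  const event = new RTCPeerConnectionIceEvent("icecandidate", { candidate });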
16:18:27 Jan-Ivar: would this mean that a JSON-stringified IceCandidate would include a url?
16:18:43 Youenn: it's a separate issue
16:19:04 Jan-Ivar: they're linked given they use the same dictionary
16:19:16 Youenn: we could define a different dictionary for JSON-ification
16:19:30 Jan-Ivar: also needs to consider the impact on addIceCandidate
16:19:54 Subtopic: Issue #2780 duplicate rids in sRD underspecified
16:19:54 [slide 13]
16:20:21 [merged since no consensus was expressed on github]
16:20:31 Subtopic: PR #2801: Prune createAnswer()'s encodings and [[SendEncodings]] in sLD(answer)
16:20:32 [slide 14]
16:20:45 [merged since no consensus was expressed on github]
16:21:16 Topic: -> https://github.com/w3c/webrtc-extensions WebRTC Extensions
16:21:16 Subtopic: Issue #43 / PR #139: Mixed Codec Simulcast
16:21:16 [slide 19]
16:21:33 Bernard: we've discussed this at the July meeting
16:21:40 ... Florent developed PR #139
16:22:06 ... the use case is mixed codec simulcast, e.g. you want to use AV1 but will only get decent performance at low resolution
16:22:21 ... you would use a different codec at a higher resolution (e.g. VP8 or VP9)
16:23:16 [slide 20]
16:24:22 [slide 21]
16:24:55 Bernard: this example puts 2 codecs in - AV1 and VP8; at full resolution, only VP8
16:25:55 [slide 22]
16:25:55 Florent: issue #126 addresses another problem that this PR would also help cover
16:26:24 ... some applications want to select which codec is used without going through renegotiation to reorder codecs
16:26:47 ... the current approach is heavy, annoying and issue-prone
16:27:01 [slide 23]
16:28:50 [slide 24]
16:31:39 Bernard: questions about weird cases in the field
16:32:06 ... imagine there is a hardware encoder but it is not available because it gets preempted
16:32:26 ... are there situations where having an array would allow for a better fallback? although this wouldn't help in this case
16:32:56 Florent: wrt the limited capacity of hardware encoders - if there is no software fallback, setParameters would throw if resources can't be acquired
16:33:12 ... although not if that happens later - Henrik has a proposal for that
16:33:42 ... if you run out of capacity on the hardware encoder, there is no control to surface errors upon software fallback
16:33:57 Bernard: maybe Henrik's proposal will help there indeed
16:34:36 Harald: the renegotiation problem is pretty easily solved: when you set the encoding, it must be valid; when you negotiate (even the first time), you remove anything that isn't in the negotiated codecs
16:34:59 ... for ease of use, we should have an array and use the first entry of the array that is still available after negotiation
16:35:25 Henrik: I have a proposal that somewhat overlaps with it that we will talk about later
16:35:34 ... I do have a preference for a single codec value in setParameters
16:35:45 ... there should either be sensible defaults or have the stream disabled
16:36:00 ... I would keep this API surface as simple as possible
16:36:15 Florent: if the selected codec doesn't match, we could throw an error for the app to handle
16:36:56 Jan-Ivar: in the API so far, we've tried hard to keep setParameters and negotiation from dealing with the same settings, to avoid creating races
16:37:04 ... that may be solved by what Harald described
16:37:22 Florent: the negotiation is about what codecs are allowed, not the ones that are used - the usage is not the same
16:37:44 ... at the moment, renegotiation is used to push the first in the list to get it used, but I think more control is needed
16:37:54 Bernard: there may be a difference between before and after offer/answer
16:38:23 ... after offer/answer, the codecs in the list are within the negotiated envelope; you check against that, not capabilities
16:38:49 ... for addTransceiver, potentially there hasn't been an O/A yet
16:39:22 ... if you haven't called setCodecPreferences, it could be any in capabilities; this could lead to a contradiction with the addTransceiver
16:39:36 ... this has to be thought through, probably iterating in the PR
16:40:02 Florent: we should indeed check against capabilities, codec preferences; we should align with what is done e.g. in SVC
16:40:45 ... maybe sRD should throw an error; developer tools may help provide more visibility on what SDP would send
16:40:57 ... maybe we can iterate on this on github as we prepare the PR
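As a rough illustration of the per-encoding codec selection discussed above (the shape proposed in PR #139; member names may still change, and an existing RTCPeerConnection pc and video MediaStreamTrack videoTrack are assumed, in an async context):

  // Sketch only: a per-encoding codec member on RTCRtpEncodingParameters,
  // as proposed in webrtc-extensions PR #139.
  const transceiver = pc.addTransceiver(videoTrack, {
    sendEncodings: [
      { rid: "q", scaleResolutionDownBy: 4.0,
        codec: { mimeType: "video/AV1" } },   // AV1 only at low resolution
      { rid: "h", scaleResolutionDownBy: 2.0,
        codec: { mimeType: "video/VP8" } },
      { rid: "f", codec: { mimeType: "video/VP8" } }
    ]
  });
  // Later, switch a layer's codec without a renegotiation:
  const params = transceiver.sender.getParameters();
  params.encodings[0].codec = { mimeType: "video/VP8" };
  await transceiver.sender.setParameters(params);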
16:41:12 Subtopic: Issue #127: How to deal with encoder errors?
16:41:13 [slide 25]
16:41:19 Henrik: somewhat related but also different
16:43:20 [slide 26]
16:43:26 RRSAgent, draft minutes
16:43:28 I have made the request to generate https://www.w3.org/2023/01/17-webrtc-minutes.html dom
16:44:58 Bernard: should it always be active=false on all layers? e.g. if only a given encoder is a source of errors
16:45:29 Henrik: you may want to know in which layers the errors happened; the event may need to surface which encoders the error occurred on
16:45:41 Jan-Ivar: clarification that this is an event, not callbacks
16:46:11 ... is it necessary to set active to false and let JS deal with the situation overall?
16:46:33 Henrik: if the encoder doesn't work, it can't keep encoding: it needs to be stopped to fall back to a sensible default
16:46:54 ... I'm concerned that any default would end up sending unexpected keyframes
16:47:30 Youenn: what might constitute an error? e.g. transient vs fatal error? this may lead to fragmentation
16:47:45 ... do we want to articulate this on error vs not error, or a change more generally?
16:48:18 Henrik: very good point; some errors may simply be a notification but the app may not need to act because it can be recovered from
16:48:44 ... it would be useful to say whether this should include the fallback in case of codec removal from negotiation
16:48:56 Harald: we shouldn't stop anything unless the error forces it
16:49:12 ... so setting active=false should only impact affected layers
16:49:47 ... it should be an event, since events can have a default behavior that can be disabled
16:50:25 ... so the event could be fired every time there is a significant change, and by default let it be managed by the UA in a way the app can intercept
16:50:43 Bernard: +1 to Youenn and Harald
16:51:08 ... but I don't think it's for recovery - this is just for real errors?
16:51:28 Henrik: right, but there may still be fallbacks (hardware → software)
16:51:43 Bernard: if the error is recoverable, I would assume you wouldn't have that event
16:52:03 Youenn: some OSes are changing from hardware to software based on the resolution of the stream - not even an error
16:52:14 Henrik: maybe this should be scoped to unrecoverable errors?
16:53:00 TimP: I like this, but I don't think it's about errors, it should be codecavailabilitychange
16:53:03 Henrik: good point
16:53:04 Harald: +1
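A purely hypothetical sketch of the direction discussed above - a cancelable event on the sender covering encoder/codec availability changes. None of these names exist in any spec today, and sender is assumed to be an existing RTCRtpSender:

  // Hypothetical "codecavailabilitychange"-style event with a default action
  // the application can suppress, per Harald's preventDefault() suggestion.
  sender.addEventListener("codecavailabilitychange", (event) => {
    // event.encodings: hypothetical list of affected encoding indices
    console.log("encoder availability changed for layers", event.encodings);
    event.preventDefault();   // keep the UA from applying its default fallback
    // ...the app could instead reconfigure the affected layers itself
    // via sender.getParameters() / sender.setParameters().
  });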
16:53:19 Subtopic: Issue #130: how does setOfferedRtpHeaderExtensions work?
16:53:19 [slide 27]
16:54:36 Harald: Fippo and I were disagreeing on the interpretation of the spec - I'm now thinking Fippo is right, but want to make sure the WG is also comfortable with that interpretation - please chime in in the issue
16:55:01 Jan-Ivar: +1 to Fippo's interpretation, which is also more Web compatible
16:55:25 ... also FrozenArrays in dictionaries are frowned upon
16:55:34 Youenn: +1 to Fippo's as well
16:55:55 Topic: -> https://github.com/w3c/webrtc-encoded-transform WebRTC Encoded Transform
16:55:55 [slide 30]
16:56:39 [slide 31]
16:57:52 [slide 32]
16:59:06 [slide 33]
16:59:24 Harald: I designed an API for frame handling
16:59:31 ... creating frames from data and metadata
16:59:45 ... modifying a frame's metadata (in particular to avoid a data copy)
17:00:13 ... data modification would happen async from metadata - which raises the question of the consistency of the frame
17:00:27 [slide 34]
17:01:05 Harald: we've been reasonably successful using streams for frames; but reconfiguration requests are more event-like
17:01:09 [slide 35]
17:01:31 Harald: I propose an interface to handle this, as previously presented after the IETF hackathon
17:02:30 [slide 36]
17:03:13 [slide 37]
17:03:50 Harald: the long-term plan would be to redefine RTCRtpSender / RTCRtpReceiver as composed of smaller components (encoder, packetizer)
17:04:23 [slide 38]
17:05:35 Bernard: is there an assumption that the packetizer is the one in the browser, or would it be possible to bring your own packetizer? e.g. would be useful for HEVC in WebRTC
17:06:30 Harald: there is a limited number of behaviors for packetizers - we should enable these different behaviors; haven't looked at bringing fully custom packetizers
17:06:41 Bernard: this may impact the discussion of the use case
17:07:02 ... another question about workers: in WebCodecs, encode/decode would typically happen in a worker
17:07:17 ... would this imply bringing RTCRtpSender/Receiver into workers?
17:07:35 Harald: unsure about that one
17:07:59 ... events aren't transferred
17:08:27 ... making objects transferable can prove tricky, as we've learned with MediaStreamTrack
17:08:53 Peter: +1 to considering these use cases in scope and calling for proposals
17:09:15 ... I would like to get clarity on whether a custom packetizer is part of that though, e.g. for custom SVC
17:09:44 Harald: none of the use cases in my list require custom packetization, so such use cases would need to be added
17:10:29 ... I'm hesitant and somewhat nervous about exposing the packet level to JS, especially without strong supporting use cases
17:10:51 Bernard: what should be the next steps?
17:11:06 Harald: run a CfC on use cases? if approved, then we would iterate on proposals
17:12:04 Jan-Ivar: the use cases could use a bit more specificity; I'm worried about having too many APIs to achieve the same thing
17:12:25 ... there is already a way to do relay where implementations could optimize decode/encode
17:12:44 ... although the modification use case is a good illustration of what more would be needed
17:13:08 ... I don't see a problem with using streams in events for the control path
17:13:19 Youenn: +1 to the question wrt packet vs frame
17:13:44 ... if we want to go to a packet-level API, we'll need to figure out the security model, which will lead to a very different path
17:14:13 Harald: I haven't yet seen a use case written up that warrants a packet-level API; I'm very happy to see it, discuss it and decide based on it
17:14:28 ... but at the moment, what I've seen needed is possible to do at the frame level
17:14:38 ... hence why I'm pursuing it
17:14:39 ... so let's see the use cases
17:15:23 Dom: do we want to wait for these additional use cases before running a CfC on these ones?
17:15:44 Harald: no, I think they can live on their own; not clear that the packet-level API would fully address them in any case
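A rough sketch of the kind of frame-construction and metadata-modification capability Harald describes above. The constructor shape and option names are illustrative of the proposal, not spec text, and the transform is assumed to be plugged into an encoded-transform pipeline:

  // Illustrative only: build a new encoded frame that reuses the original
  // payload but carries adjusted metadata, avoiding a copy of the data.
  const transformer = new TransformStream({
    transform(frame, controller) {
      const metadata = frame.getMetadata();
      // Hypothetical constructor taking an original frame plus new metadata.
      const newFrame = new RTCEncodedVideoFrame(frame, {
        metadata: { ...metadata, synchronizationSource: 0x12345678 }
      });
      controller.enqueue(newFrame);
    }
  });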
17:15:52 Topic: -> https://www.w3.org/community/sccg/ Screen Capture Community Group
17:15:52 [slide 41]
17:15:56 RRSAgent, draft minutes
17:15:57 I have made the request to generate https://www.w3.org/2023/01/17-webrtc-minutes.html dom
17:18:12 Topic: -> https://github.com/w3c/mediacapture-handle De-adopting Capture Handle Identity
17:18:12 [slide 44]
17:20:03 [slide 45]
17:21:15 Jan-Ivar: the WebRTC WG has been in charge of APIs that produce or consume MediaStreamTrack
17:21:34 ... I don't think moving specs from W3C WGs to a CG would be progress - it feels like a step backwards
17:21:45 ... it's been less than a year since we adopted the spec
17:21:59 ... Capture Handle Actions is a supplement to Identity, not an alternative
17:22:59 ... traditionally, we "de-adopt" a spec due to lack of interest; Mozilla is definitely interested in this API, so we don't think it should be de-adopted
17:23:27 Bernard: procedurally, are you suggesting a CfC to de-adopt Capture Handle?
17:23:46 Elad: what I want to happen is for Capture Handle Identity to be incubated before being brought back
17:24:10 ... I think the Screen Capture CG would be the right place - it could be either by delegation or copy
17:24:26 Bernard: this would be limited to Capture Handle Identity?
17:24:28 Elad: correct
17:25:34 Youenn: I don't think the WebRTC WG's approval is needed to fork the spec; the CG can do it on its own
17:25:51 ... I don't see value in removing it from the WebRTC WG
17:27:35 Jan-Ivar: with regard to disagreements, my view is that they've been minor - there is overall agreement on the direction
17:27:59 Dom: my preference would be to keep it in the WG since I think the disagreements are not critical
17:28:41 ... but I think a situation where the spec exists in two places is the worst situation for the community in terms of clarity
17:28:53 RESOLVED: Start a CfC on de-adopting Capture Handle Identity
17:29:00 Topic: -> https://github.com/w3c/mediacapture-screen-share/issues/255 Auto-pause Capture
17:29:01 [slide 47]
17:29:24 [slide 48]
17:29:56 [slide 49]
17:31:08 [slide 50]
17:33:04 [slide 51]
17:33:51 [slide 52]
17:34:40 [slide 53]
17:34:57 [slide 54]
17:35:40 Youenn: the use case makes sense and is worth saving
17:35:49 ... I think the API should be at the level of the source, not the track
17:35:57 ... so probably on the CaptureController
17:36:34 ... but we can dive into the API shape once we agree on solving the issue
17:37:04 TimP: does this cover audio as well? the reasons for pausing seem video-oriented
17:37:26 Elad: interesting question; it could support Youenn's point about CaptureController
17:37:36 ... or maybe we need a source object, as I've been discussing with Ben
17:37:57 ... it may be worth having separate controls for audio & video, which are perceived differently
17:38:29 TimP: this could be used in case your WebRTC call gets its mic stolen, e.g. by a GSM call
17:38:46 Elad: maybe that's already covered by the muted event, would need to look at it
17:38:52 TimP: let's look at audio in general
17:39:05 Harald: this use case is definitely worth solving
17:39:23 ... traditionally we haven't exposed sources to JS, which is my worry every time we talk about them
17:39:38 ... we might want to; maybe the CaptureController is the source
17:40:01 ... I would like to mention again preventDefault() in the event interface, which allows intercepting the event's default impact
17:40:40 Jan-Ivar: +1 to Youenn on bringing this to CaptureController (which is indistinguishable from a source in the case of a single capture)
17:41:13 ... I don't think we should terminate output; events should be optional, the default shouldn't terminate output
17:41:40 ... if this doesn't move to CaptureController, I have other issues (e.g. confusion between muted/unmuted, paused)
17:42:23 Elad: wrt CaptureController, it makes some sense; but I have worries about transferability given that CaptureController isn't transferable
17:42:28 ... needs to be evaluated more
17:43:01 ... with respect to default actions, a core component of the proposal is to preserve the legacy behavior - if an event handler isn't set, the output isn't paused
17:43:37 Jan-Ivar: setting an event handler shouldn't have a side effect though
17:44:09 Elad: slide 53 has a possible approach to this
17:44:30 Harald: preventDefault will help with that
17:44:45 Elad: sold
17:45:17 ... so next steps include evaluating MediaStreamTrack vs CaptureController vs a new Source object?
17:45:23 Harald: +1
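A purely hypothetical sketch of the CaptureController-level direction Youenn, Harald and Jan-Ivar lean towards above; the event name and its default action are invented for illustration and are not in the Screen Capture spec (assumes an async context):

  // Hypothetical: an auto-pause notification surfaced on CaptureController,
  // with preventDefault() preserving the legacy "output keeps flowing" behavior.
  const controller = new CaptureController();
  controller.addEventListener("capturepause", (event) => {  // hypothetical event
    event.preventDefault();  // opt out of the UA's default pausing of output
    // ...the app could instead show its own UI and pause/resume explicitly.
  });
  const stream = await navigator.mediaDevices.getDisplayMedia({ controller });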
17:45:31 Topic: -> https://github.com/w3c/mediacapture-extensions/pull/77 MediaStreamTrack Frame Rates
17:45:31 [slide 57]
17:47:21 [slide 58]
17:48:49 TimP: uncomfortable with "decimated", which should relate to a factor of 10, not what is meant here
17:49:02 ... what do we think the developer would do with this information?
17:49:19 Henrik: you can measure deltas between the camera's settings and what you're getting
17:49:25 ... you may reconfigure the camera
17:49:53 ... also useful for debugging - frames being dropped due to camera issues or other issues
17:50:15 ... right now, it's hard to make sense of frames dropped
17:50:21 TimP: so mostly a diagnostic tool
17:50:25 Henrik: yes
17:50:37 Jan-Ivar: what happens if track.enabled = false?
17:50:56 Henrik: that needs to be decided - maybe stop incrementing counters
17:51:34 Youenn: I understand delivered, decimated; is the total the sum of frames generated by the camera?
17:51:46 ... if so, maybe instead of "dropped", we provide the total as "framesGenerated"?
17:51:52 Henrik: fine with me
17:52:15 Jan-Ivar: what happens in low-light conditions?
17:52:31 Henrik: framesGenerated would be lower (in this new model)
17:52:59 Henrik: hearing overall support with some proposed changes
17:53:32 Topic: Wrap up
17:53:38 Bernard: a number of action items:
17:53:46 ... - CfC on Harald's use cases
17:53:53 ... - CfC on de-adoption of Capture Handle Identity
17:54:38 Elad: please chime in on the auto-pause issue to help with the next iteration
17:54:45 Youenn: an event on CaptureController should suffice
17:55:00 Elad: maybe more is needed to distinguish audio/video
17:55:17 ... I'll flesh this out
17:56:37 RRSAgent, draft minutes
17:56:39 I have made the request to generate https://www.w3.org/2023/01/17-webrtc-minutes.html dom
17:56:45 RRSAgent, make log public
19:23:38 Zakim has left #webrtc