W3C

- DRAFT -

WebRTC Stockholm 2018 F2F

19 Jun 2018

Agenda

Attendees

Present
Harald, Jan-Ivar, Philip, Sergio, Soares, AlexG, Youenn, TimP, EricC, Goran, Bernard, Dom, DanB_(remote), Carine_(remote), Suhas_(remote), Jianjun_(remote), Lennar_(remote), Seth_(remote), Nils_(remote), Aleksandar_(remote), Binoy_(remote), Gunnar_(remote), Andreas_(remote), Misi_remote
Regrets
Chair
Bernard, Harald, Stefan
Scribe
dom, stefanh, youenn, jib

Contents


WebRTC Stockholm 2018 F2F - Day 1

<dom> Slides

<hta> Correct hangouts link: https://today.talkgadget.google.com/hangouts/_/google.com/webrtc-wg

<dom> ScribeNick: dom

Bernard: [reviewing agenda for the day on slide 6]
... [reviewing agenda for tomorrow on slide 7]

Use cases and requirements for WebRTC NV - Peter Thatcher

Peter: we've been writing down all the proposals that have been sent, very liberally, including very recent ones
... we haven't signed up for any of them yet
... WebRTC is already being used (or could be used) in a wide number of use cases (slide 9)
... we have use cases for WebRTC 1.0 - which NV would improve on (slide 10)
... (slide 11: existing/improved use cases) multiway call could be improved e.g. SVC

Bernard: temporal scalability is very popular in Chrome

Peter: (slide 12): browser to devices
... this pushes lots of new ICE requirements
... (slide 13) Data channels with large set of end points (e.g. IPFS, WebTorrent)
... this pushes requirements for data not clobbering A/V, pausable ICE, incoming ICE, low-buffering for large files
... (slide 14) client-to-server games (surprisingly not covered by current RFC use cases)
... game developers are asking for use cases where they want to send at fixed frequency at a low rate, low buffering
... (slide 15) VR communications
... repeats previous requirements, but adds requirements for more advanced encoding control (e.g. different framerate for depth vs video)
... (slide 16) Remote control
... (slide 17) Video streaming, either from a server or from a browser, with a particular focus on live video
... needs some more control to ensure a subtly different balance between quality and latency
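As an illustration of the fixed-frequency, low-buffering sending game developers ask for in the client-to-server game use case above, a minimal pacer sketch; the `send` callback and the `Pacer` class are hypothetical stand-ins, not part of any proposed API:

```javascript
// Minimal pacer sketch (assumption): messages are queued as produced and
// drained at a fixed tick frequency, so the transport sees a steady rate.
// `send` would be something like a data channel's send in a real app.
class Pacer {
  constructor(send, maxPerTick = 1) {
    this.send = send;
    this.maxPerTick = maxPerTick;
    this.queue = [];
  }
  enqueue(msg) { this.queue.push(msg); }
  tick() { // driven by a timer at the chosen frequency
    for (let i = 0; i < this.maxPerTick && this.queue.length > 0; i++) {
      this.send(this.queue.shift());
    }
  }
}
```

The queue also makes the "low buffering" requirement visible: its length is the app-side buffer the game would want to keep near zero.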

Goran: how live is live?

DrAlex: in the streaming industry, the reference is 5s for cable transmission
... so we're looking here at below 1s latency

Goran: we need to be clear on what we mean by "live" then

Peter: another requirement I heard from Facebook is shared congestion control with HTTP
... (I think that would only be possible with QUIC, if at all)
... (slide 18) Filter use cases
... require access to raw media

Bernard: wouldn't the previous one also require it?

Peter: not necessarily... or maybe depending on how much control you need to provide; we'll talk about this when we dive
... into more details
... (slide 19) E2E encrypted video conference

Bernard: we need to get a clear picture of the security model here

Peter: here we're considering we trust the JavaScript, not the SFU

Bernard: I'm not sure in which cases this would happen - in most cases, the same entity runs both

Sergio: imagine a banking use case - the bank runs the app, but not the SFU
... also, you can deal with untrusted JS with the IdP

Youenn: I agree we need to investigate both
... we also need to consider the case for encrypted storage of media

Bernard: I think there are a lot of variations, and possibly incompatibilities with other use cases (e.g. access to raw media)

DrAlex: we have implemented a couple of e2e encrypted systems, we can give input
... we would likely want to reduce the scope

Bernard: I hear youenn and Alex volunteering to present a more detailed view during a later session tomorrow

Peter: not having access to the media can also be beneficial for SFU operators

Goran: encryption is one aspect, but not the whole story - e.g. you could trust the server code, but not the platform on which it is running

Peter: (slide 20) SFU in JS (à la mediasoup.org, but in the browser)
... (slide 21) ultra-low latency audio
... pushing requirements for even lower latency (e.g. by reducing or removing the audio jitter buffer)

TimP: you also need synchronized audio timelines

Peter: not encoding would save an additional few ms

Youenn: can't you do that with WebAudio and data channels?

Peter: but you need to deal with congestion control then

DrAlex: wouldn't sending unencoded samples be slower to send?

Youenn: you have to assume high bandwidth network for these scenarios

Varun: there are also foreseen uses for announcements in airports, malls, where network speed is not an issue

Bernard: gaming use cases are another case that wants to push to very low latency, for streaming games
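A rough back-of-the-envelope for the unencoded-samples question above, assuming 48 kHz 16-bit mono PCM and a typical ~32 kb/s Opus voice bitrate (both figures are assumptions for illustration):

```javascript
// Raw PCM bitrate in kb/s: sample rate x bits per sample x channels.
function pcmBitrateKbps(sampleRateHz, bitsPerSample, channels) {
  return (sampleRateHz * bitsPerSample * channels) / 1000;
}

const raw = pcmBitrateKbps(48000, 16, 1); // 768 kb/s for raw mono PCM
const opus = 32;                          // assumed typical Opus voice rate
const ratio = raw / opus;                 // raw needs ~24x the bandwidth
```

Which is why Youenn's point holds: skipping the encoder only makes sense on a high-bandwidth network.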

Peter: (slide 22) list of new use cases
... (slide 23) list of potential new requirements for media
... (slide 24) list of potential new requirements for ICE
... (slide 25) which use cases do we want to sign up for? which requirements do we want to take on?

JIB: I have another use case which requires exposing the timestamps of captured data

Peter: an encoder API would also need access to timestamps

Youenn: there may also be a requirement to expose which frame is being rendered

Harald: WebRTC stats has a proposal to expose the timestamp of the last rendered frame, which could be used here with the timestamp of the stat

Sergio: in a healthcare scenario I worked on, you need per-frame accuracy

youenn: I think this would require raw decoder access
... this needs further exploration

DrAlex: could we make the video element smarter?

Bernard: let's not dive into the details now - instead look at the list of use cases and see which ones have support

TimP: we need to make sure that use cases that were not originally described as WebRTC use cases but were buildable with WebRTC are documented, to make sure we don't break them accidentally

Bernard: there is backwards compat discussion in the next session

Goran: I have a machine-learning classifier on audio / video

Peter: I think there is lots of similarity there with access to raw media, and server-to-browser
... might be worth identifying it as a use case on its own

youenn: the learning could be used to improve the encoder?

goran: right - but also can be sent to the server

youenn: the "funny hat" use case is "just" real-time media pipeline
... your use case has different requirements
... not necessarily as focused on real-time processing

Bernard: this could be included in Harald's later session on broad data access
... let's look at the list of use cases and try and see if there are things we don't want to do

Harald: I'm hearing skepticism about whether E2E encrypted conference make sense
... this requires a better understanding of the security model; it also touches on the likelihood of identity adoption
... I'm not sure which attack the thing we know how to do is preventing
... I'm also skeptical on the ultra-low-latency audio
... it requires very strict control on the network
... speed of light gets in the way very quickly

Bernard: +1 to that - we've worked on this, and it's not clear we need additional stuff for improvements
... another one on which I'm skeptical is the IoT use cases - not sure there is much we can do

Peter: some require battery saving, but there are also lots of powered devices which would benefit from improvements

TimP: also need to take into account the sleep/wake cycles - you want to keep the wake cycles cheap

Harald: we've had complaints about the 500-PeerConnection barrier

Nils: for the "funny hat" scenario, my current guess is we would want a WebAudio-like API with nodes to modify video
... I don't necessarily see how this fits in the WebRTC WG per se
... except that you may want to connect the output of these nodes to a peer connection, with some control of back pressure
... not sure if this fits in this WG
... likewise for the VR scenario
... this all feels like a separate WG - which would need to interface with PC

Youenn: +1
... I like the scenarios; I don't have strong feelings about where this happens
... clearly this WG would have requirements on interfacing with PC, but overall, this feels like a different set of expertise
... Even if a use case is very nice, we need to keep in mind the cost/benefit analysis
... e.g. SFU in the browser is a nice idea, but it may have a huge cost

Peter: there are lots of overlap in requirements across these scenarios - so covering a few of them would cover most of them

Bernard: a lot of the SFU-in-JS would be done via WASM-compiling existing C code with the right parameters

fippo: we did a complete SFU in JS - it's possible and fast enough

<drno> if anyone was talking: I couldn't hear him/her remotely

Gunnar: I mentioned real-time text

Peter: I've included it in synchronized data

Gunnar: the key is transmitting text as soon as entered - not necessarily bound to RFC 4103
... you want to keep latency low; usually less strict than for audio/video, although with speech-to-text the requirements are getting stronger
... we need to document how to handle this
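A minimal sketch of the character buffering Gunnar describes: queue text as it is typed and transmit in small batches. The 300 ms interval is RFC 4103's recommended buffering time; the `send` callback and the `RttBuffer` class itself are hypothetical, not a proposed API:

```javascript
// RFC 4103-style real-time-text buffering sketch (assumptions noted above).
// Characters accumulate as typed; flush() would be called on a timer every
// `intervalMs` and pushes one batch to the transport (e.g. a data channel).
class RttBuffer {
  constructor(send, intervalMs = 300) {
    this.send = send;
    this.intervalMs = intervalMs;
    this.pending = '';
  }
  type(chars) { this.pending += chars; }
  flush() {
    if (this.pending.length === 0) return; // nothing typed since last flush
    this.send(this.pending);
    this.pending = '';
  }
}
```

This keeps latency bounded by the flush interval while avoiding a packet per keystroke, matching "transmit text as soon as entered" without being bound to RFC 4103's RTP framing.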

Varun: with RTT-in-RTP, this would fail at the rendering time

Gunnar: there is no strict requirement on synchronization

Youenn: yes, a .5s delay in rendering wouldn't be an issue

Varun: but then RFC 4103 is no longer that relevant

Bernard: there is the SFU use case where this would still help

Peter: also in the case of congestion control
... summarizing discussions: opinions raised against ultra-low latency, SFU in browser, funny hats (maybe), questions about e2e encrypted video conference

Varun: a bit hard to have an opinion on the middle use cases (browser/server/devices)

Peter: another approach could be to look at the requirements (slide 23)

Varun: I find it hard to link the requirements back to the use cases

Peter: [re-reviewing requirements from the various use cases slides]
... [discussing ICE checks tweaks and control by the app]

Harald: we're out of time in this session - let's rediscuss this tomorrow morning

WebRTC NV API Level

<scribe> ScribeNick: stefanh

Re-starting after some coffee

Discussion on WebRTC NV API Level

scribe: See slides

<dom> [slide 27]

<dom> [slide 29]

scribe: componentize WebRTC
... but what level should the APIs be at?

<dom> [slide 30]

<dom> [slide 31]

scribe: we want to go lower than 1.0, but how low? ORTC is lower level than 1.0, but we should go even lower

<dom> [slide 32]

scribe: how low would too low be?

<dom> [slide 34]

<dom> [slide 35]

<dom> [slide 36]

<dom> [slide 37]

<dom> [slide 38]

<dom> [RS stands for "RTP sender" in slide 38; likewise "RR" for receiver]

<dom> [slide 39]

scribe: how far do we want to go?
... we should allow the application to choose what it does itself and what is left to the UA

Göran: do you have the same pic for a case with a QUIC transport?

<dom> [slide 40]

Peter: just change DTLS (?) to QUIC (or SCTP) in the picture
... I think D is the sweetspot. It allows you to inject your own codec for example
... as long as we supply a default D and E are the same

hta: crossing interfaces have a cost. The UA can probably do things more efficiently than doing it in WASM.

youenn: much of this can be done already (canvas etc.) but is not done because of inneffeciency
... a lot of work for something that will not be used
... maybe we should expose more knobs to existing things instead
... like more knobs controlling a HW accelerated codec
... we can probably satisfy the use cases without changing the model this much. I like B!
... and instead revise existing objects (like RTPSender) instead of changing the total model

Bernard: a lot of complexity in RTPSender. Much of what we want can be done in the B model
... main benefit of C is allowing to send media over non standardized transports (i.e. no RFC for it)

Youenn: I like interoperability, and I think user exp will be better with B

JanIvar: I'm a bit worried by the componentization. QUIC is not mature enough. I like B since ORTC has been around, and we have experience
... I'm also afraid of JS main thread going into the media (or network) path - can create jitter
... should not use main thread

Peter: some push for workers

Youenn: see audio worklet, but this path requires a lot of work to define and work out

Varun: if we move this thing to a worker it would still be costly. Need an API more like "this is the pipeline", e.g. something between B and C

Sergio: we are mixing things in the discussion. I.e. both _what_ components and _how_ we use them
... for me, B is a no-go because of the RTPSender. B does not provide much compared to 1.0
... we have provided encoder APIs for years, no problem

Youenn: what about a more fine grained API to RTP

Sergio: I just want one encoder and be able to send to QUIC or whatever - i.e. split out from RTPSender

Youenn: maybe we should think about a way to enable use of one codec only

<dom> Sergio: being able to use one encoder for several senders would be a +

Varun: I like D because you can be quicker out than if you have to standardize
... in D you can e.g. build your own FEC in the script
... would allow me to be quicker to market
... with new features

Tim: one of the huge things we have accomplished is the rtcweb wire protocol. We should not destroy that

Youenn: what is the reason we can't standardize "good" things?

Bernard: usually good reasons why things are not implemented by browsers

Peter: not clear what we ultimately want.

Harald: In B it is not possible to use the codec without using RTP.
... that is possible in C

(discussion on QUIC and RTP)

Nils: the more boxes we put there the more implementation work and risk for interop issues.

Peter: I think it is the opposite. If the UA only has to do simple boxes and leaves the hard things to the app, it's easier for the UA

Göran: level depends on the use case, we may have to do both

Youenn: nothing prevents from doing all in script already now.

Sergio: implementing your own FEC and RTX is not appreciated by servers
... would disrupt the eco system we have

Bernard: providing more knobs to what we already have would go a long way
... in our experience

Varun: in 1.0 we've been using the IETF way and standardized things
... but with D or E we can be more agile and flexible

<burn> +1 for allowing more innovation. D level allows for all of the combinations that have been discussed, as long as there is still a way for optimized combinations to be done in hardware/in the browser if desired for performance.

Sergio: can destroy the server eco system

Tim: we could remove some complexity from 1.0 to make adding complexity more OK

<Misi_remote> Could RFC 8155 TURN anycast autodiscovery be important if we add lower-level STUN/TURN functionality?

Peter: moving over to ICE
... in PeerConn and an extension spec

<dom> [slide 41-46]

Peter: ORTC was one step lower to support forking
... FlexICE adds a bunch of knobs to ICE
... if you want to go completely crazy: SLICE

Youenn: describe ICE forking

Peter+Bernard: connect local candidates with remote candidates for different endpoints

<dom> [slide 47]

scribe: and TURN candidates are not cheap so reuse is a good thing

Pros/cons slide

Peter: general feedback? SLICE is too low level?

Harald: note that there are a lot of things JS should not be allowed to use for security reasons

Bernard: +1

xxx: SLICE would only be interesting for testing

Peter: I tend to agree

Henrik: why do you want better control over ICE

Peter: control how often checks are sent, control what transport is used
... can be done with knobs, but many of them are needed

Youenn: WiFi vs. Cell is an interesting topic

Peter: Backwards compatibility
... on the wire things should mostly be compatible
... FlexICE could lead to a non standard ICE behavior

<dom> [slide 49]

Peter: same if you do your own codecs

Harald: when we remove SDP we lose some backwards compatibility - leaves it to the web app

Peter: backwards compatibility in the API
... name may be the same, but if we add a lot of knobs it might be a new thing rather than the same object with more knobs?

Bernard: would we have to be able to build 1.0 on top of the new APIs?

Peter: PeerConn will be kept, so not sure NV must be able to underpin PeerConn

Bernard: good if the SDK can do both 1.0 and new stuff

Peter: I propose it is not a requirement that PeerConn must be possible to implement on top of NV

Youenn: I agree

Jan-Ivar: I think it would be cleaner if we could shim PC on top of NV

Harald: If we can shim a PC that supports the major uses we'd be fine

<dom> [our documented constraints in the charter are:

<dom> Direct control: The new APIs are intended to provide direct control over the details of real-time communication, where the application can directly specify how information should be transmitted, without any built-in negotiation semantics.

<dom> Standalone operation: The new APIs will be complete enough to allow applications to write solely to the new APIs to complete common tasks.

<dom> Backwards-compatibility: The new APIs will extend the WebRTC 1.0 APIs, rather than replace them. Applications that use the PeerConnection API will continue to function, unless there is a clear and compelling reason to deprecate specific 1.0 functionality.

<dom> Feature independence: Features may be introduced in the new APIs that are not available when using the PeerConnection API.

<dom> ]

Henrik: If we move away from SDP will Transceivers go away too?

Peter: yes!!!
... what about object names, same or new names?
... given that we expose many more knobs

Fippo: I'm struggling with that question

Bernard: good to keep names, makes you conscious that they are the same objects?
... namespace discussion
... we'd have to throw exceptions

WebRTC in Workers - by Tim Panton

<dom> scribenick: youenn

Tim: Worker Interface slide
... worker allows writing in JS out of the main thread
... postMessage to be used to communicate with the web page
... question is how much we want to expose WebRTC APIs to workers
... variants of workers
... server side is not built into browsers but are useful in the context of streaming
... worklets

<suhas> I will have to drop off and miss the afternoon session .. wanted to share a thought on the identity session, it would be good to get inputs from Ekr or some one from identity before we make decisions today. thanks

tim: lifecycle: web workers vs service workers.
... as many web workers in a page. Closing the page, web workers die. Good for parallelism
... service workers shared across all tabs of an origin, lifetime is independent of a given page.

<dom> Tim Panton's slides on WebRTC & Workers

tim: service worker good for offline, like going in a tunnel and still be able to browse
... relevance to webrtc
... podcast recording and remove silences off the main thread (not realtime) so in a worker.
... fly to a warehouse, get the video to drive the drone. Barcode analysis is done later on asynchronously

persistent video call even though navigating from one page to the other

tim: data channel
... serve pages from behind NAT over the data channel to another device

youenn: another use case: service worker to receive video data through data buffer (webrtc CDN).

tim: slide before Benefits with a drawing
... Benefits
... build a pseudo VPN
... conclusion
... should allow webrtc in workers.
... need API access in workers. Which API to select is unclear
... WebRTC 1.0 is peer connection
... not clear for WebRTC NV
... additional requirement: transfer some objects, like media streams

transferring a peer connection is debatable.

scribe: ICE persistence could be done through service workers
... comments welcome

henryk: moving the ownership to a specific worker

<hta1> (hbos)

henryk: or multiple workers getting access to the same worker

tim: both

youenn: difficult to do out-of-process object access
... transferable might be difficult, data channel instantiation is fine

peter: transferring objects might be difficult and creating seems fine. Why would someone need to transfer a peer connection object from a page to a worker?

tim: it takes out some code to stop postMessaging data for every received event.
... so would simplify web app code

goran: was discussed several years ago. Browser vendors were hesitant at that time, service workers were mostly for HTTP.

peter: the smaller the scope, the easier it will be

youenn: not sure about transferring

tim: transferable might not be the right term
... using postMessage to pass a track instead from worker back to page.

jan ivar: other use cases: data channel in a worker. Big file transfer in a web worker, off the main thread, seems better

Harald: any reason for media API in a worker?

Tim: worklet model might be interesting for some of these. Not clear yet though.

peter: interesting to have a media pipeline using workers.

Tim: web audio worklets are already a thing
... silly hat use case could be done in the same manner

Goran: service worker used to preload stuff. Same kind of things could be done with data channels and media tracks. Prefer to put code in a worker/service worker instead of a main page. More upgradable, better lifetime.

Peter: Seems interesting to use worklet for the media pipeline

jan-ivar: if we were to add such an API, I agree that we would go with an audio worklet. We first need to decide whether to go there. If so, audio/video worklet makes sense.

Harald: perfect timing for the next topic

WebRTC NV encoders/decoders - Peter

peter: big picture

same as before

peter: presenting use cases.
... I care about allowing transmission on top of QUIC
... what are encoders and decoders?
... a track goes in and encoded frames come out
... the app needs to do something for each encoded frame to send it to the transport.
... simple cases, the app might not need to go into the path as an optimization
... what is an encoded frame
... video: payload (encoded pixels) plus metadata
... audio: encoded samples over a range of time, potentially several channels, metadata
... need API to control encoder
... What can the app control?
... listing parameters that can be tuned
... how does media get transported?
... to be in the media path or not to be
... possibility to use WHATWG streams
... AudioEncoder
... presenting potential API
... VideoEncoder
... presenting potential API

Henryk: what happens when you hand over the encoded frame to the transport? Question about ownership if hopping to another thread.
... If handled by browser, no issue. If main thread, needs to hop and copy.
... Maybe worklet could solve the issue

Peter: worklet would define a function that would define the processing.

Henryk: and then native code would actually call this function?

Peter: Not sure, would need to study this more.

Jan-Ivar: audio worklet would probably register and native code would call this function directly

Peter: Another option C might be: not in media path but with worklet
... AudioDecoder description
... need to push frames, and an attribute to get the decoded audio as a track
... Question about jitter buffer.
... who does the job to reorder packets that arrive out of order

<dom> [skipping RTP discussion to jitter buffer slide 68]

Peter: In this model, it is the decoder
... in libwebrtc, audio decoder and jitter buffer are really tied together
... for audio, one audio encoded frame -> one packet. for video multiple packets for one encoded video frame.

Varun: why not reordering frames in JS and give the frames to the decoder?

Peter: more work for the web app. Second, this can be done in JS.

Varun: if decoder does all the magic of the reordering, why splitting things?

Peter: one main use case is to do processing in between.

Varun: might have to do browser checks if jitter buffer is different in Chrome and Firefox

Peter: Might be able to add API to remove any jitter buffer in decoder so that JS would need to do it.

Jitter buffer knobs could go in the decoder API.

Varun: ok, so this object could do the processing by default, and if you are smarter, disable the processing and do it in WASM/JS
... might want to clarify in this document that the jitter buffer is handled in decoder objects.
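A minimal sketch of the reordering half of the jitter-buffer job that an app could take over in JS/WASM if the decoder's buffer were disabled; the frame shape `{seq, data}` and the starting sequence number 0 are assumptions for illustration:

```javascript
// Reorder buffer sketch: frames carry a sequence number and are released
// only in order. Frames arriving ahead of a gap are held until the gap fills.
class ReorderBuffer {
  constructor() {
    this.next = 0;          // next sequence number expected (assumed start)
    this.held = new Map();  // out-of-order frames waiting for the gap to fill
  }
  push(frame) {             // frame: {seq, data}
    this.held.set(frame.seq, frame);
    const ready = [];
    while (this.held.has(this.next)) {
      ready.push(this.held.get(this.next));
      this.held.delete(this.next);
      this.next++;
    }
    return ready;           // frames now deliverable to the decoder, in order
  }
}
```

Note this covers only reordering; a real jitter buffer also handles playout delay and loss concealment, which is exactly why Tim and Sergio caution against leaving its behavior unspecified.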

Tim: If you do order, the built-in FECs might fail

Peter: the idea would be to have a knob to disable jitter buffer

Tim: What happens when a packet is lost? Should there be another parameter?

Peter: Good question

Tim: Chrome is tying things together for a good reason

Sergio: there might be a lot of differences between browsers depending on how they implement things. Might not be good for users.
... If we do not specify the jitter buffer processing, difference of browser behaviour would not be good.

Peter: right, we would need to define the jitter buffer.

Sergio: no realtime would be good too

Peter: encoder would make sense. not sure for audio decoder.

<dom> youenn: I'm not sure how these APIs would interact with Media Recorder (redundant), MSE (redundant), or EME (where raw media shouldn't be allowed)

<dom> ... given where we are with existing Web APIs, it's not fitting very well

<dom> ... whereas adding video processing to the platform would make sense, here it feels pretty redundant

<dom> Peter: we have these objects, but they're bundled into the RTP interface

<dom> Youenn: but they're not exposed

<dom> ... it feels if we add this, we should deprecate Media Recorder

<dom> ... from that point of view, it does not appeal to me

<dom> Peter: I don't think we could retrofit the Media Recorder API to fit these needs (whereas Media Recorder could be rebuilt with this, maybe)

HenryK: separate question is whether you can reimplement your own encoder/decoder. These are two different questions.

Peter: 3 actually. Use same encoder for different transports. Processing in the middle (end-to-end encryption). And third one is bring your own components.

Tim: you can already do the third with web audio.

Peter: but not RTP

Tim: might not care since you would already need to rebuild a whole stack

Peter: main goal is to get access to hardware video codecs in JS.

Varun: is there any app code/prototype for this?

Peter: libwebrtc implementation is very close to this.
... mainly expose lower level pieces

Henry: exposing things, need to handle threads...

Raw Data Access - Harald

Harald: slide 71

APIs not elegant might turn into bad CPU usage

Harald: slide 72

alert() stops the main thread

need to go out of the main thread anytime you are in a hurry

presenting JS/WASM specificities and issues for audio/video processing

Harald: slide 73

separate the issues for in-pipeline (realtime) processing from from-pipeline (not realtime) scenarios

Harald: slide 74

in-pipeline: not much buffering, regular pace

from-pipeline: timing information is important, but we don't care a lot about when it arrives.

Harald: did a demo to wait for 20 ms, check the time.

on the main thread, jump between tabs... high variation

on a worker, variation is 4ms, accuracy increased by 10.

jan-ivar: implementing audio worklet right now. The script-based web audio processing node runs on the main thread and is found too slow.

Harald: in scope of media capture task force.

Need to have capture APIs that work well enough with WebRTC.

scribe: task of members to determine where discussion would happen.

jan ivar: in-pipeline needs worklet.

Harald: some people started with audio worklet and moved to the script processor node.

Peter: track modifier?

Harald: sensible model.
... model of a transform stream.

Jan ivar: clone track and register worklet.

jan ivar: interesting to study audio worklet, some issues with thread pool

youenn: audio worklet is good to gather experience. WASM might be nice but probably needs WebGL/WebGPU. Need also to make sure these pipelines work across many devices and predictably.

goran: would also like to mention TensorFlow.

<dom> [lunch break until 1:30pm CET]

ICE

<dom> ScribeNick: jib

[slide 73]

[slide 74]

ORTC has IceGatherer

and IceTransportController. does two things: freezing, and bandwidth control

IceGatherer split out to enable forking

1 IceGatherer connect to multiple transports

[slide 75]

scribe: major use-case, popular in gaming
... games had mesh scenarios where they wanted control of ice candidates
... [slide 76]


scribe: similar to 1.0 IceTransportPolicy
... can control gather options, with different set of candidates filterered differently

getLocalParameters() gets you what you can use to send an offer

[slide 77]

scribe: can pass gatherer in
... don't allow multiple start()s
... addRemoteCandidate(), influenced by ortc setRemoteCandidate etc.

[slide 78]

Presenter: Peter Thatcher
... either 1.0 IceTransport, or look at it as a simplified one not supporting forking
... compatible
... better than 1.0. has tradeoffs with ORTC
... control which candidates are gathered, e.g. only gather wifi or cell
... control lifetime of candidates, keep a TURN candidate around indefinitely, or drop
... control frequency

[slide 79]

scribe: have debugged ICE a lot. Common: hey ICE's not working. no info in JS logs
... need info about when stuff is sent /received

[slide 80]

scribe: can gather() more than once
... if you want more control of how ICE restart works. only mechanism for adding new candidates
... now vs before
... can remove local candidates
... either enumerate networks, then pass them together, or enum all networks then remove those I don't like
... or keep candidates around
... removeCandidatePair() is similar to removeCandidate() but more selective, more control
... finer control
... fork() IceTransport that copies local candidates
... networkInterfaceId so you can know and specify it, and the type, wifi/cell
... hbos: Why doesn't iceTransport know ???
... applications saying I want to use the wire
... youenn: need to check security
... networkInterfaceType is new information
... navigator.connection.. cat out of the bag?
... in getStats?
... network connection type
... dom: should user determine connection to be used?
... maybe: this one costs me money, this doesn't
... dom: I meant static setting from user
... maybe have a bit for don't get on my local network?
... other bit on whether it's expensive or not?

<dom> Adding "networkType" field to RTCIceCandidateStats. #259

scribe: setMinCheckInterval, sets a min, only go this fast
... setFrozen() = don't send at all
... means don't send checks
... setCheckPriority() , e.g. do relayed connections first, to connect faster. This pair sooner than this other
... [Tim] What state can you call these in?
... Tim: ^
... (missed answer)
... select(), allows having a backup candidate pair, throw away old one.
... nominate() more than once would require an attribute, draft expired

slightly nonstandard

scribe: onchecksent/onresponsereceived valuable for debugging
... and when you give up
... setReceiveTimeout() ^
... .onreceivetimeout tells me when you've stopped receiving any packets
... answers: is the ice transport dead or not?
... [harald] this is automagic in the current model. Is the new api manual or automatic?
... compatible if you don't set anything
... tricky case: local candidates are thrown away by default. In order to have the same out-of-the-box experience, new features must be incremental
... can always select() a new thing
... without extension for renomination, nominate() again would appear as aggressive nomination = bad
... [unknown] can we repair? repair/retain
... good point

lgrahl: ^

[slide 81]

[slide 82]

could implement full ice agent in webassembly
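For reference, the default check ordering that setCheckPriority() would override is derived from the ICE candidate-pair priority formula of RFC 8445 section 6.1.2.3; a sketch, using BigInt since the result exceeds Number's safe-integer range:

```javascript
// RFC 8445 pair priority: 2^32*MIN(G,D) + 2*MAX(G,D) + (G>D ? 1 : 0),
// where G is the controlling agent's candidate priority and D the
// controlled agent's. Higher values are checked first by default.
function pairPriority(g, d) {
  const G = BigInt(g), D = BigInt(d);
  const min = G < D ? G : D;
  const max = G > D ? G : D;
  return (1n << 32n) * min + 2n * max + (G > D ? 1n : 0n);
}
```

An app calling setCheckPriority() to probe relayed pairs first would effectively be substituting its own ordering for this formula.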

hbos: shouldn't checkPriority just work automatically?

peter: it's faster, small optimization
... some mobile apps do it

<scribe> ... new is forking, freezing, and one more thing

Goran: tell us about mobility use case

Peter: backup wifi on cellular
... switch over to cell
... see candidate close, immediately call select()

aboba: can move candidate between interfaces and keep connection up. could work in 1.0 as well

peter: switching requires NV
... and NV to retain candidate
... have to keep turn candidate warm
... checking happens in the background
... with SLICE it would happen in the JS. We're not doing that. I dunno
... which path do we take?
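A minimal sketch of the failover decision Peter describes: keep a warmed backup pair (e.g. TURN over cellular) and switch as soon as the active interface goes away. select() itself is the proposed NV API, so only the pure selection logic is shown; the pair objects and their `state`/`network` fields are hypothetical.

```javascript
// When the active pair's network disappears (e.g. wifi goes away),
// pick a still-succeeded pair on another network, such as a TURN pair
// kept warm over cellular, and pass it to the proposed select().
function pickFailover(pairs, failedNetwork) {
  return (
    pairs.find(
      (p) => p.state === 'succeeded' && p.network !== failedNetwork
    ) || null
  );
}
```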

aboba: you say most of it is implemented in libwebrtc, exposed in apps, not web api
... could it be done as a 1.0 extension spec?

peter: I think you could except for fork()
... all except the fork. Good idea!

[slide 84]

scribe: we need consensus whether to add that (to 1.0)
... we don't want to spec something only one browser will implement

harald: I like extension specs

aboba: the ICE spec is technically NV and an extension spec already?
... should we adopt extension spec?

harald: I think feeling is this is the way to go forward

varun: I'm a bit lost about how the extension spec will relate to 1.0
... we do all this work with it not being backportable

dom: we would probably consider this for extension to extension

peter: I wrote it as a partial interface

dom: are we going to break stuff with extension spec?

<burn> Alex will tell us in the next discussion :)

peter: will spend more time to think about complications that may arise, e.g. calling gather() ...

varun: from adoption point, why are we adopting it, and what impact will it have?
... we should know what we are precluding

aboba: (the ICE thing) is a webrtc NV thing

peter: we could add these methods and say they can be called from 1.0

aboba: was there a decision?

harald said to verify on the list

harald: sense of the room is to adopt this

no decision on fork/no fork

Identity

Presenter: Harald

<dom> Identity - what next?

[slide 1] Status of this preso

scribe: this is based on May 3rd
... no further discussion has occurred on the mailing list
... chair had a call with orgs who objected

<dom> Identity in WebRTC - status and next steps

scribe: briefly touched on it there
... this is an attempt to get chairs perspective
... and ask WG what to do
... WG gets to decide

[slide 2]

scribe: Firefox has implemented
... Chrome, Edge, Safari not announced support
... Misi pointed out Cisco and Misi?

<mt_____> "announced no plans to support, or not announced plans to support?"

Harald: certain there has been no expression of intent
... will only say that

aboba: we generally say what we do

harald: process issues

[slide 3]

scribe: rtcweb says "identity required"
... and RFC 7478
... if we want to say webrtc 1.0 is implemented, then there's a normative requirement for identity
... we have no production experience with this one
... can it be attacked? we have seen nobody try
... had an implementation hackathon in London
... one implementation appeared to work
... another couldn't figure out why
... in the issue tracker there are 23 identity-related bugs
... 9-10 months old; no submitted PRs, nor do the current editors understand security

ekr: there are some PRs now (as of this morning)

harald: charter says "to advance ... 2 implementations"
... interop should be demonstrated; those are the formalisms
... living standards are an example of non-Proposed-Recommendation documents we depend on in PRs
... the WG strives for a PR-status document

[next slide] short summary

scribe: cannot advance to PR
... with identity
... proposal on May 23rd was to move identity into its own document
... assemble team of people who think it's important
... not webrtc 1.0.
... discuss

ekr: normative reference?

harald: yes

ekr: in ietf this would not be ok

harald: true except for downrefs
... there are such procedures for rfcs

dom: can comment on practice: prefer no down reference, but it happens
... expect it more often with growth of living standards

ekr: not ok to advance to PR with this new draft in state of nothing
... maybe ok with CR

dom: can't imagine why this would be refused

harald: we could get draft as CR with unmodified text from webrtc

martin: text is tightly coupled with current spec

harald: the section is separate; 3-4 places where the algorithm refers into the section

ekr: keep isolated streams?

harald: in

ekr: could probably live with this

martin: tolerate it
... if track? points are kept live then manageable
... on we march

dom: goal is to get identity work done
... browser and identity provider adoption needs to be addressed, to get a market

harald: if this seems like an acceptable thing, two questions to the group:

should we split section 9 into a separate document?

do we have volunteers for editing new document?

ekr: presumably martin or me, can we get back to you?

harald: one of them has volunteered

ekr: or maybe cullen
... 3 persons

youenn: what about isolated media streams?

harald: stays in

dom: how do you demonstrate interop of isolated media streams without identity

harald: isolated streams was originally a concept in mediacapture-main
... as a concept not coupled to api
... will have to get back to it

ekr: could wipe that one out

<mt_____> I think that we could move isolated streams fairly easily. It's separable. Just based on skimming, I might have missed something though.

WebDrivers extensions for WebRTC Testing - Dr Alex

Presenter: DrAlex

<hta> I wouldn't mind moving it. we have normative references to identity anyway, so having a normative reference to isolated streams wouldn't increase the coupling.

DrAlex: to see if everyone would be ok with the proposal for testing, and if vendors are ok with us pushing it into the corresponding specs

[next slide]

scribe: different interop combinations tested
... at CoSMo, opinion is to test all combinations
... avoid patchwork of solutions
... and different stacks
... want to avoid that pain
... by standardizing web driver

[next slide]

scribe: permission prompt cannot be accessed by JS
... cannot use the existing spec, because these are not modal prompts
... need common way to deal with permission prompt
... web driver automation chapter in permissions spec
... Apple has prefixed specific web driver APIs, 11 of them
... specific one for gUM

<dom> Automation chapter in Permission API

scribe: some subtleties with enumerateDevices
... passing permission prompt was enough to unblock 99% of tests
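The automation chapter linked above defines a "Set Permission" WebDriver extension command; a request along these lines (shape sketched per the Permissions spec, with "camera" as the permission name) would let a test grant access without a prompt:

```
POST /session/{session id}/permissions
{
  "descriptor": { "name": "camera" },
  "state": "granted"
}
```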

hbos: testing webrtc or gUM?
... media creation needed in tests
... need gUM to test gUM

youenn: in safari, getting camera access could cause issues with security, so by default it's only a mock

hbos: but webrtc 1.0 tests can be tested without gUM

DrAlex: was PR to do that. Could argue it creates dependencies on other areas
... but we accepted it
... VM devices are appealing because they are deterministic and look like HW to the OS
... doesn't exist on all OSes, especially on mobile; none on iOS or macOS, not on Windows 10 Edge

youenn: comment on web audio and canvas is fine for wpt, but there are more tests. web driver used elsewhere
... want browsers to test with real gUM media

jib: ok to test media flow in wpt?
... answer was yes
... look at standardizing web driver api in permissions spec
... sounds appealing

Varun: difference between fake and mock?

DrAlex: no
... will check with jib on permissions spec

Web Platform Tests update

<dom> Presenter: Soares

<dom> ScribeNick: dom

Soares: [reviewing stats for Jan-Jun 2018]
... a lot of activity going on
... [looking at open PRs, open issues]
... notable issues include the fact that in Chromium, WebRTC tests are leaking resources, and need an explicit pc.close
... when writing new tests, please remember to add this
... we have specific tooling to automatically add this clean up (in #11524)
... We discussed in the previous session the need to get around the getusermedia prompt
... we're switching to using captureStream and AudioContext for mock video and audio streams
... we've added a test helper (getNoiseStream)
... for browsers that aren't implementing captureStream yet, we fallback on audio (which is better supported)
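The fallback Soares describes can be sketched as a small planning helper. The function name and shape here are invented for illustration; the real helper is getNoiseStream in the WPT WebRTC helpers.

```javascript
// Decide which mock track kinds to synthesize for a test: video comes
// from canvas.captureStream() where available, otherwise fall back to
// an audio track (AudioContext-based noise is more widely supported).
function planNoiseTracks(constraints, hasCanvasCaptureStream) {
  const kinds = [];
  if (constraints.audio) kinds.push('audio');
  if (constraints.video) {
    kinds.push(hasCanvasCaptureStream ? 'video' : 'audio');
  }
  return kinds;
}
```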

youenn: we would like it if we could override getNoiseStream for safari

soares: the tests are currently failing before merging that PR

youenn: not when running in our own test setup where the mock streams make them pass

jib: could we make getNoiseStream browser-dependent? e.g. with a carve-out for safari

youenn: I would prefer a switch based on whether webdriver is enabled
... but the alternative is probably OK as long as the check would apply to all webkit continuous-integration systems (not just safari)

soares: PR 10885 looks at conformance for WebRTC codec support (based on RFC 7742)
... another open pull request is whether to have webrtc-stats directory

henbos: not sure there is value in it

dom: one value is to get visibility to how much of webrtc stats is implemented

henbos: OK, I'm convinced

foolip: there are no tests in this PR at the moment

harald: we would want to move the test that checks that all required stats are present with the right members
... I think I know which helpers would need to be moved in the process
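The membership check Harald mentions can be sketched as a table-driven validator. The required-member table below is a tiny invented excerpt for illustration, not the normative webrtc-stats list.

```javascript
// Report which required members are missing from a stats object, keyed
// by the stats type. Only a sample of types and members is shown.
const REQUIRED_MEMBERS = {
  'outbound-rtp': ['ssrc', 'packetsSent', 'bytesSent'],
  'inbound-rtp': ['ssrc', 'packetsReceived', 'bytesReceived'],
};

function missingMembers(stat) {
  return (REQUIRED_MEMBERS[stat.type] || []).filter((m) => !(m in stat));
}
```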

soares: [reviewing new WPT contributors]
... there is a bit of a backlog in reviewing PRs
... partly my fault, but would be good to get more contributors

youenn: we have some layout tests we could try to upstream

jib: we want to switch to use WPT more
... we have transceivers tests we would want to move
... but we're a bit stretched
... the export process from our own test repo to WPT is not always as smooth as I thought it would be

foolip: it could be improved; there are certainly more manual processes than we wish there were
... we have improvements in the pipe

dom: are there plans to update the data for the coverage system?

foolip: bikeshed has a new feature that detects tests without mapping to the spec; it would be nice to see that adopted in respec as well

alex: how much work would it be to update the coverage data?

soares: quite a bit

henbos: I want to argue for tests that check behavior atomically

KITE testing - AlexG

Alex: [reminder of what KITE is]
... Kite can run WPT WebRTC tests across all of our identified configurations over 2 days
... for 2-browsers tests - KITE can run e.g. appRTC with stats integration
... we have 3-browsers tests as well - e.g. with Jitsi
... this is running the tests in both directions
... in April, we've added tests for multiparty, multistream (based on Unified Plan), simulcast
... June 2018 updates
... most failures are in Edge because of webdriver mismatch, and because Edge's approach is slightly different from other browsers
... a number of bugs were identified in the process
... but no regression bugs were identified yet, despite the fact there WERE regression bugs

youenn: there was apparently a chromium version without h264 support - KITE would have caught this?

Alex: yes - we have a full codec-test, but it had started breaking down when Chrome added support for more H264 profiles
... this is in the process of being fixed
... this illustrates the fact that tests themselves need continuous maintenance

youenn: one possibility would be alarms that alerts browser teams of such failures

alex: we are looking at this - possibly through a filterable mailing list
... Another recent addition is network instrumentation
... there are a number of scenarios that are core to WebRTC quality which require complete control of the network
... and that for each of the clients, each of the servers, and each of the target OS
... that's not easy
... NTT had a great demo around that back in Tokyo
... but all existing methods have problems of scope, OS, links
... we are investigating the various approaches that would cover all the right combination of cases

Varun: if you test all the four browsers, does it really make a difference if they were tested in different platforms?

youenn: the implementations of WebRTC in safari/mac and safari/ios have differences (in particular getUserMedia)
... so it's useful to test them separately

Varun: but for network instrumentation?
... I can imagine DSCP markings would be OS-dependent

Alex: it's not clear we have an answer to that question

Fippo: it's rare to have os-dependent networking stack

Bernard: Edge is completely different on Windows vs iOS/Android

Varun: my question is how many cases would be affected by that platform-dependence

Youenn: to mitigate my previous statement: the implementations aren't completely different
... you would still get a lot of benefit from testing only on Mac - you would still cover 80% of failures
... but clearly adding iOS would identify more

Alex: one of our concerns is starting on a path that cannot be extended to cover all cases
... This work is in progress - I hope we'll have results to show at TPAC
... One more thing
... KITE is supporting new use case - load testing
... broadcast and streaming is more challenging to test
... selenium is not made to do load testing
... we wanted to reach 1M parallel browser clients; we reached several 10K
... results are pending publication

TimP: what's the mix of desktop vs smartphones use cases? what are people asking you for?

Alex: our customers are most interested in mobile testing
... both apps and browsers

Goran: can you say more about mobile? do you instrument mobile networks?

alex: we don't have network instrumentation yet
... we use real devices

Goran: do you test mobility?

alex: not today
... we hope to get network profiles (3G, 4G) through network instrumentation
... next might come mobility testing

Goran: emulating mobile network is a challenge

Alex: also, we haven't had customers asking for this yet

TimP: you might want to look at what OpenSignal is doing

JIB: support for getContributingSources would be good

Bashing tomorrow's agenda

Bernard: based on discussions today, we will do another half hour on use case and low vs high level
... next looking at transports
... then QUIC
... SCTP
... SVC
... right before lunch, E2E security drill-down
... early in the afternoon, looking at protocol dependencies
... then WebRTC Stats
... and then the rest of the afternoon will be open to new topics

JIB: readablestreams in data channels?
... maybe covered by lennart's session

Lennart: I'll cover this indeed

Peter: maybe we could look at Worklets for processing - I've started looking at this, if there is time

Harald: there was discussions this morning about which level we wanted to attack
... I would like to make sure we assign action items to team to start actual work

Bernard: would fit in wrap-up and next steps

Youenn: would be good to get an update on screen-capture: spec status, testing, next steps

Bernard: we may not have the right people to cover this during the F2F - maybe for a virtual interim

Varun: we could volunteer resources for screen capture

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2018/06/19 14:48:30 $

Agenda: https://www.w3.org/2011/04/webrtc/wiki/June_19-20_2018
