WebRTC F2F Day 2

23 Sep 2016




Present: Bernard, Taylor, Peter, Varun, DanB, Jan-Ivar, AdamB, Dom, AdamR, Cullen, Stefan, Harald, Vivien, Dan_Druta, Eric_Carlson, Andy
Present on Webex: Maire_Reavy, Randell_Jesup, Tim_Panton, Alex_Chronopoulos, Shijun_Sun, Ekr, Patrick_Rockhill
Chairs: Harald, Stefan
Scribes: dom, burn, Stefan


Homework reporting

webrtc issue 782

Adambe: there is a new pull request for #782
... it takes the collection of information that was done synchronously and puts it in the queued steps
... now we don't do anything when the function is called, but when the queued steps are taken from the queue, we decide what goes into the offer
... this is not entirely watertight - the behavior varies depending on whether the queue is empty or not
... but we have removed a racy situation

HTA: this is pull request #820

Cullen: if I'm adding encoding to the transceiver after that, what's the outcome?

Adambe: we might want to test this a bit more

Peter: we've done some more thinking
... we think we either have to do everything async, or live with race conditions, or live with two sets of states, one sync & one async (whether they are visible or not to the app)
... an option 4 would be to raise an error if addTrack is called at a time where it would risk creating a race condition
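Option 4 can be illustrated with a toy simulation of the operations queue (all names here are illustrative, not the actual API):

```javascript
// Hypothetical sketch of "option 4": reject addTrack while the
// PeerConnection's operations queue is non-empty. This is a toy model
// of the queue, not the real RTCPeerConnection.
class FakePeerConnection {
  constructor() {
    this.queue = [];    // queued async operations (e.g. createOffer)
    this.senders = [];
  }
  createOffer() {
    // Queue the operation; it resolves when dequeued.
    return new Promise(resolve => {
      this.queue.push(() => resolve({ type: 'offer' }));
    });
  }
  addTrack(track) {
    if (this.queue.length > 0) {
      // Option 4: calling addTrack mid-operation would race with the
      // state captured by createOffer, so throw instead.
      throw new Error('InvalidStateError: operation in progress');
    }
    this.senders.push({ track });
  }
  runQueue() {
    while (this.queue.length) this.queue.shift()();
  }
}
```

With an empty queue, addTrack succeeds; once createOffer has queued an operation, addTrack throws until the queue drains.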

Dom: if this situation is always the result of a programming mistake, throwing an error is appealing

Peter: not sure we have determined that

Adambe: on the point of making more things asynchronous - should encoding frobbing be so as well?

JIB: I'm not convinced that making everything async would solve this problem

Peter: it would solve the problem; also, having two states (async / sync) makes things very complex

HTA: another way to look at this: is this the simplest way to solve this?

JIB: I think it is

Cullen: throwing an error would seem simpler

Adambe: but it's not always an error

Peter: making addTrack / addTransceiver async would go a long way to solve this

HTA: we've probably spent enough time on this; please comment on the pull request

Cullen: we're not anywhere close to consensus

HTA: what are the properties of the solution you care about?

Cullen: I don't want to complicate things to solve what is essentially a programming error
... we seem to be choosing between a solution that doesn't work (the proposed PR) and one that is too complicated (making everything async)

Peter: it sounds like this is managing two sets of states: one async, hidden from the app; one sync, visible to the app

<vivien> (Webex session is now being recorded ... I forgot to start the recording earlier ...)

Peter: I'm not clear what changing the state in the sync state implies to the async state
... e.g. set the direction on a transceiver that hasn't been handled yet

AdamR: the error-throwing solution seems to be more and more compelling

JIB: but what's the condition?

Peter: addTrack is only allowed when the queue is empty... when the PC is doing nothing

Cullen: I think we should just say doing this is a bad idea and the results are undefined

JIB: I still prefer the explicitness of AdamB's pull request
... also, we keep changing our perspective on this topic
... we added the queue to allow calling addTrack after createOffer

HTA: we get consistent behavior with everything-async, message passing or with a lock
... capturing the state is a form of message passing
... if capturing the state is OK, the PR needs to be explicit in what state gets captured

Peter: I'm still not convinced that this can work for all cases
... there will be plenty of weird edge cases
... I prefer any of the other options I mentioned (accept the race, make addtrack async, throw an error)
... we would need a much more thorough analysis of the intersections between async and sync states to go in the direction proposed in the PR

HTA: it feels like writing test cases for these situations would be a good way to assess this

JIB: I agree with the assessment there may be dragons
... but this assumes that there is no dragon at the moment

Peter: I think this adds complexity without buying us anything

JIB: I can try to get into more details in the PR, but I'm not a createOffer/SDP expert

Cullen: I think the point is to go through all of our API endpoints and figure out which attributes/methods would be taken into account
... define which state gets captured

Taylor: we also have race conditions in addTrack and setRemoteDescription

JIB: but then we already have this problem

AdamBe: option 4 with throwing an error

Peter: when there is a queued operation

HTA: but the state of whether or not you can call addTrack is not observable

DanB: I think I would like to see the error case explored

AdamBe: if we decide to capture the state, we need to define what the state is
... if we decide to throw an error, we need to define when the state changes

HTA: one is message passing (capturing the state), the other is locking (throw)

Stefan: AdamBe will update his PR to define the captured state
... Peter will prepare a PR to propose the error-based approach

Permissions lifecycle for getUserMedia (issue 387, 389)

JIB: DanB and I divided up the work, based on the principle of two states: device accessible, device on-air
... we started from the assumption of booleans per capture device

hta: mozilla was initially strongly in favor of having per-device permission - would simplify if that's no longer the case
... the idea is to define the states, and then their impact on the user interface
... [reading from slide 23]

hta: we manage two internal slots per deviceOrKind, live & accessible, per page
... [slide 24]
... and we count the # of times a device is accessible (based on the "granted" permission), and the # of times a device is live (based on the # of obtained tracks)

<shijun> Question: can the indicator state be extended for screen capture? How would deviceOrKind look like in that case?

Dom: I'm not sure you need to make the accessible-on-live increment conditional

JIB: you need that for a browser that wants to make it clear that there is a per-use permission

HTA: you need to decrement on each track stop

Peter: this would be easier to understand as an example

JIB: HTA is right on the track counting bug

DanB: we started from the big picture but then realized we needed the more detailed description for tackling edge cases

JIB: we use counters for "accessible" due to the way we couple live and accessible

Peter: so accessible - live <= 1?

JIB: yes - the +1 is for persistent permission

Peter: so it would be easier to describe as one integer and one bit

Cullen: so "live" is > 0 when there is any track that is not muted

Peter: so accessible = live + (persistent == true)
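The counter bookkeeping on the slides could be sketched like this (class and method names are mine, not the spec's):

```javascript
// Illustrative bookkeeping for the per-device "live" and "accessible"
// counters discussed on the slides. Names are hypothetical.
class DeviceState {
  constructor(persistentPermission = false) {
    this.persistent = persistentPermission;
    this.live = 0;   // number of non-stopped tracks for this device
  }
  // "accessible" is derived, as Peter suggested:
  // accessible = live + (persistent == true)
  get accessible() {
    return this.live + (this.persistent ? 1 : 0);
  }
  trackCreated() { this.live++; }
  trackStopped() {
    // decrement on each track stop (HTA's fix to the counting bug)
    if (this.live > 0) this.live--;
  }
}
```

This matches Peter's observation that one integer and one bit suffice: the +1 comes from persistent permission.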

Stefan: you also need to consider cloning

JIB: the proposed change also considers the life-cycle of media streams

DanB: I'll start with examples (slide 29)

Taylor: you need to distinguish mute/stop in counting live / accessible

DanB: now getting more spec-ish (slide 26)
... I'm going to discuss this in terms of changes happening, not necessarily in terms of continuous indicators
... slide 27 looks at the proposed conformance requirements

Cullen: in the combined approach, how do you handle the transition muted to live?

DanB: we would have to decide

Dom: I like the requirements, but I'm not sure we should make them normative requirements
... instead we should describe them as our understanding of the right approach at the moment

JIB: one reason for making them MUST is to keep the same level of normativeness that we have

Peter: the reasonably-observable requirement, is that new?

DanB: yes
... reflecting something we had discussed before
... slide 28 looks at things that are in the suggestions category
... discussing among other things the match with hardware-based indicators
... and offering to replace transition-change by continuous indication
... my proposal would be to move this to implementation suggestions section
... which we will have to split between normative / informative

AdamR: I think the non-normative suggestions would fit in the privacy section

Cullen: rather than defining new states and variables, I would like to make reference to existing concepts
... e.g. # of muted tracks

Peter: I agree: the UI built on the state sounds about right, but we need to simplify the states

Cullen: I would prefer it to be built as a function of the current state rather than new state

TimP: I think it needs to continue to refer to hardware devices, not just tracks

Shijun: it also needs to take into account screen sharing

JIB: some tracks don't have privacy issues (e.g. capture from canvas), some do (e.g. screen sharing)
... we need to be clear that this is only for getUserMedia-generated tracks

Dom: and need to document how to extend it in the extensibility section
... also, do all UA implementors plan to follow these requirements? e.g. in full screen mode

[overall agreement]

Issue 380

Starting on the 380 homework

<dom> Remove redundant list-devices permission. #380

<dom> JIB: [slide 33]

[slide 34]

shows all legal states

hta: assuming you can try to change them independently.

jib: it's about what the browser can set, not necessarily what they allow the user to control
... removed anything where camera or mic is granted and deviceinfo is not.
... only questionable one is in yellow
... denied means blocked
... if user denied both mic and camera, maybe deviceinfo should be denied as well. except for speakers, so maybe we need a way to get speaker info

adamr: why isn't denied, denied, granted here?

jib: should be in the table

cullen: if deviceinfo is denied, you have to put camera in one of these states

hta: agree with bottom line as a legal state

cullen: if someone denies deviceinfo, why can't camera be granted

burn: it's about directionality of implication upon changes, not the specific entries in the table

cullen: if cam/mic are granted and you set deviceinfo to denied, what happens (addressed to hta)?

hta: if such a capability exists, then the only sensible thing to do is to turn off access to devices (based on strength of permissions)
... as described in the spec

cullen: i don't like the weaker/stronger concept. the deviceinfo permission should not have implications for the device permissions
... don't want user who disables deviceinfo to lose access to camera and mic

jib: it is not compliant for browser to do what you're asking for. maybe could add that.
... back in Feb deviceinfo was tied to gum permission.

dom: but it was not formalized. formalizing the weaker/stronger relationship now may be a stronger claim than we want.

cullen: jib is not saying whether rejecting deviceinfo causes an error or forces a change in existing cam/mic permissions
... let's figure out what we want and then decide whether to use the weaker/stronger feature of permissions spec

hta: spec should allow UAs to implement such that when user wants to see labels they can

cullen: agree

hta: i made deviceinfo a separate permission so this could happen. We now want to make sure that having this separate permission doesn't cause unreasonable actions

cullen: what about having deviceinfo be granted by default but denied if you ever disable both camera and mic

hta: no
... want to ensure at minimum that access to device retains access to labels
... remove language about weaker. if cam/mic is open, permission on deviceinfo is granted (is what i want)

cullen: if only the mic is allowed, then how do you change deviceinfo?

hta: permissions spec doesn't deal with per-use, per-realm, etc.
... browser gets to choose.
... we want that device access gets you label access. not willing to state more than that.

jib: you have to read permissions spec along with gum spec to understand what happens. browser can use any new information to determine user intent. so it's up to browser unless we spec something else for when permissions go away.
... we could say when cookies go away we revoke permission, for example.

stefan: does anyone want to change this?

cullen: if i clear all state and the permissions remain, I'm not okay with it

adamr: same is true for geopriv

cullen: some of ours are tied to specific cookie text in the spec

jib: weaker/stronger language puts requirements on browsers for sub-permissions that they otherwise might not handle
... doesn't touch prompt state
... created for bluetooth with superuser mode being stronger than the lower permission

adamr: do camera grant, then camera deny will go to ?? and that's not intuitive
... if deviceinfo were defined to be strongest of the two device permissions cullen would be happy

hta: but i don't want that

cullen: it's fine that as soon as you grant per-use cam or mic you get the labels for the realm

hta: i want UA to be able to give permission to labels when no devices have permission
... i want this option, not to be required
... this could happen when someone has started hangouts, granted permission to all devices, goes away, comes back later and wants to see what devices are available. UA should be able to determine this is a long-term relationship
... the spec shouldn't force the behavior here

cullen: don't want one-time permission to become forever. would like to hear from hta when permission might go away

hta: would be happy to see a pull request about what the permission should go to if you clear domain info

cullen: is this the use case: if you got per-use or longer access to cam or mic, label access is granted and is permitted to last only as long as device ids are available and no longer
... we already have spec language for device id persistence.

much general agreement
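The constraint from slide 34 that the group converged on can be sketched as a simple validity check (the states and the rule are one reading of the slide, not normative text):

```javascript
// Sketch of the "legal states" rule from slide 34: if camera or mic is
// granted, deviceinfo must be granted too (device access implies label
// access). The reverse is allowed: deviceinfo can be granted on its own.
const GRANTED = 'granted', PROMPT = 'prompt', DENIED = 'denied';

function isLegalState({ camera, mic, deviceinfo }) {
  // A granted device permission implies label (deviceinfo) access.
  if ((camera === GRANTED || mic === GRANTED) && deviceinfo !== GRANTED) {
    return false;
  }
  return true;
}
```

Note the row AdamR asked about (camera and mic denied, deviceinfo granted) is legal under this rule, matching "should be in the table".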

ooops, Peter has a question

peter: if you granted mic you also got cam labels, right?

cullen/all: yes

adamr: if you let the horses out of the barn the cows will follow them

switching to cullen on errors needed in IDP section

Errors in webrtc-pc

<vivien> (slide 48)

Cullen is pointing out that there are many kinds of errors that can occur

Cullen: meaning actual errors, not errors reported necessarily
... createOffer/createAnswer is where you would get these

[slide 49]

more errors

Cullen: if you are using skype but twitter is your idp, you need to be able to tell the user whether the problem is with twitter or skype

[slide 50]

dom: can't you send whatever you want in a rejection?

(going back to first slide)

cullen: there are more kinds of errors than webidl has
... first one is like an xhr error, but they give you the error code on the HTTP response

adamr: for idp we return OperationError, which can't send info back

dom: given that this is about idp do we want to wait for ekr?

cullen: no, this is a general error concern. ekr and martin would both say we need to return more info

adamr: peer connection actually has the idploginerrror

<dom> Sort out requirements around IdpLoginError

jib: there are many dom exception errors. what additional info is there?

hta: we need info beyond just the name

dom: need to define where we need errors we don't have now, where we need errors beyond idl errors, and where we need additional info in our errors beyond the name

cullen: how do other parts of JS handle this?

bernard: these are just a small number of examples out of a much larger set. we need a way to do this.

hta: i see two right now that need extra info: load and login

cullen: maybe you solve this by returning a single operation error whose extra info has the distinction between those two

jib: don't believe this is a problem beyond idp. maybe we could add one idp event, for example.

hta: don't want to define general mechanism to solve particular problems

cullen: martin and I have been talking about this, and both of us find WebIDL limiting here. We would like to return a JS object. Others say they don't like this because it doesn't map to WebIDL
... we don't care how to return info, but we don't know how.

bernard: i have a similar slide for simulcast. this is not just an idp issue.

cullen: i only reviewed one section of spec for this -- there are many more!

jib: want to see non-idp examples too.

hta: one possible solution in 1.0 is to define something that looks like what we think we should have, like we did for gUM.

dom: if there are 500 more like this we need them listed.

cullen: so how do we move forward here?
... should we try to map to existing IDL errors?

adamr: yes, it's not that hard.

cullen: but existing ones are used for too many of these such that we can't distinguish among them

hta: i like the idea of prototyping a new error in JS that returns info.

burn: issue is being able to distinguish among errors that you used the same WebIDL name for

dom: but there's also a UA-definable message field for each, though it's not machine-readable

(missed some discussion)

burn: need to go through the steps i listed before. do we need a new type of error, and do we need additional info for the error.

cullen: dan, will you create template for a new error just like overconstrained?

burn: yes

<vivien> [lunch time we resume at 13:00 UTC+1 (in 55 minutes)]

Issue 720/ PR 738 Getting the fingerprint of an RTCCertificate

Bernard: [slide 65: Getting the fingerprint of an RTCCertificate]

Bernard: there were comments that this may need to be async, but not sure why
... another approach (slide 66) is to have a method based on the algorithm identifier
... but it is undesirable because of the need to discover which hash functions are available
... and because you need to parse the identifier from generateCert
... thus the PR proposes a read-only frozen array of fingerprints
... provided by the UA

Stefan: Martin indicated that it needed to be async and thus a method

adambe: the attribute could be a Promise instead?

HTA: so the UA would generate fingerprints at the same time as certificate generation

EKR: this is an OK answer; I'm a bit confused as to why the IETF removed the one-fingerprint req

Cullen: we should go back to the IETF on this
... Re async, I'm not quite sure why we need that - maybe computing time?

EKR: these algorithms are super fast

Stefan: we should merge this and raise new issues for async / dictionary

Sort out requirements around IdpLoginError #555

Cullen: we agreed earlier that DanB would define the framework we need to create custom error objects


Issue 764 / PR 758 / PR 786

AdamBe: 758 is merged
... and 786 is still open
... it raises the question of how consumers deal with ended tracks
... in this case, with the Sender, we make it look like it's muted
... should we specify that behavior for all consumers?
... with additional info that it can't be unmuted

Peter: is there a difference between a transceiver with a null track and one with an ended track?
... or a muted track?

TimP: there is a method in Web Audio where you might care about the difference between muted and ended

AdamBe: [reading the expected rendering in getUserMedia]

Peter: but that's only for local rendering

Dom: we need to figure out whether we want a general description of expected behavior for ended tracks in consumers or do it on a case by case basis

Peter: in this case with the sender, I think we need to be clear what packets get out of the sender
... if you addTransceiver with no track, you should get no packets sent
... except maybe with the VAD?

HTA: [testing in chrome] we have a different behavior for a local and remote stream render
... the remote stream uses a frozen frame rather than black frame

JIB: in the back/front camera swap example, a frozen frame might be better than the black frame

Peter: I think HTA is arguing we should send the RTP packet for black frame for an ended track

HTA: [testing with enabled false] then it's consistent [?]

Peter: so an RTPSender with an ended or muted track, it sends a black frame

Cullen: I think between frozen and black, most systems end up using frozen

Peter: Even if you mute?

Cullen: I guess it depends on the specific use cases
... that's the difference between hold and mute in traditional systems

HTA: I would be fine with gum saying that "mute" freezes the frame

Peter: If an RTPSender gets a muted track or an ended track, it sends the black frame
... an inactive track or a stopped sender doesn't send any packet
... for video

Cullen: the 911 case is that when sending audio, you don't do silence suppression in the audio codec
... but if I mute my 911 call, it's still muted for 911 no matter what

AdamBe: from the consumer level, mute and disabled are the same thing

Peter: if this is because of the track, we send a black frame; if it's because of the sender, we don't send anything

Dom: do we need to add something more general for consumers of mediastreamtrack?
... and thus have advice about it in the extensibility section of gum for new consumers?

JIB: is there any way to freeze the frame remotely?

Bernard: yes, with active=false
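The rule Peter summarized could be sketched as follows (an illustrative encoding of the discussion, not spec text):

```javascript
// Sketch of the discussed rule for a video RTCRtpSender's output:
// - stopped sender or null track: send nothing
// - muted or ended track: send black frames
// - otherwise: send normal media
function videoSenderOutput({ track, senderStopped }) {
  if (senderStopped || track === null) return 'nothing';
  if (track.muted || track.readyState === 'ended') return 'black-frames';
  return 'media';
}
```

This captures Peter's distinction: if the condition comes from the track, a black frame is sent; if it comes from the sender, nothing is sent.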

Issue 763 Simulcast Errors

Bernard: [slide 70]
... this is similar to what we discussed earlier around error management
... one case here is the max number of simulcast streams an implementation supports

Peter: you can't add new encodings with setParameters at the moment

Bernard: right, but you can define multiple parameters for the encodings you have
... Right now, we only have InvalidModificationError or RangeError

Cullen: the case for programmatic handling of the error here is scaling down your simulcast attempt
... you would want to know e.g. "I can't do these encodings"
... e.g. "I managed to get up so many encodings"

DanB: would getting back one of the encodings that made it fail be enough?

multiple_people: yes

Dom: is there any error reporting API for this kind of problem we could get inspiration from?

Cullen: could look into what we have

DanB: what you need to know is what not to try again

Cullen: the parameter that caused the error to be raised

JIB: can't you do it the other way?
... ie. start with simple and upgrade until it breaks?

AdamR: no, because it will start sending right away

vr000m: I think starting with reporting the point of failure is reasonable

dom: do implementors feel confident they can report that type of error?

multiple_people: yes

dom: two aspects to this: define a custom error, and find how to package the detailed information of the error

JIB: one possible concern is that this error handling will lead to interop issues since browsers are likely to fail differently

Bernard: I'll be taking a stab at this for setParameters once we have DanB's work

DanB: it would be useful to know how many of these errors we will need
... I think we will start with a generic "RTCError" that will then be specialized

Bernard: I'll send a PR that will report the parameter that caused the error to be triggered
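The generic-error idea DanB mentioned might look roughly like this (all field names are placeholders pending the actual proposal):

```javascript
// Hypothetical sketch of a generic "RTCError" carrying machine-readable
// detail beyond the DOMException-style name: a finer-grained cause and
// the parameter that triggered the failure.
class RTCError extends Error {
  constructor(name, message, detail = {}) {
    super(message);
    this.name = name;                              // e.g. 'RangeError'
    this.errorDetail = detail.errorDetail;         // finer-grained cause
    this.failedParameter = detail.failedParameter; // e.g. which encoding
  }
}
```

For the simulcast case, a rejection could then report which encoding made setParameters fail, letting the app scale down its attempt and know what not to try again.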

Issue 698 JSEP/WebRTC mismatch on empty remote MID

AdamBe: I think this is pretty much solved
... on the Transceiver we have the mid attribute for which we generate a value when not provided [slide 71]
... JSEP was inconsistent, I filed an issue and Justin fixed it
... so we'll just update our reference and be happy

Issue 624 Upscale policy

Cullen: [slide 72]
... the IETF spec says you can't upscale; examples where this matters are QR codes or ultrasonic codes
... but there was a suggestion we need to relax that restriction
... I have submitted a PR to enable such a policy
... which raised issues: should the default be true or false? how does it apply to different types of streams? is it a boolean or an enum?

Bernard: is this for an encoding? or an entire sender?

Cullen: good question; I would imagine in general you want it at the session level
... this is a sender property

TimP: so this sounds like a "no-human-consumption" stream bit

Cullen: no, I think even for humans the not-allowed approach is more likely to be what you want

Alex: but the renderer could still upscale?

Cullen: this is specifically about what gets sent, not what gets rendered

Peter: we need to be explicit about what the knob is about

Cullen: for audio, you would want sample rate or sample size; for video, video size
... you can always downscale, because you can detect that the sampling size is insufficient
... but when upscaling, you can't detect that you are being "lied" to

EKR: I think this is likely to be difficult to implement in practice, e.g. due to the audio stack

Cullen: I can't think of any platform where the browser couldn't access the values I mentioned
... video sample size would be difficult, but it's intentionally left out here

Bernard: does that need additional encoding parameters when upscale is allowed?

Peter: I think upscale allowed only matters from an SDP perspective

JIB: from a WebIDL perspective, default to false for a boolean is OK

Peter: I think an enum would give us more future-proofing

Cullen: so an enum
... make it more explicit to what it applies

Peter: do we want to distinguish audio/video?

Bernard: it's already separate since there are different senders per kind

Cullen: sounds like a plan

[consensus this is the right approach]
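The agreed direction could look roughly like this (enum values and function names invented for illustration; the actual names would come from the PR):

```javascript
// Sketch of an enum-based upscale knob on the sender. Values are
// hypothetical placeholders, not the PR's actual identifiers.
const UpscalePolicy = Object.freeze({
  NEVER: 'never',      // don't send above the captured resolution
  ALLOWED: 'allowed',  // sender may upscale before encoding
});

function effectiveSendResolution(captured, requested, policy) {
  // Downscaling is always fine: you can detect insufficient sampling.
  if (requested <= captured) return requested;
  // Upscaling only under 'allowed': otherwise you could be "lied" to.
  return policy === UpscalePolicy.ALLOWED ? requested : captured;
}
```

An enum (rather than a boolean defaulting to false) leaves room for future policies, which was Peter's argument for future-proofing.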

EKR: how would you expect a browser to react to this if it didn't understand the attribute?

Cullen: if you don't allow upscale, that attribute can be safely ignored

EKR: that's a smaller answer than what I was asking about, but I'll figure that out later


Issue 760 ufrag+mid end of candidates

Peter: [slide 74]
... what we propose is to add a ufrag member to the ice event if you opt-in to it with using an optional ufrag parameter to addIceCandidate
... the issue is that if you don't opt in, it works sub-optimally
... the tricky bit is that inside the candidate object, there is the SDP mid that makes it possible to identify the transport

HTA: this is in pull request 819

Peter: this is the best option we could come up with

EKR: can you say more about the ufrag & icetransport question?

<vivien> Trickle ufrag in ICE candidate events #819

Peter: for a regular candidate, the mid is used to demux the ICE transport
... but for end-of-candidates, there is no mid, so my PR suggests using the ufrag for both levels

EKR: that doesn't align with what we've agreed in JSEP
... in 4.1.16 of JSEP it's pretty clear that you need to have both mid and ufrag in end-candidate

-> https://rtcweb-wg.github.io/jsep/#rfc.section.4.1.16 4.1.16. addIceCandidate in JSEP

" The pair of MID and ufrag is used to determine the m= section and ICE candidate generation to which the candidate belongs"

<ekr> https://reviewboard.mozilla.org/r/80636/

<ekr> oops, wrong link

<ekr> http://rtcweb-wg.github.io/jsep/#sec.addicecandidate

<ekr> "This method can also be used to provide an end-of-candidates indication to the ICE Agent, as defined in [I-D.ietf-ice-trickle]). The MID and ufrag are used as described above to determine the m= section and ICE generation for which candidate gathering is complete. If the ufrag is not present, then the end-of-candidates indication MUST be assumed to apply to the relevant m= section in the most recently applied remote description. If neither the MID nor the m= index

Peter: I think that when we wrote that, we were assuming we would be able to put the mid somewhere; but since then our thoughts on the topic have evolved

EKR: I'm not in favor of assuming ufrag can't be re-used across ice transports
... that's not compatible with what we do in Firefox

Peter: does JSEP say anything about how ufrag are chosen?

<ekr> https://tools.ietf.org/html/rfc5245#section-15.4

<ekr> The "ice-pwd" and "ice-ufrag" attributes can appear at either the session-level or media-level. When present in both, the value in the media-level takes precedence. Thus, the value at the session-level is effectively a default that applies to all media streams, unless overridden by a media-level value. Whether present at the session or media-level, there MUST be an ice-pwd and ice-ufrag attribute for each media stream. If two media streams have

EKR: for starter, ufrag can be used as a session level attribute
... this is an ICE issue

Peter: but this only matters when we have trickle ice?

EKR: this looks like a broken approach to handle this problem

Peter: all the options we had in Seattle had downsides

EKR: this seems like a pretty significant downside

Peter: ok, we'll propose an update to this approach then

HTA: the problem is not with the event, but with using the ufrag as the end-of-candidates marker
... this problem applies only to trickle ice, right?

Peter: right

JIB: the fake candidate idea in 760 had the "ugly" downside; we could make it less ugly (e.g. resend the same candidate twice)

EKR: FWIW, you guys in Chrome use the same ufrag for 2 m-lines
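EKR's objection can be shown in a few lines: routing an end-of-candidates marker by ufrag alone is ambiguous whenever transports share a ufrag (session-level ice-ufrag, or one ufrag reused across m-lines). The data below is hypothetical:

```javascript
// If several ICE transports share a ufrag, a ufrag-only lookup for an
// end-of-candidates marker cannot pick a single transport.
function transportsForEndOfCandidates(transports, ufrag) {
  return transports.filter(t => t.ufrag === ufrag);
}
```

When the filter returns more than one transport, the marker's destination is ambiguous, which is why the mid was needed in the first place.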

Issue 812 RTCIceGatheringState definition

<vivien> RTCIceGatheringState definition #812

Bernard: we use the same model for ice state at the aggregated and individual ice transport level
... which is problematic [slide 75]

Peter: the proposed solution [slide 76] is to define individual ice transport states and then how these individual states aggregate to the PC level
... [details of the states on slide 76]

<ekr> I wonder if we would be good to have the ufrag in this state

Peter: this would be similar to the other aggregated state description we have

Bernard: this also fixed another issue in the process: the old text didn't imply that getting into completed signaled end-of-candidates

Peter: note that pool-based ICE gathering is not affected by this

[consensus that the proposal is OK modulo some wordsmithing]

Bernard: and this also fixes Issue 808
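The proposed aggregation on slide 76 might be sketched like this (one reading of the slide, subject to wordsmithing):

```javascript
// Aggregate per-ICE-transport gathering states into a single
// PeerConnection-level gathering state. State names follow the spec's
// RTCIceGatheringState; the aggregation rule here is an assumption.
function aggregateGatheringState(transportStates) {
  if (transportStates.length === 0) return 'new';
  if (transportStates.every(s => s === 'new')) return 'new';
  if (transportStates.every(s => s === 'complete')) return 'complete';
  return 'gathering';   // mixed, or any transport still gathering
}
```

This mirrors the other aggregated state descriptions in the spec: the PC-level value is derived from, not shared with, the per-transport values.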

Test suite

Alex: this is an update on the WebRTC test suite
... I'll also describe the limitations of the current testing setup in the W3C system
... yesterday, I described that we only have 2 ongoing test suites: getUserMedia and WebRTC
... WebRTC is less advanced than getUserMedia
... we have 5 test cases for WebRTC, mostly around the PC API
... [slide satellite view]
... [slide APIs view]
... we only have test for peer connection, and very superficial testing mostly at the IDL level
... part of the problem is that a big part of the spec is hard to test within the context of the W3C test harness
... [slide Manual Single page tests] results for March this year (similar to what I presented on gUM yesterday)
... [slide automated single page tests]
... some tests are harder to automate than others
... [slide automated single page tests #3]
... [map of manual single page tests across os]
... [map of automated single page tests across os]
... [slide "intermediate conclusions 1"]
... you have to distinguish different type of tests
... traditionally, W3C tests have been single-page tests
... Web Sockets have added the need to connect to a server component
... WebRTC needs browser-to-browser
... you can do some testing within the same page
... but lots of tests need to be done across browsers, and with the network components (incl proxy, nat, etc)
... Mozilla have a python tool to emulate these various network conditions
... the tests are currently not written to handle the case with two roles for the browsers (caller / callee)

Alex: IMTC, which has a mission of interop, took it on its own to add more Edge support to adapter.js
... thanks to Bernard

Alex: [slide Interop Tests] these are tests running across browsers
... [intermediate conclusion (2)]
... we're missing some of the interesting cases (e.g. desktop to mobile)
... if we were to test everything, we would have ~500 cases
... currently we test 6% of that
... the road to progress has been to improve w3c tests, add Edge support in adapter.js, add more browser support for web driver
... we also need better support from web driver (e.g. to manage prompts)
... for testing simulcast, we'll need an SFU component
... Jitsi was offering to help testing simulcast

Bernard: we've been looking at open source SFUs, there aren't many of them
... and they usually come with big dependencies that makes it a bit harder to isolate

Alex: what's the plan for Chrome to test these new features à la simulcast?

Peter: I don't know

AdamR: not sure about Firefox either

TimP: our experience is that it's much easier with a central service you talk to
... that's why jitsi has been attractive
... the other useful thing is that Web Audio enables you, for instance, to detect whether audio is being received
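
TimP's Web Audio check might look like the sketch below. In a browser the samples would come from an AnalyserNode's getFloatTimeDomainData(); here a plain Float32Array stands in for that buffer, and the RMS threshold is an arbitrary illustrative value, not anything from a spec.

```javascript
// Sketch: decide whether audio is actually arriving by computing the RMS
// energy of a block of time-domain samples. In a browser, `samples` would
// be filled by AnalyserNode.getFloatTimeDomainData(); this is a plain-JS
// stand-in for illustration only.
function isReceivingAudio(samples, threshold = 0.01) {
  let sumSquares = 0;
  for (const s of samples) {
    sumSquares += s * s;
  }
  const rms = Math.sqrt(sumSquares / samples.length);
  return rms > threshold;
}

// Silence vs. a small sine burst (amplitude 0.5, RMS ~0.35):
const silence = new Float32Array(128); // all zeros
const tone = Float32Array.from({ length: 128 },
  (_, i) => 0.5 * Math.sin((2 * Math.PI * i) / 32));
```

The same idea works for video: as HTA notes next, you can draw frames to a canvas and check that pixel data is changing.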

HTA: likewise, you can analyse a video with a canvas
... Regarding Web Driver, any news from them?

Dom: they're focusing on finishing their spec, but would accept an extension spec if someone works on it

Alex: as we agreed today

EricC: from Apple, I'm not sure what our plans are yet

Bernard: I wish there was a way to test without a central server

Cullen: would Edge in ORTC mode be able to receive simulcast?

Bernard: probably; would it work in Chrome?

Peter: probably

Alex: simulcast is the only case where we would require an SFU (and even then maybe not, as we just discussed)

Dom: fwiw, from a W3C perspective we only need to test features at the API level, and don't have to test all the combinations

Alex: but still you have to test the network layer for e.g. ICE

TimP: we've been talking about setting up services with specific behavior (e.g. invalid DTLS)

<vivien> (break time)


Harald will present

Harald: changes since Sapporo
...see slide
...the time definition for remote timestamps has been defined

Harald: ToDo: webrtc-pc linkage, conformance reqs, registry maintenance
...Implementation status: see slide
...New stats
...QP statistic - issue #57
...Circuit Breaker - issue #61
...ICE pacing
...the process for new stats includes filing a bug
...ICE pacing is issue #62
...extension mechanisms (PR #43)
...consensus model

Cullen: what consensus?

Harald: good question

<vivien> Added procedures for new stats #43

Bernard: there is also an issue related to IPR

<stefanh> (discussed by people)

Harald: living doc as registry
...use dated version if a stable ref is wanted

AdamR: what is W3C's position on living docs?

Dom: the best approximation is: the editor's draft will be updated frequently, with snapshots going through the TR process for the IPR review etc.; do that and get Stats 1, Stats 2, ...

Varun: conformance to what? TR version

Cullen: I would like some of the stats to be optional.

Harald: let's focus on how to add new stats.
...different models considered

Ekr: <scribe did not get>

AdamR: where is window.RTCPeerConnection registered?

Harald talks through the alternatives on the slide

Harald: believes that the model in PR #43 is the simplest given the circumstances.
...process is: file a bug, convince the WG that the stat is needed, and the editors will add it.

Cullen: I would like a general extension mechanism for stats
...so that another standards body could add a stat

Varun: citable ref needed.

Cullen: I think I'm asking what the reasons to say no are?
...if the reason is: it looks like not enough people will implement it - that would not be a good reason.

AdamB: in IANA you also have to convince editors

Harald: We should be careful since we have so little experience, can't write rules yet

AdamR: what about "vendor name prefixes"?

Cullen+Harald: doesn't work

Ekr: we could do random numbers for names

Ekr: I don't understand what you are trying to achieve, Harald

Varun: (gives an example where units did not match up even though names matched - needs some review before adding)

Harald: strong push back against prefixed names from the Chrome team

Conclusion: Cullen to propose rules (perhaps modeled after IANA expert review)

Cullen: want a process that would work after the WG has been closed.

Dom: what do we do about stats that we do not expect to be widely implemented?

Cullen: they must be part of the snapshot doc that makes it to Rec.

Dom: don't want the situation where a large portion of stats are not tested.

Cullen: will be removed if there are not two implementations.

Harald: how to split between webrtc-pc and webrtc-stats
...do not spec the same thing in two places
...I think webrtc-pc should have the API and only the API
...webrtc-stats should have the object definitions
...a PR for this is under construction
...conformance is a property of webrtc-pc
...see PR#59 for proposal
...a PR for webrtc-pc not yet done

Varun: we need consensus to remove section 8.5 from the PC document.

Harald: does this look alright?

meeting: yes!

Harald: open issues re. stats (see slide)
...#20: if you don't support a stat don't report anything at all

Jan-Ivar: some stats will not be available until some time has passed
...what to report?

Dom: for new stats a starting value must be defined

Harald: #26 what happens when things stop? Proposal: things are frozen (including timestamp)

Dan: this has already been decided.

Harald: Issue #49: should there be an API version flag? My proposal: no

(people agree to this)

Harald: guidance for extending objects versus extending stats
...for future work: define getStats on objects

Dan: split issue #295 - consists of two parts

Harald: yes

Dom: should we have split up stats per object already?

Varun: it is very convenient to be able to get all stats from one single object (the PeerConnection)

Dom: the TAG might complain, but I understand the advantages of having one object with all stats.

Peter: if you have several objects there is the issue of syncing the stats. Thinking about something like a stats collection object into which you could throw all the objects you want stats from.

Varun: likes that idea
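
Peter's stats-collection idea could be sketched roughly as follows. The class and method names here are invented for illustration (nothing like this exists in webrtc-pc), and plain objects stand in for real stats-bearing WebRTC objects; the point is just that every value in one snapshot shares a single timestamp, sidestepping the sync problem Peter raises.

```javascript
// Hypothetical sketch of a "stats collection" object: you throw in the
// objects you want stats from, then collect one report in which all
// entries carry the same timestamp, so values are mutually comparable.
class StatsCollection {
  constructor() {
    this.sources = [];
  }
  // A source is anything exposing a stats() method returning
  // { statName: value } pairs (a stand-in for a real getStats()).
  add(source) {
    this.sources.push(source);
  }
  collect(now = Date.now()) {
    const report = new Map();
    for (const source of this.sources) {
      for (const [name, value] of Object.entries(source.stats())) {
        // One shared timestamp for the whole snapshot.
        report.set(name, { value, timestamp: now });
      }
    }
    return report;
  }
}

// Example stand-ins for a sender object and a transport object:
const sender = { stats: () => ({ packetsSent: 1000 }) };
const transport = { stats: () => ({ bytesReceived: 5000 }) };

const collection = new StatsCollection();
collection.add(sender);
collection.add(transport);
const report = collection.collect(1234);
```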

Harald: anything else about stats?

Bernard: webrtc-pc does not ref webrtc-stats

Dom: when can -stats go to CR?

Harald: the changes discussed today are minor, should be ready before webrtc-pc

Dan: not clear to me how we will make the distinction between the living and the snapshot document
...and how to maintain them
...one idea: have different maturity for different stats in one doc

take 2 on ICE w/ufrag

Peter: take 2 on ICE w/ufrag
...looked back at what was discussed in Seattle
...add a new event for end of candidates
...see slide for proposal

Ekr: (scribe missed details, but saying it would not work)
...existing endpoints would not expect ufrag in candidates
...we can't just add things and expect them to work

Peter: why would that fail?

<shijun> we lost the TPAC room in Webex

<ekr> My position here is as follows: unless the spec says that a given structure is extensible and that people need to accept a new added field, then you can't just add new fields

<ekr> The endOfCandidates part is fine

<ekr> It's adding the ufrag to onIceCandidate() and addIceCandidate()

<ekr> that is the problem

<ekr> Slide #108 is fine

<ekr> Slide #107 (addIceCandidate) is not

<hta2> what combination of client & browser is #107 a problem for? (old client / new browser, new client / old browser, or both?)

<ekr> Both

<hta2> new client on old browser will see the added field disappear at WebIDL processing (that's how dictionaries work)

Peter: we need this to go to completed state

Jan-Ivar: what happens if you send the last candidate two times?

Taylor: you may not gather at all

<ekr> hta2: your claim is that if I pass random new dictionary fields to addIceCandidate it will be ignored? Do you have a cite for this?

IIRC, that's the WebIDL dictionary conversion algorithm that does this

(it "removes" anything that it doesn't recognize)

<ekr> can you please provide a cite?


(step 5 only applies to "For each dictionary member member declared on dictionary, in lexicographical order: ")

<ekr> OK. It seems like it's still a problem in the JS

<ekr> In old client/new browser case
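
hta2's point about dictionary processing can be illustrated with a much-simplified model of WebIDL's ECMAScript-to-dictionary conversion: only members declared in the IDL are read from the input object, so an unrecognized field (the proposed ufrag addition, spelled "usernameFragment" below purely for illustration) silently disappears when a new client passes it to an old browser's addIceCandidate(). This is a toy model, not the full spec algorithm.

```javascript
// Toy model of WebIDL dictionary conversion: copy only the members the
// IDL dictionary declares; everything else in the input object is
// silently ignored. (The real algorithm also handles defaults, type
// coercion, and inherited dictionaries; none of that matters here.)
function convertDictionary(declaredMembers, input) {
  const result = {};
  for (const member of declaredMembers) { // lexicographical order in the spec
    if (input[member] !== undefined) {
      result[member] = input[member];
    }
  }
  return result;
}

// An "old browser" whose RTCIceCandidateInit predates the ufrag proposal:
const oldCandidateInit = ['candidate', 'sdpMLineIndex', 'sdpMid'];

// What a "new client" might pass, including the proposed new field:
const fromNewClient = {
  candidate: 'candidate:1 1 udp 2122260223 192.0.2.1 54400 typ host',
  sdpMid: 'audio',
  usernameFragment: 'abcd' // not declared by the old browser
};

const converted = convertDictionary(oldCandidateInit, fromNewClient);
// converted keeps candidate and sdpMid; usernameFragment is gone.
```

This is why hta2 argues the new-client/old-browser case degrades quietly at the WebIDL layer, while ekr's remaining concern is the old-client/new-browser direction in JS.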

Peter: we have not been able to find a solution that doesn't break, does not require opt-in and is not a hacky mess; have to choose

Jan-Ivar: hacky for whom?
...should not complicate simple demo

Ekr: (did not catch)

<ekr> This is only an issue for FAST FAIL for ICE restart. I don't think prioritizing a simple demo is something important

Dan: for people who care about ICE restart, what is the likelihood of them not parsing candidates?

Peter: option X: add an RTC config to select

Jan-Ivar: it seems to me that things breaking for experts is less of a problem than simple examples breaking

Dom: I'm OK with the opt-in

Harald: unable to reach consensus, we should work more

<ekr> Doing something hacky with the candidate line makes me pretty sad, if it means duplicating the final candidate

Peter+Taylor: two options, we need to pick one

(discussion in room)

Peter: what about deprecating onaddicecandidate and replace with a new event?

Harald: I would like to move the event down to transport.

Closing comments

Harald: good progress in meeting. We're now in the wide review process of webrtc-pc, looking forward to the input we expect to get.
... we will have more virtual interim meetings

<stefanh> (closing meeting)

<stefanh> Thanks everyone!

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/09/26 23:24:13 $