See also: IRC log
[reviewing PR 292 on slide 13 of Peter's "WebRTC 1.0 objects at 2015 f2f" slides]
Peter: the initial intention of onerror was to signal DTLS errors
... now covered by the new connection state
Peter: PR 292 offers a warning/fatalerror event that would report other types of errors
Justin: these errors would not be actionable by the app
Justin: the only point would be logging and we already have entry points for that
... it would be just free-form text
Mathieu: sometimes things don't work with no indication as to what is not working
... I wonder whether such a mechanism could help detect these issues
Martin: you would still get in a failed state
Ekr: we get reports from people that "things are busted"
... it would be nice to have a hook for the app to be told where things are clearly wrong
... but I don't feel strongly about it
... there are e.g. some NSS errors we could bubble up with
this
... we could as well bubble them up in the console
Martin: that's what is done with XHR with CORS errors
Cullen: debugging idp errors is a nightmare at the moment
Martin: I don't think the two points (warning/fatalerror) are particularly needed since the pc would go to the failed state
Justin: it used to be that we didn't have callbacks on some errors; but now we have the right transition states so I don't think we need this
Mathieu: the issue with just reporting in the console is that it only helps with errors in the development environment
Dom: I'm not hearing much support, and we have plenty of other work to do, suggest to drop
Harald: show of hands of who want this?
[a couple of hands up]
RESOLUTION: we drop PR 292 (generic error management)
DanB: for the record, the people who raised their hand are people with experience trying to get people to debug their app
<jesup> I was trying to raise my hand but things moved on before I could. And hard to hear people speaking away from the mic due to distortion and volume
<jesup> vivien: sure. I wasn't asking it to be spoken into the room. I was noting it for the record here since things had already moved on. I was trying to vote per the request but before I could decipher what was said and type it people had decided. FYI
[Justin presenting "SessionDescription accessors" slides]
Justin: this is for getting access to the pending and current remote session description
... currently, you can only get either the current or the pending description
... not the two simultaneously
... PR #225 proposes to offer both the current and pending descriptions, with the pending one null if none is pending
Justin: Jan-Ivar has an alternative proposal
... stable vs "last-set"
... [review of pros/cons]
Ekr: I prefer #225
Cullen: the way to look at this is that we messed up in our initial design
... #225 gets us out of our mess with a clearer set of values that can be easily understood
Harald: anyone who prefers Jan-Ivar's model?
[no one speaks up]
RESOLUTION: No change to session description accessors (ie leave 225 as is)
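[Editor's note: a minimal sketch of the accessor model PR #225 proposes, modeled as a plain function over an object with two properties. The property names pendingRemoteDescription/currentRemoteDescription are illustrative, not confirmed in the minutes.]

```javascript
// Sketch of the PR #225 model: two accessors, with the pending one
// null when no offer/answer exchange is in flight.
function effectiveRemoteDescription(pc) {
  // Prefer the in-flight (pending) description during a negotiation,
  // otherwise fall back to the stable (current) one.
  return pc.pendingRemoteDescription !== null
    ? pc.pendingRemoteDescription
    : pc.currentRemoteDescription;
}
```

This avoids the ambiguity of the older single accessor, which could return either the current or the pending description but never exposed both.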
[Martin presenting, reviewing https://github.com/w3c/webrtc-pc/pull/283]
Martin: the basic premise is that stuff arrives even though you didn't negotiate it
... for debugging purposes, you want to know when media arrives
... you might also want to generate an offer/answer in this case, but that's less important
... it might be that the other side has a defect (e.g. sending
media in the wrong codec)
Ekr: I thought this was about race conditions rather than defects - defects can go to the console
... e.g. receiving media before receiving the answer
... it's likely to happen when you're using @@@
... let's use this as our guiding use case
Martin: the PR needs to explain more what to do with the event in case of renegotiation
Cullen: as long as you receive packets that match your initial offer, you should be able to deal with the received media even if the signaling hasn't closed the loop yet
Ekr: there are 3 cases: the defect, the race condition, @@@
Justin: we should fast forward the content when the stream gets actually created (?)
Mathieu: why do you care about
getting media that early?
... how do you populate the mediastream?
Cullen: offer/answer is a two-way handshake
... imagine a re-offer - the remote media will change as soon as that offer is received, before the answer comes back
Ekr: this can only genuinely happen when your offer is not behind a NAT
Cullen: we assume that the ICE
and DTLS channels have already been set up
... I agree that the mediastream setup would be tricky
Alexandre: so what do you do with the media before the answer has been received?
Justin: as soon as you have sent the offer, you should be able to play the media
Alexandre: how long would you keep up with that un-signalled media?
Bernard: the answer also has the DTLS fingerprint - how do we manage that?
Justin: let's keep this as a
separate issue
... I think for the renegotiation case, we need to move to a
situation where we have a rtpreceiver as soon as you have sent
the offer
Cullen: so we want to close this PR and replace this with an early rtpreceiver
Justin: there is still the question of what the msid should be
... the question of how to deal with the case where the DTLS fingerprint isn't known yet isn't solved though
... we shouldn't have different security properties based on whether an event fires
Ekr: a partially validated state isn't something we want to get into
... we could buffer the data as an acceptable solution
Justin: unauthenticated media should not be surfaced to the app - it should be either buffered or dropped
Mathieu: with renegotiation based on changing certificates, how does this impact the buffering of unhandled media?
Justin: I think it's in the same bucket as the initial DTLS fingerprinting
... do we need to deal with errant media?
Cullen: the IETF spec says that if you're getting RTP that doesn't match your offer, you drop it
Justin: I want this to be very explicit
... we also need to determine what happens with SSRC changes
... suggest we punt this to the IETF
[Reviewing https://github.com/w3c/webrtc-pc/pull/289]
Cullen: we have had agreement on this for a long time, but without a specific proposal
Cullen: PR 289 proposes to add a new attribute to the pc to define the ICE pool size with a default of 0
... a non-zero value allows gathering ICE candidates faster, at the cost of generating more ICE traffic
Justin: the main question is whether you can set this only initially or can change it later
... [reading from JSEP]
... so we may need to have this in setConfiguration as well
... in which case this PR is not sufficient
Peter: what happens if I have 2 transports and set the pool size to 10?
Justin: there will be 8 candidates that won't be used but will be kept warm
Harald: the semantics of having candidates in the pool need to be described somewhere
... but beyond that, is the control surface exposed on the right object? is it the right API?
Mathieu: what happens if you create 3 m-lines and then set the pool size to 1?
Ekr: sounds like a no-op
Justin: the only thing needed here is to add the property to setConfiguration
Cullen: I've added a comment to a PR
Justin: should we use short or long?
Harald: I recall the advice is to use long unless there is a specific restriction to the underlying value space
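[Editor's note: a sketch of the candidate-pool configuration idea from PR 289, modeled as a pure function over a plain configuration object. The property name iceCandidatePoolSize follows the PR's proposal; the validation rules shown are an assumption for illustration.]

```javascript
// Sketch: apply the proposed ICE candidate pool size to a configuration
// dictionary, defaulting to 0 as discussed in the room. A later
// setConfiguration() call could reuse the same logic.
function applyIceCandidatePoolSize(config, requested) {
  const size = requested === undefined ? 0 : requested;  // default 0
  if (!Number.isInteger(size) || size < 0) {
    throw new TypeError('iceCandidatePoolSize must be a non-negative integer');
  }
  // Return a new config rather than mutating the caller's object.
  return Object.assign({}, config, { iceCandidatePoolSize: size });
}
```

A non-zero pool size pre-gathers candidates that transports can pull from, trading extra ICE traffic for faster call setup.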
[Bernard presenting "SSRC API for WebRTC" slides]
[review of PR 300 https://github.com/w3c/webrtc-pc/pull/300]
Martin: the voiceFlag - do we need this?
Cullen: it is often not very reliable, but I wrote it up in case someone wants it badly
Martin: I think we should remove voiceFlag
Harald: so with the timestamps: we collect this data?
Cullen: the spec proposes a magic value of 10s of collection
Justin: why are we using these timestamps rather than using the immediate average?
Bernard: we looked at this in ORTC; the idea with this approach is to avoid having people poll very frequently to get an accurate picture
Justin: 10s for a real-time display sounds odd
Cullen: 1s would work better for you?
[agreement that 1s is sensible]
Harald: audioLevel - is that the latest value?
Cullen: I'm editing the PR as we speak
Justin: if there are no csrcs, @@@
Cullen: we could add a requirement for the browser to compute the audioLevel when not available from the mixer
[agreement that this would be a good addition]
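[Editor's note: a rough sketch of the contributing-source collection discussed for PR 300, keeping only entries seen within the last second (the window agreed in the room) with the most recent audioLevel per CSRC. The entry shape is illustrative, not the PR's exact dictionary.]

```javascript
// Sketch: filter observed CSRC entries to a recent window and keep the
// latest audioLevel per contributing source.
function recentContributingSources(entries, nowMs, windowMs = 1000) {
  const latest = new Map();
  for (const e of entries) {
    if (nowMs - e.timestamp <= windowMs) {        // within the 1s window
      const prev = latest.get(e.csrc);
      if (!prev || e.timestamp > prev.timestamp) {
        latest.set(e.csrc, e);                    // keep the newest entry
      }
    }
  }
  return [...latest.values()];
}
```

This mirrors the stated design goal: apps get an accurate recent picture without having to poll at a very high frequency.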
<jesup> can someone try leaving and re-entering, and/or check mic hardware there? The noise is making it really, really hard to follow remotely
[break]
[Peter reviews use cases worked with the various warmup proposals as shown in the "Use cases for addTrack/createSender/createReceiver/addMedia" spreadsheet]
Peter: with createNullMedia, there is no way to know if the other side has stopped except by reading the SDP
... it's not clear either how to deal with 187
... I've also added the race condition use case
Cullen: the 2nd and 3rd feel quite similar
... I suggest we should first decide between 1 vs 2/3
... and then pick between 2 and 3
Martin: based on the earlier discussion, I'm happy to go with 3
... I don't think there is any significant value in patching stuff up
... the early media use case is compelling enough
[several people agree 3 is right]
Harald: we should define addTrack in terms of addMedia if we go with 3
Mathieu: if you're already receiving video and you addTrack video to send
... do you have to create a new receiver, or can you reuse it?
... what's the behavior of addTrack?
AdamR: currently addtrack reuses the sender even if the receiver is in use
AdamR: if we define addTrack in terms of addMedia, how do we deal with the mediastream arguments?
Harald: addTrack would be defined as addMedia and then replaceTrack
Martin: I'm a little uncomfortable with the notion of having 2 or 3 ways of doing the same thing
Harald: I've no problems if they are only shorthands
Martin: but we need to make sure they have the same semantics
Peter: it's true that we would have to define how to deal with the streams in addTrack - but doesn't have to be doable in javascript
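[Editor's note: a rough model of Harald's suggestion that addTrack be a shorthand for the proposed addMedia followed by replaceTrack. addMedia, the transceiver shape, and the stream handling here are stand-ins for an API that was still in flux at this meeting.]

```javascript
// Sketch: define addTrack in terms of addMedia + replaceTrack, as
// discussed. The mediastream association is the open question noted
// in the minutes; it is modeled here as a plain property.
function addTrackViaAddMedia(pc, track, ...streams) {
  const transceiver = pc.addMedia(track.kind);  // hypothetical: creates a sender/receiver pair
  transceiver.sender.replaceTrack(track);       // then attach the actual track
  transceiver.sender.streams = streams;         // stream handling: unresolved in the minutes
  return transceiver.sender;
}
```

If the shorthand has exactly these semantics, addTrack stays a convenience with no behavior of its own, which addresses Martin's concern about having several ways of doing the same thing.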
Justin: how do you put a track on hold?
Peter: that's not part of this PR
Martin: this was initially described as frobbing the send/receive attributes
... I'm somewhat unhappy with the abstraction but we need to be able to do that
Justin: we need to make sure we get in the right direction and this won't crumble when we add these additional use cases
Cullen: I think we can agree this is the right direction and still need to polish it to address all the use cases
Stefan: how does this fit with our goal to go to 1.0? we need to set a clear deadline
Martin: I think we should merge what we agree on, and do the follow-up in a subsequent PR
... we also need to understand the lifetime of identifiers
Justin: so we're proceeding with
a couple of pull requests; we need to list explicitly the
issues that will need to be addressed
... this includes the msid for early media
... what else needs to be addressed?
Stefan: there is also dealing with the mediastream arguments in addTrack
Peter: this is only an issue if we need to make addTrack implementable in JavaScript
... we could also add an additional parameter to the dictionary arguments of addMedia
Mathieu: one concern: "media" is a terribly overloaded term
Peter: we had a suggestion of
RTCSdpMediaSection
... so, we know we don't want addMedia, we want something else
TBD
... How do we deal with removeTrack?
Harald: we remove the track from the sender
Justin: it would be the equivalent of setting send to false
Peter: so removeTrack is not
sending anymore but still receiving?
... I was kind of hoping we would get rid of it
Harald: if we define it as replaceTrack(null), then we should define replaceTrack(null) as doing what we need
Peter: I'm not sure what the benefit of removeTrack is
... so addTrack is "send/receive"
... and removeTrack is stop sending?
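[Editor's note: a sketch of the removeTrack semantics converged on above: equivalent to replaceTrack(null) plus turning off the send direction, while the receive side keeps going. Object shapes are illustrative.]

```javascript
// Sketch: removeTrack as "stop sending, keep receiving".
function removeTrackModel(transceiver) {
  transceiver.sender.track = null;  // replaceTrack(null)
  transceiver.send = false;         // stop sending
  // transceiver.receive is deliberately left untouched: still receiving
  return transceiver;
}
```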
[ Peter's "WebRTC Objects - recap day 2" slides]
[Peter presenting updated PR 278 for connectionState]
Peter: what state do you get in when all the transports close? should we go back to 'new'?
Martin: I think we need an additional 'closed' state
Cullen: we could rename 'new' to mean 'not yet connected' and then use it also for all closed
AdamR: how about 'idle'?
Justin: I'd be fine with using 'new'; prefer to 'idle'
Dom: we could start with 'closed' as WebSockets do?
Justin: don't like it
[coin flip]
Peter: 'new' it is
... moving on to PR 273
... it proposes an enum with the attribute degradationPreference
Martin: might want to bikeshed on this in the PR
Peter: on to codec reordering
[agreement this is the right approach]
Peter: on to rtpsender.getCapabilities
... this would allow determining whether a browser can e.g. join an existing conference in a given codec
Ekr: this is better than going through createOffer but it's a very marginal improvement
Martin: it might actually help us with the codec preference use case
Ekr: given that we have RTPSender, this looks good
jan-ivar: I think the name getCapabilities() is confusing since we use it for constrainable interfaces
Martin: I don't think that's a problem
... why is this on sender?
<jesup> not that it matters to the discussion, but we do have some level of resource reservation for hardware codecs in mobile. However, it may not be tied properly to the Promise/etc.
Peter: you want it on receiver too?
Martin: yes; checking the receiver in general might be better for the use case we described
Stefan: we still don't have a proposal for reordering pre-negotiation
Martin: I can work on a proposal once we have the two proposals for codecs merged
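[Editor's note: an illustrative use of the proposed getCapabilities(): checking for a codec before going through createOffer, e.g. to decide whether the browser can join an existing conference. The { codecs: [{ name }] } shape is an assumption borrowed from ORTC-style capability objects, not something the minutes fix.]

```javascript
// Sketch: capability check for a codec by name, case-insensitively.
function supportsCodec(capabilities, codecName) {
  return capabilities.codecs.some(
    c => c.name.toLowerCase() === codecName.toLowerCase());
}
```

Per the discussion above, the same check might be more useful on the receiver side for the conference-joining use case.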
[Bernard presenting "Simulcast in WebRTC 1.0" slides]
Bernard: based on my understanding, we only talk about simulcast from the perspective of the sender
... we have 3 options: have an option in createOffer (from AdamR), use RtpSender.setParameters (from Peter), or stay with what we have (track cloning)
... in the last option, you create track clones for each of your variations
... the browser might want to optimize and detect this as simulcast, but in the dumb case, it would create 3 m-lines
... the advantage is that you don't have to change anything in the spec
... you have per-encoding control through the constraints
... some apps are already doing this - not sure their devs are very happy with it
varun: if you monitor the tracks, you can adjust
Alexandre: I'm not sure that getStats() gives you the right info for that at the moment
... having a way to indicate to the browser that you want these tracks to be linked would be better
Bernard: we said we would get different m-lines; looking at the bundle draft, it wasn't clear whether they could share the same payload type
Cullen: if they come with different constraints, they would have to be different payload types; not so for the bitrate
... having the JavaScript adjust the bitrate over time will interfere with congestion control; doesn't seem good
Martin: the problem with this option is that we don't have a clear signal from the browser side to properly optimize this
[AdamR presenting "Option A"]
AdamR: all this does is to add a maxSimulcastCount on RtpSender to give the signal
Martin: this would have to go through setParameters
AdamR: ok
Justin: this looks like
OfferToReceiveVideo
... it sounds like it brings a lot of implicitness
Cullen: delegates the negotiation to the other side (the SFU)
AdamR: it's fairly simple to spec
out
... but it doesn't give a lot of control
... there was also the approach that Harald suggested: always
send e.g. 3
[Peter presenting "Option B"]
Peter: you use setParameters() to add more encodings
Harald: this assumes you don't need to indicate simulcast before doing createOffer
Peter: it doesn't change the SDP
Cullen: it's not consistent with the IETF documents
... you can't send stuff you haven't negotiated
... if you receive 2 SSRCs on the same port, the IETF says to discard one of them
... we would need to signal that this is simulcast
... this is no longer RTP
AdamR: nor SDP
Cullen: I think it's a great API surface, but it needs to interact correctly with SDP
Peter: I don't want to go there in 1.0
Bernard: you could trigger onnegotiationneeded when setting this?
Peter: no other setParameters has this effect
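[Editor's note: a sketch of Peter's "Option B", where simulcast layers are expressed by growing the encodings list passed to setParameters(). The encoding field scaleResolutionDownBy and the halve-per-layer convention are illustrative assumptions, not something the minutes specify, and the SDP interaction flagged by Cullen remains open.]

```javascript
// Sketch: build a parameters object with one encoding per simulcast
// layer, each layer at half the resolution of the previous one.
function withSimulcastEncodings(params, layerCount) {
  const encodings = [];
  for (let i = 0; i < layerCount; i++) {
    // 2 ** i: full size, half, quarter, ... (an illustrative convention)
    encodings.push({ scaleResolutionDownBy: 2 ** i });
  }
  return Object.assign({}, params, { encodings });
}
```

The hybrid idea floated below would declare the layer count before negotiation and then use a call shaped like this to configure the layers.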
Justin: backing up a little, what we want is a workable solution for the 1.0 timeframe
... this proposal doesn't have dependencies on unfinished drafts, unlike option A
... it relies on SFUs that expect this
Bernard: how about a hybrid of A and B? you define your simulcast count ahead of the negotiation and then use this proposal to avoid the dependencies on the SDP draft
Cullen: what's blocking the SDP simulcast draft?
AdamR: there are controversies around overloading the payload type
Cullen: could we get this settled by the end of Yokohama?
Justin: feels unlikely
Cullen: the API surface looks OK, but not signalling this in SDP is wrong
Peter: but the hybrid approach would address this right?
Martin: why not just fix the simulcast draft?
Justin: it creates a dubious dependency
Harald: I would object to adding a dependency on another unfinished draft
Justin: getting simulcast right
is complicated
... there are a lot of details that are hard to get right
Bernard: this has been going on for a decade
Alexandre: if we were going for that hybrid approach with an SDP close to the current draft, without putting a hard dependency to the draft to avoid the delay, would that be acceptable?
Justin: if that's possible yes
AdamR: but that means this group is starting to define SDP semantics
Harald: we could define the API surface without depending on the wire protocol
... what's important to me is that you can ship a conformant implementation without @@@
Justin: it feels like we don't have a mature proposal for 1.0
Cullen: I think we need this for 1.0
Ekr: I think we should figure out how far we are from consensus before getting into the in/out discussion
Bernard: the current status (option C) is not completely satisfactory
... option A relies on the MMUSIC draft
Ekr: is the only problem with C that the browser can't be smart about it?
... its major advantage is that it doesn't require any new SDP
... could we reuse that SDP with the other proposals?
Bernard: there is a gotcha
there
... @@@
Peter: we could go to an option D where you clone tracks and then frob the framerate scale
Justin: option C is not compatible with scalable coding, so it's a non-starter for me
Cullen: agree
Harald: it's also perfectly possible to define this as an extension spec
... and bake it separately from the main spec
Dom: +1
Cullen: would not be happy with this given that browsers are shipping something in this space
Ekr: what are the obstacles on converging?
Justin: I would be much more
confident that we could make progress in a separate spec
... clearly it's nowhere near a slam dunk in terms of
consensus
Ted: do we already know whether an extension spec would go toward A or B today?
Justin: I would prefer B
Dom: I think option A is basically optimized for 1.0 shipping
... so if we were going to an extension spec, B would seem the logical focal point
Justin: B' seems to be the right approach
Cullen: B' sounds good, but I want this in 1.0
Stefan: who would object to having it as a separate draft?
Cullen: I would
Harald: having a separate draft doesn't prevent bringing it back
Ted: as an rtcweb co-chair, I wanted to ask about requesting MMUSIC to have an interim before the Yokohama meeting so that they can have input before your TPAC meeting
[nods]
ErikL: we don't have consensus, which means simulcast is out for now
Harald: there seems to be consensus on B' being the right starting point
Cullen: how about we merge something with indications there is no consensus yet?
Harald: I don't want to wait until
this gets decided before we finalize the in/out
... if MMUSIC says they can have an interim, we could delay
till TPAC
... if not, it's out
Cullen: I think the out decision would have to be made by the WG, not automatic
Ekr: we should work on PR, see where the MMUSIC is going, and decide whether the proposal has consensus, there is consensus it's not ready, or there is no consensus, and then make a decision on a call
Harald: we could say we should
schedule a call once we have an answer from MMUSIC
... we would need a baked proposal 1 week before the call
<jesup> hear
Ted: there needs to be 2 weeks' advance notice for the MMUSIC interim
... plus the time for logistics, so the earliest would be 4 weeks
ErikL: so we say it's out?
Cullen: no, it's not yet decided whether it's in or out
DanB: it's the same as others: until we have a ready PR, it's not going to be merged
ErikL: so it's undecided
Stefan: when should we plan to have our call?
Dom: let's settle this offline
[meeting resumes]
[Stefan presents "In/out 1.0" slides]
Stefan: discussing PRs that are
accepted
... discussing 291
Peter: for 291, nothing needs to be messaged. confirm?
Stefan: 271, warmup is part of the transceivers.
AdamR: related to 279, are we deprecating the offerToReceive?
Justin: yes.
Stefan: 300 is going to address CSRC issues.
... 284, review ICE errors offline.
Martin: will read and confirm if 284 looks good.
Stefan: we haven't confirmed if we like addMedia as the name of the API
Justin: there is a distinction between PRs that people have reviewed and that look good, and ones where names and nits need to be fixed
Stefan: RtpSenderInit, in or out?
Peter: has been outdated with transceivers. close PR.
Stefan: are we done? i.e., no more features to be added.
Cullen: are we done? we still need diagnostics and stats to be added.
janivar: PR 301 is still needed.
Ekr: not overly worried if we are missing some information with getStats.
DanB: can we make the decision: better error reporting and conformance statements are not considered new features, just simple enhancements
Dom: Mediacapture went to LC.
... Some comments are still pending.
... i18n looked okay
... comments on error reporting were raised.
Dom: got comments from the Privacy WG, not entirely sure if this is LC or not, therefore still waiting for clarification.
... TimedText WG, input was to clarify scope
... Also got input from Audio WG.
AdamR: volunteered to write the audio output...
[Stefan presenting "How to handle future constraints" slides]
Stefan: discussion about
registry if this is the right way forward or not.
... recapping requirements
Ekr: why are we using meeting time to discuss this?
Martin: this is largely a management problem, it depends where you want to document this.
Cullen: how would this work?
Martin: find the right venue to get this done.
Ekr: 4 people want this, and they will figure out what they want.
Cullen: what if they do not want to come to this organization.
Dom: we could use company prefixes
Cullen: but we want other people to also be able to use the same constraints.
... I am okay with setting the bar at "specification needed", but someone should make this proposal.
Ekr: registries are needed(?) when there is a volume of people wanting to reserve a code point or implement something
Cullen: we know that we do not want to have collision in the namespace
Dom: there is a clear IPR process for W3C, but not the same outside it.
... let's just do what we have been doing for other W3C processes, for example, attributes, etc. We could just follow that, i.e., make it specification needed.
Harald: Perhaps we need to go through the process of: do we need a registry? is IANA appropriate?
Dom: it was not clear if the registry would have a complete list or a list of what needs to be done to be compliant
Ekr: the good thing about using a registry is that it lists all the attributes in one place, and you can see the links to the documents that define them.
Martin: we have several documents that use the constraints, e.g., depth, screen capture, etc
DanB: I wrote a draft that will have all this information.
Ekr: I would like the chairs to resolve this.
Harald: show of hands for 1) registry or no registry. 2) IANA or not IANA.
DanB: the registry document says "specification needed", not necessarily a W3C Recommendation.
Ekr: w3c recommendation is a high bar.
<jesup> I agree with ekr
<sherwinsim> No registry
Harald: asking the question (1) registry or not registry: 8 vs 5
<jib> I voted we do not need registry
Harald: we have to decide, we don't seem to have consensus.
Dom: there is an alternative: the chairs can make a decision, and those who feel strongly can object
... let's take the formal vote
Cullen: we might have to get a Yes/No question, so the question would have to be framed that way
Harald: so a PR will be created that will add the registry and the vote will be to accept the PR
[ "Timed Media WG discussion" email from Dom]
Dom: we could decide to move some of our work, all of our work, or none of our stuff. Related to media capture, media capture from element, screen capture. etc
Dom: are there any opinions?
Mathieu: I see potential for that group to take on work related to recording, but it's not really clear if something from here should move.
Martin: capture from element could move
Cullen: Cisco cannot join Timed Media WG because of IPR issues, so I would recommend not moving Screen Capture to that group.
Dom: what would happen to the work if it is started there, if there is interest to do it here ...
Harald: is there anyone here who expects to be working in Timed Media WG?
[--- no one raised hands ---]
<jib> what was the result of that hand-raising?
<varun> @jib: formal vote IIRC.
[Harald presenting "WebRTC NV - where do we stand?" slides]
Harald: this is the new charter text, during the recharter process.
... what comes next: predict the stumbling blocks so that we do not stumble.
... 1.0 and NV are going to interoperate.
... SDP is not required. However, it must expose everything for SDP to be created.
Ekr: I don't think that this is really true, i.e., being able to do SDP via NV.
Harald: what are the contentious issues?
Bernard: non-mux, parallel forking.
Cullen: is this something we would need an API or what would be the resolution?
Bernard: it was tricky, not necessarily contentious.
... modes of SVC.
Ekr: PERC
... people may ask for pluggable congestion control for
datachannels
... so something to think about are application replaceable
components, since the design of ORTC looks like something that
can be glued together.
Harald: codec version control?
Ekr: what is it?
... we are under the impression that we will enable vc-x when we are ready
... hence, I believe the expectation is to have pluggable codecs.
Martin: this is similar to fonts
Justin: fonts are not executable...
Martin: font files are turing complete.
Peter: getStats() might be contentious
Ekr: the amount of control that you can have on the unsecured client side - that needs to be re-considered?
Harald: we want to do simple additions, but should we dream bigger?
<jesup> FYI: pluggable codecs are quite doable. Hardest is dealing with negotiation aspects and config and packetization, but the pluggable codec could provide a JS module that handles some of this to the browser, or we just define some APIs at the plug interface for handling that. Not every possible codec is doable, but most should be.
Ekr: we have done a lot of work for v1.0, I believe v1.1 should be a relatively quick turnaround.
Justin: v1.1 is not the end, there will be v1.2, v1.5, ...
Ekr: reason to do NV is to get rid of things that were problematic to do in v1.0.
AdamB: perhaps, yes. I don't see it on the screen
<jesup> I have a warning about low bandwidth/cpu/etc instead of video
<adambe> same here
<adambe> it tries to reload now and then
<jib> same here
DanB: people have requested extensions. Some of the extensions are fundamental. When we started WebRTC it was for real-time communication. Perhaps we need to revisit this from those use-cases
Harald: we got a use-case to
playback things faster than real-time, for media processing.
However, we should be cognizant about the goals of the
WG.
... "We (chairs) need to make a plan to make a plan to execute
a plan".
... running ahead of time, doing slides from 15xx session
[no slides]
<vivien> https://github.com/w3c/mediacapture-fromelement
[discussing #11]
Martin: we were running a timer, when the item would get the canvas. This made the graphics people unhappy, because it was doing extra draws.
Harald: if nothing happens in the canvas, what is the timecode on the frame when something happens.
Martin: we get the timecode when we get the data. i.e., we have a flag on the canvas. when the flag changes, it pulls the frame and gets that timecode.
Mathieu: the screencapture does the same thing, so there shouldn't be an issue.
Martin: exactly, except screencapture runs at the framerate.
... there is an interesting bit here: when the origin is not clean, it will not capture anything.
Mathieu: what happens when there are multiple draws in a single render loop.
Martin: in JS code, you have several draws or a promise that does micro tasks. But this thing only calls when it is in a steady state.
... if you want something more complex then you need to manually do frame capture.
<jesup> Capturing my comment: This is better than the original spec. Also, it's similar to a screencapture - you don't want to generate no-difference frames
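[Editor's note: a pure-JS model of the #11 behavior Martin describes: a frame is only produced when the canvas has been marked dirty by a draw, and the frame's timecode is taken at the moment the data is pulled. The state object and function name are illustrative, not part of the spec.]

```javascript
// Sketch: pull a frame only if the canvas was drawn to since the last
// pull; otherwise produce nothing (no no-difference frames).
function pullFrameIfDirty(canvasState, nowMs) {
  if (!canvasState.dirty) return null;  // nothing happened: no frame
  canvasState.dirty = false;            // back to steady state
  return { timecode: nowMs };           // timecode taken when data is pulled
}
```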
[discussing #12]
Martin: in the main spec, when capturing from the video element, the tracks would appear or disappear based on when they are played.
Martin: i.e., if you had
multiple audio tracks, you'd get a copy of those tracks and not
the mixture of those.
... explaining the diff...
DanB: if the stream is paused, would there be a pause?
Martin: yes, if there are 10s of mute, then there is no data for that. However, there is a mute flag which would mean that there is no data for that period.
... explaining what happens when you seek into those mute periods.
Mathieu: then there is a way to plug mediastreams into peerconnection.
<burn> I understood martin to say that 10s of mute will turn into 10s of silence/black/etc. in the capture
<varun> @DanB: was that due to the flag or actual silence data/black screen.
<varun> correct my notes if needed.
<burn> it would match the user experience of n seconds of silence/black
Martin: I would like someone to have a look at this. Andreas is implementing most of this.
[Jan-Ivar presenting "fixing getStats - object and maplike" slides]
Jan-Ivar: dictionaries have copy semantics, so what you get back is the base class and not the deeper object
... the second issue is that the getter is legacy tech; the recommendation is to use maplike, which gives you keys that are useful for looking up other stuff
... this change would be breaking, but now would be a good time to make the change
Cullen: which version of webidl contains maplike?
Jan-Ivar: es6 has maplike. it's already in webidl.
Harald: compatibility with getter interface is easy. just define a legacy getter.
Jan-Ivar: agreed
varun: how much effort would this take for the browsers to support this? Don't want incompatibility.
Jan-Ivar: we already have binding code. in PR 301 we have support for it. For FF should be easy
<vivien> maplike in the webidl editors' draft
Harald: don't know about maplike in Blink
... effort is small. does the group like this? specifically users of getStats. What do you think of this API change?
varun: we already have two code branches, so if you can converge that would help. maplike is fine if all support it.
Mathieu: definitely seems easier for enumeration
Cullen: this is a problem area for us too (the changing implementations)
Martin: we accept patches
Jan-Ivar: adapter.js could polyfill better here
Justin: hopefully adapter.js will get smaller . . .
Harald: looks like we should accept maplike. see 3 approaches on webidl: 1. webidl should fix dictionaries. 2. replace dictionary with interface 3. the proposal from jib, to define an idl type that is the union of all idl types
Jan-Ivar: one webidl expert said the
union type was a good idea
... could be done with a typedef
Harald: doesn't solve problem. can't have one-line typedef that says this dictionary and all sub-dictionaries. that union would currently have something like 30 units.
Cullen: how would it work to add the 31st?
Harald: yes, that wouldn't be included automatically
Jan-Ivar: being able to return a union of all the classes sounds like a good thing.
Harald: suggest going with the object solution but note that we would like WebIDL support for the descendant dictionary union
Jan-Ivar: that's what 301 does
Harald: everyone okay with object?
<jesup> I'm in favor of this PR (no surprise)
(no objections)
ErikL: what do we want to do for tpac?
Justin: let's figure out who has what work to do. peter do you have anything?
Peter: I have some slides showing a possible path from where we are to a model where you can control everything without peerconnection. could be input for 1.1.
Cullen: what will happen as these issues resolve? what happens after last call comments are resolved
Dom: we will ask the WG to review it. Hopefully this will happen within a few weeks.
Harald: hope for a gUM revision that is "finished" that the group can review in a few weeks and go to CR before TPAC
Dom: for those of you with google docs, spreadsheets, etc please send a pdf of them for archives
[Peter presenting "WebRTC 1.1 at 2015 f2f/TPAC- future steps after 1.0" slides]
[Peter shows slides 1 - 3]
Peter: iceTransport would have gather and start methods
Ekr: how do you bind multiple ice transports in a single context
Peter: on the last slide
Peter: Then for dtlsTransport, there are start and stop methods
Ekr: will you show params for these later?
Peter: maybe
Harald: on iceTransport, where is pool size?
Mathieu: why can't you just start gathering and make that the pool size
Cullen: sounds like a great config option
Harald: goal of option in 1.0 is to make a pool that transports can pull from.
Peter: ORTC has a way to do this
Ekr: let's say I have an ICE transport and put a dtls transport on top of it. what's the behavior we can expect?
Peter: dtlsTransport would know the state of ICE
Justin: start doesn't mean send, it just means start working
Ekr: is the model such that the dependencies between them are automatically managed?
Bernard: yes
Peter: rtpSender - setTransport handles bundle/unbundle
Mathieu: what about rtpTransceiver?
Peter: not needed here because
there is no SDP
... send starts the sender. for RTPReceiver, you have a
receive() method
Harald: are these always connected to a transport but you can change it?
Peter: yes
Cullen: if i have transports over different interfaces, can I move a stream with no interruption?
Bernard: should be able to
Ekr: would you do it as two separate transports?
Bernard: if ICE is as capable as we want, no?
Peter: RTP example. gather ice candidates, start transports, the send and receive
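The flow Peter describes can be sketched as follows. Since these ORTC-style objects only exist in a browser, minimal stub classes stand in for them here; the method names (gather, start, send, receive, setTransport) follow the slides, while the constructor parameters and state strings are invented:

```javascript
// Minimal stubs illustrating the ORTC-style layering from the slides.
// Real objects would do ICE/DTLS/RTP; these only track the wiring.
class IceTransport {
  constructor() { this.state = "new"; }
  gather() { this.state = "gathering"; }           // begin candidate gathering
  start(remoteParams) { this.state = "checking"; } // begin connectivity checks
}

class DtlsTransport {
  constructor(iceTransport) { this.transport = iceTransport; this.state = "new"; }
  start(remoteParams) { this.state = "connecting"; } // DTLS handshake over ICE
}

class RtpSender {
  constructor(track, transport) { this.track = track; this.transport = transport; }
  setTransport(transport) { this.transport = transport; } // e.g. bundle/unbundle
  send(params) { this.sending = true; }
}

class RtpReceiver {
  constructor(transport) { this.transport = transport; }
  receive(params) { this.receiving = true; }
}

// Wiring: gather candidates, start transports, then send and receive.
const ice = new IceTransport();
ice.gather();
ice.start({ /* remote ICE params from signaling */ });

const dtls = new DtlsTransport(ice);
dtls.start({ /* remote DTLS params */ });

const sender = new RtpSender("audioTrack", dtls);
sender.send({ /* negotiated send params */ });

const receiver = new RtpReceiver(dtls);
receiver.receive({ /* expected receive params */ });
```

The point of the layering is that each object only needs the one below it, which is also why "start doesn't mean send" and the dependencies can be managed automatically.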
Justin: something like 10 methods to provide all this control
Jan-Ivar: don't see promises here. was that intentional?
Peter: yes
Ekr: what must be
synchronous?
... seems like some state info is missing
Justin: state info is already (still) here from 1.0
Peter: sctpTransport + DataChannel - a dataTransport is an SCTPTransport. Need to be able to get and signal SctpCapabilities. Can have more than one association.
adamb: regarding promises, how does a track get returned for receiver.receive ?
Justin: you don't get media until O/A finishes, but track exists from addTrack as today
Jan-Ivar: how do you learn of failures?
Bernard: they all have onerror
Peter: dataChannel example.
create and start ice, dtls, sctp. datachannel itself just like
today
... (very busy dictionary slide) the primary change is more info on ice
candidates and codecs
AdamB: why can't current
dictionaries look like this?
... (today)
Peter: i first tried to create
this, but I thought it might conflict with current object use
for iceCandidate
... if it's safe to add these, then I'm happy to
Harald: could just make all of them readonly, existing ones too.
Martin: making these interfaces
would be terrible.
... I have a PR that will remove the interface and replace it
with the dictionary. We could have rules for where we look for
this info.
Peter: I will make a PR.
... (and a few other things slide) there would be a getStats
and onerror on everything. Need RTCDTMFSender, something for
identity, and a way to say that iceTransports belong together
and need to be checked in a certain order.
Martin: do you add them or does it act as a factory?
Ekr: you should be pacing your gathering and checks
Justin: this allows grouping for different sessions as needed.
Harald: two different ways to do this. (missed these)
Peter: with an optional index you can decouple creation and checking order
Justin: this provides freezing control but not necessarily global pacing. also provides grouping for bandwidth estimation purposes.
Cullen: if global pacing is useful it needs to apply across the browser and not just here
Ekr: yes and no. we don't do it that way. we pace individual PCs and then have a global circuit breaker
Justin: global pacing the browser needs to enforce overall. this API provides local context and control.
Jan-Ivar: you had signaling in your example, is there an onnegotiationneeded?
Justin: up to the app to do itself if/when it wants to.
Martin: you could make your own SDP and O/A if you cared to
Cullen: might need info to negotiate what codec details need to be set
Peter: codec capability info
contains all of this.
... there is a preferredPayloadType to make it easier to figure
out which payload type to choose
Bernard: very useful when you have capabilities and not settings to exchange. when you call receive you need to know what payload type the other side is going to choose.
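A hedged sketch of how capabilities plus preferredPayloadType might be used: intersect the two sides' codec capabilities and take the receiver's preferred payload type, so the sender knows what to put on the wire. The capability shape and the negotiate helper are assumptions, not API:

```javascript
// Illustrative codec capabilities, as each side might report them.
const localCaps = [
  { name: "opus", clockRate: 48000, preferredPayloadType: 111 },
  { name: "PCMU", clockRate: 8000,  preferredPayloadType: 0 },
];
const remoteCaps = [
  { name: "opus", clockRate: 48000, preferredPayloadType: 109 },
];

// Intersect by codec name/clock rate; keep the local (receiving) side's
// preferred payload type as the one the sender should use.
function negotiate(local, remote) {
  return local
    .filter(l => remote.some(r => r.name === l.name && r.clockRate === l.clockRate))
    .map(l => ({ name: l.name, clockRate: l.clockRate, payloadType: l.preferredPayloadType }));
}

const codecs = negotiate(localCaps, remoteCaps);
console.log(codecs); // [{ name: "opus", clockRate: 48000, payloadType: 111 }]
```

This is Bernard's point: with only capabilities exchanged, receive() still needs a concrete payload type, and preferredPayloadType gives the app an easy way to pick one.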
Martin: this dictionary is very
close to the settings object structure. why don't you just call
this payloadType so you can pass it in to the setter.
... ssrc?
Peter: if you have mid you don't need ssrc
Ekr: does this support the combinatorial explosion of profiles, fec parameters, etc.?
Peter: there is no good way to express that
Martin: implementations will support listing some subset of capabilities down the road. there are many variations of possible capabilities
Ekr: what are the defaults?
Peter: don't know
Ekr: are these dictionaries special in some way or can I synthesize them myself?
Martin: can you just change values before sending it to the browser?
Peter: if the values are consistent with the capabilities, then that's fine
Ekr: if they were interfaces and
constructed there would be a disincentive to mess with the
values. We will see apps that will hardcode h.264 and vp8 in
the list of codec capabilities.
... this would be harder with interfaces.
Jan-Ivar: since you have onerror,
instead of having a method that has an interface that takes a
bunch of attributes, just have a bunch of attributes that are
read at the end of the event loop
... we didn't have onerror before, so it's the presence of that
handler that would make this feasible.
Harald: we are at the end of our session. We resolved some things and are going to encourage people to work hard over the coming weeks, not months. Chairs will take responsibility for moving the registry decision onward.
Stefan: do you all have time to put into all of this over the coming weeks? It seems that there is a short list of people who have to do all this. Martin, Justin, Peter in person.
Peter: No problem.
Justin: is there a list of which areas people need help with so others can volunteer to help?
Martin: yes, a list of important items with who's responsible for each would be really helpful.
Justin: is Travis still busted?
Dom: I fixed the link checker bug, but I'm working on something less brittle.
Ekr: can you restart Travis?
Dom: yes
... I will look into this. Ignore Travis for the moment.
... don't forget TPAC registration. Following our rechartering, you (your company) have
to rejoin the group if you haven't already.
Stefan: Thanks to Microsoft for hosting.
[meeting adjourned]