WebRTC Teleconference

03 Sep 2013

See also: IRC log


+1.407.421.aaaa, Dan_Burnett, +49.441.6.aabb, +, tuexen, +1.403.244.aadd, stefanh, +, +1.858.651.aaff, fluffy, +1.940.735.aagg, christer, Jim_Barnett, +1.630.423.aaii, +1.831.426.aajj, +, +61.2.809.0.aall, silvia, +358.942.72aamm, +1.561.923.aann, +44.190.881.aaoo, Dan_Druta, +, +1.650.275.aaqq, +1.831.426.aarr, hta1, matthew, +1.425.893.aatt, jesup, ekr, +1.908.541.aauu, +1.908.559.aavv, JeromeMarcon +1.267.934.aaxx, JeromeMarcon, pthatcher, Dini, dom, Matthew_Kaufmann, Martin_Thomson, jib, adam, [IPcaller]
adambe, hta1


<stefanh> agenda proposal: http://lists.w3.org/Archives/Public/public-webrtc/2013Sep/0005.html

<gmandyam> Stefan, Giri Mandyam present on phone and IRC as an observer

<stefanh> scribenick: adambe

stefanh: I posted a link to the slides
... we should go through the minutes from the last meeting
... (walks us through the agenda)

<stefanh> minutes from last meeting: http://lists.w3.org/Archives/Public/public-webrtc/2013Feb/0026.html

stefanh: last meeting minutes are from February in Boston
... can we approve the minutes?
... ok, they are approved
... next thing, implications of the IETF decisions
... juberti will talk us through the unified plan

<ekr> He doesn't sound any more dalek-like than usual

stefanh: I was asked to talk about what will be the next version of JSEP
... do we have any slides?

juberti: no slides (short notice)
... it has been pointed out that we need normative behavior for createOffer/Answer/setLocal...
... I've talked to fluffy about it
... and we will have a new version in two weeks
... (of the JSEP draft)

<matthew> my SIP-to-PSTN call was terrible, but SIP directly sounds great

<matthew> and martin, i think you associated my [IPcaller] with you

hta1: do you see any other blockers?
... there are some issues that need to be clarified in the unified plan before they can go into JSEP
... these issues are minor
... and shouldn't block a release of a useful JSEP draft

ekr: (scribe didn't get this)

<ekr> adambe: I was saying we need some way to indicate which flows are first class and which ones were bundle only


<jesup> still here

<matthew> no, martin was first

stefanh: juberti, do you see any API changes as a result of this work

juberti: not really

<matthew> i was on via SIP-to-PSTN with a caller id of 831-426-xxxx, then i dropped and reconnected with SIP direct after martin

juberti: we're nailing down unspecified stuff

<matthew> if only there were some sort of identifier that an IP caller could be known by

stefanh: no more questions?

hta1: we're looking forward to a new draft

fluffy: nothing further to add
... we have been focusing on the big stuff
... roll-back, rehydration will not be finished in the upcoming release

stefanh: next thing that was decided on the IETF meeting is that SDES is out
... no changes to the API

juberti: one consequence: what do we need to expose regarding the certificate?

ekr: I can take an action to come up with a proposal

<ekr> action, ekr to come up with a proposal for access to DTLS meta-data

<dom> ACTION: eric to come up with a proposal for DTLS meta-data [recorded in http://www.w3.org/2013/09/03-webrtc-minutes.html#action02]

<trackbot> Created ACTION-86 - Come up with a proposal for dtls meta-data [on Eric Rescorla - due 2013-09-10].

<ekr> dom: thanks

hta1: stefanh, it's your turn to talk about transport related APIs

<stefanh> http://lists.w3.org/Archives/Public/public-webrtc/2013Sep/att-0006/Transport_related_API_needs.pdf

stefanh: yes, I'll post a link to the slides
... we've set up a wiki page called "transport control" where we gather information
... example of control: pause/resume
... priority

<matthew> http://www.w3.org/2011/04/webrtc/wiki/Transport_Control

<ekr> matthew: thanks

stefanh: (is going through the list on the wiki)
... people want to set bitrate/bandwidth
... last thing is to disable bundle

juberti: I don't see anything about simulcast

stefanh: that's a good input
... we are now allowing two tracks in the same stream that use the same source
... the app can configure the tracks to have different resolutions

juberti: that won't work for layered codecs
... the tracks will get different ids

stefanh: that's good input
... please send this input to the list

jesup: we need some relation between the tracks

<hta1> Martin: What happens if we want to control everything that's part of the SDP? How do we rule them out of scope or bring them in scope if you want them to be inside?

<dom> scribenick: hta1

<martin> I'm concerned that some of the things in the SDP offer are not reflective of the capabilities of the browser. If we want to permit some of these alterations, then it's going to be difficult to discover browser capabilities just through an SDP offer.

<matthew> i think there's two problems. 1) why is ptime (for instance) not on this list (how did it get to be out of scope) and 2) how for things that can be set to values other than what is in the offer can we know what are valid values? (example is if i create an offer and it has ptime 30, i have no info about whether or not ptime 5 is valid)

<matthew> (valid as input for setlocaldescription)

cullen: need to consider what we need to manipulate, and providing API surfaces to avoid SDP munging if possible.

<juberti> agreed

cullen: example: ptime - opus has the minptime and maxptime parameters, which could be managed by sdp munging.
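To make the munging concrete: a minimal sketch of what an app has to do today to bound Opus ptime between createOffer() and setLocalDescription(). The helper name and the sample SDP are hypothetical; it assumes Opus is present with an existing a=fmtp line, and it illustrates exactly the fragility cullen is arguing an API surface should avoid.

```javascript
// Hypothetical helper: append minptime/maxptime to the Opus fmtp line of
// an SDP blob. Fragile by design -- it assumes Opus is negotiated and
// already has an a=fmtp line for its payload type.
function setOpusPtimeBounds(sdp, minPtime, maxPtime) {
  // Find the Opus payload type from its rtpmap line,
  // e.g. "a=rtpmap:111 opus/48000/2"
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000/i);
  if (!rtpmap) return sdp; // no Opus: leave the SDP untouched
  const pt = rtpmap[1];
  const fmtpRe = new RegExp(`(a=fmtp:${pt} [^\r\n]*)`);
  return sdp.replace(fmtpRe, `$1;minptime=${minPtime};maxptime=${maxPtime}`);
}
```

An app would call this on offer.sdp before passing the (modified) offer to setLocalDescription -- which is precisely the string surgery a dedicated API would make unnecessary.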

<martin> ptime is interesting for large BDP, because it might offer some latency benefits

hta: suggest starting with the bandwidth issue.

jesup: bandwidth bug has had no action in ages, because we have had other issues.
... initial bandwidth and target bandwidth are different issues.

juberti: starting bandwidth is important for fast start.

cullen: fast ramp-up will get pushback from the transport people at ietf.

juberti: starting and minimum are other interesting bandwidth - quality below minimum may want to disconnect

<stefanh> +1 to juberti

<martin> justin, do you regard minimum as a "if you go below X, don't bother" sort of setting?

<matthew> i believe comment 22 now applies. having the application be in control of how bandwidth is allocated to its streams is a great idea.

randell: application guesses are guaranteed to be imperfect. there is no great solution.

<martin> jesup, disagree about the last mile being the determining factor

<Cow_woC> (Sorry for joining late, how far down the agenda are we? Did we reach "V2 API discussions: How to handle"?)

juberti: we can't prevent people from writing bad apps. they will set the values to what they believe they can use.
... what we should not do for 1.0 is to manage the bandwidth up or down based on packet loss. This should be within the runtime.

<Cow_woC> silvia: Thank you.

jesup: api should allow to set bandwidth allocation between flows, current bandwidth should be exposed, but not to exceed available.

juberti: basically agree.

ekr: we should not allow the application to generate more bandwidth than it would normally be entitled to.

<Cow_woC> hta1: What about being able to specify (what we believe) to be the minimum bandwidth usage? Meaning, I am starting a 1080p video chat. I don't want the video to start at 50kb/s because the users will get blurry/choppy video. Alternatively, find a way to get the vendors to scale up bandwidth usage *much* faster.

<martin> it would be bad if an app could start sending at 100Mbps

<jesup> hta1: Right, you can only reallocate bits among flows (or reduce total bits), not increase the number of bits (in my proposal)

ekr: you shouldn't be able to set initial bandwidth so that you can generate high traffic before we know that it arrives.

<martin> jesup's bug is https://www.w3.org/Bugs/Public/show_bug.cgi?id=15861

<matthew> martin: that depends on how you define "bad". there's whole communities of folks who'd love such a capability.

<Cow_woC> Question: How do we get 1080p video chat to start smooth/sharp within 3 seconds as opposed to 1 minute which it is now?

<matthew> cow_woc: run 1080p worth of data traffic for 5 minutes before you want to make a call

<jesup> Cow_woC: that's an issue of the congestion algorithm: it can be more aggressive at the start of a call

<matthew> (to the same destination you'll be calling or called from, of course)

<Cow_woC> matthew: That's a non-solution as far as I'm concerned :)

Cow_woC: wishing for it does not make it possible, unfortunately.

<jesup> You also can do a *rough* packet-train guesstimate to help guide the starting rate.

<matthew> if we care about that shared medium, we should be TCP-friendly.

<Cow_woC> jesup: Fine, how do I mandate that? Right now I have no way to guarantee that my users will get a good experience nor any guarantee that the problem will ever be fixed for any given vendor. I'm looking for the specification to mandate something here to ensure a decent user experience.

<matthew> and TCP-friendly isn't compatible with an international 4k video call starting instantly at full resolution

juberti: might want to let the initial BW control the initial bandwidth estimate.

<ekr> Cow_woC: the problem is that this is not compatible with the stability of the Internet

<jesup> Cow_woC: We can't mandate the internet (or a congestion algorithm) will produce a specific result

<jesup> Please join RMCAT :-)

<Cow_woC> ekr, matthew: I don't want to break the internet :) but the question then is... how come I can download a 1GB file from Dropbox at crazy speeds? The download speed ramps up almost instantaneously. If HTTP uploads/downloads can do that, why can't WebRTC?

<ekr> It doesn't ramp up almost instantaneously.

<ekr> It actually takes a number of RTTs.

<martin> hta: we should treat application input as "what the app desires", but there needs to be a hard limit that is determined by our algorithms, and that's what RMCAT is for

<jesup> Cow_woC: and the current ramp-up is a lot slower than it *needs* to be

<Cow_woC> ekr: It's on the order of 3 seconds, not 5 minutes like matthew mentioned... Clearly WebRTC has a problem.

<ekr> Cow_woC: moreover, rate control on video streams has to have a lot more hysteresis than data

<martin> jesup, whether ramp-up is too slow or not is probably subject to conjecture

<ekr> because of the way that video works

<Cow_woC> jesup: Fair enough. Is there anything we can do on the specification level to ensure all implementations are more aggressive on this end?

<ekr> Cow

<ekr> _woC: yes, join the RMCAT WG

<jesup> Cow_woC: File bugs with Chrome and Mozilla, and join RMCAT

<matthew> i didn't say it would take 5 minutes.. but i did say that 5 minutes of stream would be sufficient :)

cullen: sending for 10 seconds at full HD bandwidth is unacceptable on a shared network.

<ekr> cullen++

<Cow_woC> matthew: In my experience, it is at least 1 minute.

<jesup> I *think* the current impl doesn't ramp as fast as it can at the start; after you've established an idea of the max rate you want to ramp slowly when going past that

cullen: ramp-up is one of the reasons why applications want to send early media.

<jesup> hta1: reasonable.

hta: proposal - set the bandwidth as a constraint on PeerConnection. It's simple, it gets us out the door, we can extend stuff later.
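Until such a constraint exists, the common workaround for capping bandwidth is again SDP munging: inserting a b=AS (kbps) line into each media section of the offer. A minimal sketch, assuming one c= line per m-section; the function name is hypothetical.

```javascript
// Sketch: cap send bandwidth by inserting a b=AS (kilobits/sec) line after
// each c= line of an SDP blob -- the munging workaround that a
// PeerConnection-level bandwidth constraint would replace.
function capBandwidth(sdp, kbps) {
  return sdp.replace(/(c=IN IP4 [^\r\n]+\r\n)/g, `$1b=AS:${kbps}\r\n`);
}
```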

<jesup> And the other layer of objects makes some sense, but may be more work to specify

<ekr> hta1: that works for me

<martin> stefanh: everyone wants max, but there are mixed views on min and initial values

<jesup> hta1: It is

stefanh: min bandwidth can also be useful, and simple.

justin: 1.0 with max sounds good. mixed views on whether initial is useful.

<jesup> hta1: It's meant to solve the general problem with multiple streams

topic switch: codecs

<juberti> what do you suggest that min bandwidth would do on a peerconnection?

<juberti> if the estimated bandwidth drops below the minimum, what happens?

cullen: about codecs - people have been adding codecs, which is obviously bogus. the valid operations are removing and reordering codecs.
... the question is if we need a specific API surface for this.

cullen: the people who say they need this have not been able to describe (to cullen) compelling use cases for this.
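For reference, a sketch of one of the two manipulations cullen calls valid -- removing a codec from an offer. The helper name and sample SDP are hypothetical; it drops the payload type from the m= line and strips its rtpmap/fmtp/rtcp-fb attribute lines.

```javascript
// Sketch: remove a codec (by name) from an SDP blob. Removing and
// reordering payload types on the m= line are the manipulations
// described as valid in the discussion; adding codecs is not.
function removeCodec(sdp, codecName) {
  const lines = sdp.split('\r\n');
  const re = new RegExp(`^a=rtpmap:(\\d+) ${codecName}/`, 'i');
  // Collect the payload type(s) registered for this codec name.
  const pts = lines.map(l => (l.match(re) || [])[1]).filter(Boolean);
  return lines
    // Drop the codec's attribute lines.
    .filter(l => !pts.some(pt =>
      l.startsWith(`a=rtpmap:${pt} `) ||
      l.startsWith(`a=fmtp:${pt} `) ||
      l.startsWith(`a=rtcp-fb:${pt} `)))
    // Drop its payload type from the m= line's format list.
    .map(l => {
      if (!l.startsWith('m=')) return l;
      const tok = l.split(' ');
      return tok.slice(0, 3)
        .concat(tok.slice(3).filter(t => !pts.includes(t)))
        .join(' ');
    })
    .join('\r\n');
}
```

Reordering is the same idea applied to the format list only: payload types earlier on the m= line are preferred.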

<ekr> Cow_woC: start speaking

<Cow_woC> Sorry guys, I'm new to this system.

<dom> Cow_woC, are you on the call?

<Cow_woC> If bandwidth drops below minimum, I propose triggering a callback

<Cow_woC> Sorry, only on IRC

OK, then you cannot participate in the verbal discussion.

<Cow_woC> I made a proposal a few weeks ago, asking for "fence conditions". You register min/max bandwidth and a callback gets invoked if this condition is violated. The callback then modifies the fence conditions and the process continues.

justin: case heard is that people can't do SWB so only want to offer WB or narrowband.
... use case is where there are browsers on both sides, running the same app, and app dev wants to control the codec

cullen: we may want to configure the session for real low latency.

justin: controlling music mode vs speech mode is a poster child.

<adambe> Cow_woC: are you able to join the call (find phone info in this mail http://lists.w3.org/Archives/Public/public-webrtc/2013Sep/0005.html)

martin: the stats API could be a useful place to expose what the congestion control algorithm currently thinks is available.

<fluffy> +1 Martin

<Cow_woC> adambe: Trying now.

<Cow_woC> adambe: (thank you!)

hta: sdp munging can't control Opus music / speech mode. We need an API surface to control it.

<jesup> hta1: we were planning on something like that

<martin> it sort of can: music mode can be achieved with a=max-ptime=5

justin: punting these features to 2.0 may be the simplest thing to do here.

<martin> But it's hard to know if doing that is possible...

justin: suggest punting all the codec control things to post-1.0

cullen: strong desire for controlling silence suppression.

justin, randell: we already have that.

cullen: reorder or remove codecs - don't do it in the 1.0 timeframe.

justin: agc is a topic that we need to consider. it's not really a transport control thingy.
... bundle, bandwidth should be global constraints on the peer connection.

<jesup> justin: agc, AEC, noise suppression are all more getUserMedia issues than peerconnection issues, IMHO

cullen: see a need for priority. should be a property on the mediastreamtrack.

justin: mst is the wrong place, since it doesn't really connect to the transport. We don't have a good place to set this property.

<silvia> if we can get bandwidth limitations into global constraints, that would be a good step forward

<matthew> comment 22 definitely applies now. there should be an object that reflects the transport separately from the object that represents the media, then we could talk about prioritization APIs on that transport object.

cullen: priority should reflect into DSCP levels.

justin: relative priority may be good enough for version 1.

<martin> matthew, yes. This applies to bandwidth, priority, and all sorts of stuff.

<Cow_woC> matthew: Agreed.

<matthew> the problem with the "v2 API discussion" is that there shouldn't be this v1 API

<matthew> so clearly the numbering is wrong

<Cow_woC> matthew: :)

stefanh: people want priorities on tracks.

<jesup> ekr++

ekr: priorities don't sound too hard. and they're useful.

dand: priorities sound like something app developers need in the first release.

<ekr> no objection

<stefanh> no objection!

<burn> publish!

hta: suggest making the current editor's draft the WD.

No objections noted.

stefanh presents v2 discussion - chairs reserve right to move discussion to wiki or separate list.

<matthew> the people working on the "v1" should have better self-control and keep working on it, instead of reading the other messages

cullen: no benefit in starting the v2 discussion on this mailing list.

stefanh: don't want to push people away

cullen: we have formed a new group in the W3C, we should be pushing them in that direction.

<matthew> the discussions might have gone even better if the people working on the current abomination would refrain from responding to those messages, too

<burn> the people working on "v2" should have better self-control and stop working on it, instead of generating lots of messages :)

ekr: use case discussions about what we are sad about in the current api are good to have.
... we can have endless debates on which dot version things should be in.
... triage between v1 and v2 features seems completely appropriate for this group.

<matthew> i don't care which list we have the conversations on, because we'll just follow the one that works for us

cow_woc: be careful not to make architectural decisions in 1.0 that prevent certain features in version 2.

<matthew> feels like we've already made those decisions

<matthew> for some value of "we"

cow_woc: if v1 makes too many promises you can't remove those promises in v2.

cullen: all proposals claim that v1 can be built on top of v2. I don't want to be back to discussing low level APIs.

<burn> cow_woc, I agree that v1 must not prevent a v2, and to that extent it's important to have joint discussions. Officially, there is likely to be no official v2 work in this group until v1 is closer to done, regardless of where proposals leading to v2 are developed.

<martin> hta, that's a very strange statement to make: old applications should be able to use new features ???

hta: worried about losing some properties, such as future-proofing of applications.

<ekr> martin: why is that strange?

<martin> because those features weren't requested, perhaps?

<ekr> For instance, I would like browsers to automatically use VP9 if it was added

<burn> btw Cow_woC, who are you? I can't hear well on the call.

<ekr> Or HEVC

<matthew> "v1" won't be done until there is a complete specification of what comes out and goes in at all the SDP API interfaces. the likelihood of that being complete in the next 2-3 years is nil. so why not start on the next one now?

<Cow_woC> burn: Sorry, calling on Skype.

Cow_woC: <not able to capture that argument>

<martin> if I write an application, I would have expected that the application continue to work as written, without surprising changes. New codecs are something that I might have left in the hands of the browser, in which case, no problem.

<martin> I was thinking about new features of a different nature than just codecs.

<ekr> martin: well, I would expect that for instance, BUNDLE would work if introduced later

<matthew> 2 minutes over. leaving.

<burn> Cow_woC, who are you? We need it for the minutes as well. (sorry if I missed it)

<martin> bye

<ekr> burn: cow_woc is Gili

<Cow_woC> hta1: I was trying to say that a proposal was made a few months ago that SDP should be an "opaque token" instead of the specification promising the use of SDP or explaining what's inside it. This prevents v2 from changing the meaning of SDP or removing it altogether.

stefanh: what I hear now is that we should limit discussion related to v2 and focus on finalizing version 1.

<Cow_woC> ekr: Yes, that's right.

<martin> ekr, I would hope that the browser wouldn't add bundle if I had been successfully using it without.

<ekr> martin: Hmm… That's not what I would have expected

<ekr> Would you be sad if it added AES-GCM?

Summary of Action Items

[NEW] ACTION: eric to come up with a proposal for DTLS meta-data [recorded in http://www.w3.org/2013/09/03-webrtc-minutes.html#action02]
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.137 (CVS log)
$Date: 2013-09-04 07:25:23 $