IRC log of webrtc on 2012-06-11

Timestamps are in UTC.

07:11:42 [RRSAgent]
RRSAgent has joined #webrtc
07:11:42 [RRSAgent]
logging to http://www.w3.org/2012/06/11-webrtc-irc
07:12:19 [juberti]
http://plus.google.com/hangouts/_/google.com/webrtc
07:12:34 [juberti]
that should work for non-google.com users
07:12:43 [martin]
martin has joined #webrtc
07:13:28 [hta]
yes, but there's nobody in that hangout.
07:13:51 [dom]
RRSAgent, draft minutes
07:13:51 [RRSAgent]
I have made the request to generate http://www.w3.org/2012/06/11-webrtc-minutes.html dom
07:15:27 [dom]
trackbot, start meeting
07:15:34 [trackbot]
RRSAgent, make logs world
07:15:36 [trackbot]
Zakim, this will be RTC
07:15:37 [trackbot]
Meeting: Web Real-Time Communications Working Group Teleconference
07:15:37 [trackbot]
Date: 11 June 2012
07:15:47 [dom]
Agenda: http://www.w3.org/2011/04/webrtc/wiki/June_11_2012#Agenda
07:16:00 [dom]
Meeting: Web Real-Time Communications Working Group F2F
07:16:44 [dom]
Chair: Harald_Alvestrand, Stefan_Hakansson
07:20:09 [dom]
Present: Harald_Alvestrand, Stefan_Hakansson, Magnus_Westerlund, Ted_Hardie, Tim_Terriberry , Anant_Narayanan, Dan_Burnett, Dan_Druta, Dominique_Hazael-Massieux, Cullen_Jennings (remote), Justin_Uberti (remote), Adam_Bergkvist, Jim_Barnett
07:20:42 [fluffy]
https://cisco.webex.com/cisco/e.php?AT=WMI&EventID=195618382&PW=eeaef7985d44&RT=MiMxMzA%3D
07:20:54 [dom]
(there are more people in the room, but I can't identify them visually; if you know any of the missing names, please type "Present+ Name")
07:23:03 [ekr]
I have a cable, magnus
07:23:36 [nstratford]
Present+ Neil_Stratford (remote)
07:23:42 [derf]
Present+ Stephan_Wenger
07:23:52 [GangLiang]
Present+ Gang_Liang (remote)
07:23:58 [salvatore]
Present+ Salvatore_Loreto (remote)
07:25:54 [derf]
Present+ Jonathan_Lennox
07:26:10 [dom]
Present+ EKR_(remote)
07:26:10 [derf]
Present+ Randell_Jesup
07:26:20 [derf]
Present+ Maire_Reavy
07:27:20 [dom]
s/Jesup/Jesup_(remote)/
07:27:30 [dom]
s/Reavy/Reavy_(remote)/
07:28:03 [derf]
Present+ Mary_Barnes
07:29:30 [anant]
juberti: joining the hangout via that URL always says "you are the first one to join"
07:29:38 [dom]
-> http://www.w3.org/2011/04/webrtc/wiki/File:WebRTC_interim-june-2012_PeerConnection_API.pdf JSEPified PeerConnection API (slides)
07:29:49 [dom]
s/juberti:/juberti,/
07:30:33 [DanRomascanu]
DanRomascanu has joined #webrtc
07:30:34 [dom]
Scribe: anant
07:31:29 [dom]
-> http://www.w3.org/2011/04/webrtc/track/actions/open WebRTC open action items
07:31:34 [dom]
ACTION-11?
07:31:34 [trackbot]
ACTION-11 -- Daniel Burnett to add Constraints API to API spec -- due 2012-01-12 -- OPEN
07:31:34 [trackbot]
http://www.w3.org/2011/04/webrtc/track/actions/11
07:31:38 [anant]
Welcome to the W3C interim! Coffee at 10:30, lunch at 12:30 CET
07:31:50 [anant]
Administrivia, going through action items
07:32:07 [anant]
stefanh_: Action 11 is ongoing, more to be discussed today
07:32:07 [dom]
ACTION-12?
07:32:12 [trackbot]
ACTION-12 -- Daniel Burnett to add Stats API to API spec -- due 2012-01-20 -- OPEN
07:32:12 [trackbot]
http://www.w3.org/2011/04/webrtc/track/actions/12
07:32:12 [anant]
stefanh_: ACTION 12
07:32:27 [derf]
Present+ Gonzalo_Camarillo
07:32:31 [Mauro]
Mauro has joined #webrtc
07:32:35 [dom]
ACTION-12: Harald's proposal http://lists.w3.org/Archives/Public/public-webrtc/2012Jun/0040.html
07:32:40 [trackbot]
ACTION-12 Add Stats API to API spec notes added
07:32:45 [anant]
harald: I put some comments, feedback welcome
07:32:52 [Martin_]
I'm not sure that the mic is working
07:33:17 [anant]
burn: capabilities was discussed in terms of the constraints. there needs to be a quick check on the list before we can put it in
07:33:56 [anant]
dom: I agree. there is the sysapps WG whose one of the charter items is to define how web applications should be given access to privileged APIs
07:34:20 [anant]
burn: it would be good to look at that group. I want to make sure we don't wait on a model from that group before being able to put capabilities in our document
07:34:27 [dom]
ACTION-16?
07:34:32 [trackbot]
ACTION-16 -- Eric Rescorla to propose how to tie into identity frameworks for comms partner verification -- due 2012-01-12 -- OPEN
07:34:32 [trackbot]
http://www.w3.org/2011/04/webrtc/track/actions/16
07:34:38 [anant]
stefanh_: next action, putting identity information
07:34:54 [anant]
ekr: I am a little behind on that, working on it now. I can have that by 2 weeks or so
07:35:01 [JonLennox]
JonLennox has joined #webrtc
07:35:01 [dom]
ACTION-16 due June 25
07:35:06 [trackbot]
ACTION-16 Propose how to tie into identity frameworks for comms partner verification due date now June 25
07:35:17 [anant]
stefanh_: next action (21), belongs to the media capture task force. to draft initial requirements
07:35:30 [anant]
stefanh_: propose to not discuss it further here
07:35:43 [anant]
fluffy: I had a question about that, what is the plan here?
07:36:00 [anant]
stefanh_: the plan is to move it from this tracker and opened into the mediacap tracker
07:36:02 [dom]
s/21/25
07:36:06 [dom]
ACTION-29?
07:36:06 [trackbot]
ACTION-29 -- Cullen Jennings to change all numeric constants to be enumerated strings -- due 2012-06-15 -- OPEN
07:36:06 [trackbot]
http://www.w3.org/2011/04/webrtc/track/actions/29
07:36:35 [anant]
fluffy: largely we've taken the first stab at moving most of the stuff, but we're still waiting for the respec2 move
07:36:53 [anant]
action 39: respec2 move
07:36:53 [trackbot]
Sorry, couldn't find user - 39
07:37:14 [anant]
burn: there is no need to move respec2. dom made changes to the gUM doc, but we can do the same changes to the webrtc doc
07:37:24 [anant]
burn: respec 3 = respec 2 + extra module
07:37:24 [dom]
ACTION-29: mostly done, waiting for new respec version
07:37:24 [trackbot]
ACTION-29 Change all numeric constants to be enumerated strings notes added
07:38:03 [dom]
+1 on moving WebRTC to respec v3
07:38:05 [anant]
burn: it worked well with gUM, so it's worth trying with the webrtc document
07:38:28 [anant]
stefanh_: action 42 is also mediacap
07:39:18 [anant]
stefanh_: shall we move on to the next part of the agenda? JSEPified PeerConnection API
07:39:24 [anant]
adambe has slides on the discussion
07:39:40 [dom]
-> http://www.w3.org/2011/04/webrtc/wiki/File:WebRTC_interim-june-2012_PeerConnection_API.pdf JSEPified PeerConnection API (slides)
07:40:42 [anant]
adambe: slides about our current PeerConnection API, I didn't really know in what form to do this. start with a simple example
07:41:03 [dom]
i/adambe:/Topic: JSEP in PeerConnection/
07:41:42 [anant]
adambe: the code in the example has never been run, so there could be issues. but here's my view of how this API could work right now
07:42:04 [anant]
as few lines as possible, the straightest line possible between a two-way audio/video call
07:42:38 [anant]
the overview shows each step in the process
07:42:50 [anant]
subsequent slides will go in detail for each part
07:43:49 [Spencer]
Spencer has joined #webrtc
07:44:16 [anant]
who you are calling is left to the web application. variable signalingChannel is a way to send data to the other side (somehow)
07:44:53 [anant]
create a peerconnection, a way to handle ice candidates as they come in, use signalingChannel to send the candidate from the event over to the other side
07:45:26 [anant]
handler for handling what happens when you get a stream from the other side. in this case, we simply show the video in a video element
07:45:32 [dom]
(as far as I can tell, the current editors draft doesn't allow "null" for the IceServers configuration param in PeerConnection param)
07:45:58 [anant]
part2: use getUserMedia to get access to the local media and create an offer or answer (based on role)
07:47:31 [anant]
part3: handling incoming messages sent through the signalingChannel. three types of messages: "offer", "answer" and "candidate"
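The three-message dispatch just described could be sketched roughly as below; the SessionDescription/IceCandidate stand-ins, the pc method names, and the message shape are assumptions for illustration, not the editor's draft:

```javascript
// Hypothetical sketch of part 3 of the example: dispatching the three
// message types ("offer", "answer", "candidate") arriving over the
// signaling channel. These stand-in constructors only wrap the string.
function SessionDescription(sdp) { this.sdp = sdp; }
function IceCandidate(candidateLine) { this.candidate = candidateLine; }

function handleSignalingMessage(pc, msg) {
  switch (msg.type) {
    case "offer":    // remote side initiated a session
    case "answer":   // remote side replied to our offer
      pc.setRemoteDescription(new SessionDescription(msg.sdp));
      break;
    case "candidate": // a trickled ICE candidate from the remote side
      pc.addIceCandidate(new IceCandidate(msg.candidate));
      break;
  }
}
```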
07:47:57 [nstratford]
nstratford has joined #webrtc
07:48:14 [anant]
anant: what is the SessionDescription constructor?
07:48:33 [anant]
adambe: that's how it is in the spec right now, it converts the string to an object
07:49:55 [dom]
anant: we can't add "SessionDescription" as a global object; we could either make it a sub-interface of PeerConnection, or avoid a constructor altogether by using a string
07:50:37 [fluffy]
I'm not sure that the mic in front of anant is working
07:50:42 [Martin_]
mic
07:50:44 [fluffy]
it was earlier but seems to be off now
07:51:57 [anant]
adambe: we have a constructor to go one way, and stringifier to go another way
07:52:09 [anant]
the object is the place to add those, and it's the placeholder
07:52:51 [nstratford]
Would it be possible for someone to relay slide numbers into the chat for those of us trying to follow along without video or slides in webex?
07:53:32 [stefanh_]
harald switched to the right slide in hang-out
07:53:33 [DiMartini]
this is the last slide
07:54:56 [anant]
dom: I don't know if there's another WebAPI that does that
07:55:15 [anant]
harald: get it out of the global namespace as an action item, and we can do the specific proposal
07:55:25 [anant]
adambe to consult with dom and make a proposal
07:55:45 [dom]
ACTION: adam to move SessionDescription and IceCandidate out of the global namespace
07:55:49 [juberti]
PeerConnectionSessionDescription
07:55:50 [trackbot]
Created ACTION-43 - Move SessionDescription and IceCandidate out of the global namespace [on Adam Bergkvist - due 2012-06-18].
07:56:18 [dom]
juberti, I think the idea was more to have PeerConnection.createSessionDescription() or something
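dom's factory-method alternative to a global constructor could look like the sketch below; the method name and the returned object shape are assumptions taken from the discussion, not from any spec text:

```javascript
// Hypothetical sketch: expose SessionDescription creation as a factory
// on PeerConnection instead of a global constructor, per dom's comment.
function PeerConnection() {}

// The "parsing" here is a trivial stand-in; a real browser would parse
// the SDP string into a structured object.
PeerConnection.prototype.createSessionDescription = function (sdpString) {
  return {
    sdp: sdpString,
    toString: function () { return this.sdp; } // stringifier to go back
  };
};
```

This keeps the type out of the global namespace while preserving adambe's constructor/stringifier round trip.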
07:56:28 [Magnus]
Magnus has joined #webrtc
07:56:52 [anant]
adambe: let's see what happens when someone calls. the start method takes a boolean in this example (true for outgoing, false for incoming)
07:59:34 [dom]
anant: on the receiving side, you're calling navigator.getUserMedia, and then createAnswer, without having received the offer
07:59:52 [dom]
adambe: the caller side has called start(true); the callee side hasn't done anything yet
08:00:46 [ekr]
q+ ekr
08:01:32 [ekr]
q+
08:01:57 [stefanh_]
can't hear ekr
08:01:58 [hta]
ekr, please unmute
08:02:15 [ekr]
cullen problem
08:03:13 [fluffy]
If this helps at all, I have a bit of a call flow at
08:03:14 [fluffy]
https://github.com/fluffy/webrtc-w3c/raw/master/call-simple.png
08:03:20 [fluffy]
it has some known problems
08:03:28 [dom]
ekr: what is the sequence of events on the answerer side?
08:03:49 [dom]
juberti: [answering, but not audibly]
08:04:24 [dom]
fluffy: I suspect this hasn't been carefully thought about
08:04:32 [anant]
fluffy: originally we didn't have the split and we didn't have setRemoteDescription
08:04:50 [anant]
juberti: do we still need pc.remotedescription as an argument to createAnswer?
08:05:09 [anant]
fluffy: what will drive this requirement is a rollback on update, but so far we don't have call flows that require this
08:05:51 [anant]
adambe: you do addstream, and when you call createAnswer, you use the added stream on the pc as the source of information, and then you pass a separate offer as another input
08:06:17 [anant]
adambe: it might make better sense to get both added streams and remote description from the pc
08:06:19 [ekr]
My general ask to the authors is that they need to provide a definitive answer to every such question.
08:06:36 [anant]
harald: the difference I remember is that with createAnswer(arg) you have not committed to accept from the other side
08:06:47 [Martin_]
There is a potential bug in the example code if the browser calls the getUserMedia callback inline because the remote description won't have been set when that happens.
08:06:49 [anant]
i.e. no guarantee to call setRemote after createAnswer
08:07:04 [anant]
ekr: I'd like to hear definitely whether or not I should call setRemoteDescr...
08:07:09 [anant]
juberti: you don't right now
08:07:34 [anant]
fluffy: this is clearly an issue where I haven't heard strong arguments one way or another. but obviously need to be defined
08:07:39 [anant]
lots of different alternatives
08:08:15 [anant]
if people have a preference for one or the other it would be great to hear
08:08:42 [anant]
ekr: one of the things I need to do is to introspect the offer. if one of the ways I do this is via setRemote, that's fine, but we want to try and limit side effects to calling setRemote
08:08:59 [fluffy]
Do people hear Justin & EKR OK ?
08:09:05 [JonLennox]
Yeah
08:09:06 [anant]
juberti: my expectation is that it doesn't really have a lot of side effects. unless there's both local & remote description nothing will happen
08:09:08 [anant]
fluffy: yeah
08:09:58 [anant]
juberti: why would we not automatically generate an answer when calling setRemoteDescr…? the answer lies in some outlying cases where there might be modifications required. but if we are always passing in the same description, then we should.
08:10:09 [anant]
action on juberti & cullen to deep dive on the possibilities here
08:10:09 [trackbot]
Sorry, couldn't find user - on
08:11:04 [dom]
ACTION: juberti to deep dive on setRemoteDescription with cullen
08:11:04 [trackbot]
Sorry, couldn't find user - juberti
08:11:10 [dom]
ACTION: justin to deep dive on setRemoteDescription with cullen
08:11:10 [trackbot]
Sorry, couldn't find user - justin
08:11:38 [dom]
ACTION: cullen to deep dive on setRemoteDescription with justin
08:11:39 [anant]
fluffy: what happens when you call setRemote? are any added stream callbacks called?
08:11:43 [trackbot]
Created ACTION-44 - Deep dive on setRemoteDescription with justin [on Cullen Jennings - due 2012-06-18].
08:11:55 [anant]
juberti: the stream stuff is pretty clear, when setRemote is called, it triggers the callback
08:12:23 [anant]
fluffy: we might need two different callbacks: onstreamproposed/onstreamaccepted?
08:12:49 [anant]
harald: is it even meaningful to reject a stream? if the other end is sending me a stream, I can either take the data, or cause the data to not be sent
08:13:06 [anant]
ekr: I don't reject the idea, but the spec talks quite a bit about what happens when a stream is permanently dead
08:13:41 [anant]
ekr: when I get told there's a stream on the other side, I can either accept, get media, or I've been told there is no more media
08:13:58 [anant]
stefanh_: but you do hear about all this stuff. you get events on the stream
08:14:19 [anant]
ekr: what about the event where I get a video with h.264 but I constrain to only vp8 via setLocal
08:14:54 [anant]
juberti: you plugged in setRemote, got streams with audio/video. I don't know if you have track event, but you get stream event, you would then listen for onended, and these things are not negotiated
08:15:00 [anant]
ekr: conversely, when you accept, what happens?
08:15:09 [anant]
juberti: we talked about having an event where media start arriving
08:15:21 [anant]
stefanh_: you get an unmuted for incoming data
08:16:01 [anant]
ekr: what's the UI? on an incoming call request, want to display audio/video, open a screen big enough, but don't want to do it until we know we can display it. but also before media actually arrives
08:16:36 [anant]
harald: tentatively, I think, when you get media stream (onstreamadded), and then you get media track events saying that the stream has ended because it cannot be delivered
08:17:05 [anant]
fluffy: using muted is bogus, because the other side may actually be muted so there's no data
08:17:31 [anant]
ekr: I tend to agree with cullen, but I can be convinced that it can be made to work
08:18:11 [anant]
juberti: we need to know one way or another if the negotiation completed or not
08:18:56 [anant]
ekr: that may imply the main thread has to block until the negotiation finished?
08:19:00 [anant]
so we'll need another event
08:19:11 [anant]
fluffy: so we'll add another event
08:19:17 [anant]
juberti: we need a state machine for streams
08:19:26 [ekr]
harald++
08:19:29 [yang]
yang has joined #webrtc
08:19:48 [anant]
harald: we need a state machine for streams and a state machine for tracks. audio track will be perfectly fine if we can't agree on the video codec.
08:20:06 [anant]
fluffy: the per track state. do you want us to do that as an extension of the tracks in this document or in the gUM document?
08:20:36 [anant]
dom: I think it belong in webrtc document
08:20:59 [anant]
harald: we have to have a state machine in the gUM document, but extend with more events and states in this document
08:21:06 [anant]
first we should figure out what events and states are
08:21:18 [anant]
fluffy: the high level use case is to see if negotiation failed or succeeded
08:21:34 [anant]
juberti: media arriving and muting should all be explicit
08:22:07 [anant]
adambe: as in the spec right now, things are very fluffy "when a stream has enough information to know it succeeded it should unmute"
08:22:48 [anant]
adambe: addtrack event is after setRemoteDescr… is called. is that really a stream in that case? it's something to be negotiated. I think we should have a mediastream only when the negotiation has completed
08:23:22 [anant]
fluffy: this is an alternative approach, but we can make it work. there are some corner cases that we need to handle
08:23:35 [anant]
fluffy: I think we can deal through all those issues.
08:23:54 [anant]
fluffy: in the media stream's object it won't be in the video or audio track list (for smellivision)
08:24:28 [anant]
ekr: what's going to happen when we add non video/audio tracks to streams?
08:24:49 [dom]
partial interface MediaStream { attribute Smell smelltracks; }
08:25:12 [anant]
fluffy: does it buy you anything to have separate "tracks" attribute? and not just video/audio track sets
08:25:21 [anant]
stefanh_: it was to align with the media elements spec
08:25:43 [anant]
fluffy: a track should tell you what the type is
08:26:16 [anant]
harald: we discussed it a lot, and there was no case where it was simpler to have one set of tracks than 2 sets of tracks. I don't want to reopen that discussion
08:26:36 [anant]
harald: ekr I think that if you want to have a dictionary instead of attributes, throw yourself at it
08:26:50 [Martin_]
var audiotracks = alltracks.filter(function(x) { return x.type === 'audio'; })
08:27:21 [dom]
+1 to harald
08:27:32 [stefanh_]
+1 to harald
08:27:37 [anant]
harald: I think it is actually extensible enough that we can add later when we need them, but adding earlier than needing them is not pleasant
08:27:56 [anant]
ekr: I'm not suggesting that we add them right now, as a programmer it's not ideal to have things named like this
08:28:23 [ekr]
I usually argue that the valid number of objects is: 0, 1, and infinite
08:28:26 [juberti]
audiotracks = pc.tracks("audio")
08:28:44 [ekr]
juberti++
08:29:14 [anant]
jim barnett: we need a JS API to introspect the offer, and you don't get a stream object until you accept
08:29:54 [dom]
Present+ Richard_Ejzak
08:30:06 [anant]
richard: you still need a way for the browser to respond to an offer based on what its capabilities are. my interpretation is that createAnswer is the way to do that
08:30:40 [anant]
the question is, if there is enough information to the JS for it to know if it can accept the answer or not. there's a little bit of a chicken and egg there
08:31:22 [Martin_]
there is a problem with the haptic track
08:31:23 [hta]
ekr, I wouldn't mind too much if we defined tracks as tracks { audio[], video[] } rather than audiotracks[], videotracks[]. That's what I was driving at with "dictionary".
08:31:34 [ekr]
hta: that would be preferable to me.
08:31:43 [Martin_]
+
08:31:50 [ekr]
how do we make it so
08:32:05 [juberti]
that works for me too
08:32:19 [hta]
ekr, type up the IDL you want and send it in.
08:32:25 [ekr]
Willdo.
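The two shapes being compared above could be sketched like this; tracks with a `kind` field and these helper names are assumptions for illustration, not the draft IDL:

```javascript
// Martin_'s shape: one flat track list, filtered on demand
// (juberti's pc.tracks("audio") is essentially the same operation).
function tracksByKind(allTracks, kind) {
  return allTracks.filter(function (t) { return t.kind === kind; });
}

// hta's shape: a dictionary grouping tracks by kind up front.
function groupTracks(allTracks) {
  var groups = { audio: [], video: [] };
  allTracks.forEach(function (t) {
    if (groups[t.kind]) groups[t.kind].push(t); // unknown kinds ignored
  });
  return groups;
}
```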
08:32:38 [dom]
s/hta:/hta,/
08:32:52 [timpanton]
timpanton has joined #webrtc
08:33:19 [anant]
ekr, fluffy: we cannot hear you
08:33:42 [fluffy]
sorry
08:33:58 [ekr]
We were discussing how awesome this AV technology is
08:34:54 [anant]
adambe: we can discuss this issue separately later
08:36:30 [dom]
ACTION-44: Adam can help with when streams should be dispatched
08:36:35 [trackbot]
ACTION-44 Deep dive on setRemoteDescription with justin notes added
08:36:42 [anant]
time for coffee! 15 minute break
08:39:17 [nstratford]
Hangouts still doesn't work for many of us - please don't turn off WebEx!
08:58:52 [anant__]
anant__ has joined #webrtc
08:59:16 [anant]
adambe: continuing example slide
09:00:34 [anant]
after setRemoteDescription is called, on the callee side, getUserMedia is called to select a local source
09:01:10 [anant]
if we're lucky, media will start flowing and the streams can be displayed
09:01:56 [anant]
ekr: there are a lot of events being thrown out of this API. only the ICE events fire in the example, what about the others?
09:02:22 [anant]
adambe: there's a subsequent slide that discusses the ICE events
09:02:40 [anant]
ekr: but there's also PeerConnection events. it's unclear to me when all these events fire
09:03:09 [dom]
(and do we need all these events?)
09:03:24 [anant]
adambe: I don't think it's clear when some of these events fire
09:03:38 [anant]
ekr: I can go in and enumerate when I think these events fire, do people want that?
09:03:47 [juberti]
ekr: perhaps you could mark up the sample i wrote up with the event times? https://docs.google.com/document/d/1L3lMBINuLn2S7EO4APWTUaPcUMV0Ke1q2zCzZaYYba8/edit
09:03:51 [anant]
fluffy: proposal, why don't ekr and I take as an action item to annotate when the events fire
09:04:02 [ekr]
ekr has joined #webrtc
09:04:26 [anant]
fluffy: sometimes ICE is per track and other times it's for the peerconnection, deliberately not cleared up in the spec yet
09:04:29 [dom]
ACTION: cullen to annotate the callflow diagram with events fired
09:04:29 [trackbot]
Created ACTION-45 - Annotate the callflow diagram with events fired [on Cullen Jennings - due 2012-06-18].
09:04:34 [juberti]
ekr: https://docs.google.com/document/d/1L3lMBINuLn2S7EO4APWTUaPcUMV0Ke1q2zCzZaYYba8/edit
09:04:41 [dom]
s/ekr:/ekr,/
09:04:56 [ekr]
juberti: thanks
09:05:22 [anant]
adambe: regarding events and examples, most of the events, we don't really need
09:05:41 [hta]
Tech note: I'm having someone look closely at the hangout. Could those who try to join post the email addresses here?
09:06:02 [anant]
fluffy: it's true you don't need them, but the customer for the spec is the browser implementers who need to know when to generate them
09:06:08 [anant]
hta: anant@mozilla.com (for hangout)
09:06:23 [anant]
adambe: I have suggested some discussion topics
09:06:43 [anant]
adambe: how to tell we have enough candidates? in the trickle case
09:07:48 [ekr]
nacl... awesome
09:08:33 [anant]
adambe: do we need to talk about that before we decide what the events are associated with?
09:08:53 [anant]
juberti: the "null" event may happen independently of a candidate
09:09:03 [anant]
fluffy: in nearly every case, I think that is the case
09:09:29 [anant]
ekr: if it is effectively passing a DOMString to the interface for each candidate, then null may be okay, but for JSON, what would you do
09:09:50 [anant]
juberti: if some ICE candidate is an object, then this would have the m-line for SDP
09:10:02 [anant]
and real objects can be null in the DOM
09:10:28 [anant]
ekr: there are fast path lines, there is the case where I have two interfaces but can't write to one of them
09:11:07 [Martin_]
harald needs to use a mic
09:11:31 [anant]
harald: for the implementation, we faked it by just doing a timeout. the browser should not decide when it's enough that we got all the candidates
09:12:08 [anant]
either we define what's enough, or we leave it to the application
09:12:13 [anant]
ekr: but the browser does know!
09:12:33 [anant]
fluffy: I think harald makes a good point, enough is not the right thing here, the question is when is ICE done? there are no more candidates
09:12:54 [anant]
ekr: the technical state in which every candidate fails or succeeds, happens about 40 seconds later...
09:12:58 [anant]
fluffy: that's the event we are talking about
09:13:39 [anant]
juberti: the application needs to have its own timeout, but if it gets told it has everything ahead of that, it's not going to wait that long
09:13:56 [anant]
fluffy: so we need an event that does this ICE session is done
09:15:20 [anant]
ekr: the relevant event here is: I have now received STUN answers or given up on every possible candidate. min time: 0, max time: ~40-60 seconds after
09:15:41 [JonLennox]
The max time depends on your rtt estimate to your stun/turn server, doesn't it?
09:15:45 [anant]
adambe: so we need something in between 1 candidate and 40 seconds later.
09:15:56 [anant]
ekr: how would you tell the browser this
09:16:19 [anant]
I would set a timer at the beginning for roughly 4-5 seconds, and the timer or callback firing would send out the offer
09:17:06 [anant]
juberti: I think it should be really short, if it takes more than 5 seconds for a candidate, you probably don't want to use that candidate
09:17:25 [fluffy]
I think the action here should be that we define what happens when the browser wishes to indicate that it is not expecting any more candidates to be produced - at that point it will indicate it by doing the following?
09:17:33 [anant]
ekr: one thing that might be relevant here, would it make sense for the application to control
09:20:23 [anant]
ekr: there are only two relevant events, I got 1 candidate, or I'm done.
09:20:46 [anant]
juberti: anything in between is hard to specify
09:20:59 [anant]
if you're using trickle candidates, that will always work better than timeouts
09:21:40 [anant]
fluffy: so null or events?
09:22:13 [anant]
fluffy: it seems to me that the code you want to write is different in the case where you get a real candidate than when you get this event, you might want two different callbacks
09:23:04 [anant]
juberti: there's gathering, connecting.. there's really not any linear state progression in ICE. you can't get away from an explicit callback
09:23:40 [JonLennox]
I think there are separate state machines for gathering and connecting
09:24:08 [anant]
adambe: to sum up, people seem to agree that there should be some information on the candidates so the app can decide when to send something off. is the 40 second event useful to anyone?
09:24:29 [Martin_]
JonLennox, I agree
09:24:41 [Martin_]
the new issue is whether there are different state machines for the multiple different flows that might be created
09:25:13 [anant]
juberti: the last event is when the browser has finished ICE and it's got all the candidates that it can get
09:25:17 [anant]
fluffy: that often happens in <50ms
09:25:21 [anant]
so we definitely need that event
09:25:42 [anant]
juberti: the middle event is not needed when you use trickle candidates, and it's essentially a timeout
09:26:07 [anant]
jonathan: one issue where the 40-second timeout is potentially interesting: trying to connect and nothing is working
09:26:22 [anant]
at the NULL you can switch from pinwheel to failure message
09:26:41 [anant]
juberti: that makes an even stronger argument for the "now I think it's a good time" callback
09:26:58 [anant]
stefanh_: I have a problem with this in-between event because it depends on the other side
09:27:41 [anant]
adambe: so there's an event for every candidate, and one final event
09:29:07 [JonLennox]
You can always try the host candidates — you never know, they might work.
09:29:25 [Martin_]
and there are always host candidates
09:29:56 [anant]
stefanh_: resolution is: there will be 1 event for each candidate, and one event for "no more candidates".
09:29:56 [dom]
PROPOSED RESOLUTION: there will be two kinds of events: one for each candidate (to allow trickling), one when the browser has exhausted all possibilities
09:30:18 [anant]
the middle event is left up to the application
09:30:23 [dom]
RESOLUTION: there will be two kinds of events: one for each candidate (to allow trickling), one when the browser has exhausted all possibilities
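The two-event model resolved above could be sketched as below; the send() function, message shapes, and the null-candidate convention for "no more candidates" are assumptions for illustration:

```javascript
// Sketch: one event per gathered candidate (trickled to the far side),
// plus one final event when gathering is exhausted, signalled here by a
// null candidate on the event object.
function makeIceCandidateHandler(send) {
  return function onIceCandidate(evt) {
    if (evt.candidate !== null) {
      send({ type: "candidate", candidate: evt.candidate }); // trickle it
    } else {
      send({ type: "end-of-candidates" }); // gathering is done
    }
  };
}
```

The application is then free to stop waiting early (its own timeout) or to react to the final event, matching the discussion that anything in between is left up to the application.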
09:30:27 [anant]
adambe: next topic is the renegotiation event
09:31:26 [dom]
[Shouldn't we make PeerConnection derive from EventTarget, to make it possible to use addEventListener/removeEventListener in addition to on... functions?]
09:31:32 [anant]
the idea here is to have a callback or an event that would help the developer to know when to actually create a new offer or answer
09:31:39 [anant]
+1 dom
09:32:04 [anant]
ekr: I don't know if this is needed, but I have a question, but will this be fired whenever addstream is called?
09:32:12 [anant]
fluffy: yes, that's the current thinking
09:32:47 [anant]
adambe: the name should probably be negotiationneeded instead of renegotiationneeded since it can happen the first time too.
09:33:01 [anant]
ekr: so if I add two streams, I get two of these callbacks?
09:33:12 [anant]
juberti: the callback only fires when it's actually needed
09:33:58 [anant]
adambe: this is quite a big topic, I don't know if we have enough to discuss it here. should the callback be triggered or not depending on the state...
09:34:33 [anant]
ekr: this is problematic in the naive implementation of gUM that calls addStream when it responds. now I call gUM twice if negotiationneeded is called twice
09:34:52 [anant]
juberti: only setLocalDescription changes the state, so calling createOffer without setting it won't call it
09:35:33 [anant]
adambe: if you do two addstreams in the same event loop iteration, it should only result in 1 event
09:35:59 [anant]
Martin_: but this would be in the gUM callback which almost certainly isn't in the same event loop iteration
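adambe's coalescing rule (two addStream calls in one iteration, one event) could be sketched as below; makeNegotiationNotifier and the injected schedule function are hypothetical names, with something like queueMicrotask being the natural scheduler in a real browser:

```javascript
// Sketch: coalesce any number of negotiation-affecting changes made in
// one event-loop iteration into a single negotiationneeded event.
// `schedule` is injected so the logic can be exercised without a browser.
function makeNegotiationNotifier(onNegotiationNeeded, schedule) {
  let pending = false;
  return function markNegotiationNeeded() {
    if (pending) return;      // an event is already queued for this burst
    pending = true;
    schedule(() => {
      pending = false;
      onNegotiationNeeded();  // fires once per burst of changes
    });
  };
}
```

Martin_'s point still applies: changes made from a later gUM callback land in a different iteration and so fire a second event.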
09:36:34 [anant]
ekr: but what happens when I get this callback when I'm still waiting for createOffer to return?!
09:36:43 [anant]
adambe: perhaps we need more call flows & examples before we can dig into this
09:37:18 [anant]
stefanh_: we're only discussing here for tracks or streams, does this also happen when hardware is removed/added, or there is a browser-level mute?
09:37:27 [ekr]
Executive summary: I'm worried about race conditions.
09:37:42 [anant]
but we haven't decided which way to do these, and there are a lot of other things to consider
09:38:19 [anant]
ekr: I'm not against this functionality, I just want it defined in a way that doesn't result in problems
09:38:37 [anant]
juberti: this call flow seems to make sense to developers on webrtc-discuss
09:39:30 [anant]
adambe: to make the API easy to use, this is important, but it's not crucial for the functionality
09:40:08 [anant]
???: do you have any notion of replacing a stream or changing the characteristics? it introduces nasty issues
09:40:31 [anant]
it's not unheard of to replace a m=audio line with another completely different line
09:40:34 [JonLennox]
Speaker is Paul Kyzivat
09:40:46 [dom]
s/???/Paul_Kyzivat:/
09:40:59 [anant]
you have to keep both streams live and then decide which one to keep after a while
09:41:25 [anant]
the question is: the model you're talking about, maybe it's not rich enough to handle those cases? what do you do to your stream to change a codec?
09:42:19 [hta]
anant or dom, can you try joining the hangout again?
09:43:29 [ekr]
ekr has joined #webrtc
09:43:29 [anant]
burn: as far as constraints are defined now, the browser can change the stream midway as long as it satisfies the constraints, even if it need a codec change
09:43:56 [hta]
the "first one here" is another bug...
09:44:34 [anant]
fluffy: one use case is when the browser switching to a narrow/wideband
09:45:19 [anant]
richard: if we just look at the need to renegotiate in SDP, I don't think we want to support changing the media type of an m-line in WebRTC. it would be OK for us to say that once you've defined the characteristics of a media line, it is immutable
09:45:38 [Martin_]
port => 0, or a=inactive
09:46:56 [anant]
richard: if you need to renegotiate in order to add a new media line, you also want to list all codecs that are present in other lines, when creating an offer you want a list of all capabilities
09:47:19 [anant]
juberti: in some cases you do, in some cases you don't. in the JSEP draft I say the cases where you'd need a full offer
09:47:34 [anant]
but for some cases where you are only adding one track you don't need the full offer
09:47:46 [anant]
richard: doesn't the application need to be able to define that?
09:48:24 [anant]
???: does this renegotiation happen, for instance, when direction of an m-line is changed?
09:48:35 [hta]
Tech interrupt: It seems we can get people into the hangout, but we need to invite them explicitly by email address, and only some people manage to invite them. Ping me if needed.
09:48:36 [anant]
juberti: the only way to change the direction is via calling setLocalDescription
09:48:42 [Martin_]
s/???/Andrew Hutton/
09:49:21 [anant]
juberti: the whole idea is that when you get this negotiation callback, the developer creates an offer and ships it off
09:49:42 [anant]
stefanh_: I would like to conclude this discussion… we are moving into IETF territory
09:50:08 [anant]
the consensus seems to be that we need this callback, but editors need to define in what cases
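The callback-driven flow being converged on might be sketched like this (illustrative only: StubConnection, onnegotiationneeded, and the signalling function are stand-ins, not an agreed API):

```javascript
// Sketch of the renegotiation callback under discussion.
// "StubConnection" is a stand-in for PeerConnection, not a real API.
function StubConnection(signal) {
  this.streams = [];
  this.onnegotiationneeded = null;
  this.signal = signal; // function that ships an offer to the peer
}

StubConnection.prototype.addStream = function (stream) {
  this.streams.push(stream);
  // Adding media requires a new offer/answer round, so fire the callback.
  if (this.onnegotiationneeded) this.onnegotiationneeded();
};

StubConnection.prototype.createOffer = function () {
  // A real implementation would generate SDP; we return a placeholder.
  return { type: "offer", sdp: "v=0 (" + this.streams.length + " stream(s))" };
};

// Application code: when renegotiation is needed, create an offer and ship it.
var shipped = [];
var pc = new StubConnection(function (offer) { shipped.push(offer); });
pc.onnegotiationneeded = function () {
  pc.signal(pc.createOffer());
};

pc.addStream({ id: "camera" });
```

The open question from the discussion is exactly which operations fire the callback; in this sketch only addStream does.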
09:50:45 [anant]
adambe: other topics: constraints that we can add, new global object IceServers, createProvisionalAnswers, ICE restart
09:51:41 [anant]
adambe: when we get a stream, how many places in the API can I have an effect on the workings of the system? what are the possibilities of introducing conflicting constraints?
09:52:02 [anant]
if we count tweaking sdp from string and back to object, we have 5 places, and it feels like a lot of places where we can tweak
09:52:37 [anant]
juberti: 1, 2, 3, 4 are all needed, and 4 and 5 seem the same to me
09:53:19 [anant]
ekr: is there a 4 in the spec? do we need it?
09:54:04 [ekr]
sdp.tweakOffer = function(f) { this = f(this); }
09:54:32 [anant]
juberti: there will always be cases where we won't provide what the application wants (and they have to do it by hand), but for streams and gUM they are separate.
09:55:37 [anant]
adambe: I agree that the intention is to modify separate things, but we have to be careful that we don't introduce conflicting constraints
09:56:11 [anant]
the reason for adding #4 is that we should provide APIs to tweak the SDP
09:56:39 [ekr]
my brain seems to be failing: where is the constraints algorithm currently defined?
09:56:45 [anant]
fluffy: I agree no-one should parse the SDP on their own, but I'm hoping that constraints will cover all the things we need to do
09:56:51 [dom]
ekr, in getUserMedia
09:57:20 [dom]
ekr, http://dev.w3.org/2011/webrtc/editor/getusermedia.html#methods-3 more specifically
09:58:21 [anant]
ekr: I'm less concerned about 1vs2 than I am about 2vs3
09:58:43 [dom]
stefanh: wouldn't it be confusing if one constraint in getUserMedia could also be set/overriden in addStream?
09:59:22 [ekr]
oh, I see, it's just not where I expected. Thanks
10:00:21 [anant]
juberti: we need to have a clear indication about what constraints go into which API calls. you can't pass ICE restart into gUM
10:01:01 [dom]
(I think this means the constraints registry should make abundantly clear which constraint applies in which context)
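One way the registry could make context explicit, as a rough sketch (the constraint names and context labels here are invented for illustration, not registered values):

```javascript
// Sketch of a constraints registry that records which context each
// constraint applies to. All names below are illustrative.
var constraintRegistry = {
  aspectRatio:  ["getUserMedia", "addStream"],
  iceRestart:   ["PeerConnection"],
  requireRelay: ["PeerConnection", "addStream"]
};

// Reject constraints used outside their registered context, e.g.
// passing an ICE constraint to getUserMedia.
function checkConstraints(context, constraints) {
  var errors = [];
  Object.keys(constraints).forEach(function (name) {
    var allowed = constraintRegistry[name];
    if (!allowed || allowed.indexOf(context) === -1) {
      errors.push(name + " is not valid in " + context);
    }
  });
  return errors;
}
```

This mirrors juberti's point above that you can't pass ICE restart into gUM: the check would flag it.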
10:01:34 [anant]
juberti: there's a 2nd parameter to the constructor where you put ICE constraints
10:02:00 [anant]
fluffy: some are perfectly willing to use relays for audio, but not for video. constraints will be different for the two cases
10:02:29 [anant]
fluffy: let's do an easy one like aspect ratio. If I set aspect ratio in #1, will that be remembered, or do I call it every time?
10:02:46 [Markus]
Markus has joined #webrtc
10:03:08 [anant]
juberti: 1 gets carried over to 2; but if I add the stream to two different streams, then I can override
10:03:51 [ekr]
correction: 5 doesn't exist
10:03:51 [anant]
harald: this particular point illustrates that constraint settings have to fade at some point, because in the current setup it is easy to define conflicting constraints
10:04:08 [anant]
ekr: 5 exists, 4 doesn't
10:04:17 [ekr]
oh, you're right
10:04:35 [fluffy]
Ted has got the video reflected into webex for the folks on webex
10:04:59 [anant]
burn: I think that there will be subtle differences in interpretations of constraints in the different cases unless we define the context
10:05:06 [anant]
dom: does the registry have a context?
10:05:15 [anant]
burn: currently doesn't but we can add it once we know what we want
10:05:31 [dom]
s/have a context/ask for context for constraints/
10:06:16 [anant]
adambe: for ICEServers, we have two suggestions: list of string, list of list of strings
10:06:18 [DiMartini]
DiMartini has joined #webrtc
10:06:19 [ekr]
so, I think we still didn't work out the merge algorithm
10:06:24 [anant]
dom: first easy change is to make it a dictionary
10:06:28 [ekr]
OR when it's needed.
10:06:30 [fluffy]
we can't hear whoever that was
10:06:33 [anant]
adambe: I think you're right
10:06:39 [ekr]
CAn the chairs keep this issue open?
10:07:09 [stefanh_]
it will be kept open
10:07:18 [ekr]
stefanh_: thanks
10:07:46 [dom]
(the actual syntax would be "DOMString[] servers", not "DOMString servers[]")
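A sketch of normalizing the two shapes under discussion (plain strings and lists of strings) into dictionary form; the `urls` field name is an assumption for illustration, not agreed syntax:

```javascript
// Sketch: normalize the two proposed IceServers shapes (list of strings,
// list of lists of strings) into a dictionary form. Field names are
// assumptions, not spec text.
function normalizeIceServers(servers) {
  return servers.map(function (entry) {
    if (typeof entry === "string") {
      return { urls: [entry] };     // a single server URL
    }
    return { urls: entry.slice() }; // several URLs for one server
  });
}
```

A dictionary form also leaves room for the per-server constraints adambe asks about below, by adding fields next to `urls`.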
10:10:06 [Mauro]
Mauro has joined #webrtc
10:12:17 [anant]
adambe: do we have any requirements for different ICE constraints on different servers?
10:12:59 [anant]
harald: that might make sense
10:13:57 [Martin_]
can someone explain how SRV interacts with this while you are at it?
10:14:21 [hta]
action anant: write up a spec for IceServer object, and compare
10:14:26 [trackbot]
Created ACTION-46 - Write up a spec for IceServer object, and compare [on Anant Narayanan - due 2012-06-18].
10:14:32 [anant]
dom: in your example, PeerConnection has null as the value, the draft doesn't allow null.
10:14:45 [anant]
anant: I think we should allow null and the browser should have defaults.
10:15:06 [anant]
adambe: for createAnswer, do we need the offer argument or can it automatically grab it from the pc?
10:15:09 [JonLennox]
Martin_, I'd think that'd be defined by the STUN/TURN URI definition?
10:15:12 [dom]
(on top of make it nullable, we should also make it optional then)
10:16:32 [anant]
fluffy: no-one could come up with any reason why we couldn't remove the argument
10:18:29 [Martin_]
JonLennox, it's pretty vague in the STUN URI draft
10:18:30 [dom]
ACTION: Anant to provide a code example showing continuation for createAnswer
10:18:31 [trackbot]
Created ACTION-47 - Provide a code example showing continuation for createAnswer [on Anant Narayanan - due 2012-06-18].
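Pending Anant's write-up, a rough sketch of what the continuation could look like (StubPC and its callback signatures are invented for illustration; the point is that createAnswer pulls the offer from the connection's state rather than taking it as an argument):

```javascript
// Sketch of the idea behind ACTION-47: createAnswer as a continuation
// of setRemoteDescription, with no explicit offer argument because the
// connection remembers the offer it was given. "StubPC" is illustrative.
function StubPC() { this.remoteOffer = null; }

StubPC.prototype.setRemoteDescription = function (desc, onSuccess) {
  this.remoteOffer = desc; // connection keeps the offer
  onSuccess();
};

StubPC.prototype.createAnswer = function (onSuccess) {
  // The offer is taken from the connection's stored state, not a parameter.
  onSuccess({ type: "answer", inReplyTo: this.remoteOffer.type });
};

var result = null;
var pc = new StubPC();
pc.setRemoteDescription({ type: "offer", sdp: "v=0" }, function () {
  pc.createAnswer(function (answer) { result = answer; });
});
```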
10:19:00 [JonLennox]
Martin_, should be fixed there then
10:19:29 [Martin_]
JonLennox, I'll take it up with the authors
10:21:57 [anant]
harald: this is the 3rd redesign in 3 months, and I don't want a redesign without a compelling reason
10:22:30 [anant]
6 months ago, I would settle for appealing reasons, but at this point I'd rather have a compelling reason
10:22:46 [anant]
fluffy: we haven't designed error handling yet, this may fall in this category
10:22:54 [dom]
s/in 3 months/in 6 months/
10:23:25 [anant]
harald: want to get into SdpType before lunch. having them twice is wrong, we should settle that
10:23:30 [anant]
ekr: we should have a new method call
10:23:36 [ekr]
that was sarcasm
10:24:18 [anant]
harald: we should try the polling method. who would like to have SdpType inside or outside?
10:25:45 [anant]
harald: 1st question: do you have an opinion?
10:26:00 [anant]
7 opinions
10:26:19 [anant]
how many prefer to have the type inside the sdp object: 5
10:26:40 [anant]
how many prefer to be outside: 2
10:27:07 [anant]
conclusion: put the sdptype inside, remove the additional parameter
10:27:21 [anant]
harald: we can have the discussion about mutability later
10:27:38 [Martin_]
You missed the fourth and fifth questions, which are who thinks that the colour of the bike shed doesn't matter
10:28:06 [anant]
ekr: certain things are errors, but mutating it to wrong values is an error
10:29:23 [anant]
lunch!
10:32:00 [juberti]
derf i would prefer that it be mutable, but yes, that could be a less elegant workaround
10:32:51 [juberti]
it shall be green: http://mamdblueroom.files.wordpress.com/2010/11/bikeshed2.jpg
11:26:56 [stefanh_]
people starting to gather in the Kista room
11:29:46 [DiMartini]
DiMartini has joined #webrtc
11:30:02 [burn]
scribe: burn
11:32:36 [burn]
Topic: Statistics API proposal (http://www.w3.org/2011/04/webrtc/wiki/images/7/7d/June_11_Stats.pdf)
11:33:14 [adambe]
adambe has joined #webrtc
11:33:33 [burn]
hta: vital need for statistics, but often left until the last minute, so i wrote something
11:34:22 [dom]
-> http://lists.w3.org/Archives/Public/public-webrtc/2012Jun/0040.html Stats API proposal, from Harald
11:34:32 [burn]
hta: statistics not intended for end user, mainly for service provider. Is everything actually still working?
11:34:54 [burn]
... since service provider's only access is API, stats should be there
11:35:25 [burn]
... should reuse meanings in other statistics collection approaches
11:35:59 [Mauro]
Mauro has joined #webrtc
11:36:15 [burn]
... MediaStreamTrack is the core unit for collecting stats. Feedback from recipient to sender is important.
11:36:59 [burn]
... since all of the data we care about is time-varying, need to timestamp everything
11:37:37 [burn]
... means we will need to sync clocks (or equivalent), but lots of world knowledge here.
11:38:29 [burn]
... user JS calls GetStats() on pc, then callback returns info
11:38:46 [ekr]
ekr has joined #webrtc
11:39:07 [Martin_]
Martin_ has joined #webrtc
11:40:22 [burn]
... model includes a pointer to track, local/remote data sets, data items are key/value pairs with keys in a new (?) registry
11:40:59 [burn]
... define some MTI stats such as packets and bytes, IP:Port
11:42:01 [burn]
... anyone can propose new statistics for registry. Need to distinguish between unsupported statistics data item and no result for that item.
11:42:21 [burn]
... need aggregated statistics (MediaStream, all PC)
11:42:52 [burn]
... maybe schedule periodic callbacks as well. The latter two may not need to be in version 1
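A sketch of the data model Harald describes, with per-track reports of timestamped key/value pairs for the local and remote side, and aggregation done in JS (all field and key names are illustrative, not registry entries):

```javascript
// Sketch of the proposed stats model: each report points at a track
// and carries timestamped key/value data sets for the local and remote
// side. Key names below are illustrative only.
function makeReport(trackId, local, remote, timestamp) {
  return { trackId: trackId, timestamp: timestamp, local: local, remote: remote };
}

// Aggregate a counter (e.g. bytesSent) across all tracks, as an app
// might do for MediaStream- or PeerConnection-level statistics.
function aggregate(reports, key) {
  return reports.reduce(function (sum, r) {
    return sum + (r.local[key] || 0);
  }, 0);
}

var reports = [
  makeReport("audio-1", { bytesSent: 1000 }, { bytesReceived: 990 }, 1),
  makeReport("video-1", { bytesSent: 5000 }, { bytesReceived: 4800 }, 1)
];
```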
11:43:31 [burn]
... one challenge is that not all info is known to browser
11:43:56 [fluffy]
one comment on OS audio path, echo cancellation often estimates the round trip
11:44:57 [burn]
... another is that synchronized stats are needed for aggregation, but can't always exactly correlate sender and recipient data
11:46:12 [burn]
... (jumps to "issues solved elsewhere") JS solves this
11:46:38 [burn]
anant: setInterval doesn't control when callbacks occur
11:47:36 [burn]
dom: you made this async because collection can take time?
11:48:25 [burn]
hta: if i can't guarantee getting back to you within 10ms, i shouldn't block. sometimes may need to call out to external module that could take time, although usually it won't.
11:48:34 [burn]
adam; can you say "collect for 10 secs"
11:49:12 [burn]
hta: don't want to. count in the core and use callbacks to compare and do the calculation
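Harald's point, sketched: the browser only keeps running counters, and "collect for 10 seconds" is done in JS by snapshotting twice and differencing (field names are illustrative):

```javascript
// Sketch: the browser keeps cumulative counters; the app computes a
// rate over an interval by comparing two snapshots in its own callbacks.
function ratePerSecond(earlier, later) {
  var seconds = (later.timestamp - earlier.timestamp) / 1000;
  return (later.bytesSent - earlier.bytesSent) / seconds;
}

// Two hypothetical snapshots 10 seconds apart.
var first  = { timestamp: 0,     bytesSent: 0 };
var second = { timestamp: 10000, bytesSent: 20000 };
```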
11:49:20 [burn]
s/adam; /adam: /
11:49:41 [burn]
dom: in zakim, eg, can ask who is making noise and it will wait for 10 secs
11:49:45 [burn]
hta: should be done at JS level
11:49:54 [burn]
stefan: have you been thinking about the data channel?
11:50:03 [burn]
hta: no
11:50:11 [burn]
stefan: i don't think we should have stats
11:50:31 [burn]
cullen: web sockets doesn't have stats but is visible to the browser
11:50:51 [burn]
randell: info is useful to app. bytes queued are available in websockets
11:50:54 [dom]
s/browser/server/
11:51:08 [burn]
cullen: at least need bytes xmitted and received
11:51:17 [burn]
randell: per data channel, or global?
11:51:20 [burn]
cullen: not sure
11:51:54 [burn]
hta: the difference from media is that in the data channel the app sees the bytes, but not for media
11:52:19 [burn]
cullen: want to know what happened on network
11:52:28 [burn]
randell: there could be other useful info
11:52:59 [burn]
magnus: about data channel, also have partial reliability option. may need to know reliability stats
11:53:24 [burn]
hta: RFP for ???? MIB exists?
11:53:34 [juberti]
juberti has joined #webrtc
11:53:37 [DanRomascanu]
nobody implements that AFAIK
11:53:48 [derf]
s/RFP/RFC/
11:54:08 [burn]
hta: (continuing with slides) another challenge is model problems
11:54:13 [MagnusW]
MagnusW has joined #webrtc
11:54:41 [burn]
... eg, where to count in FEC streams, where stats go for removed streams, how you count for multi-stream tracks
11:55:09 [burn]
s/????/SCTP/
11:55:15 [dom]
-> www.ietf.org/rfc/rfc3873.txt SCTP Management Information Base (MIB)
11:55:25 [burn]
adam: where are counters in the first place?
11:55:56 [burn]
hta: conceptually they are attached to a MediaStreamTrack. You need a handle to the track to get data
11:56:04 [burn]
dom; why not just leave the object
11:56:07 [burn]
adam: +1
11:56:15 [burn]
s/dom; /dom: /
11:56:21 [dom]
s/why not just leave the object/why not put the stats method on the track object itself/
11:56:27 [burn]
adam: it can remain as an ended or finished track
11:56:43 [fluffy]
I like HTA idea of never removing a track
11:57:12 [burn]
jonathan lennox: there are post-repair stats for IPC (??)
11:57:40 [derf]
s/IPC/RTCP/
11:57:43 [burn]
... there are also multiple remotes. result of tomorrow's discussions may make this more complex
11:58:13 [burn]
hta: don't want to support transport relays on multicast in v1 or rule out doing it in the distant future
11:59:06 [burn]
... with multi-stream tracks, how do I count when the track is in multiple streams but only sent once
11:59:58 [burn]
ted: just count once. if you count for a particular track, you are right. However, adding up counts for all tracks will not add up to the number of bytes sent. Not a problem as long as app author knows what they did
12:00:25 [burn]
justin: track in multiple streams might be sent more than once due to different encodings
12:00:36 [burn]
randell: could be different processing on tracks too
12:00:47 [burn]
justin: should show up multiple times
12:02:18 [burn]
hta: maybe instead of MediaStreamTrack as selector, could query track for what to query to find out about its stats. Then ask PC for the info.
12:02:36 [burn]
anant: what if the same stream/track is added to multiple peer connections
12:02:41 [tuexen]
tuexen has joined #webrtc
12:02:57 [burn]
cullen: sounds too complicated. better just to know what are all the objects to query
12:03:08 [burn]
stefan: why can't this go on the track?
12:03:24 [burn]
... it's all on the receiving side
12:03:39 [burn]
(several): disagree
12:03:58 [burn]
stefan: then the sides need to agree in advance on this info
12:04:02 [burn]
hta: yes, RTCP
12:04:53 [burn]
magnus: need a clear model for how to handle multiple encodings of same media source.
12:05:47 [burn]
justin: on remote side, what would they see if you had different encodings? Two tracks, right? Because different SSRCs. Maybe then we need to clone track rather than using multiple times
12:06:12 [burn]
cullen: this would help with propagating use up to gUM for camera changes, etc.
12:06:42 [burn]
(missed some)
12:07:36 [burn]
randell: adding add'l semantics on top of media stream tracks that already exist.
12:08:15 [burn]
... network media tracks add info on local streams/tracks
12:08:44 [burn]
... tracks in PC are not necessarily the same as those returned from getUserMedia
12:10:33 [burn]
anant: make media stream tracks immutable so you can't change their characteristics after creation. it has fixed properties. if you want to display different resolutions in different images, then those are different tracks. can derive one track from another.
12:10:45 [Gonzalo]
Gonzalo has joined #webrtc
12:10:57 [burn]
justitn: but if want to change resolution, will need to create a brand new track.
12:11:06 [burn]
ekr: what if other side changes resolution
12:11:17 [dom]
s/justitn/justin/
12:11:27 [GangLiang]
GangLiang has joined #webrtc
12:11:40 [burn]
justin: benefit of making immutable? 1-1 identity is nice, but why does that mean you can't change an existing track
12:12:16 [burn]
anant: avoids having to change constraints that may conflict for derived tracks, where we would have to distinguish between changeable params and others that aren't
12:12:23 [Martin_]
I was observing that there are four MediaStream sub-types; LocalIdealMediaStream, LocalPacketizedMediaStream, RemoteIdealMediaStream, RemotePacketizedMediaStream
12:12:25 [burn]
.. can deal with remote changes differently
12:12:57 [burn]
randell: if track is sourced from video element, source-encoded, then you change the track?
12:13:11 [burn]
derf: this could happen at every frame!!!
12:13:25 [derf]
s/frame/keyframe/
12:13:27 [burn]
anant: should be forced to create a new track if characteristics change
12:13:37 [burn]
justin: can happen just by grabbing scroll handle
12:13:43 [Ralph]
Ralph has joined #webrtc
12:13:48 [burn]
randell: encoder might do this itself
12:14:06 [burn]
anant: SDP doesn't have all that?
12:14:13 [burn]
(several): no
12:14:41 [burn]
jimb: perhaps anything in SDP shouldn't be changeable, but everything else is okay?
12:15:05 [burn]
cullen: SDP does specify an envelope within which you can operate. I would still expect to be able to change SDP
12:15:41 [burn]
randell: requested resolution changes may happen without SDP changes, but sometimes might need them.
12:15:47 [burn]
ekr: benefit of immutable?
12:16:51 [RRSAgent]
RRSAgent has joined #webrtc
12:16:51 [RRSAgent]
logging to http://www.w3.org/2012/06/11-webrtc-irc
12:17:04 [dom]
RRSAgent, draft minutes
12:17:04 [RRSAgent]
I have made the request to generate http://www.w3.org/2012/06/11-webrtc-minutes.html dom
12:17:05 [burn]
anant: ?? has fixed size. video doesn't know what resolution is being received on the track. more complex now with fixed output if the track is changing under the covers.
12:17:10 [burn]
randell: already handled today
12:17:11 [dom]
s/??/video/
12:17:30 [burn]
justin: happens for html you download too
12:17:46 [dom]
RRSAgent, make log public
12:18:03 [dom]
RRSAgent, draft minutes
12:18:03 [RRSAgent]
I have made the request to generate http://www.w3.org/2012/06/11-webrtc-minutes.html dom
12:18:07 [burn]
justin: want to avoid downscaling
12:18:38 [Ralph]
Ralph has left #webrtc
12:18:46 [burn]
randell: always latency between UI resize and change in the source. Also may not cause a resize (say if different parties have different sizes for same stream)
12:19:33 [burn]
justin: may go from small to large display and need fuller sending, but that doesn't change other small images.
12:19:36 [burn]
... many reasons for this
12:19:42 [fluffy]
I want to insert myself on Q
12:20:14 [Zakim]
Zakim has joined #webrtc
12:20:17 [dom]
q+ fluffy
12:20:22 [burn]
stefanwenger: there may or may not be value in renegotiation for a change of resolution, but there are *many* SDP params that can change (framerate) during the stream lifetime
12:21:02 [burn]
... the idea that stuff sits in SDP without renegotiation is not true for H.264 and, i believe, VP8
12:21:19 [derf]
s/stefan/stephan/
12:21:26 [burn]
cullen: we agree that two different windows is two track objects. we just don't agree with immutability of a track
12:21:55 [burn]
jimb: what is immutability? can a track change from audio to video? of course not, so that's one kind of immutability
12:22:29 [burn]
hta: will modify proposal to have another layer of indirection so that in simple case we can get just one piece of info back but to allow more complexity
12:22:59 [burn]
dom: question about privacy. some of the info available (remote ip and port) might expose additional information.
12:23:14 [burn]
hta: don't see anything yet that hadn't already been exposed
12:23:31 [burn]
... did say that data must be possible to be anonymized
12:24:13 [burn]
anant: API is getStats, callback. Perhaps instead should be event that can be registered for regular returns
12:24:36 [burn]
hta: concerned about timers that no one is still around to listen to
12:25:06 [burn]
richard: RTCP also has ??? that should be returned / received
12:25:31 [Martin_]
s/???/application data/
12:25:32 [burn]
randell: data channel API would be better way to transmit such info.
12:25:59 [burn]
hta: if we find later that there is other info available in browser that other browser needs, RTP may be way to communicate it
12:26:21 [burn]
hta: application data has multiple meanings
12:26:37 [fluffy]
+1 lenox
12:26:45 [burn]
lennox: app data is stuff for your app, not something standardized. if standardized, not "application data"
12:27:14 [burn]
ddruta: question about remote sources for stats. where does app connect.
12:27:27 [burn]
hta: whatever is sending RTCP reports.
12:27:41 [burn]
druta: should we have param that specifies URI?
12:27:58 [burn]
hta: perhaps could extend that way, but I need to see the use case before we go beyond remote browser
12:28:17 [burn]
stefan: what's next?
12:28:33 [burn]
hta: will come up with new proposal that can handle multiple stats per track.
12:28:42 [burn]
dom: will be separate spec, or part of main one?
12:28:48 [burn]
hta: if quick, should be part of main doc
12:29:03 [dom]
q?
12:29:09 [fluffy]
q-
12:29:52 [dom]
Topic: P2P Data API
12:30:36 [burn]
scribe: DanD
12:30:38 [hta]
hta has joined #webrtc
12:30:47 [DanD]
Topic: Data API
12:31:16 [dom]
-> http://www.w3.org/2011/04/webrtc/wiki/images/4/45/WebRTC_interim-june-2012_Data_API.pdf P2P Data API slides
12:31:29 [dom]
RRSAgent, draft minutes
12:31:29 [RRSAgent]
I have made the request to generate http://www.w3.org/2012/06/11-webrtc-minutes.html dom
12:31:57 [DanD]
adambe Showing example from the slides
12:32:34 [DanD]
.. example creating a datachannel with an active peerconnection
12:32:42 [dom]
s/adambe/adambe:/
12:33:46 [DanD]
fluffy: We need to add the same thing that we do for media for data
12:35:03 [DanD]
jesep: there will be no offer answer for datachannel
12:35:32 [burn]
s/jesep/jesup/
12:35:43 [DanD]
fluffy: I'm on board with this proposal
12:36:39 [DanD]
anant: Complicates the case as it combines everything in one connection
12:37:17 [DanD]
adambe: we talked about the negotiation callback
12:37:51 [DanD]
.. you will only have to create an offer for the first channel
12:38:41 [DanD]
Richard: Why isn't data treated like the other media?
12:39:12 [DanD]
hta: We had this discussion on the mailing list
12:39:41 [Martin_]
mic please
12:39:56 [DanD]
dom: there are differences between media and data
12:40:39 [DanD]
hta: I proposed for unichannels for datachannel
12:41:26 [DanD]
Richard: There seems to be a need to create a construct for datachannels
12:43:01 [DanD]
fluffy: we need to write this down, and we need to negotiate the lines in SDP. We're going in the right direction
12:43:56 [DanD]
adambe: You are right. It can be a container for multiple datachannels
12:44:39 [DanD]
fluffy: how do I know how to receive datachannels?
12:44:40 [DiMartini]
DiMartini has joined #webrtc
12:45:15 [fluffy]
@dan - you get a callback on the PeerConnection that tells you there is a new data stream
12:45:32 [fluffy]
you need some out of band info to know what it might contain
12:45:38 [fluffy]
I think we can do a little better than that
12:47:01 [DanD]
Ted: I agree with Cullen. Designing it on the fly in the room is not productive
12:47:32 [DanD]
jesup: I can write up a proposal
12:47:53 [DanD]
adambe: we have a facility but it's not in JavaScript
12:48:10 [dom]
ACTION: Jesup to write up possible directions for datachannels in peerconnection and relationship with media streams/tracks
12:48:10 [trackbot]
Sorry, couldn't find user - Jesup
12:48:16 [DanD]
burn: It seems that we're treating datachannel as a track
12:48:38 [DanD]
.. we don't have a container to hold all the datachannels
12:49:05 [dom]
ACTION: Stefan to pester Jesup to write up possible directions for datachannels in peerconnection and relationship with media streams/tracks
12:49:11 [trackbot]
Created ACTION-48 - Pester Jesup to write up possible directions for datachannels in peerconnection and relationship with media streams/tracks [on Stefan Håkansson - due 2012-06-18].
12:50:29 [DanD]
justing: datachannels are very application specific
12:50:38 [dom]
s/justing/justin/
12:51:01 [DanD]
fluffy: I'd like to challenge this. CLUE might be able to use this
12:51:54 [DanD]
hta: We need to add the use case for data channel standardization
12:53:08 [dom]
can we get the slides sent to public-webrtc for the benefit of the minutes and the absent?
12:53:32 [DanD]
jesup: going over the slides
12:53:43 [hta]
Sent!
12:54:19 [DanD]
jesup: Open Issues are when can you send data on the datachannel
12:54:36 [dom]
-> http://lists.w3.org/Archives/Public/public-webrtc/2012Jun/att-0063/W3_Interim_June_2012_Data_Channel.pdf Data Channel Issues, slides by Randell Jesup
12:55:49 [DanD]
jesup: Second issue is when can we call create datachannel
12:56:19 [DanD]
adambe: how can I connect datachannel if I don't have a peerconnection?
12:57:16 [DanD]
dom: p2p data is very useful for developers with or without media
12:57:36 [DanD]
.. we should not make the assumption that media is used
12:58:23 [DanD]
jesup: proposal to create offer
12:59:03 [DanD]
.. to create datachannel before createOffer
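A sketch of that proposal (StubPC and its fields are invented for illustration): channels created before createOffer are simply reflected in the offer:

```javascript
// Sketch of the "create datachannel before createOffer" proposal.
// "StubPC" stands in for PeerConnection; field names are assumptions.
function StubPC() { this.channels = []; }

StubPC.prototype.createDataChannel = function (label, options) {
  var channel = { label: label, reliable: !options || options.reliable !== false };
  this.channels.push(channel);
  return channel;
};

StubPC.prototype.createOffer = function () {
  // The offer covers whatever channels exist at offer time.
  return { type: "offer", dataChannels: this.channels.map(function (c) { return c.label; }) };
};

var pc = new StubPC();
pc.createDataChannel("chat");
pc.createDataChannel("file-transfer", { reliable: false });
var offer = pc.createOffer();
```

This also illustrates dom's note below about reliable vs non-reliable channels coexisting on one connection.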
12:59:30 [DanD]
erk: We need a datachannel container as burn suggested
12:59:41 [Martin_]
s/erk/ekr/
13:00:51 [DanD]
ekr: It is an expensive task
13:02:15 [DanD]
jesup: renegotiation need is application specific
13:02:59 [DanD]
Stefan: you cannot treat renegotiation needed with delay
13:03:17 [fluffy]
q+
13:04:00 [DanD]
Richard: If we don't have a construct for data channels
13:04:57 [DanD]
.. first datachannel is special
13:04:59 [dom]
(note that data channels have at least two different types: reliable and non-reliable; I'm not sure how that is dealt with when some channels are reliable, and others are not)
13:06:10 [DanD]
Ted: We have to consider resource utilization (radio) when keeping these datachannels alive
13:07:03 [DanD]
jesup: If you decide you're done with the datachannel you can drop it
13:08:46 [DanD]
..when there's no data it makes sense to shut it down. If you do shut it down you're left with nothing. Back to square 0
13:09:40 [DanD]
.. I don't have an objection
13:10:17 [DanD]
Paul: in support of exposing this object. If there are errors there's no place to report them
13:10:31 [dom]
ack fluffy
13:10:44 [markus]
markus has joined #webrtc
13:11:00 [DanD]
fluffy: agreed with the error handling and add statistics to the case
13:12:17 [DanD]
burn: I'd like to see this explicit object.
13:12:29 [Martin_]
from far enough away, everything looks the same
13:12:42 [fluffy]
q-
13:13:17 [DanD]
.. from an API perspective it looks like a track
13:13:32 [DanD]
hta: doesn't really match
13:14:36 [DanD]
JonLennox: You need to know that you can't create the objects
13:16:34 [dom]
have we come to a conclusion about the mystery data track object? is this discussion part of Randell's previous action item?
13:16:42 [DanD]
jesup: The question is when can you call Send (from the slide proposal)
13:17:43 [DanD]
..if we allow send before open we can reuse code written for websockets
13:18:16 [DanD]
fluffy: I'm not worried about interoperability with websockets. More interested in error handling
13:18:56 [DanD]
jesup: being application specific, application can figure out
13:19:51 [DanD]
hta: if app really needs this it can build it. If you don't have early data it can fake it. I don't favor early data
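Harald's "fake it" suggestion could be built in application code roughly like this (a sketch; the channel interface here is a stub, not the real DataChannel API):

```javascript
// Sketch: with no early data in the API, the app buffers sends until
// the channel opens. The channel object here is a stub with send().
function QueuedChannel() {
  this.channel = null; // null until the channel is open
  this.queue = [];
}

QueuedChannel.prototype.send = function (data) {
  if (this.channel) this.channel.send(data);
  else this.queue.push(data); // buffer until open
};

QueuedChannel.prototype.onOpen = function (channel) {
  this.channel = channel;
  this.queue.forEach(function (data) { channel.send(data); });
  this.queue = [];
};

var sent = [];
var qc = new QueuedChannel();
qc.send("hello");                              // queued: not open yet
qc.onOpen({ send: function (d) { sent.push(d); } });
qc.send("world");                              // sent directly
```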
13:21:43 [DanD]
JonLennox: it's not clear to me what the difference is between "I'm connected but can't send data" and "I just can't send data"
13:22:21 [DanD]
Ted: There's no such thing as early data. It's just data
13:23:45 [DanD]
jesup: If you can create the connection before, better
13:24:10 [DanD]
hta: should we poll for this?
13:24:22 [DanD]
.. a lot of people have opinions
13:25:03 [DanD]
.. decision not to support early data
13:25:23 [DanD]
..coffee break
13:25:43 [DanD]
Stefan: there was support for container
13:25:54 [dom]
ACTION: Adam to work with Randell on a proposal for a data channel container
13:25:59 [trackbot]
Created ACTION-49 - Work with Randell on a proposal for a data channel container [on Adam Bergkvist - due 2012-06-18].
13:49:40 [stefanh_]
scribe: stefanh_
13:51:46 [stefanh_]
First topic after coffee:
13:52:00 [stefanh_]
Report on status Audio WG.
13:52:05 [stefanh_]
(Dom talking)
13:52:20 [adambe]
adambe has joined #webrtc
13:52:38 [stefanh_]
There has been some controversy over what API to pick from two proposals.
13:53:05 [stefanh_]
However, now the group has agreed on one API: the Web Audio API
13:53:25 [stefanh_]
Next topic: Next steps as we continue develop the APIs.
13:54:12 [stefanh_]
Document stages: FPWD, LCWD (several of them, usually), CR
13:54:28 [stefanh_]
At CR we have to prove that the spec is implementable
13:54:45 [stefanh_]
and that different implementations implement the spec in the same way
13:54:58 [stefanh_]
testsuites are created for this purpose
13:55:27 [stefanh_]
slides at http://www.w3.org/2012/Talks/dhm-webrtc-testing/#%281%29
13:55:54 [stefanh_]
one or more testcases for each MUST in the spec
13:55:58 [burn]
scribe: burn
13:56:19 [burn]
dom: similarly for MUST NOT
13:56:48 [burn]
... why do we need to do this? of course the process requires it, but more importantly interoperability is crucial for adoption and success of standards
13:57:22 [burn]
... additionally, writing test cases *REALLY* exercises the spec language, pointing out where interpretations need to be clarified
13:58:36 [burn]
... Although test cases are required for Candidate Recommendation, it's best to start as soon as the spec begins to stabilize. There is an obvious trade-off between getting it done early and being forced to update tests often as the spec changes.
13:59:34 [burn]
... but tests can be written for stable parts of the spec. Some people/orgs are test-driven, requiring a test to be provided for every change request, but this can result in many changes.
13:59:59 [burn]
... Best is not to wait too long. We should set up the testing framework before Last Call, and ideally begin writing tests as well.
14:01:02 [burn]
... Often no one in the group wants to write tests. However, often others outside the group find it fun. It is a great way to improve the specification and does not require agreeing to the intellectual property statements that members must agree to.
14:01:49 [burn]
... It's also a good way to really understand how the spec works -- if you can't write a test for it, the problem may be with the spec.
14:03:34 [burn]
... Best practice is to have one or more test facilitator(s) per spec to oversee work. The facilitators do not have to write all the tests, just ensure they are written properly, getting done, etc.
14:04:52 [burn]
... Most JS-based working groups now use testharness.js (assertion-building primitives), with a repository per spec in dvcs.w3.org. Each group needs to decide on the process for submission and review.
14:06:05 [burn]
... Process could be "submit, review, approve" or "submit, approved" until proved wrong. If there is a formal review process details about the review need to be defined in advance.
14:07:47 [burn]
burn: review process does not have to be laborious or complex. can just have writers review other writers' tests, and vice versa.
14:08:12 [burn]
dom: (now showing test case(s) he wrote for getUserMedia)
14:09:06 [burn]
dvcs.w3.org/hg/media-capture/file/de85fe3f590f/submitted/W3C/ (if I got it right)
14:09:14 [Mauro]
Mauro has joined #webrtc
14:09:48 [Josh_Soref]
Josh_Soref has joined #webrtc
14:10:29 [burn]
(now looking at dvcs.w3.org/hg/media-capture/file/de85fe3f590f/submitted/W3C/video.html)
14:12:08 [burn]
library provides two different kinds of tests: synchronous and asynchronous
14:14:16 [burn]
dom: in this example, he calls getUserMedia and verifies three assertions: there is a LocalMediaStream, no audio tracks were returned, and at least one video track was returned.
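The shape of such a test, sketched with a tiny stand-in harness and a stubbed getUserMedia (testharness.js itself runs in a browser; everything below is illustrative, not dom's actual test):

```javascript
// Miniature stand-in for a testharness.js-style async test, plus a
// stubbed getUserMedia. All names here are illustrative.
var results = [];
function asyncTest(name, fn) {
  fn({
    step: function (cb) { cb(); },                    // run assertions
    done: function () { results.push(name + ": pass"); }
  });
}
function assertTrue(cond, msg) { if (!cond) throw new Error(msg); }

// Stub: always yields a stream with one video track and no audio tracks.
function getUserMedia(constraints, onSuccess) {
  onSuccess({ audioTracks: [], videoTracks: [{ kind: "video" }] });
}

// The three assertions dom describes: a stream came back, no audio
// tracks, at least one video track.
asyncTest("video-only getUserMedia", function (t) {
  getUserMedia({ video: true }, function (stream) {
    t.step(function () {
      assertTrue(!!stream, "a stream was returned");
      assertTrue(stream.audioTracks.length === 0, "no audio tracks");
      assertTrue(stream.videoTracks.length >= 1, "at least one video track");
    });
    t.done();
  });
});
```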
14:14:28 [burn]
s/library/dom: library/
14:14:49 [burn]
anant: why do you call t.step inside the callback?
14:14:55 [burn]
dom: that might be a bug.
14:15:32 [burn]
hta: what's the procedure for running these against implementations?
14:16:06 [burn]
dom: browsers usually run the tests on their own. If they don't pass and they think the test or the spec is wrong, they then contact the WG
14:17:11 [burn]
... also, the second js library allows for integration into various test frameworks for automated testing (for tests that do not require human judgement)
14:19:24 [burn]
... Now for specifics for WebRTC. First, how do you test constraints interoperably? Second, how do you get peers to connect to? There are also server-side components that we may need reference implementations for. We also need to make sure there is not a failure in the protocol itself (beyond the API).
14:20:14 [burn]
JonLennox: if ICE connection fails, need to do XXX. These kinds of tests are needed as well.
14:20:28 [burn]
dom: yes, network conditions need to be simulated as well.
14:21:53 [stefanh_]
scribe: stefanh_
14:22:19 [markus]
markus has joined #webrtc
14:22:47 [stefanh_]
Return to JSEP discussion
14:23:32 [stefanh_]
juberti: should we create a software test harness with virtual input devices, virtual network, etc.?
14:23:57 [stefanh_]
hta: dom is already in contact with chrome test people
14:24:11 [stefanh_]
ekr: we will do this for firefox
14:24:40 [stefanh_]
cullen: when the discussion starts we can contribute
14:24:50 [yang]
yang has joined #webrtc
14:25:05 [dom]
Scribe: dom
14:25:23 [stefanh_]
hta: we're expecting a Mozilla volunteer for testing!
14:25:27 [dom]
Topic: Back to JSEP
14:25:36 [stefanh_]
JSEP again.
14:26:11 [dom]
adambe: we talked about sdptype on media description
14:26:53 [dom]
... you could set the type as provisional either as a param to createAnswer, or by setting the attribute in the generated answer
14:27:12 [dom]
justin: as far as I know, the only meaning of provisioning vs final answer,
14:27:24 [dom]
... the final answer ends the offer/answer exchange
14:27:36 [dom]
... it only affects the state machine, not the actual offers/answers that are generated
14:27:50 [dom]
... so the only effect of that parameter would be to set the type to pranswer
14:28:24 [dom]
... based on previous discussions, we have already identified that the type attribute needs to be mutable
14:28:40 [dom]
... I also object to this ad-hoc parameter on the method
14:28:50 [dom]
ekr: I think I agree with Justin here
14:29:40 [dom]
cullen: setLocal would behave differently with pranswer
14:30:44 [dom]
... I would put it as a constraint
14:31:03 [dom]
richard: there seems to be a potential need for the answer to inform the offer
14:31:13 [dom]
... whether or not the intention behind it is provisional or not
14:31:56 [dom]
martin: the decision is always made by the application
14:32:36 [dom]
justin: it actually matters: there are some cases in which treating an answer as a pranswer is ok, but it's not ok to treat a pranswer as an answer
14:32:44 [dom]
richard: OK from which perspective?
14:33:16 [dom]
justin: at the callee side, the person generating the answer, the app decides whether to mark it as a pranswer or an answer
14:33:37 [dom]
... the caller receives something; if he deals a pranswer as an answer that's bad
14:33:48 [dom]
s/deals a/deals with/
14:33:59 [dom]
... it's probably OK in the reverse
14:34:17 [Jerome]
Jerome has joined #webrtc
14:34:32 [dom]
richard: in SIP, pranswers are not exposed
14:35:50 [dom]
justin: if a caller treats a pranswer as an answer, then the caller assumes that the state machine is in a stable state when it is not
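The asymmetry justin describes can be illustrated with a toy caller-side state machine; the state and type names here are illustrative shorthand, not taken from the spec.

```javascript
// Toy model of the caller's offer/answer state machine. A pranswer keeps the
// exchange open; an answer closes it. Treating a real answer as a pranswer
// merely keeps the exchange open (safe); treating a pranswer as an answer
// wrongly moves the caller to "stable".
function applyRemoteDescription(state, type) {
  if (state !== "have-local-offer") throw new Error("unexpected state: " + state);
  if (type === "pranswer") return "have-local-offer"; // exchange still open
  if (type === "answer") return "stable";             // exchange closed
  throw new Error("unknown type: " + type);
}
```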
14:36:17 [dom]
adambe: to summarize, we can either treat it as a constraint, or use the fact that the type attribute is mutable in the offer object
14:36:34 [dom]
... so, should we have a constraint for it?
14:37:14 [dom]
justin: a constraint would probably be fine
14:37:22 [dom]
dom: why would we need several ways to do this?
14:37:34 [dom]
cullen: linked to error handling
14:37:59 [dom]
... this depends on things we haven't looked at, so I don't think we can really make a decision
14:38:32 [dom]
ekr: if it turns out we need to know that type, I don't think we should stuff into constraints
14:38:39 [dom]
... It really doesn't seem like a constraint
14:39:31 [ekr]
What I'm saying is that if we do decide we need this, putting it in a constraint seems pretty gross
14:39:43 [ekr]
it's not clear to me why it's any better than an extra argument
14:39:53 [Gonzalo]
Gonzalo has joined #webrtc
14:39:56 [ekr]
Obviously, it's just a taste issue
14:40:34 [dom]
adambe: so, we remove the additional argument; if we need it as a constraint, we'll add it back later
14:43:28 [dom]
[discussion about the value of constraints as a host for this]
14:43:56 [dom]
justin: I would prefer we avoid a bunch of positional parameters
14:44:03 [dom]
... a dictionary with options would be much better
14:44:21 [dom]
dan: constraints were not designed for parameters
14:44:30 [dom]
adambe: yeah, I think we should have a settings dictionary
14:46:46 [dom]
ACTION: adam to look at replacing mediaconstraints in createAnswer with a settings dictionary
14:46:51 [trackbot]
Created ACTION-50 - Look at replacing mediaconstraints in createAnswer with a settings dictionary [on Adam Bergkvist - due 2012-06-18].
14:48:02 [dom]
harald: what on earth does it mean for the error callback to be optional?
14:50:10 [dom]
... I see no reason to make it optional since the app stops when an error occurs
14:50:22 [dom]
anant: continuation would help here as well
14:50:34 [dom]
martin: this is similar with things done e.g. in XHR
14:51:32 [Martin_]
setTimeout
14:51:47 [dom]
tim: making it required would at least raise the chances that people copying & pasting the code would deal with errors
14:52:14 [dom]
anant: another approach is to deal with errors as part of a single callback signature à la node.js
14:53:08 [Martin_]
node.js uses doSomething(function(err, value) { }); It's a nice pattern.
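A minimal, runnable sketch of the error-first callback pattern anant and Martin_ mention (the function name and values are made up for illustration):

```javascript
// node.js convention: a single callback takes (err, value). On failure the
// first argument is an Error and the value is null; on success the error
// slot is null and the result follows.
function doSomething(shouldFail, callback) {
  if (shouldFail) {
    callback(new Error("it failed"), null); // error first, no value
  } else {
    callback(null, 42);                     // null error, then the value
  }
}

doSomething(false, function (err, value) {
  if (err) {
    console.log("error: " + err.message);
  } else {
    console.log("value: " + value);
  }
});
```

One signature covers both paths, so callers cannot forget to register a separate error callback, which is the concern raised about making the error callback optional.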
14:54:26 [dom]
adambe: moving on to ICE Restart
14:54:59 [dom]
... should we have an explicit updateIce() method to reset the IceServers configuration
14:55:23 [Mauro]
Mauro has joined #webrtc
14:56:56 [dom]
justin: RFC @@@ says that restarting ICE is done by changing @@@
14:57:15 [stefanh]
stefanh has joined #webrtc
14:57:29 [stefanh]
scribe: stefanh
14:57:31 [JonLennox]
RFC 5245, changing ufrag and password
14:57:37 [stefanh]
discussion on restart ice
14:57:45 [stefanh]
username+password change
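At the SDP level, the change JonLennox cites looks like this: the first pair of media-level attributes is from an original offer (values taken from RFC 5245's example SDP), and a subsequent offer carrying a new ufrag and password, as in the second pair, signals an ICE restart (the second pair's values are made up, sized to meet RFC 5245's minimum lengths of 4 and 22 characters).

```
a=ice-ufrag:8hhY
a=ice-pwd:asd88fgpdd777uzjYhagZg

a=ice-ufrag:9uB6
a=ice-pwd:YH75Fviy6338Vbrhrlp8Yh
```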
14:58:18 [stefanh]
(scribe a bit lost)
14:58:56 [dom]
scribe: dom
14:59:04 [stefanh]
general design: most apps will never call updateIce
14:59:32 [stefanh]
but what drove this is that an app might, after a while, be willing to supply non-relay candidates
15:00:26 [stefanh]
Ted: is there not a need to be able to restart ICE where the app does not supply username+ufrag?
15:00:36 [stefanh]
justin: what we need
15:00:37 [dom]
scribe: stefanh
15:00:53 [stefanh]
api call "generate new one and restart ice"
15:02:18 [stefanh]
the new username+password must be supplied to the server
15:02:40 [stefanh]
adambe: can the server even generate all info?
15:02:52 [stefanh]
does it have all info (like msid)?
15:03:20 [stefanh]
thompson: an advanced server can do this
15:03:48 [stefanh]
justin: we don't need the extra parameter
15:04:26 [stefanh]
magnusw: can someone tell me how this works if it is the browser that detects that an ICE restart is needed.
15:04:51 [stefanh]
cullen: "onrenegotiationneeded" signals this.
15:05:03 [Jerome]
Jerome has joined #webrtc
15:05:49 [stefanh]
lennox: new interface available: should signal to the app
15:06:51 [Mauro]
Mauro has joined #webrtc
15:07:32 [stefanh]
what if you have a perfectly usable 2G connection but move into WiFi coverage
15:07:39 [stefanh]
what should happen
15:08:28 [stefanh]
should be discussed tomorrow
15:09:09 [stefanh]
justin: what should happen when new candidates are trickled 10min after start?
15:10:56 [stefanh]
cullen: what is the difference between a mandatory constraint and a setting?
15:11:03 [JonLennox]
The logic I understood of ICE was that once you converge, the way you change in the future is to do an "ICE Restart". The old selected pair is still live until a new pair is selected.
15:12:41 [stefanh]
cullen asking for guidance on settings/constraints/dictionaries
15:12:49 [Zakim]
Zakim has left #webrtc
15:13:15 [DiMartini_]
DiMartini_ has joined #webrtc
15:14:07 [stefanh]
ekr: should we replace parameters with dictionaries?
15:14:31 [stefanh]
cullen: editors will take liberties and wait for yelling
15:15:21 [stefanh]
hta: chairs to take back to rtcweb that it is unclear how interface changes happen
15:15:58 [stefanh]
Resolution: IceRestart to be removed
15:16:35 [JonLennox]
RFC 5245 9.1.2.1 "Existing Media Streams with ICE Running" is equivalent to trickle candidates before ICE has completed; 9.1.2.2 "…with ICE Completed" says you have to send the existing selected candidate unless you're doing an ICE Restart.
15:16:37 [jesup|laptop]
does it matter that Zakim left?
15:17:12 [JonLennox]
9.1.1 "ICE Restarts" says "during the restart, media can continue to be sent to the previously validated pair."
15:17:29 [JonLennox]
So adding a candidate is an ICE restart; you keep using the old selected pair until the restart succeeds.
15:23:57 [Martin_]
JonLennox, does this imply that you need to gather on the existing network interfaces, or retry connectivity checks on previously failed candidates?
15:26:07 [JonLennox]
You can reuse the existing gather state if you want for the successful candidates, or re-gather. Whether you re-check previously failed candidates is a local decision, depending on whether you have some reason they'll start working now.
15:26:23 [stefanh]
setRemote/setLocal should accept the union of object and string
15:26:27 [Martin_]
correct :)
15:26:42 [JonLennox]
What candidates to gather is the part of ICE that's the most subject to implementation choice
15:28:22 [JonLennox]
But the point is that once you're in the "ICE Completed" state the only way to change your set of candidates is through an ICE Restart.
15:29:08 [JonLennox]
From a w3c pov the interesting question is whether it's the application or the browser that needs to decide whether and when to do a re-gather.
15:29:19 [JonLennox]
(And how)
15:29:29 [Martin_]
The next trick is working out a) how to trigger ICE restart and b) how to discover that an ICE restart is needed...
15:29:49 [Martin_]
I think we have a, but I think we realize that we also need b
15:30:15 [JonLennox]
needed in a broad sense, including "possibly desirable"
15:30:32 [stefanh]
DanB: you usually have to touch the SDP when interoperating
15:30:36 [Martin_]
exactly
15:31:15 [stefanh]
anant: important to define for the normal web developer.
15:32:23 [stefanh]
hta: we need to know what SDP things you'd like to munge before starting to design an API for it
15:33:12 [Martin_]
of course, if you go to the trouble of enumerating your use cases so precisely, you might as well drop the SDP altogether and build APIs for each use case. Understanding the use case is the hard part, designing APIs is easy.
15:36:51 [dom]
RRSAgent, draft minutes
15:36:51 [RRSAgent]
I have made the request to generate http://www.w3.org/2012/06/11-webrtc-minutes.html dom
15:38:04 [ekr]
ekr has joined #webrtc
15:38:59 [tuexen]
tuexen has joined #webrtc
15:40:01 [JonLennox]
JonLennox has left #webrtc
16:07:43 [ekr]
ekr has joined #webrtc
16:15:18 [ekr]
ekr has joined #webrtc
16:36:08 [ekr]
ekr has joined #webrtc
16:55:01 [mreavy]
mreavy has joined #webrtc
16:57:05 [jesup|laptop]
jesup|laptop has joined #webrtc
17:07:32 [Martin_]
Martin_ has joined #webrtc
17:25:47 [tuexen]
tuexen has left #webrtc
18:54:34 [Martin_]
Martin_ has joined #webrtc
18:58:51 [jesup|laptop]
jesup|laptop has joined #webrtc