19:58:39 RRSAgent has joined #mediacap, logging to http://www.w3.org/2012/10/09-mediacap-irc
19:58:44 Meeting: Media Capture Task Force Teleconference
19:58:44 Date: 09 October 2012
20:00:07 Agenda: http://lists.w3.org/Archives/Public/public-media-capture/2012Oct/0005.html
20:00:17 Chair: hta, stefanh
20:00:49 scribe: Josh_Soref
20:01:20 Present+ Travis_Leithead
20:02:37 Present+ Dominique_Hazael-Massieux
20:02:57 Present+ Stefan_Hakansson
20:06:41 Present+ Josh_Soref
20:06:50 Present+ Eric_Rescorla
20:06:55 Present+ Jim_Barnett
20:07:07 Present+ Giri_Mandyam
20:07:56 Topic: Minutes Approval
20:07:42 MoM last meeting: http://lists.w3.org/Archives/Public/public-media-capture/2012Aug/0149.html
20:08:04 Resolution: Minutes from last meeting are approved
20:08:33 Topic: capture settings of a MediaStreamTrack
20:09:14 http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/proposals/SettingsAPI_proposal_v4.html
20:09:30 Travis: talking about the proposal made last week
20:09:35 ... this is an update of multiple previous proposals
20:09:46 ... particularly for device settings, such as microphones/web cameras
20:10:08 ... the first section describes a proposal to remove the existing notion of a LocalMediaStream
20:10:11 ... along with the rationale
20:11:30 ... the second section describes how we propose creating multiple kinds of track objects
20:11:44 ... today we have a vanilla-generic MediaStreamTrack object
20:11:55 ... this proposal factors it out into Video and Audio Track objects
20:12:02 ... and further factors them into Video and Audio devices
20:12:20 ... the third section describes the mechanism for making changes to settings
20:12:23 ... and reading settings back
20:12:41 ... a setting can either take an Enumerated set of values, or a Range of values
20:12:47 ... it also provides a list of proposed settings
20:12:51 ... for Cameras
20:12:55 ... as well as for Microphones
20:13:10 ... and it describes the event(s) that fire as a result of a settings change
20:13:17 ... the fourth section covers a Device List
20:13:33 ... a way for a web developer to discretely discover devices
20:13:51 ... starting from getUserMedia
20:14:11 ... the Device List is a list of obtainable objects
20:14:19 ... but a web page wouldn't automatically get it
20:14:35 ... the fifth, and last, section is a proposed set of Constraints relating to section 3
20:14:41 ... for use with getUserMedia
20:15:18 ... there are also examples of how this would work to accomplish scenarios
20:15:33 ... let me recap the feedback i've received so far
20:15:43 ... very little feedback about section 1
20:15:51 ... section 2 has received little feedback
20:16:01 ... it harmonizes with a counter-proposal that richt_ made last month
20:16:08 ... it's essentially what he proposed
20:16:21 ... it introduces the concept of a Picture Device Track
20:16:27 ... i expected to hear feedback on this
20:16:33 ... i'm curious to know the group's thoughts on that
20:16:43 ... section 3 has received feedback on the mechanism for changing settings
20:16:57 ... what happens when devices decide to alter settings as a result of the environment
20:17:02 ... and how we respond to that
20:17:16 ... and how we use the events (constraintSuccess, constraintError)
20:17:26 ... most of the feedback is about section 4, the device list
20:17:32 ... most of the feedback is about privacy
20:18:35 ... if i approve one camera, that doesn't imply i'm approving all cameras
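To make section 3's Enumerated-vs-Range distinction concrete, here is a minimal sketch in plain JavaScript. Every name in it (`isValidSetting`, `kind`, and the example descriptors) is invented for illustration and does not come from the proposal text:

```javascript
// Sketch of the two setting shapes discussed: an enumerated set of
// values vs. a numeric range. All names are hypothetical.
function isValidSetting(descriptor, value) {
  if (descriptor.kind === "enumerated") {
    // e.g. { kind: "enumerated", values: ["user", "environment"] }
    return descriptor.values.includes(value);
  }
  if (descriptor.kind === "range") {
    // e.g. { kind: "range", min: 5, max: 30 }
    return value >= descriptor.min && value <= descriptor.max;
  }
  return false;
}

const facing = { kind: "enumerated", values: ["user", "environment"] };
const frameRate = { kind: "range", min: 5, max: 30 };

console.log(isValidSetting(facing, "user"));  // true
console.log(isValidSetting(frameRate, 60));   // false: above the range's max
```

A camera setting like facing direction fits the enumerated shape, while frame rate or resolution fits the range shape.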
20:18:38 ... that's good feedback; i'm working on how we could preserve this structure
20:18:48 that's how I read it
20:18:54 ekr: my understanding is that you could only enumerate one type
20:19:05 ... once you've been given permission for that type?
20:19:41 ... under no circumstances is approval of the front camera permission for access to the rear camera
20:19:50 Travis: i was very lenient at first about privacy issues
20:19:53 This goes back to the entire 'fingerprinting' issue
20:20:05 ... initially you can access a list of other devices of the same class
20:20:27 ekr: it's imperative that there's no access to devices beyond what the user provides
20:20:34 ... there's a distinct question relating to fingerprinting
20:20:45 Travis: i think i understand your feedback
20:20:59 ekr: you should be able to interrogate the list of devices at any time
20:21:11 ... but any request to activate must be associated with a user action
20:21:16 Travis: i think i agree with that
20:21:26 ... we have another proposal variant
20:21:36 ... which allows for inspection, but not enabling without consent
20:21:47 ekr: i understand people objecting to enumeration
20:22:03 ... but the people i speak to in security view access as a security block
20:22:11 adambe: this relates to capabilities
20:22:16 ... a range from all information about a camera
20:22:24 ... down to is-video/is-audio
20:22:27 ... down to nothing
20:22:37 ... allowing an application to inspect the whole list is XXa1
20:22:48 hta: there's a shift in the thinking about this
20:23:00 ... i think people objected to getCapabilities
20:23:14 ... if people have stopped objecting to that, it's certainly the simplest way forward
20:23:33 adambe: i think we had consensus around hasAudio/hasVideo
20:23:40 hta: i think we had consensus on deviceCount
20:23:53 ... but not a clear consensus on what makes an application trusted
20:24:01 adambe: i think that's correct
20:24:19 ... not hearing someone object to unrestricted enumeration
20:24:26 ... doesn't indicate there isn't objection
20:24:43 anant: w3c's security WG released a statement that "fingerprinting is no longer an issue"
20:24:47 (where was that statement made?)
20:24:49 ... i think we're ok with enumeration
20:25:00 ... enumeration is ok, but actual access is not
20:25:12 hta: anant, does enumeration include device capabilities?
20:25:24 anant: are you talking about returning constraints?
20:25:27 hta: yes
20:25:29 anant: i think that's fine
20:25:37 ... whatever we return in the list, i think is fine to return
20:25:57 hta: working hypothesis: any application can use getCapabilities at any time
20:26:03 Travis: i'd like to voice my word of caution
20:26:21 ... i backed the word of caution about not exposing arbitrary attributes
20:26:34 ... based on the principle of fingerprinting
20:26:40 ... while this may seem contradictory
20:26:46 ... if a user has approved "a camera"
20:26:55 ... i've crossed the first bridge
20:27:15 ... and then if we take this a step further, allow the application to request permission for additional resources
20:27:24 ... i'm not sure i'm comfortable with getCapabilities in a general sense
20:27:43 +1 on not comfortable with general getCapabilities
20:27:54 gmandyam: you mentioned later in the document
20:28:09 ... where would you have Photo capabilities?
20:28:23 Travis: the Video Device (like a web camera) may provide a Picture Device
20:28:36 ... and you can use that device to apply settings to a high-resolution picture
20:28:44 ... those settings don't apply to the video stream
20:28:52 ... they only apply to the takePicture API
20:28:58 This seems a reasonable way to handle video with pictures
20:29:03 gmandyam: i didn't understand how preview would work
20:29:07 ... wrt takePicture
20:29:19 video stream is preview
20:29:41 ... you should have a video stream continuously during the takePicture
20:29:58 Travis: my thought is that the VideoDevice lets you configure your Video stream
20:30:02 ... you can go into the PictureDevice
20:30:12 ... which may support a 12MP resolution (i.e. much better than video)
20:30:22 ... you could request that resolution on the PictureDevice
20:30:28 ... that wouldn't affect your Video element
20:30:36 ... but takePicture would apply those settings
20:30:48 ... take the large (12MP) image and then return back to the Video stream resolution
20:30:54 ... i spoke w/ the MS video team this morning
20:31:09 ... relating to hta's comment about cameras that dynamically resize their output for different reasons
20:31:22 ... some cameras put settings for the camera to the maximum
20:31:31 ... and the camera drivers resample it down for video
20:31:39 ... so the sensor is working at high res
20:31:46 ... that may dramatically reduce framerate
20:31:56 This matches the general thrust of what Mozilla was thinking of in picture capture IMHO
20:32:11 anant: i like takePicture
20:32:16 ... we have an api we've implemented
20:32:32 ... do you feel constraints for a PictureDevice are significantly different from a Video stream?
20:32:37 ... to me, the answer seems to be yes
20:32:48 ... filter/autofocus
20:33:04 [ Josh_Soref notes that some video cameras support auto focus ]
20:33:13 Pictures tend to have an almost-infinite set of parameters :-)
20:33:17 ... for Firefox OS, we have an autofocus
20:33:29 https://wiki.mozilla.org/WebAPI/CameraControl
20:33:31 ... you mentioned permissions
20:33:35 http://lists.w3.org/Archives/Public/public-webappsec/2012Sep/0048.html
20:34:22 anant: in that message, he says that users concerned about tracking will need a special UA
20:34:34 ... the UX for that doesn't seem great
20:34:48 ... the first is "allow enumerate" and then "pick a camera"
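Stepping back, the takePicture flow Travis describes can be modeled as a toy: photo settings apply only to the still capture and never disturb the live video settings. The class and method names here are invented for illustration, not the proposal's API:

```javascript
// Toy model: the video track keeps its own resolution; photo settings
// apply only when takePicture() runs, then the sensor conceptually
// returns to video mode. All names are hypothetical.
class PictureDeviceSketch {
  constructor(videoSettings) {
    this.videoSettings = videoSettings;        // e.g. { width: 640, height: 480 }
    this.photoSettings = { ...videoSettings }; // defaults to the video mode
  }
  configurePhoto(settings) {
    // Affects future takePicture() calls only, never the video stream.
    Object.assign(this.photoSettings, settings);
  }
  takePicture() {
    // Capture at the photo resolution, leaving video settings untouched.
    return { width: this.photoSettings.width, height: this.photoSettings.height };
  }
}

const dev = new PictureDeviceSketch({ width: 640, height: 480 });
dev.configurePhoto({ width: 4000, height: 3000 }); // ~12 MP still
const photo = dev.takePicture();
// photo is 4000x3000, while dev.videoSettings is still 640x480
```

This mirrors the "apply settings for the still, then drop back to the video resolution" behavior discussed on the call.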
20:35:11 ... if we can get a nicer experience with only one popup, and somehow do enumeration after authorization
20:35:16 ... i'm ok with that
20:35:35 anant: how do you intend to expose the Device List?
20:35:44 Travis: you get it from an existing Device object
20:36:15 anant: that seems convoluted
20:36:25 ... i'd prefer a simpler approach
20:36:45 ... sophisticated apps will want to enumerate first
20:36:50 ... and then pick a device
20:37:11 [ time check: 5 minutes remaining for this topic ]
20:39:12 adambe: on anant's comments
20:39:52 ... trying to enumerate first triggers two popups
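adambe's two-popup point can be made concrete with a toy consent model (every name here is hypothetical): enumerating first costs one prompt, opening the chosen device costs a second, whereas a single combined request costs one:

```javascript
// Toy consent model showing why enumerate-then-open costs two prompts
// while a single getUserMedia-style combined request costs one.
// All names are invented for illustration.
function makeUserAgent(devices) {
  let prompts = 0;
  return {
    promptCount: () => prompts,
    enumerate() { prompts += 1; return devices.map(d => d.label); },
    open(label) { prompts += 1; return devices.find(d => d.label === label); },
    openAny()  { prompts += 1; return devices[0]; }, // one combined prompt
  };
}

const ua = makeUserAgent([{ label: "front" }, { label: "back" }]);
ua.enumerate();   // prompt 1: may I list your cameras?
ua.open("back");  // prompt 2: may I use the back camera?
// ua.promptCount() is now 2, versus 1 for a single openAny() call
```

This is the UX trade-off anant raises just above: enumeration after authorization could collapse the flow back to one prompt.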
20:40:07 adambe: for every device, there's at least one popup
20:40:53 ekr: in Aurora, the popup has a chooser
20:40:59 ... to let you pick the device you want
20:41:11 ... if you look at Google Hangouts
20:41:19 ... it has an in-content interface to select which devices you want
20:41:31 ... how much of that interface would continue to be possible under WebRTC?
20:41:36 ... what should a site be able to do?
20:41:41 ... as that would inform what to offer the user
20:42:20 ... i don't want two choosers
20:42:25 ... and have that be XXek for the user
20:42:43 adambe: last week we were cautious about fingerprinting
20:42:45 ... and today we aren't
20:42:49 ... it feels strange
20:43:40 dom: anant, thanks for that link
20:43:54 ... i wouldn't say that that link is a statement of the W3C's position
20:44:02 ... it's limited to WebAppSec
20:44:08 ... i was on a privacy call two weeks ago
20:44:14 ... and i don't think that's their view
20:44:25 ... i don't think that view is broadly accepted
20:44:32 ... i'm happy to take an action to research that
20:44:46 ekr: i co-chair WebAppSec with bradh
20:45:00 ... it wasn't a statement on behalf of the Web App Sec WG
20:45:55 ekr: we should probably have a meeting at TPAC to talk about this
20:46:03 dom: i agree that it makes sense to talk about this at TPAC
20:46:04 action dom: clarify W3C position on fingerprinting
20:46:04 Created ACTION-10 - Clarify W3C position on fingerprinting [on Dominique Hazaël-Massieux - due 2012-10-16].
20:46:16 ... coming back to this WG
20:46:31 ... i'd be very cautious about making design decisions assuming this is no longer a concern
20:46:52 Topic: Constraints and Memory
20:47:11 hta: once you get a device
20:47:18 ... after having specified constraints
20:47:39 ... take as a given that some devices will change their configuration
20:47:50 ... should an application expect a device to stay within constraints?
20:47:59 ... or should they expect it to wander outside them?
20:48:18 ... if we ask to change its configuration
20:48:34 ... can we expect that all previously applied constraints are still applicable (unless overridden)?
20:48:42 XXcc: XXcd?
20:49:00 Travis: i want to question that devices will change their configuration
20:49:05 ... that may be true for a peer connection
20:49:15 ... but for a device (camera/microphone), it's never the device
20:49:21 ... but perhaps the OS that responds to input
20:49:41 hta: i was using "Device" as shorthand for "device, drivers, and everything else beyond the browser"
20:49:42 Disagree, dsp-enabled cameras will adapt frame rates with no OS input, I believe
20:49:55 ... mac cameras are famous for adjusting framerate under low-light conditions
20:50:00 Travis: that's the mac os doing it
20:50:07 ... not the camera of its own volition
20:50:12 hta: it's hard to see where that line is
20:50:29 ... if we accept "device" as "everything below the api surface"
20:50:33 Travis: the platform evolves
20:50:42 ... we have apis exposing environmental sensors
20:50:53 ... you may want to implement these in the application itself
20:51:00 ... we should provide the way to do those things if you want to
20:51:17 ... make the assumption that the device is a consistent mechanism
20:51:27 ... apply state, read state
20:51:30 ... be able to depend on that
20:51:33 hta: i'm skeptical
20:52:22 dom: this api brings a number of fairly deep changes
20:52:32 ... i'm wondering what the plan is around the schedule for this set of features
20:52:38 ... is this part of the main spec?
20:52:41 ... is it a distinct module?
20:52:47 ... are we slipping our schedule?
20:53:04 stefanh: feedback we've gotten is that the MediaStream api wasn't sufficient
20:53:09 ... people wanted additional features
20:53:20 ... i guess we're slipping
20:53:49 dom: does that mean implementers aren't shipping getUserMedia?
20:53:55 ... i know MS doesn't announce shipping plans
20:54:00 ... maybe mozilla can comment?
20:54:09 anant: we want to support getUserMedia and MediaStream
20:54:17 ... we don't support everything
20:54:30 ... our intention is to support everything from getUserMedia/MediaStream as in the draft
20:54:47 dom: that conditions the work on the simple getUserMedia api
20:55:01 so sticking to our schedule seems reasonable
20:55:16 jesup: about hardware/dumb-hardware/smart-hardware
20:55:22 ... my experience is from embedded devices
20:55:35 ... webcams do adaptations automatically unless you stop them
20:55:40 ... maybe the OS can do this
20:55:46 ... whether the OS or the camera does it
20:55:54 ... the framerate varies according to light level
20:56:00 ... we shouldn't assume the hardware is dumb
20:56:07 ... assume the hardware may be more active than that
20:56:14 ... be prepared for that
20:56:20 ... it's going to be
20:56:25 ... and in many cases it already is
20:56:57 stefanh: XXY
20:57:44 ... can you elaborate on the relation between getUserMedia constraints and constraints in the request operation?
20:57:58 Travis: the proposal defines constraints for Video/Audio in section 5
20:58:02 ... e.g. a width/height constraint
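The constraint shapes under discussion here — a bare number or a min/max range — can be sketched as follows; `normalize` and `satisfies` are illustrative helpers, not part of any spec:

```javascript
// Sketch: a constraint value may be a bare number (exact) or a
// {min, max} range. normalize() turns both into a range; satisfies()
// tests whether a device value meets the constraint. Names are hypothetical.
function normalize(c) {
  return typeof c === "number" ? { min: c, max: c } : c;
}
function satisfies(c, value) {
  const r = normalize(c);
  return value >= r.min && value <= r.max;
}

const width = 1024;                     // exact value
const frameRate = { min: 25, max: 30 }; // range, e.g. 25-30 Hz
console.log(satisfies(width, 1024));    // true
console.log(satisfies(frameRate, 15));  // false: below the requested range
```

The 15 Hz case is the situation stefanh probes on the call: a value outside the originally requested range, which the UA may still honor if it is within the device's own range.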
20:58:07 ... either a number or a min-max range
20:58:15 ... the request api
20:58:19 ... when you invoke it for a settings change
20:58:25 ... they build up iteratively
20:58:33 ... so if you change 1024x768 to 800x600
20:58:38 ... you request 800x600
20:59:07 ... each time you make a request, you build onto the structure being generated for you
20:59:22 ... when your context ends, the constraints being built are applied
20:59:31 ... a question applies to specific values or ranges
21:00:02 stefanh: if you start with 25-30hz
21:00:09 ... and then 15hz?
21:00:21 ... if it's outside your original constraints, is that ok?
21:00:39 Travis: if you specify within the device range
21:00:44 ... but outside the getUserMedia request
21:00:48 ... you still try to honor that
21:01:15 hta: any other comments?
21:01:48 Topic: Recording API proposal
21:02:01 (I guess we haven't quite determined how we integrate this in the spec, but we can figure that out after the call)
21:02:11 Jim_Barnett: 4 high-level questions
21:02:22 http://lists.w3.org/Archives/Public/public-media-capture/2012Oct/0010.html
21:02:31 ... do we want recording to be a separate interface or a partial?
21:02:34 separate interface++
21:02:37 ... a lot of people like a separate one
21:02:49 ... Travis identified not likely allowing overlapping recordings
21:02:59 ... what's the relationship between recording and media capture?
21:03:14 ... if there are separate apis, we might be able to make things simpler with a lower-level api?
21:03:19 ... XXf?
21:03:29 ... do we think there are any MTI formats?
21:03:44 I suggest to the list generally
21:04:04 Travis: i think i should bring up the background of the Track object instead of a MediaStream
21:04:11 ... we started off trying to record a MediaStream
21:04:19 ... which is what any sane person would have thought would work
21:04:27 ... after trying to get some data out of a stream
21:04:34 ... you have to face that a MediaStream is mutable
21:04:39 ... tracks can come and go at any time
21:04:47 ... as a recorder, trying to latch onto a media stream
21:05:01 ... you have to specify the behavior of your recorder under all of those changing conditions
21:05:13 ... that's how we ended up specifying a Track-level-based Recorder
21:05:47 jesup: i understand the concern about MediaStream v. Tracks
21:06:07 ... but trying to integrate Tracks and synchronize them seems to be hard
21:06:22 ... for the basic non-mutating case it seems nice to solve this
21:06:36 Jim_Barnett: if we keep the track-level api, you can do the more sophisticated thing with that
21:06:39 ... hta's suggestion
21:06:50 ... if your format can handle it, great; if not, it gets an error
21:06:57 ... but if we don't have mandatory formats
21:07:06 ... then recorders will behave very differently on different platforms
21:07:13 hta: recordings will fail
21:07:17 ... for many reasons
21:07:29 ... failing because the browser ran out of disk for temporary storage
21:07:50 ... if a recording fails because you ask for something the stream doesn't support
21:07:58 Travis: that's a fair assumption
21:08:06 ... when i discussed Recording with the MS Media folks
21:08:15 ... they assumed all the different Tracks in the MediaStream
21:08:26 ... would be layered into a container format that could be supported
21:08:31 ... they asked for a track limit
21:08:44 ... i said we're going to have only one track
21:09:05 ... but i learned there are container formats that support multiple tracks
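Travis's rationale — a MediaStream is mutable, so the recorder latches onto a single track — can be sketched with a toy recorder. All names are hypothetical, modeling the idea under discussion rather than any shipping API:

```javascript
// Toy track-level recorder: it binds to one track, so tracks being
// added to or removed from the parent stream never disturb it.
// All names are invented for illustration.
class TrackRecorderSketch {
  constructor(track) {
    this.track = track;
    this.chunks = [];
    this.recording = false;
  }
  start() { this.recording = true; }
  push(sample) { if (this.recording) this.chunks.push(sample); }
  stop() { this.recording = false; return this.chunks; }
}

const stream = { tracks: [{ kind: "video", id: "cam1" }] };
const rec = new TrackRecorderSketch(stream.tracks[0]);
rec.start();
rec.push("frame-1");
stream.tracks.push({ kind: "audio", id: "mic1" }); // stream mutates mid-recording
rec.push("frame-2");
// the recorder's output is unaffected by the stream mutation
```

A stream-level recorder, by contrast, would have to define its behavior for every such mutation, which is the complexity the track-level design sidesteps.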
21:09:15 ... we can support, say, 2 tracks and set that as a cap for a recording
21:09:16 DVDs (mpeg2-ts I assume) can have N video tracks, and N audio tracks, i believe
21:09:28 stefanh: for a media element, it's specified so tracks can come and go
21:09:36 Jim_Barnett: don't they have a content primary track?
21:09:44 stefanh: they used to, but last i checked, they didn't really
21:09:55 ... for recorder, you should record all media tracks
21:10:13 Jim_Barnett: so for recorder, it should try to record everything
21:10:21 ... and then have it throw if it fails?
21:10:42 ... for more complicated recording, you'd have to pull the data out into your own object and record that
21:11:07 Jim_Barnett: if we make them the same interface, it simplifies things
21:11:25 gmandyam: it looks like w8 was the inspiration for this
21:11:40 ... the android api allows for setting an audio interface and a video interface
21:11:43 ... why didn't you just do that?
21:11:52 Jim_Barnett: the track-by-track basis
21:12:00 ... for media applications, you need access to just one track
21:12:04 Android media recorder: http://developer.android.com/reference/android/media/MediaRecorder.html
21:12:09 ... for video tracking, you need just the video
21:12:19 If you need to work on a single track, create a derivative MediaStream with one track
21:12:20 ... for speech recognition, you need just the audio track
21:12:53 MediaStream Processing API :-)
21:12:56 gmandyam: i don't know XXq
21:13:05 Jim_Barnett: there's no way to access video/media in their own format
21:13:17 ... we need an api to ask for media in a known format
21:13:35 Travis: why do we latch onto MediaStream/Track v. a standalone recorder?
21:13:38 gmandyam: yep
21:13:55 adambe: say you have a media stream
... it has a video track playing
21:14:04 ... and another video track starts playing
21:14:13 ... do you expect to have 2 media tracks?
21:14:24 ... suddenly the content is switched
21:14:44 Travis: we don't know
21:14:50 ... and they're complicated problems to figure out
21:15:01 Jim_Barnett: there's very little structure to MediaStreams/Tracks
21:15:08 ... you could try to assume there's a primary track
21:15:16 ... but that may work for some cases, and not others
21:15:32 Jim_Barnett: that's a reason to have a low-level api
21:15:40 adambe: say there's a conference
21:15:44 ... and one peer records the conference
21:15:54 ... a viewer might want to be able to switch between the different participants
21:16:06 ... recording a stream is exactly as it'd look in a video element
21:16:13 ... the resulting thing
21:16:22 Jim_Barnett: if a viewer could switch during playback
21:16:28 ... you'd include all in the file
21:16:31 agree with Jim
21:16:32 ... and the viewer would choose
21:16:44 adambe: while that's neat
21:16:57 stefan is correct
21:17:00 ... i think it's more reasonable to just record the visible track
21:17:08 Jim_Barnett: you could have a MediaStream where it has 4 Tracks
21:17:13 ... each of which is being displayed
21:17:23 Travis: we could think of the recorder as a Destination for a MediaStream
21:17:28 ... instead of part of the Pipeline
21:17:42 ... the Recorder could build a notion of a primary track
21:17:47 ... putting the control into the application
21:17:56 ... getting away from the view of the application
21:18:07 hta: you might want to look at the Web Audio API proposal
21:18:16 ... it's implemented in Chrome on Mac
21:18:25 ... there you can get Audio from a MediaStream track
21:18:34 ... i think that's implemented as a destination
21:18:46 adambe: we have the notion of enabled/disabled tracks in a stream
21:18:52 ... but i think we're moving away from that
21:19:24 hta: the proposal has gotten a good deal of feedback. we'll take it to the list
21:19:32 Topic: Direct assignment
21:19:43 hta: createURL()
21:19:56 ... instead of doing that on the video source
21:20:01 ... we have an attribute on the