01:37:21 hongchan has joined #audio
01:49:44 ChrisL has joined #audio
02:19:45 ChrisL_ has joined #audio
05:25:01 kawai has joined #audio
10:25:35 kawai has joined #audio
13:43:19 RRSAgent has joined #audio
13:43:19 logging to http://www.w3.org/2016/04/08-audio-irc
13:43:39 Chair mdjp
13:43:56 jdsmith has joined #audio
13:44:26 zakim, this will be audio
13:44:26 ok, BillHofmann
13:44:32 present+ jdsmith
13:45:07 zakim, agendum 1 Rechartering
13:45:07 I don't understand 'agendum 1 Rechartering', BillHofmann
13:45:56 agenda+ Recharter
13:46:05 agenda+ Output Device Selection
13:46:14 agenda+ Web MIDI Update
13:46:30 agenda+ Discussion of Actions to be taken forward for Recommendation Process
13:46:36 hongchan has joined #audio
13:46:47 zakim, take up agendum 4
13:46:47 agendum 4. "Recharter" taken up [from BillHofmann]
13:46:49 present+ hongchan
13:46:54 scribe BillHofmann
13:47:10 mdjp: should open discussion about whether/how we recharter
13:47:26 mdjp: I believe we should continue, and it's collecting new use cases, etc.
13:48:17 mdjp: issue raised at TPAC whether WG should split into specific areas
13:48:45 mdjp: also of course relates to Web MIDI.
13:49:02 mdjp: does anyone have objections to rechartering with a goal to specing V2, in the same structure as we are in today?
13:49:08 all: silence :)
13:49:34 mdjp: Joe + Matt need to discuss with Chris Lilley re process, but...
13:50:44 mdjp: need to document wide review - WAC, etc are good evidence of this
13:50:58 mdjp: testing is also key here. Google/Mozilla - any updates on the status?
13:51:15 rtoyg_m has joined #audio
13:51:31 padenot: would be good to have Joe's intern's updates
13:52:06 padenot: we have run a few blink tests, and ran ok. However, ~95% of mozilla tests are not in Web Platform Tests - that'd be good to do
13:52:49 rtoyg_m: good thing is that tests can be run in a regular browser
13:53:10 hongchan: advantage vs last year, most tests run in OfflineAudioContext
13:53:12 ChrisL has joined #audio
13:53:26 padenot: most tests now run in both Offline and regular
13:53:52 np.
13:55:45 rtoyg_m: our goal is to cover every user visible thing, but a ways away from that. most basic things covered.
13:56:53 jdsmith: asking re coverage required.
13:57:21 mdjp: shows coverage sheet from Joe's intern's test
13:57:40 jdsmith: are these tests to be submitted?
13:57:57 mdjp: we need to produce an interoperability report
13:58:49 jdsmith: Media goals include common test suite run on all browsers; at least two implementations have to pass all the tests
13:59:56 mdjp: been going on ever since I joined the WG, really need someone to cover
14:00:26 jdsmith: Media WG got help from W3C
14:00:38 hongchan has joined #audio
14:01:11 mdjp: probably no-one around the table has the time to do this
14:02:41 BillHofmann: what is the state of Edge (including test coverage)
14:02:51 jdsmith: not sure, took a snapshot of Chromium last year.
14:03:08 padenot: there is an independent test suite someone created...
14:03:24 https://github.com/mohayonao/web-audio-test-api
14:05:44 primarily a DOM test - covers all interfaces
14:06:28 padenot: roughly 90% WebAudio coverage on mozilla - checking now
14:07:20 hongchan has joined #audio
14:08:45 mdjp: this is a good framework to fill in from, perhaps
14:08:57 mdjp: testing is the main work remaining
14:09:30 jdsmith: it's up to us to decide what interop means
14:09:57 mdjp: should discuss with ChrisL - but likely we need at least some level of perceptual match for output
14:10:33 ideally tests decide a pass/fail from script automatically, but yes we will need some tests that people have to listen to
14:10:52 although maybe we can do some clever things with nulling where the test passes if there is silence
14:11:14 taxi seems to be arriving, be there shortly
14:11:55 hongchan: our tests do a lot of functional testing
14:12:34 padenot: we've been using some techniques where you run the signal twice, invert, sum - should be zero
14:13:21 padenot: one version run through graph, other is hand-built
14:13:35 rtoyg_m: we do that a lot, as well
14:14:03 rtoyg_m: some require bit accuracy, sometimes we do PSNR tests
14:15:00 rtoyg_m: most everything is completely automatic, working to get to 100%
14:17:50 kawai: I know the developer.
14:18:25 mdjp: do we think he might be interested in extending the suite to cover functional tests? Can kawai talk with him?
14:19:27 BillHofmann: we ought to send him a letter rather than invite to a call
14:19:42 jdsmith: what would collaboration look like? we could contribute cases to this?
14:20:04 mdjp: how does this fit into Web Platform Tests?
14:22:12 padenot: we have a few things in it, not much at this point. all new tests get pushed upstream to W3C test suite
14:22:37 kawai: uses testharness.js - if you write to this and submit, it's automatically tested.
14:24:17 mdjp: we've got test coverage in two browsers; mohayonao tests, web platform tests - so in reasonably good state, but need to pull things together
14:25:33 https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/webaudio/
14:25:38 This is our test suite.
14:25:43 (Chrome)
14:27:32 jdsmith: we've run these as well, don't know status of results, though
14:27:41 padenot: 2-3 of these are run regularly.
14:34:04 mdjp: what do we do for rechartering?
14:34:48 mdjp: new charter should cover new use cases, etc.
14:35:00 ChrisL: BillHofmann, should that cover audio output?
14:35:13 BillHofmann: yes - though should it be in our WG?
14:35:34 ChrisL: we could call it out as a requirement whether or not we do it ourselves
14:35:41 mdjp: How long do we need?
14:36:12 ChrisL: roughly June, for summer holidays. includes W3C staff overhead (~2 weeks), balloting, collating responses
14:36:35 ACTION: ChrisL to draft new charter
14:36:35 Created ACTION-125 - Draft new charter [on Chris Lilley - due 2016-04-15].
14:36:45 ACTION: ChrisL to create implementation report
14:36:46 Created ACTION-126 - Create implementation report [on Chris Lilley - due 2016-04-15].
14:37:05 ACTION: mdjp to coordinate new use cases
14:37:05 Created ACTION-127 - Coordinate new use cases [on Matthew Paradis - due 2016-04-15].
14:38:49 mdjp: main outstanding item is testing. we've found there are a lot of resources available.
14:40:39 mdjp: We've tried to get community input, just hasn't worked. Feels like this just needs someone to coordinate.
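[Editor's illustration, not part of the meeting log: a minimal sketch of the kind of automated test discussed above - an OfflineAudioContext rendering "nulled" against a hand-computed reference, written against testharness.js so it could be submitted to Web Platform Tests. The node choice, gain value, and tolerance are placeholders.]

  // Assumes testharness.js is loaded on the page.
  promise_test(function () {
    var sampleRate = 44100;
    var length = sampleRate; // one second
    var context = new OfflineAudioContext(1, length, sampleRate);

    // Constant 1.0 source, built by hand.
    var buffer = context.createBuffer(1, length, sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < length; i++) data[i] = 1.0;

    var source = context.createBufferSource();
    source.buffer = buffer;
    var gain = context.createGain();
    gain.gain.value = 0.5;

    source.connect(gain);
    gain.connect(context.destination);
    source.start(0);

    return context.startRendering().then(function (rendered) {
      // Null test: subtract the hand-computed reference (0.5) from the
      // rendered output; the residual should be (near) silence.
      var actual = rendered.getChannelData(0);
      var maxError = 0;
      for (var i = 0; i < length; i++) {
        maxError = Math.max(maxError, Math.abs(actual[i] - 0.5));
      }
      assert_approx_equals(maxError, 0, 1e-7, 'residual after nulling');
    });
  }, 'GainNode applies a constant gain of 0.5');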
14:41:34 mdjp: is there any scope/help we can get for this from W3C (to coordinate getting existing tests into)
14:42:16 ChrisL: Web Platform Test tends to be DOM level
14:43:07 ChrisL: it's a best effort on test coordination
14:43:57 jdsmith: should they be in one location?
14:44:48 rtoyg_m: most of ours are plain HTML
14:45:37 BillHofmann: could they be just pushed into the web platform tests?
14:45:42 rtoyg_m: sure
14:46:06 mdjp: what we need, basically, is one (mozilla or chrome) to run against.
14:47:35 jdsmith: the Web Platform tests, I believe, are WebIDL tests, so might have coverage there
14:47:53 ChrisL: can take a look and see if we have someone to help coordinate the test effort.
14:48:08 jdsmith: Media group had someone to assist with this.
14:48:36 ChrisL: I'll be seeing Philippe on Monday, can discuss what he did.
14:49:30 BillHofmann: could the existing Gecko tests be pushed up?
14:49:37 padenot: no, different test harness.
14:51:18 mdjp: is there an assumption that we have two complete implementations for CR?
14:51:39 padenot: depends on how you count forking, 2-4, yes. minimum 2 completely different.
14:52:08 rtoyg_m: many of the Safari tests will fail - way behind.
14:52:54 ChrisL has joined #audio
15:12:11 jdsmith has joined #audio
15:12:23 present+ jdsmith
15:15:32 scribenick: ChrisL
15:15:41 topic: Output devices
15:15:53 zakim, take up agendum 5
15:15:53 agendum 5. "Output Device Selection" taken up [from BillHofmann]
15:16:22 jdsmith: discussions at tpac, with media capture and streams, but lots of requirements deferred to future version
15:16:56 ... mostly input devices and streams from webcams etc, notion of a single output device, not really thinking about output channels
15:17:34 ... want it to be generalized enough to say, video output devices, returns none or the ones matching constraints on resolution, HDR, etc. needs to be generalized enough
15:17:52 ... some interest but no timelines. Since then, nothing.
15:18:22 ... Joe agitated to get things moving, sent in some PR but nothing major was changed. There is not even an output video type
15:19:13 jdsmith: commented on joe's proposal, got some input from Netflix, who care a lot about this
15:19:27 ... we don't want a proprietary solution
15:20:05 BillHofmann: they miss a lot of required capabilities
15:20:53 jdsmith: generally useful for media devices. There is a fingerprinting concern. Capture group manages that by requiring permission. Seen as a privacy thing.
15:21:15 ... can persist the permission per-domain, Edge does not though
15:22:08 ChrisL: on the web it's normal to assume there is permission to output audio (we mutter darkly about autoplay videos)
15:22:22 jdsmith: add a kind for video output
15:22:28 BillHofmann: not in charter
15:22:39 mdjp: could be in next charter
15:23:26 jdsmith: broader than audio, not clear who owns it, media taskforce, and incubator to start with
15:23:42 ... no elegant way to manage the fingerprint concern.
15:24:06 BillHofmann: can already fingerprint
15:24:16 ChrisL: fonts are a big fingerprint
15:25:41 BillHofmann: mostly there is a single default output device. maybe query that. example where you plug a media adapter into a tv, you already gave permission and don't want to see a dialog box
15:26:24 jdsmith: misuse of the api is hard, would need to give a reason for accessing devices
15:26:35 ... need to persist permissions for the domain
15:27:11 BillHofmann: iphone is very permissions oriented, location, photos etc. apps deal with it by double dialogs.
15:29:59 jdsmith: can also limit the information that is disclosed. constrain to what is needed. permissions that make sense to the user.
15:30:16 BillHofmann: people are getting used to the irritating dialogs
15:30:40 padenot: android has run-time permissions now, used to be install-only
15:31:43 BillHofmann: idea of constraints is to pick an appropriate device
15:32:10 ... (we wonder about permissions in getUserMedia)
15:32:28 jdsmith: can't get the label until given permission
15:33:27 hongchan has joined #audio
15:33:36 ChrisL: media queries just added a gamut query - sRGB, P3, 2020 basically
15:34:23 https://drafts.csswg.org/mediaqueries/#color-gamut
15:37:13 jdsmith: we have a hardware pipeline that is more robust, we want to expose that, feed it with different characteristics, let apps choose between them
15:37:55 BillHofmann: you want codec support too? decide which source stream to select.
15:38:10 ... could do with an API or with MQ
15:39:00 https://developer.mozilla.org/en-US/docs/Web/API/Window/matchMedia
15:39:16 mql = window.matchMedia(mediaQueryString)
15:40:10 mdjp: multichannel audio devices, want to see if there is better than stereo
15:40:24 BillHofmann: media queries for audio?
15:40:38 padenot: yes but only FF supports them
15:41:07 ... can have multiple source tags for small or large screens. no audio in the MQ
15:42:35 jdsmith: who would review this, would we need a TAG opinion?
15:43:13 mdjp: constraints means you get a device meeting the constraints. not like device enumeration.
15:43:22 BillHofmann: can't get details
15:43:44 jdsmith: can enumerate audio devices, but not described in any way
15:44:17 BillHofmann: so constraints are channels, sample rate. what is the minimal list
15:44:25 mdjp: output buffer size
15:44:51 BillHofmann: presumably you get that after permission. then you get more detail
15:45:45 mdjp: mostly this is okay.
15:45:59 BillHofmann: default likely to be HDMI out
15:46:32 mdjp: permission side is for a more specialized usage. sound card details, etc
15:47:34 (discussion on switching video output devices, auto switching, hot-plug)
15:48:01 BillHofmann: do you know sample rate?
15:48:12 hongchan: yes
15:48:26 padenot: platform specifics on the internals of that
15:49:32 BillHofmann: don't know the color capabilities of the output device, gamut, dynamic range, bit depth
15:51:33 (Bill shows some webrtc option dialogs)
15:56:55 https://webrtc.github.io/samples/src/content/devices/input-output/
16:03:20 (we play with this for a bit and try adding new devices, etc)
16:05:26 hongchan has joined #audio
16:07:48 BillHofmann: so who does it, is it in rechartering, is it a web incubator CG, where does it get done?
16:08:23 ... we can't wait on getUserMedia, they are specific to the webrtc use case only
16:09:32 mdjp: would audio output be better as a subset group or a separate group. it's a lot of duplication. needs to tie to getUserMedia in some way
16:09:41 jdsmith: would add output devices
16:10:58 jdsmith: it is broader than audio only, so best not in our charter
16:11:03 BillHofmann: agreed
16:11:16 ... so we develop use cases first? and where to send them
16:13:02 bill (finds mailing list for the audio output spec is media capture)
16:13:26 mediacapture-output
16:14:44 BillHofmann: seems like the mediacapture output spec is the right one for audio output
16:15:38 seems semi-abandoned, has no video output features.
16:17:14 BillHofmann: so yes we need something, constraints api a good target, expose a limited set of features for audio and video.
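[Editor's illustration, not part of the meeting log: what exists today via mediacapture-main and mediacapture-output - enumerate devices and route an HTMLMediaElement to a chosen audio output with setSinkId(). There is no constraint-based query of the kind discussed above, and no equivalent hook for an AudioContext destination; the element selector and device index below are placeholders.]

  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    var outputs = devices.filter(function (d) { return d.kind === 'audiooutput'; });
    outputs.forEach(function (d) {
      // Labels are empty until the page has been granted device permission.
      console.log(d.deviceId, d.label || '(label withheld)');
    });

    var audio = document.querySelector('audio');   // placeholder element
    if (audio && outputs.length > 1 && typeof audio.setSinkId === 'function') {
      // Route playback to the second enumerated output device.
      return audio.setSinkId(outputs[1].deviceId);
    }
  }).catch(function (err) {
    console.error('Output device selection failed:', err);
  });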
16:18:09 jdsmith: fingerprinting is a bit voodoish, so if we have a use case that demands data, what is the minimal mitigation we propose?
16:18:23 ... would like to see incubator work on this, by this summer
16:18:53 mdjp: separate group with some audio wg involvement.
16:19:27 BillHofmann: incubator is a good place to refactor this so it meets our requirements
16:19:49 jdsmith: want folks from all the relevant groups
16:20:06 ... api is fairly thought through, can be implemented and evaluated rapidly
16:20:45 mdjp: can steer the direction so it meets our audio needs
16:22:57 action: joe to liaise with the media capture and streams to discuss options for adding audio and video device constraint and enumeration, in context of existing group or the incubator group
16:22:57 Created ACTION-128 - Liaise with the media capture and streams to discuss options for adding audio and video device constraint and enumeration, in context of existing group or the incubator group [on Joe Berkovitz - due 2016-04-15].
16:24:12 action ChrisL to figure out who reviews the privacy/fingerprinting issue
16:24:12 Created ACTION-129 - Figure out who reviews the privacy/fingerprinting issue [on Chris Lilley - due 2016-04-15].
16:30:39 hongchan has joined #audio
16:46:13 ChrisL_ has joined #audio
17:51:38 BillHofmann has joined #audio
17:51:40 rtoyg_m has joined #audio
17:51:49 hongchan has joined #audio
17:52:48 kawai has joined #audio
18:06:55 cwilso: let's keep the implementation status in the readme.md
18:07:29 https://github.com/WebAudio/web-midi-api/issues/148
18:07:58 cwilso: no way to know the inputs and outputs are on the same device. so separate lists, not ordered or encapsulated by hardware device
18:08:07 ... request is to encapsulate them like that
18:08:46 ... challenge is to create individual hardware devices for everything in some cases, depending what the OS exposes to us
18:10:25 ChrisL_: if it fails, no worse off than now
18:10:50 cwilso: prefer to not change the midiAccess and midiPort except a midi interface object on midiport
18:11:16 ChrisL_: so no backwards compat issue
18:11:18 cwilso: no
18:12:18 mdjp: sounds useful
18:12:22 ChrisL_: +1
18:13:02 https://github.com/WebAudio/web-midi-api/issues/158
18:13:33 cwilso: we don't say how much you can send at one go. large buffers help.
18:14:14 ... if pushing a lot of data, esp on DIN midi, want to see if the buffer is emptying fast enough. ask on remaining buffer space
18:14:47 cwilso: right way to do this is writable streams, apparently
18:16:18 cwilso: tied to ES6 and ES7 features
18:17:38 cwilso: can connect streams together but mostly not a huge win. better to return a promise and expose the current buffer size
18:18:01 ... domenic worries it is duplicating streams, but only a very small part and we don't need the rest
18:18:26 cwilso: so we need to expose backpressure, to give confidence that large transfers don't fail
18:19:37 cwilso: think we should do the minimum viable
18:20:06 cwilso: one area streams does help, can pipe between ins and outs.
18:20:14 jdsmith has joined #audio
18:20:19 ChrisL_: but midi thru is not especially hard anyway
18:20:21 present+ jdsmith
18:20:29 cwilso: agreed, except for real time messages
18:21:34 cwilso: moving to promise based made midi a bleeding edge thing, has worked well though. worried about streams because that is difficult to understand with async and ES7 features
18:22:51 kawai: agree with having both a send one and the stream one. it is useful but not really important, we can work around it
18:23:25 cwilso: send does not expose backpressure. but we could add it
18:24:13 ... don't think we should replicate a lot of the stream api
18:24:30 BillHofmann: not breaking makes sense, so keep send and provide the minimal change
18:24:34 cwilso: ok
18:25:21 BillHofmann: what is the minimum change?
18:25:39 cwilso: add a "send space available" call
18:25:54 ... not a "wait until this much space is available"
18:26:41 cwilso: stream implementations are uncommon as yet
18:27:01 mdjp: what about doing the minimal for v1 and re-evaluate for v2?
18:27:06 cwilso: that is one way to do it
18:27:51 ... remembering how windows api works here, is it dynamically reallocating
18:28:07 ... duplication of effort
18:29:30 ChrisL_: prefer the minimal backpressure for v1, close issue, re-open only if new data shows that is not sufficient
18:29:42 cwilso: ok, need to talk to the TAG and see what they think
18:30:10 ... WebUSB and WebBT do not expose backpressure, they have an async promise based send model
18:30:50 cwilso: rest of the issues are editorial
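[Editor's illustration, not part of the meeting log: the current MIDIOutput.send() model, plus a sketch of the minimal "send space available" idea under discussion. The pendingBytes attribute below is hypothetical - it does not exist in any shipped Web MIDI implementation.]

  navigator.requestMIDIAccess().then(function (access) {
    var output = access.outputs.values().next().value;
    if (!output) return;

    // Today: queue a message and hope the underlying buffer keeps up.
    output.send([0x90, 60, 0x7f]);   // note on, middle C, full velocity

    // Hypothetical backpressure check (NOT a real attribute): before queuing a
    // large SysEx transfer on a slow DIN connection, ask how much data is
    // still waiting in the output buffer.
    // if (output.pendingBytes !== undefined && output.pendingBytes < 1024) {
    //   output.send(largeSysexChunk);
    // }
  });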
18:31:15 https://github.com/WebAudio/web-audio-api/issues/251
18:31:17 mdjp: ok, great. now on to the webaudio issue
18:32:12 (cwilso reads)
18:33:35 cwilso: think that joe did a subclassing test which failed because of #250. Have to go through create methods on audioContext. once fixed, subclassing works as well
18:34:10 ... can we compose an audio node and hook up a bunch of other things, and then call connect onto an arbitrary point in the graph.
18:35:19 cwilso: can't do constructible audio params
18:35:38 ... think this does work today, create node and override .connect
18:36:40 (cwilso tests)
18:36:47 cwilso: yes, you can override it
18:37:22 ... can't expose as audio params
18:39:38 cwilso: being able to new an audio param and have it run
18:40:51 cwilso: closed it because we put it on audio worker
18:41:02 mdjp: so it can stay closed
18:41:12 #134
18:42:22 #367 is postponed to v.2
18:43:30 hongchan: so in terms of subclassing we don't need to change anything
18:43:41 cwilso: right, but composition does not, yet
18:46:01 cwilso: audio param is still magic. not newable. marshalled across to worker behind the scenes
18:46:13 BillHofmann: would #250 solve that?
18:46:32 padenot: it could. yesterday we talked of making an audio param from inside a worklet
18:46:52 Chair: Matt
18:47:13 Meeting: Audio WG f2f2, Atlanta, Day 2
18:47:38 cwilso: or we could say #251 and say 367 back
18:48:22 cwilso: don't care exactly because we need to keep going on constructible params, whether it hits v.1 or not
18:49:41 padenot: we should get constructible audio nodes first, then constructible audio params, look at this issue.
18:49:55 rtoyg: agree
18:51:32 jdsmith: general question - there are like 80 issues ready for editing, this one is useful but if we wanted to get these down to CR in summer, is this where we would be working and push to v.next?
18:51:58 (Sorry, connection froze)
18:52:45 jdsmith: maybe we need a v1-nonblocking flag
18:52:49 I think this is pretty straightforward to spec; I think the question of how hard it is to implement might factor in there, but it is one of those fundamental, Extensible-Web-Layering type issues.
18:53:13 hah
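[Editor's illustration, not part of the meeting log: the "override .connect" composition workaround cwilso describes - wrap a subgraph in a plain object whose connect()/disconnect() delegate to its last node, and re-expose inner AudioParams as properties. The composite is not itself an AudioNode, so other nodes cannot connect() into it and its params cannot be remapped, which is the gap the constructible/connectable discussion is about. Node choice and values are placeholders.]

  function createFeedbackDelay(context, delayTime) {
    var input = context.createGain();
    var delay = context.createDelay(1.0);
    var feedback = context.createGain();

    delay.delayTime.value = delayTime;
    feedback.gain.value = 0.4;

    input.connect(delay);
    delay.connect(feedback);
    feedback.connect(delay);                  // feedback loop

    return {
      input: input,                           // connect sources to composite.input
      delayTime: delay.delayTime,             // inner AudioParam, re-exposed
      connect: function (dest) { delay.connect(dest); },
      disconnect: function () { delay.disconnect(); }
    };
  }

  // Usage: source.connect(fx.input); fx.connect(context.destination);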
18:54:23 hongchan1 has joined #audio
18:55:27 BillHofmann: Jerry asked if we should stop adding new stuff and deal with the easier v.1 issues, pushing others to v.2 or v.1.nonblocking, i.e. desired but at risk
18:56:07 cwilso: it is an extensible web layering issue
18:56:17 ... avoiding magic
18:56:50 ... without this we can't extend
18:57:42 ChrisL_: agree this is architectural
18:59:36 padenot: if we really want composite nodes, we need audio param remapping
19:00:39 cwilso: useful when making a unity node, or a looping single sample; other one is four params exposed on a composite node
19:03:11 padenot: what about a composite audio param that connects to two audio params
19:03:21 cwilso: right, you need to be able to do that
19:04:22 cwilso: title needs to be "constructible and connectable"
19:04:57 mdjp: so, are we in v.1 or v.next?
19:05:20 padenot: we keep adding stuff to v.1. Reluctant to but this is important
19:05:41 padenot: review backlog next week and triage
19:06:01 rtoyg: fair amount of design work in this one, too
19:07:04 rtoyg: not clear how it is supposed to work
19:07:37 mdjp: put a time limit on investigating this. Likely a v.1 requirement.
19:08:55 mdjp: let's discuss in two weeks (no call next week)
19:09:09 mdjp: agenda is pretty much cleared now
19:09:47 rrsagent, draft minutes
19:09:47 I have made the request to generate http://www.w3.org/2016/04/08-audio-minutes.html ChrisL_
19:10:05 rrsagent, make logs public
19:10:18 rrsagent, draft minutes
19:10:18 I have made the request to generate http://www.w3.org/2016/04/08-audio-minutes.html ChrisL_
19:10:41 Chair: Matt
19:11:15 Present: rtoyg hongchan BillHofmann jdsmith ChrisL_ padenot mdjp kawai
19:11:38 regrets: cwilso, joe
19:11:43 rrsagent, draft minutes
19:11:43 I have made the request to generate http://www.w3.org/2016/04/08-audio-minutes.html ChrisL_
19:12:16 Meeting: Audio WG f2f2, Atlanta, Day 2
19:12:19 rrsagent, draft minutes
19:12:19 I have made the request to generate http://www.w3.org/2016/04/08-audio-minutes.html ChrisL_
19:12:40 hongchan has joined #audio
19:13:47 Topic: wrap-up and next steps
19:14:31 rtoyg: we have 13 open pull requests
19:14:41 padenot: will look at those next week
19:16:09 https://github.com/WebAudio/web-audio-api/pulls
19:17:03 [web-audio-api] rtoy closed pull request #770: Fix #769 by correcting the formulas (gh-pages...769-fix-lowpass-highpass-formulas) https://github.com/WebAudio/web-audio-api/pull/770
19:17:52 meeting adjourned
19:18:04 rrsagent, draft minutes
19:18:04 I have made the request to generate http://www.w3.org/2016/04/08-audio-minutes.html ChrisL_
19:51:28 BillHofmann has joined #audio
20:48:57 kawai has joined #audio