16:08:56 Meeting: Audio WG f2f meeting, TPAC day two
16:09:03 Chair: mdjp, joe
16:09:06 Scribe: olivier
16:09:09 ScribeNick: olivier
16:09:20 Topic: Review of day 1, welcome observers
16:09:28 Editorial issues marked TPAC + Editorial but NOT ready for editing -> https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+label%3A%22V1+(TPAC+2014)%22+label%3AEditorial%2FDocumentation+-label%3A%22Ready+for+Editing%22+
16:09:42 mdjp: higher-level agenda today than yesterday's bug squashing
16:09:58 ... testing, an opportunity for implementers to give an update
16:10:04 ... discussion round on getUserMedia
16:10:29 ... planning for the transition to v1 of the spec - maybe a broader discussion around v1 and the living WD
16:11:00 mdjp: also Michael Good - discussion on music notation
16:11:23 Topic: Testing
16:11:47 mdjp: the importance of testing comes from our need to prove interoperable implementations to reach Recommendation status
16:12:41 mdjp: valuable to also look at testing done by implementers
16:12:59 ... so we can try to build upon work already done by implementers for their own testing needs
16:13:34 joe: I have some experience testing non-web audio frameworks
16:13:51 ... useful to have something you can run and see the results
16:14:00 Chris Lowis - blog post on testing
16:14:01 http://blog.chrislowis.co.uk/2014/04/30/testing-web-audio.html
16:14:03 ... you need approximate yet strict criteria
16:15:16 joe: the type of testing I found useful gave the ability to record a "baseline", human-validated capture
16:15:27 ... sometimes validated by looking at a waveform validator
16:15:49 ... but have not built a test suite with the scope we are looking at here
16:16:16 mdjp: posted on IRC http://blog.chrislowis.co.uk/2014/04/30/testing-web-audio.html -> a blog post by Chris Lowis on writing tests for the Web Audio API
16:16:39 ... looking at the comments there does not seem to be a lot of activity, but someone created a test for the WaveShaperNode in response to the post
16:17:04 mdjp: use a reference output and compare against it
16:17:16 ... trickier than just comparing two results
16:17:54 mdjp: current approach is to get the community to write tests, with the WG making sure the tests move along with the spec
16:18:09 ... very much about building our own test suite
16:18:27 ... question of whether we (re)use work already done by implementers
16:19:40 padenot: in our test suite there are spots that are not tested
16:19:49 ... in FF, Chrome, or the W3C test suite for that matter
16:19:50 test suite - https://github.com/w3c/web-platform-tests
16:20:30 joe: is there anything in FF that isn't in the W3C test suite?
16:20:33 https://github.com/w3c/web-platform-tests/tree/master/webaudio#
16:20:47 padenot: yes, there are a LOT of tests in our test suite, and they run all the time
16:21:22 joe: sounds like what you have is much more extensive than what is in the W3C test suite
16:21:34 joe: lots of tests in the W3C repo fail - even IDL tests
16:21:40 padenot: probably not updated
16:21:59 padenot: 150 test files, close to 1000 tests
16:22:45 olivier: the question is how big it would need to be for full coverage
16:23:02 padenot: when I look at code coverage what we have is all right - not 100% but close
16:23:24 joe: if we get good code coverage it's very likely we cover the spec, but not guaranteed
16:26:47 olivier: W3C has a long history of testing; these days the bar is much higher, with many tests per feature, combinatorial cases, etc.
16:27:24 BillHofmann: (speaking for myself, not Dolby) the implication is that an audit goes on
16:27:30 ... painful but perhaps needed
16:27:50 ... maybe for v2 we could try to write the tests before / as we write the spec
16:30:00 joe: hard to just cook up the expected result - you need some implementation to write the test
16:30:09 ... need to build bridges from two sides
16:30:20 ... on the one side, existing tests from implementers
16:30:28 ... on the other, the spec and some tests
16:30:36 ... if we go through the spec and annotate it
16:32:54 olivier: can also extract a lot of testable assertions from MUST, SHOULD keywords in the spec
16:33:12 joe: might be best to have people other than implementers doing that analysis of the spec
16:33:19 ... would seem fair too
16:33:32 mdjp: sensible way forward
16:33:40 ... how do we actually make it happen
16:34:11 ... assumption that ChrisLo may not have enough time to take it on and lead the effort at this point
16:34:30 ... suggest we need someone to coordinate the testing work
16:34:53 mdjp: [call for volunteer]
16:35:26 joe: permanent benefit is exemption from scribing. And biscuits
16:37:04 olivier: not sure the non-implementer constraint is helping
16:37:21 joe: agree - just think it would be better if non-implementers were involved in it
16:38:13 mdjp: another action on us to communicate with other groups, get a better understanding of how they do it
16:38:20 ... SVG mentioned
16:38:38 ... also important not to throw away work done so far - can it be used as a basis?
16:40:45 olivier: some groups have successfully engaged with the community in test-writing days
16:41:12 joe: might do so at a music conference in Boston soon
16:41:33 padenot: also - the upcoming Web Audio Conference
16:41:40 ... we can invite people to the Mozilla space
16:42:32 Jerry: need to have prior understanding of the holes in your coverage
16:43:21 joe: we have a repository for that https://github.com/w3c/web-platform-tests/tree/master/webaudio#
16:43:46 BillHofmann: do we have the tooling we need to do perceptual diffs?
16:44:14 padenot: usually enough to have a reference buffer and then compare
16:45:49 olivier: first slice is whether the interfaces work, then all the things that can be compared with a reference buffer; for the rest we can use crowdsourced testing like CSS has been doing
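[Editor's note: a minimal sketch of the reference-buffer approach described above - render a graph with an OfflineAudioContext and compare the result, sample by sample, against a human-validated reference within a tolerance. The graph under test, the tolerance, and the Promise-returning startRendering() are illustrative assumptions, not part of the minutes.]

    // Render a small graph offline and compare it to a pre-validated
    // reference AudioBuffer, returning the maximum per-sample error.
    function runReferenceTest(referenceBuffer, tolerance) {
      var ctx = new OfflineAudioContext(
          1, referenceBuffer.length, referenceBuffer.sampleRate);

      // Graph under test (illustrative): a plain oscillator to the destination.
      var osc = ctx.createOscillator();
      osc.frequency.value = 440;
      osc.connect(ctx.destination);
      osc.start(0);

      return ctx.startRendering().then(function (rendered) {
        var expected = referenceBuffer.getChannelData(0);
        var actual = rendered.getChannelData(0);
        var maxError = 0;
        for (var i = 0; i < expected.length; i++) {
          maxError = Math.max(maxError, Math.abs(actual[i] - expected[i]));
        }
        return { pass: maxError <= tolerance, maxError: maxError };
      });
    }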
16:47:07 joe: wondering how useful AudioWorker would be for the testing?
16:47:37 padenot: all our tests use OfflineAudioContext and SPN (ScriptProcessorNode)
16:48:00 mdjp: would be good to start using it
16:48:29 mdjp: would be good to get Chris Lowis on a call in the near future to discuss anything we missed, understand his approach
16:48:43 ... and then identify the person/people to lead the effort
16:48:57 joe: is anyone aware of a need for the WG tests to belong to this framework?
16:49:50 olivier: not necessary - more a case of "here's a system"
16:50:03 joe: better to start with whichever suite has better coverage today?
16:51:10 Topic: Browser vendor feedback - current implementation status, future plans, issues and blockers
16:51:37 Agenda: https://www.w3.org/2011/audio/wiki/F2F_Oct_2014
16:51:56 mdjp: opening the floor to implementers
16:52:20 padenot: no big problems so far - we have pretty much everything implemented
16:52:39 joe: anything in the way of future plans?
16:52:59 padenot: AudioWorker is going to be a bit complicated
16:53:52 hongchan: resume/suspend in the works, seems hard
16:54:02 ... rtoyg has started recently
16:54:10 padenot: yes, that will be hard
16:54:26 joe: Jerry, are you getting what you need from the group?
16:54:47 jdsmith: see a lot of open issues we can engage in; getting awareness of where the spec is
16:54:58 joe: any other particular areas of concern?
16:55:03 jdsmith: not right now
16:55:34 jdsmith: we are engaged in implementing, currently assessing the areas where we might run into issues
16:55:40 ... encouraging so far
16:55:53 ... haven't yet looked at ambiguities in the spec
16:56:05 ... that would be what I would prioritise - no specific examples yet
16:56:29 ... there are a lot of bugs/issues at the moment, so it takes time to get one's head around what is highest priority
16:56:42 mdjp: good to continue the current review?
16:56:50 jdsmith: yes
16:57:08 joe: the identification of what we discussed was the chairs' take
16:57:43 ... we were trying to tackle anything which could hinder implementers looking at an older version of the spec
16:58:27 mdjp: good to continue feeding back issues; the group is keen to hear what may hinder implementations
16:59:18 [break]
17:05:20 Topic: Bugzilla
17:06:01 discussion on how to close the remaining bugs in Bugzilla, as they tend to make people think that tracker is still open
17:14:26 closing bugs for the Web Audio API on Bugzilla - hoping it will not create a deluge of emails this time...
17:18:49 all done - looks like the email address most likely to have received a lot of email was Chris Rogers', and it has been inactive for a while
17:40:11 mdjp: moving on to minor/editorial issues
17:40:46 https://github.com/WebAudio/web-audio-api/issues/336
17:41:17 TPAC RESOLUTION: No.
17:41:39 https://github.com/WebAudio/web-audio-api/issues/328
17:42:24 ray: the spec text doesn't say clearly what happens when distance > maxDistance.
17:43:33 padenot: actually the spec has the formula
17:44:10 joe: http://webaudio.github.io/web-audio-api/#idl-def-DistanceModelType
17:44:18 joe: the spec has the formula
17:45:31 TPAC RESOLUTION: Clarify the formula to use the minimum of distance and maxDistance.
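[Editor's note: a sketch of the clamping behaviour agreed in the resolution above, using the linear distance model as the example. The exact gain formula is defined in the spec text linked above; the point illustrated here is only that distance is clamped into [refDistance, maxDistance] before the model is applied.]

    // Illustrative: values beyond maxDistance behave as if equal to maxDistance,
    // and values closer than refDistance behave as if equal to refDistance.
    function linearDistanceGain(distance, refDistance, maxDistance, rolloffFactor) {
      var d = Math.max(refDistance, Math.min(distance, maxDistance));
      return 1 - rolloffFactor * (d - refDistance) / (maxDistance - refDistance);
    }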
17:46:02 joe: this issue is the same thing - https://github.com/WebAudio/web-audio-api/issues/326
17:46:34 TPAC RESOLUTION: Resolve formulas as recommended
17:47:29 https://github.com/WebAudio/web-audio-api/issues/325
17:48:02 joe: we're not retaining Doppler anymore. Closing.
17:48:41 Issue 324 - https://github.com/WebAudio/web-audio-api/issues/324
17:51:18 rtoyg: when the panning passes through the origin (zero) it glitches.
17:51:30 hongchan: this is only for HRTF, right?
17:51:43 rtoyg: don't recall right now; have to check.
17:52:03 mdjp: aren't we getting rid of the PannerNode?
17:52:22 padenot: no, we will keep the old one.
17:52:31 TPAC RESOLUTION: Modify the formula to use a continuous value as it moves through the listener location.
17:52:51 https://github.com/WebAudio/web-audio-api/issues/318
17:53:53 padenot: this is tricky because we need to jump back and forth between two threads.
17:54:28 joe: what about the 'intrinsic' value and 'computed' one?
17:54:42 padenot: what Chrome is doing is a bit nicer, but...
17:55:16 padenot: this requires more talking and questions.
17:55:18 mdjp: so is this fundamental?
17:55:42 padenot: yes, and the implementations in Chrome and Firefox are also a bit different.
17:56:22 mdjp: no resolution right now. Moving on.
17:56:35 https://github.com/WebAudio/web-audio-api/issues/314
17:57:38 rtoyg: createBuffer is limited in terms of sample rates.
17:57:54 joe: why do we have these limitations? They should be removed.
17:58:56 rtoyg: we should specify the minimum requirements.
18:00:18 In Chrome it supports 3k ~ 192k, to make it compatible with Media.
18:00:43 padenot: at least 192k?
18:01:07 TPAC RESOLUTION: specify the supported rate range as 8 kHz - 192 kHz
18:01:31 NOTE: createBuffer also requires an update to reflect this change.
18:01:51 https://github.com/WebAudio/web-audio-api/issues/307
18:02:02 mdjp: this is quite straightforward.
18:02:16 TPAC RESOLUTION: Agreed.
18:02:26 https://github.com/WebAudio/web-audio-api/issues/305
18:03:29 TPAC RESOLUTION: should throw a NOT_SUPPORTED_ERR exception.
18:03:55 https://github.com/WebAudio/web-audio-api/issues/287
18:04:17 padenot: we have it in the spec now. Closing.
18:04:52 https://github.com/WebAudio/web-audio-api/issues/281
18:05:26 mdjp: we continue to play to the end and not loop.
18:07:27 ...: I'll leave it as Clarification.
18:07:34 https://github.com/WebAudio/web-audio-api/issues/269
18:07:50 rtoyg: this is fixed. Closing.
18:07:58 https://github.com/WebAudio/web-audio-api/issues/257
18:08:33 padenot: we can't close this; needs more discussion.
18:09:24 https://github.com/WebAudio/web-audio-api/issues/241
18:10:27 padenot: this is not relevant for AudioWorker. Removing.
18:10:46 https://github.com/WebAudio/web-audio-api/issues/135
18:12:41 padenot: this is about the Doppler effect, so not relevant anymore. Closing.
18:13:08 https://github.com/WebAudio/web-audio-api/issues/131
18:14:24 rtoyg: we always do the interpolation linearly.
18:15:05 rtoyg: the user can set whatever values they want to draw the curve.
18:15:47 rtoyg: if you don't want to do it linearly, what do you want to do?
18:16:27 ...: that itself opens up another discussion.
18:17:06 padenot: we can spec it as 'linear interpolation' and reopen if people want another kind of interpolation.
18:17:12 TPAC RESOLUTION: Spec to clarify linear interpolation. If another kind of interpolation is wanted, a feature request is required.
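[Editor's note: a sketch of the linear interpolation the resolution on issue 131 refers to - how a value curve might be read at an arbitrary time t within [startTime, startTime + duration]. The function name and signature are illustrative, not from the spec.]

    // curve: Float32Array of N points spread evenly over `duration`.
    // Returns the linearly interpolated value at time t.
    function curveValueAt(curve, startTime, duration, t) {
      var pos = ((t - startTime) / duration) * (curve.length - 1);
      if (pos <= 0) return curve[0];
      if (pos >= curve.length - 1) return curve[curve.length - 1];
      var i = Math.floor(pos);
      var frac = pos - i;
      return curve[i] * (1 - frac) + curve[i + 1] * frac;
    }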
18:17:38 https://github.com/WebAudio/web-audio-api/issues/129
18:18:03 mdjp: sounds similar to the previous one.
18:18:14 padenot: not relevant anymore since we decided to drop Doppler.
18:18:57 RESOLUTION: Arbitrary units are appropriate as Doppler is being removed.
18:19:31 https://github.com/WebAudio/web-audio-api/issues/127
18:20:42 padenot: this is kinda closed.
18:20:49 mdjp: closing.
18:21:00 https://github.com/WebAudio/web-audio-api/issues/128
18:25:03 hongchan: we didn't reach a conclusion on this.
18:26:20 mdjp: I'll make a note of this so we can come back to it later.
18:26:54 https://github.com/WebAudio/web-audio-api/issues/125
18:29:48 RESOLUTION: Do not include documentation on convolution in the spec.
18:30:59 https://github.com/WebAudio/web-audio-api/issues/121
18:31:11 rtoyg: don't we use double nowadays?
18:31:41 padenot: all time values are double and the samples should be float?
18:32:58 hongchan: ES uses double anyway; is it meaningful to use float in the spec?
18:33:43 rtoyg: we fixed some issues in our internal implementation about float/double misuse.
18:34:09 mdjp: Paul, is your implementation consistent?
18:34:30 padenot: we're going to review the variables.
18:36:07 rtoyg: if you want to use double in the spec, we don't have any problem.
18:36:07 padenot: yeah
18:36:39 rtoyg: when we convert double to float, we lose some precision.
18:37:34 rtoyg: Paul, you use float internally, right?
18:37:49 padenot: yes
18:38:41 rtoyg: then we should just keep both float and double in the spec, to reflect the internal difference in the implementations.
18:39:00 mdjp: do we have to discuss this further?
18:39:09 rtoyg: I am happy with leaving it as float.
18:39:14 padenot: yeah.
18:40:02 RESOLUTION: Specify float for all values except for time. Current implementations use floats internally, so specifying doubles would look incorrect when values are inspected.
18:40:22 https://github.com/WebAudio/web-audio-api/issues/118
18:42:04 padenot: sometimes creative applications require extreme resampling.
18:43:37 ChrisL: I am also arguing we want to have other types of resampling.
18:44:14 mdjp: are we adding a type attribute to the spec?
18:45:33 mdjp: can we put this as 'not quite an editorial issue' and come back to it later?
18:45:40 https://github.com/WebAudio/web-audio-api/issues/111
18:46:09 padenot: we started to explore this, but it is quite complex.
18:47:03 mdjp: we could start adopting a CPU gauge for nodes.
18:48:08 padenot: voice stealing or CPU-saving features when the CPU is overloaded.
18:48:37 mdjp: we can close this one.
18:48:55 https://github.com/WebAudio/web-audio-api/issues/105
18:50:52 ChrisL: we can just output silence from the oscillator when there is no periodic wave defined.
18:52:36 padenot: this might be a bigger issue - what do we do when an invalid value is set?
18:53:50 mdjp: Closing, and raising a wider issue around spec-wide behavior when setting enums to invalid values.
19:03:28 joe: we're moving on to getUserMedia with Harald.
19:09:08 Scribenick: hongchan
19:10:40 Philippe asks - no prompt to play to the default output device - do we want to have a different behavior on non-default devices? (in the context of access controls)
19:10:56 Scribenick: BillHofmann
19:12:00 Harald notes that headsets have to work, but they have an association between input and output (vs headphones only) - the heuristic is that if you've granted input access, output access is implicitly granted for that use case
19:12:56 Joe: should the issue be deferred to UAs?
19:13:22 Harald: would be good to have good non-normative recommendations!
19:14:10 padenot: notes that default audio devices are handled completely differently across devices and OSes.
19:16:02 BillHofmann: concern for non-traditional UAs (like digital media adapters)
19:16:20 padenot: if from a secure origin, you can persist the choice, at least.
19:18:00 "user agents MAY WISH TO allow users to give broad permissions for device access" RFC 6919
19:18:12 Philippe: want to be able to grant (for instance) generic permission to connect audio devices
19:20:47 Joe: if an existing app that plays out to the default device then adds device enumeration - picking the same device shouldn't cause a prompt.
19:22:28 ChrisL: the default playout can change based on context (e.g., speakers get muted when you go to a meeting - headphones plugged in - speakers shouldn't be accessed without a prompt)
19:23:23 general discussion on fingerprinting
19:24:17 Harald: certain information doesn't require authorization - you can enumerate and then get more things
19:24:36 Philippe: permission for enumeration should also allow access (input/output)
19:26:22 Harald: note that the UI may actually require information to build *before* permissions (e.g., remove camera options)
19:27:51 Philippe: perhaps getDevices is parameterized
19:29:46 BillHofmann: what about the expectation of consistent behavior (action when you expect it)?
19:30:28 Philippe: the initial enumeration would be based on basic UI; once you want to use the camera, you have to request user auth...
19:32:19 Joe: concern - this front-loads the permission process earlier than it might otherwise be relevant - developers will just request at the beginning
19:33:37 Philippe: restates concern - only one auth
19:34:38 Harald: if we go down that path, it should be proposed at the getUserMedia/MediaCapture session; expect lots of negative feedback if we try to re-open
19:36:31 Joe: Summarizing - not assuming anything that might happen in the task force - there's a proposal that would permit enumeration of output devices - will that be transparent or opaque?
19:36:37 Harald: same as input devices
19:37:56 BillHofmann: why not expose a whole lot of info on output device characteristics?
19:39:13 Harald: more information will be forthcoming once the security issues are addressed
19:41:02 Harald: Note that things like characteristics of output devices are things that the Audio WG will be able to help with
19:42:36 padenot: passthrough of compressed streams is not relevant to the Audio WG - however, Firefox does things like that for e.g. MediaElement playback of MP3 to headphones.
19:43:25 Harald: if we want to extend output devices, it'd be appropriate to propose that as extensions afterwards
19:43:36 Harald: after v1.0
19:44:21 Joe: at what points in the lifecycle of a context can it be attached?
19:44:39 padenot: should be switchable - actually relevant in e.g. the videochat case
19:45:16 Joe: would you change the AudioContext sample rate when you switch?
19:45:33 padenot: no - you'd put a resampler at the end - too much impact on the graph otherwise.
19:48:11 (All): concerns about order of operations - do you need to know the sample rate of the output device before you can build the AudioContext?
19:48:41 padenot: some suggestion to apply constraints to the output device (e.g. sampleRate) when creating the context
19:50:32 Joe: propose a way for the audio API to understand the characteristics of devices
19:50:51 (All): again, concerns about fingerprinting
19:52:26 Philippe: proper division of labor: characteristics come from Web Audio, permission/API comes from MediaCapture
19:52:34 Harald: seems reasonable
19:53:10 Joe: Let's break for lunch!
19:53:16 (All): general hurrahs!
21:02:49 Topic: Transition to V1 of the spec. Assign and review actions.
21:04:52 mdjp: is it even appropriate to discuss a timeframe here?
21:05:56 padenot: yes - definitely, with the changes we've discussed. What is out of scope? Plugins, what else? AudioWorker is definitely needed; it allows us to work around missing nodes.
21:06:40 mdjp: suggests that AudioWorker changes the paradigm
21:07:00 padenot: the concept was always there, but ScriptProcessor was broken
21:07:34 mdjp: do native nodes become less relevant since you can implement everything in script?
21:08:11 padenot: we need both
21:10:21 cwilso: doesn't think the v1 target timeframe is the most important thing - need to be sure that we've made it possible to implement the use cases
21:11:06 cwilso: (particularly referring to VST-related issues/plugins)
21:13:19 cwilso: there are a number of other architectural issues before we can say "we're done"
21:13:59 cwilso: and yet - don't think we should take the "pure living standard" approach - need to ship something.
21:14:36 mdjp: without putting a time limit on it - what are the outstanding actions?
21:17:09 BillHofmann: (speaking for himself) - do we have the list of issues that are the real blockers for completing the use cases? (The P0s?)
21:17:51 cwilso: we need to keep prioritizing bugs
21:20:48 (All) more discussion around issues
21:21:04 mdjp: we need to validate that we've met the use cases, or document why we've dropped them.
21:22:18 mdjp: we need to step back from the process stage (CR/LC/...) and determine what steps we need to get to the point that we're ready to do it.
21:23:29 mdjp: we need to schedule out our work to that point - a matrix of use cases vs bugs + features
21:23:43 ACTION: mdjp to put together matrix
21:23:43 Created ACTION-115 - Put together matrix [on Matthew Paradis - due 2014-11-04].
21:24:27 cwilso: what is the date for CR?
21:26:34 mdjp: LC Q2 2015; CR Q4 2015
21:29:41 mdjp: important for us to be crisp and drive to clarity on use cases, and have an API we're happy with, rather than "just ship it"
21:31:23 mdjp: proposal to review use cases
21:33:31 (All) reviewing use cases http://www.w3.org/TR/webaudio-usecases/
21:35:27 mdjp: video chat - questions re speed
21:35:45 cwilso: could you spread out TBL speaking? Seems like.
21:36:42 mdjp: 3D game with music and convincing sound effects
21:38:36 cwilso: the reality here is that this is a good example of where performance is going to be really critical; there are subtleties
21:40:15 cwilso: Humble Bundle is an example of something that can help us run this down
21:40:57 padenot: the Unreal Engine 4 shooter demo can run via Emscripten - and uses PannerNode, etc.
21:41:05 mdjp: online music production tool
21:41:35 mdjp: one big issue is VST
21:42:58 mdjp: this is the major "problem use case" - and may need to be revised to bring it up to date with what we understand (e.g., VSTs)
21:43:54 observer: do we really *need* VSTs?
21:44:55 cwilso: talked to a bunch of software providers who assumed that VSTs would be supported; however we don't have a way of supporting inter-app audio communication
21:47:33 cwilso: note that, for instance, including third-party JS into e.g. an Ableton online tool raises a trust issue
21:49:51 observer: proposing that you could release without support for VSTs, for instance, as long as you could implement it at a later date.
21:50:29 mdjp: yes, that's more or less where we've come to. We need to make a decision about it - could just stick to the use case, meaning no VST support; or look at broader concerns.
21:54:57 mdjp: the action at the moment is to look at the use cases vs bugs and determine where the real problem use cases are, and what we absolutely do need.
21:55:43 Joe: I think that when we created the use cases we didn't expect to match the capabilities of all competitors; I would argue that it's time to deliver.
21:58:18 mdjp: I think we can quickly identify those few key issues and drive to a v1 spec that we can work from (it can change, but...)
21:59:26 mdjp: Next up... Chris Wilson on WebMIDI
22:00:46 cwilso: the current editor's draft of WebMIDI is at http://webaudio.github.io/web-midi-api/
22:01:17 cwilso: standards progress - I've been pushing this forward for a while; some minor edits last month
22:01:50 cwilso: 12 open issues in GitHub; only a couple of interesting ones needing resolution, the state attribute for instance
22:02:11 cwilso: need to deal with limitations on sharing of ports. InputMap and OutputMap need some reworking
22:02:36 cwilso: a couple of issues I'd like people to weigh in on, but other than that it's pretty well done.
22:02:43 cwilso: open and close are still issues
22:02:58 cwilso: hotplugging is critically important
22:03:50 cwilso: Mozilla is working on WebMIDI, but only one person (a labor of love)
22:04:07 cwilso: before the end of 2014 Chrome will send an intent-to-ship email to the Blink list
22:04:17 cwilso: right now users have to enable an experimental flag in Chrome
22:04:27 cwilso: Chrome for iOS is not really Chrome, so MIDI won't be there
22:04:44 cwilso: but... it could work...
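[Editor's note: a minimal sketch of the WebMIDI usage under discussion - request access, enumerate the output ports, and send a note-on - written against the map-like inputs/outputs of the editor's draft linked above (behind an experimental flag in Chrome at the time). The messages sent and the error handling are illustrative.]

    // Request MIDI access, list output ports, send a note-on to the first one.
    navigator.requestMIDIAccess().then(function (access) {
      for (var output of access.outputs.values()) {
        console.log('MIDI output:', output.name);
      }
      var first = access.outputs.values().next().value;
      if (first) {
        first.send([0x90, 60, 0x7f]);  // note on, middle C, full velocity
      }
    }, function (err) {
      console.log('Web MIDI not available or permission refused:', err);
    });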
22:07:20 joe: what questions to the group would help WebMIDI move forward?
22:07:32 cwilso: there are 3 issues that I'd welcome feedback on
22:07:58 cwilso: Issue 75 (MIDI port open) - the problem is per-app exclusivity of MIDI ports
22:08:06 cwilso: the API doesn't have this concept
22:10:22 cwilso: implications of open and close with respect to state
22:11:17 joe: the issue thread is long; need to read it to respond: https://github.com/WebAudio/web-midi-api/issues/75
22:14:53 cwilso: in the API, there is no concept of open/close, but some underlying OSes require this because of exclusive access
22:16:14 cwilso: the inclination is to add explicit open/close, maintain state, and implicitly open when a port is used
22:18:37 cwilso: virtual MIDI ports (issue 45)
22:19:16 cwilso: if you want to build a soft synth in one browser tab and access it from another browser tab, you have no way of doing that today
22:19:28 cwilso: part of why I want to punt on this is that it's hard to do
22:19:50 cwilso: not just in the browser environment, but also if you want to provide a web-based synth to Ableton Live as a quasi-native app
22:20:27 cwilso: this carries the implementation burden of a virtual MIDI device driver
22:21:17 padenot: there's a demo using WebMIDI with multiple browsers communicating over WebRTC data channels
22:57:42 Topic: Music Notation
23:00:09 joe: will be the subject of a breakout session tomorrow
23:01:07 TBA - link to joe's presentation?
23:11:01 joe: rough proposal - form a CG to further consider a path forward on a new markup language drawing on past wisdom, eliminating backward-compat and IP concerns
23:13:54 [Michael Good, VP R&D, MakeMusic]
23:21:10 michael: showing how widely MusicXML is implemented, explains the desire to eventually transfer it to a standards org when mature
23:21:30 ... in order to better support the industry's transition from printed to digital sheet music
23:29:52 shepazu: a W3C CG would be great, but considering AMEI/MMA and W3C have been working together, would like some joint publication
23:30:20 ... would recommend not bringing it to W3C without the support of music-focused orgs
23:31:11 joe: a CG can be very pluralistic
23:31:20 shepazu: anybody can join
23:31:37 ... needs chairs who can curate/cultivate the community
23:32:13 ... output could be a W3C CG import (MusicXML + changes)
23:33:34 olivier: maybe more as a way to look to the future?
23:35:58 TomWhite: likelihood of transferring the community, not just the work?
23:36:35 michael: understood that transfer would involve work; evolution would not happen magically
23:36:50 joe: hopeful to get new contributors to do some of the heavy lifting
23:37:06 ... build new momentum
23:37:18 TomWhite: can the work be done by anyone?
23:37:42 shepazu: no need to be W3C Members; CG contributions are made as individuals, open to all
23:42:11 Topic: Wrapping up
23:42:45 mdjp: anything we haven't covered in the past 2 days?
23:43:48 cwilso: one issue where I may need more input
23:44:11 ... https://github.com/WebAudio/web-audio-api/issues/373
23:44:36 padenot: it's an issue because sometimes you are creating dozens of nodes per second
23:44:50 ... e.g. in gaming every sound effect would be a BufferSourceNode
23:45:09 cwilso: easier with fire & forget and the reusable nature of nodes
23:45:45 ... previously there was a (well known?) limit of 5 playing audio elements
23:45:52 ... which you needed to have in cache, decoded
23:46:17 ... however, if you want a reusable sound player
23:46:31 ... you can do that, but you will have to have a finite set of sounds playing at one time
23:46:38 ... number of voices playing
23:46:55 ... which you could track, perhaps using BufferSourceNode, but it will be messy
23:47:43 ... using CPU to clean up BufferSourceNodes is not ideal
23:48:11 joe: a minimal amount of garbage collected, no?
23:48:20 cwilso: no, I mentioned that for a particular app
23:49:00 joe: worst-case scenario - hundreds of objects a second
23:49:05 padenot: not extreme
23:49:24 ... FPS example
23:49:34 joe: GC'ing a hundred nodes a second - is it that bad?
23:49:45 cwilso: the problem is that GC is unpredictable
23:50:31 joe: just want to understand if GC causes an actual problem
23:51:08 cwilso: came from a bug report saying GC was causing problems, because they were using streaming, decoding a track and creating BufferSourceNodes and chaining them
23:51:19 ... if you stutter every 5 seconds, that's probably bad
23:52:04 joe: seems like there will be GC regardless, in this case
23:52:27 cwilso: the problem remains in a "machine-gun" use case
23:54:13 padenot: what if you could re-trigger?
23:54:30 joe: then you could not play them simultaneously
23:54:41 padenot: yes, but the developer could use an array of them
23:55:19 joe: seems the problem is not well quantified
23:56:09 padenot: could be a quick fix
23:56:34 cwilso: there is data associated with the bug report
00:02:20 cwilso: do we still have playingstate in BufferSourceNode?
00:02:23 all: no
00:02:57 joe: suggest we defer - this can be done with AudioWorker. Prefer not to do away with the fire-and-forget model too lightly
00:04:38 [discussion of whether AudioWorker would create garbage there]
00:06:23 cwilso: how long do we defer?
00:07:21 joe: we actually need to commit to leaving a few things alone for a while while we focus on "v1"
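[Editor's note: a sketch of the fire-and-forget pattern behind issue 373 above - one short-lived AudioBufferSourceNode per sound effect, created, started and left for the garbage collector. The function and parameter names are illustrative.]

    // Fire-and-forget: each trigger creates a new one-shot source node.
    // Source nodes are single-use, so the "machine-gun" case means many
    // short-lived nodes per second - the origin of the GC-pressure concern.
    function playEffect(ctx, buffer) {
      var src = ctx.createBufferSource();
      src.buffer = buffer;           // decoded AudioBuffer, shared across shots
      src.connect(ctx.destination);
      src.start();                   // no reference kept; collected after it ends
    }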
00:10:18 [discussion on the next draft to be published in /TR]
00:15:45 mdjp: focus mainly on AudioWorker for the next heartbeat publication?
00:18:28 19 December through 5 January 2015 [publication moratorium]
00:22:01 olivier: suggest the group could schedule publications according to big issues which would require significant review
00:24:50 mdjp: sounds like tagging all issues with milestones is the next action
00:29:54 [discussion on what we want to commit to by the next teleconference]
00:34:35 Topic: Next meeting
00:34:49 joe: two weeks from Thursday?
00:34:59 cwilso: 13th Nov
00:36:04 ChrisLilley: would it be OK to have a list of what can be published / what needs more work?
00:36:09 cwilso: unlikely by then
00:37:23 ... by then I will either have a detailed solution for inputs/outputs or have marked all issues
00:37:38 ... but probably not both
00:38:06 mdjp: suggest milestones first, address issues second
00:39:07 joe: time-box the milestone process so there is enough time to resolve AudioWorker and migrate to a new WD
00:39:27 ... by the moratorium
00:41:38 RESOLUTION: statement of intent - the group aims to publish a new WD with AudioWorker before the moratorium
00:42:01 [adjourned]
18:24:55 [web-audio-api] padenot opened pull request #384: Specify the filter type to be used for the BiquadFilterNode (gh-pages...gh-pages) https://github.com/WebAudio/web-audio-api/pull/384
17:03:44 [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/a03cee5ff55e413ef9ee3a2d3e6bea1af3347e74
17:03:44 web-audio-api/gh-pages a03cee5 Chris Wilson: Fixes #82.
17:42:36 [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/a595573fa8546310e39bd35b798357b4c9a62caa
17:42:36 web-audio-api/gh-pages a595573 Chris Wilson: Fixes #93.