15:32:41 logging to http://www.w3.org/2014/10/27-audio-irc
15:32:46 Date: 27 October 2014
15:33:12 Meeting: Audio Working Group face-to-face meeting
15:54:09 Agenda: https://www.w3.org/2011/audio/wiki/F2F_Oct_2014
16:01:47 Chair: joe, mdjp
16:01:50 Scribe: olivier
16:02:37 mdjp: thought it might be worth starting with a brief intro on where we are and where we are going in W3C process
16:03:32 2014 W3C Process -> http://www.w3.org/2014/Process-20140801/
16:04:33 mdjp: next step for us is to get to Candidate Recommendation
16:05:11 mdjp: that means freezing the v1 scope, resolving issues and completing editing of the WD
16:06:23 mdjp: explains next step after that - Proposed Recommendation, and how to get there
16:10:50 Topic: v1 Feature issues
16:11:05 Github repository of v1 tagged issues -> https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22V1+%28TPAC+2014%29%22
16:12:35 https://docs.google.com/spreadsheets/d/1lBnjJI7_-wVznwuvwoaylu69-S2pelsTUGqu9zl2y-M/edit?usp=sharing
16:13:24 [Jerry Smith joins, round of intros]
16:17:30 joe: want to talk about criteria for v1/v2
16:17:43 ... we should be stern with ourselves about what to change/keep at this point
16:18:38 ... large number of things we can put off; it will make us feel bad, but otherwise we would not get out the door
16:20:15 ChrisLilley: note we can also have a category of things we *think* are v1 but we're not sure
16:21:22 mdjp: looking at table at https://docs.google.com/spreadsheets/d/1lBnjJI7_-wVznwuvwoaylu69-S2pelsTUGqu9zl2y-M/edit?usp=sharing
16:22:54 joe: start with 113 - audioWorkers
16:25:32 113 -> https://github.com/WebAudio/web-audio-api/issues/113
16:26:40 cwilso: 113 is largely under control; the biggest issue at this point is in issue 1 (inputs and outputs in Audio Workers)
16:27:02 ... my goal was to be able to reimplement everything in audioworkers except the inputs and outputs
16:27:36 ... splitter and merger are a pain in that regard - they have a non-predefined number of i/o, and dynamically change number of channels
16:28:58 Issue 1 -> https://github.com/WebAudio/web-audio-api/issues/1
16:30:24 cwilso: does everyone understand inputs/outputs and channels in the spec?
16:36:31 joe: suggests change [missed]
16:37:14 cwilso: problem is that output channels can be dynamically changed
16:37:58 joe: if you leave out the dynamic bit, what would audioprocess look like?
16:38:10 cwilso: you'd need to define the number of channels somewhere
16:38:15 joe: like in the constructor
16:39:42 [Philippe Cohen from Audyx joins]
16:40:37 ChrisL: why are dynamic channels a problem?
16:40:59 cwilso: that happens in the audioworker
16:41:18 ... and the only time you can do anything is when the onaudioprocess event fires
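[Editor's note: a minimal sketch of the per-input/per-channel buffer layout under discussion, based on the AudioWorker proposal in issue 113. The onaudioprocess shape and the e.inputs/e.outputs names come from that proposal; none of this is shipped API.]

    // Hypothetical AudioWorker script (issue 113 proposal; not shipped API).
    // Per joe's suggestion, inputs/outputs are arrays with one entry per
    // connected input/output, each entry an array of Float32Array channel
    // buffers.
    onaudioprocess = function (e) {
      for (var i = 0; i < e.inputs.length; i++) {
        var input = e.inputs[i];    // one connected input
        var output = e.outputs[i];  // corresponding output
        for (var c = 0; c < input.length; c++) { // one buffer per channel
          var inBuf = input[c], outBuf = output[c];
          for (var s = 0; s < inBuf.length; s++) {
            outBuf[s] = inBuf[s];   // simple pass-through
          }
        }
      }
    };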
16:42:47 cwilso: every connection has the opportunity to cause this upmix/downmix
16:43:27 ... if we didn't care about dynamic channels, we're done, because we let you define channels
16:43:50 joe: what if you could pre-specify the number of inputs and channels
16:44:23 cwilso: problem is that channels are per input
16:44:35 ... so we'd end up with an array of arrays of float32 buffers
16:45:25 joe: propose we pass an array per input, and have arrays of arrays of buffers, organised by channel
16:45:35 cwilso: harder for outputs
16:45:47 ... for inputs, easy because you are handed an array of arrays
16:46:22 joe: how do native nodes deal with this?
16:47:19 cwilso: internally it only cares about the output to the destination
16:48:25 cwilso: question is 1- do we want to represent multiple connections in addition to multiple channels, and 2- do we want dynamic channels?
16:48:40 ... probably both yes, but needs to be figured out
16:48:50 ... so we can replicate splitter and merger node behaviour
16:51:01 joe: worth aiming for a fully scoped audioworker that does all that
16:51:15 ... would argue for the most complete approach
16:51:29 cwilso: assuming no need to have dynamic change to the number of inputs
16:51:35 ... all predefined
16:55:34 RESOLUTION: agreement on need for support for multiple connections at instantiation, and changes to the number of channels after instantiation
16:56:10 "This is a magical node." - Joe, referring to PannerNode.
16:56:11 Next issue - 372 - rework pannerNode https://github.com/WebAudio/web-audio-api/issues/372
16:56:39 joe: suggest deprecating it
16:57:21 cwilso: was an attempt to bind together different scenarios - both control of a mixer and a 3D game with spatialization
16:58:31 cwilso: issue 368 is about the default, currently HRTF https://github.com/WebAudio/web-audio-api/issues/368
17:00:19 ... some of it (doppler) was made when there were only buffersourcenodes
17:00:30 ... completely broken with things like a live microphone
17:00:56 ... also - none of the parameters are audioparams, they're floats
17:01:54 shepazu: is there some reason it was done this way?
17:02:20 padenot: looking at openAL - games were a big use case at that point
17:03:14 cwilso: given the above I agree to tear this node apart
17:03:31 ... we need a panner and a spatialization node, separately
17:04:00 ... plus rip out doppler completely; it can be replicated with a delaynode
17:04:46 BillHofmann: I hear a proposal for a stereo panner
17:05:00 ... and a proposal to deprecate the pannernode
17:05:12 cwilso: agree on need for stereo panner
17:05:32 ... spatialization still has value, especially as it has been implemented already
17:05:47 ... would want to re-specify, to have xyz as audioparams
17:06:22 BillHofmann: question about whether it should be the last one
17:06:32 shepazu: could be a best practice
17:07:26 cwilso: advise authors to do so - not a requirement
17:07:59 joe: hear consensus on a new stereo panner
17:08:36 ... and a spatialization feature for v2
17:09:10 joe: suggest stripping equalpower from the node
17:10:46 joe: so hear consensus to replace pannernode with a new stereo panner node + spatialization node, with audio parameters, remove doppler
17:12:59 olivier: not clear whether the group wants the spatialization to be in v1 or v2?
17:13:28 matt: unsure from our perspective. We have convolvernode
17:14:15 joe: can the new spatialization be pushed to v2? Especially since we currently have convolvernode
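[Editor's note: a sketch contrasting the two designs discussed above. The createStereoPanner factory and its pan AudioParam are the proposed, hypothetical API at this point in the meeting; setPosition is the existing PannerNode interface.]

    var ctx = new AudioContext();
    var src = ctx.createOscillator();

    // Existing PannerNode: positions are plain floats, not AudioParams,
    // so they cannot be scheduled or automated.
    var panner = ctx.createPanner();
    panner.setPosition(1, 0, 0);

    // Proposed stereo panner: pan as an a-rate AudioParam
    // (-1 = full left, +1 = full right), so it can be automated.
    var stereo = ctx.createStereoPanner(); // name hypothetical here
    stereo.pan.setValueAtTime(-1, ctx.currentTime);
    stereo.pan.linearRampToValueAtTime(1, ctx.currentTime + 2);

    src.connect(stereo);
    stereo.connect(ctx.destination);
    src.start();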
17:14:48 joe: would suggest deprecating for v1, fix it in v2
17:15:15 cwilso: unless we fix it there's no point deprecating in v1 - games would keep using it
17:16:43 ChrisL: agree, and think the new cleaned-up node should be in v1, possibly marked at risk
17:16:49 ... it sets a clear direction
17:17:31 RESOLUTION: clearly deprecate current pannernode. Add spatialization without doppler. Add a stereo panner node.
17:17:47 http://www.w3.org/TR/mediacapture-depth/
17:18:25 shepazu: depth track - do we need to take it into consideration?
17:18:30 joe: it could, in the future
17:20:28 Next - Github 359 - Architectural layering: output stream access? - https://github.com/WebAudio/web-audio-api/issues/359
17:20:59 cwilso: question is how do you get access to input and output
17:21:11 ... translates to streams
17:21:35 ... questioning the need for the Streams API
17:21:48 Streams API -> https://streams.spec.whatwg.org/
17:22:03 cwilso: it is really designed as a sink
17:22:17 ... different model to how web audio works (polling for data)
17:23:11 ... does not really solve the balance between low latency and avoiding glitching
17:23:54 joe: how does this relate to getUserMedia?
17:27:01 harald: discussion in group about output devices and how to connect a mediastream to a specific output device
17:31:08 joe: anything actionable for v1?
17:32:48 cwilso: agree it's not mature enough yet. still agree you need some access to audio
17:33:15 joe: agree it is fundamental, but acknowledge it will be a multi-group thing
17:33:35 cwilso: architectural layering seems more important than an arbitrary date
17:34:30 ... significant broken bit
17:36:23 TENTATIVE: Github 359 is fundamental but we may not resolve it for v1
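[Editor's note: for context on Github 359, a sketch of how graph input and output access works today with standard API names (vendor prefixes omitted, 2014-era callback form of getUserMedia); the issue is about whether a lower, stream-based layer should exist underneath this.]

    var ctx = new AudioContext();

    // Input: a microphone via getUserMedia.
    navigator.getUserMedia({ audio: true }, function (stream) {
      var src = ctx.createMediaStreamSource(stream);  // into the graph
      var dest = ctx.createMediaStreamDestination();  // out as a MediaStream
      src.connect(dest);
      // dest.stream can then be handed to e.g. a peer connection or recorder
    }, function (err) { console.error(err); });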
18:10:40 Next - Github 358 - Inter-app audio - https://github.com/WebAudio/web-audio-api/issues/358
18:11:10 mdjp: tricky issue - on the one hand this sounds like a case of "plugins are bad"
18:11:16 ... but there is demand from the industry
18:11:23 cwilso: 2 separate but related issues
18:11:40 ... on the one hand, massively popular, adopted plugin systems (VST, etc.)
18:11:51 ... massive investment
18:12:05 ... this is how most inter-application audio is done
18:12:29 ... people have a lot of these around, and not being able to use effects people own is a problem
18:13:08 ... separately, there's a question of whether we allow people to replicate what they do
18:13:21 ChrisL: are you talking about sandboxing?
18:13:33 joe: feels like a web-wide issue
18:14:30 cwilso: audio is a very low latency, high bandwidth connection - makes it different from other kinds of applications
18:15:11 joe: does audioworker dispense with it by allowing scripts to be loaded from other domains?
18:16:15 olivier: what's the use case for this to be in v1?
18:16:31 cwilso: relates very closely to a number of our key use cases
18:18:09 ChrisL: one key difference with other parts of the web is user acceptance of the model
18:19:08 joe: want to be careful jumping this divide. That this relates to our use cases does not necessarily oblige a v1 release to cover those present-day platforms
18:19:20 ... would be a good thing, but may not be something we MUST do
18:19:50 ... it will be controversial if we pull plugins in and make them a first-class citizen
18:20:30 cwilso: are you talking about plugging into VSTs and rack effects, or the general question of inter-app plugins?
18:21:00 joe: audioworker could be the solution to the generic question of javascript "plugins"
18:21:13 cwilso: don't know whether it actually works for it
18:22:52 BillHofmann: question of whether plugins will be native or web-based
18:23:10 ... might be worth renaming to avoid an allergic reaction to "plugins"
18:24:20 joe: my belief is that there could be a standard built upon audioworker
18:24:39 (note speaking as a matter of personal opinion, not as Dolby)
18:26:45 mdjp: seems to be consensus that plugin arch is important; question is whether v1 or v2
18:27:01 (only cwilso raises hand for preference to v1)
18:27:49 cwilso: we are at a point where we are looking back at coverage of our use cases - we might want to revisit the use cases
18:29:19 olivier: suggest splitting the two issues - multi-app behaviour and VSTs etc
18:30:23 mdjp: suggest 3 steps
18:30:50 ... 1. review use cases and implications of doing this or not
18:31:34 ... 2. if we do it, how
18:32:38 cwilso: will have to have an answer ready when we stamp something as v1
18:32:49 ... as to how this will be done
18:35:36 RESOLUTION: split GH 358 into two, start thinking about our answer
18:37:18 [discussion about v1, living standard etc]
18:37:43 shepazu: note that v1 is where patents lie
18:37:56 ... having a live editor's draft is a good idea
18:43:11 Next - Github 351 - Describe how Nodes created from different AudioContexts interact (or don't) - https://github.com/WebAudio/web-audio-api/issues/351
18:44:02 cwilso: suggest that nodes should not interact with other contexts
18:44:10 ... but buffers should work across contexts
18:44:29 ... remember - a context works at a specific sample rate
18:44:46 ... any node with a notion of a sample buffer would break if you pass it across contexts
18:45:00 ... whereas audiobuffers have sample rate built in
18:45:56 cwilso: one of the two ways of getting a buffer (decodeaudiodata) would be harder
18:46:03 ... but we have a separate issue for that
18:47:22 cwilso: proposal is to say "if you try connecting nodes across contexts, throw an exception; audiobuffers on the other hand can"
18:47:30 olivier: is there any case against this?
18:47:40 cwilso: would lose the ability to reuse a graph
18:47:45 ... without recreating it
18:48:29 RESOLVED: agree to cwilso's proposal - "if you try connecting nodes across contexts, throw an exception; audiobuffers on the other hand can"
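[Editor's note: a sketch of the cross-context behaviour just resolved; the exact exception type was left open.]

    var a = new AudioContext();
    var b = new AudioContext();

    // Nodes may not cross contexts: per the resolution, this throws.
    var gain = a.createGain();
    // gain.connect(b.destination);  // would throw an exception

    // AudioBuffers carry their own sample rate, so they can be shared.
    var buf = a.createBuffer(2, a.sampleRate, a.sampleRate); // 1 s, stereo
    var src = b.createBufferSource();
    src.buffer = buf;                // allowed across contexts
    src.connect(b.destination);
    src.start();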
18:50:03 Next - Github 12 - Need a way to determine AudioContext time of currently audible signal - https://github.com/WebAudio/web-audio-api/issues/12
18:50:47 joe: this comes from the fact that on some devices there was built-in latency at the OS level, and it was impossible to discover it
18:51:12 ... no way of asking the platform "is there any latency we should know about?"
18:53:34 ... problem with scheduling is that there is a time skew that is not discoverable
18:53:41 ChrisL: why is it not discoverable?
18:53:54 ... schedule something and measure?
18:56:15 cwilso: some of it is not measurable/not reported, e.g. a bluetooth headset
18:57:03 olivier: there is a suggestion from srikumar here - https://github.com/WebAudio/web-audio-api/issues/12#issuecomment-52006756
18:57:14 joe: similar to my suggestion
18:59:39 (group looks at proposal - some doubts about point 3)
19:06:27 joe: suggestion of new attribute of audiocontext describing the UA's best guess of the signal heard by the listener right now
19:06:39 ... in audiocontext time
19:07:14 ... it's a time in the past, different from currentTime, which is the time of the next processing block
19:07:40 additionally, I think we should expose the best guess at the time the currentTime block will play in performance.now time.
19:08:54 olivier: will need to be more precise to be testable
19:12:03 RESOLUTION: add two things - new AudioContext attribute exposing UA's best guess at real context time being heard now on output device (this will normally be a bit behind currentTime, and not quantized). Also new attribute expressing DOM timestamp corresponding to currentTime. - see https://github.com/WebAudio/web-audio-api/issues/12#issuecomment-60651781
19:14:45 Next - Github 78 - HTMLMediaElement synchronisation - https://github.com/WebAudio/web-audio-api/issues/78
19:14:53 cwilso: suggest closing as "not our problem"
19:17:06 (consensus to close, crafting close message)
19:17:28 joe: essentially duplicate of 257?
19:18:14 RESOLUTION: Close Github 78
19:18:45 Next - Github 91 - WaveTable normalization - https://github.com/WebAudio/web-audio-api/issues/91
19:23:35 ChrisL: is this the highest or the sum that is normalised to 1?
19:23:37 joe: sum
19:30:09 [discussion about normalising and band-limitation]
19:30:26 cwilso: need to look at the actual normalization algorithm
19:31:41 cwilso: suggested resolution - need a parameter to turn off normalization; also - need better explanation of periodicwave
19:34:39 RESOLUTION: add additional optional parameter to createPeriodicWave() to enable/disable normalization; better describe real and imag; document the exact normalization function - https://github.com/WebAudio/web-audio-api/issues/91#issuecomment-60655020
19:34:44 [break for lunch]
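[Editor's note: a sketch of the createPeriodicWave() change resolved just before the break. The exact parameter shape was not decided here; an options dictionary with disableNormalization is one plausible form and is assumed below.]

    var ctx = new AudioContext();
    var real = new Float32Array([0, 0]);
    var imag = new Float32Array([0, 1]);  // a single sine partial

    // New optional argument to turn normalization off (shape assumed):
    var wave = ctx.createPeriodicWave(real, imag, { disableNormalization: true });

    var osc = ctx.createOscillator();
    osc.setPeriodicWave(wave);
    osc.connect(ctx.destination);
    osc.start();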
20:38:39 Next - Need to provide hints to increase buffering for power consumption reasons - GH348 - https://github.com/WebAudio/web-audio-api/issues/348
20:39:38 padenot: in some use cases better to not run at the lowest possible latency
20:39:58 ... multiple proposals in the thread; one is to tell the context what the preferred buffer size is
20:40:03 ... another is to use the channel API
20:40:19 ... and Jer has a strawman where he kind of uses the channel API
20:40:48 ... my position would be not to explicitly pick a buffer size
20:41:00 ... the UA typically has a better idea of the appropriate buffer size
20:41:30 cwilso: major concern was conflating this with other behaviour such as pausing for a phone call
20:41:37 ... (stop the context altogether)
20:42:09 ... agree that the requested number is not necessarily what you would get
20:42:54 padenot: basically we need low latency / save battery
20:42:59 cwilso: and balance the two
20:43:04 ... and turn that dial
20:45:19 olivier: use case for the typical developer? I see how that's useful for an implementer...
20:45:40 padenot: example of an audio player with visualiser - no need for low latency there
20:48:02 joe: if this were to be implemented as a "the UA may...", would that be acceptable?
20:48:07 cwilso: right thing to do
20:48:10 padenot: agree
20:49:48 cwilso: not sure it should be at constructor level
20:50:07 padenot: tricky on some platforms if you want to make it glitch-less
20:50:29 ... worried about putting too many parameters on the constructor
20:52:13 RESOLUTION: do something similar to Jer's proposal at https://github.com/WebAudio/web-audio-api/issues/348#issuecomment-53757682 - but as a property bag options object passed to the constructor
20:52:51 cwilso: expose it, make it readonly and decide later whether we make it dynamic
20:53:45 ... expose the effects, not the whole property bag
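[Editor's note: a sketch of the direction resolved for GH348 - a hint in a constructor options bag rather than an explicit buffer size. The latencyHint name and the baseLatency attribute are assumptions here, not decisions taken in this meeting.]

    // Ask for a power-friendly trade-off; the UA picks the buffer size.
    var ctx = new AudioContext({ latencyHint: 'playback' }); // vs 'interactive'

    // Per cwilso above: expose the effect read-only, not the property bag.
    console.log(ctx.baseLatency); // attribute name hypothetical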
20:57:44 Next - two related issues
20:58:04 joe: two issues - Github 264 and 132
20:58:42 Use AudioMediaStreamTrack as source rather than ill-defined first track of MediaStream -> https://github.com/WebAudio/web-audio-api/issues/264
20:59:03 Access to individual AudioTracks -> https://github.com/WebAudio/web-audio-api/issues/132
20:59:17 joe: looking at 132 first
21:00:24 joe: would it make sense to change the API to be track-based for both
21:01:16 joe: also this comment from Chris suggesting one output per track for MediaElementSourceNode https://github.com/WebAudio/web-audio-api/issues/132#issuecomment-51366048
21:04:45 padenot: tracks are not ordered
21:07:07 joe: tracks have a kind, id and label
21:07:13 cwilso: yes, they're identifiable
21:07:41 cwilso: the problem is "we say first" and they have an unordered label list...
21:07:59 joe: we could rename and require an id
21:08:09 cwilso: take the track instead
21:08:11 joe: agree
21:09:30 cwilso: suggest we keep the same name and factory method, and add another that takes a track
21:09:59 ... which would be unambiguous
21:11:44 cwilso: seems to be agreement on adding the interface that takes a track
21:11:55 ... question is what we do with the old one
21:12:04 ... and essentially deciding "what is the first track"
21:12:22 ... first id in alphabetical order?
21:14:03 ChrisL: justification for keeping the "first track" interface?
21:14:28 cwilso: works today without people having to go read another spec and pick a track - especially since most of the time there is only one track
21:15:20 cwilso: we could clearly explain not to use the old system if there may be more than one track
21:16:38 ChrisL: what if we rickrolled them if they don't specify the track?
21:17:32 Harald: throw an exception if trying to use this method when there is more than one track
21:18:06 joe: at least it looks like mediaElement and MediaStream are congruent in their treatment of tracks
21:18:14 ... (as far as we are concerned)
21:19:49 RESOLUTION: keep the same node, keep the same factory method but add a second signature that takes a track - also define "first"
21:20:20 cwilso: "it doesn't need to make sense, it just needs to be definitive"
21:22:52 Next - (ChannelLayouts): Channel Layouts are not sufficiently defined - Github 109 - https://github.com/WebAudio/web-audio-api/issues/109
21:24:39 BillHofmann: the reason this is relevant is for downmixing
21:24:51 ... any more we want to cover for v1?
21:25:08 mdjp: do we have a limit?
21:25:11 cwilso: 32
21:25:53 olivier: mentions a spec (AES?) attempting to name/describe channel layouts
21:26:05 BillHofmann: proposal to defer to v2
21:26:17 cwilso: propose to remove the statement about "other layouts" from the spec
21:27:25 RESOLVED: stick to currently supported layouts, rewrite the statement about other layouts to clarify expectations that we are not planning any more layouts for v1
21:27:48 Next - Lack of support for continuous playback of javascript synthesized consecutive audio buffers causes audio artifacts. - GH265 - https://github.com/WebAudio/web-audio-api/issues/265
21:28:03 joe: agree that audioworker is the current solution to this problem
21:30:55 group discussing https://github.com/WebAudio/web-audio-api/issues/300 (Configurable sample rate for AudioContext)
21:32:30 RESOLUTION: close GH265 wontfix
21:33:47 RESOLUTION: bump up priority of GH300
21:34:16 Next - Unclear behavior of sources scheduled at fractional sample frames - GH332 - https://github.com/WebAudio/web-audio-api/issues/332
21:34:55 joe: propose we not do it, and specify that we are not doing it
21:40:11 RESOLUTION: edit spec to stipulate that all sources are always scheduled to occur on a rounded sample frame
21:40:53 Next - OfflineAudioContext onProgress - GH302 - https://github.com/WebAudio/web-audio-api/issues/302
21:41:49 joe: seems like a showstopper to me if you need to create tens of thousands of notes
21:44:58 ... onprogress would allow JIT instantiation
21:45:42 cwilso: if you want to do sync graph manipulation, the best thing may be to pause, modify, then resume
21:45:54 ... you do have to schedule it
21:47:28 ... you could schedule a pause every n seconds, and use the statechange callback
21:50:26 joe: single interval would be fine
21:50:31 ... given use cases I have seen
21:51:08 RESOLUTION: introduce a way to tell offlineaudiocontext to pause automatically at some predetermined interval
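[Editor's note: a sketch of the idea just resolved for GH302 - letting an OfflineAudioContext pause at a predetermined point so the graph can be modified synchronously. Method names and promise shapes are assumed, not decided here.]

    var offline = new OfflineAudioContext(2, 44100 * 60, 44100); // 60 s stereo

    offline.suspend(30).then(function () {
      // rendering is paused at t = 30 s: schedule the next batch of
      // notes here, then continue
      offline.resume();
    });

    offline.startRendering().then(function (rendered) {
      // rendered is the complete 60-second AudioBuffer
    });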
21:51:41 Next - Musical pitch of an AudioBufferSourceNode cannot be modulated - GH333 - https://github.com/WebAudio/web-audio-api/issues/333
21:52:43 joe: not sure this is v1 level
21:53:35 joe: would be nice to have detune for audiobuffersourcenode
21:54:11 ... not great at the moment
21:54:16 ... but suggest we defer this
21:54:32 mdjp: fair use case
21:56:15 cwilso: it does feel like something we forgot, not particularly hard
21:57:18 RESOLUTION: Add detune AudioParam in cents, analogous to Oscillator, at a-rate
21:57:56 Next - Map AudioContext times to DOM timestamps - GH340 - https://github.com/WebAudio/web-audio-api/issues/340
21:58:22 RESOLUTION: see GH12 for description of new AudioContext time attribute
21:59:06 Next - Configurable sample rate for AudioContext - GH300 - https://github.com/WebAudio/web-audio-api/issues/300
22:00:09 RESOLUTION: the new options object argument to the realtime AudioContext will now accept an optional sample rate
22:00:30 cwilso: may want to round it
22:05:46 [break]
22:39:11 ScribeNick: philcohen
22:39:52 Topic: issues prioritisation
22:40:22 Chris: brings up the a-rate and k-rate issue
22:40:45 https://github.com/WebAudio/web-audio-api/issues/55
22:42:07 RESOLUTION: k-rate and not a-rate, per Chris's proposition
22:42:58 Chris brings up issue https://github.com/WebAudio/web-audio-api/issues/337: decodeAudioData
22:45:46 Chris: use case is that the Audio API needs a decoder of its own, since it cannot use the Media API for this purpose - that API has its own design goals
22:47:04 Bill: additional related issues: 371, 337, 30 & 7
22:47:28 Chris: https://github.com/WebAudio/web-audio-api/issues/30 we should do
22:50:24 Joe: accepted 30 for escalation
22:50:59 Chris: issue https://github.com/WebAudio/web-audio-api/issues/359
22:51:35 Joe: we will discuss this tomorrow at 12 with Harald from the device task force
22:52:11 Joe: de-zippering https://github.com/WebAudio/web-audio-api/issues/76
22:53:23 Chris: we have it built in today
22:53:37 Paul: defined in the spec as part of the Gain node
22:54:08 Olivier: thought we already have a resolution
22:54:47 Chris: made a resolution back in January
22:55:49 Joe: so what are we missing today?
22:57:32 Chris: not convinced we can define the use case where it will be applied
22:57:51 Joe: does not want the API to de-zipper by itself; developers should be responsible for that
22:59:56 RESOLUTION: We changed the January decision and the issue is open
23:00:32 Joe: proposing de-zippering is OFF
23:00:56 ChrisL: supporting it
23:01:37 RESOLUTION: Cancel automatic de-zippering; developers will use the API when needed
23:03:52 Olivier: should add informative material to inform developers
23:03:56 Joe: OK
23:04:42 Chris: issue https://github.com/WebAudio/web-audio-api/issues/6
23:04:53 AudioNode.disconnect() needs to be able to disconnect only one connection
23:05:21 Chris: connect() allows selective connection, but disconnect() is not selective and destroys all outgoing connections
23:05:25 Joe: it's bad
23:05:34 RESOLUTION: Just do it!
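[Editor's note: a sketch of the disconnect() change just resolved; the selective argument form is assumed here, not specified in the meeting.]

    var ctx = new AudioContext();
    var src = ctx.createOscillator();
    var g1 = ctx.createGain();
    var g2 = ctx.createGain();
    src.connect(g1);
    src.connect(g2);

    src.disconnect();    // today: destroys *every* outgoing connection

    src.connect(g1);
    src.connect(g2);
    src.disconnect(g1);  // resolved: remove only the src -> g1 connection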
23:06:38 https://github.com/WebAudio/web-audio-api/issues/367 Connecting AudioParam of one AudioNode to another Node's AudioParam
23:08:21 Joe: concern we are inventing a new way to connect that will create a heavy load on implementors
23:10:49 Chris: use case: create a source with a DC offset
23:11:19 RESOLUTION: not in V1
23:12:36 https://github.com/WebAudio/web-audio-api/issues/39 MediaRecorder node
23:13:02 Chris: should not be done, since we already have MediaRecorder via MediaStream
23:13:19 Paul: what about offline AudioContext?
23:13:28 Chris: great V2 feature
23:15:36 RESOLUTION: Deferring to V2
23:16:16 https://github.com/WebAudio/web-audio-api/issues/13 - A NoiseGate/Expander node would be a good addition to the API
23:17:30 Chris: pretty common use case
23:17:47 Chris: doable in AudioWorker
23:19:33 Paul: can the dynamics compressor be used for that?
23:20:12 Chris: prefers to make it a separate node
23:22:59 Matt: suggesting a decision: additional node in V1, name to be finalized (Expander, Dynamics Compressor)
23:24:12 Matt: testing requires work
23:24:24 Chris: anyhow we have a lot to do in testing
23:24:40 RESOLUTION: Approved to include this in V1
23:25:22 Related: DynamicsCompressor node should enable sidechain compression https://github.com/WebAudio/web-audio-api/issues/246
23:27:50 Chris: Joe's position is to not include this in V1; he detailed how to achieve that
23:29:14 Paul: connecting two inputs can achieve it
23:29:45 Chris: two input connections (signal + control)
23:32:12 RESOLUTION: no new node; DynamicsCompressor can have an optional 2nd input for the control signal, for V1.
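[Editor's note: a sketch of the sidechain design just resolved. A second input on the DynamicsCompressor is the resolved idea, not a shipped API - the input index used here is hypothetical. The connect(destination, output, input) form itself is existing API.]

    var ctx = new AudioContext();
    var comp = ctx.createDynamicsCompressor();
    var programBus = ctx.createGain(); // signal to be compressed
    var kickBus = ctx.createGain();    // control (sidechain) signal

    programBus.connect(comp, 0, 0);    // input 0: program material
    kickBus.connect(comp, 0, 1);       // input 1: control signal (hypothetical)
    comp.connect(ctx.destination);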
23:44:16 Topic: editorial issues ready for review
23:44:16 https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22Ready+for+Review%22
23:46:23 cwilso: moving on to issues on AnalyserNode
23:46:31 https://github.com/WebAudio/web-audio-api/issues/330
23:47:26 ... issue 1 - processing block size
23:48:30 ... issue 2 - smoothing
23:49:28 https://github.com/WebAudio/web-audio-api/issues/377
23:52:03 ... issue 3 - analyser FFT size
23:52:11 https://github.com/WebAudio/web-audio-api/issues/375
23:53:47 ... needed for visualization and robust pitch detection; we might want to crank it up to 8k.
23:55:08 mdjp: it is necessary to lay out some explanation about the trade-offs on FFT size.
23:57:31 olivier: if commercial audio software supports 32k, web audio api should do it too.
23:59:58 cwilso: consensus is 32k.
00:01:50 cwilso: the minimum frame size for FFT should be 128.
00:02:51 RESOLUTION: Specify which 32 samples to use. Last 32 has been identified.
00:04:15 cwilso: issue - smoothing performed on method call
00:04:18 https://github.com/WebAudio/web-audio-api/issues/377
00:05:39 cwilso: ray suggested a problem caused by non-consecutive smoothing executions.
00:06:57 TPAC RESOLUTION: Clarify analysis frame only on getFrequencyData
00:07:41 https://github.com/WebAudio/web-audio-api/issues/308
00:08:01 Issue on shared methods in OfflineAudioContext
00:11:35 cwilso: polling data from the Media API faster than real time is not possible
00:15:19 TPAC RESOLUTION: Adopt ROC's original suggestion of making both the offline and realtime AudioContexts inherit from an abstract base class that doesn't contain the methods in question.
00:15:24 Note: Ask Cameron McCormack to pronounce on the best way to describe this in WebIDL.
00:16:05 https://github.com/WebAudio/web-audio-api/issues/268
00:16:15 Not relevant any more. Closing.
00:16:58 Noise Reduction should be a float.
00:17:01 https://github.com/WebAudio/web-audio-api/issues/243
00:18:28 Closing issue 243.
00:18:51 Moving on to Issue 128 - https://github.com/WebAudio/web-audio-api/issues/128
00:23:34 using the .value setter is sort of a training wheel, so it shouldn't be used for serious parameter control.
00:24:43 mdjp: no behavioral changes to the API; editorial changes only.
00:25:33 Moving on to Issue 73 - https://github.com/WebAudio/web-audio-api/issues/73
00:25:53 cwilso: introspective nodes should not be introduced
00:26:18 mdjp: closing.
00:27:40 Moving on to issue 317 - https://github.com/WebAudio/web-audio-api/issues/317
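[Editor's note: re the AnalyserNode FFT discussion above, a sketch of the resolved behaviour - fftSize up to 32k, with the analysis frame computed on the getter call itself per the issue 377 resolution.]

    var ctx = new AudioContext();
    var analyser = ctx.createAnalyser();
    var osc = ctx.createOscillator();
    osc.connect(analyser);
    osc.start();

    analyser.fftSize = 32768;  // the "32k" consensus; previously capped lower
    var bins = new Float32Array(analyser.frequencyBinCount); // fftSize / 2
    analyser.getFloatFrequencyData(bins); // analysis happens on this call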