IRC log of audio on 2014-10-27

Timestamps are in UTC.

15:32:41 [RRSAgent]
RRSAgent has joined #audio
15:32:41 [RRSAgent]
logging to http://www.w3.org/2014/10/27-audio-irc
15:32:43 [trackbot]
RRSAgent, make logs world
15:32:45 [trackbot]
Zakim, this will be 28346
15:32:45 [Zakim]
I do not see a conference matching that name scheduled within the next hour, trackbot
15:32:46 [trackbot]
Meeting: Audio Working Group Teleconference
15:32:46 [trackbot]
Date: 27 October 2014
15:33:12 [olivier]
Meeting: Audio Working Group face-to-face meeting
15:38:18 [Cyril]
Cyril has joined #audio
15:54:09 [olivier]
Agenda: https://www.w3.org/2011/audio/wiki/F2F_Oct_2014
16:01:47 [olivier]
Chair: joe, mdjp
16:01:50 [olivier]
Scribe: olivier
16:02:37 [olivier]
mdjp: thought it might be worth starting with a brief intro on where we are and where we are going in W3C process
16:03:32 [olivier]
2014 W3C Process -> http://www.w3.org/2014/Process-20140801/
16:04:33 [olivier]
mdjp: next step for us is to get to Candidate Recommendation
16:05:11 [olivier]
mdjp: that means freezing v1 scope, resolving issues and completing editing of the WD
16:06:23 [olivier]
mdjp: explains next step after that - Proposed Recommendation, and how to get there
16:10:50 [olivier]
Topic: v1 Feature issues
16:11:05 [olivier]
Github repository of v1-tagged issues -> https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22V1+%28TPAC+2014%29%22
16:12:35 [mdjp]
https://docs.google.com/spreadsheets/d/1lBnjJI7_-wVznwuvwoaylu69-S2pelsTUGqu9zl2y-M/edit?usp=sharing
16:13:24 [olivier]
[Jerry Smith joins, round of intro]
16:17:11 [kawai]
kawai has joined #audio
16:17:30 [olivier]
joe: want to talk about criteria for v1/v2
16:17:41 [padenot]
padenot has joined #audio
16:17:43 [olivier]
... we should be stern with ourselves about what to change/keep at this point
16:18:38 [olivier]
... large number of things we can put off, will make us feel bad but otherwise we would not get out the door
16:20:15 [olivier]
ChrisLilley: note we can also have a category of things we *think* are v1 but we're not sure
16:21:22 [olivier]
mdjp: looking at table at https://docs.google.com/spreadsheets/d/1lBnjJI7_-wVznwuvwoaylu69-S2pelsTUGqu9zl2y-M/edit?usp=sharing
16:22:29 [ChrisL]
ChrisL has joined #audio
16:22:54 [olivier]
joe: start with 113 - audioWorkers
16:24:35 [shepazu]
shepazu has joined #audio
16:24:37 [joe]
joe has joined #audio
16:24:51 [jdsmith]
jdsmith has joined #audio
16:24:53 [BillHofmann]
BillHofmann has joined #audio
16:25:11 [BillHofmann]
Good morning.
16:25:32 [olivier]
113 -> https://github.com/WebAudio/web-audio-api/issues/113
16:25:41 [olivier]
rrsagent, draft minutes
16:25:41 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
16:26:40 [olivier]
cwilso: 113 is largely under control, the biggest issue at this point is in issue 1 (inputs and outputs in Audio Workers)
16:27:02 [olivier]
... my goal was to be able to reimplement everything in audioworkers except the inputs and outputs
16:27:36 [olivier]
... splitter and merger are a pain in that regard - they have a non-predefined number of i/o, and dynamically change number of channels
16:28:58 [olivier]
Issue 1 -> https://github.com/WebAudio/web-audio-api/issues/1
16:30:24 [olivier]
cwilso: does everyone understand inputs/outputs and channels in the spec?
16:33:27 [olivier]
olivier has changed the topic to: W3C Audio WG - f2f meeting https://www.w3.org/2011/audio/wiki/F2F_Oct_2014
16:36:31 [olivier]
joe: suggests change [missed]
16:37:14 [olivier]
cwilso: problem is that output channels can be dynamically changed
16:37:58 [olivier]
joe: if you leave out the dynamic bit, what would audioprocess look like?
16:38:10 [olivier]
cwilso: you'd need to define the number of channels somewhere
16:38:15 [olivier]
joe: like in the constructor
16:39:42 [olivier]
[Philippe Cohen from Audyx joins]
16:40:37 [olivier]
ChrisL: why are dynamic channels a problem?
16:40:59 [olivier]
cwilso: that happens in the audioworker
16:41:18 [olivier]
... and the only time you can do anything is when onaudioprocess event fires
16:42:47 [olivier]
cwilso: every connection has the opportunity to cause this upmix/downmix
16:43:27 [olivier]
... if we didn't care about dynamic channels, we're done, because we let you define channels
16:43:50 [olivier]
joe: what if you could pre-specify number of inputs and channels
16:44:23 [olivier]
cwilso: problem is that channels is per input
16:44:35 [olivier]
... so we'd end with an array of array of float32 buffers
16:45:25 [olivier]
joe: propose we pass an array per input, and have arrays of arrays of buffers, organised by channel
16:45:35 [olivier]
cwilso: harder for outputs
16:45:47 [olivier]
... for inputs, easy because you are handed an array of arrays
16:46:22 [olivier]
joe: how do native nodes deal with this?
16:47:19 [olivier]
cwilso: internally it only cares about the output to the destination
16:47:31 [BillHofmann]
BillHofmann has joined #audio
16:48:25 [olivier]
cwilso: question is 1- do we want to represent multiple connections in addition to multiple channels, and 2- do we want dynamic channels?
16:48:40 [olivier]
... probably both yes, but needs to be figured out
16:48:50 [olivier]
... so we can replicate splitter and merger node behaviour
16:51:01 [olivier]
joe: worth aiming for a fully scoped audioworker that does all that
16:51:15 [olivier]
... would argue for most complete approach
16:51:29 [olivier]
cwilso: assuming no need to have dynamic change to number of inputs
16:51:35 [olivier]
... all predefined
16:55:34 [olivier]
RESOLUTION: agreement on need for support for multiple connections at instantiation, and for a variable number of channels after instantiation
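A sketch of the buffer layout under discussion: audio arrives as inputs[inputIndex][channelIndex], each a Float32Array holding one render quantum. The callback shape is hypothetical - the AudioWorker design in issue 113 was still in flux at the time of this meeting - but the processing loop below is runnable as plain JavaScript.

```javascript
// Hypothetical AudioWorker-style processing over arrays of arrays of
// Float32Array buffers, as discussed for issue #113 / issue #1.
// inputs[i][c] and outputs[i][c] are per-input, per-channel sample blocks.
function processBlock(inputs, outputs, gain) {
  for (let i = 0; i < outputs.length; i++) {
    const inChannels = inputs[i] || [];
    const outChannels = outputs[i];
    for (let c = 0; c < outChannels.length; c++) {
      // Naive up-mix: repeat the last available input channel if the
      // output has more channels than the input.
      const src = inChannels[Math.min(c, inChannels.length - 1)];
      const dst = outChannels[c];
      for (let s = 0; s < dst.length; s++) {
        dst[s] = (src ? src[s] : 0) * gain;
      }
    }
  }
}
```

The nested array shape is exactly why splitter/merger behaviour is hard to replicate: both the number of inputs and the per-input channel counts have to be representable here.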
16:56:10 [cwilso]
"This is a magical node." - Joe, referring to PannerNode.
16:56:11 [olivier]
Next issue - 372 - rework pannerNode https://github.com/WebAudio/web-audio-api/issues/372
16:56:39 [olivier]
joe: suggest deprecating it
16:57:21 [olivier]
cwilso: was an attempt to bind together different scenarios - both control of mixer and 3D game with spatialization
16:58:31 [olivier]
cwilso: issue 368 is about default currently HRTF https://github.com/WebAudio/web-audio-api/issues/368
17:00:19 [olivier]
... some of it (doppler) was made when there were only buffersourcenodes
17:00:30 [olivier]
... completely broken with things like a live microphone
17:00:56 [olivier]
... also - none of the parameters are audioparams, they're floats
17:01:13 [kawai_]
kawai_ has joined #audio
17:01:54 [olivier]
shepazu: is there some reason it was done this way?
17:02:20 [olivier]
padenot: looking at openAL - games was a big use case at that point
17:03:14 [olivier]
cwilso: given the above I agree to tear this node apart
17:03:31 [olivier]
... we need a panner and a spatialization node, separately
17:04:00 [olivier]
... plus rip doppler out completely, can be replicated with a delaynode
17:04:46 [olivier]
BillHofmann: I hear a proposal for a stereo panner
17:05:00 [olivier]
... and a proposal to deprecate the pannernode
17:05:12 [olivier]
cwilso: agree on need for stereo panner
17:05:32 [olivier]
... spatialization still has value, especially as it has been implemented already
17:05:47 [olivier]
... would want to re-specify, to have xyz as audioparams
17:06:22 [olivier]
BillHofmann: question about whether it should be the last one
17:06:32 [olivier]
shepazu: could be a best practice
17:07:26 [olivier]
cwilso: advise authors to do so - not a requirement
17:07:59 [olivier]
joe: hear consensus on a new stereo panner
17:08:36 [olivier]
... and a spatialization feature for v2
17:09:10 [olivier]
joe: suggest stripping equalpower from the node
17:10:46 [olivier]
joe: so hear consensus to replace pannernode with a new stereo panner node + spatialization node, with audio parameters, remove doppler
17:12:59 [olivier]
olivier: not clear whether group wants the spatialization to be in v1 or v2?
17:13:28 [olivier]
matt: unsure from our perspective. We have convolvernode
17:14:15 [olivier]
joe: can the new spatialization be pushed to v2? Especially since we currently have convolvernode
17:14:48 [olivier]
joe: would suggest deprecating for v1, fix it in v2
17:15:15 [olivier]
cwilso: unless we fix it no point in deprecating in v1 - games would be using it
17:16:16 [BillHofmann]
BillHofmann has joined #audio
17:16:43 [olivier]
ChrisL: agree, and think the new cleaned-up node should be in v1, possibly marked as at risk
17:16:49 [olivier]
... it sets a clear direction
17:17:31 [olivier]
RESOLUTION: clearly deprecate current pannernode. Add spatialization without doppler. Add a stereo panner node.
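The stereo panner resolved here typically uses an equal-power pan law. A minimal sketch of that law, assuming the usual cosine/sine mapping (the node's name and parameter were still to be specified at this point):

```javascript
// Equal-power pan law for a mono source: pan in [-1, 1].
// -1 = full left, 0 = center (both gains ~0.707), +1 = full right.
// Equal power means left^2 + right^2 == 1 at every pan position.
function equalPowerGains(pan) {
  const x = (pan + 1) / 2; // map [-1, 1] to [0, 1]
  return {
    left: Math.cos(x * Math.PI / 2),
    right: Math.sin(x * Math.PI / 2),
  };
}
```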
17:17:47 [shepazu]
http://www.w3.org/TR/mediacapture-depth/
17:18:25 [olivier]
shepazu: depth track - do we need to take it into consideration?
17:18:30 [olivier]
joe: it could, in the future
17:20:28 [olivier]
Next - Github 359 - Architectural layering: output stream access? - https://github.com/WebAudio/web-audio-api/issues/359
17:20:59 [olivier]
cwilso: question is how do you get access to input and output
17:21:11 [olivier]
... translates to streams
17:21:30 [cwilso]
https://streams.spec.whatwg.org/
17:21:35 [olivier]
... questioning the need for streams API
17:21:48 [olivier]
Streams API -> https://streams.spec.whatwg.org/
17:22:03 [olivier]
cwilso: it is really designed as a sink
17:22:17 [olivier]
... different model to how web audio works (polling for data)
17:23:11 [olivier]
... does not really solve the balance between low latency and avoiding glitching
17:23:54 [olivier]
joe: how does this relate to getUserMedia
17:27:01 [olivier]
harald: discussion in group about output devices and how to connect a mediastream to a specific output device
17:28:41 [olivier]
rrsagent, draft minutes
17:28:41 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
17:31:08 [olivier]
joe: anything actionable for v1?
17:32:48 [olivier]
cwilso: agree it's not mature enough yet. still agree you need some access to audio
17:33:15 [olivier]
joe: agree it is fundamental, but acknowledge it will be a multi-group thing
17:33:35 [olivier]
cwilso: architectural layering seems more important than arbitrary date
17:34:30 [olivier]
... significant broken bit
17:36:23 [olivier]
TENTATIVE: Github 359 is fundamental but we may not resolve it for v1
17:59:46 [hongchan]
hongchan has joined #audio
18:07:24 [olivier]
rrsagent, draft minutes
18:07:24 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
18:10:40 [olivier]
next is Github 358 - Inter-app audio - https://github.com/WebAudio/web-audio-api/issues/358
18:11:10 [olivier]
mdjp: tricky issue - on the one hand this sounds like a case of "plugins are bad"
18:11:16 [olivier]
... but there is demand from the industry
18:11:23 [olivier]
cwilso: 2 separate but related issues
18:11:40 [olivier]
... on the one hand massively popular, adopted plugin system(s) (VST, etc)
18:11:51 [olivier]
... massive investment
18:12:05 [olivier]
... this is how most inter-application audio is done
18:12:29 [olivier]
... people have a lot of these around, and not being able to use effects people own is a problem
18:13:08 [olivier]
... separately, there's a question of whether we allow people to replicate what they do
18:13:21 [olivier]
ChrisL: are you talking about sandboxing?
18:13:33 [olivier]
joe: feels like a web-wide issue
18:14:30 [olivier]
cwilso: audio is a very low latency, high bandwidth connection - makes it different from other kinds of applications
18:15:11 [olivier]
joe: does audioworker dispense with it by allowing loading scripts from other domains?
18:16:15 [olivier]
olivier: what's the use case for this to be in v1?
18:16:31 [olivier]
cwilso: relates very closely to a number of our key use cases
18:18:09 [olivier]
ChrisL: one key difference with other parts of the web is user acceptance of the model
18:19:08 [olivier]
joe: want to be careful jumping this divide. The fact that this relates to our use cases does not necessarily oblige a v1 release to cover those present-day platforms
18:19:20 [olivier]
... would be a good thing, but may not be something we MUST do
18:19:50 [olivier]
... it will be controversial if we pull plugins in and make it a first class citizen
18:20:30 [olivier]
cwilso: are you talking about plugging into VSTs and rack effects, or the general question of inter-app plugins
18:21:00 [olivier]
joe: audioworker could be the solution to the generic question of javascript "plugins"
18:21:13 [olivier]
cwilso: don't know whether it actually works for it
18:21:58 [olivier]
q+ BillHofmann
18:22:14 [olivier]
ack BillHofmann
18:22:52 [olivier]
BillHofmann: question of whether plugins will be native or web-based
18:23:10 [olivier]
... might be worth renaming to not create allergic reaction to "plugins"
18:24:20 [olivier]
joe: my belief is that there could be a standard built upon audioworker
18:24:39 [BillHofmann]
(note speaking as a matter of personal opinion, not as Dolby)
18:26:45 [olivier]
mdjp: seems to be consensus that plugin arch is important, question is whether v1 or v2
18:27:01 [olivier]
(only cwilso raises hand for preference to v1)
18:27:49 [olivier]
cwilso: we are at a point where we are looking back at coverage of our use cases - we might want to revisit the use cases
18:29:19 [olivier]
olivier: suggest splitting the two issues - multi-app behaviour and VSTs etc
18:30:23 [olivier]
mdjp: suggest 3 steps
18:30:50 [olivier]
... 1 review use cases and implications of doing this or not
18:31:34 [olivier]
... 2 if we do it, how
18:32:38 [olivier]
cwilso: will have to have an answer ready when we stamp something as v1
18:32:49 [olivier]
... as to how this will be done
18:35:36 [olivier]
RESOLUTION: split GH 358 into two, start thinking about our answer
18:37:18 [olivier]
[discussion about v1, living standard etc]
18:37:43 [olivier]
shepazu: note that v1 is where patents lie
18:37:56 [olivier]
... having a live editor's draft is a good idea
18:43:11 [olivier]
Next - Github 351 - Describe how Nodes created from different AudioContexts interact (or don't) - https://github.com/WebAudio/web-audio-api/issues/351
18:44:02 [olivier]
cwilso: suggest that nodes should not interact with other contexts
18:44:10 [olivier]
... but buffers should work across contexts
18:44:29 [olivier]
... remember - a context works at a specific sample rate
18:44:46 [olivier]
... any node with a notion of a sample buffer would break if you pass it across contexts
18:45:00 [olivier]
... whereas audiobuffers have sample rate built in
18:45:56 [olivier]
cwilso: one of the two ways of getting a buffer (decodeaudiodata) would be harder
18:46:03 [olivier]
... but we have a separate issue for that
18:47:22 [olivier]
cwilso: proposal is to say "if you try connecting nodes across context, throw an exception; audiobuffers on the other hand can"
18:47:30 [olivier]
olivier: is there any case against this?
18:47:40 [olivier]
cwilso: would lose the ability to reuse a graph
18:47:45 [olivier]
... without recreating it
18:48:29 [olivier]
RESOLVED: agree to cwilso's proposal - "if you try connecting nodes across context, throw an exception; audiobuffers on the other hand can"
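A minimal mock illustrating the resolved behaviour. The mock's names are illustrative, not the spec's; in a real implementation it is AudioNode.connect() that would throw when contexts differ, while an AudioBuffer (which carries its own sample rate) can be shared across contexts.

```javascript
// Mock of the resolution: connecting nodes that belong to different
// contexts throws; buffers are not affected because an AudioBuffer
// has its sample rate built in.
function connectNodes(source, destination) {
  if (source.context !== destination.context) {
    throw new Error('InvalidAccessError: nodes belong to different AudioContexts');
  }
  return destination; // mirror connect() returning the destination
}
```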
18:50:03 [olivier]
Next - Github 12 - Need a way to determine AudioContext time of currently audible signal - https://github.com/WebAudio/web-audio-api/issues/12
18:50:47 [olivier]
joe: this comes from the fact that on some devices there was built in latency at OS level, and it was impossible to discover it
18:51:12 [olivier]
... no way of asking the platform "is there any latency we should know about?"
18:53:34 [olivier]
... problem with scheduling is that there is a time skew that is not discoverable
18:53:41 [olivier]
ChrisL: why is it not discoverable
18:53:54 [olivier]
... schedule something and measure?
18:56:15 [olivier]
cwilso: some of it is not measurable/not reported, e.g. bluetooth headset
18:57:03 [olivier]
olivier: there is a suggestion from srikumar here - https://github.com/WebAudio/web-audio-api/issues/12#issuecomment-52006756
18:57:14 [olivier]
joe: similar to my suggestion
18:59:39 [olivier]
(group looks at proposal - some doubts about point 3)
19:00:48 [jdsmith]
jdsmith has joined #audio
19:01:41 [Shiger]
Shiger has joined #audio
19:02:22 [timeless]
timeless has joined #audio
19:02:26 [RRSAgent]
I'm logging. I don't understand 'this meeting spans midnight <- if you want a single log for the two days', timeless. Try /msg RRSAgent help
19:02:54 [olivier]
RRSAgent, this meeting spans midnight
19:06:27 [olivier]
joe: suggestion of new attribute of audiocontext describing the UA's best guess of the signal heard by the listener right now
19:06:39 [olivier]
... in audiocontext time
19:07:14 [olivier]
... it's a time in the past, different from currenttime which is the time of the next processing
19:07:40 [cwilso]
additionally, I think we should expose the best guess at the time the currentTime block will play in performance.now time.
19:08:54 [olivier]
olivier: will need to be more precise to be testable
19:09:41 [rtoyg_m]
rtoyg_m has joined #audio
19:11:17 [timeless]
timeless has left #audio
19:12:03 [olivier]
RESOLUTION: add two things - new AudioContext attribute exposing UA's best guess at real context time being heard now on output device (this will normally be a bit behind currentTime, and not quantized). Also new attribute expressing DOM timestamp corresponding to currentTime. - see https://github.com/WebAudio/web-audio-api/issues/12#issuecomment-60651781
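The second attribute in this resolution pairs a context time with a DOM (performance.now) timestamp. Given one such pairing, mapping any scheduled context time to an estimated DOM time is simple arithmetic; the attribute names were undecided at this meeting, so the pairing object below is an assumption.

```javascript
// Given one known correspondence between AudioContext time (seconds)
// and DOM performance time (milliseconds), estimate the DOM timestamp
// of any other context time. The `pair` shape is hypothetical.
function contextTimeToDOMTime(pair, contextTime) {
  // pair = { contextTime: seconds, performanceTime: milliseconds }
  return pair.performanceTime + (contextTime - pair.contextTime) * 1000;
}
```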
19:14:45 [olivier]
Next - Github 78 - HTMLMediaElement synchronisation - https://github.com/WebAudio/web-audio-api/issues/78
19:14:53 [olivier]
cwilso: suggest to close "not our problem"
19:17:06 [olivier]
(consensus to close, crafting close message)
19:17:28 [olivier]
joe: essentially duplicate of 257?
19:18:14 [olivier]
RESOLUTION: Close Github 78
19:18:45 [olivier]
Next - Github 91 - WaveTable normalization - https://github.com/WebAudio/web-audio-api/issues/91
19:23:35 [olivier]
ChrisL: is this the highest or the sum that is normalised to 1?
19:23:37 [olivier]
joe: sum
19:30:09 [olivier]
[discussion about normalising and band-limitation]
19:30:26 [olivier]
cwilso: need to look at actual normalization algorithm
19:31:41 [olivier]
cwilso: suggested resolution - need a parameter to turn off normalization; also - need better explanation of periodicwave
19:34:39 [olivier]
RESOLUTION: add additional optional parameter to createPeriodicWave() to enable/disable normalization; better describe real and imag; document the exact normalization function - https://github.com/WebAudio/web-audio-api/issues/91#issuecomment-60655020
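One plausible reading of the normalization the resolution asks the spec to document: synthesize a single period from the real/imag coefficients, find its peak, and scale so the peak is 1. This is a sketch, not the normative algorithm (pinning that down is exactly what the resolution requires); real and imag are assumed to be equal-length arrays with index 0 unused, as in createPeriodicWave().

```javascript
// Sketch: synthesize one period of the wave described by Fourier
// coefficients (real = cosine terms, imag = sine terms, index 0 ignored),
// then normalize its peak amplitude to 1.
function normalizedPeriod(real, imag, n = 1024) {
  const out = new Float32Array(n);
  let peak = 0;
  for (let s = 0; s < n; s++) {
    const t = (2 * Math.PI * s) / n;
    let v = 0;
    for (let k = 1; k < real.length; k++) {
      v += real[k] * Math.cos(k * t) + imag[k] * Math.sin(k * t);
    }
    out[s] = v;
    peak = Math.max(peak, Math.abs(v));
  }
  if (peak > 0) for (let s = 0; s < n; s++) out[s] /= peak;
  return out;
}
```

With normalization disabled (the new optional parameter), the raw coefficient amplitudes would be used directly instead.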
19:34:44 [olivier]
[break for lunch]
19:34:51 [olivier]
RRSAgent, draft minutes
19:34:51 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
20:28:38 [joe]
joe has joined #audio
20:29:50 [hongchan]
hongchan has joined #audio
20:35:18 [olivier]
zakim, pick a victim
20:35:18 [Zakim]
sorry, olivier, I don't know what conference this is
20:36:26 [rtoyg_m]
rtoyg_m has joined #audio
20:37:49 [jdsmith]
jdsmith has joined #audio
20:38:39 [olivier]
Next - Need to provide hints to increase buffering for power consumption reasons - GH 348 - https://github.com/WebAudio/web-audio-api/issues/348
20:39:21 [BillHofmann]
BillHofmann has joined #audio
20:39:38 [olivier]
padenot: in some use cases better to not run at the lowest possible latency
20:39:58 [olivier]
... multiple proposals in the thread; one is to tell the context what the preferred buffer size is
20:40:03 [olivier]
... another is to use channel API
20:40:19 [olivier]
... and Jer has a strawman where he kind of uses the channel API
20:40:48 [olivier]
... my position would be not to explicitly pick a buffer size
20:41:00 [olivier]
... the UA has typically a better idea of appropriate buffer size
20:41:30 [olivier]
cwilso: major concern was conflating this with other behaviour such as pausing for a phone call
20:41:37 [olivier]
... (stop the context altogether)
20:42:09 [olivier]
... agree that the requested number is not necessarily what you would get
20:42:54 [olivier]
padenot: basically we need low latency / save battery
20:42:59 [olivier]
cwilso: and balance the two
20:43:04 [olivier]
... and turn that dial
20:44:01 [ChrisL]
ChrisL has joined #audio
20:45:19 [olivier]
olivier: use case for typical developer? I see how that's useful for an implementer...
20:45:40 [olivier]
padenot: example of an audio player with a visualiser - no need for low latency there
20:48:02 [olivier]
joe: if this were to be implemented as a "the UA may..." would that be acceptable?
20:48:07 [olivier]
cwilso: right thing to do
20:48:10 [olivier]
padenot: agree
20:49:48 [olivier]
cwilso: not sure it should be at constructor level
20:50:07 [olivier]
padenot: tricky on some platforms if you want to make it glitch-less
20:50:29 [olivier]
... worried about putting too many parameters on the constructor
20:52:06 [hongchan]
hongchan has joined #audio
20:52:13 [olivier]
RESOLUTION: do something similar to Jer's proposal at https://github.com/WebAudio/web-audio-api/issues/348#issuecomment-53757682 - but as a property bag options object passed to the constructor
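A sketch of how a UA might resolve a hint from the constructor's options object into an internal buffer size. The hint names and frame counts below are illustrative assumptions, not spec values; the resolution only fixes the options-bag shape, and the UA keeps the final say on the actual size.

```javascript
// Hypothetical mapping from a latency hint to a render buffer size.
// A string picks a latency/power trade-off; a number is treated as a
// desired latency in seconds and clamped to a sane frame range.
function resolveBufferSize(hint, sampleRate) {
  if (typeof hint === 'number') {
    const frames = Math.round(hint * sampleRate);
    return Math.min(Math.max(frames, 128), 16384); // UA-imposed clamp
  }
  switch (hint) {
    case 'interactive': return 256;   // low latency, higher power cost
    case 'playback':    return 4096;  // higher latency, battery-friendly
    default:            return 1024;  // middle-ground fallback
  }
}
```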
20:52:37 [shepazu]
shepazu has joined #audio
20:52:51 [olivier]
cwilso: expose it, make it readonly and decide later whether we make it dynamic
20:53:45 [olivier]
... expose the effects, not the whole property bag
20:57:16 [hongchan1]
hongchan1 has joined #audio
20:57:44 [olivier]
Next - two related issues
20:58:04 [olivier]
joe: two issues - Github 264 and 132
20:58:42 [olivier]
Use AudioMediaStreamTrack as source rather than ill-defined first track of MediaStream -> https://github.com/WebAudio/web-audio-api/issues/264
20:59:03 [olivier]
Access to individual AudioTracks -> https://github.com/WebAudio/web-audio-api/issues/132
20:59:17 [olivier]
joe: looking at 132 first
21:00:08 [Cyril]
Cyril has joined #audio
21:00:24 [olivier]
joe: would it make sense to change the API to be track-based for both
21:01:16 [olivier]
joe: also this comment from Chris suggesting one output per track for MediaElementSourceNode https://github.com/WebAudio/web-audio-api/issues/132#issuecomment-51366048
21:04:45 [olivier]
padenot: tracks are not ordered
21:07:07 [olivier]
joe: tracks have a kind, id and label
21:07:13 [olivier]
cwilso: yes they're identifiable
21:07:41 [olivier]
cwilso: the problem is we say "first" and they have an unordered label list...
21:07:59 [olivier]
joe: we could rename and require an id
21:08:09 [olivier]
cwilso: take the track instead
21:08:11 [olivier]
joe: agree
21:09:30 [olivier]
cwilso: suggest we keep the same name and factory method, and add another that takes a track
21:09:59 [olivier]
... which would be unambiguous
21:10:04 [Shige_]
Shige_ has joined #audio
21:11:44 [olivier]
cwilso: seems to be agreement on adding the interface that takes a track
21:11:55 [olivier]
... question is what we do with the old one
21:12:04 [olivier]
... and essentially deciding "what is the first track"
21:12:22 [olivier]
... first id in alphabetical order?
21:14:03 [olivier]
ChrisL: justification for keeping the "first track" interface?
21:14:28 [olivier]
cwilso: works today without people having to go read another spec and pick a track - especially since most of the time there is only one track
21:15:20 [olivier]
cwilso: we could clearly explain not to use the old system if there may be more than one track
21:16:38 [olivier]
ChrisL: what if we rickrolled them if they don't specify the track?
21:17:32 [olivier]
Harald: throw an exception if trying to use this method when there is more than one track
21:18:06 [olivier]
joe: at least it looks like mediaElement and MediaStream are congruent in their treatment of tracks
21:18:14 [olivier]
... (as far as we are concerned)
21:19:49 [olivier]
RESOLUTION: keep the same node, keep the same factory method but add a second signature that takes a track - also define "first"
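One deterministic definition of "first track", as floated in the discussion (first id in alphabetical order). The function name is illustrative; the point is only that the tie-break be definitive, not meaningful.

```javascript
// "First" audio track defined as the audio-kind track whose id sorts
// first lexicographically - one way to make the legacy single-argument
// factory method unambiguous, per the discussion.
function firstAudioTrack(tracks) {
  return tracks
    .filter((t) => t.kind === 'audio')
    .sort((a, b) => (a.id < b.id ? -1 : a.id > b.id ? 1 : 0))[0];
}
```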
21:20:20 [olivier]
cwilso: "it doesn't need to make sense, it just needs to be definitive"
21:22:52 [olivier]
Next - (ChannelLayouts): Channel Layouts are not sufficiently defined - Github 109 - https://github.com/WebAudio/web-audio-api/issues/109
21:24:39 [olivier]
BillHofmann: the reason this is relevant is for downmixing
21:24:51 [olivier]
... any more we want to cover for v1?
21:25:08 [olivier]
mdjp: do we have a limit?
21:25:11 [olivier]
cwilso: 32
21:25:53 [olivier]
olivier: mention a spec (AES?) attempting to name/describe channel layouts
21:26:05 [olivier]
BillHofmann: proposal to defer to v2
21:26:17 [olivier]
cwilso: propose to remove the statement about "other layouts" from the spec
21:27:25 [olivier]
RESOLVED: stick to currently supported layouts, rewrite the statement about other layouts to clarify expectations that we are not planning any more layouts for v1
21:27:48 [olivier]
Next - Lack of support for continuous playback of javascript synthesized consecutive audio buffers causes audio artifacts. - GH265 - https://github.com/WebAudio/web-audio-api/issues/265
21:28:03 [olivier]
joe: agree that audioworker is the current solution to this problem
21:30:55 [olivier]
group discussing https://github.com/WebAudio/web-audio-api/issues/300 (Configurable sample rate for AudioContext)
21:32:30 [olivier]
RESOLUTION: close GH265 as wontfix
21:33:47 [olivier]
RESOLUTION: bump up priority of GH300
21:34:16 [olivier]
Next - Unclear behavior of sources scheduled at fractional sample frames - GH332 - https://github.com/WebAudio/web-audio-api/issues/332
21:34:55 [olivier]
joe: propose we not do it, and specify that we are not doing it
21:40:11 [olivier]
RESOLUTION: edit spec to stipulate that all sources are always scheduled to occur on a rounded sample frame
21:40:53 [olivier]
Next - OfflineAudioContext onProgress - GH302 - https://github.com/WebAudio/web-audio-api/issues/302
21:41:49 [olivier]
joe: seems like a showstopper to me if you need to create tens of thousands of notes
21:44:58 [olivier]
... onprogress would allow JIT instantiation
21:45:42 [olivier]
cwilso: if you want to do sync graph manipulation best thing may be to pause, modify, then resume
21:45:54 [olivier]
... you do have to schedule it
21:47:04 [jdsmith]
jdsmith has joined #audio
21:47:28 [olivier]
... you could schedule a pause every n seconds, and use the statechange callback
21:50:26 [olivier]
joe: single interval would be fine
21:50:31 [olivier]
... given use cases I have seen
21:51:08 [olivier]
RESOLUTION: introduce way to tell offlineaudiocontext to pause automatically at some predetermined interval
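A sketch of the resolved mechanism: pause the OfflineAudioContext at a predetermined interval so the app can instantiate more notes just in time. Only the computation of the pause points is executed here; the suspend()/resume() wiring in the comment is how such a mechanism would be driven, and the names there are assumptions at the time of this meeting.

```javascript
// Compute evenly spaced pause points for an offline render.
// Hypothetical usage with such an API:
//   for (const t of suspendTimes(duration, interval)) {
//     ctx.suspend(t).then(() => { scheduleMoreNotes(ctx); ctx.resume(); });
//   }
function suspendTimes(durationSec, intervalSec) {
  const times = [];
  for (let t = intervalSec; t < durationSec; t += intervalSec) times.push(t);
  return times;
}
```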
21:51:41 [olivier]
Next - Musical pitch of an AudioBufferSourceNode cannot be modulated - GH333 - https://github.com/WebAudio/web-audio-api/issues/333
21:52:43 [olivier]
joe: not sure this is v1 level
21:53:35 [olivier]
joe: would be nice to have detune for audiobuffersourcenode
21:54:11 [olivier]
... not great at the moment
21:54:16 [olivier]
... but suggest we defer this
21:54:32 [olivier]
mdjp: fair use case
21:56:15 [olivier]
cwilso: it does feel like something we forgot, not particularly hard
21:57:18 [olivier]
RESOLUTION: Add detune AudioParam in cents, analogous to Oscillator, at a-rate
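Detune in cents composes multiplicatively with playbackRate, as with OscillatorNode's detune: 1200 cents is one octave, so the effective rate is playbackRate × 2^(cents/1200).

```javascript
// Effective playback rate of an AudioBufferSourceNode given its
// playbackRate and a detune value in cents (1200 cents = one octave).
function computedPlaybackRate(playbackRate, detuneCents) {
  return playbackRate * Math.pow(2, detuneCents / 1200);
}
```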
21:57:56 [olivier]
Next - Map AudioContext times to DOM timestamps - GH340 - https://github.com/WebAudio/web-audio-api/issues/340
21:58:22 [olivier]
RESOLUTION: see GH12 for description of new AudioContext time attribute
21:59:06 [olivier]
Next - Configurable sample rate for AudioContext - GH300 - https://github.com/WebAudio/web-audio-api/issues/300
21:59:14 [rtoyg]
rtoyg has joined #audio
22:00:09 [olivier]
RESOLUTION: the new options object argument to the realtime AudioContext will now accept an optional sample rate
22:00:30 [olivier]
cwilso: may want to round it
22:05:46 [olivier]
[break]
22:06:33 [Cyril]
Cyril has joined #audio
22:19:23 [jdsmith]
jdsmith has joined #audio
22:37:52 [philcohen]
philcohen has joined #audio
22:39:11 [olivier]
ScribeNick: philcohen
22:39:19 [olivier]
rrsagent, draft minutes
22:39:19 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
22:39:27 [rtoyg_m]
rtoyg_m has joined #audio
22:39:52 [olivier]
Topic: issues prioritisation
22:40:22 [philcohen]
Chris: brings up the a-rate and k-rate issue
22:40:45 [olivier]
https://github.com/WebAudio/web-audio-api/issues/55
22:42:07 [philcohen]
RESOLUTION: k-rate and not a-rate, per Chris's proposal
22:42:58 [philcohen]
Chris brings up issue https://github.com/WebAudio/web-audio-api/issues/337: decodeAudioData
22:45:46 [philcohen]
Chris: Use case: the Audio API needs a decoder of its own, since it cannot use the Media API (which has its own design goals) for this purpose
22:47:04 [philcohen]
Bill: Additional related issues: 371, 337, 30 & 7
22:47:28 [philcohen]
Chris: https://github.com/WebAudio/web-audio-api/issues/30 we should do
22:50:24 [philcohen]
Joe: accepted 30 for escalation
22:50:59 [philcohen]
Chris: issue https://github.com/WebAudio/web-audio-api/issues/359
22:51:35 [philcohen]
Joe: we will discuss this tomorrow at 12 with Harald from the device task force
22:52:11 [philcohen]
Joe: De-zippering https://github.com/WebAudio/web-audio-api/issues/76
22:53:23 [philcohen]
Chris: we have it built in today
22:53:37 [philcohen]
Paul: defined in the specs as part of the Gain node
22:53:38 [Shige]
Shige has joined #audio
22:54:08 [philcohen]
Olivier: thought we have already a resolution
22:54:47 [philcohen]
Chris: made a resolution back in January
22:55:49 [philcohen]
Joe: So what are we missing today?
22:57:32 [philcohen]
Chris: not convinced we can define the use case where it will be applied
22:57:51 [philcohen]
Joe: does not want the API to de-zipper by itself; developers should be responsible for that
22:59:56 [philcohen]
RESOLUTION: the January decision is reversed and the issue is reopened
23:00:32 [philcohen]
Joe: proposing De-zippering is OFF
23:00:56 [philcohen]
ChrisL: supporting it
23:01:37 [philcohen]
RESOLUTION: cancel automatic de-zippering; developers will use the API when needed
23:03:52 [philcohen]
Olivier: should add informative material to inform developers
23:03:56 [philcohen]
Joe: OK
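With automatic de-zippering off, authors smooth abrupt parameter changes themselves, typically with AudioParam.setTargetAtTime(target, t0, tau). The exponential approach curve that call produces is sketched below as a pure function; only the math is executed here.

```javascript
// Value of a parameter at time t after calling
// setTargetAtTime(target, t0, tau) from an initial value v0:
//   v(t) = target + (v0 - target) * exp(-(t - t0) / tau)
function exponentialApproach(v0, target, t0, tau, t) {
  if (t <= t0) return v0;
  return target + (v0 - target) * Math.exp(-(t - t0) / tau);
}
```

This is the kind of informative material Olivier suggests adding: a gain jump becomes an audible click unless the author schedules a short ramp like this.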
23:04:42 [philcohen]
Chris: issue https://github.com/WebAudio/web-audio-api/issues/6
23:04:53 [philcohen]
AudioNode.disconnect() needs to be able to disconnect only one connection
23:05:21 [philcohen]
Chris: connect() allows selective connection but disconnect() is not selective and destroys all outputs
23:05:25 [philcohen]
Joe: it's bad
23:05:34 [philcohen]
RESOLUTION: Just do it!
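A mock illustrating the behaviour being resolved: disconnect() with no argument drops every outgoing connection, while a selective disconnect(dest) drops only that one. The class is a stand-in, not the AudioNode API.

```javascript
// Minimal mock of selective disconnection as requested in issue #6.
class MockNode {
  constructor() { this.outputs = new Set(); }
  connect(dest) { this.outputs.add(dest); return dest; }
  disconnect(dest) {
    if (dest === undefined) this.outputs.clear(); // legacy: drop everything
    else this.outputs.delete(dest);               // new: drop one connection
  }
}
```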
23:06:38 [philcohen]
https://github.com/WebAudio/web-audio-api/issues/367 Connecting AudioParam of one AudioNode to another Node's AudioParam
23:08:21 [philcohen]
Joe: concern we are inventing a new way to connect that will create a heavy load on implementors
23:10:49 [philcohen]
Chris: Use case: create a source with a DC offset
23:11:19 [philcohen]
RESOLUTION: not in V1
23:12:36 [philcohen]
https://github.com/WebAudio/web-audio-api/issues/39 MediaRecorder node
23:13:02 [philcohen]
Chris: should not be done, since we already have MediaRecorder via MediaStream
23:13:19 [philcohen]
Paul: What about offline AudioContext?
23:13:28 [philcohen]
Chris: great V2 feature
23:13:31 [Cyril]
Cyril has joined #audio
23:13:42 [olivier]
rrsagent, draft minutes
23:13:42 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
23:14:50 [olivier]
rrsagent, draft minutes
23:14:50 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier
23:15:36 [philcohen]
RESOLUTION: Deferring to V2
23:16:16 [philcohen]
https://github.com/WebAudio/web-audio-api/issues/13 - A NoiseGate/Expander node would be a good addition to the API
23:17:30 [philcohen]
Chris: Pretty common use case
23:17:47 [philcohen]
Chris: Doable in AudioWorker
23:19:33 [philcohen]
Paul: could the DynamicsCompressor be used for that?
23:20:12 [philcohen]
Chris: prefers to make it a separate node
23:22:59 [philcohen]
Matt: suggests a decision: additional node in V1, name to be finalized (Expander, DynamicsCompressor)
23:24:12 [philcohen]
Matt: testing requires work
23:24:24 [philcohen]
Chris: anyhow we have a lot to do in testing
23:24:40 [philcohen]
Decision: Approved to include this in V1
23:25:22 [philcohen]
Related: DynamicsCompressor node should enable sidechain compression https://github.com/WebAudio/web-audio-api/issues/246
23:27:50 [philcohen]
Chris: Joe's position is to not include this in V1; he detailed how to achieve that
23:29:14 [philcohen]
Paul: connecting two inputs can achieve it
23:29:45 [philcohen]
Chris: Two input connections (signal + control)
23:32:12 [philcohen]
Decision: no new node; DynamicsCompressor can have an optional 2nd input for a control signal, for V1.
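[Editorial note: whatever the wiring ends up being, the sidechain idea is that the control input's level, not the program signal's, drives the gain reduction applied to the program signal. The helper below is a hypothetical illustration of the standard hard-knee downward-compression math, using threshold/ratio parameters like those on DynamicsCompressorNode; it is not part of the API.]

```javascript
// Illustrative sidechain math (hypothetical helper, not part of the API).
// Hard-knee downward compression: above the threshold, the output level is
//   out = threshold + (level - threshold) / ratio
// so the attenuation applied to the program signal, in dB, is level - out.
function gainReductionDb(controlLevelDb, thresholdDb, ratio) {
  if (controlLevelDb <= thresholdDb) return 0; // below threshold: no reduction
  const outDb = thresholdDb + (controlLevelDb - thresholdDb) / ratio;
  return controlLevelDb - outDb; // positive dB of attenuation
}

// e.g. control signal at -6 dB, threshold -24 dB, ratio 12:1:
// reduction = (-6 - (-24)) * (1 - 1/12) = 18 * 11/12 = 16.5 dB
const reduction = gainReductionDb(-6, -24, 12);
```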
23:40:38 [olivier]
s/Decision:/RESOLUTION:/g
23:41:59 [Cyril]
Cyril has joined #audio
23:42:19 [hongchan]
hongchan has joined #audio
23:44:16 [olivier]
Topic: editorial issues ready for review
23:44:16 [olivier]
https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22Ready+for+Review%22
23:46:23 [hongchan]
cwilso: moving onto issues on AnalyserNode
23:46:31 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/330
23:47:26 [hongchan]
…: issue 1 - processing block size
23:48:30 [hongchan]
…: issue 2 - smoothing
23:49:28 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/377
23:50:28 [shepazu]
shepazu has joined #audio
23:52:03 [hongchan]
…: issue 3 - analyser FFT size
23:52:11 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/375
23:53:47 [hongchan]
…: needed for visualization and robust pitch detection; we might want to crank it up to 8k.
23:55:08 [hongchan]
mdjp: we need to lay out some explanation of the trade-offs on FFT size.
23:57:31 [hongchan]
olivier: if commercial audio software supports 32k, the Web Audio API should too.
23:58:48 [plh]
plh has joined #audio
23:59:58 [hongchan]
cwilso: consensus is 32k.
00:01:50 [hongchan]
cwilso: the minimum frame size for the FFT should be 128.
00:02:04 [Cyril]
Cyril has joined #audio
00:02:51 [hongchan]
RESOLUTION: Specify which 32 samples to use; the last 32 have been identified.
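[Editorial note: an analyser FFT size must be a power of two within whatever bounds are specified. The validator below is a hypothetical sketch, assuming the bounds discussed here (a small minimum up to the 32k consensus, taken as 32..32768); it is not part of the API.]

```javascript
// Illustrative validator (hypothetical): checks that a candidate FFT size
// is a power of two and falls inside the assumed bounds 32..32768.
function isValidFftSize(n, min = 32, max = 32768) {
  // n & (n - 1) clears the lowest set bit; it is 0 only for powers of two.
  const isPowerOfTwo = Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
  return isPowerOfTwo && n >= min && n <= max;
}
```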
00:04:15 [hongchan]
cwilso: issue: smoothing is performed on method call
00:04:18 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/377
00:05:39 [hongchan]
cwilso: ray raised a problem caused by non-consecutive smoothing executions.
00:06:57 [hongchan]
TPAC RESOLUTION: Clarify analysis frame only on getFrequencyData
00:07:41 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/308
00:08:01 [hongchan]
Issues on shared methods in OfflineAudioContext
00:11:35 [hongchan]
cwilso: polling data from the Media API faster than real time is not possible
00:15:19 [hongchan]
TPAC Resolution: Adopt ROC's original suggestion of making both the offline and realtime AudioContexts inherit from an abstract base class that doesn't contain the methods in question.
00:15:24 [hongchan]
Note: Ask Cameron McCormack to pronounce on best way to describe in WebIDL.
00:16:05 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/268
00:16:15 [hongchan]
Not relevant any more. Closing.
00:16:58 [hongchan]
Noise Reduction should be a float.
00:17:01 [hongchan]
https://github.com/WebAudio/web-audio-api/issues/243
00:18:28 [hongchan]
Closing issue 243.
00:18:51 [hongchan]
Moving on to Issue 128 - https://github.com/WebAudio/web-audio-api/issues/128
00:23:34 [hongchan]
using the .value setter is sort of a training wheel, so it shouldn't be used for serious parameter control.
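[Editorial note: the contrast is between an immediate, unscheduled `param.value = x` write and scheduled automation such as `param.setValueAtTime(x, t)`, which is sample-accurate and composable with other events. The toy timeline below is hypothetical and is not the real AudioParam.]

```javascript
// Toy timeline (not the real AudioParam) contrasting a direct .value write
// with scheduled setValueAtTime events: scheduled events take effect at
// their scheduled time and compose; .value is an immediate overwrite.
class ToyParam {
  constructor(value) {
    this.value = value; // "training wheel": immediate, unscheduled write
    this.events = [];   // scheduled automation events
  }
  setValueAtTime(value, time) {
    this.events.push({ value, time });
    this.events.sort((a, b) => a.time - b.time);
  }
  valueAt(t) {
    // Most recent event at or before t wins; fall back to .value.
    let v = this.value;
    for (const e of this.events) if (e.time <= t) v = e.value;
    return v;
  }
}

const p = new ToyParam(0);
p.setValueAtTime(1, 0.5);
p.setValueAtTime(0.25, 1.0);
```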
00:24:43 [hongchan]
mdjp: No behavioral changes on API. Editorial changes.
00:25:33 [hongchan]
Moving on to Issue 73 - https://github.com/WebAudio/web-audio-api/issues/73
00:25:53 [hongchan]
cwilso: introspective nodes should not be introduced
00:26:18 [hongchan]
mdjp: closing.
00:27:27 [hongchan]
Issue 317 - https://github.com/WebAudio/web-audio-api/issues/317
00:28:56 [olivier]
rrsagent, draft minutes
00:28:56 [RRSAgent]
I have made the request to generate http://www.w3.org/2014/10/27-audio-minutes.html olivier