W3C

- DRAFT -

SV_MEETING_TITLE

26 Jun 2019

Attendees

Present
rtoyg_m2, mdjp, scottlow, padenot, hoch, cwilso, rtoy
Regrets
Chair
SV_MEETING_CHAIR
Scribe
scottlow

Contents


<padenot> hoch now presenting the Audio Device Client API

<padenot> hoch Multiple benefits:

<padenot> hoch a low-level audio I/O callback, without the built-in building blocks of the Web Audio API

<padenot> hoch also, better access to audio hardware, which is a shortcoming of the current API, and also there should be a way to get notification on device changes, xruns, etc.

<padenot> hoch often, user code would like to have notifications about all that

<padenot> hoch the last thing, which is quite important to us, would be to have a dedicated scope and associated rendering thread, along with real-time thread

<padenot> hoch we have engaged with the security folks

<padenot> hoch but the question is "why are we doing this?"

<padenot> hoch close the app gap for audio

<padenot> hoch the missing piece is a low level audio api, like wasapi, coreaudio, etc.

<padenot> hoch the Web Audio API is not a replacement for this

<padenot> hoch in early review of the Web Audio API spec, this was identified by the tech review

<padenot> hoch subclassing was bolted on, but ScriptProcessorNode wasn't good enough

<padenot> hoch hence, the AudioWorklet system

<padenot> hoch there was a blog post, asking "who is the audience for this web audio api"

<padenot> hoch quite high level, but also quite low level, depending on the perspective

<padenot> hoch no way for authors to create low level DSP code

<padenot> hoch main issue: ScriptProcessorNode runs on the main thread, and this creates tons of problems

<padenot> hoch the main benefit of the AudioWorkletNode is to run audio code off main thread

<padenot> hoch developers ask "how do I port my existing audio code to AudioWorklets"

<padenot> hoch use WASM, which exists now

<padenot> hoch pass WASM modules to an AudioWorklet
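
[Editor's sketch, not shown at the meeting: one common way to run compiled code inside an AudioWorklet. The addModule / processorOptions / registerProcessor structure is the standard AudioWorklet API; the dsp.wasm module and its exports (process_block, in_ptr, out_ptr, memory) are placeholders.]

    // main thread (inside an async function)
    const ctx = new AudioContext();
    await ctx.audioWorklet.addModule('wasm-processor.js');
    const wasmBytes = await (await fetch('dsp.wasm')).arrayBuffer();  // hypothetical module
    const node = new AudioWorkletNode(ctx, 'wasm-processor', {
      processorOptions: { wasmBytes }   // ArrayBuffers are structured-cloneable
    });
    node.connect(ctx.destination);

    // wasm-processor.js
    class WasmProcessor extends AudioWorkletProcessor {
      constructor(options) {
        super();
        this.ready = false;
        // Instantiate inside the rendering scope; output silence until ready.
        WebAssembly.instantiate(options.processorOptions.wasmBytes).then(({ instance }) => {
          this.dsp = instance.exports;                               // placeholder exports
          this.heap = new Float32Array(this.dsp.memory.buffer);
          this.ready = true;
        });
      }
      process(inputs, outputs) {
        const input = inputs[0][0], output = outputs[0][0];
        if (!this.ready || !output) return true;
        if (input) this.heap.set(input, this.dsp.in_ptr.value / 4);  // copy into the wasm heap
        this.dsp.process_block(output.length);                       // run the compiled kernel
        const o = this.dsp.out_ptr.value / 4;
        output.set(this.heap.subarray(o, o + output.length));        // copy the result back
        return true;
      }
    }
    registerProcessor('wasm-processor', WasmProcessor);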

<padenot> hoch mentioning his talk at Google IO this year

<padenot> hoch AudioWorkletNode is, at its core, an AudioNode, better suited for small, node-style processing

<padenot> hoch the render quantum is fixed to 128 frames, we can't change this in the spec

<padenot> hoch this adds an irreducible 128-frame buffer

<padenot> hoch people come up with a new design pattern, using lots of cutting edge technologies: Web Worker, SharedArrayBuffer, AudioWorklet

<padenot> hoch the AudioWorklet does little: data is just written to it

<padenot> hoch the communication is done using the SharedArrayBuffer

<padenot> hoch this is a promising approach

<padenot> hoch the AudioWorklet is constrained by the 3ms render quantum, but you're processing in the Worker so this is more flexible
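
[Editor's sketch of the pattern being described, with assumed names: a Worker (not shown) produces audio into a SharedArrayBuffer ring buffer and the AudioWorkletProcessor only dequeues frames. The 'consumer-processor' name, ring size, and index layout are illustrative; ctx and worker are assumed to exist.]

    // main thread: one SharedArrayBuffer for samples, one for read/write indices
    // (after ctx.audioWorklet.addModule('consumer-processor.js'))
    const RING_FRAMES = 8192;
    const ringSab  = new SharedArrayBuffer(RING_FRAMES * 4);   // mono Float32 samples
    const stateSab = new SharedArrayBuffer(8);                 // [readIndex, writeIndex] as Int32
    worker.postMessage({ ringSab, stateSab });                 // the heavy DSP lives in the Worker
    const node = new AudioWorkletNode(ctx, 'consumer-processor');
    node.port.postMessage({ ringSab, stateSab });
    node.connect(ctx.destination);

    // consumer-processor.js: the worklet only copies 128 frames out per callback
    class ConsumerProcessor extends AudioWorkletProcessor {
      constructor() {
        super();
        this.port.onmessage = ({ data }) => {
          this.ring = new Float32Array(data.ringSab);
          this.idx  = new Int32Array(data.stateSab);
        };
      }
      process(inputs, outputs) {
        const out = outputs[0][0];
        if (!this.ring) return true;                           // not wired up yet
        const read = Atomics.load(this.idx, 0);
        const write = Atomics.load(this.idx, 1);
        const available = (write - read + this.ring.length) % this.ring.length;
        if (available >= out.length) {
          for (let i = 0; i < out.length; i++)
            out[i] = this.ring[(read + i) % this.ring.length];
          Atomics.store(this.idx, 0, (read + out.length) % this.ring.length);
        }                                                      // else: underrun, leave silence
        return true;
      }
    }
    registerProcessor('consumer-processor', ConsumerProcessor);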

<padenot> hoch at the Chrome Dev Summit 2018, there was a demo with a multithreaded audio software

<padenot> hoch still, not a perfect pattern

<padenot> hoch Workers are low priority

<padenot> hoch overly complex setup

<padenot> hoch Worklet is just an endpoint, but it's quite complicated still

<padenot> hoch this is quite a long story

<padenot> hoch to return to the original subject Audio Device Client

<padenot> hoch the most important thing is "isochronous audio IO"

<padenot> hoch designed with WASM in mind, but it will also work with JS

<padenot> hoch configurable buffer size, sample rate, and channel count; handles resampling and reclocking

<padenot> hoch constraint-based, similar to the Media Capabilities API

<padenot> hoch a dedicated scope on an RT thread if permitted

<padenot> hoch no more complex plumbing and thread hops, optimal for WASM-powered audio processing

<padenot> hoch code snippet, still a WIP

<padenot> hoch first, enumerating devices

<padenot> hoch this is an existing API, this allows getting the device ID

<padenot> hoch then create constraints, pass them to getAudioDeviceClient, pass in a JS module, and call start() (governed by the autoplay policy)

<padenot> hoch in the global scope, similar to AudioWorklet, a process function is called, etc.
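
[Editor's sketch of the flow just described, following the proposal's general shape; getAudioDeviceClient, the constraint names, and the global process-registration hook are proposed or placeholder names, not shipped APIs. enumerateDevices is the existing API mentioned above.]

    // main thread (inside an async function)
    const devices = await navigator.mediaDevices.enumerateDevices();           // existing API
    const outputId = devices.find(d => d.kind === 'audiooutput').deviceId;

    const constraints = {                        // illustrative constraint names
      outputDeviceId: outputId,
      sampleRate: 48000,
      callbackBufferSize: 512,
      outputChannelCount: 2,
    };

    const client = await navigator.mediaDevices.getAudioDeviceClient(constraints); // proposed
    await client.addModule('device-client.js');  // code for the dedicated scope
    await client.start();                        // gated by the autoplay policy

    // device-client.js — runs in the dedicated (ideally real-time) global scope
    setProcessCallback((input, output) => {      // hypothetical registration hook
      for (let ch = 0; ch < output.length; ch++) output[ch].fill(0);  // render audio here
    });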

<padenot> hoch the discussion has intentionally been started early

<padenot> hoch design discussion

<padenot> hoch tighter WASM integration, collaboration with WASM people

<padenot> hoch Web Audio API integration

<padenot> hoch WebRTC integration? there are questions from the WebRTC folks

<padenot> hoch task scheduler / event loop for real-time use case

<padenot> hoch a real-time priority thread should only be able to be spawned by a top-level document

<padenot> hoch no new attack vector

<padenot> hoch makes a current problem slightly worse ("maybe"); encouraged to experiment by the security review

<padenot> hoch possible mitigation, only allow RT threads in a top-level document

<padenot> hoch one of the ideas

<padenot> hoch and also secure-context, but this is kind of unrelated

<padenot> hoch autoplay policy and secure context apply by default when using this new API

<padenot> hoch the end

<padenot> brian where did the SharedArrayBuffer pattern come from ?

<padenot> hoch there is an article about it

<padenot> padenot this was anticipated and we designed for it initially

<padenot> JackSchaedler we faced problems with the Web Audio API, because the model is not traditional, AudioDeviceClient aligns the different ways of doing audio on different platforms

<padenot> JackSchaedler interested about it

<padenot> JackSchaedler sometimes ScriptProcessorNode is more tolerant, and in certain cases gives better, smoother playback

<padenot> JackSchaedler, trading latency vs. resilience

<padenot> JackSchaedler still, challenging to code for both the ScriptProcessor and AudioWorklet, but it would be best to not have to worry about the thread hops

<padenot> JackSchaedler and not have to do this trade off

<padenot> hoch you did some experiments with the real-time thread

<padenot> JackSchaedler we did, we're going to release some videos about it soon

<padenot> JackSchaedler sometimes, on a macbook pro, this really changes the deal, but sometimes it doesn't quite work

<padenot> JackSchaedler, but we'll continue to experiment and test things

<padenot> JackSchaedler one other thing to add to the conversation: an observation from working with pro audio developers who wrote off the Web Audio API early on, but WASM and AudioDeviceClient change the game

<padenot> JackSchaedler they are starting to be interested in it again

<padenot> brian about WASM integration

<padenot> brian what is the story with tighter WASM integration

<padenot> hoch we talked about the SharedArrayBuffer pattern

<padenot> hoch WASM has its own memory system: you have to render into the WASM heap. The SharedArrayBuffer is separate, so you have to copy lots of data all the time

<padenot> hoch the goal is to use user-provided memory, a slice of the WASM heap

<padenot> hoch there are some technical issues to address

<padenot> hoch concurrent access from the audio callback and the rest of the system
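
[Editor's illustration of the copies being discussed, with assumed names (sharedRing, heapF32, dsp.in_ptr / out_ptr / process_block): because the SharedArrayBuffer and the WASM linear memory are separate allocations today, every block is copied at least twice around the kernel.]

    // today: shared buffer -> wasm heap -> kernel -> wasm heap -> shared buffer
    const block = sharedRing.subarray(readOffset, readOffset + frames);  // view into the SAB
    heapF32.set(block, dsp.in_ptr.value / 4);         // copy #1: into the wasm linear memory
    dsp.process_block(frames);                        // the kernel only sees its own heap
    block.set(heapF32.subarray(dsp.out_ptr.value / 4,
                               dsp.out_ptr.value / 4 + frames));         // copy #2: back out

    // the "tighter WASM integration" idea: let the audio callback hand the kernel a
    // view directly into user-provided wasm memory, removing both copies — at the
    // cost of defining concurrent access between the callback and the rest of the app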

<padenot> philippe will there be a dependency on SharedArrayBuffer ?

<padenot> hoch we don't know yet

<padenot> scottlow how will this integrate with WebRTC

<padenot> hoch unclear, but maybe expose a MediaStream-based interface

<padenot> hoch (just an idea)

<padenot> rtoy WebRTC apps use Web Audio API today, for example to show a volume meter

<padenot> scottlow saw impressive demos at a conference relying on all that

<padenot> JackSchaedler how important is the Web Audio API integration

<padenot> hoch AudioDeviceClient is a kind of proxy for the hardware

<padenot> hoch and we use the id to route to the correct device

<padenot> hoch what if we can use the AudioDeviceClient to run the AudioContext, and run it to the correct device

<padenot> hoch another idea: the AudioDeviceClient would have a getContext method

<padenot> hoch these are opposite ways of looking at it

<padenot> hoch the first approach allows running multiple AudioContexts on the same device

<padenot> rtoy is that important ?

<padenot> cwilso we don't know, but this is a good way to do true extensible web layering, and to make the Web Audio API appear less magical

<padenot> cwilso which is not necessarily bad, but now we have an opportunity to explain the layer below the Web Audio API

<padenot> cwilso and ignore the Web Audio API completely if people want to do that, but they won't have to

<padenot> rtoy there won't be a complete separation between AudioContext and AudioDeviceClient

<padenot> rtoy they are inherently linked

<padenot> hoch one of the issues is technical: if we mention the Web Audio API in the AudioDeviceClient, then we need to explain how the rendering of the Web Audio API is done

<padenot> hoch which complicates things

<padenot> hoch *shows code example live*

<padenot> hoch calling the AudioContext process method (`contextCallback`): this renders the AudioContext into a buffer

<padenot> hoch this is a good thing to have

<padenot> ... this is used when authors want to use both in the same app ?

<padenot> hoch yes, this makes it easier to figure out what is going on, but there are a lot of open questions and interesting problems we have to deal with; this will take some time to sort out
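
[Editor's sketch of the shape shown live; contextCallback (rendering the AudioContext graph into a buffer on demand) and the wasm-side rendering helper are hypothetical, not part of any shipped API.]

    // inside the AudioDeviceClient render callback
    function process(input, output) {
      const frames = output[0].length;
      const graphBus = contextCallback(frames);    // hypothetical: pull one block from the AudioContext
      const wasmBus  = renderWasmVoices(frames);   // placeholder for the app's own WASM rendering
      for (let ch = 0; ch < output.length; ch++) {
        for (let i = 0; i < frames; i++) {
          output[ch][i] = graphBus[ch][i] + wasmBus[ch][i];   // author-controlled mixing
        }
      }
    }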

<padenot> hoch maybe we don't want integration

<padenot> cwilso integration needs to be done properly, and the abstraction should be uni-directional

<padenot> cwilso don't pay for what you don't use

<padenot> cwilso one of the poor layering aspects of the Web Audio API is that you cannot really reimplement HTMLMediaElement on top of it

<padenot> cwilso expects that in the future, the Web Audio API could be explained in totality by the AudioDeviceClient

<padenot> chrisguttandin could one reimplement the Web Audio API in terms of AudioDeviceClient

<padenot> cwilso that's an explicit goal indeed

<padenot> chrisguttandin the same for AudioWorklet and any AudioNode

<padenot> rtoy almost, the listener and the panner have a magic connection that is a bit hard to emulate

<padenot> brian if someone wants to work at the higher level (Web Audio API), would the new API change anything?

<padenot> cwilso no, this won't change anything

<padenot> cwilso the current API will still continue to work as is

<padenot> cwilso I don't have a pile of C++ that I want to run, but others do

<padenot> brian what's the story with hybrid apps, use both AudioDeviceClient and the Web Audio API

<padenot> hoch that's the use case we want to support

<padenot> cwilso but really you might want to use AudioWorklet

<padenot> cwilso most of the people who are going to use AudioDeviceClient have strict requirements

<padenot> JackSchaedler you'd only mix-and-match when prototyping

<padenot> JackSchaedler hybrid is a convenience for prototyping, but in the later stage, you just write everything

<padenot> ... for the people that want to use custom code

<padenot> ... it's directly going to the hardware? How would I use the AnalyserNode?

<padenot> cwilso use AudioWorklet !

<padenot> hoch use postMessage also, or SharedArrayBuffer

<padenot> cwilso there are plenty of libraries available to do this

<padenot> mbuffa but what about DOM access

<padenot> cwilso just write the code !

<padenot> mbuffa a common comment from DSP folks is that the analysers are useful, and hybrid can be nice

<padenot> hoch AudioDeviceClient has only raw input, but maybe the WebRTC people might not like this

<padenot> ... how would I use the AGC and co ?

<padenot> hoch *whiteboards*

<padenot> hoch explains how to do analysis with AudioDeviceClient

<padenot> hoch passing the data using a SharedArrayBuffer

<padenot> mbuffa why are we stuck with 128-frame blocks? we do 1-sample processing with Faust

<padenot> rtoy you're still being called back every 128 frames

<padenot> brian maybe with tighter integration with WASM, authors will be able to have access to the Heap as well

<padenot> rtoy you have to use it from both sides

<padenot> hoch mentions padenot's Freeze idea from yesterday

<padenot> rtoy for visualizer, this works, if you're careful

<padenot> philippe Audio Device Client has less plumbing and less setup than the AudioWorklet + SharedArrayBuffer pattern

<padenot> philippe you provide a callback, it gets called on a high priority thread. in a real application, you're going to have to deal with user input, which has an effect on the audio, and that requires processing of some things that run on the app clock, not the audio clock

<padenot> philippe wwise has a thread that runs high priority, takes events from the game, in sync with the game loop, and based on that, we do the audio processing

<padenot> philippe we need multiple threads with high priority

<padenot> hoch you need high priority workers

<padenot> philippe yes, this is one way

<padenot> philippe not realistic for games, you have a synchronization issue

<padenot> philippe you need an event queue

<padenot> hoch what is the clock for games?

<padenot> philippe it depends, but it's just an event queue. it gets woken up, processes a bit, and then goes back to sleep

<padenot> hoch synchronous or asynchronous

<padenot> philippe just a simple semaphore

<padenot> philippe unclear if this will solve the problem we have

<padenot> hoch use two AudioDeviceClient

<padenot> hoch it's a hack but it would work

<padenot> philippe multiple RT threads in the same page?

<padenot> hoch you can have multiple threads already with multiple AudioContexts

<padenot> hoch *clarifies the processing model of Worklet*

<padenot> hoch it's a broader question, creating RT threads for workers

<padenot> philippe it's a clock issue

<padenot> philippe I'm talking about this more later this week

<padenot> ... can an instance of AudioDeviceClient spawn more than one AudioContext

<padenot> hoch undecided yet

<padenot> rtoy we started with one, but this is not decided yet

<padenot> rtoy originally a 1:1 mapping

<padenot> mbuffa you can use this at a low level, or with the Web Audio API, what are the use cases

<padenot> mbuffa when would I use AudioWorklet, when would I use AudioDeviceClient

<padenot> hoch it's up to the developer, and it depends on the size of the application

<padenot> rtoy with AudioDeviceClient, you could have lower input latency

<padenot> mbuffa it's a bit frightening to be at such a low level

<padenot> rtoy web audio is not going away

<padenot> philippe what do you mean by resampling? this was mentioned

<padenot> hoch simple audio output and input resampling

<padenot> philippe why ?

<padenot> hoch this is an ask from developers

<padenot> rtoy two things: you can get the normal sample-rate, but you can also render at lower sample-rate, this is useful

<padenot> rtoy it adds flexibility

<padenot> padenot for emulators as well

<padenot> philippe this is supposed to be the lowest level API

<padenot> philippe you have sometimes APIs that let you run interleaved, int16, etc.

<padenot> philippe ideally, AudioDeviceClient would let you pick interleaving, sample-type, etc.

<padenot> rtoy hard to know from a browser perspective

<padenot> philippe in wwise, lots of permutations of the same algorithm for interleaving, sample-type conversion, etc. we want to use this

<padenot> hoch this is about making sure the audio data is in the right format and is not touched later

<padenot> philippe I'm expecting this from a low level API

<padenot> JackSchaedler not so sure I want to do that

<padenot> philippe this would be optional

<padenot> hoch we should make some notes about that

<padenot> philippe this is in the interest of performance, if we want to max things out, it's the way to go

<padenot> break - 15min

<hoch> rtoy CG input on WebAudio V2 and WebMIDI next

<hoch> rtoy we're looking for input from CG

<hoch> rtoy high priority issues V2

<hoch> rtoy - pulseWidth oscillator

<hoch> rtoy - phase offset in oscillator

<hoch> rtoy - noise source

<hoch> (uniform/gaussian)

<hoch> rtoy - some generalized approach on dynamics processor

<hoch> (noise gate, expander, compressor, limiter...)

<hoch> mbuffa found a good impl for a compressor, so we compiled it to WASM and used it in an AudioWorklet. It's good now

<hoch> mbuffa not sure if one implementation of dynamics processor can accommodate all the use cases

<hoch> briangins has anyone used the compressor as a limiter?

<hoch> padenot yeah our compressor impl is for master bus

<hoch> briangins source node's start(), stop(), cancel() might be worth revisiting. The current answer from the WG is not that satisfactory.

<hoch> briangins this is about canceling a stop() call that is already scheduled.

<rtoyg_m2> Discussing https://github.com/WebAudio/web-audio-api/issues/1944

<hoch> Hi chris

<hoch> Please join here: https://teams.microsoft.com/_#/pre-join-calling/19:meeting_OGIwOTY0NWEtYTExZC00MWM1LWJjY2EtZDhjNzQzMzc2NzVh@thread.v2

<hoch> briangins perhaps no one in CG encountered this problem?

<hoch> rtoy it came up in a different place (in recent discussions on cancel() method)

<hoch> rtoy perhaps people already got used to the stop(hugenumber) hack.
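
[Editor's sketch of the workaround mentioned; it relies on the reading of the spec that a later stop() call replaces a previously scheduled stop time, which is exactly the behavior the issue asks to make explicit with a cancel method. ctx and buffer are assumed to exist.]

    const src = new AudioBufferSourceNode(ctx, { buffer });
    src.connect(ctx.destination);
    src.start();
    src.stop(ctx.currentTime + 4);   // stop scheduled 4 seconds from now
    // ...later, before the 4 seconds are up, the app changes its mind:
    src.stop(1e9);                   // no cancel exists, so push the stop far into the future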

<hoch> (yeah it needs registration)

<hoch> khoi@ perhaps it could be named cancelScheduledStop()?

<hoch> padenot@ can you clarify what doesn't work?

<hoch> briangins@ (re-explains the case)

<hoch> rtoy@ let's reopen the issue and revisit it later.

<hoch> mdjp@ is it v2 or v1?

<hoch> hoch@ it's v2.

<hoch> khoi@ somebody opened an issue on "polynomial" ramping?

<hoch> hoch@ this is about cross-fading multiple sources, not panning.

<hoch> khoi@ this can be done with setValueCurveAtTime()

<rtoyg_m2> Discussing: https://github.com/WebAudio/web-audio-api/issues/671

<hoch> khoi@ I didn't raise the issue, but it seems doable with setValueCurveAtTime().

<hoch> rtoy@ I am opposed to adding a new method to AudioParam automations.

<hoch> rtoy@ perhaps we could use different interpolation math for setValueCurveAtTime()

<hoch> rtoy@ but we have to be careful because it can get tricky and unpredictable
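
[Editor's sketch of what "doable with setValueCurveAtTime()" can look like: an equal-power cross-fade between two sources, each behind its own GainNode, driven by precomputed curves. The helper name and step count are illustrative.]

    // gainA/gainB are GainNodes in front of the two sources being cross-faded
    function crossfade(gainA, gainB, startTime, duration, steps = 65) {
      const fadeOut = new Float32Array(steps);
      const fadeIn  = new Float32Array(steps);
      for (let i = 0; i < steps; i++) {
        const x = i / (steps - 1);
        fadeOut[i] = Math.cos(x * Math.PI / 2);   // source A: 1 -> 0, equal power
        fadeIn[i]  = Math.sin(x * Math.PI / 2);   // source B: 0 -> 1, equal power
      }
      gainA.gain.setValueCurveAtTime(fadeOut, startTime, duration);
      gainB.gain.setValueCurveAtTime(fadeIn,  startTime, duration);
    }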

<hoch> rtoy@ if there's nothing else, we can talk about Web MIDI

<hoch> cwilso@ summary: the biggest issue is back pressure

<hoch> cwilso@ this is useful for dumping big sysex data

<hoch> cwilso@ API itself is already straightforward and intuitive.

<hoch> cwilso@ backpressure is not a part of MIDI protocol

<hoch> cwilso@ we want to handle this problem in V1 somehow.

<hoch> cwilso@ Using stream might be a path, but we also do not want to rely on that completely.

<hoch> chrisguttandin@ Can we do something similar to Node, i.e. a Node-like stream model?

<hoch> cwilso@ looked into that, but it might not work well for the firmware update use case where the data size is potentially really big.

<hoch> cwilso@ the Streams API is usable, but the setup cost is quite significant.

<hoch> chrisguttandin@ WebRTC's data channel API might be something to look at?

<hoch> cwilso@ not sure about its potential because by design it is unreliable.

<hoch> cwilso@ the biggest feature request is being able to create a virtual MIDI channel

<hoch> cwilso@ for example, I can create a webaudio synth and expose a virtual MIDI port. So I can use it in the other (native) audio apps.

<hoch> cwilso@ but we have to think about the security issue, because some software synth (a destination of MIDI message) can be vulnerable.

<hoch> hoch@ can the Web MIDI API be used in a Worker?

<hoch> cwilso@ the short answer is yes, but...

<hoch> hoch@ we want to offload from the main thread, which is always busy with other UI tasks.

<hoch> JackSchaedler@ chrisguttandin@ yes we want this

<hoch> cwilso@ last time we visited this, a lot of components were not ready, but I think we can do this now. It's even in the V1 milestone

<hoch> cwilso@ stream is useful, but there's a price to pay

<hoch> JackSchaedler@ is there anything we can help with, as an industry partner? (Firefox Web MIDI launch)

<hoch> padenot@ I'll follow up on that after F2F

<hoch> padenot@ there's less resistance on our side, so I am hopeful

<hoch> briangins@ V1 Web Audio, what's the transition process?

<hoch> chrislilley@ we really need to close the open issues quickly before it gets reviewed by the director.

<hoch> chrislilley@ we need to have 2 working implementations, and we are waiting on Firefox

<hoch> rtoy@ does Edge count as another implementation?

<hoch> chrislilley@ well it depends, but generally it's chromium

<hoch> scottlo@ even before the new Edge we relied on the open source code

<hoch> chrislilley@ we'll review the submission on next Friday

<hoch> chrislilley@ it's good to have updates on Web MIDI and get it published

<hoch> cwilso@ let me take another run on issues

<hoch> chrislilley@ that'd be great to get updates

<hoch> rtoy@ that's it for the morning. let us continue after lunch

<AdenFlorian> howdy!

<rtoyg_m2> Aloha!

<AdenFlorian> you at the meeting?

<rtoyg_m2> Oh, we're having lunch. We start up again at 1pm (Seattle time).

<AdenFlorian> ah, ok

<rtoyg_m2> Lunch is over.

<cwilso> scribenick: scottlow

Audio Device Client

hoch Would like to discuss Web Audio API integration

<hoch> https://github.com/WebAudio/web-audio-cg/issues/5

hoch there are two sets of thoughts: 1) Chris has an example of modifying AudioContext constructor; 2) hoch's last two comments which is DeviceClient has a getContext() method to get context out of it

<hoch> https://github.com/WebAudio/web-audio-cg/tree/master/audio-device-client

hoch to help understand where we are there are explainers and code examples at the above link (worth checking out proposed code snippets)

hoch all relevant code snippets are in #5 already

hoch I think the first proposal (modifying constructor) is more flexible since you can pass DeviceClient object to multiple AudioContexts. We don't know what the practical use case of this is, but it's always nice to be flexible
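
[Editor's sketch of the two shapes discussed in issue #5; neither the AudioContext constructor option nor getContext() is specified anywhere, so the names below are placeholders.]

    // proposal 1: pass the device client into the AudioContext constructor —
    // one client can back several contexts rendering to the same device
    const client = await navigator.mediaDevices.getAudioDeviceClient(constraints);
    const ctxA = new AudioContext({ deviceClient: client });   // hypothetical option
    const ctxB = new AudioContext({ deviceClient: client });

    // proposal 2: ask the client for its context — a 1:1 mapping, with the client
    // as the only object that talks to the hardware
    const ctx = client.getContext();                           // hypothetical method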

cwilso Do we have any characterization of why people use multiple AudioContexts?

hoch we need more data here. Guess is the reason is to sniff between prefixed and non-prefixed WebAudio (first one sniffs and second one is used)

hoch have seen libraries use multiple AudioContexts to sniff various values and leave these hanging around

hoch ADC addresses this by setting up a constraint and sending this to the API

rtoy don't really see a use for multiple contexts, but you never know what people will do

cwilso I don't see a use for it either, but not sure why people are still doing this

rtoy are there people who think you need to create multiple?

hoch Had a chat with someone creating an audio-based app that was trying to use Web Audio to do gain control. Folks were asking whether a new context needed to be used for every media element

brian I had three students do WebAudio projects. Two of them also thought you needed to use multiple AudioContexts

hoch being flexible is nice, but not sure how to handle multiple contexts in AudioDevice callback

rtoy I like the first approach since it separates concerns

rtoy: process method calls ADC callback. In the browser, we run the normal audio context method (i.e. not exposing them at all)

rtoy if you want to do mixing yourself, then if nothing is exposed you can't do this

brian if you have two callbacks (one for input and one for output) you can say that the output callback is executed after all work is done

hoch assumption is that you want to apply your code after graph render

hoch if I want to have flexibility, I want to call the WASM function first, take the render result, and pass that into the audio graph for other processing. Then there could be a post-render operation

Sorry that was phillipe not brian

rtoy hoch wants to take output from graph and mix it with output from WASM code and mix it together in some way

rtoy what I'm proposing prevents that

rtoy if that's an important use case, then we have to go with what hoch is saying

rtoy if you have a context callback, it only makes sense to have one. I'm okay with this

rtoy if you really want to mix the output of Web Audio and WASM before you send it out, my idea does not work. The only question now is whether that's an important use case

brian this is what we were discussing earlier about hybrid apps

hoch when we talk about hybrid apps, we need to be able to mix these two and expose context callback

khoi could we use a worklet for this?

hoch depends on scale of application

rtoy rendering quantum on worklet sometimes makes code not feasible

hoch use case: convolver implementation

rtoy won't audio engines already have convolvers?

hoch this integration is all about supporting hybrid implementations

hoch by hybrid I mean you have code in WebAudio but want to use ADC to take advantage of device constraints/notifications

hoch hybrid in terms of API usage not WASM usage

rtoy you don't need a getContext in ADC in this case. If you want to take WASM code and audio graph and do processing at the end, then you do need a context callback

phillipe when you have a hybrid app, in this case, the AudioContext is a big computation function. It doesn't talk to hardware directly. It will just return processed samples to ADC

phillipe isn't this like OfflineAudioContext? Render on demand? Would it be crazy to pass an OfflineAudioContext as part of a postMessage?

hoch where does postMessage happen?

phillipe in the spec, can you post message to ADC?

phillipe: would it be possible to let the user mix the two worlds together by postMessaging an OfflineAudioContext to the ADC?
... this would be a way to keep the two specs separate

rtoy only problem is that offline context isn't a streaming type thing. If you want to mix yourself with an offline context, you have a fixed number of samples you're going to generate

rtoy this may be solved by a continuous offline context. Then what you're saying could work

hoch can you pass a stream object via postMessage?

padenot we can check

hoch you'd lose sample accuracy. Being able to callback directly means you know the number of frames

padenot they don't seem to be transferable. It errors

rtoy if mixing is important then you have to have a context in ADC. I don't think we can close on this today since we need more data.

hoch I have an HTMLMediaElement and I want to process that with ADC. How do I do it?

padenot you'd need AudioContext. Same for WebRTC

rtoy fundamental question is do you want to be able to get audio samples out of an AudioContext so that you can play with them in the ADC?

hoch if that's the case, then I want to have getContext on ADC

padenot I think this is important. Let's consider requests that we haven't been able to answer. A simple case: pitch shifting for long audio tracks.

padenot If you want to apply any processing (or measure) a media element, you'd need this. How would you operate echo cancellation in WebRTC without this?

padenot another scenario: any involved VoIP application where you have the concept of an active speaker (ducking various people) and maybe increasing volume of someone who is speaking

padenot you'd want time stretching and/or pitch shifting

padenot any custom processing on MediaElement/Media Stream

padenot the source of the media is not local. Network/remote parties

hoch ideally with low-level API, you should be able to implement media element on top of this

phillipe using ADC as an output selector for Web Audio code is a valid use case

rtoy more convincing use case is mixing (as discussed earlier). Maybe we don't want to prevent this use case

hoch we should ask WebRTC people. Say this is our proposal and see how much interest there is

rtoy agreed

padenot: what speech people want to do is either simple or complex. Any decent WebRTC application is going to use an AudioContext to do their mixing
... You take your media streams and mix them together (maybe using relative gains)

rtoy: Do they actually use AudioContext for this?

padenot: absolutely. That's the best way
... The hard bit is custom echo cancellation for example

phillipe: What was the rationale behind having a single object representing both input and output?

hoch that's why we came up with different operation modes. Aggregate mode is default since it's much easier to reason over, but if you're able to handle multiple callbacks, you should go raw mode.

padenot: We don't talk with system APIs directly. What we do is aggregate and reclock all the audio streams we need and do one IPC transaction. Get input, process, send output. This reduces uncertainties

phillipe: what's raw mode for then?

hoch we have the browser audio mixer, so it makes sense to have a single callback. If the underlying implementation changes somehow, it makes sense to split them.
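
[Editor's sketch of the two operation modes as described here; the names and callback shapes are placeholders for whatever the proposal ends up specifying.]

    // "aggregate" mode (the default): the browser reclocks input and output and
    // delivers them to one callback, which is easy to reason about
    client.process = (input, output) => { /* read input, write output */ };

    // "raw" mode: input and output surface separately, each with its own clock,
    // for applications that can handle the synchronization themselves
    inputClient.process  = (input)  => { /* capture path */ };
    outputClient.process = (output) => { /* render path */ };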

phillipe: Requiring a consumer to make this choice seems strange since I don't know the advantages/drawbacks of each

hoch: do you prefer to have these separate?

phillipe: Yes. I'd expect to have two separate entities

hoch: We should give some more thought to modes

rtoy: We should see how low we can go and decide if that's what we actually want to expose
... Since we need to go from renderer to browser, we'll never be as good as native

phillipe: I'd also never expect that though, so seems okay

rtoy: Android sometimes has a DSP in the audio path. In some platforms you can turn it off and in some cases you can't. That affects how low you can really get
... Does low mean turn off the DSP when you can?

padenot: for the output of a web browser, turning off the DSP seems like a good idea
... 20ms to do an EQ is too much

rtoy: Problem is that you don't know what that code is doing though

<padenot> https://www.w3.org/2001/12/zakim-irc-bot.html

<hoch> sigh

<hoch> rtoy: what's next for V2 and ADC?

<hoch> rtoy: mdjp suggested picking the best course of action: 1) gradual changes to V1, or 2) a new spec doc for V2?

<hoch> padenot: the gradual approach seems sensible

<hoch> rtoy: we don't want to add premature spec text to V1.

<hoch> hoch: do we have to delete WD after we publish Rec?

<hoch> rtoy: I don't know

<hoch> rtoy: we'll keep the V1 as it is and somehow we need to manage the logistics of new additions.

<hoch> padenot: webgl might be an example we can look up

<hoch> padenot: for webgl, v1 doc is frozen and v2 doc is recently updated

<hoch> padenot: v2 is a superset of v1

<hoch> padenot: we'll tag v1

<hoch> hoch: we'll be updating the editor's draft

<hoch> cwilso: keep the github repo and use the "level number".

<hoch> cwilso: we can use GitHub's "release" with the tag.

<hoch> padenot: whenever Chris Lilley tells us the document is stamped (CR, R), then we can release it with a tag.

<hoch> padenot: CSS WG uses the "level" scheme

<hoch> rtoy: that works for me - "the living document"

<hoch> padenot: some changes might not be compatible with V1 so having an ED is sensible

<hoch> scottlo: an explainer document for the scope of "level 2" might be nice

<hoch> rtoy: (filling in Chris Lilley)

<hoch> rtoy: we need to finish off the remaining issues in webaudio

<hoch> hoch: can we clarify the venue for the discussion?

<hoch> rtoy: let's have CG discuss everything. we have to discuss V2 brainstorming including ADC, so it doesn't make sense to separate them out

<hoch> rtoy: what about web midi?

<hoch> cwilso: I don't know whether some of the v2 issues are easily solvable, so we'll focus on finishing V1 for the time being

<hoch> chris_lilley: WPT for web midi?

<hoch> cwilso: we have an issue for that

<cwilso> specifically: https://github.com/WebAudio/web-midi-api/issues/184

<hoch> hoch: ADC prototype and more iteration based on the feedback from developers and partners

<hoch> chris_lilley: next Friday after the director meeting, and it will get published on following Tuesday or Thursday.

<hoch> padenot: we still need to scrub the backlog and high priority issues.

<hoch> padenot: we can try to approach the public via twitter, posting the github issue query.

<hoch> hoch: let's do some housekeeping on github tools

<hoch> padenot: I'll do that while you're away

<hoch> rtoy: anything else?

<hoch> rtoy: (closing remarks)

<hoch> rssagent create minutes

<hoch> rssagent make minute

<hoch> rssagent make minutes

<hoch> rssagent make log public

<rtoyg_m2> https://www.w3.org/2002/03/RRSAgent

<hoch> rssagent, make logs public

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/06/26 22:26:50 $
