W3C

- DRAFT -

Audio Working Group Teleconference

15 May 2014

Agenda

See also: IRC log

Attendees

Present
Regrets
Chair
olivier
Scribe
chrislowis

Contents

  * Topics
      1. Review Action Items
      2. Constructors CfC
      3. Review PRs for promises
      4. MIDI demos page
      5. AOB
  * Summary of Action Items

<olivier> trackbot, start meeting

<trackbot> Date: 15 May 2014

<cwilso> sigh

<padenot> am I aaaa ?

Review Action Items

<olivier> https://www.w3.org/2011/audio/track/agenda

olivier: action-101 on me to organise the constructors CfC
... that's organised now.

<olivier> action-101 done

<olivier> action-101 closed

<trackbot> Closed action-101.

olivier: I've checked with the TPAC organisers, and we're not overlapping.
... Monday-Tuesday should be for the audio group.
... Wednesday all together.
... I haven't heard from Matt recently (re ACTION 87), so I'm leaving it open.

<olivier> action-87

<trackbot> action-87 -- Matthew Paradis to Review http://docs.webplatform.org/wiki/apis/webaudio, suggest updates -- due 2014-03-06 -- OPEN

<trackbot> http://www.w3.org/2011/audio/track/actions/87

olivier: ACTION 89 is in progress.

<olivier> action-89

<trackbot> action-89 -- Paul Adenot to Look at current implementations, and draft interface to request mathematical oscillator (issues/127) and specify phase (base on pr 270) -- due 2014-03-06 -- OPEN

<trackbot> http://www.w3.org/2011/audio/track/actions/89

padenot: that's right, the last update was that people weren't sure whether it was needed.
... I think we can close #127

olivier: I'd like to have something for this, even if it's in the backlog.

padenot: #244 ?

olivier: that looks ok. It's odd that we had something discussed, a PR and now we're not sure ... but?

cwilso: I'm trying to remember; I think I commented on the first PR that I thought he had claimed the oscillators were mathematically defined, having fixed the phase issue.

olivier: Action 96. We have an agenda item on it.
... Actually it's done; we have proposals etc. I think we can close it?

cwilso: I think so - there's one subtlety here that I wanted to bring up.

<olivier> cwilso: mentions https://github.com/WebAudio/web-audio-api/issues/113#issuecomment-42858944

cwilso: I think there's a tension between providing something that will make ScriptProcessors "first class" - i.e. make them "zero" latency - and being able to farm them off to workers and do parallel stuff.
... really the only way to make things zero latency is to make things synchronous - that is, you can ask for the input and the output at the same time and get the same result.
... which is not how they're designed at the moment.
... I think it's a better idea to put the ScriptProcessor in the "audio thread".
... so the question I have here is: are we looking for something that turns ScriptProcessor into a node with zero latency, or are we looking for something that turns them into something that can be parallelised?

joe: my recollection here is that we wanted something that could enable algorithmic sound generation from the audio thread.

cwilso: having the code hop between the threads (main thread vs audio thread) means that it's almost impossible to avoid latency.

joe: I've never been too keen on the idea of explaining the rest of the API in terms of the ScriptProcessor node - it seems a bit pedantic.
... but as an application developer, I want to know that if I put JS into that node, I'll get the best deal for doing so.
... I don't know if a "worker" is needed in order to do that; I think they just need to be in a "vacuum" so they have minimal dependencies.

padenot: like shaders?


joe: yes! But without the DSL - so just JS that can be uploaded to the audio thread, so to speak.
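
To make the "vacuum" idea concrete, here is a purely hypothetical sketch of a self-contained processing function; the names (process, output, sampleRate) are invented for illustration and do not come from any spec or proposal discussed on this call:

    // Hypothetical, for illustration only: a script with no DOM or
    // main-thread dependencies that could in principle be handed to the
    // audio thread. It generates a sine tone algorithmically.
    var phase = 0;
    function process(output, sampleRate) {
      var frequency = 440;
      for (var i = 0; i < output.length; i++) {
        output[i] = Math.sin(phase);
        phase += 2 * Math.PI * frequency / sampleRate;
      }
    }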

<olivier> ScribeNick: chrislowis

joe: I think it's tough at the moment for developers to work out what's happening.

cwilso: I think the problem with that shader idea is that you have to define how it interacts with the rest of its world - does it get its own thread? But if these things get folded into the audio thread, we need to know whether they get their input synchronously.

padenot: I think I'd prefer to have things synchronous on the audio thread.
... but it needs a lot of thinking: (1) how to get things on and off the thread, (2) how to avoid blocking the audio thread.

cwilso: locking / glitching will be possible in that case.
... if we had an async system like we have now ... then the rest of the system could run, but the ScriptProcessor would be on another thread so it would "glitch" on its own, which is not better in an audio sense.
... the confirmation I was looking for is that we're not looking for a system that takes advantage of multiple cores to do audio processing in real time.
... it sounds like if you have that use case you need to think a bit harder.

joe: I think that's a fine starting point.
... you can still achieve some parallelisation by processing different parts of the graph in a certain way.

cwilso: I agree with you, Paul, about how to get parameters onto the thread - maybe we need a simple example which would show how to reimplement the GainNode or something like that.
... I think I have enough information to rework my proposal.
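
For reference, a minimal sketch of what such an example might look like with the ScriptProcessorNode API as it stood at the time; ctx, source and gainValue are assumed to exist, and gainValue stands in for the kind of parameter the worker-based proposal would need to marshal onto the processing thread:

    // Sketch only: GainNode-like behaviour via the main-thread
    // ScriptProcessorNode. Assumes an AudioContext `ctx` and a source
    // node `source` are already set up.
    var gainValue = 0.5;
    var processor = ctx.createScriptProcessor(4096, 1, 1);

    processor.onaudioprocess = function (e) {
      var input = e.inputBuffer.getChannelData(0);
      var output = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < input.length; i++) {
        output[i] = input[i] * gainValue; // apply the "gain"
      }
    };

    source.connect(processor);
    processor.connect(ctx.destination);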

olivier: and that covers off one of our agenda items too.

<olivier> action-96 closed

<trackbot> Closed action-96.


olivier: Action 98 is on jernoble who is not here.
... 96 is on me, which I haven't done yet.

Constructors CfC

<olivier> http://lists.w3.org/Archives/Public/public-audio/2014AprJun/0044.html

<olivier> cwilso: http://lists.w3.org/Archives/Public/public-audio/2014AprJun/0047.html

cwilso: Domenic put something in the ticket. The terseness came from having constructors that took some of the parameters.
... I think it would be useful to have some of the parameters in the factories too.
... I was talking to rtoyg - anything that is an AudioNode must be connected to a context.
... There are two things that are constructed that are not part of the context: PeriodicWave and AudioBuffer.
... The former, PeriodicWave, actually does need the context since it's sample-rate dependent.
... The latter, we could talk about whether we could share buffers between contexts. At the moment it has to be created from the context, and it is resampled to the context's sample rate, but it doesn't need to be that way for creating the buffer itself.
... when you're trying to build a DAW or a DJ app, you need multiple audio outputs to make this happen.
... crogers' answer at the time was to have a sound card with multiple channels and to do the routing yourself.
... but if you have two separate audio devices, you'd have two separate AudioContexts - and we don't have that API today.
... the really painful thing today would be duplicating audio buffers across multiple contexts, which you don't want to have to do.
... I don't have a problem with the proposal: you want to make context a required argument to the constructor of all nodes (raising if it's missing), except AudioBuffer.
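
To illustrate the pain point, a sketch (assuming two contexts ctxA and ctxB and a buffer bufA belonging to ctxA) of what duplicating a buffer across contexts involves with the factory API:

    // Sketch only: copying an AudioBuffer's sample data by hand into a
    // second buffer created on another context. `ctxB` and `bufA` are
    // assumed to exist; `copyBufferTo` is an illustrative helper name.
    function copyBufferTo(ctxB, bufA) {
      var bufB = ctxB.createBuffer(bufA.numberOfChannels,
                                   bufA.length,
                                   bufA.sampleRate);
      for (var ch = 0; ch < bufA.numberOfChannels; ch++) {
        bufB.getChannelData(ch).set(bufA.getChannelData(ch));
      }
      return bufB;
    }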


padenot: I agree. We talked to the WebRTC people 3 months ago about switching devices - this API is coming, so we should make sure it fits.

chrislowis: if we introduce constructable nodes, do we keep the existing factories on AudioContext?

olivier: yes, I think we agreed we wouldn't break the existing interface.

padenot: we'd break the world if we didn't!
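
For context, the two styles under discussion, sketched with a GainNode; the constructor form was only a proposal at this point and its exact signature had not been settled:

    // Existing factory method on the context:
    var gain1 = ctx.createGain();

    // Proposed constructor form, taking the context as a required
    // argument (hypothetical as of this call; shown only to illustrate
    // the shape being discussed):
    var gain2 = new GainNode(ctx);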

Review PRs for promises

olivier: I was wondering whether it was needed at all to review these.

<olivier> uncontroversial

MIDI demos page

olivier: as I said earlier, I haven't touched my action item which was to review the demos page.
... in particular, we don't have MIDI demos on that page at all.
... my question is quite simple: is there any reason why we shouldn't have both?

chrislowis: what is the purpose of the demos page?
... is it to show people how to use the APIs etc, as it originally was?

olivier: I agree that I would prefer to only have demos that are aimed at learning, but at some point the demos page transitioned from learning resources to an honour roll of people doing good stuff with the API.
... I don't know whether we want to tell people that we don't need that any more...
... but I don't want us to be the official repo of sanctioned uses.

cwilso: perhaps we should separate it into two parts - the educational stuff at the top, and a second section of cool demos.
... the bar should probably be education / some level of documentation - but that probably rules mine out!

olivier: I think they are mostly like that.

cwilso: 99% of the demos I wrote were mostly "I wonder if I can do this?" And then I did it and the demo was done. And they don't have a lot of dependencies.
... but I didn't document or clarify every line.

olivier: OK, I'm not hearing any objections to adding the Web MIDI demos to the list.
... and I'll add a category.

AOB

olivier: that was all for the agenda that I had. But as it's the first call for a while with both cwilso and padenot, I thought I'd remind you that you wanted to schedule an editors' face-to-face.
... any progress?

cwilso: no, is the short answer. I did mention it to padenot and jernoble... I think we're realistically talking about sometime in August which is getting close to TPAC anyway.

olivier: I'm very happy for editors / implementors to get together and hack on issues together, but I want us to be able to reflect on them as a group.

cwilso: sure.

olivier: AOB?
... then if there is no other business the call is adjourned. The next one is on the 29th May. Any known regrets?

<olivier> Next meeting proposed: 29th May

cwilso: should be ok.

olivier: let me know asap if not.

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2014/05/15 16:51:21 $
