W3C

- DRAFT -

Audio Working Group Teleconference

09 May 2013

See also: IRC log

Attendees

Present
chrislowis, +1.510.334.aaaa, crogers, ehsan, +1.617.600.aabb, Doug_Schepers, +1.858.780.aacc, gmandyam
Regrets
Chair
SV_MEETING_CHAIR
Scribe
chrislowis

Contents


<trackbot> Date: 09 May 2013

OfflineAudioContext

ehsan: did you see my email to the list?
... let's have a quick overview:
... 1) what does it mean for nodes to not do anything before the first call to start rendering?

crogers: the media stream source node wouldn't be really usable with an OfflineAudioContext, as they'll be running at real time.

ehsan: a general thing to keep in mind is to have an exact notion of what it means for nodes to be attached to a graph that is contained within an OfflineAudioContext.
... we need a defined behaviour for what would happen if, for example, a media stream node is connected to that context

crogers: it could be as simple as throwing an exception if someone tries to connect those two nodes?
... or such nodes couldn't even be created from the context (since the context has factories/generators for the nodes)

ehsan: that might be better

crogers: yes, you could just stop them from being created.

ehsan: the only two types of nodes are therefore oscillator and (?)

joe: what about script processor nodes?

ehsan: I think scriptProcessorNode should be usable too.
... we just need to prohibit those nodes that depend on a real-time timeline.
... am I correct to assume that events are not dispatched until startRendering has been called?

crogers: yes, that would be true.
... in your example you have a scriptProcessor and you try to call startRendering inside a process event; you'd have to do that externally.
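
(For illustration, a rough sketch of the behaviour discussed above, assuming the factory-guard approach were adopted; the exception type, constructor arguments and "someStream" variable are assumptions, not spec text:)

    var offline = new OfflineAudioContext(2, 44100 * 10, 44100);

    try {
      // Under the factory-guard idea, real-time-only nodes simply could
      // not be created from an OfflineAudioContext, so this would throw.
      // "someStream" is a hypothetical MediaStream, e.g. from getUserMedia.
      var source = offline.createMediaStreamSource(someStream);
    } catch (e) {
      // e.g. an InvalidStateError, if that is what the spec settles on
    }

    // ScriptProcessorNode would remain usable, but its audioprocess
    // events would not be dispatched until startRendering() is called:
    var proc = offline.createScriptProcessor(4096, 1, 1);
    proc.onaudioprocess = function (e) {
      // not called before startRendering()
    };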

(hello joe, gmandyam and shepazu)

ehsan: 2) roc has a proposal to allow startRendering to take an optional duration argument. Here I'm assuming that we're talking about how it appears under the current spec. In that case, what happens when the rendering has stopped? Should the nodes stop generating?

crogers: the second - the whole graph should stop rendering.

ehsan: we should mention that in the spec.
... for example if you have a scriptProcessor in the graph, you should never receive another event on that node.

chrislowis: does the context continue to exist after rendering?

crogers: practically it continues to exist, you can refer to nodes etc. but it doesn't do anything.

ehsan: it should therefore be possible to optimise, and remove all the unreferenced nodes.

crogers: yes, GC can happen as normal.
... we've spoken about allowing nodes to be connected in the middle of rendering, but I don't know if we need to talk about that until we've spec'd them.

ehsan: right.
... 3)
... I'm assuming that current time won't change as part of the event loop. Is that correct?

crogers: OfflineAudioContext is not running at real time, hopefully faster than real time. But possibly slower than real time, if it's doing a lot of work.


joe: I wanted to make a point about offlineAudioContext: if we're talking about events that punctuate the creation of chunks, we need to make sure that when the app receives an event it has an opportunity to schedule future rendering. I want to make sure that whatever definition of current time we're talking about here,
... if it doesn't continue to advance in the meantime, would that cause problems?

crogers: I think I understood. Right now we're not talking about these partial rendering events, but when we do consider that, the processing needs to stop until the JS has a chance to do something with the partial event.

joe: I don't know if we need to lock this down right now, but we are edging into a more detailed definition of current time, so we need to consider it.

ehsan: let's talk about the proposal from roc about the event on the startRendering call.
... there are 2 real worries I have: i) I don't think it's reasonable to assume the developer will know how many frames to read off at the beginning (e.g. if delay times change during a run),
... ii) currently there's no reliable way to modify the structure of the graph while the offlineAudioContext is doing its rendering, as we don't know exactly where it is in the processing when a change is made.

ehsan: This will also vary across implementations.
... what I'd like to see is to completely drop the duration argument from the constructor of offlineAudioContext and make it a required argument to startRendering.

joe: I want to echo my support for roc's proposal on this. It unifies the handling of chunks and simplifies things.

crogers: I also like the idea of the duration, although I don't think it really gets around the problems, because if you choose a large duration you could still modify the graph before you get a partial completion event.
... on the other hand I don't view that problem as big, as although it's not deterministic, it's also not that useful.
... many things are non-deterministic on the web platform, e.g. grabbing video frames from cameras etc.

ehsan: I agree that we shouldn't optimise for making everything deterministic, but the way it exists at the moment, it's impossible. E.g. if an audio node is added halfway through a rendering.

crogers: oh yes, I agree - I think the optional argument for startRendering is useful, but it's also still useful to give it a whole buffer to render, where you do know ahead of time what you will be doing.

joe: in fact, roc's proposal allows you to make changes in a deterministic fashion.

crogers: yes, I agree - if you want to do things piece by piece. But if it's an optional argument, you can choose to use it or not.

ehsan: the reason I'd like to drop the arg on the constructor and add it to the startRendering method is that we could support what you're saying by making your first call to startRendering receive the event for that call, and at some point in the future make another call...
... it will allow the current behaviour, if all you're interested in is a single completion event.

crogers: I think it can also work if you create a context with a duration and then call startRendering with a block size for when you want the completed event ... I wouldn't think you'd have to keep calling startRendering time and time again?

ehsan: the problem with that approach is that there might be cases where you don't know how many frames you'll need in the future, so there's no good number to pass to the constructor. We'd get around that by not having the argument there at all, and only giving authors one way to specify it.

crogers: I think that's a fair point. I'm going to have to have a think about it, as it will be a breaking change, but you have a good point.

ehsan: is offlineAudioContext used in web content at the moment?

crogers: maybe a couple, but it might be early enough that we could change it. I understand your point that in some cases you might not know what your duration is, but in many cases you do - I just need to think about how much more complicated it'll make the code in the common cases.

ehsan: it'll just change the places where you pass the length arguments ...

crogers: so if you call startRendering(10000) it'll fire the event every 10000 frames.

ehsan: yes, and the rendering will pause when startRendering is called - otherwise it just behaves like the one-shot call webkit already implements.

crogers: ok. I think that makes sense, I'm just a bit worried about the impact on current apps in the wild.

ehsan: I'm working on offlineAudioContext but I might hold off on dropping my changes until we've decided, as I'm working on the API as written at the moment

crogers: no problem.
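
(For illustration, a sketch of how the chunked startRendering under discussion might look from an author's point of view; the constructor shape and the argument to startRendering are the proposal being debated, not settled API:)

    // Hypothetical constructor: duration dropped, per ehsan's suggestion
    // (2 channels, 44100 Hz; no length argument).
    var offline = new OfflineAudioContext(2, 44100);

    offline.oncomplete = function (e) {
      var chunk = e.renderedBuffer; // the frames rendered so far
      // Rendering has paused here; the graph can now be modified
      // deterministically before asking for the next chunk:
      offline.startRendering(10000); // render the next 10000 frames
    };

    offline.startRendering(10000); // render the first 10000 frames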

ehsan: Another point I'd like to bring up is handling inherently real-time nodes.
... currently the spec says the offlineAudioContext "can render faster than realtime", which means an implementation could be compliant even if it doesn't.
... what would be nice is if we allow offlineAudioContext to be a drop-in replacement for AudioContext, i.e. run faster than realtime unless there's a "real time" node in the graph. That would be useful for debugging purposes.

crogers: that sounds interesting - I'm wondering how it would be possible to implement this in the browsers I know about.
... I think it might be possible in Chrome, not sure how hard it would be in Safari or iOS. Apple are not on the call, but it would be good to get their feedback. I wouldn't want to make a snap decision now ...

ehsan: that's fair. The reason I'm requesting it, even though it's going to make the implementation harder, is that I don't like the fact that the context your graph is attached to changes the semantics of some of the other nodes and so on. It'd be better if this was consistent, in terms of the quality of the API.

crogers: yes, I can see that.
... if you do want a partial rendering with a mediastream node, you can do that today with a regular context and a script node.

ehsan: yes, I think it's fair to say that you could use a script processing node to do some of these use-cases of an offlineAudioContext.

crogers: yes, for example processing things from a stream.
... but I don't think it would be unreasonable to do this with an offlineAudioContext too - but I can foresee some implementation difficulties.
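
(For illustration, the workaround crogers describes, sketched with today's API; the "stream" variable is assumed to come from e.g. getUserMedia:)

    var ctx = new AudioContext();
    var source = ctx.createMediaStreamSource(stream);
    var proc = ctx.createScriptProcessor(4096, 1, 1);

    proc.onaudioprocess = function (e) {
      var samples = e.inputBuffer.getChannelData(0);
      // copy or analyse the captured samples here, in real time
    };

    source.connect(proc);
    proc.connect(ctx.destination); // keep the script node pulling data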

ehsan: I think we should ask for feedback as you say.

crogers: sounds good to me.

chrislowis: anything else ehsan?

ehsan: I think we should go through the spec at some point and see what things should look like in the presence of an offlineAudioContext. Is that something you've done, crogers?

crogers: our layout tests are based on the offlineAudioContext so we are using it with almost all the other nodes.
... scheduled parameters, start() and stop() etc.

ehsan: that makes me feel quite a bit better about it!
... I will go through and have a look and see what stands out as potential problems. Hopefully I won't find anything, but we can discuss on the list.

crogers: the scriptProcessor will be the one we have to look at closely.

https://github.com/WebAudio/web-platform-tests

Testing

ehsan: I've been looking at the gecko tests and extracted two helper methods for comparing two buffers.
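
(For illustration, a minimal sketch of what such a buffer-comparison helper might look like; the actual gecko helpers may differ in name and tolerance handling:)

    function compareBuffers(actual, expected, epsilon) {
      epsilon = epsilon || 0; // exact match by default
      if (actual.length !== expected.length)
        return false;
      for (var i = 0; i < actual.length; i++) {
        if (Math.abs(actual[i] - expected[i]) > epsilon)
          return false;
      }
      return true;
    }

    // e.g. compareBuffers(rendered.getChannelData(0),
    //                     reference.getChannelData(0), 1e-4);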

Implementations

shepazu: is there an update on implementation of Web MIDI?

crogers points us to: https://groups.google.com/a/chromium.org/forum/#!searchin/blink-dev/midi/blink-dev/KUx9s-XFdj0/PPZcwE4l3ScJ

(summary: there's an intent to work on a proof-of-concept implementation behind a feature flag in blink)

shepazu: it's good news that it'll be available to play with behind a flag.

ehsan: Also, I'm probably going to try and review the spec myself or ask a colleague to help.

http://alxgbsn.co.uk/wavepad/

<shepazu> trackbot, end telcon

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2013-05-09 17:07:17 $
