15:50:06 RRSAgent has joined #audio
15:50:06 logging to http://www.w3.org/2013/05/09-audio-irc
15:50:08 RRSAgent, make logs world
15:50:08 Zakim has joined #audio
15:50:10 Zakim, this will be 28346
15:50:10 ok, trackbot; I see RWC_Audio()12:00PM scheduled to start in 10 minutes
15:50:11 Meeting: Audio Working Group Teleconference
15:50:11 Date: 09 May 2013
15:53:08 Agenda+ OfflineAudioContext
15:56:20 Zakim, what's the code?
15:56:20 the conference code is 28346 (tel:+1.617.761.6200 sip:zakim@voip.w3.org), chrislowis
15:57:01 RWC_Audio()12:00PM has now started
15:57:08 +[IPcaller]
15:57:13 Zakim, IPcaller is me
15:57:13 +chrislowis; got it
16:00:13 crogers has joined #audio
16:01:15 + +1.510.334.aaaa
16:01:19 ehsan has joined #audio
16:01:29 +[Mozilla]
16:01:34 Zakim, aaaa is crogers
16:01:34 +crogers; got it
16:01:57 Zakim, Mozilla is ehsan
16:01:57 +ehsan; got it
16:04:46 Zakim, take up 1
16:04:46 I don't understand 'take up 1', chrislowis
16:04:49 Zakim, take up agendum 1
16:04:49 agendum 1. "OfflineAudioContext" taken up [from olivier]
16:05:03 ehsan: did you see my email to the list?
16:05:16 ehsan: let's have a quick overview:
16:05:34 + +1.617.600.aabb
16:05:36 ehsan: 1) what does it mean for nodes to not do anything before the first call to startRendering?
16:05:55 joe has joined #audio
16:06:51 +Doug_Schepers
16:07:44 crogers: the media stream source node wouldn't really be usable with an OfflineAudioContext, as it runs in real time.
16:08:45 ehsan: a general thing to keep in mind is to have an exact notion of what it means for nodes to be attached to a graph that is contained within an OfflineAudioContext.
16:09:07 ehsan: we need a defined behaviour for what would happen if, for example, a media stream node is connected to that context.
16:09:23 crogers: it could be as simple as throwing an exception if someone tries to connect those two nodes?
16:09:46 crogers: or it couldn't even be created from the context (since the context has factories/generators for the nodes).
16:09:50 ehsan: that might be better.
16:10:00 crogers: yes, you could just stop them from being created.
16:10:27 ehsan: the only two types of nodes are therefore oscillator and (?)
16:10:36 joe: what about script processor nodes?
16:10:53 ehsan: I think scriptProcessorNode should be usable too.
16:11:09 ehsan: we just need to prohibit those nodes that depend on a real-time timeline.
16:11:29 gmandyam has joined #audio
16:11:39 ehsan: am I correct to assume that events are not dispatched until startRendering has been called?
16:11:45 crogers: yes, that would be true.
16:12:02 + +1.858.780.aacc
16:12:11 Zakim, aacc is gmandyam
16:12:11 +gmandyam; got it
16:12:16 crogers: if in your example you have a scriptProcessor and you try to call startRendering inside a process callback, you'd have to do that externally instead.
16:12:33 (hello joe, gmandyam and shepazu)
16:13:53 ehsan: 2) roc has a proposal to allow startRendering to take an optional duration argument. Here I'm assuming that we're talking about how it appears under the current spec. In that case, what happens when the rendering has stopped - should the nodes stop generating?
16:14:05 crogers: the latter - the whole graph should stop rendering.
16:14:21 ehsan: we should mention that in the spec.
16:14:46 ehsan: for example, if you have a scriptProcessor in the graph, that node should never receive another event.
16:15:17 chrislowis: does the context continue to exist after rendering?
16:15:41 crogers: practically it continues to exist - you can refer to nodes etc. - but it doesn't do anything.
16:15:58 ehsan: it should therefore be possible to optimise by removing all the unreferenced nodes.
16:16:11 crogers: yes, GC can happen as normal.
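[scribe's note: the restriction discussed above - that real-time source nodes simply wouldn't be creatable from an OfflineAudioContext - might look roughly like the sketch below. This is a hypothetical simulation of the behaviour under discussion, not Web Audio or engine code; the class and node names are illustrative only.]

```javascript
// Hypothetical sketch: an offline context's node factories refuse
// to create real-time nodes, per the 16:09-16:11 discussion.
class SketchOfflineContext {
  createOscillator() {
    // Deterministic nodes are fine offline.
    return { nodeType: "oscillator" };
  }
  createScriptProcessor() {
    // ScriptProcessorNode was considered usable offline too.
    return { nodeType: "scriptProcessor" };
  }
  createMediaStreamSource() {
    // A media stream runs in real time, so the factory refuses
    // rather than letting an unusable node join the graph.
    throw new TypeError(
      "MediaStreamAudioSourceNode cannot be created on an OfflineAudioContext");
  }
}
```

Refusing at creation time (rather than at connect time) keeps the invalid node out of the graph entirely, which is the simplification crogers suggests.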
16:17:21 crogers: we've spoken about allowing nodes to be connected in the middle of rendering, but I don't know if we need to talk about that until we've spec'd them.
16:17:24 ehsan: right.
16:17:37 ehsan: 3)
16:18:10 ehsan: I'm assuming that currentTime won't change as part of the event loop. Is that correct?
16:18:42 crogers: OfflineAudioContext is not running in real time - hopefully faster than real time, but possibly slower than real time if it's doing a lot of work.
16:19:38 q+
16:20:04 ack
16:20:10 Zakim, ack joe
16:20:10 I see no one on the speaker queue
16:21:20 joe: I wanted to make a point about OfflineAudioContext: if we're talking about events that punctuate the creation of chunks, we need to make sure that when the app receives an event it's an opportunity to schedule future rendering. I want to make sure that whatever definition of currentTime we're talking about here,
16:21:40 joe: without it continually advancing, would that cause problems?
16:21:51 Marcos has joined #audio
16:22:15 crogers: I think I understood. Right now we're not talking about these partial rendering events, but when we do consider that, the processing needs to stop until the JS has had a chance to do something with the partial event.
16:22:43 colinbdclark has joined #audio
16:22:50 joe: I don't know if we need to lock this down right now, but we are edging into a more detailed definition of currentTime, so we need to consider it.
16:23:19 ehsan: let's talk about roc's proposal about the event on the startRendering call.
16:24:15 ehsan: there are two real worries I have - i) I don't think it's reasonable to assume the developer will know how many frames to read off at the beginning (e.g. if delay times change during a run);
16:24:58 ii) currently there's no reliable way to modify the structure of the graph while the OfflineAudioContext is doing its rendering, as we don't know exactly where it is in the processing when a change is made.
16:25:13 ehsan: this will also vary across implementations.
16:25:52 ehsan: what I'd like to see is to completely drop the duration argument from the constructor of OfflineAudioContext and make it a required argument to startRendering.
16:26:14 joe: I want to echo my support for roc's proposal on this. It unifies the handling of chunks and simplifies things.
16:26:50 crogers: I also like the idea of the duration, although I don't think it really gets around the problems: if you choose a large duration, you could still modify the graph before you get a partial completion event.
16:27:12 crogers: on the other hand, I don't view that problem as big: although it's not deterministic, it's also not that useful.
16:27:33 crogers: many things are non-deterministic on the web platform, e.g. grabbing video frames from cameras etc.
16:28:38 ehsan: I agree that we shouldn't optimise for making everything deterministic, but the way it exists at the moment, determinism is impossible - e.g. if an audio node is added halfway through a rendering.
16:29:19 crogers: oh yes, I agree - I think the optional argument for startRendering is useful, but it's also not useless to give it a whole buffer to render in cases where you do know ahead of time what you will be doing.
16:29:46 joe: in fact, roc's proposal allows you to make changes in a deterministic fashion.
16:30:07 crogers: yes, I agree - if you want to do things piece by piece. But if it's an optional argument, you can choose to use it or not.
16:31:11 ehsan: the reason I'd like to drop the argument on the constructor and add it to the startRendering method is that we could support what you're saying by having your first call to startRendering receive the event for that call, and at some point in the future making another call...
16:31:34 ehsan: it will allow the current behaviour, if all you're interested in is a single completion event.
16:32:14 crogers: I think it can also work if you create a context with a duration and then call startRendering with a block size for when you want the completed event... I wouldn't think you'd have to keep calling startRendering time and time again?
16:33:08 ehsan: the problem with that approach is that there might be cases where you don't know how many frames you'll need in the future, so there's no good number to pass to the constructor. We'd get around that by not having the argument there at all, and only giving authors one way to specify it.
16:33:29 crogers: I think that's a fair point. I'm going to have to think about it, as it will be a breaking change, but you have a good point.
16:33:44 ehsan: is OfflineAudioContext used in web content at the moment?
16:34:37 crogers: maybe in a couple of places, but it might be early enough that we could change it. I understand your point that in some cases you might not know what your duration is, but in many cases you do - I just need to think about how much more complicated it'll make the code in the common cases.
16:34:59 ehsan: it'll just change the places where you pass the length arguments...
16:35:23 crogers: so if you call startRendering(10000) it'll fire the event every 10,000 frames.
16:35:57 ehsan: yes, and the rendering will pause after each event until startRendering is called again - otherwise it just behaves like the one-shot call WebKit already implements.
16:36:46 crogers: ok. I think that makes sense; I'm just a bit worried about the impact on current apps in the wild.
16:37:16 ehsan: I'm working on OfflineAudioContext but I might hold off on landing my changes until we've decided, as I'm working on the API as written at the moment.
16:37:37 crogers: no problem.
16:38:12 ehsan: another point I'd like to bring up is handling inherently real-time nodes.
16:39:01 ehsan: currently the spec says the OfflineAudioContext "can render faster than realtime", which means an implementation could be compliant even if it doesn't.
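[scribe's note: the chunked-rendering shape ehsan proposes above (16:25:52-16:35:57) might look like the following simulation. It is a sketch under stated assumptions - no length in the constructor, each startRendering(frames) call renders that many frames synchronously, fires a completion event, then pauses until called again. The class name and event shape are illustrative, not agreed spec text.]

```javascript
// Hypothetical simulation of the proposal: the duration moves from
// the OfflineAudioContext constructor to startRendering itself, and
// rendering proceeds one requested chunk at a time.
class ChunkedOfflineContext {
  constructor(sampleRate = 44100) {
    this.sampleRate = sampleRate;
    this.currentTime = 0;   // advances only while a chunk renders
    this.oncomplete = null; // fired after each requested chunk
  }
  startRendering(frames) {
    // Render `frames` frames (synchronously, for this sketch),
    // then pause until startRendering is called again.
    this.currentTime += frames / this.sampleRate;
    if (this.oncomplete) {
      this.oncomplete({ renderedFrames: frames, currentTime: this.currentTime });
    }
  }
}
```

Because the graph only advances between calls, an application can modify the graph at each completion event and know exactly where rendering stands - the deterministic property joe and ehsan argue for. A single startRendering call reproduces the existing one-shot behaviour.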
16:39:48 ehsan: what would be nice is if we allowed OfflineAudioContext to be a drop-in replacement for AudioContext, i.e. run faster than real time unless there's a "real time" node in the graph. That would be useful for debugging purposes.
16:40:10 crogers: that sounds interesting - I'm wondering how it would be possible to implement this in the browsers I know about.
16:41:26 crogers: I think it might be possible in Chrome; I'm not sure how hard it would be in Safari or iOS. Apple are not on the call, but it would be good to get their feedback. I wouldn't want to make a snap decision now...
16:42:28 ehsan: that's fair. The reason I'm requesting it, even though it's going to make the implementation harder, is that I don't like the fact that which context your graph is attached to changes the semantics of some of the other nodes and so on. It'd be better if this were consistent, in terms of the quality of the API.
16:42:34 crogers: yes, I can see that.
16:42:55 crogers: if you do want a partial rendering with a mediastream node, you can do that today with a regular context and a script node.
16:43:25 ehsan: yes, I think it's fair to say that you could use a script processor node to cover some of these use cases of an OfflineAudioContext.
16:43:41 crogers: yes, for example processing things from a stream.
16:44:07 crogers: but I don't think it would be unreasonable to do this with an OfflineAudioContext too - though I can foresee some implementation difficulties.
16:45:13 ehsan: I think we should ask for feedback, as you say.
16:45:17 crogers: sounds good to me.
16:45:47 chrislowis: anything else, ehsan?
16:46:15 ehsan: I think we should go through the spec at some point and see what things should look like in the presence of an OfflineAudioContext. Is that something you've done, crogers?
16:46:41 crogers: our layout tests are based on the OfflineAudioContext, so we are using it with almost all the other nodes.
16:46:55 crogers: scheduled parameters, start() and stop(), etc.
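[scribe's note: the alternative crogers mentions at 16:42:55 - capturing partially rendered audio via a script node on a regular real-time context - relies on the script processor receiving fixed-size chunks through its onaudioprocess callback. The sketch below simulates only that chunking shape; it is not Web Audio code, and the class name is illustrative.]

```javascript
// Hypothetical simulation of ScriptProcessorNode-style chunking:
// incoming samples accumulate and the callback fires once per
// full buffer, mimicking onaudioprocess on a real-time graph.
class SketchScriptProcessor {
  constructor(bufferSize) {
    this.bufferSize = bufferSize;
    this.onaudioprocess = null;
    this.pending = [];
  }
  push(samples) {
    // Accumulate samples; dispatch one event per complete chunk.
    this.pending.push(...samples);
    while (this.pending.length >= this.bufferSize) {
      const chunk = this.pending.splice(0, this.bufferSize);
      if (this.onaudioprocess) this.onaudioprocess({ inputBuffer: chunk });
    }
  }
}
```

This is why a script node covers some offline use cases today: the app gets each chunk as it is produced, at the cost of running at real-time speed.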
16:47:12 ehsan: that makes me feel quite a bit better about it!
16:47:46 ehsan: I will go through and have a look at what stands out as potential problems. Hopefully I won't find anything, but we can discuss on the list.
16:48:14 crogers: the scriptProcessor will be the one we have to look at closely.
16:50:00 https://github.com/WebAudio/web-platform-tests
16:51:08 agenda+ Implementations
16:51:30 agenda+ Testing
16:51:37 Zakim, take up agendum 3
16:51:37 agendum 3. "Testing" taken up [from chrislowis]
16:52:21 ehsan: I've been looking at the Gecko tests and have extracted two helper methods for comparing two buffers.
16:54:42 Zakim, take up agendum 2
16:54:42 agendum 2. "Implementations" taken up [from shepazu]
16:57:51 shepazu: is there an update on the implementation of Web MIDI?
16:58:04 crogers points us to: https://groups.google.com/a/chromium.org/forum/#!searchin/blink-dev/midi/blink-dev/KUx9s-XFdj0/PPZcwE4l3ScJ
16:58:57 (summary: there's an intent to work on a proof-of-concept implementation behind a feature flag in Blink)
17:00:04 shepazu: it's good news that it'll be available to play with behind a flag.
17:03:00 ehsan: also, I'm probably going to try to review the spec myself or ask a colleague to help.
17:04:50 - +1.617.600.aabb
17:05:00 automata has joined #audio
17:06:31 http://alxgbsn.co.uk/wavepad/
17:06:36 -gmandyam
17:07:03 trackbot, end telcon
17:07:03 Zakim, list attendees
17:07:03 As of this point the attendees have been chrislowis, +1.510.334.aaaa, crogers, ehsan, +1.617.600.aabb, Doug_Schepers, +1.858.780.aacc, gmandyam
17:07:11 RRSAgent, please draft minutes
17:07:11 I have made the request to generate http://www.w3.org/2013/05/09-audio-minutes.html trackbot
17:07:12 RRSAgent, bye
17:07:12 I see no action items