W3C

- DRAFT -

Audio Working Group Teleconference

09 May 2012

Agenda

See also: IRC log

Attendees

Present
Regrets
Alistair
Chair
Olivier
Scribe
Chris Lowis

Contents


<olivier> trackbot, start

<trackbot> Sorry, olivier, I don't understand 'trackbot, start'. Please refer to http://www.w3.org/2005/06/tracker/irc for help

<olivier> trackbot, start meeting

<trackbot> Date: 09 May 2012

Hi olivier!

<olivier> Scribe: Chris Lowis

<olivier> ScribeNick: chrislowis

<roc> I think I joined

<roc> it's completely silent

<mdjp> mdjp - same problem

<jussi> for me VOIP worked fine

mdjp, olivier: zakim@voip.w3.org ?

<olivier> yup

<olivier> got in

mdjp: try again?

roc: are you ??P7 ?

<roc> guess so

<olivier> yes I think so

<jussi> might be me

<jussi> although I'm muted

<jussi> I think

<jussi> thanks

<olivier> jussi, ack-ing you will unmute you

<olivier> shepazu, are you joining?

<jussi> olivier: alright

Specs Roadmap

<olivier> http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0100.html

olivier: today I want to talk about:
... 1) what are we going to do with our two specs
... 2) what are we going to do with the UC and reqs document
... Last week we noted that there was a lot of buy-in for the Web Audio API, but less so for the MediaStream Processing API.
... today I'd like to hear from the two editors about what they think.
... I've had a quick conversation with roc.
... It feels like we will proceed with the Web Audio API and publish the MediaStream Processing API as a note.

shepazu: given what I've seen, a good strategy would be to keep in mind the use cases around streaming that roc contributed, especially around the consistency of video and audio.

roc: I think that what you are proposing is reasonable
... There are still issues I have around synchronisation, and I still have a strong desire for tight integration with MediaStreams.
... and some way of processing different types of media.
... I want to try and figure out how to integrate the two. I'd like the semantics of MediaStreams and Web Audio nodes to match.
... I think we can do that while still keeping compatibility for the people who are currently using the Web Audio API.

olivier: I think the biggest question was the necessity of having consistency between Web Audio and MediaStreams.
... Chris Rogers, could you respond to the question about how the Web Audio spec could better integrate with MediaStreams?

CRogers: it's a good time to talk about this as we are starting to prototype mediastreams in Chrome right now.

chris: I put together a proposal of how the two might integrate, based on roc's use cases, which were a very useful starting point.

<chris> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html

chris: this document is my best first stab at how the integration might work. We're going to try it in a prototype.
... it's two new methods on the AudioContext, so it's fairly lightweight.
... we'd like to show our progress with WebRTC.
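For illustration only, a minimal sketch of what the proposed integration could look like, assuming the two new AudioContext methods are named createMediaStreamSource and createMediaStreamDestination as in the linked proposal; the prefixed constructor and the pre-obtained stream are assumptions, not part of the current draft:

    // Assume `stream` is a MediaStream already obtained via getUserMedia.
    var context = new webkitAudioContext();
    var source = context.createMediaStreamSource(stream);     // MediaStream -> audio graph
    var filter = context.createBiquadFilter();                 // any intermediate processing
    var destination = context.createMediaStreamDestination();  // audio graph -> MediaStream

    source.connect(filter);
    filter.connect(destination);
    // destination.stream is a new MediaStream that could be handed to a PeerConnection.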

olivier: I'd say go ahead and add it to the spec, noting that it's still under discussion exactly how they'd work.

chris: there's still a lot of discussion, such as how to deal with multiple audio and video tracks.
... it's useful to be able to split out the streams and deal with them separately.

roc: in the MediaStream Processing spec you can get a video stream from a canvas and use that for overlays. That's a logical way of doing that. With multiple audio tracks you can mix them together too.

chris: I'm sure we'll find some cases where we'll overlay video tracks together in a canvas, but I suspect it'll be more normal that each video track will have a separate layout on a page.
... e.g. video conference-type application.

roc: if you're going to process multiple tracks you will need some API to allow them to be mixed. To keep the simple case simple, the default behaviour might be to mix them together, with an API to allow them to be split.

chris: even without the Web Audio API in the picture you'd still need to cope with multiple streams. So maybe that is the best default behaviour.

olivier: I'd like to give everyone the chance to register their objections to this and document in our rechartering that we'll focus on the web audio api going forward.
... roc: if we're going to republish your work as a note, do you need some time to reflect any changes you've made?

roc: I think we'll try to publish it the way it is, there's not much point in changing it now.

shepazu: if we decide later to change anything we can update the note.

olivier: it's quite important to have it in a state we're happy with as it's a cornerstone of our work on the web audio api, so if you'd like to make changes feel free.

roc: at the moment it reflects the implementation so it makes sense to keep it as it is, even though there's a couple of things I could change.

<olivier> RESOLVED: the group will publish the mediastream processing API as a note

<olivier> RESOLVED: our new charter will document the focus on the web audio API as our audio processing spec

olivier: Moving to the 2nd question:

(What are we going to do with the UC and reqs document)

olivier: if you go to the spec today there is a section called Use Cases and Requirements.
... my question is whether the work we have done this winter on the UC&R document should go into the spec as an informative section, or whether we'd rather keep it in the wiki.

chris: my preference would be to take it from the wiki into a separate html file but to link to it from the spec.
... I think the wiki made more sense when we were brainstorming the ideas. It could be formatted more nicely as a formal document.

olivier: that's pretty close to my preference: take the use cases and requirements, turn it into a working draft, and eventually publish it as a note when it's more mature.
... then we'd link that note from the spec.
... and note which use cases we considered out of scope for the spec.

chris: I would put them in the use cases and requirements doc.

olivier: agreed. Objections?

None noted.

olivier: Then I'll start preparing that draft. The door is open for volunteers to take that on.

<olivier> RESOLVED: the group will publish the use cases & requirements as a WD, with a view to publish as a note

<olivier> RESOLVED: features left out of scope for the v1 of web audio API will be documented in the UC&R Note

JavaScriptNode buffer size and delay (ISSUE-13 and ISSUE-14)

<olivier> ISSUE-13?

<trackbot> ISSUE-13 -- JavaScriptNode Delays -- raised

<trackbot> http://www.w3.org/2011/audio/track/issues/13

<olivier> ISSUE-14?

<trackbot> ISSUE-14 -- Default value for bufferSize in createJavaScriptNode() -- raised

<trackbot> http://www.w3.org/2011/audio/track/issues/14

<olivier> http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0106.html

<olivier> http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0096.html

olivier: I'll start with ISSUE-13. mdjp, would you explain a little more?

mdjp: the main thing is to make people aware that when using the JS node that there is a delay introduced by the node.

<mdjp> I should be muted now

chris: yes, the JS audio node has an inherent latency due to its buffering. We talked about adding a latency method to query what the latency is on a node.
... the point is that we need a latency attribute on the AudioNode.
... there's a Rendering Time attribute that supplies additional information.

<chris> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#AudioProcessingEvent

jernoble: are we asking about adding an attribute to query the latency on all nodes, not just the JS node?

chris: yes.
... in the link above there's an attribute called event time to determine where you are in the playback stream.
... it provides a way for JS nodes to synchronise themselves with other nodes.

olivier: so using this you could compensate in other parts of the graph for this delay.

chris: yes, so using this you would be able to compensate for this.
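A sketch of such compensation, assuming the proposed latency attribute (in seconds) on AudioNode; the attribute name is hypothetical and not yet in the spec, and `source` stands for any existing source node:

    var context = new webkitAudioContext();
    var jsNode = context.createJavaScriptNode(4096, 1, 1); // processing path, adds latency
    var dryDelay = context.createDelayNode();               // parallel "dry" path

    // Delay the dry signal by the JS node's reported latency so both
    // paths reach the destination in phase.
    dryDelay.delayTime.value = jsNode.latency;              // hypothetical attribute

    source.connect(jsNode);
    jsNode.connect(context.destination);
    source.connect(dryDelay);
    dryDelay.connect(context.destination);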

roc: in MediaStream Processing you don't need to query nodes for the latency. If we can avoid it in the Web Audio API, that would be best for authors.

chris: my feeling is we can't do everything automatically - there are some cases where you would want to compensate and some where you wouldn't, and the system wouldn't be able to detect reliably which mode to be in.
... cf. the Logic Audio 9 screenshot previously posted to the list.

olivier: could you see a case where the developer might add latency deliberately and this would hurt that?

chris: Referring to a thread on the list where this was discussed.

olivier: could you find that thread and add it to ISSUE-13?

chris: sure.

roc: forcing developers to do latency calculations themselves is something we should avoid.

chris: my feeling is the latency compensation should be opt-in rather than opt-out.

olivier: thanks :)
... perhaps we could add something to the next WD of the spec, to look for feedback.

chris: one of the examples in the previous thread I mentioned was someone playing a MIDI synth along with a generated sequence - if people are trying to use the API in a simple way they won't understand where the delay is coming from.

olivier: do you (chris) have any notion of the performance issues caused by always-on latency compensation?

chris: in terms of CPU load?

olivier: yes. I don't think it would have an appreciable effect. I'm not sure. I'm more concerned about the impact on delays.

<olivier> chrislowis: wanted to point out that the problem we ran into was when using synthesis, when phase was important.

<olivier> … we were missing a few native building blocks, like additions

<olivier> … the more native blocks, the less of an issue it becomes

<chris> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#AudioProcessingEvent

chris: I'd like to bring up the playbackTime attribute (linked above).
... the playback time is a timestamp, so you know exactly when things happen and can synchronise events in a JS node exactly.
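A sketch of using playbackTime inside a JavaScriptAudioNode's onaudioprocess handler to start a synthesised tone at an exact time; the note time and the sine-wave fill are illustrative only:

    var jsNode = context.createJavaScriptNode(1024, 1, 1);
    var noteTime = 2.0;  // AudioContext time (seconds) at which the tone should start

    jsNode.onaudioprocess = function (event) {
      var output = event.outputBuffer.getChannelData(0);
      var start = event.playbackTime;        // when this block will actually be heard
      var dt = 1 / context.sampleRate;

      for (var i = 0; i < output.length; i++) {
        var t = start + i * dt;
        // Only begin synthesising once the block reaches noteTime, so the
        // tone lines up exactly with other nodes in the graph.
        output[i] = (t >= noteTime) ? Math.sin(2 * Math.PI * 440 * t) : 0;
      }
    };
    jsNode.connect(context.destination);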

olivier: is this just a matter of documentation?

chris: we should still have the latency attribute, to handle both synthesis and the generation of note events.

<roc> seems to me that using the playbackTime attribute gives you the information you need about latency

<olivier> ISSUE-14?

jussi: if the implementation could put in a default value for good latency that would be an option.

<jussi> np

<trackbot> ISSUE-14 -- Default value for bufferSize in createJavaScriptNode() -- raised

<trackbot> http://www.w3.org/2011/audio/track/issues/14

roc: in MediaStream Processing the implementation always chooses the buffer size.

olivier: what does it choose as the default?

roc: it'll be implementation-dependent, depending on whether you're buffering ahead. In my implementation it changes dynamically depending on the amount of buffering. I can dig up more information off-call.

olivier: would this make inter-operability difficult?

chris: we had a bit of a discussion about this a few months ago with Joe. We were going back and forth on whether we should go with roc's suggestion, or whether we should allow the developer to specify the buffer size.
... I don't really have a firm opinion either way; it's a tough question. roc's suggestion has a lot of merit, and I think Joe agrees.
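The two alternatives, sketched. The current draft's createJavaScriptNode takes an explicit bufferSize such as 4096; passing 0 to mean "let the implementation choose" is only one possible spelling of roc's suggestion, not something the draft defines:

    // Developer-specified buffer size, as in the current draft.
    var explicitNode = context.createJavaScriptNode(4096, 1, 1);

    // Implementation-chosen buffer size (roc's suggestion): the UA would pick,
    // and possibly vary, the size based on its own buffering needs.
    // The `0` argument is a hypothetical way such an API could be expressed.
    var automaticNode = context.createJavaScriptNode(0, 1, 1);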

<olivier> mute jussi

chris: jussi - do you have any objections?

jussi: I'm not sure, need to think about it.

chris: me too. It's really hard! I think we should talk about it on the list.
... in principle I agree with roc on this.

roc: interop won't be a problem if implementations that are widely used *do* vary the buffer size dynamically, but if devs start working around things, it'll be tricky.

olivier: I think the way forward is to continue the discussion on the list.
... good time to wrap up the call as we're reaching an hour. AOB?

None noted.

olivier: same time next week.

shepazu: before we go...
... did we resolve to go forward with publication?

olivier: we decided we would, but we haven't talked about logistics.
... we have a recorded resolution.

shepazu: we'll have to wait a couple of weeks, as there's a moratorium on publication at the moment.

<jussi> thanks everyone! bye

olivier: let's figure out the logistics after the AC meeting.

<gabriel> bye

<mdjp> bye all

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2012/05/09 20:01:43 $
