W3C

- DRAFT -

Audio WG Teleconference

05 Mar 2012

Agenda: http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0322.html

See also: IRC log

Attendees

Present
    (not recorded; see IRC log)
Regrets
    CLowis, Al, ROC
Chair
    olivier
Scribe
    Joe

Contents

    Topics
        1. Intro: Welcome Matt Paradis
        2. Quick update on rechartering / MIDI work
        3. Adding the video and audio tracks sync use case
        4. New WD publication for web audio API
    Summary of Action Items

<olivier> Scribe: Joe

<olivier> [large agenda - we won't do everything and should decide what to prioritise]

thanks for the tips Olivier

<olivier> np

<Gabriel> haha

Definite limits to its emotional intelligence

<chris> +chris

<cwilso> yay! Doug's going to SXSW!

I am here

Intro: Welcome Matt Paradis

Olivier: Want to introduce Matt to the group. Matt is another colleague from the BBC who will be joining the group and working on a prototype implementation.

Matt: Hello group. My background is in audio and real-time control of synthesis and installation works. I have come to the BBC and look forward to working with this platform.

Quick update on rechartering / MIDI work

Olivier: we had a discussion last week on rechartering. Chris Wilson started work on this. I believe we're now done

CWilson: we reached good wording on the rechartering and sent it back out
... feel like there's an open use-case-level issue around Standard MIDI Files
... my inclination is that this is a more advanced case than we should start with.
... expect that Jussi and I will take a look at the proposal and start drafting it into a more formal shape

Olivier: I see from list discussions an issue on usage of timestamps. Should we be solving this now or dealing with it later?

CWilson: people have been confusing the concepts of ticks, tempo and beats. MIDI itself doesn't directly capture these concepts
... there is timestamping in all low-level APIs, however. This question will resolve itself as we move forward.

Olivier: Doug do you want to speak to process for what happens now [with MIDI recharter]?

Doug: I believe we should simply keep going. Chris, you proposed wording, I gave people time to object, and I think we're near the end of that comment period. I will add that wording to the charter maybe with
... some control-freak modifications and send it out to PLH, who will be out next week.
... we'll get review of the charter by the Advisory Committee. No problems foreseen. We'll have 5 months to charter
... we'll have to publish the first WD when we're chartered to do so
... let's effectively pretend it's chartered
... we will have to change the milestones to better reflect all deliverables. we'll extend it and give ourselves another 2 years

Olivier: any other questions? No, we'll move on

Adding the video and audio tracks sync use case

Olivier: this is an issue raised by ROC, who is travelling. Wanted to get an idea from the group about this
... I will be the one trying to summarize. ROC said in one of his demos there's a possibility to synchronize and duck several different tracks by syncing with one video track
... our UCs relate primarily to video but ROC's proposal is interesting and [removing chair hat] for us as broadcasters it's interesting
... wanted to get an opinion from the group on whether to add this UC or whether it's outside of scope

<olivier> Joe: I see this as a low priority now

<olivier> … though I see the value

<olivier> … could be bolted into any implementation of audio API

<olivier> … through the addition of timestamps

<chris> +chris

<olivier> … conceptual equivalent of click track

Chris: believe there's already an HTML5-specified MediaController to sync different HTMLMediaElements

<jernoble> +jer

Chris: then it's easy to handle

yes i did

CRogers: the MediaController API is responsible for grouping a number of HTMLMediaElements and controlling/syncing that set as a unit

Chris, that sounds much cleaner than what I was suggesting

Jer: as far as the spec is concerned, WebKit has fully implemented the MediaController spec
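
[A minimal sketch of the MediaController approach discussed above, assuming the HTML MediaController API as drafted at the time; element IDs, file names and the ducking level are hypothetical:]

    // Markup (hypothetical): two elements slaved to one controller via mediagroup:
    //   <video id="movie" src="movie.webm" mediagroup="syncgroup"></video>
    //   <audio id="commentary" src="commentary.oga" mediagroup="syncgroup"></audio>
    var video = document.getElementById("movie");
    var commentary = document.getElementById("commentary");
    var controller = video.controller;   // the shared MediaController for the group
    controller.play();                   // both timelines advance in lockstep
    commentary.volume = 0.2;             // "duck" the commentary against the main track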

Olivier: sounds as though this is handled in both APIs, so I suggest we record the use case and move on

<scribe> ACTION: Record a use case that calls out a specific requirement and assigns it a priority [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action01]

<trackbot> Sorry, couldn't find user - Record

RESOLUTION: Record a use case that calls out a specific requirement and assigns it a priority

<olivier> ACTION: Olivier to add use case for video sync, add requirement to work well with mediacontroller, clarify on list [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action02]

<trackbot> Created ACTION-35 - Add use case for video sync, add requirement to work well with mediacontroller, clarify on list [on Olivier Thereaux - due 2012-03-12].

<jernoble> Point of clarification: the MediaController spec should be fully implemented on Mac platforms; high resolution timers need to be added for other platforms.

New WD publication for web audio API

Olivier: 2 weeks ago we reviewed the Web Audio API changes, and though there are still issues,
... I wanted to get the group to decide whether we think there's been enough progress in the past 3 months on the Web Audio API spec to justify a new WD
... reason is that a new WD would be a signal outside the WG to look at these changes
... this is not saying that it's finished but that there is progress that we are making public.
... Any objections to the fact that the spec has been making progress?

[no response]

Olivier: Would there be any objection to a new WD:

[no objections]

RESOLUTION: That the group will publish a new working draft of the Web Audio API

CRogers: there are one or two further changes that I would like to make before we publish, which have to do with some details
... like the maximum delay time allowed, or being able to specify the number of channels for splitters and mergers
... would like to address these

Olivier: How about we get the document ready with your new changes and next week have a last-check call on the changes? Then we can publish right away.

CRogers: sounds good. most of these are refinements not controversial changes

Olivier: one question for Doug and Thierry:

Doug: We could publish a changelog, I meant a differences document

Thierry: I don't think the diff document is very mature yet

Olivier: Let me talk with Al to see if that's possible by next week

Doug: I will be at SXSW so won't be able to do much between now and next Thursday

<cwilso> (I'll also be at SXSW)

<olivier> ACTION: Olivier to talk with Al about getting material from the spec difference document ready for inclusion into umbrella spec [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action03]

<trackbot> Created ACTION-36 - Talk with Al about getting material from the spec difference document ready for inclusion into umbrella spec [on Olivier Thereaux - due 2012-03-12].

Olivier: Can we have a bit of a discussion on a few of the next 3 items?

Jussi: we are looking at what we would need for example [...] problematic
... when people use our product they're using them as [...] the number of AudioContexts, we're looking to [...]

<olivier> http://www.w3.org/2011/audio/track/issues/3

<olivier> ISSUE-3

Jussi: We can pare down the number of AudioContexts. We have some ways of working around this

<olivier> ISSUE-3?

<trackbot> ISSUE-3 -- A way to destroy an AudioContext instance -- raised

<trackbot> http://www.w3.org/2011/audio/track/issues/3

Jussi: [couldn't hear]

CRogers: a couple of issues there. The current WebKit impl isn't optimal for this. It is an optimization that could be made and the spec has no real
... limit on the number of AudioContexts. The lifetime question: the AudioContext should be GCed when there are no more refs to it
... spec should mention that if nodes are connected to a context, this will keep it alive.
... don't see other unusual issues regarding lifetime.
... don't quite understand the problem you're having sharing a single context among multiple decoders
... I think it boils down to your running into some difficulties with the WebKit impl limiting the ctx count to 4 or something
... the optimization to increase this hasn't happened yet. I suppose we could track this in WebKit if you would like to file a bug.
... can't fix this tomorrow but it seems like a bona fide issue
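
[A sketch of the lifetime behaviour Chris describes, with names from the Web Audio API editor's draft of the time (webkitAudioContext, createBufferSource, noteOn); "buffer" is a hypothetical, already-decoded AudioBuffer:]

    var ctx = new webkitAudioContext();   // prefixed constructor in WebKit at the time
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.noteOn(0);                     // later drafts renamed this start(0)

    // While nodes are connected and playing, the context is kept alive even if no
    // JS variable still points at it. Once everything is disconnected and no JS
    // references remain, the context becomes eligible for garbage collection.
    source.disconnect();
    ctx = null;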

Olivier: Is there an issue about when GC happens? Curious to know whether this is the kind of detail that is specified in the API or left to the implementors.

Doug: There are always certain things that should be left up to the impl, as in, SVG doesn't say "don't make 50,000 elements", but that will fail
... if people are really going to make multiple ctxs and it seems like a common case, then it should be clarified

CRogers: as a best practice, a ctx should be shared, because while it makes sense to have conceptually distinct contexts, you can mix outputs within one context

Doug: what I was saying is that if people are likely to do this, consistency is important. Whatever the max number of contexts is, you want to make it consistent
... so people can code something once and have it work. Maybe the spec should speak to multiple contexts and encourage/discourage best practice. For instance iframes might be mixed on a page.
... so these things will happen, at the same time we should point people in the right direction.
... we should say something, e.g. "You should reuse the same context with subgraphs as a best practice"
... "and use multiple contexts in this situation"

Jussi: This sounds reasonable to me

<olivier> (agreement from Crogers and Jussi)
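
[A sketch of the "one shared context, multiple subgraphs" practice being encouraged here, under the same naming assumptions; musicBuffer and sfxBuffer are hypothetical decoded AudioBuffers:]

    var ctx = new webkitAudioContext();   // a single context shared by the whole page

    // Subgraph 1: background music through its own gain node
    var music = ctx.createBufferSource();
    music.buffer = musicBuffer;
    var musicGain = ctx.createGainNode(); // later drafts: createGain()
    music.connect(musicGain);
    musicGain.connect(ctx.destination);

    // Subgraph 2: sound effects, mixed into the same destination
    var sfx = ctx.createBufferSource();
    sfx.buffer = sfxBuffer;
    var sfxGain = ctx.createGainNode();
    sfx.connect(sfxGain);
    sfxGain.connect(ctx.destination);

    music.noteOn(0);                      // later drafts: start()
    sfx.noteOn(ctx.currentTime + 1);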

CRogers: There was another issue about pausing a context. There are sufficient controls that this isn't needed, but we can discuss that.
... any scheduled events with AudioBufferSourceNodes can be cancelled. Any type of HTMLMediaElement can be paused. Volume can be muted on streams.
... I feel there is sufficient control already.
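
[A sketch of the existing controls Chris lists, under the same naming assumptions; source, mediaElement and gainNode are hypothetical objects created elsewhere in the graph:]

    source.noteOff(0);         // stop a scheduled AudioBufferSourceNode (later drafts: stop(0))
    mediaElement.pause();      // pause any HTMLMediaElement feeding the graph
    gainNode.gain.value = 0;   // mute a stream by zeroing a gain node in its path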

<olivier> Joe: agreeing with Doug

<olivier> … wanted to get back to question of GC

<olivier> … should be clear on conditions to release an audio context

Doug: There should be an explicit release of a context in the API

<olivier> Doug: agree with idea from Jussi to have a way to release an audio context

Jer: Want to talk about the # of contexts needed at the same time

CRogers: Way it currently works is, if no more connections to a ctx, it gets GCed and goes away
... if any connections exist it won't be GCed

Doug: Wonder if in practice it might be better to have a "kill it" explicit API

Olivier: are you [CRogers] thinking this will lead to bad practice?

CRogers: not necessarily. if it was playing some music the impl wouldn't just abruptly kill off whatever was happening with a sharp glitch
... You still might have a reference around to the context in JS but the context would become unusable at that point.
... So after this point the ctx still exists but it is dead, not consuming resources, not playing anything

Jer: would be helpful to clarify why a ctx needs to be deleted.
... don't see any reason why it couldn't be restarted. But if the reason to stop it is to free up resources, then this should be called out explicitly

Jussi: I am wondering if there is any other value in stopping an audio ctx other than resource usage, like a disconnectAll() method

Doug: I can see a reason for that, say a browser extension that immediately stops any active sound
... a kill switch

<Zakim> olivier, you wanted to ask about duplication of feature

Olivier: that's interesting as a UC
... I have a question about the volume slowly fading when killing a ctx: would that be a duplication of something we could do otherwise?
... if so it sounds like we're adding too many ways of doing something

CRogers: that's the point I was initially trying to make. It's possible to manually disconnect everything from a ctx and then even delete the nodes, so this should already be possible with the current API

Jussi: I think this is ok, the API could schedule a fadeout and then kill the ctx

CRogers: agree
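
[A sketch of the fade-out-then-release pattern Jussi and Chris agree on, under the same naming assumptions; masterGain is a hypothetical gain node that all subgraphs route through, and the timings are arbitrary:]

    var now = ctx.currentTime;
    masterGain.gain.setValueAtTime(masterGain.gain.value, now);
    masterGain.gain.linearRampToValueAtTime(0, now + 0.5);   // half-second fade to silence

    setTimeout(function () {
        masterGain.disconnect();   // detach the graph from the destination
        ctx = null;                // drop the reference so the context can be collected
    }, 600);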

Doug: I am curious if maybe someone might be using one or more audio libs and they don't know exactly what's being held onto. Can one introspect the window and find out what ctxs are active to kill them?

CRogers: not now

<jernoble> +q

CRogers: it's a fair point. but this sounds like something we might not conclude right now.

Olivier: that's fine we don't need to decide right now on this point

Jer: One contrary point: the desire to kill an audio context parallels a desire to kill video players, due to resource leaks via event listeners, etc.
... same argument can be made w/r/t almost any media element or resource

Olivier: Jussi, would you mind summarizing for next meeting the UCs behind the issue(s)

Jussi: yes

<olivier> ACTION: Jussi to record discussion into ISSUE-3 [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action04]

<trackbot> Created ACTION-37 - Record discussion into ISSUE-3 [on Jussi Kalliokoski - due 2012-03-12].

Olivier: is the question of pausing a graph the same as this issue of killing a context?

CRogers: no, pausing a graph is about only part of the activity in a context.
... it's easy to pause a graph through existing mechanisms

Olivier: is this an open issue that no one considers a problem any more?
... Jussi, do you still consider this an issue we need to track?

Jussi: I'd like to see a code example; other than that, I'm fine with it


Olivier: moving to adjourn the call for today. I'll keep track of the other agenda items and we'll take them on next week.

<shepazu> http://www.w3.org/2011/audio/wiki/Use_Cases_and_Requirements#UC-14:_User_Control_of_Audio

Doug: I've just added this use case to reflect the need for user control

<olivier> [ADJOURNED]

<jernoble> bye!

Summary of Action Items

[NEW] ACTION: Jussi to record discussion into ISSUE-3 [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action04]
[NEW] ACTION: Olivier to add use case for video sync, add requirement to work well with mediacontroller, clarify on list [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action02]
[NEW] ACTION: Olivier to talk with Al about getting material from the spec difference document ready for inclusion into umbrella spec [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action03]
[NEW] ACTION: Record a use case that calls out a specific requirement and assigns it a priority [recorded in http://www.w3.org/2012/03/05-audio-minutes.html#action01]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2012/03/05 19:58:10 $
