W3C

- DRAFT -

Audio WG f2f meeting, TPAC day two

28 Oct 2014

Agenda

See also: IRC log

Attendees

Present
Regrets
Chair
mdjp, joe
Scribe
olivier

Contents


<scribe> Scribe: olivier

<scribe> ScribeNick: olivier

Review of day 1, welcome observers

Editorial issues marked TPAC + Editorial but NOT ready for editing -> https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+label%3A%22V1+(TPAC+2014)%22+label%3AEditorial%2FDocumentation+-label%3A%22Ready+for+Editing%22+

mdjp: higher level agenda today than bug squashing of yesterday
... testing, an opportunity for implementers to give an update
... discussion round on getusermedia
... planning for transition to v1 of the spec - maybe a broader discussion around v1 and the living WD
... also Michael Good - discussion on music notation

Testing

mdjp: the importance of testing comes from our need to prove interoperable implementations to get to recommendation status
... valuable to also look at testing done by implementers
... so we can try and build upon work already done by implementers for their own testing needs

joe: I have some experience testing non-web audio frameworks
... useful to have something run and see the results

<mdjp> chris lowis - blog post on testing

<mdjp> http://blog.chrislowis.co.uk/2014/04/30/testing-web-audio.html

joe: you need approximate yet strict criteria
... the type of testing I found useful gave the ability to record a "baseline", human-validated capture
... sometimes validated by looking at some waveform validator
... but have not built a test suite with the scope we are looking at here

mdjp: posted on IRC http://blog.chrislowis.co.uk/2014/04/30/testing-web-audio.html -> a blog post by Chris Lowis on writing tests for the web audio API
... looking at the comments there does not seem to be a lot of activity, but someone created a test for the waveshapernode as a response to this post
... use a reference output and compare output
... trickier than just comparing two results
... current approach is to get community to write tests, and WG would make sure the tests move along with the spec
... very much about building our own test suite
... question of whether we (re)use work already done by implementers
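
The reference-output comparison described above can be sketched in plain JavaScript. This is a hedged illustration only: the function name, tolerance, and sample data are assumptions, and in a real test the rendered buffer would come from `OfflineAudioContext.startRendering()` rather than a hand-built array.

```javascript
// Compare a rendered buffer against a human-validated reference buffer,
// within a tolerance, since exact float equality is too strict across
// implementations. Names and tolerance are illustrative assumptions.
function buffersMatch(rendered, reference, tolerance = 1e-4) {
  if (rendered.length !== reference.length) return false;
  for (let i = 0; i < rendered.length; i++) {
    if (Math.abs(rendered[i] - reference[i]) > tolerance) return false;
  }
  return true;
}

// Stand-in data; a real suite would render via OfflineAudioContext.
const reference = Float32Array.from([0, 0.5, 1.0]);
const rendered = Float32Array.from([0, 0.50005, 0.99996]);
console.log(buffersMatch(rendered, reference)); // true
```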

padenot: in our test suite there are spots that are not tested
... in FF, chrome or the W3c test suite for that matter

<mdjp> test suite - https://github.com/w3c/web-platform-tests

joe: is there anything in FF that isn't in the W3C test suite?

<mdjp> https://github.com/w3c/web-platform-tests/tree/master/webaudio#

padenot: yes, there are a LOT of tests in our test suite, they run all the time

joe: sounds like what you have is much more extensive than what is in the W3C test suite
... lots of tests in the w3c repo fail - even IDL tests

padenot: probably not updated
... 150 test files, close to 1000 tests

olivier: question is how big would it be for full coverage

padenot: when I look at code coverage what we have is all right - not 100% but close

joe: if we get good code coverage it's very likely we cover the spec, but not guaranteed

olivier: W3C has long history of testing, these days the bar is much higher, with a lot of tests per feature, combinatory etc

BillHofmann: (speaking for myself not Dolby) implication that there is an audit that goes on
... painful but perhaps needed
... maybe for v2 we could try and write the test before / as we write the spec

joe: hard to just cook up the expected result - you need some implementation to write the test
... need to build bridges from 2 sides
... on the one side, existing tests from implementors
... on the other, spec and some tests
... if we go through the spec and annotate it

olivier: can also extract a lot of testable assertions from MUST, SHOULD keywords in the spec
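
The keyword-extraction idea above can be sketched as a small script that scans spec prose for RFC 2119 keywords. This is a minimal illustration, not a tool the group has; the sample sentences are invented and the sentence splitting is deliberately naive.

```javascript
// Pull candidate testable assertions out of spec prose by scanning for
// RFC 2119 conformance keywords. Naive sentence splitting; illustrative only.
function extractAssertions(specText) {
  const keywordRe = /\b(MUST NOT|MUST|SHOULD NOT|SHOULD|MAY)\b/;
  return specText
    .split(/(?<=\.)\s+/) // split after sentence-ending periods
    .filter((sentence) => keywordRe.test(sentence));
}

const sample =
  "The attribute MUST default to 1. " +
  "Implementations SHOULD clamp the value. " +
  "This section is informative.";
console.log(extractAssertions(sample).length); // 2
```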

joe: might be best to have other people than implementers doing that analysis of the spec
... would seem fair too

mdjp: sensible way forward
... how do we actually make it happen
... assumption that ChrisLo may not have enough time to take it on and lead effort at this point
... suggest we need someone to coordinate the testing work
... [call for volunteer]

joe: permanent benefit is exemption from scribing. And biscuits

olivier: not sure if non-implementor constraint is helping

joe: agree - just think it would be better if non-implementors involved in it

mdjp: another action on us to communicate with other groups, get better understanding of how they do it
... SVG mentioned
... also important not to throw away work done so far - can it be used as a basis?

olivier: some groups have successfully engaged with community in test-writing days

joe: might do so at music conf in Boston soon

padenot: also - upcoming web audio conference
... we can invite people at moz space

Jerry: need to have prior understanding of the holes in your coverage

joe: we have a repository for that https://github.com/w3c/web-platform-tests/tree/master/webaudio#

BillHofmann: do we have tooling that we need to do perceptual diff

padenot: usually enough to have reference buffer and then compare

olivier: first slice is whether the interfaces work, then all the things that can be compared with ref buffer; for the rest we can use crowdsourced testing like CSS has been doing

joe: wonder how useful is audioworker to the testing?

padenot: all our tests use offlineaudiocontext and SPN

mdjp: would be good to start using it
... would be good to get Chris Lowis on a call in near future to discuss anything we missed, understand his approach
... and then identify person/people to lead the effort

joe: anyone aware of a need for the WG tests to belong to this framework

olivier: not necessary - more of a case of "here's a system"

joe: better to start with whichever suite has better coverage today?

Browser vendor feedback - current implementation status, future plans, issues and blockers

mdjp: open the floor to implementers

padenot: no big problem so far - we have pretty much everything implemented

joe: anything in the way of future plans

padenot: audioworker is going to be a bit complicated

hongchan: resume/suspend in the works, seems hard
... rtoyg has started recently

padenot: yes that will be hard

joe: Jerry are you getting what you need from the group?

jdsmith: see a lot of open issues we can engage in; getting awareness of where the spec is

joe: any other particular areas of concern

jdsmith: not right now
... we are engaged in implementing, currently assessing the areas where we might run into issues
... encouraging so far
... haven't yet looked at ambiguities in the spec
... would be what I would prioritise - no specific examples yet
... there's a lot of bugs/issues at the moment so takes time to get head around what is highest priority

mdjp: good to continue the current review?

jdsmith: yes

joe: the identification of what we discussed was the chair's take
... we were trying to tackle anything which could hinder implementers looking at an older version of the spec

mdjp: good to continue feeding back issues, group keen to hear what may hinder implementations

[break]

bugzilla

discussion on how to close remaining bugs in bugzilla, as leaving them open tends to give people the impression that the tracker is still active

closing bugs for web audio API on bugzilla - hoping it will not create a deluge of emails this time...

all done - looks like the email address most likely to have received a lot of email was Chris Rogers', and it has been inactive for a while

<hongchan> mdjp: moving onto minor/editorial issues

<hongchan> https://github.com/WebAudio/web-audio-api/issues/336

<hongchan> TPAC RESOLUTION: No.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/328

<hongchan> ray: the spec text doesn't say clearly what happens when distance > maxDistance.

<hongchan> padenot: actually spec has the formula

<hongchan> joe: http://webaudio.github.io/web-audio-api/#idl-def-DistanceModelType

<hongchan> joe: spec has the formula

<hongchan> TPAC RESOLUTION: Clarify the formula. To use minimum of distance or maxDistance.

<hongchan> joe: this issue is the same thing - https://github.com/WebAudio/web-audio-api/issues/326

<hongchan> TPAC RESOLUTION: Resolve formulas as recommended
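
The clamping behaviour resolved above can be illustrated with the inverse distance model. This is a sketch under stated assumptions: parameter names follow PannerNode's attributes, but the function is not the normative formula text, only a demonstration that gain stops decreasing beyond maxDistance.

```javascript
// Inverse distance gain with the clarified clamp: distance is taken as
// min(distance, maxDistance), so gain is constant past maxDistance.
// Illustrative sketch, not the spec's normative text.
function inverseDistanceGain(distance, refDistance, maxDistance, rolloffFactor) {
  const d = Math.min(distance, maxDistance); // the clarified clamp
  return refDistance /
    (refDistance + rolloffFactor * (Math.max(d, refDistance) - refDistance));
}

// Beyond maxDistance the gain no longer decreases:
console.log(inverseDistanceGain(100, 1, 50, 1) ===
            inverseDistanceGain(50, 1, 50, 1)); // true
```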

<hongchan> https://github.com/WebAudio/web-audio-api/issues/325

<hongchan> joe: we're not retaining Doppler anymore. Closing.

<hongchan> Issue 324 - https://github.com/WebAudio/web-audio-api/issues/324

<hongchan> https://github.com/WebAudio/web-audio-api/issues/324

<hongchan> rtoyg: when the panning passes the origin (zero) it produces a glitch.

<hongchan> hongchan: this is only for HRTF. right?

<hongchan> rtoyg: don't recall right now. have to check.

<hongchan> mdjp: aren't we getting rid of panner node?

<hongchan> padenot: no. we will keep the old one.

<hongchan> TPAC RESOLUTION: Modify formula to use a continuous value as it moves through the listener location.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/318

<hongchan> padenot: this is tricky because we need to jump back and forth between two threads.

<hongchan> joe: what about the 'intrinsic' value and 'computed' one?

<hongchan> padenot: what chrome is doing is a bit nicer, but..

<hongchan> padenot: this requires more talking and questions.

<hongchan> mdjp: so is this fundamental?

<hongchan> padenot: yes, the implementation between chrome and firefox is also a bit different.

<hongchan> mdjp: no resolution right now. moving on.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/318

<hongchan> https://github.com/WebAudio/web-audio-api/issues/314

<hongchan> rtoyg: .createBuffer is limited in terms of supported sample rates.

<hongchan> joe: why do we have these limitations? should be removed.

<hongchan> rtoyg: we should specify the minimum requirements.

<hongchan> In Chrome it supports 3 kHz ~ 192 kHz to make it compatible with Media.

<hongchan> padenot: at least 192k?

<hongchan> TPAC RESOLUTION: specify rate from 8 kHz to 192 kHz

<hongchan> NOTE: createBuffer also requires update to reflect this change.
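
A guard for the resolved sample-rate range might look like the following. The bounds come from the resolution above; the helper name is made up, and the real spec behaviour would be createBuffer itself throwing NotSupportedError.

```javascript
// Illustrative range check for the resolved createBuffer sample rates.
// 8 kHz - 192 kHz per the TPAC resolution; helper name is hypothetical.
const MIN_SAMPLE_RATE = 8000;
const MAX_SAMPLE_RATE = 192000;

function assertSupportedRate(sampleRate) {
  if (sampleRate < MIN_SAMPLE_RATE || sampleRate > MAX_SAMPLE_RATE) {
    // In the spec, createBuffer would throw NotSupportedError here.
    throw new Error(`sampleRate ${sampleRate} outside ` +
                    `${MIN_SAMPLE_RATE}-${MAX_SAMPLE_RATE}`);
  }
  return sampleRate;
}

console.log(assertSupportedRate(44100)); // 44100
```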

<hongchan> https://github.com/WebAudio/web-audio-api/issues/307

<hongchan> mdjp: this is quite straightforward.

<hongchan> TPAC RESOLUTION: Agreed.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/305

<hongchan> TPAC RESOLUTION: should throw NOT_SUPPORTED_ERR exception.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/287

<hongchan> padenot: we have it in the spec now. Closing.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/281

<hongchan> mdjp: we continue to play to the end and not loop.

<hongchan> …: I'll leave it as Clarification.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/269

<hongchan> rtoyg: this is fixed. Closing.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/257

<hongchan> padenot: we can't close this. need more discussion.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/241

<hongchan> padenot: this is not relevant for AudioWorker. Removing.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/135

<hongchan> padenot: this is about Doppler effect, so not relevant anymore. Closing.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/131

<hongchan> rtoyg: we always do the interpolation linearly.

<hongchan> rtoyg: user can set whatever he/she wants to draw the curve.

<hongchan> rtoyg: if you don't want to do it linearly, what do you want to do?

<hongchan> …: that itself opens another discussion.

<hongchan> padenot: we can spec it as 'linear interpolation' and open again if people want another kind of interpolation.

<hongchan> TPAC RESOLUTION: Spec to clarify linear interpolation. If another kind of interpolation is wanted, a feature request is required.
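
The linear interpolation being specified can be sketched as sampling a value curve at a given time. Variable names here are assumptions for illustration, not the spec's algorithm text.

```javascript
// Sample a value curve at time t within [startTime, startTime + duration],
// linearly interpolating between adjacent curve points - a sketch of the
// behaviour the resolution above asks the spec to pin down.
function sampleCurve(curve, startTime, duration, t) {
  const pos = ((t - startTime) / duration) * (curve.length - 1);
  const i = Math.min(Math.floor(pos), curve.length - 2);
  const frac = pos - i;
  return curve[i] * (1 - frac) + curve[i + 1] * frac;
}

const curve = Float32Array.from([0, 1, 0]);
console.log(sampleCurve(curve, 0, 2, 0.5)); // 0.5, halfway up the ramp
```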

<hongchan> https://github.com/WebAudio/web-audio-api/issues/129

<hongchan> mdjp: sounds similar to the previous.

<hongchan> padenot: not relevant anymore since we decided to drop the doppler.

<hongchan> RESOLUTION: Arbitrary units are appropriate as doppler is being removed.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/127

<hongchan> padenot: this is kinda closed.

<hongchan> mdjp: closing.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/128

<hongchan> hongchan: we didn't reach a conclusion on this.

<hongchan> mdjp: I'll make a note on this so we can come back later.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/125

<hongchan> RESOLUTION: Do not include documentation on convolution in the spec.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/121

<hongchan> rtoyg: don't we use double nowadays?

<hongchan> padenot: all time values are doubles and the samples should be floats?

<hongchan> hongchan: ES is using double anyway, is it meaningful to use float in the spec?

<hongchan> rtoyg: we fixed some issues in internal implementation about float/double misuse.

<hongchan> mdjp: paul do you have the consistency in your implementation?

<hongchan> padenot: we're going to review the variables.

<hongchan> rtoyg: if you want to use double in the spec, we don't have any problem.

<hongchan> padenot: yeah

<hongchan> rtoyg: when we convert double to float, we will lose some precision.

<hongchan> rtoyg: paul do you use float internally right?

<hongchan> padenot: yes

<hongchan> rtoyg: then we should just keep both float and double in the spec. just to specify the internal difference in the implementation.

<hongchan> mdjp: do we have to discuss this further?

<hongchan> rtoyg: I am happy with leaving it as float.

<hongchan> padenot: yeah.

<hongchan> RESOLUTION: Specify float for all values except for time. Current implementations use floats internally so specifying doubles would look incorrect when values are inspected.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/118

<hongchan> padenot: sometimes creative applications require extreme resampling.

<hongchan> ChrisL: I am also arguing we want to have other types of resampling.

<hongchan> mdjp: are we adding some type attribute to the spec?

<hongchan> mdjp: can we put this as 'not quite an editorial issue' and come back later?

<hongchan> https://github.com/WebAudio/web-audio-api/issues/111

<hongchan> padenot: we start to explore this, but it is quite complex.

<hongchan> mdjp: we start adopting CPU gauge into nodes.

<hongchan> padenot: voice stealing or a CPU-saving feature when the CPU is overloaded.

<hongchan> mdjp: we can close this one.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/105

<hongchan> ChrisL: we can just output silence from the oscillator when there is no periodic wave defined.

<hongchan> padenot: this might be a bigger issue - what do we do when an invalid value is set?

<hongchan> mdjp: Closing and raising wider issue around spec wide behavior on setting enum to invalid values.

<hongchan> joe: we're moving onto getUserMedia with Harald.

<hongchan> Scribenick: hongchan

<BillHofmann> Philippe asks - no prompt to play to default output devices - do we want to have a different behavior on non-default devices? (in context of access controls)

<BillHofmann> Scribenick: BillHofmann

Harald notes that headsets have to work; a headset associates an input with an output (vs plain headphones) - the heuristic is that if you've granted input access, output access is implicitly granted for that use case

Joe: should the issue be deferred to UAs?

Harald: would be good to have good non-normative recommendations!

padenot: notes that default audio devices are completely different on devices and OSes.

BillHofmann: concern for non-traditional UAs (like digital media adapters)

padenot: if from a secure origin, you can persist the choice, at least.

<ChrisL> "user agents MAY WISH TO allow users to give broad permissions for device access" RFC 6919

Philippe: want to be able to grant (for instance) generic permission to connect audio devices

Joe: if an existing app that plays out to default device then adds device enumeration - picking the same device shouldn't cause a prompt.

ChrisL: the default playout can change based on context (e.g., speakers get muted when you go to a meeting - headphones put in - speakers shouldn't be accessed without prompt)

general discussion on fingerprinting

Harald: certain information doesn't require authorization - can enumerate and get more things

Philippe: permission for enumeration should allow also access (input/output)

Harald: note that the UI may actually require information to build *before* permissions (e.g., remove camera options)

Philippe: perhaps the getDevices is parameterized

BillHofmann: what about expectation of consistent behavior (action when you expect it)

Philippe: initial enumerate would be based on basic UI; once you want to use the camera, you have to request user auth...

Joe: concern - this front-loads the permission process earlier than it might be otherwise relevant - developers will just request at the beginning

Philippe: restates concern - only one auth

Harald: if we go down that path, should be proposed at getUserMedia/MediaCapture session; lots of negative feedback if we try to re-open

Joe: Summarizing - not assuming anything that might happen in the task force - there's a proposal that would permit enumeration of output devices - will that be transparent or opaque

Harald: same as input devices

BillHofmann: why not a whole lot of info on output device characteristics

Harald: more information will be forthcoming once the security issues are addressed
... Note that things like characteristics of output device are things that Audio WG will be able to help

padenot: passthrough of compressed streams not relevant to AudioWG - however, Firefox does things like that for e.g. MediaElement playback of MP3 to headphones.

Harald: if we want to extend output devices, it'd be appropriate to propose as extensions afterwards
... after v1.0

Joe: at what points in the lifecycle of a context - when can it be attached

padenot: should be switchable - actually relevant in e.g. videochat case

Joe: would you change the AudioContext samplerate when you switch?

padenot: no - you'd put a resampler at the end - too much impact on the graph otherwise.

(All): concerns about order of operations - do you need to know the sample rate of the output device before you can build the AudioContext?

padenot: some suggestion to apply constraints to output device (e.g. sampleRate) when creating the context

Joe: propose a way for the audio api to understand characteristics of devices

(All): again, concern of fingerprinting

Philippe: proper division of labor: characteristics come from WebAudio, permission/API comes from MediaCapture

Harald: seems reasonable

Joe: Let's break for lunch!

(All): general hurrahs!

transition to V1 of the spec. Assign and review actions.

mdjp: is it even appropriate to discuss a timeframe here?

padenot: yes - definitely, with the changes we've discussed. What is out of scope? Plugins, what else? AudioWorker definitely needed, allows us to work around missing nodes.

mdjp: suggests that AudioWorker changes the paradigm

padenot: always there as a concept, but ScriptProcessor was broken

mdjp: do native nodes become less relevant since you can implement everything in script?

padenot: we need both

cwilso: doesn't think that v1 target timeframe is the most important thing - need to be sure that we've made it possible to implement the use cases
... (particularly referring to VST-related issues/plugins)
... a number of other architectural issues before we can say, "We're done"
... and yet - don't think we should have the "pure living standard" approach - need to ship something.

mdjp: without putting a time limit on it - what are the outstanding actions?

BillHofmann: (speaking for himself) - do we have the list of issues that are the real blockers for completing the use cases? (The P0s?)

cwilso: we need to keep prioritizing bugs

(All) more discussion around issues

mdjp: we need to validate that we've met the use cases, or documented why we've dropped them.
... we need to step back from the process stage (CR/LC/...) and determine what steps we need to get to the point that we're ready to do it.
... we need to schedule out our work to that point - a matrix of use case vs bugs + features

<scribe> ACTION: mdjp to put together matrix [recorded in http://www.w3.org/2014/10/28-audio-minutes.html#action01]

<trackbot> Created ACTION-115 - Put together matrix [on Matthew Paradis - due 2014-11-04].

cwilso: what is the date for CR?

mdjp: LC q2 2015; CR q4 2015
... important for us to be crisp and drive to clarity on use cases, and have an API we're happy with, rather than "just ship it"
... proposal to review use cases

(All) reviewing use cases http://www.w3.org/TR/webaudio-usecases/

mdjp: video chat - questions re speed

cwilso: could you spread out TBL speaking? Seems like.

mdjp: 3D game with music and convincing sound effects

cwilso: the reality here is that this is a good example of where performance is going to be really critical; there are subtleties
... humblebundle is an example of something that can help us run this down

padenot: an Unreal Engine 4 shooter can run via Emscripten - and uses PannerNode, etc.

mdjp: online music production tool
... one big issue is VST
... this is the major "problem use case" - and may need to be revised to bring up to date with what we understand (e.g., VSTs)

observer: do we really *need* VSTs?

cwilso: talked to a bunch of software providers who assumed that VSTs would be supported; however we don't have a way of supporting inter-app audio communication
... note that for instance an include of third-party JS into e.g. an Ableton online tool - there's a trust issue

observer: proposing that you could release without support of VSTs, for instance, as long as you could implement at a later date.

mdjp: yes, that's more or less where we've come to. we need to make a decision about it - could just stick to use case, meaning no VST support; or look at broader concerns.
... action at the moment is to look at the use cases vs bugs and determine where the real problem use cases are, and what we absolutely do need.

Joe: I think that when we created the use cases we didn't expect to match the capabilities of all competitors; I would argue that it's time to deliver.

mdjp: I think we can quickly identify those few key issues and drive to a v1 spec that we can work from (it can change, but...)
... Next up... Chris Wilson on WebMIDI

<joe> cwilso: current editor's draft of WebMIDI is at http://webaudio.github.io/web-midi-api/

<joe> cwilso: standards progress - I've been pushing this forward for a while. some minor edits last month

<joe> cwilso: 12 open issues in github. only a couple interesting ones needing resolution, state attribute for instance

<joe> cwilso: need to deal with limitations on sharing of ports. InputMap and OutputMap need some reworking

<joe> cwilso: a couple issues I'd like people to weigh in on but other than that it's pretty well done.

<joe> cwilso: open and close are still issues

<joe> cwilso: hotplugging is critically important

<joe> cwilso: mozilla is working on WebMIDI but only one person (labor of love)

<joe> cwilso: before end of 2014 Chrome will give intent to ship email to Blink list

<joe> cwilso: right now users have to enable experimental flag in Chrome

<joe> cwilso: Chrome for iOS is not really Chrome, so MIDI won't be there

<joe> cwilso: but... it could work...

<joe> joe: what questions to the group would help WebMIDI move forward?

<joe> cwilso: there are 3 issues that I'd welcome feedback on

<joe> cwilso: Issue 75 (MIDI port open) problem is per-app exclusivity of MIDI ports

<joe> cwilso: the API doesn't have this concept

<joe> cwilso: implication of open and close w/r/t state

<joe> joe: issue thread is long, need to read it to respond: https://github.com/WebAudio/web-midi-api/issues/75

<joe> cwilso: in the API, there is no concept of open/close, but some underlying OSs require this b/c of exclusive access

<joe> cwilso: inclination is to add explicit open/close, maintain state, and implicitly open when a port is used
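
The inclination described above - explicit open()/close() plus implicit open on first use - can be modelled as a tiny state machine. This is a toy model of the semantics under discussion, not the Web MIDI API itself; the class and method behaviour are assumptions for illustration.

```javascript
// Toy model of MIDIPort open/close semantics: explicit open() and
// close(), with implicit open when the port is first used to send.
// Not the actual Web MIDI API - an illustration of issue 75's proposal.
class PortModel {
  constructor() { this.state = "closed"; }
  open()  { this.state = "open"; }
  close() { this.state = "closed"; }
  send(bytes) {
    if (this.state !== "open") this.open(); // implicit open on use
    return bytes.length;                    // stand-in for delivery
  }
}

const port = new PortModel();
port.send([0x90, 60, 100]); // note-on; implicitly opens the port
console.log(port.state); // "open"
```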

<joe> cwilso: virtual midi ports (issue 45)

<joe> cwilso: if you want to build a soft synth in one browser tab and access from another browser tab, you have no way of doing that today

<joe> cwilso: part of why I want to punt this is that it's hard to do

<joe> cwilso: not just in browser env, but also if you want to provide a web-based synth to Ableton Live as a quasi native app

<joe> cwilso: this carries impl burden of implementing a virtual MIDI device driver

<joe> padenot: there's a WebRTC demo using Web MIDI, with multiple browsers communicating over WebRTC data channels

Music Notation

<olivier> joe: will be the subject of a breakout session tomorrow

<olivier> TBA - link to joe's presentation?

<olivier> joe: rough proposal - form a CG to further consider path forward on a new markup language drawing on past wisdom, eliminate backward-compat and IP concerns

<olivier> [Michael Good, VP R&D makemusic]

<olivier> michael: showing how widely MusicXML is implemented, explains desire to eventually transfer to a standards org when mature

<olivier> ... in order to better support the industry's transition from printed to digital sheet music

<olivier> shepazu: w3c CG would be great, but considering AMEI/MMA and W3C have been working together would like some joint publication

<olivier> ... would recommend not bringing it to W3C without support of music-focused orgs

<olivier> joe: a CG can be very pluralistic

<olivier> shepazu: anybody can join

<olivier> ... needs chairs who can curate/cultivate the community

<olivier> ... output could be a w3c CG import (MusicXML + changes)

<olivier> olivier: maybe more as a way to look to the future?

<olivier> TomWhite: likelihood of transfering the community, not just the work?

<olivier> michael: understand that transfer would involve work, evolution would not happen magically

<olivier> joe: hopeful to get new contributors to do some of the heavy lifting

<olivier> ... build new momentum

<olivier> TomWhite: can work be done by any?

<olivier> shepazu: no need to be members. CG contribution as individuals, open to all

Wrapping up

<olivier> mdjp: anything we haven't covered in the past 2 days

<olivier> cwilso: one issue on which I may need more input

<olivier> ... https://github.com/WebAudio/web-audio-api/issues/373

<olivier> padenot: it's an issue because sometimes you are creating dozens of nodes per sec

<olivier> ... e.g in gaming every sound effect would be a buffersourcenode

<olivier> cwilso: easier with fire&forget and the reusable nature of nodes

<olivier> ... previously there was (well known?) limit of 5 playing audio elements

<olivier> ... which you needed to have in cache, decoded

<olivier> ... however, if you want reusable sound player

<olivier> ... you can do that but you will have to have a finite set of sounds playing at one time

<olivier> ... number of voices playing

<olivier> ... which you could track, perhaps using buffersourcenode, but will be messy

<olivier> ... using CPU to clean up buffersourcenodes not ideal

<olivier> joe: minimal amount of GC collected, no?

<olivier> cwilso: no, I mentioned that for particular app

<olivier> joe: worst case scenario - hundreds of objects a second

<olivier> padenot: not extreme

<olivier> ... FPS example

<olivier> joe: GC hundred nodes a sec - is it that bad?

<olivier> cwilso: problem is that GC is unpredictable

<olivier> joe: just want to understand if GC causes actual problem

<olivier> cwilso: came from a bug report saying GC was causing problems, because they were using streaming, decoding track and creating buffersourcenodes and chaining them

<olivier> ... if you stutter every 5 seconds, probably bad

<olivier> joe: seems like there will be GC regardless, in this case

<olivier> cwilso: problem remains in a "machinegun" use case

<olivier> padenot: what if you could re-trigger

<olivier> joe: could not play simultaneously

<olivier> padenot: yes but developer could use an array of them

<olivier> joe: seems problem is not well quantified

<olivier> padenot: could be a quick fix

<olivier> cwilso: there is data associated to the bug report

<olivier> cwilso: do we still have playingstate in buffersourcenode?

<olivier> all: no

<olivier> joe: suggest to defer - this can be done with audioworker. Prefer not to do away with fire and forget model too lightly

<olivier> [discussion whether audioworker would create garbage there]

<olivier> cwilso: how long do we defer?

<olivier> joe: we actually need to commit to leaving a few things alone for a while while we focus on "v1"

<olivier> [discussion on next draft to be published in /TR]

<olivier> mdjp: focus on mainly audioworker for the next heartbeat pub?

<ChrisLilley> 19 December through 5 January 2015

<olivier> olivier: suggest group could schedule publications according to big issues which would require significant review

<olivier> mdjp: sounds like tagging all issues with milestones is next action

<olivier> [discussion on what we want to commit to by next teleconference]

next meeting

<olivier> joe: two weeks from Thu?

<olivier> cwilso: 13th Nov

<olivier> ChrisLilley: would it be OK to have a list of what can be published/what needs more work

<olivier> cwilso: unlikely by then

<olivier> ... by then I will either have detailed solution for inputs/outputs or marked all issues

<olivier> ... but probably not both

<olivier> mdjp: suggest milestones first, address issues second

<olivier> joe: time box milestone process so there is enough time to resolve audioworker, migrate to a new WD

<olivier> ... by the moratorium

<olivier> RESOLUTION: statement of intent - the group aims to publish a new WD with audioworker before the moratorium

<olivier> [adjourned]

Summary of Action Items

[NEW] ACTION: mdjp to put together matrix [recorded in http://www.w3.org/2014/10/28-audio-minutes.html#action01]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2014/10/29 00:42:10 $
