See also: IRC log
<trackbot> Date: 25 October 2015
<paul> haa
<scribe> scribenick: cwilso
<padenot reviews processing model>
http://padenot.github.io/web-audio-api/#processing-model
<hongchan_> https://drafts.css-houdini.org/isolated-workers/#examples
padenot: the Houdini description of running processes on different threads seems helpful in describing our model
<hongchan_> I find this is relevant for us.
<hongchan_> (what paul just pointed out)
Joe: what changes do you see coming out of the Houdini draft that will affect us?
padenot: the Houdini draft says you don't parse script on the Houdini thread (?)
joe: do you think the interfaces described for nodes would be affected by this?
paul: yes, I don't think that would change.
... I don't see incompatibilities between those APIs and IsolatedWorker
bill: are they proposing this (in Houdini) as a general interface?
paul: that's the intent.
hongchan/cwilso/paul agree it's intended to be a general concept.
joe: where are we relative to implementability?
paul: speaking as an implementer, I'm gonna wait a bit. :)
... I've spoken with the person who would likely end up implementing, and they're not sure exactly how script is intended to be run, etc.
hongchan: ian kilpatrick has a prototype of the underlying bits; once that settles in, I'll be looking at implementing
<padenot> https://github.com/w3c/css-houdini-drafts/issues/
<padenot> https://github.com/w3c/css-houdini-drafts/labels/Isolated%20Workers
paul: issues 54 and 56 are directly related to audio
joe: processing model is the biggest new thing introduced, and I haven't seen a lot of comments on the group. Is it good, is it complete...
paul: I can tell you it's not complete (yet)
bill: if I understand, the processing model affects the implementer of audioworker, but not the user?
paul: well, sort of - the user of audioworker may have their code break due to the processing model.
joe: small question about processing model: on currentTime. Are you going to merge these changes (annotating sync/async) soon?
paul: blink uses an atomic to update currentTime, we use a stable state. We think the latter is more correct, but open to discussion.
joe: is there something the group needs to discuss in this area? I have a PR that redefines how currentTime is updated. IS that still appropriate?
paul: if you while(1) console.log(context.currentTime), does it ever update? We think no...
joe: would it make sense for the update of currentTime to take effect while the main loop is busy?
... the new semantic of currentTime reflects work done by the audio thread...
cwilso: I think currentTime should be advancing...
joe: also, we're going to be adding mapping between currentTime and DOM time.
cwilso: yeah, we'd end up with more jitter in the timing then.
paul: I think I agree.
joe: processing model needs a step 6 to reflect this.
paul: we should look at how performance time gets updated, maybe?
cwilso: This is blasphemy! This is madness! THIS IS AUDIO!
paul: I'll follow up on this.
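The "stable state" semantics paul describes can be sketched with a toy model (a hypothetical ToyContext of our own, not the real AudioContext): the audio thread keeps its own clock, but script only observes a snapshot refreshed between main-thread tasks, which is why a busy loop never sees currentTime advance.

```javascript
// Toy model of stable-state currentTime updates (illustration only).
class ToyContext {
  constructor() {
    this._audioThreadTime = 0; // what the audio thread has actually reached
    this._snapshot = 0;        // what main-thread script can observe
  }
  get currentTime() { return this._snapshot; }
  // the audio thread advances independently of the main thread
  audioThreadTick(seconds) { this._audioThreadTime += seconds; }
  // runs only between main-thread tasks ("stable state"), never mid-task
  stableStateUpdate() { this._snapshot = this._audioThreadTime; }
}

const ctx = new ToyContext();
ctx.audioThreadTick(0.128);  // audio thread moves on...
console.log(ctx.currentTime); // 0 — a while(1) loop would only ever see this
ctx.stableStateUpdate();      // happens once the main loop yields
console.log(ctx.currentTime); // 0.128
```

Under the atomic model Blink uses, the getter would instead read `_audioThreadTime` directly, so even a tight loop would observe it advancing.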
<mdjp> https://www.w3.org/community/webtiming/
paul: also somewhat related: on time-domain mapping, there's another person working on syncing multiple devices' time clocks. (See url above)
... it's clear we can't just have a number of milliseconds of latency
cwilso: or can we? :)
joe: with chair hat off, there's a glaring need for even a bad representation of latency.
... to come back to audioworker...any more to discuss here?
cwilso: naming?
<joe> bhofmann: if we step back, we're letting people write audio nodes
<joe> cwilso: we're letting them write custom audio processing
<joe> bhofmann: should we call them "custom audio processors"?
<joe> cwilso: if we have CustomAudioProcessor we'll have CustomAudioProcessorNode too
<joe> bhofmann: trying to avoid getting caught up in worker
<joe> bhofmann: obviously names matter a lot or we wouldn't have this discussion
<joe> cwilso: we're not trying to harmonize with houdini
<joe> cwilso: I'll come up with some names
matt: people should come up with names during the next two days and write them on the board.
https://github.com/WebAudio/web-audio-api/issues/573
<mdjp> mdjp: cwilso will be judged to be the voice of reason in taking our naming suggestions further.
joe: ray, can you summarize?
rtoy: this is a mashup of 1) scheduling two things at the same time, and 2) interlacing between schedule calls
... the model is relatively clear in the spec (other than the initial value point).
... if you have automation going on, and you schedule a new event that ends before the one you currently have running
... what happens?
<MUCH DISCUSSION ENSUES>
resolutions: 1) #344 should remain the plan of record, 2) we should remove the rule that automation events of the same type override each other; the rest is fine.
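For reference, the linear-ramp computation the automation model builds on can be written out in plain JS (the formula v(t) = V0 + (V1 - V0)·(t - T0)/(T1 - T0) matches the spec's linearRampToValueAtTime; the function name is our own):

```javascript
// Reference calculation for a linear automation ramp from value v0 at
// time t0 to value v1 at time t1, evaluated at time t (illustration only).
function linearRampValue(v0, t0, v1, t1, t) {
  if (t <= t0) return v0; // before the ramp starts: initial value
  if (t >= t1) return v1; // after the ramp ends: hold the target value
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// A ramp from 0 to 1 over [0s, 2s] is halfway at t = 1s:
console.log(linearRampValue(0, 0, 1, 2, 1)); // 0.5
```

The disputed cases above are about which event supplies v0/t0 when a newly scheduled event's interval overlaps one already running.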
<break>
</break>
<joe> iankilpatrick: visiting from CSS Workers
<joe> ian: IsolatedWorkers piece of infrastructure that we need for CSS extensibility in the rendering engine
<joe> ian: for example, define your own custom layout. CSS Paint API spec coming soon to create callbacks
<joe> ian: concerns on running user's script in the main thread. what happens if you call setInterval() in this context?
<padenot> https://drafts.css-houdini.org/isolated-workers/
<joe> ian: the first thing we needed is a stripped down JS worker concept
<joe> ian: IsolatedWorkerGlobalScope is basically empty except for Base64 functions, which are convenient to have
<joe> ian: we wanted to start from a clean slate with no dangerous APIs that someone can call from the middle of a rendering engine
<joe> ian: then we came to the next thing we needed: the ability to be able to run on multiple threads and not be tied to main thread at all.
<joe> ian: in the future we'd like to be able to run painting on separate thread
<joe> ian: this is why we wanted the ability to spin up multiple global scopes that are tied to one worker and be able to arbitrarily call into any of them. User can define all of this.
<joe> ian: for purposes of Houdini we'll probably run two IsolatedWorker global scopes and randomly assign calls to each of them
<joe> ian: concerns about devs depending on order of allocation of calls to these global scopes
<joe> ian: there isn't the ambiguity of an event callback while you're doing something synchronously. see CSS Paint spec
<joe> ian: on the order of 20 KB per context. doesn't make sense for layouts with 100s of nodes to all have their own context
<joe> ian: other things in spec that try to reinforce this. for example when script is loaded, it's forced into a strict JS parsing mode and wrapped in an anon. function
<joe> ian: goal is to make it really hard for script scopes to communicate accidentally
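The wrapping ian describes can be sketched in plain JS (our own toy loader, not the Houdini spec's mechanism): loaded script source is forced into strict mode and wrapped in an anonymous function, so undeclared assignments throw instead of creating globals, and top-level declarations stay private.

```javascript
// Toy loader illustrating strict-mode + anonymous-function wrapping
// to keep script scopes from communicating accidentally (sketch only).
function loadIsolated(source) {
  const wrapped = `"use strict";\n(function () {\n${source}\n})();`;
  return new Function(wrapped); // compile the wrapped source
}

// 'var hidden = 2' stays inside the wrapper, never reaching the global scope:
loadIsolated("var hidden = 2;")();
console.log(typeof globalThis.hidden); // "undefined" — nothing leaked

// An undeclared assignment would have silently created a global in sloppy
// mode; in strict mode it throws a ReferenceError instead:
let threw = false;
try { loadIsolated("leaky = 1;")(); } catch (e) { threw = true; }
console.log(threw); // true
```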
<ChrisL> (unminuted fast discussion on cancelling automation without holding onto future scheduled values, and the precise meanings of "now" and "as soon as possible")
<mdjp> TAG: what is the problem with making audio params constructable?
<mdjp> cwilso: no ability to create an audio param which does not have its values accessed, as they are not updated
<mdjp> joe: current audio worker audio params are created indirectly from global scope
<mdjp> cwilso: they can be created on main thread or within AW script
<mdjp> TAG: if we cannot find a way to make progress on this quickly it would not be an issue
<mdjp> joe: new audio param and pass in node as first argument
<mdjp> TAG: is there a param which is global and referenced by each node?
<mdjp> cwilso: audio worker is not a node - it's a factory for a type of audio processor - returns a node with audio params attached
<mdjp> cwilso: only loads code once in the audio thread - creates objects with their own audio params
<mdjp> cwilso: audio params created on instance, not on generic audio worker - audio params not created until a node is instantiated
<mdjp> padenot: audio params not marshalled, only values which are represented on the audio thread
<mdjp> cwilso: we don't have the ability to create an audio param on one node.
<mdjp> joe: audio worker param descriptor could be constructed
<mdjp> TAG: process of constructing a node causes a parameter to be allocated on the node?
<mdjp> TAG: creating audio param on a node is meaningless?
<mdjp> padenot: not been thought about, as they only appear on node creation
<mdjp> cwilso: the 1:1 relationship can be thought of as a read-only attribute on nodes
<mdjp> TAG: set up a restriction on the audio param constructor to be inert if not allocated to a node
<mdjp> cwilso: need to be careful that you don't have the expectation of being able to attach an audio param to a worker instance and marshal to the audio thread
<mdjp> cwilso: audio worker can change its param descriptor, changing how it looks in the main thread
<mdjp> cwilso: if audio param is constructable there is nothing useful you can do with it
<mdjp> TAG: this is fine - it's an open question which can be documented and answered in the future
<mdjp> cwilso: proposal - desugar all context-created nodes, adding constructors with context as initial parameter
<mdjp> related https://github.com/WebAudio/web-audio-api/issues/255
<mdjp> TAG: audio worker provides a path to resolve this issue - should not necessarily hold up v1
<mdjp> padenot: some nodes not defined and should be defined before providing the descriptions
<mdjp> cwilso: move to v2
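The "desugaring" proposal can be sketched with stand-in classes (FakeContext/FakeGainNode are our own illustration, not the real API): each factory method like createGain() becomes a thin wrapper over a constructor that takes the context as its first argument, so the two forms produce equivalent nodes.

```javascript
// Sketch of desugaring factory methods into context-first constructors.
class FakeNode {
  constructor(context, options = {}) {
    this.context = context; // every node is bound to its creating context
    this.options = options;
  }
}
class FakeGainNode extends FakeNode {}

class FakeContext {
  // the factory form desugars to the constructor form
  createGain(options) { return new FakeGainNode(this, options); }
}

const ctx = new FakeContext();
const viaFactory = ctx.createGain({ gain: 0.5 });
const viaCtor = new FakeGainNode(ctx, { gain: 0.5 });
console.log(viaFactory.context === viaCtor.context); // true — same binding
```

(Constructors of this shape were later standardized in Web Audio, but at the time of this meeting they were still a proposal.)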
<mdjp> https://github.com/WebAudio/web-audio-api/issues/251
<mdjp> cwilso: fine to subclass nodes but not able to do anything to their audio processing
<mdjp> cwilso: having a custom compositor node would be useful - e.g. create a chorus node
<mdjp> padenot: have you found problems with a connect method?
<mdjp> padenot: not found issues
<mdjp> cwilso: other challenge - forwarding of audio params - may not be achievable in V1
<mdjp> joe: may be ok
<mdjp> TAG: sounds relatively complete - very difficult to package a library of nodes and make them portable. Hard to come up with useful grouping.
<mdjp> cwilso: not tried this to see if it worked, and presumed it didn't - will try it
<mdjp> joe: are we doing anything to rule out a compositor node approach in the future? We could solve the problem and look into it later
<mdjp> RESOLUTION: Deferring for cwilso to verify behaviour; will close on verification if successful
<mdjp> https://github.com/WebAudio/web-audio-api/issues/257
<mdjp> cwilso: we should be able to define the HTML5 audio element in terms of Web Audio - we don't have lower level output access
<mdjp> padenot: we don't expose anything past the destination
<mdjp> padenot: important to address this; good use case for not using the Web Audio API - companies using script processor or scheduling audio buffer source nodes.
<mdjp> cwilso: what is a media stream - how do we get bits in and out?
<mdjp> joe: we are trying to reconcile uncertain areas with media capture and RTC
<mdjp> cwilso: what is the appetite to define I/O APIs and implement them?
<mdjp> TAG: this should not hold up shipping of the API - but committing to a lower level API or task force would also be valid
<mdjp> joe: how much responsibility will media capture take on with respect to abstractions of devices?
<mdjp> TAG: question for the TAG to take up with the group and something to come back to. Knowing more about media streams would be interesting
<mdjp> joe: we are meeting with some of the RTC group tomorrow to talk through a proposal for output devices.
<mdjp> BillHofmann: can we do useful things with the spec as it stands? (yes) - TAG: there are opportunities that we have yet to enable
<mdjp> RESOLUTION: we should describe this in more detail; should be a priority for the post-V1 charter as a task force or cross-working-group effort.
<mdjp> https://github.com/WebAudio/web-audio-api/issues/344
<mdjp> cwilso: should not add additional method (assumeCurrentValue); clarify spec to point out possible discontinuities; current value in the future will revert to the last scheduled point
<mdjp> https://github.com/WebAudio/web-audio-api/issues/436
<mdjp> cwilso: through accepting this proposal, does this mean we would look to make ABSNs reusable? This is a consistent request from users.
<mdjp> joe: should resolve 436 and then investigate the reusable ABSN question
<mdjp> some conversation happens in the bottom corner of the room
<mdjp> joe: proposed resolution - convolver, waveshaper and ABSN all output silence once buffer or curve is set to null. Node becomes inactive and can be GC'd; setting a new buffer or curve throws an error
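The proposed resolution can be modeled with a toy class (ToyBufferNode is our own sketch, not spec text): once the buffer is nulled the node is permanently silent and any later assignment throws.

```javascript
// Toy model of the proposed null-buffer lifecycle (illustration only).
class ToyBufferNode {
  constructor(buffer = null) {
    this._buffer = buffer;
    this._retired = false; // becomes true once buffer is set to null
  }
  set buffer(b) {
    if (this._retired) throw new Error("buffer can no longer be set");
    if (b === null) this._retired = true; // node goes inactive, eligible for GC
    this._buffer = b;
  }
  render() { return this._buffer === null ? "silence" : "audio"; }
}

const node = new ToyBufferNode([0.1, 0.2]);
console.log(node.render()); // "audio"
node.buffer = null;         // node now outputs silence permanently
console.log(node.render()); // "silence"

// Re-assigning after null throws, per the proposed resolution:
let rejected = false;
try { node.buffer = [0.3]; } catch (e) { rejected = true; }
console.log(rejected); // true
```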
<BillHofmann> Hello, everyone!
Hi, Bill!
<BillHofmann> https://github.com/WebAudio/web-audio-api/issues/535
<BillHofmann> The order in which Promises are returned, rejected, or resolved is ambiguous
<BillHofmann> joe: this is similar to questions jernoble raised about processing model
<BillHofmann> padenot: my branch has explicit description in async vs sync cases
<BillHofmann> joe: this issue goes away...
<BillHofmann> padenot: what's important is to describe how you queue the events back and when you resolve the process
<BillHofmann> rtoygm: not sure we have this done correctly
<BillHofmann> billhofmann: is the spec language specific enough?
<BillHofmann> joe: jernoble suggests that he's ok with language if explicitly indeterminate
<BillHofmann> rtoyg: note the worker spec isn't specific
<padenot> joe: https://github.com/WebAudio/web-audio-api/commit/6d93bdaa763e71379516ccb6d4d987993108fad9
<BillHofmann> padenot: can close this once padenot's commit referenced above is merged
<BillHofmann> padenot: #647 is the key one
<padenot> https://html.spec.whatwg.org/multipage/webappapis.html#processing-model-8
<BillHofmann> padenot: note this is the processing model in HTML ("what is a microtask"?)
<BillHofmann> Next up: WebMIDI
https://github.com/WebAudio/web-midi-api/labels/needs%20WG%20review
<BillHofmann> cwilso: three issues that need WG review
<BillHofmann> cwilso: Issue #110... determine if hardware supports general MIDI...
<BillHofmann> cwilso: lack of consistent support means that you'll get lots of false negatives
<BillHofmann> kawai: no devices expose this
<BillHofmann> cwilso: however, USBMidi specs it...
<BillHofmann> <discussion amongst Yamaha/AMEI team>
<BillHofmann> joe: James wants it
<BillHofmann> cwilso: not sure if it's worth supporting - could go either way
<BillHofmann> kawai: what about DAW systems?
<BillHofmann> cwilso: no, in that case you're going to set everything up by hand
<BillHofmann> joe: I don't think general midi is a very interesting thing to expose
<BillHofmann> cwilso: inclined to not support, or at least not support in v1
<BillHofmann> kawai: almost no devices support, so we don't think it's valuable
<BillHofmann> jdsmith: indicator isn't reliable, not sure how useful it is
<BillHofmann> cwilso: seems like you'd need to set it up anyway
<BillHofmann> cwilso: will leave in v2, see if it rises from the near-dead
<BillHofmann> https://github.com/WebAudio/web-midi-api/issues/150
<BillHofmann> cwilso: have this issue with software synths in general, need to ask permission for software devices
<BillHofmann> cwilso: takashi notes that we'd need to also prompt re BT devices
<BillHofmann> cwilso: recommendation is that midi options should have a software type, would rather not have separate BT types, even though that means we'd prompt for them.
<BillHofmann> takashi: no strong opinion, that's ok
<BillHofmann> cwilso: note BT MIDI devices just show up on Mac and Windows, but mobile platforms require request
<BillHofmann> <discussion about approach re software synths>
<BillHofmann> cwilso: should ignore BT problem, if we end up prompting everytime on Android, that's fine
<BillHofmann> cwilso: so, proposal: just flag software
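The flagging proposal can be sketched as a filter over the port list (the `software` option name and this helper are our own illustration of the discussion, not shipped Web MIDI API): software synths are held back unless the page opted in.

```javascript
// Sketch: pages see software synth ports only when they opt in via a
// hypothetical MIDIOptions "software" flag (illustration only).
function visiblePorts(allPorts, options = {}) {
  return allPorts.filter(p => !p.software || options.software === true);
}

const ports = [
  { name: "USB Keyboard", software: false },
  { name: "DLS Synth",    software: true  }, // a software synth
];

console.log(visiblePorts(ports).map(p => p.name));           // ["USB Keyboard"]
console.log(visiblePorts(ports, { software: true }).length); // 2
```

Per the discussion above, Bluetooth devices would get no separate flag; on platforms that require a per-device request they would simply trigger an extra prompt.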
<BillHofmann> https://github.com/WebAudio/web-midi-api/issues/148
<BillHofmann> cwilso: hard to consistently get correct information; propose punt.
<BillHofmann> cwilso: these were the only issues I wanted review on
https://github.com/WebAudio/web-midi-api/milestones/V1
<BillHofmann> cwilso: two vendors shipping webmidi-dependent products: Yamaha and Peavey
<BillHofmann> cwilso: Mozilla impl is in development...
<BillHofmann> padenot: it's working on OS/X, padenot has offered help on Win/Linux
<BillHofmann> mdjp: last item on the agenda!
<BillHofmann> joe: should perhaps cover Audio Output stuff
<joe> https://docs.google.com/document/d/1jRVexJ6yM6gJggOZjMFejXUApTByD5aoYRGzAJTsQzM/edit#
<BillHofmann> joe: existing audio output devices API takes either device IDs, magic strings, use them to set the Sink on various objects
<BillHofmann> joe: could use this to set the sink for either HTMLMediaElement or Web Audio to this
<BillHofmann> joe: however, there are some problems with this
<BillHofmann> joe: 1. enumerating devices doesn't really tell you anything of value, and
<BillHofmann> joe: 2. can't get access to characteristics of output devices and
<BillHofmann> joe: 3: they don't really have the info you need
<BillHofmann> joe: 4. sink ids also don't allow any control of output device (e.g. mute)
<BillHofmann> joe: proposal is to have a getUserMediaOutput that works similar to getUserMedia
<BillHofmann> billhofmann: also note use of constrainable pattern
<BillHofmann> padenot: wasn't there an API to select output devices for video?
<BillHofmann> joe: not aware on one...
<BillHofmann> cwilso: need to be able to know the native clock rates
<BillHofmann> joe: we had a comprehensive list of these items - just not in the spec
<BillHofmann> kawai: are multiple inputs supported?
<BillHofmann> padenot: yes, I think!?!
<BillHofmann> standing motion to adjourn
<mdjp> trackbot, end meeting
This is scribe.perl Revision: 1.140 of Date: 2014-11-06 18:16:30
Check for newer version at http://dev.w3.org/cvsweb/~checkout~/2002/scribe/
Guessing input format: RRSAgent_Text_Format (score 1.00)
Succeeded: s/developer/implementer/
Found ScribeNick: cwilso
Inferring Scribes: cwilso
Present: cwilso padenot mdjp joe jdsmith rtoyg_m BillHofmann
WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth
Found Date: 25 Oct 2015
Guessing minutes URL: http://www.w3.org/2015/10/25-audio-minutes.html
People with action items:
[End of scribe.perl diagnostic output]