W3C

- DRAFT -

Audio Working Group Teleconference

22 Sep 2016

See also: IRC log

Attendees

Present
Olivier Thereaux (observer)
Regrets
Chair
SV_MEETING_CHAIR
Scribe
joe

Contents


<hongchan> hello

<padenot> hi !

<hongchan> Hi Paul!

<hongchan> …and Joe.

<mdjp> Hi hongchan - do you want us to start the hangout so you can listen in?

<hongchan> I've sent an invite. Can anyone (with camera/mic) click the link?

<mdjp> I have, but it doesn't look like it wants to connect.

<hongchan> Oh.

<mdjp> "Requesting to join the video call...."

I'll try

<hongchan> I registered Joe's email address.

<mdjp> are you able to add mine (matthewparadis@gmail.com), as that's where the mic is connected?

<hongchan> Okay, just did.

<hongchan> Do we have an agenda today?

<hongchan> I can tune in from 9am-1pm and hope this works for you all.

<mdjp> Introductions

hongchan, could you hear the intros?

<mdjp> https://www.w3.org/2011/audio/wiki/F2F_Sep_2016

<hongchan> I heard it - but didn't get a chance to jump in. Heh.

<hongchan> (honestly, I can't hear matt's voice really well.)

I'll tell Matt, maybe he can switch mics

matt: <reviewing existing agenda>

hongchan, we moved the mic, let us know if this is still bad, better, whatever

<hongchan> Yeah, I think it's better.

matt: let's have spec editors give an overview of the current open questions with AudioWorklet

<hongchan> (hmm.. the audio is breaking up on my side, but I'll continue to tune in as much as possible.)

padenot: there are a number of PRs to look at. At the bottom of some PRs, there are notes that we have to look at issue X or Y, so there's a bit of work remaining. Basically in good shape
... this is probably the first Worklet being implemented so that will be an interesting test for the spec

rtoy: I agree with what Paul said. I think we're in good shape, but we need to figure out the lifetime question

<mdjp> Lifetime PR https://github.com/WebAudio/web-audio-api/pull/959

hongchan: I tried to respond to all the feedback from the TAG and believe I covered almost all of it. There are a few remaining critical issues
... lifetime issue is one of them. Also a couple more questions like ownership of objects like the global scope and relationship with AudioContext
... we also have some internal storage areas that are not exposed to users and these are unusual features requiring some discussion
... also the messaging system is somewhat different
... I was hoping Ian could drop by if possible

joe: any other feedback from the group?

padenot: the mapping relationship between global scope(s) and contexts

<ChrisL> https://github.com/WebAudio/web-audio-api/issues?utf8=%E2%9C%93&q=is%3Aopen%20lifetime

<ChrisL> It is fine for the mention of tail time to be informative, as the normative part is in the definition of each node type

joe: <summarizes existing proposals on node awareness of input connections and node lifetime>

padenot: I was thinking about the fact that returning false means you're done, but you can get called again and change the return value
... one thing that is important is that if a node finishes, you can trash it right then
... so that exactly at the node's finish, it becomes free
... there is no way, even if you have a reference to a stopped ABSN, to make it do anything

rtoy: I think if we return false from process(), it's all over: we just stop doing anything

<mdjp> joe: counter argument: other nodes die because the application sets the node to stop functioning at a point in time. The difference with the worklet is that it can internally reach a point where it has finished processing.

<mdjp> joe: are there other nodes that continue to live on because there is a reference to them? E.g. delay: inputs can be disconnected then reconnected.

<mdjp> micbuffa: the same problem as a JS worker. You cannot be sure that it can be gc'd even with no references. A terminate/close method can be called. How can you decide that a worker with no references, but which is preparing something, should die?

<mdjp> joe the question is

<mdjp> never to be asked (scribe error)

<mdjp> joe: perhaps there needs to be a way to know whether inputs are connected AND references exist. The processor would then be aware of whether it is still relevant.

<mdjp> padenot: if the number of inputs is 0 and you drop all references, then you know it cannot be reused.

<mdjp> Chris: why doesn't WA use the same GC rules? padenot: a node can have no reference but it has to be able to make a sound, e.g. an oscillator scheduled in the future. It holds a reference which is dropped when finished.

<ChrisL> https://developer.mozilla.org/en-US/docs/Glossary/Truthy

https://rawgit.com/WebAudio/web-audio-api/96fb6d4746fb69a77ef229bfe894c5d401c3395d/index.html

That's the proposal we're looking at

<mdjp> joe with an internal reference approach we do not need to refer to the input array?

https://github.com/WebAudio/web-audio-api/pull/959/files

<mdjp> hongchan: the text suggests that we are only checking the active reference, not connected inputs. joe: that is down to the implementation. An empty array is a good way to communicate this, as it does not break any iteration in the processor.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/475#issuecomment-245960736

<mdjp> hongchan: I think returning true/false on every quantum is wasteful; it might be worth separating this out as a separate function.

<mdjp> hongchan: this is the same thing but a different pattern

<mdjp> joe: language might be "the UA is free to consult the keepAlive method to determine the state of the node." We would not specify the frequency of this check.

<mdjp> joe: one more reason to decide: if process() is documented as having a return value, it forces developers to think about it.

<mdjp> hongchan: I'd like to note that other types of worklet have a similar issue to determine lifetime and the keepalive pattern is being discussed right now.

<mdjp> hongchan will continue to communicate with iank_ around this.

<mdjp> joe: we will go with the return-boolean idea; it's possible that this might change based on other groups implementing worklets, but they are essentially equivalent.

<mdjp> joe: I think we have an answer to the lifetime question...

<ChrisL> 42
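
For reference, a minimal sketch of the return-boolean idea as just resolved (the AudioWorkletProcessor base class and exact registerProcessor signature are assumptions here; the spec text was still in flux at this point):

registerProcessor('one-shot-noise', class extends AudioWorkletProcessor {
  constructor() {
    super();
    this._framesLeft = 44100; // hypothetical: roughly one second at 44.1 kHz
  }
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    for (const channel of output) {
      for (let i = 0; i < channel.length; ++i) {
        channel[i] = Math.random() * 2 - 1; // noise burst
      }
    }
    this._framesLeft -= 128; // one render quantum
    // Returning true keeps the processor alive; returning false signals that
    // it has finished and the UA is free to reclaim it, per the decision above.
    return this._framesLeft > 0;
  }
});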

padenot: can Worklets communicate with each other?
... I'd say not. but we could do more than JS allows at present. for instance share read-only values between nodes of the same type
... not sure if the read-only array slices proposal in ECMAScript has gone anywhere
... if shared memory is achievable then I'd argue against inter-Worklet communication
... (i.e. no shared global scope)

<mdjp> joe: registerProcessor is a method on AWGS. hongchan: nothing stops developers from registering multiple processors in the worklet scope, so anything declared there can be shared.

<ChrisL> [iank joins]

<mdjp> joe: if you are calling registerProcessor in the global scope, that exposes it. How do you avoid the fact that it can refer to other variables declared there?

<hongchan> (can someone turn the camera on?)

<hongchan> Thanks! I saw it.

Here's my example:

main scope:

import("foo.js");

foo.js:

var x = [];
registerProcessor("foo", class {
  bar() {
    x[0] = ...; // mutation is visible to other Processor instances
  }
});

<hongchan> Yes. I understood and I believe we should support that.

<hongchan> https://webaudio.github.io/web-audio-api/#AudioWorklet

<mdjp> joe: if there are multiple AWGS then there will be one invocation of a registerProcessor call per AudioContext. Imports get run against every AWGS, so if a new context is created, imports have to be rerun. This would have to be a requirement.

<mdjp> padenot: this is fine from an implementation standpoint.

<mdjp> joe: so we would tie the global scope to the AudioContext. When you run import, the UA will ensure that the import will be run against all existing GS and all future GS. What does that mean for the promise that gets returned from import?

<hongchan> (+1 to what Paul said)

<mdjp> padenot: whenever a script is imported it can be stored as a resource. Current and future audiocontexts can access this resource.

<hongchan> yes,

<mdjp> joe: summary - one audio worklet GS per context. When a script is imported it is applied to all existing GS (promise resolves). Future audio contexts have access to the same imported scripts. Sound good?

<mdjp> yes
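
A hedged sketch of the import semantics just agreed; "import" is the working name used in the discussion rather than a finalized method, and the audioWorklet attribute is an assumption:

const ctxA = new AudioContext();
// The promise resolves once the script has run against every existing
// AudioWorkletGlobalScope.
ctxA.audioWorklet.import('processors.js').then(() => {
  const node = new AudioWorkletNode(ctxA, 'foo');
});
// Per the summary, a context created later sees the same imported scripts;
// the UA re-runs them against the new global scope automatically.
const ctxB = new AudioContext();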

<mdjp> joe: what would we change about the processing model for the audio worklet node?

<mdjp> hongchan: currently, from the implementation perspective, Chromium has 2 layers: audio node and audio handler (audio worklet processor). Now that the processor is exposed in the worklet, the spec might need to describe how node and processor are called. Does this make sense, spec editors?

<mdjp> are we missing a normative description of how AW nodes operate?

<mdjp> rtoyg_m: worklet should work like an audio node so processing model should look the same.

<mdjp> hongchan: fine with that

<mdjp> padenot: need to try and determine if we need to do something.

<mdjp> ChrisL: leave this in the spec - mark it as at risk, and it can be dropped etc. if required.

postMessage

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/951

<mdjp> hongchan: we need a simple async messaging mechanism between AW node & AW processor.

Ian's example:

registerThing('blah', class {

@expose method1() { .. }

});

<hongchan> Oh, I am familiar with this. You can put down the camera, Matt!

<hongchan> Thanks!

in the document...

var obj = ...;

obj.method1(/*structured clonable stuff*/);

<mdjp> iank_: registerProcessor approach for communication into the processor. padenot: seems good. joe: there could be name conflicts but they could be handled. Doesn't feel worse than parameter descriptors.

<mdjp> padenot: does not see this as very complicated. iank_: the only thing to nail down is timing. padenot: that's fine, we had that problem before.
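
Alongside Ian's registerProcessor-based approach above, a hedged sketch of the "simple async messaging mechanism" from #951, modeled on the MessagePort pattern used by Workers; nothing here was decided at this meeting and the port attribute is an assumption:

// In the worklet global scope:
registerProcessor('messaging-demo', class extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = (event) => {
      this._gain = event.data.gain; // structured-clonable data from the node
    };
  }
  process(inputs, outputs, parameters) {
    return true;
  }
});

// In the main document:
const node = new AudioWorkletNode(context, 'messaging-demo');
node.port.postMessage({ gain: 0.5 });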

<padenot> https://gist.github.com/dherman/5401735/revisions

<padenot> https://gist.github.com/dherman/5401735 sorry

Ian's example of how to expose properties neatly without window-scope wrappers on AudioWorkletNode:

registerProcessor('node', class {

static propertyDescriptors = ['propName'];

onPropertyChange(name, value) { ... }

});

<hongchan> Got it. Thanks!

<mdjp> Thank you hongchan for putting in the late night to join us! Catch you tomorrow! Sleep well.

<hongchan> My pleasure! Thanks for setting this up Matt!

<mdjp> https://github.com/WebAudio/web-audio-api/issues/942

<rtoyg_m> Needs WG Review: https://github.com/WebAudio/web-audio-api/issues?q=is%3Aissue+is%3Aopen+label%3A%22Needs+WG+review%22

rtoy: diagram suggests general case of M/K/N but this is not really supported
... also copied stereo case is not fully supported apparently
... this is a breaking change in that we would remove the copied-to-stereo behavior

joe: but the spec never guaranteed that one would get that
... there's still a problem in using the impulse-response in stereo-input mono-impulse mode, where one would expect each input channel to be separately convolved

<mdjp> https://github.com/WebAudio/web-audio-api/issues/975

rtoy: (relative to #942) let me think about it overnight
... it would be much simpler to adopt an approach where input channels are either used or discarded to match the number of output channels

matt: we're adopting recommendation #3 from the issue

<mdjp> https://github.com/WebAudio/web-audio-api/issues/973

rtoy: we need to organize the order of node documentation

matt: simplest is best. most obvious thing is that if everything is in alpha order, you'll know where to find stuff
... suggestion is AudioContext first, then everything else in alpha order

<mdjp> https://github.com/WebAudio/web-audio-api/issues/919

rtoy: Paul's solution LGTM

<mdjp> https://github.com/WebAudio/web-audio-api/issues/908

matt: since we need new better-specified compressors, why don't we spec DynamicsCompressor w.r.t. the existing implementations rather than changing it up

cwilso: it seems like polishing something mysterious is not good for the spec

<rtoyg_m> https://www.chromestatus.com/metrics/feature/timeline/popularity/638

joe: do we reverse engineer?

cwilso: we'd need to see how weird the implementation actually is

matt: I think we don't change the approach to channel layouts, but add or reference an issue about determining the nature of the implementation

joe: is there only one implementation in the wild?

rtoy: yes

<mdjp> https://github.com/WebAudio/web-audio-api/issues/850

matt: we will allow a time constant of zero to effect instantaneous change as agreed by spec editors already
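
As a usage note on the agreed behavior (setTargetAtTime is the existing API; only the zero-time-constant case changes):

gainNode.gain.setTargetAtTime(0.0, context.currentTime, 0);   // jumps to 0 immediately
gainNode.gain.setTargetAtTime(0.0, context.currentTime, 0.1); // approaches 0 exponentially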

<mdjp> https://github.com/WebAudio/web-audio-api/issues/787

matt: I've had problems with this also
... we'll wait for Chris L to explain this to us

<mdjp> https://github.com/WebAudio/web-audio-api/issues/367

<mdjp> https://github.com/WebAudio/web-audio-api/pull/902

rtoy: questions are: 1) factory method? 2) name of AudioParam, 3) default value == 1?

<mdjp> https://github.com/WebAudio/web-audio-api/issues/344

<rtoyg_m> Proposed algorithm: https://github.com/WebAudio/web-audio-api/issues/344#issuecomment-242499578

joe: need to make the explanation of rewritten setValueCurve() more explicit
... let's get rid of "It is undefined if new events..." by legislating what state the automation curve gets left in
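
A small illustration of the point; treating the curve end as an implicit setValueAtTime is one possible resolution, assumed here for the sake of the example rather than minuted:

const curve = new Float32Array([0, 0.5, 1, 0.5, 0]);
param.setValueCurveAtTime(curve, context.currentTime, 2.0);
// Open question: what value does param hold after currentTime + 2.0, and how
// do later automation events interact with it? One way to remove the
// "undefined" language is to treat the curve end as an implicit
// setValueAtTime(curve[curve.length - 1], startTime + duration).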

<mdjp> https://github.com/WebAudio/web-audio-api/issues/246

cwilson: sidechain compression won't change the algorithm of DynamicsCompressor; it just needs a distinct input for the control signal that drives the gain reduction of the pass-through signal

matt: any breaking effects?

cwilson: no, with no sidechain input, node behaves the same

matt: I think we will add this and mark as Ready For Editing For Real

cwilson: I am volunteering to spec this
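
A purely hypothetical sketch of what the proposal could look like from the author's side; DynamicsCompressorNode has no second input today, so the control-signal input below is invented for illustration:

const compressor = context.createDynamicsCompressor();
musicSource.connect(compressor, 0, 0);  // pass-through signal being compressed
voiceSource.connect(compressor, 0, 1);  // hypothetical sidechain input driving gain reduction
compressor.connect(context.destination);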

<mdjp> https://github.com/WebAudio/web-audio-api/issues/13

<ChrisL> https://github.com/WebAudio/web-audio-api/issues?utf8=%E2%9C%93&q=%5BTAG%5D

https://github.com/WebAudio/web-audio-api/issues/950

joe: <At this point we reviewed a bunch of issues marked for TAG review with Alex Russell, who is satisfied with the resolution of all of them>

query link was: https://github.com/WebAudio/web-audio-api/issues?utf8=%E2%9C%93&q=is%3Aissue%20%5Btag%5D

NoiseGate

<ChrisL> https://github.com/WebAudio/web-audio-api/commit/77323cf07a408fb490efaad3d738b19ca3fe2ee3

cwilson: the thresholds are evaluated per-sample (a-rate)

rtoy: can all of these parameters be automated since they are AudioParams?

cwilson: typically we wouldn't automate them, they could be attributes

joe: need to determine whether ramping is linear or exponential

<ChrisL> https://books.google.pt/books?id=pVIdAAAAQBAJ&pg=PA506&lpg=PA506&dq=noise+gate+gain+linear&source=bl&ots=9ALeQ48RV4&sig=Ocn3Ki7F-t_YaA8ylskmRWkxAK4&hl=en&sa=X&ved=0ahUKEwiq67-bk6PPAhWC7xQKHR_CCZgQ6AEINDAF#v=onepage&q=noise%20gate%20gain%20linear&f=false

<ChrisL> THAT design guide on noise gates http://www.thatcorp.com/datashts/dn100.pdf

matt: next up are V1 issues that are *not* marked as Ready for Editing

<mdjp> https://github.com/WebAudio/web-audio-api/issues/839

<mdjp> https://github.com/WebAudio/web-audio-api/issues/475

<mdjp> https://github.com/WebAudio/web-audio-api/issues/445

https://github.com/WebAudio/web-audio-api/issues/986

rtoy: we're going to document the constructors as being equivalent to factory methods followed by sequential calls to setters of the various attributes supplied in the options parameter
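
Roughly, the documented equivalence would read like this (GainNode is only an example here; AudioParam-valued options end up being applied via the param's value):

// Constructor with options...
const gain1 = new GainNode(context, { gain: 0.25 });
// ...is specified to behave like the factory method followed by setting each
// supplied attribute in turn:
const gain2 = context.createGain();
gain2.gain.value = 0.25;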

https://github.com/WebAudio/web-audio-api/issues/818

https://github.com/WebAudio/web-audio-api/issues/822

https://github.com/WebAudio/web-audio-api/issues/968

https://github.com/WebAudio/web-audio-api/issues/833

https://github.com/WebAudio/web-audio-api/issues/929

https://github.com/WebAudio/web-audio-api/issues/937

https://github.com/WebAudio/web-audio-api/issues/938

https://github.com/WebAudio/web-audio-api/issues/944

https://github.com/WebAudio/web-audio-api/issues/960

https://github.com/WebAudio/web-audio-api/issues/969

https://github.com/WebAudio/web-audio-api/issues/970

https://github.com/WebAudio/web-audio-api/issues/977

https://github.com/WebAudio/web-audio-api/issues/981

https://github.com/WebAudio/web-audio-api/issues/984

https://github.com/WebAudio/web-audio-api/issues/987

joe: moving on to look at new feature requests that are marked as Ready for Editing -- these should be carefully examined

https://github.com/WebAudio/web-audio-api/issues/264

fix in progress on #264

https://github.com/WebAudio/web-audio-api/issues/132

<ChrisL> (adjourned)

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/09/22 16:19:39 $
