W3C

- DRAFT -

Audio Working Group Teleconference

07 Apr 2016

See also: IRC log

Attendees

Present
jdsmith, hongchan, rtoyg_m, BillHofmann, BillHofm_, padenot, ChrisL
Regrets
Chair
mdjp
Scribe
BillHofmann, jdsmith, hongchan, rtoyg_m

Contents


<mdjp> trackbot, start meeting

<BillHofmann> ScribeNick: BillHofmann

<scribe> Meeting: Web Audio F2F

Goals for F2F

mdjp: agenda reviewed

Audio Worker

<rtoyg_m> https://github.com/WebAudio/web-audio-api/issues/776

<mdjp> https://github.com/WebAudio/web-audio-api/issues/776

<mdjp> https://github.com/WebAudio/web-audio-api/issues/777

<mdjp> https://github.com/WebAudio/web-audio-api/issues/778

<mdjp> https://github.com/WebAudio/web-audio-api/issues/779

hoch: first example is how to import a script - two potential patterns (issue 776)
... either under window or under audiocontext - padenot liked audiocontext

padenot: not sure anymore - may make sense to use first item for consistency

hoch: advantage of audiocontext is binding to context-specific data (e.g., sampleRate)
... however, AudioWorkletNode has access to the context

padenot: exposing context should address this

rtoyg_m: other items besides samplerate - latency will be added

padenot: as well, currentTime, but you can get that by hand.

hoch: could/should be a param on process method

BillHofmann: what about (e.g.) an object renderer that needs number of channels from destination node?

padenot: plausibly something you can configure on load, or with postMessage
... note that other current worklet examples are 1:1 with a target object (window); ours could be (or are) specific to an AudioContext

BillHofmann: what's the disadvantage of loading once (on window)?

padenot: issue is name collision

BillHofmann: would always need to use URL, anyway

hoch: other issue with singleton is with multiple contexts, with context-specific data

padenot: could do init on instantiation...

hoch: what about shared expensive resources (e.g. HRTF)?

padenot: could share immutable data via postMessage

<hongchan> https://github.com/WebAudio/web-audio-api/issues/778#issuecomment-204138788

hongchan: sharing assets should be done manually

Consensus: window-scoped loading is preferred
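
A rough sketch of the two script-import patterns discussed in issue 776; the method names (audioWorklet.addModule) and promise shape are illustrative only, since the exact surface was still open at this meeting.

  // Pattern A: window-scoped import (the preference recorded above);
  // the script is loaded once per page and shared across AudioContexts.
  window.audioWorklet.addModule('noise-processor.js').then(() => {
    const ctx = new AudioContext();
    // ...construct the corresponding AudioWorkletNode on ctx here
  });

  // Pattern B: context-scoped import; each AudioContext loads its own copy
  // and so has direct access to context data such as sampleRate.
  const ctx2 = new AudioContext();
  ctx2.audioWorklet.addModule('noise-processor.js').then(() => {
    // ...construct the node on ctx2 here
  });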

<hongchan> https://github.com/WebAudio/web-audio-api/issues/777

hongchan: about how you instantiate worklet
... prefer both new and factory

padenot: people expect a "create" method
... should do both
... (because is consistent with other elements in API)

all: discussion of details of new/create parameter - structure for AudioParams, inputs, outputs, ...

hongchan: details in issue - need to revise for exact WebIDL syntax, but...

padenot: looks good
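
A sketch of the two instantiation styles agreed here for issue 777 (constructor plus factory method). The option names and the factory method name are illustrative; the exact WebIDL was still to be revised.

  const ctx = new AudioContext();

  // Constructor form:
  const node1 = new AudioWorkletNode(ctx, 'noise-processor', {
    numberOfInputs: 1,
    numberOfOutputs: 1,
  });

  // Factory form, for consistency with the rest of the API
  // (hypothetical method name):
  const node2 = ctx.createAudioWorkletNode('noise-processor', {
    numberOfInputs: 1,
    numberOfOutputs: 1,
  });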

https://github.com/WebAudio/web-audio-api/issues/778

hongchan: How to initialize - this is difficult

all: discussion of details in issue

hongchan: perhaps should set aside, and take up later

padenot: proposal to allow constructing a local AudioParam with new

hongchan: currently there is no constructor for AudioParam, and people will want to know why they can't use this outside

<hongchan> https://github.com/WebAudio/web-audio-api/issues/779

hongchan: messaging for AudioWorklet
... seems easy - postMessage/onmessage

padenot: order is postMessage/onmessage, then process

hongchan: note they're not necessarily synced

padenot: propose handling events first

rtoyg_m: actually, doesn't matter - implementation detail, really.

mdjp: would you have an issue about per-implementation differences?

rtoyg_m: no real way you can tell

hongchan: might want specific information in the event
... e.g. timestamp info

mdjp: what would you want to know that for?

padenot: maybe to determine delay?

hongchan: might want to know currentTime of when event was created

mdjp: will take up at next call.
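
A minimal sketch of the messaging shape discussed for issue 779. Whether postMessage/onmessage live directly on the node and processor (as shown) or on a separate port object was not settled here, so treat the surface as hypothetical.

  // Main-thread side (hypothetical surface):
  const node = new AudioWorkletNode(ctx, 'noise-processor');
  node.onmessage = (event) => {
    console.log('from processor:', event.data);
  };
  node.postMessage({ gain: 0.5 });

  // Processor side (inside the worklet script): an onmessage handler
  // receives the data; note that, as discussed, message delivery and
  // process() calls are not necessarily synchronized.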

Issue resolution session 1

<BillHofm_> ScribeNick: jdsmith

<mdjp> https://github.com/WebAudio/web-audio-api/issues/348

rtoyg_m: foolip raised a question on what "balanced" means, which was meant to be in between "interactive" (short latency) and "playback" (long latency)
... We are still using these three latency categories now.

mdjp: If you select "interactive" you get the shortest latency? Yes.

padenot: You'd presumably select that for WebRTC use.

rtoyg_m: That breaks WebRTC on phone, however...
... foolip also suggests using "latencyHint" for the numeric latency hint. I like "baseLatency" to express it's the core latency of the input-output connection, without latency added by other nodes.
... "Balanced" is needed for WebRTC, and would represent 10ms latency. There is some buffering (vs. "Interactive" which is spec'd as the lowest possible latency), but it's still close to real time.

mdjp: Change "processingLatency" in issue 348 to "baseLatency" to clarify it's meaning. We also accept the revision to add the number value.
... Marking ready for edting.
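
A sketch of the direction resolved above for issue 348, with the renamed attribute; illustrative of the proposal rather than final IDL.

  // latencyHint accepts a category or a numeric hint in seconds.
  const interactiveCtx = new AudioContext({ latencyHint: 'interactive' });
  const relaxedCtx = new AudioContext({ latencyHint: 0.1 });

  // baseLatency reports the core input-to-output latency of the context,
  // excluding latency added by individual nodes.
  console.log(interactiveCtx.baseLatency);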

https://github.com/WebAudio/web-audio-api/issues/780

mdjp: Agreed. Ready for editing.

https://github.com/WebAudio/web-audio-api/issues/771

rtoyg_m: Issue is that users can design a lowpass filter that we cannot represent.
... Example in issue shows the difference. You can use cookbook to implement the filter.
... Compare the results. Differences are audible.

<rtoyg_m> http://rtoy.github.io/webaudio-hacks/more/biquad/biquad-lowpass-q.html?usedB=true

mdjp: Decision: Use audio cookbook, but use Q in dB.
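
For reference, a sketch of the Audio EQ Cookbook lowpass coefficients with Q interpreted in dB, per the decision above (the dB value is converted to a linear Q before computing alpha).

  // RBJ cookbook lowpass, with Q given in dB as decided above.
  function lowpassCoefficients(sampleRate, frequency, qDb) {
    const q = Math.pow(10, qDb / 20); // dB -> linear Q
    const w0 = (2 * Math.PI * frequency) / sampleRate;
    const alpha = Math.sin(w0) / (2 * q);
    const cosw0 = Math.cos(w0);
    const b0 = (1 - cosw0) / 2;
    const b1 = 1 - cosw0;
    const b2 = (1 - cosw0) / 2;
    const a0 = 1 + alpha;
    const a1 = -2 * cosw0;
    const a2 = 1 - alpha;
    // Normalize so the leading denominator coefficient is 1.
    return { b: [b0 / a0, b1 / a0, b2 / a0], a: [1, a1 / a0, a2 / a0] };
  }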

https://github.com/WebAudio/web-audio-api/issues/769

rtoyg_m: Should be closed as invalid.

https://github.com/WebAudio/web-audio-api/issues/768

mdjp: Marking V1.

https://github.com/WebAudio/web-audio-api/issues/767

mdjp: Marking V1.

https://github.com/WebAudio/web-audio-api/issues/766

BillHofm: Automations you do yourself would likely have the same behaviors.

mdjp: Marking V1. All V1 items above (3 total) are ready for editing.

https://github.com/WebAudio/web-audio-api/issues/95

rtoyg_m: Playback rate is -100 to +100, but we've never described negative playback behavior.
... cwilso has proposed a behavior in the absence of looping and when looping is present that seems complete.

mdjp: Issue was marked ready for editing 1 year ago.

padenot: Behavior should be symmetrical whether forward or backward.
... Resolution: spec negative playbackRate as having behaviour exactly mirrored from the positive playbackRate behaviour.

<cwilso> I am in the office, BTW and can jump on the phone if it would help.

<mdjp> cwilso: things are going ok but you are always welcome to dial in. See my email for Skype details.

https://github.com/WebAudio/web-audio-api/issues/762

rtoyg_m: Requests specifying min/max values for all AudioParams.
... The only ones agreed upon are playbackRate and detune.

mdjp: PlaybackRate will be -100 to +100, detune will be -inf to +inf. Others noted in issue.
... Marked ready for editing.

https://github.com/WebAudio/web-audio-api/issues/760

rtoyg_m: Want to change the range for biquad gain. Currently -40 to +40. Suggest -inf to +inf (leaves open to implementation).

https://github.com/WebAudio/web-audio-api/issues/759

<ghaudiobot> [web-audio-api] padenot closed pull request #773: 759 biquad gain is db (gh-pages...759-biquad-gain-is-db) https://github.com/WebAudio/web-audio-api/pull/773

<ghaudiobot> [web-audio-api] padenot pushed 3 new commits to gh-pages: https://github.com/WebAudio/web-audio-api/compare/ab068b55b1f0...f633b27f68af

<ghaudiobot> web-audio-api/gh-pages 0855669 Raymond Toy: Fix #759: biquad gain is in dB...

<ghaudiobot> web-audio-api/gh-pages 60025a5 Raymond Toy: Tidy

<ghaudiobot> web-audio-api/gh-pages f633b27 Paul Adenot: Merge pull request #773 from rtoy/759-biquad-gain-is-db...

rtoyg_m: Editorial. Change pushed already.

padenot: Closed.

https://github.com/WebAudio/web-audio-api/issues/757

padenot: Should just return a TypeError.

mdjp: If the array buffer has been neutered then the promise is rejected with a TypeError. Marked "ready for editing".

https://github.com/WebAudio/web-audio-api/issues/749

"Merge SpatialPannerNode back into PannerNode?"

rtoyg_m: Agree with making this change. Probably need to deprecate the old API, but keep for compat.

jdsmith: Other specs leave deprecated APIs in, but highlight the preferred in text.

BillHofm: My suggestion as well.

padenot: Change is submitted by rtoy. Looks good, ready to merge.
... Resolution: merge the SpatialPannerNode back into the PannerNode.

https://github.com/WebAudio/web-audio-api/issues/739

"loadHRTFDatabase for SpatialPanner" using promise?

padenot: Currently we do a lazy load.

rtoyg_m: Chrome loads fast, possibly shouldn't.

jdsmith: What happens on spatialPanner if it's not loaded?

padenot: We output silence.

rtoyg_m: Same.

mdjp: Resolution: move to V.next and consider alongside requests for custom HRTFs.

https://github.com/WebAudio/web-audio-api/issues/729

"Multiple calls to getFloatFrequencyData"

Will resume on this after lunch.

<hongchan> ScribeNick: hongchan

https://github.com/WebAudio/web-audio-api/issues/729

mdjp: so we're looking at the difference in implementation.

hoch: is this notion of a stable state part of the web platform?

padenot: yes, HTML.
... should return the same data.

rtoyg_m: we should fix the implementation to follow the expectation.

F2F Resolution: getFrequencyData() should return the same value for multiple calls for the same currentTime.
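
An illustrative consequence of this resolution, using the current AnalyserNode API:

  const analyser = ctx.createAnalyser();
  const first = new Float32Array(analyser.frequencyBinCount);
  const second = new Float32Array(analyser.frequencyBinCount);

  // Two calls made within the same stable state (same currentTime)
  // must fill the arrays with identical data.
  analyser.getFloatFrequencyData(first);
  analyser.getFloatFrequencyData(second);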

https://github.com/WebAudio/web-audio-api/issues/703

rtoyg_m: most of them are obvious.

BillHofm_: what is our policy on float vs double?

rtoyg_m: generally double for time, float for everything else.
... let's start with 696.

https://github.com/WebAudio/web-audio-api/issues/696

rtoyg_m: the pattern will be new FooNode(context, options), with options as a property bag.

padenot: we should go by the type name. (e.g. new GainNode)

rtoyg_m: we keep the old factory pattern intact, so no property-bag init options for them.

ChrisL: so if you pass in these values (channel count, etc.) does it change dynamically?

padenot, rtoyg_m: yes.

mdjp: it would be weird to exclude them from the options.

https://github.com/WebAudio/web-audio-api/issues/697

F2F Resolution: Yes, we will include these properties in the property bag.
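
A sketch of the constructor-plus-property-bag pattern being agreed on in issues 696/697, using GainNode as an example; option names follow the direction of the discussion and may not match the final IDL.

  const ctx = new AudioContext();

  // New constructor form, with node options in a property bag
  // (including the channel properties, per the resolution above):
  const gain = new GainNode(ctx, {
    gain: 0.5,
    channelCount: 2,
    channelCountMode: 'explicit',
  });

  // The existing factory form is kept as-is, without an options bag:
  const gain2 = ctx.createGain();
  gain2.gain.value = 0.5;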

https://github.com/WebAudio/web-audio-api/issues/698

F2F resolution: No longer deprecated. Constructor and property bag required for all parameters.

https://github.com/WebAudio/web-audio-api/issues/699

ChrisL: ScriptProcessor is a great example - we should not encourage people to use this anymore. We keep it, but we don't want the new constructor.

<ChrisL> no-one is relying on a script processor node constructor, so no web compat issues; do not add a constructor

https://github.com/WebAudio/web-audio-api/issues/700

hoch: this is pretty similar to what we discussed previously. (AudioNodeDescriptor in AudioWorkletNode)

F2F resolution: numberOfIn/Outputs should be in the property bag.

rtoyg_m: having to specify all the options in the dictionary is cumbersome.

padenot: a dictionary can be inherited, so we can extend on that.

(WebIDL allows it)

BillHofm_: so what happens if an arbitrary property gets passed in?

padenot, rtoyg_m: it gets ignored.

BillHofm_: for AudioWorkletNode, when and where does this initialization happen? Who's responsible for it?

hoch: that needs to be done in the constructor, the developer of the node is responsible.

https://github.com/WebAudio/web-audio-api/issues/702

rtoyg_m: I think the example looks fine.

F2F resolution: Agreed

https://github.com/WebAudio/web-audio-api/issues/703

ChrisL: can we make the imag part optional? In most cases they are zero.

F2F: Agreed, with the caveat that if real or imag is undefined it will default to an array of 0s.
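
A sketch of the agreed behaviour for issue 703: an omitted real or imag member defaults to an array of zeros (the constructor/options shape here is illustrative).

  // Only the imag (sine) terms are given; the real terms default to zeros.
  const wave = new PeriodicWave(ctx, {
    imag: [0, 1, 0.5, 0.25],
  });

  const osc = new OscillatorNode(ctx);
  osc.setPeriodicWave(wave);
  osc.connect(ctx.destination);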

https://github.com/WebAudio/web-audio-api/issues/671

rtoyg_m: He is requesting power-law automation. Reasonable.

ChrisL: yes.

mdjp: so is this v1 or v2?

F2F: this should be v2.

https://github.com/WebAudio/web-audio-api/issues/740

hoch: this is irrelevant now.

ChrisL: just close it.

https://github.com/WebAudio/web-audio-api/issues/737

padenot: we have the processing model specced, but I will add a clear definition in there.

F2F: Update to define as 1 block

https://github.com/WebAudio/web-audio-api/issues/730

<rtoyg_m> https://www.chromestatus.com/metrics/feature/timeline/popularity/1251

rtoyg_m: according to this metric, we can remove this safely.

F2F: No web compat issue; setVelocity will be removed.

https://github.com/WebAudio/web-audio-api/issues/652

mdjp: related to this we can quickly review this issue too.

https://github.com/WebAudio/web-audio-api/pull/665

https://github.com/WebAudio/web-audio-api/issues/606

rtoyg_m: negative rolloff does not make sense.

mdjp: any idea on this?

rtoyg_m: what happens if you swap min and max? garbage in garbage out?

ChrisL: yes I think so.

https://github.com/WebAudio/web-audio-api/issues/251

<mdjp> cwilso - are you available for comment on https://github.com/WebAudio/web-audio-api/issues/251

mdjp: we can come back to that.

https://github.com/WebAudio/web-audio-api/issues/12

(everyone reading the thread…)

padenot: implementation-wise, we can use the latency information from the different platform libraries.

mdjp: what is the next step?

<rtoyg_m> https://github.com/pozdnyakov/web-audio-api/commit/a20fe47f0bf084db909a9960fa1d13d803b7f112

padenot: I want to clear up the confusion between dynamic/static latency in the PR.

rtoyg_m: what happens if you plug in a different audio device with a different latency?

padenot: the latency value is static, but it can be changed when the system changes.
... notification might be possible from AudioOutputDeviceAPI, not us.

mdjp: Paul, can you take over?

padenot: yes.
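
A sketch of how the latency information discussed for issue 12 might surface on the context; attribute names follow the direction of the linked commit and are not final.

  const ctx = new AudioContext();

  // Static latency of the processing graph itself:
  console.log(ctx.baseLatency);

  // Latency between the graph output and the audio hardware; this value
  // can change when the system's output device changes.
  console.log(ctx.outputLatency);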

https://github.com/WebAudio/web-audio-api/issues/251

hoch: the problem here is that we can't subclass or create a custom node from a subgraph without hacky overriding of the connect() method.

padenot: yeah, overriding connect() can cause some collision between libraries.
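
For context, a sketch of the kind of connect() override being described; this is purely illustrative of the hack and the collision risk, not a recommended pattern.

  // A library patches AudioNode.prototype.connect so that plain nodes can
  // connect to a composite "node" that wraps a subgraph.
  const originalConnect = AudioNode.prototype.connect;
  AudioNode.prototype.connect = function (destination, ...rest) {
    // If the destination is a wrapper, route to its internal input node.
    const target = destination && destination.input ? destination.input : destination;
    return originalConnect.call(this, target, ...rest);
  };
  // Two libraries patching connect() independently can easily collide.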

mdjp: let's connect with cwilso@ for more progress on this.
... take 20!

<mdjp> 25;-)

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/783

ChrisL: this is v.next; but this should be a separate node.

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/784

mdjp: 2) exposing latency in AudioWorkletNode

padenot: note that this is the latency of individual node, not the tail time.

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/785

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/786

mdjp: we're not going to implement Doppler effect.

<rtoyg_m> scribenick: rtoyg_m

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/787

BillHofm_: Streaming decodeAudioData is very useful, but we need to define use cases. It's definitely v.next.

<ChrisL> https://github.com/WebAudio/web-midi-api/issues/161

mdjp: Analyzer node issue, basically FFT processing

<ChrisL> https://github.com/WebAudio/web-audio-api/issues/788

<hongchan> http://chuck.cs.princeton.edu/doc/language/uana.html

<hongchan> (I'll just put it here for the reference, ChucK with UAna)

hongchan: ChucK can do FFT processing.

padenot: A current Web Audio implementation of this would be difficult. The time-domain processing is an important part.

mdjp: Webmidi for Firefox and Edge.
... Scanning over v2 issues

#496: encodeAudioData.

padenot: Discussed in video working group, but it's odd since video is tied to a real clock.
... Connect offline context to media stream to encode data.
... Like a video editor on the web.

https://github.com/WebAudio/web-audio-api/issues/468

Keep.

https://github.com/WebAudio/web-audio-api/issues/457

https://github.com/WebAudio/web-audio-api/issues/456

hongchan: Not clear it's really useful to have bypass on each AudioNode.
... What about source node?

BillHofm_: It becomes a noise generator. :-)

<BillHofm_> Now that I think about it, probably better a 60/50Hz line noise. :)

padenot: Leave in for now.

https://github.com/WebAudio/web-audio-api/issues/373

padenot: Issue was that one-shot ABSNs were generating too much garbage.

https://github.com/WebAudio/web-audio-api/issues/371

Leave in.

https://github.com/WebAudio/web-audio-api/issues/367

Leave in.

https://github.com/WebAudio/web-audio-api/issues/359

Leave in.

https://github.com/WebAudio/web-audio-api/issues/358

Leave in.

https://github.com/WebAudio/web-audio-api/issues/331

Leave in.

https://github.com/WebAudio/web-audio-api/issues/318

Leave in.

<ghaudiobot> [web-audio-api] padenot pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/6a0938bc46cc7030577f285378ec864705eefc01

<ghaudiobot> web-audio-api/gh-pages 6a0938b Paul Adenot: Make it so that the lower bound for the required sample-rate is 8000...

https://github.com/WebAudio/web-audio-api/issues/303

Close this one because offline context has suspend/resume.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/04/07 20:59:46 $
