W3C

- DRAFT -

Audio Working Group Teleconference

26 Oct 2015

See also: IRC log

Attendees

Present
rtoyg_m, mdjp, jdsmith, Chris_Lilley, Martin_Thomson
Regrets
Chair
SV_MEETING_CHAIR
Scribe
Chris_Lilley

Contents


<trackbot> Date: 26 October 2015

<mdjp> mdjp - discussion on closure for v1

<mdjp> padenot implementation is following quickly on from spec changes

<mdjp> padenot testing is improving but not there yet, Mozilla trying to contribute to web platform tests

<mdjp> cwilso biggest bits are the processing model and audio worker - detail around how compression works, but all captured as issues. TAG review very productive; feel strongly that we need to get the low-level bedrock APIs defined.

<mdjp> joe review issues and current status

<mdjp> joe 23 open issues (unassigned) 60 ready for editing in total (v1)

<mdjp> joe v1 7 issues that are not ready for editing

<mdjp> joe what can we do to speed up the resolution of the issues? Most are straightforward to resolve. There are a lot of issues assigned to people that are not moving. They should be unassigned if assignees are unable to work on them.

<mdjp> cwilso looking forward to charter - suggest forking the work into specific areas (low level, processing etc.) would be a better approach than concentrating on V2 as a single monolithic spec

<jdsmith> BillHofmann: Can we adopt an agile approach to moving forward? Review issues as they are raised and not adopt set goals.

<jdsmith> joeb: We've pretty much adopted a sprint model in the last few months and it's worked well.

<jdsmith> mdjp: Need to get the issues to the point where we have a working draft that is in good shape, where we are happy with the feature set and status of the spec.

<jdsmith> BillHofmann: Beyond the spec, what about tests? What's the full scope of work that we need to complete?

<jdsmith> joeb: Recommend we start discussion on other remaining issues, and then revisit status of V1 later in the day.

<jdsmith> jdsmith: Tempting to say that there are a handful of larger issues, and an editing push to close "ready for editing" issues would leave us very close to V1 completion.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/118

<jdsmith> joeb: Let's start with issues not marked "ready for editing".

<jdsmith> joeb: On issue 118, Paul recommends we leave resampling open for different UA implementations. Is it acceptable to do that and let different UAs compete?

<jdsmith> padenot: I've never heard of anyone complain about resampling behavior.

<jdsmith> mdjp: It can be good to have that kind of competition in the absence of compelling reasons to standardize.

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to 372-StereoPanner: https://github.com/WebAudio/web-audio-api/commit/ebe890de4006e4aea9dc8da5fdbacd9b1eb1ab78

<ghaudiobot> web-audio-api/372-StereoPanner ebe890d Chris Wilson: Clamp loopStart, loopEnd and offset...

<jdsmith> rtoyg: I don't object to that.

<jdsmith> joeb: Then we should close the issue.

<jdsmith> Issue 118 is now closed.

<hongchan> https://github.com/WebAudio/web-audio-api/issues/348

<jdsmith> joeb: jernoble proposed providing categories of buffering

<jdsmith> jdsmith: We've had requests from companies to help get this issue resolved. There are applications that are real time, and they could set smaller buffers and use more power.

<jdsmith> cwilso: Not sure last discussion on this was going the right way. It would be good to expose some kind of knob to tune buffering/power.

<jdsmith> rtoyg: Knob can be set to play music, with longer buffers and latency. Or may want RTC, which needs short buffers and limited latency. Could be so short that the stream would drop out frequently. Need some way to balance.

<jdsmith> joeb: Are general categories not sufficient? If so, what do we need?

<jdsmith> joeb: Proposal had "interactive", "non-interactive" and "media". Is "media" like RTC?

<jdsmith> cwilso: Media would be like streaming from Pandora. Could be long buffering, longer latency, minimized battery.

<jdsmith> mdjp: How do we know what the different values mean?

<jdsmith> cwilso: "interactive" is lowest latency, "non-interactive" is somewhat higher and useful for RTC, and "media" is the longest latency.

<jdsmith> joeb: What about using actual numbers?

<jdsmith> jdsmith: Those vary by device and are hard to quantify.

<jdsmith> padenot: And sample rate as well.

<jdsmith> cwilso: Interactive, normal and playback?

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/0a0111a03051ae4857e17b5937744e69f53429ca

<ghaudiobot> web-audio-api/gh-pages 0a0111a Chris Wilson: Clamp loopStart, loopEnd and offset....

<jdsmith> joeb: Define the terms. Interactive: lowest latency without glitching. Normal: balance latency and stability/power consumption. Playback: latency not important, sustained playback without interruption is the priority; lowest power consumption.

<jdsmith> joeb: Normal would be the default.

<jdsmith> padenot: Should put this on the constructor, when we have that.

<jdsmith> joeb: Not done yet, but should do that or we'll have a mess.

<jdsmith> Jdsmith: Correction, interactive should be the default, and should be like current implementations.

<jdsmith> joeb: "Normal" does sound like it's the default though.

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/44ebce6044882da08c41f30a835b3564c922a407

<ghaudiobot> web-audio-api/gh-pages 44ebce6 Chris Wilson: Return destination node from connect()...

<jdsmith> joeb: Reorder to balanced, interactive, playback.
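The three category names agreed here can be pictured with a small sketch. The `chooseLatencyHint` helper and its use-case labels are hypothetical, purely illustrative of how an application might pick one of the categories:

```javascript
// Illustrative sketch only. The three category names below are the ones
// agreed in the discussion above; the helper and use-case labels are
// hypothetical.
function chooseLatencyHint(useCase) {
  switch (useCase) {
    case 'game':         // lowest latency without glitching
    case 'instrument':
      return 'interactive';
    case 'streaming':    // sustained playback, lowest power
      return 'playback';
    default:             // balance latency vs. stability/power
      return 'balanced';
  }
}
```

In a browser the hint would then be supplied at construction time, per padenot's point above, e.g. `new AudioContext({ latencyHint: chooseLatencyHint('game') })`, assuming a constructor that accepts options.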

<hongchan> https://github.com/WebAudio/web-audio-api/issues/532

<jdsmith> "Impedance mismatch between the Web Worker text and what is possible/needed for the AudioWorker"

<jdsmith> cwilso: Need to remove all remaining references to WebWorker and keep this bug open until that's done.

<jdsmith> joeb: Assigned to cwilso.

<jdsmith> https://github.com/WebAudio/web-audio-api/issues/628

<jdsmith> "Specify the processing model for the Web Audio API"

<jdsmith> joeb: This was discussed yesterday.

<ghaudiobot> [web-audio-api] cwilso deleted 372-StereoPanner at ebe890d: https://github.com/WebAudio/web-audio-api/commit/ebe890d

<jdsmith> joeb: Can close it when the Pull Requests for AudioWorker are completed.

<jdsmith> joeb: Will now look at issues that aren't marked "needs review", but also are not "ready for editing".

<jdsmith> https://github.com/WebAudio/web-audio-api/issues/13

<jdsmith> "A NoiseGate/Expander node would be a good addition to the API."

<jdsmith> cwilso: There is a proposal attached to the issue on the new node type.

<jdsmith> joeb: jer pushed back on it.

<jdsmith> cwilso: We discussed this in the working group. In the absence of someone reviewing the proposal, I'm reluctant to do the commit.

<jdsmith> cwilso: I feel strongly we need this node type in the spec.

<jdsmith> joeb: What do the other editors want to do with this? Is it spec'd sufficiently to roll into the spec? If not, it may have missed the window.

<jdsmith> padenot: I'd rather add the spec for the dynamics compressor. It's in the spec, but the details of it are not.

<jdsmith> cwilso: The only two problems with DynamicsCompressorNode are that it's not spec'd and that it has no side-chaining output.

<jdsmith> cwilso: Noisegate is important. You'll want to put one on every mic stream.

<jdsmith> joeb: It feels weird to me to add a new node that's similar to another one that's itself not yet spec'd. If we work on them together, the one might fall out of the work for the other.

<jdsmith> cwilso: Not clear how that helps one fall out of the other.

<jdsmith> joeb: I think there will be more comfort accepting the commit once DynamicsCompressorNode is fully spec'd.

<jdsmith> joeb: Next up is the bug on accessing different output devices (https://github.com/WebAudio/web-audio-api/issues/445). To be discussed later today.

<jdsmith> https://github.com/WebAudio/web-audio-api/issues/631

<jdsmith> rtoyg: This one is ready to implement.

<jdsmith> joeb: Marked that way.

<jdsmith> Now working uncommitted issues.

<jdsmith> https://github.com/WebAudio/web-audio-api/issues/606

<jdsmith> "Valid rolloff factors for Panner?"

<jdsmith> joeb: Is this something we can resolve here, or does it require some deep diving?

<jdsmith> joeb: Suggest we move on and come back to this later.

<rtoyg_m> https://webaudio.github.io/web-audio-api/#the-pannernode-interface

<hongchan> https://github.com/WebAudio/web-audio-api/issues/611

<jdsmith> "Distance models: maxDistance vs refDistance"

<jdsmith> joeb: Throwing errors works great if you have a method call.

<jdsmith> rtoyg: So we'll just let people set these to any values, whether they make sense or not? That's probably okay with me.

<rtoyg_m> Panner formulas: https://webaudio.github.io/web-audio-api/#the-pannernode-interface

<jdsmith> joeb: Could update formulas to clamp dref to the min of dref and dmax.

<jdsmith> joeb: I'd prefer to solve this in the formula definitions rather than throw errors.
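joe's clamping idea can be sketched as a pure function, using the linear distance model as the example. The helper name and shape are illustrative, not spec text; the point is that clamping refDistance to min(refDistance, maxDistance) inside the formula removes the need to throw on inconsistent values:

```javascript
// Illustrative sketch of clamping inside the distance-gain formula
// (linear model shown). Names are hypothetical, not spec text.
function linearDistanceGain(distance, refDistance, maxDistance, rolloffFactor) {
  const dref = Math.min(refDistance, maxDistance);           // the proposed clamp
  const d = Math.min(Math.max(distance, dref), maxDistance); // clamp d to [dref, dmax]
  if (maxDistance === dref) return 1;                        // degenerate range
  return 1 - rolloffFactor * (d - dref) / (maxDistance - dref);
}
```

Even a nonsensical refDistance larger than maxDistance then yields a well-defined gain instead of an error.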

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/2677a0f777fc180dc30d21fddec072eb255b679d

<ghaudiobot> web-audio-api/gh-pages 2677a0f Chris Wilson: Remove references to Web Workers....

<hongchan> https://github.com/WebAudio/web-audio-api/issues/612

<jdsmith> "Behavior of exponentialRampToValue when the previous event is less than or equal to 0"

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/078b15f29fd7cd4ccb357a88d2133354c9fc73f6

<ghaudiobot> web-audio-api/gh-pages 078b15f Chris Wilson: loop modification mixup....

<jdsmith> rtoyg: Have a proposed behavior that seems okay (hold the value from the end of the previous event) if V0 is negative.

<jdsmith> rtoyg: Can do the same if either V0 or V1 is negative. If both are negative, the equation is valid.

<jdsmith> joeb: Can redefine the formula for the case when V0 and V1 have opposite signs to evaluate to V0.

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/013e6ad482af195e8844c083a365171d11871c59

<ghaudiobot> web-audio-api/gh-pages 013e6ad Chris Wilson: Rearrange description of "currentTime"...

<jdsmith> joeb: i.e. preserve the previous value.
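The resolution discussed here (evaluate to V0 when the signs differ) can be written out against the ramp formula. Illustrative only; the function name is invented:

```javascript
// v(t) = V0 * (V1 / V0)^((t - t0) / (t1 - t0)) is only meaningful when
// V0 and V1 are non-zero and share a sign; otherwise hold V0, per the
// discussion above. Hypothetical helper, not spec text.
function exponentialRampValue(t, t0, v0, t1, v1) {
  if (v0 === 0 || v1 === 0 || (v0 > 0) !== (v1 > 0)) {
    return v0; // zero or opposite signs: preserve the previous value
  }
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}
```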

<jdsmith> https://github.com/WebAudio/web-audio-api/issues/640

<jdsmith> "Specify the nominal range for SpatialPannerNode's AudioParam"

<jdsmith> paul: AudioParams generally need nominal ranges.

<jdsmith> joeb: So do we just need to say vectors can be -infinity to +infinity?

<jdsmith> padenot: Yes.

<jdsmith> https://github.com/WebAudio/web-audio-api/issues/642

<jdsmith> "Conflict on resuming context"

<jdsmith> joeb: Resume and suspend do feel pretty different. Should we just close this issue?

<jdsmith> joeb: Suggest we change the behavior of the OfflineAudioContext to match the online one - resolve the promise if the context is in the running state, and reject only if it is closed.
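joe's suggested alignment can be summarized as a tiny decision table. The `resumeOutcome` function is a hypothetical stand-in for illustration, not the real promise machinery:

```javascript
// Illustrative stand-in for the suggested resume() behaviour on
// OfflineAudioContext, matching the online AudioContext: the promise
// resolves unless the context is closed. Function name is hypothetical.
function resumeOutcome(state) {
  if (state === 'closed') return 'reject'; // only a closed context rejects
  return 'resolve';                        // 'running' or 'suspended' resolve
}
```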

<jdsmith> Next topic: Testing

<jdsmith_> rtoyg: All of our tests are running in a Chrome test harness as well.

<jdsmith_> joeb: Had an intern that got existing tests running on different browsers.

<jdsmith_> joeb: My take after that work was that it wouldn't be that hard to get all the tests running on every browser.

<jdsmith_> joeb: That would give us another positive piece for finishing V1.

<jdsmith_> joeb: How many of Mozilla's old tests aren't in the W3C format?

<jdsmith_> padenot: Most of them.

<padenot> https://github.com/w3c/web-platform-tests

<mdjp> https://github.com/DanielShaar/web-platform-tests

<jdsmith_> joeb: It sounds like testing work might be characterized as migrating tests into the w3c test harness.

<jdsmith_> mdjp: That wraps up the testing discussion.

<ghaudiobot> [web-audio-api] cwilso pushed 1 new commit to gh-pages: https://github.com/WebAudio/web-audio-api/commit/938cf2bf8f81a4febb0228eae03c9d0af5103cbc

<ghaudiobot> web-audio-api/gh-pages 938cf2b Chris Wilson: connect() typo, and tidy cleanup.

<mdjp> paging shepazu

<mdjp> zakiim, pick a victim

oh zakim, get a grip

<scribe> scribenick: Chris_Lilley

v1

joe: all the issues have equal weight so far
... unprioritised backlog. Let's decide the most pressing issues, even if some things are not done.


scribe: what things are essential?

joe: audio worker, dynamics compressor, automation cleanup are clearly needed
... other things are clarifications and we could leave them as is for now. Gives focus.

BillHofmann: just 2 buckets?

joe: looking for showstoppers, things we can't ship without

BillHofmann: what about 251?

cwilso: nothing much needed, could be v1

BillHofmann: just tagged as v1. looking at the milestone, 65 of them

mdjp: need to think of timescale, if we can get to that point. All on my list was covered already, so where can we get to

jdsmith: reassess when you have all the top issues done. Does not stop them being implemented

joe: prefer to identify key things we will focus on

mdjp: in a month's time, review where we are and so on
... make sure no one person has all the issues

joe: rebalance if necessary
... metric is the ones that are really needed to get v1 out the door

BillHofmann: question, open pull requests - there are a lot, what is happening? done means merged?

joe: yes

padenot: editors will merge them

joe: 20 open PR

BillHofmann: longer they sit, more likely they will not cleanly merge

joe: so balance work away from spec editors to compensate

padenot: we just need to get on and do it

rtoyg: some PRs have significant amounts of stuff

padenot: yes, authors need to address comments and send new PRs that address them

BillHofmann: a week is a reasonable timeframe
... okay, understood

joe: lets pick important v1-required issues

<joe> https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Web+Audio+V1%22+sort%3Acreated-asc

joe: 10 https://github.com/WebAudio/web-audio-api/issues/10

cwilso: requirement

joe: 12 is being worked on by mozilla

https://github.com/WebAudio/web-audio-api/issues/12

joe: 30 https://github.com/WebAudio/web-audio-api/issues/30

cwilso: this is editorial

31 https://github.com/WebAudio/web-audio-api/issues/31

rtoyg: must be done

Chris_Lilley: it's trivial, except all the "must" and "MUST" need to be checked and perhaps reworded

joe: do last in case language changes and it needs to be redone

34 https://github.com/WebAudio/web-audio-api/issues/34

(not critical)

45 https://github.com/WebAudio/web-audio-api/issues/45

mdjp: do later, in case of changes

52 https://github.com/WebAudio/web-audio-api/issues/52

joe: I have a PR on that

85 https://github.com/WebAudio/web-audio-api/issues/85

joe: there is a PR on this from me

cwilso: conflicts with one I just checked in, definition of current time

86 https://github.com/WebAudio/web-audio-api/issues/86

mdjp: has PR

95 https://github.com/WebAudio/web-audio-api/issues/95

joe: padenot this is yours

padenot: okay

jdsmith: not in first bucket

padenot: okay, postpone

97 https://github.com/WebAudio/web-audio-api/issues/97

joe: sure I have a PR on that

110 https://github.com/WebAudio/web-audio-api/issues/110

joe: PR on that too

130 https://github.com/WebAudio/web-audio-api/issues/130

joe: is this addressed by what we discussed yesterday?

rtoyg: yes, mostly

cwilso: no initial schedule point; this confuses people. A ramp with an endpoint has no start point and so jumps - no defined beginning. Can't directly solve it
... could say there is a magic initial point, but that's goofy.
... interleaved points - carl said this is goofy, the diagram changes - but that is how a scheduler works

joe: close it?

cwilso: implementations are consistent but it is not described

joe: should be

132 https://github.com/WebAudio/web-audio-api/issues/132

(no takers)

BillHofmann: we resolved this a year ago; cwilso had a comment.

joe: feel we can leave this as-is

246 https://github.com/WebAudio/web-audio-api/issues/246

joe: is this the same as the noisegate one?

cwilso: it's on dynamics compressor, straightforward. Thought I had done this already

mdjp: speccing the compressor is essential

cwilso: (thinks) grave mistake to have a dynamics compressor with no sidechain. It's trivial to add: just taking a control from a different input. And incredibly useful
... no easy way to fake it
... also good to have the envelope follower output

joe: we resolved to add a second input

cwilso: from f2f last year

joe: roll into dynamics compressor?

cwilso: not blocking on it
... different from noise gate where you rely on attack and decay controls

Chris_Lilley: very easy and widely used

247 https://github.com/WebAudio/web-audio-api/issues/247

padenot: fixed by a PR in flight

264 https://github.com/WebAudio/web-audio-api/issues/264

joe: I can take it

300 https://github.com/WebAudio/web-audio-api/issues/300

joe: this is a nice to have.

cwilso: important because so many mobile devices have high sample rates; otherwise 8-bit audio in games comes out at 48k. We have no access to the resampling bit
... streaming rate not under control; else we could make a context at the stream sample rate and align samples. Can't do that, so clicks
... it's a reasonable solution. We could cut it, but
... not convinced we should drop it for v1

322 https://github.com/WebAudio/web-audio-api/issues/322

padenot: PR in flight

324 https://github.com/WebAudio/web-audio-api/issues/324

not current sprint

330 https://github.com/WebAudio/web-audio-api/issues/330

rtoyg: trivial

BillHofmann: not in sprint

cwilso: resolved a year ago. not done yet

335 https://github.com/WebAudio/web-audio-api/issues/335

joe: third optional argument

jdsmith: doesn't block anything

336 https://github.com/WebAudio/web-audio-api/issues/336

joe: put this off

344 https://github.com/WebAudio/web-audio-api/issues/344

BillHofmann: we did this yesterday

cwilso: yes but we got complaints
... not responded yet
... add a cancel-and-hold with no time on it

346 https://github.com/WebAudio/web-audio-api/issues/346

joe: spec already clear (they can't)

cwilso: not specifically stated though

348 https://github.com/WebAudio/web-audio-api/issues/348

rtoyg: think we should do this

cwilso: IN CURRENT SPRINT?

rtoyg: yes

cwilso: OMGG (flips table)

joe: think its not going to happen in current sprint
... OK we have 10, and 5 assigned to cwilso

cwilso: node composition is no change

joe: will take the tag ones

cwilso: constructability is a lot of work and not fundamental

joe: straightforward but annoying

cwilso: 12, someone is working on, who? not clear
... https://github.com/WebAudio/web-audio-api/issues/12

padenot: in gecko but not exposed yet

cwilso: think (lots of proposals, all over the place in this thread)
... we need a way to understand when current time is played in performance now time

padenot: that is hard

cwilso: independent of destination node latency

joe: many ways to map back and forth

cwilso: need latency of output device, timestamp accounts for chunking
... need to work out where they are. output node latency is a different issue

joe: is that 340? no
... oh because 12 takes that on
... this is one of the big remaining defects for v1. We need solutions to both, or we can't tell when something is heard

cwilso: needs input latency too

joe: this gives latency cleanly, and another number to solve issue 12

cwilso: one edit to that, michael said,
... context performance timestamp: what that current time represents
... performance.now() + x = when it comes out the speakers
... sometimes destination node latency is unknowable and can't be done automatically
... when current time on the audio context is, in terms of performance time
... it's when we send those bits to the destination, plus dest latency on top

joe: how can we say that in spec prose?

cwilso: context.performanceOutputTime
... when is current time going to hit the speakers. Always in the future: buffer + latency. Monotonically increases but jumps in blocks

joe: leave what he had in there?

cwilso: not useful as it stands. expose dest not output latency separately or not

joe: very important for app tuning
... people ask this a lot

cwilso: yes, I agree
... smooth out buffered chunking. exposing that is fine. but one number or two
... two sources of latency
... do you want that on average or right now

joe: on average

cwilso: one number is current time in performance time
... but current time chunks forward, so it is not uniform. depends when you are in the buffer
... don't need the average, you can calculate it

padenot: they are not clocked together
... so they drift

cwilso: but as an average latency it does not matter

padenot: if they drift apart the average will move

rtoyg: yes as not on same clock

padenot: so you re-add each time

cwilso: yes
... never schedule minutes or hours in a buffer. Goal is to get enough info to estimate when to do things; not sample-accurate across two clock sources
... to a reasonable degree - not phase sync, for example, but enough for drum hits, i.e. a ms or so
... gets in the ballpark with one number. Can smooth over time if you really want
... input and output latency, eventually, I hope

<joe> https://github.com/WebAudio/web-audio-api/issues/12

mdjp: we have other issues related. use consistent language for them

cwilso: this should still represent when it hits the speakers, as an estimate
... output node latency is included in that

joe: ok so we addressed 340 in effect, and bailed on it

cwilso: yes, lets you solve 340
... you don't have a smoothed estimate but you do have one.
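cwilso's one-number mapping can be sketched as plain arithmetic. All names are illustrative; in a real app `currentTime` comes from the context, `performanceNowMs` from performance.now(), and `outputLatencySec` is the (average) output-path latency:

```javascript
// Sketch of the one-number mapping discussed above: estimate, in
// performance.now() milliseconds, when a given context time will be
// heard. Names are hypothetical. Because the two clocks drift, as
// padenot notes, the estimate is recomputed each time, not cached.
function estimatePlayoutTime(contextTime, currentTime, performanceNowMs, outputLatencySec) {
  const deltaSec = contextTime - currentTime; // seconds until contextTime renders
  return performanceNowMs + (deltaSec + outputLatencySec) * 1000;
}
```

This is an estimate in the ballpark of a millisecond or so, not phase-accurate synchronization across the two clock sources.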

rtoyg: will do audio param scheduling

cwilso: this is the "no initial point" one

joe: so lets get these done in a month. actually, much sooner

BillHofmann: when are the next meetings?

joe: 2-3 weeks from thurs, so 12 Nov

cwilso: doubt I can do that

joe: lets defer telcon schedule until the end of the meeting

break

<mdjp> device output api proposal

<mdjp> joe we are trying to introduce facilities into media capture for output devices

<mdjp> joe allowing us to discover and select output devices - default to a device which best fits an application

<mdjp> HA what do you mean by default?

<joe> https://docs.google.com/document/d/1jRVexJ6yM6gJggOZjMFejXUApTByD5aoYRGzAJTsQzM/edit#

<mdjp> joe there could be a stripped down policy to give information about the default device.

<mdjp> joe selection of output device should be available to the user

<mdjp> ??? some of these usecases are similar to input

<mdjp> Martin - current proposal - read only missing

<mdjp> joe - how does the source object work

<mdjp> Martin - its a stream which draws from a url source object is the same but draws from a media stream

<mdjp> joe - we would also do the same with proposals for WAAPI - you can construct a context around an id or output device abstraction

<mdjp> joe - no pre defined ids as they should correspond to devices defined by constraints

<mdjp> Dan - for inputs the reason for tracks - you can send them or use other apis to manipulate, not sure if you can do something similar with output

<mdjp> Dan - at a high level, likes the analogy

<mdjp> Harald - media stream track encapsulates selection of a source and processing instructions for the material coming from that source

<mdjp> Martin - decision was explicit - not just stream of media but some things associated with it also

<mdjp> Harald - control surface for the source

<mdjp> joe - enumerate devices provides no information about output device
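For context, enumeration in the existing media-capture model amounts to filtering device-info records by kind. The sketch below uses plain objects as stand-ins for what navigator.mediaDevices.enumerateDevices() would resolve with in a browser:

```javascript
// Illustrative: pick out audio outputs from an enumerateDevices()-style
// list. The plain objects in the test stand in for MediaDeviceInfo
// entries; the helper name is hypothetical.
function audioOutputs(devices) {
  return devices.filter((d) => d.kind === 'audiooutput');
}
```

joe's point is that beyond `kind` and an identifier, such records carry little information about the output device itself.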

<mdjp> Martin - model here is superior to original proposal

<mdjp> Harald - they were originally based on a mozilla proposal

<mdjp> Dan - what would it mean to clone a media output device. Important for input devices.

<mdjp> Harald - If you grab the same device twice will you ever have the need to do something different to one compared to the other one.

<mdjp> joe - there is a problem, connection between audio context and device is magic - there are no apis that describe it. Device may exist at a level beyond what you can do with web audio. Muting for example - if you mute a sink all elements connected should go silent.

<mdjp> Martin - if you restrict to audio that you have control over - you could manage the playout yourself and mute independently.

<mdjp> joe - final bit - set of constraints, capabilities and settings

<mdjp> joe - channel layout and sample rate are essentials

<mdjp> joe - propose to extend media devices with getUserMediaOutput - takes constraints and returns promise

<mdjp> Dan - selectMediaOutput - proposed name

<mdjp> joe - it's analogous to getUserMedia and has permissions requirements - linchpin is the ability to acquire an output device this way.

<mdjp> Harald - in Windows there is a set of predefined devices which the user expects to be able to select. No way the browser can figure out the properties of those devices - e.g. ring tone speaker and music speaker, one placed to left and one to right

<mdjp> joe - enumeration of music vs comms (semantic discriminator)

<mdjp> Martin - this would be a useful feature

<mdjp> Martin - If we are able to reduce the control surface (or eliminate it) then this is the wrong model - we have the media device structure for enumerating which provides this information - removing the proposed device would mean going back to sink ids - nice to have an object, but if nothing is in it then it's not much value. We need to eliminate the control surface aspects of this.

<mdjp> joe - we arrived at this proposal because we doubted the enumerated devices would give us enough information without the prompt

<mdjp> BillHofmann - increasing use of HTML on non-PC or mobile devices, e.g. smart TV - a lot of the permissioning is strange in this case.

<mdjp> Dan - a lot of conversation about permission - we could not put requirements on how permissions are given, it could be implicit in the software rather than something that needs to be prompted. Allow browsers to innovate.

<mdjp> Martin - the origin operating the UI in this context is known to the browser so can imply pre permission (in the smart tv case) This would not violate permissions model.

<mdjp> Martin - on device changed event allows enumeration of devices - eg switch from surround to stereo output

<mdjp> Martin - 2 avenues to explore: 1. we should determine if control surfaces are needed in the use cases (in our proposal)

<mdjp> joe - define control surface

<mdjp> Martin - change volume - volume is writable, apply constraints - writable, mute - writable. Justify each of these with a use case or get rid

<mdjp> joe - if we killed them all

<mdjp> Martin - if you could then how much can we expose with enum devices, and constraints to create a single choice for output

<mdjp> joe - don't necessarily need constraints

<mdjp> Martin - q is, if you have the info you just pick what you want - problem is you don't want to expose all of the information (latency, current volume); too much for the fingerprinting surface.

<mdjp> Dan - we need to change enumerateDevices to give more info before the permission prompt - yes you need info from enumerate devices; may be that the level of info for output is different, as the perms model is different

<mdjp> Martin - analogy between vid and audio - a lot of information available - nothing we can do about fingerprinting in this case. Number of channels - enumerate; lowers the fingerprinting surface - trade-off of value from what we expose compared to fingerprint

<mdjp> Dan - might be multiple levels of permission

<mdjp> BillHofmann - list that mapped well to system available info

<mdjp> Dan - draw line of conditional and unconditional info - can use same conds as getUserMedia

<mdjp> joe - we wind up where we started; enumerateDevices only lifts the filtering if you call getUserMedia

<mdjp> Harald - does anyone here understand the permission api

<mdjp> Dan - doesn't like the request permission aspect - contextless requests for perms not good

<mdjp> Dan - having a way for the app to query for information (without permission); requested removal of request from the api; request(permission) generates the prompt for the permission

<mdjp> Dan - we need some way for app to request transition from one state to the other

<mdjp> joe - device choices useless without the name, which is the most fingerprinted

<mdjp> Martin - possibility of creating images to deal with passing names of devices - on hold at the moment

<mdjp> shepazu - in web payments, match payment instruments in wallet with instruments that are accepted by the site.

<mdjp> Dan - rtc - codec negotiation is a similar problem

<mdjp> Harald - get setting call could be a solution to store settings - not available on the output the moment its on the track

<mdjp> Martin - when you enum a device you may find that snapshotting the info into media device info works

<mdjp> joe - not keen on that at the moment

<mdjp> Dan - make sure you think about what evil people might do (regarding permissions)

<mdjp> Martin - almost done with V1 - strong desire to complete - this fits with V2

<mdjp> Martin - getUserMediaOutput - should it narrow to a single device pre or post permissions

<mdjp> joe - mistaken idea - constraints dictionary not such a good idea.

<mdjp> trackbot, end meeting

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.140 (CVS log)
$Date: 2015/10/27 07:55:02 $
