W3C

- DRAFT -

Audio Working Group Teleconference

13 Jun 2012

Agenda: http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0692.html

See also: IRC log

Attendees

Present
+1.862.201.aaaa, ot, cwilso, +1.978.314.aabb, +1.510.334.aacc, joe, Doug_Schepers, crogers, jussi, +1.650.214.aadd
Regrets
Chair
SV_MEETING_CHAIR
Scribe
ot

Contents


<trackbot> Date: 13 June 2012

Testing

Olivier: several conversations about testing, now seems like a good time to start discussing our approach, agree on a strategy and get a test lead (for the group or for each deliverable)

<chris_> http://svn.webkit.org/repository/webkit/trunk/LayoutTests/webaudio/

Doug: testing is very helpful for showing where specs are unclear

… also important when you start having several implementations

… it has been useful in other groups to have a "test lead" who helps coordinate the test effort

… may be the person who makes the tests, or extracts test requirements from the spec

… or encourages other people to add them

… can think of the test coordinator as the editor for the tests

… will need one for each spec

… for the MIDI API it is likely that one of the two editors can manage the tests

… for the web audio API, Chris has a lot of work with the spec, it may be best to have another person in charge of the testing

doug: it also engages more people, and adds checks and balances (Chris is not in charge of everything and wielding too much power)
... do we have any volunteers to be test coordinator for each of the specs?

crogers: we already have over 60 tests written -> http://svn.webkit.org/repository/webkit/trunk/LayoutTests/webaudio/

… any of those can be shared

doug: that's good. one bit of value in the tests is that the implementors can use them to test their own build

… it's important for spec interoperability, but in practice it's useful to test the implementations

… so it's helping when each implementor contributes tests

… are these unit tests?

crogers: fairly basic tests, testing the gain node, etc

… pretty targeted

doug: will need hundreds, maybe thousands

Olivier: I've seen simple specs with thousands of tests, testing combinations and parameters… those tend to be automated

doug: W3C is developing a test framework, a test harness

<jussi> https://github.com/w3c/testharness.js

http://w3c-test.org/resources/testharness.js

http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0725.html
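For illustration, a check written against the testharness.js API mentioned above could look like the following sketch. The `test()`/`assert_true()` calls follow the harness's API; the tiny shim and the mock context here are stand-ins (not part of the harness) so the snippet can run outside a browser page that loads the real script:

```javascript
// Minimal stand-ins for testharness.js's test()/assert_true(), so this
// sketch runs outside a browser; a real page would load testharness.js.
var results = [];
function assert_true(cond, msg) {
  if (!cond) throw new Error(msg || "assertion failed");
}
function test(fn, name) {
  try { fn(); results.push("PASS: " + name); }
  catch (e) { results.push("FAIL: " + name + " (" + e.message + ")"); }
}

// Hypothetical interface-presence test in the testharness.js style;
// the mock context object is a stand-in for a real AudioContext.
test(function () {
  var ctx = { createGain: function () { return {}; } };
  assert_true(typeof ctx.createGain === "function",
              "context exposes createGain()");
}, "AudioContext has a createGain method");

console.log(results.join("\n"));
```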

Olivier: wondering if the webkit tests look like the method mentioned by Philip?

crogers: very similar to pixel tests

Olivier: how do you cater for the fact that you don't get a predetermined signal as output?

crogers: a human being has to generate the baseline for each rendering engine

… the tests will be shared, but the baseline may not

crogers: a majority of our tests don't compare against a ref file

… what they do is render audio internally and use js to walk the rendered data and verify that it is correct

… with a very small tolerance

… many of those tests can be shared
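A sketch of the "walk the rendered data" style of test crogers describes: render samples, then compare each one to the mathematical ideal within a very small tolerance. The sine/gain setup and the tolerance value here are illustrative stand-ins, not taken from the WebKit suite:

```javascript
// Illustrative parameters; a real test would render via an
// OfflineAudioContext rather than filling the buffer by hand.
var sampleRate = 44100, freq = 440, gain = 0.5, tolerance = 1e-6;

// Stand-in "rendered" buffer: what a gain node fed a 440 Hz sine
// might produce.
var rendered = new Float32Array(128);
for (var i = 0; i < rendered.length; i++) {
  rendered[i] = gain * Math.sin(2 * Math.PI * freq * i / sampleRate);
}

// Walk the rendered data and verify every sample against the
// expected value, within the tolerance.
function verify(buf) {
  for (var i = 0; i < buf.length; i++) {
    var expected = gain * Math.sin(2 * Math.PI * freq * i / sampleRate);
    if (Math.abs(buf[i] - expected) > tolerance) return false;
  }
  return true;
}

console.log(verify(rendered) ? "PASS" : "FAIL");
```

Because the comparison is against a formula rather than a recorded baseline file, a test like this can be shared across rendering engines.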

olivier: did you have pixel tests with svg, doug?

doug: probably similar to what is called ref test in css

… a reference view, sometimes reproduced with a combination of simpler features

… in svg, we actually had reference images, and we did the testing manually

… hundreds of tests and we had f2f meetings where we ran tests together in a room

… also important for competitive reasons

… for svg2 we plan on doing web tests

crogers: a majority of these tests can be automated against a baseline

olivier: similar to the graph that Philip sent today

crogers: not for oscillators, but for e.g. biquad, yes

… similar to rendering lines

… lines will not be rendered exactly the same

… all are an approximation of a mathematical ideal

… most of the audio stuff is not that complicated, it's actually precise

joe: wanted to share a useful testing technique

… we alternate between the baseline and the test

crogers: you mean a diff?

… might not work so well for web audio

doug: I like the idea of having a baseline

crogers: 2 kinds of tests

… one fully automated with js walking the data and comparing

… the other compares the generated audio with a baseline

… bit-exact comparison

doug: is there some functionality in the API enabling this?

crogers: we had to build an extra tool in our test harness

doug: would that be useful for some audio analysis use cases?

crogers: probably not because in a normal environment you can generate the wav file and upload it to a server via xhr

… we just have a specific hook in our test harness

… details would be different in another test tool

ot: jussi, you've posted a link to some tests

jussi: testing that the interfaces are there

<jussi> https://github.com/jussi-kalliokoski/web-midi-test-suite

jussi: not sure about more complex tests

… don't think you can do automated tests

… you'd need a tester with MIDI device
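The kind of "interfaces are there" check jussi describes can be automated even without a MIDI device. A rough sketch (the mock navigator object and the helper name are hypothetical; in a browser you would pass the real `navigator`, whose Web MIDI entry point is `requestMIDIAccess`):

```javascript
// Hypothetical helper: checks that the Web MIDI entry point exists
// on a navigator-like object.
function hasMIDISupport(navigatorLike) {
  return typeof navigatorLike.requestMIDIAccess === "function";
}

// Mock navigator exposing the entry point, used here only so the
// sketch runs outside a browser.
var mockNavigator = {
  requestMIDIAccess: function () { return {}; }
};

console.log(hasMIDISupport(mockNavigator)); // true
console.log(hasMIDISupport({}));            // false
```

Tests of actual MIDI input/output would still need a real or virtual device, as discussed below.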

ot: could that be "faked"?

jussi: a virtual midi device could work

cwilson: would be hard to make cross-platform, obviously

jussi: I think OSX lets you do that

crogers: yes I think you can do that

… harder to write that test than to write the MIDI implementation itself

jussi: I had a virtual device in my original proposal, decided to keep it out of v1

… some platforms don't support it at all.

Olivier: I will send a call for volunteers on the list, to see who wants to be in charge of testing for each of the specs

jussi: if no one else volunteers I can be in charge for the MIDI API, for now

Rechartering update

http://www.w3.org/2011/audio/charter/2012/charter-proposed.html

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2012/06/13 19:55:47 $
