W3C

- DRAFT -

Audio Incubator Group Teleconference

18 Oct 2010

See also: IRC log

Attendees

Present
Regrets
Chair
Al MacDonald
Scribe
Al MacDonald

Contents


<trackbot> Date: 18 October 2010

<f1lt3r> scribe: Al MacDonald

<f1lt3r> ScribeNick: F1LT3R_tm2

<f1lt3r> meeting: Audio Incubator Group Bi-Weekly Teleconference

Conference code : 28346

Bridge Numbers: http://www.w3.org/2005/Incubator/audio/wiki/Audio_Incubator:Current_events

<chris> lots of noise on the line

<joeberkovitz> got dropped rejoining

ok

<rikrd> hi

<rikrd> I'm in

hi rikrd

<chris> i'm in too

<eric_carlson> should we have a scribe so the conversation is recorded?

Scheduling of pure code events

joeberkovitz: This relates to scheduling callback events. While working with the WebKit spec this weekend, it looked like something that would be useful to add.

chris: I agree with Joe that this will be a useful thing. At the moment I'm working hard at getting my spec into WebKit, so I need to stop the spec shifting around too much. So perhaps we make another page for version 2 features that should be dealt with next.

joeberkovitz: Let's keep track of these features. I can probably add a link from my spec page to another page so we can work on that there.

Envelopes and loop points

joeberkovitz: These ideas seem pretty non-controversial, but I wanted to raise this in case anyone wanted to comment.

chris: We can use these to add envelopes at any point in the graph. It makes sense to split out the envelope in the same way as the audio panner node. When I was originally working on this, it was just built into the source itself.

Media cue points

joeberkovitz: This relates to agenda item 1 a little bit. Some media formats allow cue points to be embedded in the media file. Could the API use these cue points? I am not familiar with this personally; I think it's a good point, but I am not sure where it fits into the overall picture.

chris: It would be good to understand which formats we are talking about here; I know there are some binary formats that do this.

joeberkovitz: FLV supports this and I think MP4 supports this also.

chris: I think we could look at using this data from MP4s at some point. We could also keep these cue points in JSON, which may be useful. I don't think we have to define a data format as such.

joeberkovitz: I think Chris's point was one of workflow, and that it would be good to have the capability of interfacing with the authoring software rather than separating the payload.

chris: I think we're in a difficult situation as far as media formats go, as there are so many, and different browsers currently support different codecs.

eric_carlson: I think with regards to cue points defined in media files, we should just wait and piggyback on the work being done in the HTML Accessibility Working Group. The media elements will be getting a cue API that won't be at the resolution you would want for processing real-time audio, but is intended mostly for handling captions in sync with the media elements. But the part that may be applicable is that there should be an API to get access to the times of the cue points defined in the media file, so we could use those from script to set up the higher-resolution callbacks.

chris: I think it will be a while before we can get to implementing this, but the pieces are falling into place.

Inability to schedule a subgraph of nodes in current API proposal

joeberkovitz: I don't think this is a version 1 feature, but I think we should consider it ahead of time so as not to back ourselves into a corner.
... You can take the example of a matrix filter, as you would use in SVG; without a feature like that, imagine what working with SVG would be like. By analogy, the audio API is scheduled on an absolute basis. Imagine the factory code that has to build this thing; the engine could be messy. What I am hoping for is a mechanism that applies an overall timeshift to a group of notes that are acting relative to each other in time. My first proposal was a "performance" object.

shepazu: It would also be useful to transform/transpose events in groups such as raising the key/pitch of a group of notes or changing the tempo of a group.

chris: My concern is keeping things simple. The kind of abstractions you are talking about, joeberkovitz, could be done in JavaScript exactly as you are describing, handling all of the sub-scheduling. It should not be too hard to write that kind of code. My inclination is that it probably won't be messy, but I am open to exploring this idea.

joeberkovitz: We did this in Standing Wave, so I think the implementation complexity is not that much of an issue. I just want to be thorough on this.
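The "performance" object joeberkovitz proposes above might look something like the following sketch. This is purely illustrative and not part of any proposed API: events are scheduled at times relative to the group's own start, and one absolute start time is applied to the whole group at play time.

```javascript
// Hypothetical sketch of a "performance" object: a group of events
// scheduled relative to each other, shifted as a unit by one absolute
// start time. All names here are illustrative assumptions.
class Performance {
  constructor() {
    this.events = []; // each entry: { relativeTime, callback }
  }

  // Schedule a callback at a time relative to the performance's start.
  schedule(relativeTime, callback) {
    this.events.push({ relativeTime, callback });
  }

  // Resolve every relative time against a single absolute start time,
  // producing the absolute schedule the engine would actually use.
  resolve(startTime) {
    return this.events.map(e => ({
      absoluteTime: startTime + e.relativeTime,
      callback: e.callback,
    }));
  }
}
```

As chris notes, this layer can live entirely in script on top of an absolute-time API; nothing here requires engine support.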

Generating sample-accurate curves from JavaScript

F1LT3R_tm2: I am wondering whether we are talking about looping through the whole array and setting each sample in JavaScript, or passing this to the engine; I want to make sure we don't have to process every sample in JavaScript.

chris: Yes, I am thinking of having a JavaScript curve where you define certain points in JavaScript, which would then be passed off to the engine to do the actual calculation.

F1LT3R_tm2: So it sounds like it is not an issue with your API then.

chris: One thing that is interesting is: what parameters can take these curves? Gain, bi-quad filters, resonance and cutoff frequency parameters, pitch, etc. Then the question is, do you want to be able to control those at a sample-accurate level?
... The more parameters we introduce, the more complex it becomes to control sample-accurate changes to the parameters. The filter cutoff may be one that you do want to control on a sample-accurate basis. If you don't change it on a per-sample basis, you can't get quite as much control.

joeberkovitz: These parametric curves probably have sample-accurate lookup tables, but parametric curves can be pre-generated.
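The approach chris and joeberkovitz describe, defining a few control points in JavaScript and pre-generating a sample-accurate table for the engine, could be sketched as follows. This is an illustrative assumption, not a proposed API; it simply interpolates linearly between control points into a `Float32Array`.

```javascript
// Sketch (illustrative only): pre-generate a sample-accurate lookup table
// from a small set of parametric control points, so JavaScript defines the
// shape once and the engine consumes the resulting array per sample.
// points: array of { t, value } with t in [0, 1], sorted by t, first t = 0,
// last t = 1; numSamples must be >= 2.
function curveTable(points, numSamples) {
  const table = new Float32Array(numSamples);
  let seg = 0; // index of the current control-point segment
  for (let i = 0; i < numSamples; i++) {
    const t = i / (numSamples - 1); // normalized position in the curve
    // Advance to the segment containing t.
    while (seg < points.length - 2 && t > points[seg + 1].t) seg++;
    const a = points[seg];
    const b = points[seg + 1];
    const f = (t - a.t) / (b.t - a.t); // linear interpolation fraction
    table[i] = a.value + f * (b.value - a.value);
  }
  return table;
}
```

The per-sample loop runs once, up front; afterwards the engine only reads the table, which addresses F1LT3R_tm2's concern about touching every sample from script at render time.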

chris: Let's say you have a compressor with 10 parameters controlling the dynamics, and inside the DSP code you are dealing with all ten of those parameter values. In a normal case you would not be automating all ten at the same time, so you usually write the code so that the parameters are controlled more slowly, with less resolution (as needed).

F1LT3R_tm2: Are we talking about setting different levels of granularity for each envelope?

chris: No, we are talking about letting the software make that decision. In computer music, CSound has the notion of a-rate (on a per-sample basis) and k-rate, which could be once every half millisecond.
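The a-rate/k-rate distinction chris mentions can be sketched in a few lines. This is an illustrative toy render loop (the function names are assumptions, not anything from the spec): the audio-rate parameter is re-evaluated every sample, while the control-rate parameter is only re-evaluated once per control block.

```javascript
// Toy render loop illustrating a-rate vs. k-rate parameter evaluation,
// in the CSound sense discussed above. Purely a sketch, not an API.
// kRateParam and aRateParam are functions of the sample index.
function render(numSamples, blockSize, kRateParam, aRateParam) {
  const out = new Float32Array(numSamples);
  let k = 0; // current control-rate value, held constant within a block
  for (let i = 0; i < numSamples; i++) {
    if (i % blockSize === 0) {
      k = kRateParam(i); // k-rate: updated once per control block
    }
    const a = aRateParam(i); // a-rate: updated every sample
    out[i] = a * k;
  }
  return out;
}
```

With a large `blockSize`, the k-rate parameter is cheap but coarse; with `blockSize` of 1, it degenerates into a second a-rate parameter, which is the trade-off behind chris's point about not automating all ten compressor parameters at full resolution.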

Deliverables for the charter

shepazu: I am going to try to write up the charter this week for the Audio Working Group. It would be useful if people could list what they think are the most important deliverables, in terms of specifications and other documents we should be working on.
... I think Eric made a point earlier about the cue points for media elements; in writing this we should also talk about what kind of liaisons we have with other groups, such as the DAP working group and the media API for cue points and subtitles.
... You can do this on the email list. Oftentimes a really tight charter helps with getting it approved, as it limits the scope, making things easier to deal with. Chris, are there any issues with IP on the work you have done? You haven't filed for patents?

chris: No we have not applied for patents and I am not aware of any patent issues with the work I have been doing.

shepazu: Let's consider what audio tools could be used to help create this content, so that we might be able to have a workflow built in.

chris: MIDI is the first thing that comes to mind.

joeberkovitz: Music XML would be a good one to take into consideration.

<MGood> That's MusicXML - no space

shepazu: So send your thoughts to the email list, preferably today, and I will look to get the charter taken care of as soon as possible.


Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.135 (CVS log)
$Date: 2010/10/18 17:02:22 $
