Re: Aiding early implementations of the web audio API

On Wed, May 23, 2012 at 3:10 PM, <mage@opera.com> wrote:

> Quoting Chris Wilson <cwilso@google.com>:
>
>> On Wed, May 23, 2012 at 1:00 AM, Marcus Geelnard <mage@opera.com> wrote:
>> On 2012-05-22 19:55:54, Chris Wilson <cwilso@google.com> wrote:
>>
>>>> I have to disagree with the definition of "trivial," then.  The only node
>>>> types I think could really be considered trivial are Gain, Delay and
>>>> WaveShaper - every other type is significantly non-trivial to me.
>>>>
>>> I'd say that at least BiquadFilterNode, RealtimeAnalyserNode (given our
>>> suggested simplifications), AudioChannelSplitter and AudioChannelMerger
>>> are trivial too. In fact, if the spec actually specified what the nodes
>>> should do, the corresponding JavaScript implementations would be quite
>>> close to copy+paste versions of the spec.
>>
>> Again - we must have radically different ideas of what "trivial" means.
>>  AudioChannelSplitter/Merger, perhaps - I haven't used them, so haven't
>> closely examined them - but I definitely wouldn't put filters and
>> analysers in that bucket.
>
> Ok, I admit I might have misused the word "trivial" a bit. However, I was
> actually thinking about our proposed simplification of the analyzer node
> (http://www.w3.org/2011/audio/track/issues/74 - i.e. remove the FFT part and
> just keep a copy of the last N samples around), and for the filter I mainly
> considered the core filter operation, which is basically a one-liner
> (excluding parameter setup etc., which could admittedly amount to a few
> lines of code):
>
>  y[k] = b0*x[k] + b1*x[k-1] + b2*x[k-2] - a1*y[k-1] - a2*y[k-2];


This crossed over into the "non-trivial" bucket for me, yes.
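
To illustrate why: even taking that one-liner at face value, a usable JS
version has to carry the filter state across processing blocks (and
recompute coefficients whenever parameters change, which I'm omitting
here).  A rough direct-form I sketch - the structure and names are mine,
not from any spec:

  // Direct-form I biquad; x1/x2/y1/y2 carry state between blocks.
  function Biquad() {
    this.b0 = 1; this.b1 = 0; this.b2 = 0;  // feedforward coefficients
    this.a1 = 0; this.a2 = 0;               // feedback coefficients
    this.x1 = 0; this.x2 = 0;               // previous two inputs
    this.y1 = 0; this.y2 = 0;               // previous two outputs
  }
  Biquad.prototype.process = function (input, output) {
    for (var k = 0; k < input.length; k++) {
      var x = input[k];
      var y = this.b0 * x + this.b1 * this.x1 + this.b2 * this.x2
            - this.a1 * this.y1 - this.a2 * this.y2;
      this.x2 = this.x1; this.x1 = x;  // shift input history
      this.y2 = this.y1; this.y1 = y;  // shift output history
      output[k] = y;
    }
  };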

>>> ...which is why there are JS libs. The Web Audio API is already too
>>> complex to use for most Web developers, so there are already
>>> libs/wrappers
>>> available for making it easier to build basic audio applications.
>>>
>>>
>> I'm not sure what you're trying to say.
>>
>
> Ok. I'll try to explain my reasoning a bit further:
>
> The Web Audio API has a certain level of complexity (don't get me wrong -
> it's complex by necessity). Quoting a blog post [1]: "the API is extremely
> low-level and cumbersome to use". The blogger (Matt Hackett) then went on
> to provide a wrapper library (Audia) for simple audio playback. This
> reminds me a lot of the situation with WebGL, where helper libraries such
> as GLGE [2] and three.js [3] came along quite quickly to provide
> higher-level functionality on top of the very low-level WebGL API.
>
> So, I expect JS audio libraries to emerge (some already have, but once we
> have cross-browser support I think we will see much more serious work
> happen). Whether these libraries use native nodes or provide custom JS
> implementations will make no difference to the library user.


Hmm.  I don't know quite how to put this.

Sure, you could simplify the API down even more, as Audia has - but
fundamentally, he complains about four steps in that blog post (all four
are sketched in code after the list):
1) checking the AudioContext, because it may not be present in all
browsers.  Umm, okay.  He tests for Audia.supported(), so I don't think
this is any different.
2) creating an AudioBufferSourceNode, and connecting it to output.  His
corresponding example, of course, has to create an Audia object; so all you
skip is connecting it.  What if you don't want to connect it straight to
output?
3) fetching the sound file using standard XHR techniques, as has been done
for the past decade or so.  It seems he radically prefers .src, which I
would be fine with - except that his example just plays the sound file
whenever it happens to finish downloading.  That simply isn't what game
developers, for example, want to do!
4) creating a buffer from the XHR response and calling noteOn.  Yeah - so
you can reuse the buffer, and call noteOn() when it's appropriate in the
game.
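
For reference, here's roughly what those four steps look like with the
current API (a sketch only - error handling and vendor-prefix checking
elided):

  var context = new webkitAudioContext();          // step 1

  function loadSound(url, onDecoded) {
    var xhr = new XMLHttpRequest();                // step 3: plain old XHR
    xhr.open('GET', url, true);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      // step 4: decode once into a reusable AudioBuffer
      context.decodeAudioData(xhr.response, onDecoded);
    };
    xhr.send();
  }

  function playSound(buffer) {
    var source = context.createBufferSource();     // step 2
    source.buffer = buffer;
    source.connect(context.destination);
    source.noteOn(0);
  }

  loadSound('foo.mp3', function (buffer) {
    // the game decides when (and how often) to call playSound(buffer)
  });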

I get that as a replacement for <audio src="foo.mp3">, the Web Audio API is
not a good solution.  If all you really want (as the Audia sample page
seems to) is a different way to say <audio src="foo.mp3">, you are barking
up the wrong tree.  But that isn't our use case set.

>> There's too much complexity, it's already having to be wrapped for
>> real-world developers, so let's push more complexity on them?
>>
>
> But who are we pushing this added complexity on? The ones writing the
> wrappers are usually prepared to spend quite some time getting things
> working properly and making life easier for "real-world developers". In
> that context, writing some JS code to implement filters etc. would not be
> THAT much work. There are already JS audio processing libraries available
> (e.g. dsp.js [4]), and I'm quite confident that getting the core audio
> nodes implemented in JS would be a surmountable task (at least it should be
> simpler than putting it all in a spec and getting it implemented natively
> in all browsers).
>

I think there's a big cliff between "I'm gonna hide the complexity of
AudioBufferSourceNode and give you a PlaySound() API that takes buffers"
and implementing filters in JS.  But I'll go back to the middle of this
discussion - if you want to cut out the filters and other transformative
nodes, as well as management, expecting all of those to be rolled from
scratch in JS libs, I'm not entirely sure why you would keep AudioNodes;
why do you want anything more than a buffered output system?
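
(For what it's worth, a JS-implemented filter today already reduces to
exactly that - buffered processing through a JavaScriptAudioNode.  A
sketch, assuming a mono "source" node and the Biquad object sketched
above:

  var filter = new Biquad();
  var node = context.createJavaScriptNode(4096, 1, 1);  // buffer size, ins, outs
  node.onaudioprocess = function (e) {
    var input = e.inputBuffer.getChannelData(0);
    var output = e.outputBuffer.getChannelData(0);
    filter.process(input, output);  // we fill the output buffer ourselves
  };
  source.connect(node);
  node.connect(context.destination);

...at which point the node graph is doing little beyond moving buffers
around.)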

-C

Received on Wednesday, 23 May 2012 22:35:14 UTC