Re: Missing information in the Web Audio spec

On Fri, May 18, 2012 at 9:23 AM, Philip Jägenstedt <philipj@opera.com> wrote:

> On Thu, 17 May 2012 01:36:15 +0200, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Thu, May 17, 2012 at 11:02 AM, Chris Rogers <crogers@google.com> wrote:
>>
>>> As it stands right now, the Web Audio API makes no claims about whether
>>> the underlying implementation uses a block-based or per-sample approach.
>>>
>>
>>
>> That is good and we should definitely preserve it.
>>
>>> From a purist API perspective it really doesn't have to, because in the
>>> future such performance limitations may become moot.  But until that time
>>> is reached, practically speaking we may have to spell out some limitations
>>> (minimum delay time with feedback...).  This is what I would suggest.
>>>
>>
>>
>> So then, one approach would be to specify that in any cycle of nodes,
>> there should be at least one DelayNode with a minimum delay, where the
>> minimum is set in the spec. The spec would still need to define what
>> happens if that constraint is violated. That behavior needs to be
>> carefully chosen so that later we can lower the minimum delay (possibly
>> all the way to zero) without having to worry about Web content having
>> accidentally used a too-small delay and relying on the old spec behavior
>> in some way. (I know it sounds crazy, but spec changes breaking
>> clearly-invalid-but-still-deployed content is a real and common problem.)
>>
>> Alternatively we can set the minimum to zero now, but then we need to
>> write tests for cycles with very small delays and ensure implementations
>> support them. If there's a JS processing node in the cycle, that will
>> not be pleasant...
>>
>
> I think this is a sane approach unless everyone is prepared to support
> per-sample processing, which I suspect is not the case. Chris, how large
> are the work buffers in your implementation? How large can we make the
> limit before it becomes a problem to generate useful, real-world effects?
>

Hi Philip, the buffer size we use for rendering is 128 sample-frames.  In
our implementation it's a power-of-two size because some of the effects use
FFTs, where this makes the buffering easier.  We also like to keep this a
relatively small power-of-two size (and would even consider going down to
64) to reduce latency for those audio back-ends which can support it.  For
those audio back-ends which don't support it, we simply process multiple
work buffers to satisfy one hardware request for more data.
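
Sketched out, the idea is roughly this (a hypothetical sketch only, not
actual engine code; renderQuantum() is an imaginary helper standing in for
pulling one 128-frame block through the graph):

  // Hypothetical sketch: satisfy one hardware request by rendering as
  // many fixed-size 128-frame work buffers as needed.
  const QUANTUM = 128;

  function onHardwareNeedsData(output, framesRequested) {
    for (let offset = 0; offset < framesRequested; offset += QUANTUM) {
      // Each call pulls one block through the whole graph; the last block
      // may be shorter if the request isn't a multiple of 128.
      renderQuantum(output, offset, Math.min(QUANTUM, framesRequested - offset));
    }
  }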

I think this size is small enough to allow for a good range of useful
real-world delay effects.  I don't want to go larger because of the latency
hit.
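
To make the latency number concrete, 128 frames is about 2.9 ms at 44.1 kHz.
For the cycle constraint Robert describes above, a delay-feedback loop built
on such a block-based engine would look something like this from script (a
minimal sketch, not from the spec; interface names as later standardized,
and the 0.25 s delay and 0.5 gain are just illustrative values):

  // Hypothetical sketch: a feedback cycle whose DelayNode never drops
  // below one 128-frame render block.
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const delay = ctx.createDelay();
  const feedback = ctx.createGain();

  // One render quantum expressed in seconds (~2.9 ms at 44.1 kHz).
  const minDelay = 128 / ctx.sampleRate;
  delay.delayTime.value = Math.max(minDelay, 0.25);
  feedback.gain.value = 0.5;

  // The DelayNode is what breaks the zero-latency dependency in the
  // cycle, so the graph can still be rendered one block at a time.
  osc.connect(delay);
  delay.connect(feedback);
  feedback.connect(delay);          // this connection closes the cycle
  delay.connect(ctx.destination);
  osc.start();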

Chris



>
> https://www.w3.org/2011/audio/track/issues/42 is another issue where the
> internal implementation choices of work buffer sizes would give audible
> differences; possibly there are more similar issues.
>
>
> --
> Philip Jägenstedt
> Core Developer
> Opera Software
>

Received on Friday, 18 May 2012 17:39:12 UTC