This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.
Consider the following code:

> var ctx = new AudioContext();
> var source = ctx.createBufferSource();
> // some_loaded_buffer is a valid AudioBuffer
> source.buffer = some_loaded_buffer;
>
> var gain = ctx.createGain();
> var delay = ctx.createDelay();
> // defaults to zero anyways, just there to be explicit
> delay.delayTime.value = 0.0;
> source.connect(gain);
> gain.connect(delay);
> delay.connect(ctx.destination);
> // cycle
> delay.connect(gain);
>
> source.start(0);

The spec does not describe the case where the |delayTime| parameter of the |delay| node is zero. In fact, problems would arise whenever |delayTime| is lower than |128/ctx.sampleRate|. While it is fairly easy to detect something like |delay.delayTime.value = 0.0| where |delay| is in a cycle (and we could throw), there are some cases where we can't easily predict the value of the AudioParam. I propose that in such hard-to-detect cases, the subgraph containing the cycle is treated as a silent input, no error being thrown. It could also be of interest in such cases to warn the author in the eventual error console of the UA.
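To make the failure mode concrete, here is a minimal plain-JS sketch (names assumed; not part of the Web Audio API) of why a block-based renderer cannot honor a sub-block delay inside a cycle: the loop's output only becomes available to its own input one full render quantum later, so any delay shorter than one block effectively behaves as one block.

```javascript
// Sketch: a feedback loop out[n] = in[n] + gain * out[n - delayFrames],
// with delayFrames clamped to at least one render quantum (128 frames),
// which is the behavior the thread proposes for DelayNodes in a cycle.
const BLOCK = 128;

function renderFeedback(input, delayFrames, gain) {
  // A block-based engine cannot see feedback earlier than one block back,
  // so requests for shorter delays are clamped up to BLOCK.
  const clamped = Math.max(delayFrames, BLOCK);
  const out = new Float32Array(input.length);
  for (let n = 0; n < input.length; n++) {
    const fb = n >= clamped ? out[n - clamped] : 0;
    out[n] = input[n] + gain * fb;
  }
  return out;
}
```

Feeding an impulse through this loop with |delayFrames = 0| shows the clamping: echoes appear at multiples of 128 frames rather than immediately.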
Having to recheck for cycles every time we sample the delay AudioParams would be pretty bad, I think. So I think we should clamp the DelayNode's delay to a minimum of 128 frames (one block) at all times. Then cycle-checking just needs to ensure there's at least one DelayNode in every cycle and we're fine.
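The check "every cycle contains at least one DelayNode" is equivalent to requiring that the graph with all DelayNodes removed is acyclic, which a standard three-color DFS can verify. A sketch, using an assumed graph representation (a Map of node id to |{ type, outputs }|; none of this is spec API):

```javascript
// Returns true if some cycle contains no DelayNode (an "unsafe" cycle).
// Equivalent formulation: delete every DelayNode (and its edges), then
// look for any remaining cycle via back-edge detection in a DFS.
function hasUnsafeCycle(graph) {
  const color = new Map(); // undefined = unvisited, 1 = in progress, 2 = done
  function dfs(id) {
    color.set(id, 1);
    for (const next of graph.get(id).outputs) {
      if (graph.get(next).type === "DelayNode") continue; // cycle broken here
      const c = color.get(next);
      if (c === 1) return true;                 // back edge: delay-free cycle
      if (c === undefined && dfs(next)) return true;
    }
    color.set(id, 2);
    return false;
  }
  for (const id of graph.keys()) {
    if (!color.has(id) && graph.get(id).type !== "DelayNode" && dfs(id)) {
      return true;
    }
  }
  return false;
}
```

This only needs to run when connections change, not per render quantum, which is the point of the comment above.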
Regarding clamping: Delay times below |128/ctx.sampleRate| are very useful, e.g. in chorus effects, flangers and when adjusting group delay time in parallel signal paths.
We actually resolved about a year ago to clamp to a minimum delay: http://lists.w3.org/Archives/Public/public-audio/2012JulSep/0768.html Making the minimum delay be one block was really the only viable option.
Right. For what it's worth, in the case of a loop without at least one block of delay, Pd refuses to start processing ("error: DSP loop detected"), and Max mutes the subgraph containing the cycle but does not log an error. Reading the 2012-09-12 minutes, we still have to agree on whether we want to clamp in milliseconds or in blocks. I'd say that one block is better. Either way, we could warn the author in the console (maybe only the first time it occurs) that their value has been clamped.
I do want to ensure we're talking about clamping ONLY in the case of a cycle, not without a cycle (where short delays are useful for phase effects).
Yes, we are talking about clamping only for DelayNodes that are part of a cycle. Are we fine with a clamping value of one block (128 frames, i.e. 128 / sampleRate seconds)?
I think clamping to a minimum of one sample block (128 samples) is fine.
(In reply to comment #2)
> Regarding clamping: Delay times below |128/ctx.sampleRate| are very useful,
> e.g. in chorus effects, flangers and when adjusting group delay time in
> parallel signal paths.

I agree with this statement. Short delays are also useful for wind / pipe synthesis, etc. A lot of graph-based synthesis software seems to allow this (e.g. NI Reaktor); I wonder how the implementation is done... I think it would even be fine to fall back to per-sample processing when the delay goes under the block size. It's nicer to pay a performance price for features than to not have them at all. In the case of delay node + gain node (i.e. finite response) feedback loops you could usually even optimize it away into vector operations. This would have the additional benefit of not exposing implementation details, i.e. the block size.
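The per-sample fallback suggested above can be sketched in a few lines of plain JS (assumed names; this is an illustration of the idea, not an engine implementation): when the delay in a cycle drops below one block, the loop is processed sample by sample with a small ring buffer, so any integer delay of one sample or more works.

```javascript
// Per-sample feedback delay: out[n] = in[n] + gain * out[n - delaySamples],
// valid for any delaySamples >= 1, including values far below the 128-frame
// block size. This is what "falling back to per-sample processing" buys.
function processPerSample(input, delaySamples, feedbackGain) {
  const ring = new Float32Array(delaySamples); // circular delay line
  const out = new Float32Array(input.length);
  let w = 0; // write index into the ring buffer
  for (let n = 0; n < input.length; n++) {
    const delayed = ring[w];               // sample written delaySamples ago
    out[n] = input[n] + feedbackGain * delayed;
    ring[w] = out[n];                      // feed the loop output back in
    w = (w + 1) % delaySamples;
  }
  return out;
}
```

The cost is losing block vectorization inside the cycle, which is the performance price the comment is willing to pay.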
Clamping to a minimum of one block is only going to happen when the DelayNode is part of a cycle. Hopefully your use-cases for low delays don't require DelayNodes in cycles.
I also support clamping down to a minimum of 128 samples in the looping case.
(In reply to comment #9)
> Clamping to a minimum of one block is only going to happen when the
> DelayNode is part of a cycle. Hopefully your use-cases for low delays don't
> require DelayNodes in cycles.

Unfortunately they do. One common way of achieving wind synthesis is to have a noise and DC source fed into a short feedback loop. There you get feedback resonance, and the length of the delay essentially becomes the base wavelength of the note you want to play (similar to how sound travels in a pipe). In practice it's not quite as simple as this, of course; there's a lot of filter work to be done afterwards, but this stage is crucial for the process. If you clamp the delay time to a minimum of 128 samples, then at a sample rate of 48 kHz this gives you a hard upper limit of 48000 / 128 = 375 Hz on the fundamental, which doesn't really suffice for any wind instrument.
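The arithmetic above follows directly from the clamp: the shortest feedback period becomes the minimum delay in samples, so the highest achievable loop fundamental is the sample rate divided by that minimum. A one-line check (hypothetical helper name):

```javascript
// Highest resonant fundamental of a pipe-style feedback loop whose delay
// is clamped to minDelaySamples (128 = one render quantum, per the thread).
function maxLoopFundamental(sampleRate, minDelaySamples = 128) {
  return sampleRate / minDelaySamples;
}
// At 48 kHz: 48000 / 128 = 375 Hz, as comment 11 states.
```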
OK. Unfortunately that's just too bad. You'll need to write your simulator in JS.
(In reply to comment #12)
> OK. Unfortunately that's just too bad. You'll need to write your simulator
> in JS.

Hahah, yeah, I expected as much. I'll just have to wait for worker processing (and make a new audio processing library) before finishing my work on it. Kinda sad how even the simplest parts of the Web Audio API, such as delay, fail to be usable in some cases. There probably won't be a backwards-compatible way of supporting this use case in the future either, aside from adding another delay node type with different semantics.
*** Bug 17326 has been marked as a duplicate of this bug. ***
Web Audio API issues have been migrated to Github. See https://github.com/WebAudio/web-audio-api/issues
Closing. See https://github.com/WebAudio/web-audio-api/issues for up to date list of issues for the Web Audio API.