This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.
When changing accessories, the maximum number of channels can change, which affects virtualization and 3D positioning: you wouldn't use the same settings and algorithms for a headset as for speakers. When switching from local speakers to headphones, you are really sending the same stream to the same low-level driver, and the switch is typically handled in the audio codec hardware. You get continuity of playback by construction, and the only time you'd need to reconfigure the graph is if you have any sort of 3D positioning. But if the new output is HDMI, Bluetooth A2DP, or USB, there will be a delay and volume ramps during the switch, so it would be perfectly acceptable to stop and reconfigure without any impact on the user experience. It would be interesting to capture this difference in the notification.
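The distinction above (in-codec speaker/headphone switch vs. a re-route to HDMI, Bluetooth A2DP, or USB) could be reduced to a simple predicate. A minimal sketch; the function name and the "local" route label are illustrative, not part of any spec:

```javascript
// Returns true when a destination switch warrants stopping and
// rebuilding part of the audio graph, per the reasoning above.
function needsGraphReconfiguration(route, uses3DPositioning) {
  // Local speakers <-> headphones on the same codec: playback is
  // continuous by construction, so only 3D positioning (which needs
  // different settings per output) forces a reconfiguration.
  if (route === "local") return uses3DPositioning;
  // HDMI, Bluetooth A2DP, USB: the switch already incurs a delay and
  // volume ramps, so stopping and reconfiguring is acceptable anyway.
  return true;
}
```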
one proposal:

enum OutputType { "speakers", "headset" };

interface AudioDestinationChangeEvent : Event {
  readonly attribute float sampleRate;
  readonly attribute unsigned long maxChannelCount;
  readonly attribute OutputType outputType;
};
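Assuming an event of this shape were delivered to the page (the proposal doesn't say where it would be dispatched), a handler might pick new mixing settings from it. A sketch; `chooseChannelLayout`, the preferred channel count, and the returned fields are all hypothetical:

```javascript
// Derive new output settings from a proposed AudioDestinationChangeEvent.
function chooseChannelLayout(event) {
  // Clamp the app's preferred channel count (e.g. 6 for a 5.1 mix)
  // to what the new destination actually supports.
  const preferred = 6;
  const channels = Math.min(preferred, event.maxChannelCount);
  // Headsets suit binaural (HRTF) panning; speakers a simpler model.
  const panningModel = event.outputType === "headset" ? "HRTF" : "equalpower";
  return { channels, panningModel };
}
```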
Why would we expose the sample rate in this way? Also, I assume that you're planning to extend OutputType in the future, right?
(In reply to comment #2)
> Why would we expose the sample rate in this way?
>
> Also, I assume that you're planning to extend OutputType in the future,
> right?

Yes, OutputType should be extensible. As for the sample rate: do you have another idea for how to expose an output change?
Why should the web application care about the sampling rate change? Is it not reasonable to expect the implementation to handle any resampling, if needed?
Web Audio API issues have been migrated to Github. See https://github.com/WebAudio/web-audio-api/issues
Closing. See https://github.com/WebAudio/web-audio-api/issues for the up-to-date list of issues for the Web Audio API.