From the “Review of Web Audio Processing: Use Cases and Requirements” (http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0852.html): Operating systems have a feature called "ShowSounds", which triggers a visual indication that an important sound, such as an alert, has occurred. Enabling certain types of sounds, like audio sprites, to take advantage of this feature may be important. I expect someone else to provide more details on this requirement but wanted to put a placeholder in this message.
In http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0854.html Doug wrote: "On first reading, this seems like something the Web Notifications WG should be addressing. Or if you are suggesting a browser-based analog of this functionality, that should be a requirement at the content level, not the Web Audio API level." I agree with this. It does not seem to be in scope for an audio processing/synthesis API. I will document in the use cases document why this need exists and why it should be addressed by other APIs.
Looks like this could be added to the “Playful sonification of user interfaces” scenario. Joe, what do you think?
Actually, I don't think it belongs in that scenario, because the scenario is concerned with sonification of a visual interface. Thus there's no need for that scenario to consult a Show Sounds option to determine whether visuals should be displayed. My understanding of ShowSounds is that it's a method of hinting to the browser about the most appropriate modalities for certain kinds of high level interactions -- a sort of user preference that may or may not guide apps to do this or that. It would probably be implemented pretty far outside the realm of audio processing and in ways that are hard to anticipate. For instance an app might respond to Show Sounds by creating DOM audio elements rather than making use of the Web Audio API. I guess there could be another scenario but since it doesn't feel like it's about this API I'd rather punt.
(In reply to comment #3)
> For instance an app might respond to Show Sounds by creating DOM audio elements rather than making use of the Web Audio API.

Yes, you're right, it's not a processing/synthesis requirement. I'll mark this as WONTFIX; Michael and PFWG should feel free to reopen if we misunderstood the requirement.
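To illustrate why the resolution treats this as a content-level concern rather than a Web Audio API requirement, here is a minimal TypeScript sketch. The userPrefersVisualAlerts flag is hypothetical (no standard web API currently exposes an operating system's ShowSounds setting to page content); the Notification and HTMLAudioElement calls are real platform APIs, and the branching logic is only an illustration of the kind of modality decision an app would make before any audio processing graph is involved.

// Hypothetical flag standing in for a ShowSounds-style user preference;
// no standard web API exposes the OS setting to page content today.
declare const userPrefersVisualAlerts: boolean;

// Content-level alert: pick a modality up front. Neither branch needs the
// Web Audio API, which is why this was judged out of scope for it.
async function alertUser(message: string, soundUrl: string): Promise<void> {
  if (userPrefersVisualAlerts) {
    // Visual route via the Web Notifications API.
    if (Notification.permission === "default") {
      await Notification.requestPermission();
    }
    if (Notification.permission === "granted") {
      new Notification(message);
    }
  } else {
    // Audible route via a plain DOM audio element.
    const sound = new Audio(soundUrl);
    await sound.play();
  }
}

A real application would also have to decide how the preference itself is surfaced (a user setting, a user-agent behaviour, or something from the Web Notifications work), and that decision sits at the content level rather than in an audio processing/synthesis API.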