<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE bugzilla SYSTEM "https://www.w3.org/Bugs/Public/page.cgi?id=bugzilla.dtd">

<bugzilla version="5.0.4"
          urlbase="https://www.w3.org/Bugs/Public/"
          maintainer="sysbot+bugzilla@w3.org"
>

    <bug>
          <bug_id>17700</bug_id>
          <creation_ts>2012-07-05 14:28:46 +0000</creation_ts>
          <short_desc>Integrate with OS-level ShowSounds</short_desc>
          <delta_ts>2012-09-10 10:20:19 +0000</delta_ts>
          <reporter_accessible>1</reporter_accessible>
          <cclist_accessible>1</cclist_accessible>
          <classification_id>1</classification_id>
          <classification>Unclassified</classification>
          <product>AudioWG</product>
          <component>Web Audio Processing: Use Cases and Requirements</component>
          <version>unspecified</version>
          <rep_platform>PC</rep_platform>
          <op_sys>All</op_sys>
          <bug_status>CLOSED</bug_status>
          <resolution>WONTFIX</resolution>
          <bug_file_loc></bug_file_loc>
          <status_whiteboard></status_whiteboard>
          <keywords></keywords>
          <priority>P2</priority>
          <bug_severity>normal</bug_severity>
          <target_milestone>TBD</target_milestone>
          <everconfirmed>1</everconfirmed>
          <reporter name="Olivier Thereaux">olivier.thereaux</reporter>
          <assigned_to name="Joe Berkovitz / NF">joe</assigned_to>
          <cc>cooper</cc>
          <qa_contact>public-audio</qa_contact>

          <comment_sort_order>oldest_to_newest</comment_sort_order>
          <long_desc isprivate="0" >
    <commentid>69653</commentid>
    <comment_count>0</comment_count>
    <who name="Olivier Thereaux">olivier.thereaux</who>
    <bug_when>2012-07-05 14:28:46 +0000</bug_when>
    <thetext>From the “Review of Web Audio Processing: Use Cases and Requirements”
http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0852.html

Operating systems have a feature called &quot;ShowSounds&quot;, which triggers a
visual indication that an important sound like an alert has occurred.
Enabling certain types of sounds, like audio sprites, to take advantage of
this feature may be important. I expect someone else to provide more
details on this requirement but wanted to put a placeholder in this message.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>69837</commentid>
    <comment_count>1</comment_count>
    <who name="Olivier Thereaux">olivier.thereaux</who>
    <bug_when>2012-07-11 15:54:35 +0000</bug_when>
    <thetext>In http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0854.html
Doug wrote:
On first reading, this seems like something the Web Notifications WG 
should be addressing. Or if you are suggesting a browser-based analog of 
this functionality, that should be a requirement at the content level, 
not the Web Audio API level.

I agree with this. It does not seem to be in scope for an audio processing/synthesis API. I will document the need for this and why this should be addressed by other APIs in the use cases document.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>72285</commentid>
    <comment_count>2</comment_count>
    <who name="Olivier Thereaux">olivier.thereaux</who>
    <bug_when>2012-08-16 14:50:12 +0000</bug_when>
    <thetext>Looks like this could be added to the “Playful sonification of user interfaces” scenario. Joe, what do you think?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>72287</commentid>
    <comment_count>3</comment_count>
    <who name="Joe Berkovitz / NF">joe</who>
    <bug_when>2012-08-16 15:15:34 +0000</bug_when>
    <thetext>Actually, I don&apos;t think it belongs in that scenario, because the scenario is concerned with sonification of a visual interface. Thus there&apos;s no need for that scenario to consult a Show Sounds option to determine whether visuals should be displayed.

My understanding of ShowSounds is that it&apos;s a method of hinting to the browser about the most appropriate modalities for certain kinds of high level interactions -- a sort of user preference that may or may not guide apps to do this or that. It would probably be implemented pretty far outside the realm of audio processing and in ways that are hard to anticipate.  For instance an app might respond to Show Sounds by creating DOM audio elements rather than making use of the Web Audio API.

I guess there could be another scenario but since it doesn&apos;t feel like it&apos;s about this API I&apos;d rather punt.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>72291</commentid>
    <comment_count>4</comment_count>
    <who name="Olivier Thereaux">olivier.thereaux</who>
    <bug_when>2012-08-16 15:46:36 +0000</bug_when>
    <thetext>(In reply to comment #3)
&gt; For instance an app
&gt; might respond to Show Sounds by creating DOM audio elements rather than making
&gt; use of the Web Audio API.

Yes, you&apos;re right, it&apos;s not a processing/synthesis requirement.

I&apos;ll mark as WONTFIX — Michael and PFWG should feel free to reopen if we misunderstood the requirement.</thetext>
  </long_desc>

    </bug>

</bugzilla>