The results of this questionnaire are available to anybody.
This questionnaire was open from 2011-01-13 to 2011-01-24.
10 answers have been received.
| Choice | All responders |
| --- | --- |
| FPR8. User agent (browser) can refuse to use requested speech service. | 2 |
| FPR11. If the web apps specify speech services, it should be possible to specify parameters. | 8 |
| FPR12. Speech services that can be specified by web apps must include network speech services. | 8 |
| FPR31. User agents and speech services may agree to use alternate protocols for communication. | 3 |
| FPR32. Speech services that can be specified by web apps must include local speech services. | 6 |
| FPR33. There should be at least one mandatory-to-support codec that isn't encumbered with IP issues and has sufficient fidelity & low bandwidth requirements. | 7 |
| FPR40. Web applications must be able to use barge-in (interrupting audio and TTS output when the user starts speaking). | 10 |
| FPR58. Web application and speech services must have a means of binding session information to communications. | 5 |
| Choice | All responders |
| --- | --- |
| FPR2. Implementations must support the XML format of SRGS and must support SISR. | 8 |
| FPR4. It should be possible for the web application to get the recognition results in a standard format such as EMMA. (See the sketch after this table.) | 10 |
| FPR19. User-initiated speech input should be possible. | 9 |
| FPR21. The web app should be notified that capture starts. | 9 |
| FPR22. The web app should be notified that speech is considered to have started for the purposes of recognition. | 9 |
| FPR23. The web app should be notified that speech is considered to have ended for the purposes of recognition. | 9 |
| FPR24. The web app should be notified when recognition results are available. | 10 |
| FPR25. Implementations should be allowed to start processing captured audio before the capture completes. | 9 |
| FPR26. The API to do recognition should not introduce unneeded latency. | 9 |
| FPR27. Speech recognition implementations should be allowed to add implementation specific information to speech recognition results. | 8 |
| FPR28. Speech recognition implementations should be allowed to fire implementation specific events. | 7 |
| FPR34. Web application must be able to specify domain specific custom grammars. | 9 |
| FPR35. Web application must be notified when speech recognition errors or non-matches occur. | 9 |
| FPR42. It should be possible for user agents to allow hands-free speech input. | 9 |
| FPR43. User agents should not be required to allow hands-free speech input. | 3 |
| FPR47. When speech input is used to provide input to a web app, it should be possible for the user to select alternative input methods. | 6 |
| FPR48. Web application author must be able to specify a domain specific statistical language model. | 9 |
| FPR50. Web applications must not be prevented from integrating input from multiple modalities. | 10 |
| FPR54. Web apps should be able to customize all aspects of the user interface for speech recognition, except where such customizations conflict with security and privacy requirements in this document, or where they cause other security or privacy problems. | 9 |
| FPR56. Web applications must be able to request NL interpretation based only on text input (no audio sent). | 6 |
| FPR57. Web applications must be able to request recognition based on previously sent audio. | 2 |
| FPR59. While capture is happening, there must be a way for the web application to abort the capture and recognition process. | 10 |
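FPR2 and FPR4 above name concrete formats: SRGS/SISR for grammars and EMMA for recognition results. Purely as an illustration (not part of the questionnaire), the following Python sketch shows how a web application's server side might pull out the fields that FPR5 later calls the most common pieces of a result, namely utterance, confidence, and the n-best list, from an EMMA 1.0 document. The sample payload, its values, and the helper function are hypothetical; only the EMMA 1.0 namespace and the element and attribute names come from the EMMA specification.

```python
# Illustrative only: parse an EMMA 1.0 recognition result (the format named in
# FPR4) and extract utterance text, confidence, and the n-best list.
# The sample document below is invented for this example.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"

sample = """<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:one-of id="nbest">
    <emma:interpretation id="int1" emma:confidence="0.82"
                         emma:tokens="flights to boston">
      <destination>BOS</destination>
    </emma:interpretation>
    <emma:interpretation id="int2" emma:confidence="0.11"
                         emma:tokens="flights to austin">
      <destination>AUS</destination>
    </emma:interpretation>
  </emma:one-of>
</emma:emma>"""

def nbest(emma_xml):
    """Return (tokens, confidence) pairs, best first."""
    root = ET.fromstring(emma_xml)
    results = []
    for interp in root.iter(f"{{{EMMA_NS}}}interpretation"):
        tokens = interp.get(f"{{{EMMA_NS}}}tokens", "")
        confidence = float(interp.get(f"{{{EMMA_NS}}}confidence", "0"))
        results.append((tokens, confidence))
    return sorted(results, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    for tokens, confidence in nbest(sample):
        print(f"{confidence:.2f}  {tokens}")
```

Run as a script, the sketch prints the two hypothetical interpretations in confidence order, which is the kind of n-best access FPR5 asks to be easy.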
| Choice | All responders |
| --- | --- |
| FPR3. Implementations must support SSML. (See the sketch after this table.) | 8 |
| FPR29. Speech synthesis implementations should be allowed to fire implementation specific events. | 4 |
| FPR41. It should be easy to extend the standard without affecting existing speech applications. | 7 |
| FPR46. Web apps should be able to specify which voice is used for TTS. | 8 |
| FPR51. The web app should be notified when TTS playback starts. | 9 |
| FPR52. The web app should be notified when TTS playback finishes. | 10 |
| FPR53. The web app should be notified when the audio corresponding to a TTS element is played back. | 9 |
| FPR60. Web application must be able to programmatically abort TTS output. | 10 |
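FPR3 and FPR46 above concern SSML support and voice selection, and FPR53 asks for a notification when the audio for a TTS element is played back. As a hypothetical illustration only, the sketch below shows a minimal SSML 1.0 document with a named voice element and two mark elements (natural points for such per-element notifications), plus a small Python helper that lists the mark names. The voice name, the prompt text, and the helper are invented for the example; the namespace and element names are those of SSML 1.0.

```python
# Illustrative only: a minimal SSML 1.0 document of the kind FPR3 requires
# implementations to support, with a named <voice> (cf. FPR46) and <mark>
# elements, which give a synthesizer well-defined points at which to raise the
# per-element playback notifications described in FPR53.
import xml.etree.ElementTree as ET

SSML_NS = "http://www.w3.org/2001/10/synthesis"

ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  <voice name="example-voice">
    <mark name="greeting"/>Welcome back.
    <mark name="prompt"/>Where would you like to fly today?
  </voice>
</speak>"""

def mark_names(document):
    """Return the mark names in document order."""
    root = ET.fromstring(document)
    return [m.get("name") for m in root.iter(f"{{{SSML_NS}}}mark")]

if __name__ == "__main__":
    print(mark_names(ssml))  # ['greeting', 'prompt']
```

The printed result is ['greeting', 'prompt'], matching the marks in document order; a synthesizer that reports when it reaches each mark gives the web app the playback progress that FPR51 through FPR53 describe.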
| Choice | All responders |
| --- | --- |
| FPR7. Web apps should be able to request speech service different from default. | 8 |
| FPR9. If browser refuses to use the web application requested speech service, it must inform the web app. | 8 |
| FPR10. If browser uses speech services other than the default one, it must inform the user which one(s) it is using. | 3 |
| FPR30. Web applications must be allowed at least one form of communication with a particular speech service that is supported in all UAs. | 6 |
| Choice | All responders |
| --- | --- |
| FPR5. It should be easy for the web apps to get access to the most common pieces of recognition results such as utterance, confidence, and nbests. | 9 |
| FPR6. Browser must provide default speech resource. | 5 |
| FPR36. User agents must provide a default interface to control speech recognition. | 7 |
| FPR38. Web application must be able to specify language of recognition. | 10 |
| FPR39. Web application must be able to be notified when the selected language is not available. | 9 |
| FPR44. Recognition without specifying a grammar should be possible. | 7 |
| FPR45. Applications should be able to specify the grammars (or lack thereof) separately for each recognition. | 10 |
| Choice | All responders |
| --- | --- |
| FPR13. It should be easy to assign recognition results to a single input field. | 9 |
| FPR14. It should not be required to fill an input field every time there is a recognition result. | 9 |
| FPR15. It should be possible to use recognition results in multiple input fields. | 9 |
| FPR61. Aborting the TTS output should be efficient. | 7 |
| Choice | All responders |
| --- | --- |
| FPR16. User consent should be informed consent. | 9 |
| FPR20. The spec should not unnecessarily restrict the UA's choice in privacy policy. | 5 |
| FPR55. Web application must be able to encrypt communications to remote speech service. | 6 |
Responders (Section 3.3.1, Security and Privacy Speech System Requirements):

- German Research Center for Artificial Intelligence (DFKI) GmbH (Marc Schröder)
- Mozilla Foundation (Olli Pettay)
- Deborah Dahl (Deborah Dahl)
- Nuance Communications, Inc. (Milan Young)
- Google LLC (Bjorn Bringert)
- AT&T (Michael Johnston)
- Microsoft Corporation (Robert Brown)
- Openstream, Inc. (Ravi Reddy)
- Voxeo (Daniel Burnett)
- Loquendo, S.p.A. (Paolo Baggia)
| Choice | All responders |
| --- | --- |
| FPR1. Web applications must not capture audio without the user's consent. | 10 |
| FPR17. While capture is happening, there must be an obvious way for the user to abort the capture and recognition process. | 8 |
| FPR18. It must be possible for the user to revoke consent. | 9 |
| FPR37. Web application should be given captured audio access only after explicit consent from the user. | 8 |
| FPR49. End users need a clear indication whenever microphone is listening to the user. | 8 |
Everybody has responded to this questionnaire.