
Bug 26336 - Support speech recognition on specific media stream
Summary: Support speech recognition on specific media stream
Status: RESOLVED WONTFIX
Alias: None
Product: Speech API
Classification: Unclassified
Component: Speech API
Version: unspecified
Hardware: PC
OS: All
Importance: P2 normal
Target Milestone: ---
Assignee: Glen Shires
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-07-15 02:12 UTC by Shih-Chiang Chien
Modified: 2018-08-06 10:32 UTC
CC: 2 users

See Also:


Attachments

Description Shih-Chiang Chien 2014-07-15 02:12:00 UTC
The current speech recognition API cannot be pointed at a specific media stream. We could introduce an optional parameter to SpeechRecognition.start() to enable the following use cases (see the sketch after the list):
  1. selecting among multiple microphones via gUM media constraints
  2. recognizing a remote audio stream (WebRTC)
  3. recognizing a stream from an audio file
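
A minimal sketch of how such a parameter might look, assuming start() were extended to accept a MediaStreamTrack. This overload is the proposal here, not part of any standard, and chosenMicrophoneId is a hypothetical device id obtained elsewhere (e.g. from enumerateDevices()):

  // Capture from a specific microphone via gUM constraints.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: chosenMicrophoneId } }
  });

  // SpeechRecognition is exposed as webkitSpeechRecognition in Chrome.
  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';
  recognition.onresult = (event) => {
    console.log(event.results[0][0].transcript);
  };

  // Proposed optional argument: recognize audio from this track instead of
  // the default microphone input.
  recognition.start(stream.getAudioTracks()[0]);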
Comment 1 xians 2014-10-02 10:08:32 UTC
We are working on hooking up a gUM audio track with WebSpeech in Chrome, but we are not going to support use cases 2 and 3 due to concerns about server abuse. That said, we only allow hooking up an audio track from a microphone to WebSpeech; a track that uses a non-microphone source (such as a file or a remote audio track) will throw an exception when connected to WebSpeech.

The new API allows WebSpeech to benefit from gUM technologies, e.g. AEC (acoustic echo cancellation). This will substantially improve recognition performance during a conference call.
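
A sketch of how the restriction above might surface to a page, assuming the rejection is thrown from start() when a non-microphone track is passed (the exact error name is an assumption, and remoteStream stands in for a stream received over WebRTC):

  try {
    // remoteStream comes from an RTCPeerConnection, i.e. not a microphone source.
    recognition.start(remoteStream.getAudioTracks()[0]);
  } catch (e) {
    // Non-microphone tracks are rejected per the policy described above.
    console.error('Track rejected:', e.name);
  }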
Comment 2 Philip Jägenstedt 2018-08-06 10:32:18 UTC
This work was started in Chrome in https://crbug.com/408940 but wasn't finished and was later removed. To revisit this issue, I suggest filing an issue on https://github.com/w3c/speech-api and getting implementers talking to each other.