This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019.
For some speech recognition backends, addFromURI and addFromString may involve processing the grammar, which we don't want to block on. Ideally, these methods would return a Promise that resolves or rejects with a SpeechGrammar object. A SpeechGrammar object would be appended to the SpeechGrammarList only on success, but a SpeechGrammar could also be created upon failure and passed to the Promise's reject handler, so that client code knows which grammar failed to load. Finally, attempting to call SpeechRecognition.start() when SpeechRecognition.grammars is undefined or an empty SpeechGrammarList (which could happen while a SpeechGrammar is being loaded asynchronously) could raise an InvalidStateError.
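A minimal sketch of the proposed Promise-based semantics. The classes below are mock stand-ins for illustration only, not the interfaces defined in the spec; the `parse` method is a hypothetical placeholder for backend grammar processing:

```javascript
// Hypothetical mock of the proposed semantics: addFromString returns a
// Promise that resolves with the SpeechGrammar on success (appending it
// to the list) or rejects with a SpeechGrammar describing the failure.
class SpeechGrammar {
  constructor(src, weight) {
    this.src = src;
    this.weight = weight;
  }
}

class SpeechGrammarList {
  constructor() {
    this.grammars = [];
  }
  get length() { return this.grammars.length; }

  addFromString(string, weight = 1.0) {
    return this.parse(string).then(
      () => {
        const g = new SpeechGrammar(string, weight);
        this.grammars.push(g); // appended only on success
        return g;
      },
      (err) => {
        const g = new SpeechGrammar(string, weight);
        g.error = err; // client code learns which grammar failed to load
        return Promise.reject(g);
      }
    );
  }

  // Stand-in for asynchronous backend processing; this mock accepts
  // only JSGF-looking strings.
  parse(string) {
    return string.startsWith("#JSGF")
      ? Promise.resolve()
      : Promise.reject(new Error("unsupported grammar format"));
  }
}
```

Client code could then chain on the returned Promise, e.g. `list.addFromString(jsgf).then(g => rec.start(), g => console.log("failed:", g.src))`.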
Glen, we need this for the Firefox backend. Are you willing to implement it? Thanks, Andre
If there is still a need for this, please propose specific semantics (IDL and wording). I disagree that calling SpeechRecognition.start() when SpeechRecognition.grammars is an empty SpeechGrammarList should raise InvalidStateError. The current implementation in Chrome does not require a SpeechGrammarList to perform dictation (large vocabulary speech recognition). However, another mechanism could be used for similar purposes. Perhaps an error when .start() is called prior to all grammars having completed a successful or failed load?
(In reply to Glen Shires from comment #2)
> I disagree that calling SpeechRecognition.start() when
> SpeechRecognition.grammars is an empty SpeechGrammarList should raise
> InvalidStateError.

This is dependent upon the speech recognition engine. Some engines require a grammar; some don't. So one cannot legislate this for all values of SpeechRecognition::serviceURI.
I think this bug can likely be resolved as invalid. Generally, when addFromURI() or addFromString() is called there is not sufficient information to process a grammar; that can only happen later, when start() is called. For example:

  var sr = new SpeechRecognition();
  sr.lang = "en-US";

  var sgl = new SpeechGrammarList();
  sgl.addFromString("#JSGF V1.0; grammar test; public <simple> = this is a demo | of the voice inputs ;", 1);
  // Is sgl valid at this point in time? What language is sgl interpreted
  // as being in? What grammar format is sgl in?

  var sr1 = new SpeechRecognition();
  sr1.lang = "es-ES";
  sr1.grammars = sgl;
  // Is sr1.grammars valid at this point in time? What is the final value
  // of sr1.lang the user wants to use?

  sr1.lang = "en-US";
  sr1.start();
  // Here is the only point at which we can be sure that sr1.lang and
  // sr1.grammars are set as intended.

The example shows that it is only when start() is called that we can be sure the user is satisfied with the grammar and language settings. Thus, it is only then that grammar processing can occur.
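The argument above, that processing and validation can only occur at start() time, might be sketched as follows. This is a hypothetical mock, not the spec'd SpeechRecognition interface, and the JSGF check stands in for whatever validation a real engine would perform:

```javascript
// Hypothetical mock: lang and grammars are only known to be final when
// start() is called, so processing/validation is deferred until then.
class MockRecognition {
  constructor() {
    this.lang = "";
    this.grammars = [];
  }
  start() {
    // Only here are lang and grammars guaranteed to be the values the
    // user intends, so only here can the engine process the grammars.
    for (const g of this.grammars) {
      if (typeof g.src !== "string" || !g.src.startsWith("#JSGF")) {
        throw new Error(`cannot process grammar for lang=${this.lang}`);
      }
    }
    return "recognizing";
  }
}
```

Under this model, an error for an unloadable or unprocessable grammar would surface from start() rather than from addFromString()/addFromURI().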
Re kdavis comment 3: I agree that some engines require a grammar and some don't. Some addFromURI methods may require a URI fetch, while other URIs may refer to built-in grammars [1]. Some addFromString implementations may process synchronously; others may not. I agree we should consider how to handle both the sync and async cases. I agree we should consider firing an error if SpeechRecognition.start() is called and the grammars have not yet loaded or been "processed". What I specifically disagree with is that calling SpeechRecognition.start() when SpeechRecognition.grammars is an empty list always results in firing an error. I'm unsure what the reference to "serviceURI" means, but I think we should strive for a consistent API for all speech engines. (That's not to say that everything needs to be sync or everything needs to be async; it's simply saying that one should be able to write to the API in a manner that is independent of the speech engine.)

[1] https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi.html#dfn-addGrammar
Re kdavis comment 4: I understand your point that for some implementations, "that it is only when start() is called...that grammar processing can occur." I'm unsure how to interpret your comment that "this bug can likely be resolved as invalid." If you propose a change, please propose specific semantics (IDL and wording). You could do that in this bug, or a new bug, or on the public-speech-api@w3.org mailing list.
(In reply to Glen Shires from comment #5)
> Re kdavis comment 3:
>
> What I specifically disagree with is that calling SpeechRecognition.start()
> when SpeechRecognition.grammars is an empty list always results in firing an
> error.

We don't disagree; I feel the same way. Calling SpeechRecognition.start() may or may not fire an error if SpeechRecognition.grammars is empty; whether it does in this situation depends upon the actual speech recognition engine.

> I'm unsure what the reference to "serviceURI" means

https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi.html#dfn-serviceuri
(In reply to Glen Shires from comment #6)
> Re kdavis comment 4:
>
> I'm unsure how to interpret your comment that "this bug can likely be
> resolved as invalid."
>
> If you propose a change, please propose specific semantics (IDL and
> wording). You could do that in this bug, or a new bug, or on the
> public-speech-api@w3.org mailing list.

I am proposing closing this bug. I think the current spec is fine here. In other words, I propose changing the status of this bug to Resolved->Invalid.
Re kdavis comment 8: Since people from @mozilla.com created this bug, and since kdavis@mozilla.com wishes to close this bug as Resolved->Invalid, I'll go ahead and do that. If anyone wishes to re-open this bug, we can do that. Alternatively you can open a new bug if necessary.