This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019.

Bug 20590 - reconsider which objects are event targets
Summary: reconsider which objects are event targets
Status: RESOLVED FIXED
Alias: None
Product: Speech API
Classification: Unclassified
Component: Speech API
Version: unspecified
Hardware: All
OS: All
Importance: P2 normal
Target Milestone: ---
Assignee: Glen Shires
 
Reported: 2013-01-07 21:12 UTC by Trevor Saunders
Modified: 2013-03-23 00:51 UTC
CC: 3 users

Description Trevor Saunders 2013-01-07 21:12:14 UTC
It seems like SpeechSynthesis should be an event target so you can listen for events related to all utterances.  If we do that, it would make sense for SpeechSynthesisEvent to have a readonly attribute SpeechSynthesisUtterance utterance so that you can tell which utterance the event is for.  It's not clear that SpeechSynthesisUtterance needs to be an event target, since you can easily filter events for the utterance you want, but maybe people want it enough that the shortcut is worth having.
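
For illustration, a minimal sketch of what that might look like in use (hedged: it assumes SpeechSynthesis gains EventTarget and SpeechSynthesisEvent gains the utterance attribute, neither of which is in the spec yet; the names are illustrative):

  // Hypothetical usage under the proposal above.
  const greeting = new SpeechSynthesisUtterance("Hello");
  speechSynthesis.speak(greeting);

  // One listener on the SpeechSynthesis object would see events for all
  // utterances; the proposed .utterance attribute says which one fired.
  speechSynthesis.addEventListener("end", (e: Event) => {
    const ev = e as SpeechSynthesisEvent;
    if (ev.utterance === greeting) {
      console.log("greeting finished speaking");
    }
  });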
Comment 1 Dominic Mazzoni 2013-01-14 21:13:42 UTC
This is a reasonable future enhancement, but it doesn't seem necessary. I'd like to see some real-world examples of code that would be significantly easier to write if you could do this; otherwise it just adds complexity.

Perhaps a simpler change would be just to allow the events to bubble up somewhere else, like the window - so it'd still be possible to write a listener for all possible speech events if you want.
Comment 2 Trevor Saunders 2013-01-31 22:23:33 UTC
(In reply to comment #1)
> This is a reasonable future enhancement, but it doesn't seem necessary. I'd
> like to see some real-world examples of code that would be significantly
> easier to write if you could do this; otherwise it just adds complexity.

The first thing that comes to mind is an app that speaks two different kinds of content, somewhat like a site with both aria-live=polite and aria-live=assertive regions, where you only care about events for the aria-live=assertive speech (for example, so you could coordinate sounds with its start and end events).
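
A hedged sketch of that scenario under the proposal in the description (the assertive set, speakAssertive, and playAlertSound are hypothetical names):

  // Track which utterances are "assertive" so one global listener can
  // react only to those.
  const assertive = new Set<SpeechSynthesisUtterance>();

  function speakAssertive(text: string): void {
    const u = new SpeechSynthesisUtterance(text);
    assertive.add(u);
    speechSynthesis.speak(u);
  }

  function playAlertSound(): void {
    // stub: play a sound coordinated with the speech
  }

  speechSynthesis.addEventListener("start", (e: Event) => {
    const ev = e as SpeechSynthesisEvent;
    if (assertive.has(ev.utterance)) {
      playAlertSound();
    }
  });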

> Perhaps a simpler change would be just to allow the events to bubble up
> somewhere else, like the window - so it'd still be possible to write a
> listener for all possible speech events if you want.

302 Olli
Comment 3 Olli Pettay 2013-01-31 22:53:33 UTC
Propagating events to window would be horrible.
But events could propagate from SpeechSynthesisUtterance to SpeechSynthesis, if there is a use case for that.
Comment 4 Glen Shires 2013-02-06 16:56:18 UTC
To summarize this thread, I think there are three proposals here:

(1) Keep it as is: only SpeechSynthesisUtterance receives events.

(2) Bubble events from SpeechSynthesisUtterance to SpeechSynthesis.

(3) Only SpeechSynthesis receives events, SpeechSynthesisUtterance does not.

For (2) and (3) the IDL would change as follows:

  interface SpeechSynthesis : EventTarget { ... }  // add EventTarget

  interface SpeechSynthesisEvent : Event {
    readonly attribute SpeechSynthesisUtterance utterance; // add attribute
    ...
  }

And for (3) the following would also change:

  interface SpeechSynthesisUtterance { ... } // remove EventTarget



Option (1) has the advantage that event handlers with different behaviors can be applied to specific utterances, but the disadvantage that common event handlers need to be repeated for each utterance.

Conversely, option (3) has the advantage that common event handlers can be added to a single object, but the disadvantage that specific behaviors require filtering in the common event handler.
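
To make the trade-off concrete, a hedged sketch (handler bodies are illustrative):

  // Option (1): a common handler must be attached to every utterance.
  function speakAndLog(text: string): void {
    const u = new SpeechSynthesisUtterance(text);
    u.addEventListener("end", () => console.log("done:", text)); // repeated per utterance
    speechSynthesis.speak(u);
  }

  // Option (3): one handler on SpeechSynthesis covers all utterances,
  // but utterance-specific behavior must filter on event.utterance.
  speechSynthesis.addEventListener("end", (e: Event) => {
    const ev = e as SpeechSynthesisEvent;
    console.log("done:", ev.utterance.text);
  });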

I see (2) as providing the advantages of both, so the developer can choose where best to implement events.


I prefer (2).  Which do you prefer?  Are there other options?
Comment 5 Olli Pettay 2013-02-06 17:04:54 UTC
(3) feels like the most conventional API, so I prefer it, but I'm OK with (2) too.
It's a bit reminiscent of IndexedDB event propagation, which of course
is close to DOM tree event propagation.
Comment 6 Eitan Isaacson 2013-02-06 17:08:40 UTC
(In reply to comment #5)
> (3) feels like the most conventional API, so I prefer it, but I'm OK with (2) too.
> It's a bit reminiscent of IndexedDB event propagation, which of course
> is close to DOM tree event propagation.

The problem with #3 is that an event could not be directly associated with a past speak() call, at least not as easily as in options #1 and #2.
Comment 7 Olli Pettay 2013-02-06 17:10:00 UTC
That is why the event would have .utterance.
Comment 8 Eitan Isaacson 2013-02-06 17:10:33 UTC
(In reply to comment #7)
> That is why the event would have .utterance

Ah, missed that.
Comment 9 Dominic Mazzoni 2013-02-06 18:36:27 UTC
#3 seems like it could lead to a lot of inefficiency.

Suppose you create a reusable web widget of some sort that has the ability to speak, and that animates while speaking by listening for speech events that apply to its own utterances.

Now suppose some web developer puts 100 instances of your widget on a webpage. They wouldn't all speak at once, of course, but every one of them would have the ability to speak.

My worry is that this would likely lead to 100 event listeners, each one of which would have to compare the utterance target to one of its own utterances.

Of course, all of the widgets could try to share one listener, but it still requires extra bookkeeping.

So I'm strongly against #3, as that would mean that a pretty common scenario would either be inefficient or require extra boilerplate code.

I have no objection to #2 - bubbling sounds good to me.
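
For concreteness, a sketch of that widget under option (2), where each widget listens only on its own utterances (SpeakingWidget and its animation methods are hypothetical):

  class SpeakingWidget {
    speak(text: string): void {
      const u = new SpeechSynthesisUtterance(text);
      // Listening directly on the utterance: no central listener and no
      // filtering against the other 99 widgets' utterances.
      u.addEventListener("start", () => this.startAnimation());
      u.addEventListener("end", () => this.stopAnimation());
      speechSynthesis.speak(u);
    }

    private startAnimation(): void { /* begin animating */ }
    private stopAnimation(): void { /* stop animating */ }
  }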
Comment 10 Glen Shires 2013-02-25 20:55:16 UTC
Option #2 appears to be the consensus, so I propose the following text for the errata. If there's no disagreement I'll add this to the errata on March 11.
Note: should there be any additional text about whether this event is cancelable?


Section 5.2 IDL:
  "interface SpeechSynthesis" should be "interface SpeechSynthesis : EventTarget".

  The following attribute is added to "interface SpeechSynthesisEvent : Event"
  "readonly attribute SpeechSynthesisUtterance utterance;"
    ...

Section 5.2.4 SpeechSynthesisUtterance Events
  Add "These events bubble up to SpeechSynthesis."

Section 5.2.5 SpeechSynthesisEvent Attributes: Add the following definition:
  "utterance attribute
   This attribute contains the SpeechSynthesisUtterance that triggered this event."
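
With this erratum applied, something like the following sketch should be possible (hedged; the handlers are illustrative):

  // Events fire on the utterance and bubble to the SpeechSynthesis
  // object, which now implements EventTarget.
  const u = new SpeechSynthesisUtterance("Hello");
  u.addEventListener("end", () => console.log("handled on the utterance"));

  speechSynthesis.addEventListener("end", (e: Event) => {
    const ev = e as SpeechSynthesisEvent;
    console.log("bubbled to speechSynthesis for:", ev.utterance.text);
  });

  speechSynthesis.speak(u);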
Comment 11 Glen Shires 2013-03-23 00:51:24 UTC
I've updated the errata with the above change (E08):
https://dvcs.w3.org/hg/speech-api/rev/3254a90fcfc8

As always, the current errata is at:
http://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi-errata.html