[whatwg] Accessibility and the Apple Proposal for Timed Media Elements

Re: http://webkit.org/specs/HTML_Timed_Media_Elements.html

There are three things I'd hope to see from a <video> element:

1) Ease of use compared to <object> (A common API contributes to this,
and v2 might approach it with a default UI. Lack of agreement on a
baseline format is a major problem here.)

2) Experiments in hyperfilm.

3) Better accessibility features than are provided by the current
<object> or <embed> + plugin architecture.

Unless I've missed something, the Apple proposal does not discuss
accessibility. Where do closed captions, audio descriptions, signed
alternatives, subtitles, dubbing, and transcripts fit into the proposed
scheme?

Why is <video>'s fallback limited to inline content? How is inline
content supposed to be an equivalent alternative to a whole video? Why
shouldn't <video> contain a <dialog> as fallback, for example?
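For illustration, here is a sketch of what richer fallback might look
like if <video> could contain flow content. (This assumes the WHATWG
<dialog> element for marking up conversations; the file name and speaker
text are invented.)

```html
<video src="interview.ogv" controls>
  <!-- Hypothetical fallback: a transcript of the video's dialogue,
       available to user agents that cannot play the video. -->
  <dialog>
    <dt>Interviewer</dt>
    <dd>What first drew you to web accessibility?</dd>
    <dt>Guest</dt>
    <dd>Watching users struggle with plugin-based media players.</dd>
  </dialog>
</video>
```

A transcript like this would serve as a genuinely equivalent alternative
in a way that a short run of inline phrasing content cannot.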

If audio container formats contain (for example) captions, how precisely
are users supposed to access them, given that the spec says: "If the
source is an MP3 file containing synchronized lyrics, for example, the
user agent must render only the audio and not the text."

One issue does talk about extracting track information, but seemingly
not track content:

http://webkit.org/specs/Timed_Media_Elements-Open_Issues.html

Would captions have to be extracted server-side and displayed in an Ajax
live region?
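If so, the workaround might look something like the following sketch: a
script polls a server-side endpoint for the current caption line and
writes it into an ARIA live region next to the video. (The /captions
endpoint, the file name, and the currentTime attribute on the media
element are all assumptions, not part of the Apple proposal.)

```html
<video id="v" src="lecture.ogv" controls></video>
<!-- Live region announced by assistive technology as it updates. -->
<div id="captions" role="status" aria-live="polite"></div>
<script type="text/javascript">
  // Every second, fetch the caption line for the current playback
  // position from an invented server-side extraction service.
  setInterval(function () {
    var video = document.getElementById('v');
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('captions').textContent =
            xhr.responseText;
      }
    };
    xhr.open('GET', '/captions?t=' + video.currentTime, true);
    xhr.send(null);
  }, 1000);
</script>
```

That this much machinery would be needed for captions the container
format already carries seems to underline the accessibility gap in the
proposal.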

--
Benjamin Hawkes-Lewis

Received on Wednesday, 4 April 2007 23:44:31 UTC