From HTML WG Wiki
Multimedia Accessibility <Audio> <Video>
Multimedia presentations (rich media) usually involve images, sound, and motion. This can present accessibility barriers to some people with disabilities, for instance visual impairments, hearing loss, photosensitive epilepsy, cognitive and learning disabilities, attention deficit disorder, and dyslexia.
Visually impaired users can't directly access the visual components of a multimedia presentation. Likewise, users who are deaf or hard of hearing will not be able to directly access auditory information. Motion stimuli can adversely affect people with epilepsy, attention deficit disorder, and dyslexia.
To address the multiple accessibility issues of multimedia, mapping and controlling media assets with some kind of machine-recognizable mechanism would allow users to use the format that best suits their needs. People have different ways of processing information and allowing multimedia users to shift at will among audio, video, graphic, and written media may help make these technologies more accessible.
In an ideal world, the accessibility features would be in the video. In the real world, often they aren't. The page creator may not be able to modify the audio or video. Sometimes this is a matter of not having the video (embedded 3rd party videos) or not having legal authority; sometimes it is just a matter of not knowing how. By all means encourage authors to put the accessibility information within the video. But there needs to be a fallback for cases where that doesn't happen.
- Media Accessibility User Requirements
- Media Accessibility Tech Requirements
"Content may be provided inside the video element. User agents should not show this content to the user; it is intended for older Web browsers which do not support video, so that legacy video plugins can be tried, or to show text to the users of these older browser informing them of how to access the video contents. In particular, this content is not fallback content intended to address accessibility concerns. To make video content accessible to the blind, deaf, and those with other physical or cognitive disabilities, authors are expected to provide alternative media streams and/or to embed accessibility aids (such as caption or subtitle tracks) into their media streams."
The editor's rationale is, "...a fundamental principle of how this feature was designed is that any accessibility features and metadata features must be within the video or audio resource, and not in the HTML markup. The hypothesis is that this results in the optimal experience for all users."
A serious concern is whether the audio/video resource supports accessibility features and, if it does, whether client-side software can extract them. This approach also requires the audio/video file to be retrieved over the network, unless the textual alternatives are placed at the start of the file (before the audio/video data); otherwise retrieval is problematic for those with slow connections or strict bandwidth quotas.
Advice Request to PFWG
- Request for PFWG WAI review of multimedia accessibility requirements <audio> <video> - Laura Carlson, Steve Faulkner, Joshue O Connor, Gregory J. Rosmaita, Robert J Burns, Leif Halvard Silli, Philip TAYLOR, Debi Orton - September 23, 2008.
- Followup Re: Request for PFWG WAI review of multimedia accessibility requirements <audio> <video> Laura L. Carlson, May 5, 2009.
- Change Proposal: New Declarative Syntax for Associating Synchronized Text to Media Elements
- Bug 5758: insufficient accessibility fallback for <audio> or <video>
- Bug 8187: Section 4.8.7 on video makes no reference to audio description
- Bug 8657: Allow UA to reload fallback content if it fails to load
- Bug 8658: Availability of captions or additional audio tracks
- Bug 8659: Media events to indicate captions and audio descriptions
- Bug 8736: Decision to playback for media should be left to the user agent
- Bug 9452: Handling of additional tracks of a multitrack audio/video resource
- Bug 9471: Introduce declarative markup to associate timed text resources with media elements
- Bug 9673: Remove any reference to a specific Time Stamp format for video captioning from the specification at this time
- Bug 9773: There is not a clear difference between "subtitles" and "captions". These are mostly used to describe "closed captions" (binary transmissions in TV broadcasts) vs "subtitles" (text files), as far as I know.
- Bug 9774: "consecutive lines displayed below each other" - since subtitles tend to be rendered at the bottom, it's actually better for new subtitles to be rendered higher up.
- Bug 9775: "positioned to a multiple of the line dimensions of the first line of the cue" - enforcing the same line height for every line hurts text rendering appearance for no apparent reason.
- Bug 10419: <video> should allow muted as a content attribute
- Bug 10446: Consider limiting the roles of certain media and plugin elements
- Bug 10693: Need a means for navigating between related timed tracks of media elements
- Bug 10723: support for media fragment URIs in relevant HTML5 elements
- Bug 10837: playbackrate: undefined behavior when the user agent can't play back at the requested rate
- Bug 10839: Providing visible indication that descriptions and captions are available.
- Bug 10840: Allow the user to independently adjust the volumes of the audio description and original soundtracks.
- Bug 10841: We require a method to allow the user to control playback timing in order to have more time to understand the material.
- Bug 10842: Support the isolation of speech from other background sounds in AV media
- Bug 10843: Support user control over the visual presentation of caption text.
- Bug 10941: Media elements need control-independent "pause" for presenting lengthy descriptions/captions
- Bug 10944: WebSRT seems to too much focus on captions
- Bug 11207: Make track element additions technology neutral
- Bug 11391: Provide examples of actual <track> usage, user agent implications
- Bug 11395: Use media queries to select appropriate <track> elements
- Bug 11593: the <track> @kind attribute should include all of the identified accessibility content types
A Media TextAssociations draft proposal from the HTML Accessibility Task Force introduces declarative markup into the audio and video elements of HTML5 to link to external resources that provide text alternatives for different roles, such as captions, subtitles and textual audio descriptions. This includes styling defaults and a resource selection algorithm for when there are alternative resources available.
Explicit Association with Separation of Media Assets and a Preference-Style Selection Mechanism
A clean, semantic, explicit association to transcripts, text descriptions, captions, audio descriptions and/or streams that could be toggled on or off by the end user would be very beneficial, as these items will get lost if they are outside the parent element. We need a way for authors to link items together within the element. This model would ensure that the linkage is there, and if the author chooses to also provide an in-the-clear linkage to one or more of these support pieces, then this is a win.
Multimedia is multi-modal and multi-sensory, so there are many permutations in which one or more "modes" may not be available. We therefore need to address each mode as a separate entity as well as the default "combined" multimedia asset. The most suitable file should be made available to the user, in accordance with user and user agent preferences and capabilities.
Separation of these media assets is key, as they are different from one another. They include but are not limited to transcripts, text descriptions, captions, and audio descriptions.
Some kind of mechanism is needed. Separate attributes for media assets on <video> <audio> could do it; e.g. longdesc, transcript, etc.
In any event ideally all the support pieces should be direct descendants (children) of the parent <video> or <audio>, but unique and separate. In a perfect world all would be supplied, but even when less than perfect a method to provide one or more support pieces should exist.
The client side should always have the right to have "what are my choices (i.e. options)" answered, rather than having to answer, "what are your choices (i.e. preferences)" to the server. The server can still track which option gets exercised. But in the end if the user wants to take the time to browse the versions rather than having one picked in the feed forward processing, they should have that capability.
The following examples hint at some best practices and suggest the way that not only native players but HTML5 as a whole could approach multimedia content. Keeping assets separate allows finer-grained control over those pieces, whereas bundling everything together as a single file may mean larger file sizes, require more post-processing on the user's end to achieve full access regardless of the AT used, etc.
In an example at Stanford, the source code has params for the video and the caption. JW FLV (an open-source Flash player) works by keeping all "elements" of the total on-screen presentation separate: the .flv (or recently H.264 .mov, if the user has Flash 9) is referenced via a parameter setting, as are a static JPEG as the opening screen-shot, the semi-transparent "logo" (watermark), and the time-stamped transcript (currently .srt, but apparently also DFXP XML, although this seems somewhat tricky at this time).
Because the text transcript remains external to the media (as opposed to embedded), it becomes easier to re-purpose and share that piece of the "multi" multimedia asset. It would probably require less user-end processing to output the transcript as Braille output (for example) than to require a program or user agent to re-process content embedded in the media asset to "extract" this information and provide the alternative output.
Added benefits could be provided through some relatively simple manipulations. For instance, in a second Stanford example, adding a subtitle is as straightforward as obtaining a translation: some simple scripting provides multi-language support. The full transcript is available via the transcript link.
- Search the transcript for the term 'repackage'.
- Select that word.
- The video jumps to that place in the clip.
It is uncertain how this could be as easily achieved with embedded transcripts.
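The search-and-jump behaviour above boils down to mapping a time-stamped transcript entry onto the media timeline via the standard HTML5 currentTime property. A minimal sketch, assuming SRT-style timestamps; the helper name srtTimeToSeconds is hypothetical, not part of any proposal here:

```javascript
// Convert an SRT-style timestamp ("HH:MM:SS,mmm") into seconds.
// srtTimeToSeconds is a hypothetical helper name for illustration only.
function srtTimeToSeconds(stamp) {
  const [hms, millis] = stamp.split(",");
  const [h, m, s] = hms.split(":").map(Number);
  return h * 3600 + m * 60 + s + Number(millis) / 1000;
}

// In a browser, a click handler on a transcript entry could then seek:
//   videoElement.currentTime = srtTimeToSeconds(entry.dataset.start);
```

Because the transcript stays external, this mapping is plain text processing; with an embedded transcript the user agent would first have to extract the cues from the media container.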
Provide Reserved rel Attribute Values
Perhaps we can provide some attribute values for the rel attribute along these lines:
- "transcript"
- "longdesc" or "textdesc"
- "download"
- "slideshow"
Using the <figure> element:
<figure id="figVideo1">
  <video [...]>
    [...]
  </video>
  <legend>
    <a href="..." rel="transcript">Obtain the transcript.</a><br>
    <a href="..." rel="textdesc">Obtain the text description.</a><br>
    <a href="..." rel="download">Download Quicktime version.</a><br>
    <a href="..." rel="download">Download Ogg Theora version.</a>
  </legend>
</figure>
User agents could then present as they like, while legacy UAs would just provide the fallback for the video plus a series of links. Plus, if UAs choose not to provide any special handling of the content, at the very least it's still accessible to everyone. The editor's comment on this proposal: "I encourage people to register these rel="" values in the wiki and to try them. I am very interested in what experience with this teaches us. If it turns out to be a good idea, it's definitely something we could add to the spec."
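As a sketch of how a user agent or script might consume such reserved rel values, the snippet below groups a figure's links by their rel value; plain objects stand in for DOM anchors so the logic runs anywhere, and groupByRel is a hypothetical name:

```javascript
// Group links by their rel value ("transcript", "textdesc", "download", ...).
// In a browser the input could be gathered with
//   figure.querySelectorAll('a[rel]')
function groupByRel(links) {
  const groups = {};
  for (const link of links) {
    (groups[link.rel] = groups[link.rel] || []).push(link.href);
  }
  return groups;
}
```

A player UI could then, for example, expose one "Transcript" button and one "Download" menu per figure, while legacy UAs still render the plain links.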
A Unified Approach to HTML5's Media Specific Elements
Towards A Unified Approach to HTML5's Media Specific Elements, version 0.1 - Gregory Rosmaita.
Automatic Selection of Media Files Based on User Preference
Accessibility for the Media Elements in HTML5 Proposal - DW Singer et al. Summary of email discussion on this proposal:
- Media queries, though similar, are probably not right for the user-needs matching.
- We don't need to handle the 'what fallback is shown if no source matches' problem since it's easy to write the HTML so that the case doesn't arise (at least, because of accessibility filtering).
- Transcripts and other non-temporal annotative information might be wanted both (a) by users also viewing the content (i.e. perhaps non-accessibility related) and (b) accessibility users, so they should not be expressed as an alternative to the media.
Single File With Separate Source Tracks
Employ flexible authoring of video and audio content that does not rely exclusively on the capabilities of the various container formats for alternate tracks.
<video>
  <source media='<a media query>'>
    <track src='avideofile'>
    <track src='anaudiofile'>
    <track src='acaptionfile' languages='<language metadata>'>
    <track src='asubtitlefile' languages='<language metadata>'>
    <track src='anothersubtitlefile' languages='<language metadata>'>
    ...
  </source>
  ...
  <source src='afile2' media='<a media query>'></source>
  <source src='afile3' media='<a media query>'></source>
  ...
</video>
Variation using text and category:
<video src="http://example.com/video.ogv" controls>
  <text category="CC" lang="en" type="text/x-srt" src="caption.srt"></text>
  <text category="SUB" lang="de" type="application/ttaf+xml" src="german.dfxp"></text>
  <text category="SUB" lang="jp" type="application/smil" src="japanese.smil"></text>
  <text category="SUB" lang="fr" type="text/x-srt" src="translation_webservice/fr/caption.srt"></text>
</video>
Variation using text and the role attribute:
<video src="http://example.com/video.ogv" controls>
  <text role="CC" lang="en" type="text/x-srt" src="caption.srt"></text>
  <text role="SUB" lang="de" type="application/ttaf+xml" src="german.dfxp"></text>
  <text role="SUB" lang="jp" type="application/smil" src="japanese.smil"></text>
  <text role="SUB" lang="fr" type="text/x-srt" src="translation_webservice/fr/caption.srt"></text>
</video>
This solution, like SMIL, turns a user agent (Web browser) into a media parsing and compositing unit. Another approach may be to have such a compositing language available on a Web server that knows which tracks it has available; the user agent can then communicate with the server to determine which composition it would like, and the server composes the right media file and streams it out. This kind of compositing is also required if we want to be able to deliver just fragments of a media file.
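Whatever the markup, proposals like these imply a client-side selection step: matching the declared alternatives against the user's preferences. A minimal sketch of that matching, with field names mirroring the role/lang attributes in the examples above (selectTextTrack is a hypothetical name, not a proposed API):

```javascript
// Pick the first declared text alternative whose role matches and whose
// language appears earliest in the user's preference list.
function selectTextTrack(tracks, wantedRole, preferredLangs) {
  for (const lang of preferredLangs) {
    const match = tracks.find(t => t.role === wantedRole && t.lang === lang);
    if (match) return match;
  }
  return null; // nothing declared for this role in any preferred language
}
```

A server-side compositor would run essentially the same matching, just before muxing the chosen tracks rather than after fetching them.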
Add SMIL as yet another extension format supported within HTML5 or referenced for embedding by the src attribute (or finally bite the bullet and add an IE8 / 'XML namespaces' compatible namespace mechanism to HTML5). SMIL has switches to provide alternative formats, including text: one can switch between video, audio, text, image, animation, graphics, or whatever might be appropriate to replace the inaccessible format. Alternatively, HTML could at least have a meta element usable on any element as a container for structured meta information, or elements to structure the meta information in the head together with the ability to point to fragments identifying the target of that meta information. This could already help authors provide useful descriptions for content that is, for whatever reason, considered problematic to understand. With its Meta information module and RDF, SMIL provides similar possibilities too.
- This would provide the needed machine-recognizable association.
- IE currently doesn't support ARIA.
- Part of the intention of ARIA is to be replaced by host-language functionality once the @aria-* attributes are no longer necessary. It may be better to have a more general global equivalent attribute that allows a list of IDREFs to label an arbitrary element.
<figure>
  <video ... aria:describedBy="transcript">
    ...
  </video>
  <legend>Video Title (<a href="#transcript">view transcript</a>)</legend>
</figure>
<div id="transcript">
  ...
</div>
It has been suggested that the video and audio elements are not really even needed, and that the big advancement would be to bring their attributes and DOM interfaces to the already supported object element. The source element might also be useful. Instead of using HTML:img, HTML5:audio, HTML5:video, HTML5:canvas, etc., authors could simply always use HTML:object referencing a document in a format such as SVG, SMIL, or DocBook to gain an advanced method of providing alternatives and meta information, or they could use a compound document, though this is arguably not the intended concept of HTML5. Obviously such an approach increases the requirements on a user agent to provide the information at all, because more formats are involved than necessary for this purpose. (It has also been suggested that the desirability of these elements is not the subject of this issue; the accessibility of multimedia is, and the impact of using the object element is not clear.)
Introducing DOM and UA UI to access media metadata
Allowing DOM and UI access to media-immanent metadata (see UANormAndDOMForMediaPropeties) for audio, video, still images, and other non-text media would provide some access to text equivalents, even when the HTML author fails to provide them.
In the Clear Hyperlink with No Preference-Style Selection Mechanism
Using the <a> element, an author can provide an ordinary link to a transcript or full text description, possibly including the link within the video caption. This solution does not provide a synchronized equivalent alternative or an on/off toggle. However, placing the link within the video caption provides an association with the video itself, and it is easy for authors and some end users. But forcing people to put all the content directly into a page with big visible links simply won't fly: the zillions of dollars put into techniques for hiding content in the misguided hope that it still appears for the people who need it (image replacement, off-screen positioning, and so on) show that designers would rather expend considerable effort and money than actually make the data visible. On the other hand, transcripts are useful to many people, including non-disabled people, and incorporating such a link into a page design is not difficult and is common practice.
<figure>
  <video src="/videos/diary-2008-09-11.mkv"></video>
  <legend>My school holiday trip to Cairns.
    <a href="/transcripts/diary-2008-09-11.html">Read transcript</a>.</legend>
</figure>
Configuring the Chosen Media Resource
Some media systems have the capability of configuring the media resource. For example, in track-based resources such as MP4 files or QuickTime movies, tracks can be enabled or disabled. In other systems such as 3GPP DIMS, MPEG-4 LASeR or Adobe Flash, the media resource may have embedded scripts etc. which could react to user preferences, as well as presenting affordances to configure the resource. Since it could be tedious for a user with an accessibility need to manually configure every resource, it may be desirable for resources to adapt, when possible, based (at least initially) on user preferences. Conversely, it is frustrating for end users of all stripes to be locked into a specific default. Toggling captioning from the player controls would be beneficial: for instance, a hearing user may want to toggle captions on or off, perhaps even mid-stream in a media presentation, so ease of user choice should be paramount.
- MP4: see the 'Media file formats' white papers
- QuickTime: file format
- 3GPP DIMS: 3GPP Dynamic and Interactive Multimedia Scenes (DIMS), TS 26.142
- MPEG-4 LASeR: see the 'Lightweight Scene Representation' white papers
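The caption toggle described above can be sketched against the TextTrack API later standardized for HTML5, where each text track exposes a mode of "showing", "hidden", or "disabled". Plain objects stand in for the tracks in video.textTracks so the logic runs outside a browser; toggleCaptions is a hypothetical name:

```javascript
// Flip caption/subtitle tracks between "showing" and "hidden",
// leaving unrelated tracks (e.g. text audio descriptions) untouched.
function toggleCaptions(tracks) {
  for (const track of tracks) {
    if (track.kind === "captions" || track.kind === "subtitles") {
      track.mode = track.mode === "showing" ? "hidden" : "showing";
    }
  }
}
```

Wired to a button in the player controls, this gives the mid-stream on/off choice without requiring the user to reconfigure each resource by hand.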
User Roles and Cases
Visual Impaired Users
Visually impaired users can't directly access the visual components of a multimedia presentation.
Deaf or Hard of Hearing
Users who are deaf or hard of hearing will not be able to directly access auditory information.
People with Photosensitive Epilepsy
People with photosensitive epilepsy can have seizures triggered by flickering or flashing. Photosensitive epilepsy is a form of epilepsy triggered by visual stimuli, such as flickering or high-contrast oscillating patterns, and it is believed that around 3% to 5% of people with epilepsy are susceptible to photosensitive material. Seizures are usually triggered where the flicker rate is between 16 Hz and 25 Hz, although it is not uncommon for them to be triggered by flicker rates between 3 Hz and 60 Hz. The condition most commonly affects children, usually develops between the ages of 9 and 15, and is most prevalent in females.
People with Attention Deficit Disorder/Dyslexia
Movement/animation may be extremely distracting to people with attention deficit disorder/dyslexia. User control of motion is needed.
People with Cognitive or Learning Disabilities
Some people have good cognitive skills, while others may have more artistic and creative skills. Learning disabilities are problems that affect the brain's ability to receive, process, analyze, or store information. Howard Gardner identified multiple intelligences in his 1983 book "Frames of Mind: The Theory of Multiple Intelligences". A toggle switch for different modalities would aid in providing learning style preferences (Visual, Aural, Read/Write, Kinesthetic, multi-modal). For instance, images, sound, motion, captioning, description, etc. all have a place in learning preferences.
Types of Learning Preferences
Visual: This preference includes the depiction of information in images. These learners visualize information in their "mind's eye" in order to remember something.
Aural: This perceptual mode describes a preference for information that is "heard". People with this modality report that they learn best from lectures, recordings, etc.
Read/Write: This preference is for information displayed as words.
Kinesthetic: By definition, this modality refers to the perceptual preference related to the use of experience and practice (simulated or real), although such an experience may invoke other modalities.
Multi-modal: A fifth category for those with strong preferences in several modalities.
Others Who Benefit from Accessible Multimedia
People who have:
- Hardware limitations.
- Software limitations.
- Connectivity limitations. Slow dial-up connections (still common in rural areas as well as outside of the U.S.)
- Other universality use cases.
Definitions of Media Assets
Transcript = a verbatim text version of the audio track. This transcript can then be time-stamped to become the caption asset, and when time-stamped with DFXP (an XML language) it can be run through an XSLT style sheet to generate on-screen HTML as well. It can also be used for search, both large-scale search for appropriate assets and, in some ongoing experiments, search within a longer media asset for key words, where "clicking" on a key word instance takes you to that point in the video.
Text Description = a transcript of audio content that includes spoken words and also non-spoken sounds like sound effects, akin to the notation used for a play. Text descriptions are often not verbatim accounts of the spoken word but contain additional descriptions, explanations, or comments that may be useful. They are helpful to the deaf, the hard of hearing, and many others, and allow anyone who cannot access content from web audio or video to access a text version instead.
Audio Description = narration, spoken out loud, that explains visual details. This allows visual content to be accessible to the blind or those with vision impairments. Audio description is important if, for example, a video provides content that is relevant to the overall understanding of the video but is not available or recognizable through the default audio already present. For example, an audio description can take a movie and talk you through it: the narrator tells you everything happening on the screen that you cannot figure out just from the soundtrack. Information that is presented exclusively visually needs an audio description, and this audio description needs to be synchronized with the presentation. The descriptions are part of the audio track and are inserted during lulls in the audio conversation. A transcript does not provide an equivalent experience, as the presentation's message is dependent upon the simultaneous interaction between its audio and video portions. Extended audio descriptions, which pause the video, may also be a consideration.
Captioning = the process of capturing the spoken word into text and displaying text at the same time the words are spoken. Captions are needed for prerecorded audio content in synchronized media. For more info visit Understanding SC 1.2.2
There is sometimes mention of high-contrast media, which can be useful, for example, for people with partial disabilities. In high-contrast video, 'important' material is more clearly visible and backgrounds are uncluttered when possible. High-contrast audio is similar: it strives to make the semantically important audio more clearly heard while minimizing background music, noises, and so on.
Policies, Guidelines, and Law
- WCAG 2 Guideline 1.2: Time-based Media: Provide alternatives for time-based media.
- Guideline 2.3 Do not design content in a way that is known to cause seizures.
- WCAG 1.4 (Priority 1): For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation.
- Section 508 1194.22 (b): Equivalent alternatives for any multimedia presentation shall be synchronized with the presentation.
- University of Minnesota Multimedia Accessibility Standard
Accessibility Task Force Media Meetings
- June 5, 2012 Minutes
- May 10, 2012 Minutes
- August 17, 2011 - May 9, 2012 No Minutes. No Meetings?
- August 3, 2011 meeting canceled
- July 27, 2011 Minutes
- July 20, 2011: No Minutes?
- July 13, 2011 Minutes
- July 6, 2011 Minutes
- June 29, 2011 Minutes
- June 22, 2011 Minutes
- June 15, 2011 Minutes
- June 8, 2011 Minutes
- June 1, 2011 Minutes
- May 25, 2011 Minutes
- May 18, 2011 Minutes
- May 11, 2011 Minutes
- May 4, 2011 Minutes
- April 27, 2011 Minutes
- April 20, 2011 Minutes
- April 18, 2011 Minutes
- April 13, 2011 Minutes
- April 11, 2011 Minutes
- April 6, 2011 Minutes
- March 31, 2011 Minutes (Reconstructed)
- March 20, 2011 Face-to-Face Minutes
- March 19, 2011 Face-to-Face Minutes
- March 9, 2011 Minutes
- March 2, 2011 Minutes
- February 23, 2011 Minutes
- February 16, 2011 Minutes
- February 9, 2011 Minutes
- February 2, 2011 Minutes
- January 26, 2011 Minutes
- January 19, 2011 Minutes
- January 12, 2011 Minutes
- January 5, 2011 Minutes
- December 15, 2010 Minutes
- December 8, 2010 Minutes
- December 1, 2010 Minutes
- November 17, 2010 Minutes
- November 10, 2010 Minutes
- November 4, 2010. Media accessibility minutes, from the Face to Face meeting in France
- November 1, 2010. Media accessibility minutes, from the Face to Face meeting in France
- October 20, 2010 Minutes
- October 13, 2010 Notes
- October 6, 2010 Notes
- September 29, 2010 Minutes
- September 22, 2010 Minutes
- September 15, 2010 Minutes
- September 8, 2010 Minutes
- September 2, 2010 Minutes
- August 25, 2010 Minutes
- August 18, 2010 Minutes
- August 11, 2010 Minutes
- July 28, 2010, Minutes
- July 21, 2010, Minutes
- July 14, 2010, Minutes
- July 7, 2010, Minutes
- June 30, 2010, Minutes
- June 23, 2010, Minutes
- June 16, 2010, Minutes
- June 9, 2010, Minutes
- June 2, 2010, Minutes
- May 26, 2010, Minutes
- May 19, 2010, Minutes
- May 12, 2010, Minutes
- May 5, 2010, Minutes
- April 28, 2010, Minutes
- February 17, 2010, Minutes
Media Meetings Prior to Task Force Formation
- Accessibility of Media Elements in HTML 5 Gathering, November 1, 2009
- Agenda for TPAC Video Breakout Session, Nov 6, 2009
- HTML Accessibility Task Force Media Sub-Group
- All HTML Accessibility Task Force Media Wiki Pages These include:
- G158: Providing a full text transcript for the audio
- G159: Providing a full text transcript of the video content
- State of Media Accessibility in HTML5 - Silvia Pfeiffer.
- Video captioning - WHATWG Wiki. The purpose of this page is to lay out and hammer down a specification for implementing captioning, subtitling, and timed text support for the media HTML elements, both video and audio. This is a work in progress, and is currently being authored by User:Millam.
- Accessibility/Video a11y Study08 - Silvia Pfeiffer. (Mozilla Funded Study)
- September 11, 2008 Teleconference Discussion
- Notes on the Multi-Modal Interaction Architecture from an Accessibility Angle
- Video Universality Some AT (assistive technology) users may also have universality problems, without the latest and greatest equipment or connectivity.
- Dynamic content injection via canvas and video elements
- Accessibility of HTML 5 video and audio elements - Bruce Lawson.
- Autoplay is Bad for All Users - Emma Sax
- 2009: Accessibility for the HTML5 <video> element - Silvia Pfeiffer
- Multi-Media "Matters": Coordination, Collaboration, Issues & Reviews - Section of the PF/XTech wiki is to foster coordination and collaboration on all things multimedia.
- The most pressing Accessibility issue in HTML5 today? <video> - John Foliot.
- The Different Aspects of Video Accessibility - Silvia Pfeiffer.
- Bug 7403 What should happen when audio does not have controls but author specifies display:inline? Show fallback? Show blank box with the same size as if it had controls? Force display:none?
- Jumping to Time Offsets in HTML5 Video - Silvia Pfeiffer.
- Bug 7253 Media elements should provide a "next" property to gaplessly play back another media object after it has finished.
- W3C Workshop/Barcamp on HTML5 Video Accessibility
- Accessibility of Media Elements in HTML 5 Gathering
- Bug 8187: Section 4.8.7 on video makes no reference to audio description reported by Kelly Ford.
- Accessibility/HTML5 captions Accessibility/HTML5 captions - Silvia Pfeiffer, wiki.mozilla
- Accessibility/Experiment1 feedback - Silvia Pfeiffer
- Accessibility/HTML5 captions v2 Accessibility/HTML5 captions v2 - Silvia Pfeiffer, wiki.mozilla
- Accessibility/Experiment2 feedback - Silvia Pfeiffer
- The model of a time-linear media resource for HTML5 - Silvia Pfeiffer
- Manifests for Exposing the Structure of a Composite Media Resource - Silvia Pfeiffer
- HTML5 Video Element Is Effectively Unusable, Even in the Browsers Which Support - John Gruber
- <Hixie> i'm starting to look at this subtitles stuff - April 20, 2010.
- WebSRT and HTML5 Media Accessibility - Silvia Pfeiffer.
- <video>, Accessibility and HTML5 Today - John Foliot
Email Discussion Threads
- RE: Accessibility of <audio> and <video>
- acceptable fallbacks
- Privacy implications of automatic alternative selection
- Multimedia Accessibility <Audio> <Video> Wiki Page
- Accessibility for the Media Elements in HTML5
- 4.3 Source fallback
- Buffered bytes for media elements
- Pause on exit from Cue Ranges
- Request for PFWG WAI review of multimedia accessibility requirements <audio> <video>
- RE: Accessibility of <audio> and <video>
- Re: About video & audio elements
- Re: Buffered bytes for media elements
- Re: Pause on exit from Cue Ranges
- Re: Request to Strengthen the HTML5 Accessibility Design Principle Silvia Pfeiffer updates on Multimedia accessibility activities outside of the HTML WG.
- Codecs for <video> and <audio>
- Shifting gears for a second (was RE: Codecs for <video> and <audio>)
- Re: Codecs for <video> and <audio>
- Captions (was Re: Shifting gears for a second (was RE: Codecs for <video> and <audio>))
- Microsoft web video
- Re: Codecs for <audio> and <video>
- Synchronized Multimedia and HTML5
- Re: Discussion: Accessibility Issues Procedure - Silvia Pfeiffer
- Re: Seeking feedback on HTML5 video accessibility experiment
- Minutes: Accessibility of Media Elements in HTML 5 Gathering, 01 Nov 2009
- Topics for "Video" Breakout Group
- <video> and <audio> (was RE: Implementor feedback on new elements in HTML5)
- Timing model of the media resource in HTML5
- YouTube with new accessibility features
- timing model of the media resource in HTML5
- Re: <video> and <audio> (was RE: Implementor feedback on new elements in HTML5)
- FW: Attn UAAG friends (FW: Public feedback on HTML5 video)
- (media) HTML5 Accessible Multimedia sub-group, activity
- Re: timing model of the media resource in HTML5
- Re: minutes: HTML A11y TF telecon 2010-02-04
- a11y-media sync-up call
- Moving forward with captions / subtitles
- Media Subgroup once-off conference call
- SMIL and video accessibility
- minutes 2010-02-17 telcon, Media accessibility group
- FW: HTML 5, SMIL, Video
- Survey ready on Media Multitrack API proposal
- Survey ready on Media Text Associations proposal
- Re: Issue-9 (video-accessibility): Chairs Solicit Proposals
- Requirements for external text alternatives for audio/video
- Re: a11y TF CfC on resolution to support "Media Multitrack API" change proposal for HTML issue 9
- Re: Moving forward with MultiTrack API
- Re: Requirements for external text alternatives for audio/video
- Initial version of Synchronization Issues
- Change Proposals toward Issue-9: "how accessibility works for <video> is unclear"
- Proposed teleconference on media accessibility
- WHATWG started requirements collection for time-aligned text
- Timed tracks
- UAAG 2.0 guidelines for video
- technical reply to Dick's concerns (was Re: farewell)
- Re: Format Requirements for Text Audio Descriptions (was Re: HTML5 TF from my team)
- New element has been added to the HTML5 spec: track
- Some requirements links & questions
- Re: Timed tracks
- Accessibility Requirements of Media - please add to
- Media Requirements: Structural Navigation (plus some misc reqs)
- restructuring of the requirements document
- Further edits to the Media Multitrack API
- Survey on Media Accessibility Requirements
- Resending: Survey on Media Accessibility Requirements
- feedback mechanism needed for media accessibility requirements
- Re: Please Read (was RE: Survey on Media Accessibility Requirements)
- JTC1 SWG, Accessibility considerations for people with disabilities, Part I
- Load balancing the review of the requirements document
- Addressing "Captioning" feedback on requirements document
- Media Requirements--Sec. 2.4 Clear Audio
- Media Requirements--3.2 Granularity Level Control for Structural Navigation
- Addressing "2.7 Extended Captioning" feedback
- Media--Sec. 2.5 Content Navigation by Content Structure
- Media--Structural Nav Comes to the iPhone
- Media--Sec 2.9 Transcripts
- Media--Sec. 3.3 Time Scale Modification
- Addressing "3.1 Keyboard Access to Interactive Controls"
- Addressing "3.7 Requirements on the use of the viewport"
- Comments on J&J's 5 sections ready for next Wednesday
- deep linking into video and audio
- Media Requirements--Sec. 2.8 Sign Translation
- Media Requirements--Sec. 3.4 Production practice and resulting requirements.
- Media Accessibility Requirements - We're almost there
- Media Requirements - 3.5 Discovery and activation/deactivation of available alternative content by the user.
- Media Requirements - 3.6 Requirements on making properties available to the accessibility interface
- Media Requirements - 3.8 Requirements on the parallel use of alternate content on potentially multiple devices in parallel.
- Re: Addressing "2.7 Extended Captioning" feedback
- Part 2 - Media Accessibility Requirements - We're almost there
- Next stop on the accessible media road trip
- Media--Technical Implications of Our User Requirements
- Media--Summary of Technical Requirements Exposed by our User Requirements
- Fwd: Timed tracks for <video>
- summary and discussion of proposed media a11y framework by WHATWG
- FYI - U.S. Senate Passes S.3304 by Unanimous Consent! | Coalition of Organizations for Accessible Technology
- discussion on how to render captions in HTML5
- Highlighting of caption files
- Media Captioning Question
- Summary of edits and copyedits to Section 1 of media accessibility user requirements
- Media Accessibility User Requirements Published
- creation of checklist table
- Re: Enhanced change control after the Last Call cutoff
- Requesting Spec Text Additions
- Re: Requesting Spec Text Additions
- handling multitrack audio / video
- Media--Additional Requirement for Sec. 2.6 Captioning?
- Media: Ancillary Content Redux
- Handling multitrack audio / video
- Media Matrix Issue: TVD shouldn't list audio
- Media Matrix: Second issue with Clear Audio
- Media Matrix: Issue with DV-4
- Media Matrix: Issue with DV
- Matrix: CN-1 missing the hierarchical concept
- ACTION-188: Check on the status of the request for spec additions
- Media Matrix: Realign DV-1, DV-3, & DV-10
- DV-14 Media Accessibility and API for Media Resource 1.0
- Categorization of media a11y requirements
- action to provide WebSRT comparison
- RE: handling multitrack audio / video
- Draft Summary Doc for our Media Call Today
- Ian's timed tracks message on the WHATWG list
- discussion of transcript needs
- Moving forward on the Multitrack Media API (issue-152)
- Tech Discussions on the Multitrack Media
- using TTML for caption delivery, discussion
- The specification of the MPEG DASH manifest format
- ISSUE-9 video-accessibility: Call for Consensus to close in favor of specific issues, or Alternate/Counter-Proposals
- Re: Tech Discussions on the Multitrack Media (issue-152)
- track event
- RE: ISSUE-9 video-accessibility: Call for Consensus to close in favor of specific issues, or Alternate/Counter-Proposals
- close audiodescription support
- Included rendering considerations into media multi-track wiki page
- change proposals for issue-152
- Media Subteam--Scheduling Additional Teleconferences
- conversion of 608 to WebVTT
- How to handle multitrack media resources in HTML
- issue-152: documents for further discussion
- Progress on multitrack api - issue-152
- adding autoplay requirements to requirements doc
- CP relating to bug 11207
- Feedback on MediaController
- proposed a11y TF letter on issue-152
- Accessibility Task Force consensus on Issue-152, media multitrack
- Issue-152 (multitrack-media-resources): Call for Consensus
- Re: Track kinds
- Re: Feedback on MediaController
- alt technologies for paused video
- Re: alt technologies for paused video (and using ARIA)
- FW: alt technologies for paused video (and using ARIA)
- Re: alt technologies for paused video (and using ARIA)
- Action item: definition and use of Clean audio in European television
- RE: issue-152: documents for further discussion
- what to do about "clean audio"
- text alternatives for video
- New @roles for media descriptions, etc. (was RE: Agenda: HTML-A11Y Media Subteam on 25 May at 21:30Z for 90 Minutes)
- ARIA Elements Support Markup
- Bug 12794 Add a non-normative note on how to provide text alternatives for media elements
- Fwd: ISSUE-163 navigating-tracks: Chairs Solicit Change Proposals
- how to support extended text descriptions
- RE: Track kinds
- demos of Google with WebVTT
- Meaning of audio track kind 'descriptions'
- Re: text alternatives for video
- Navigation of media by content structure (Issue-163)
- frame descriptions for chapter markup
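Many of the threads above converge on the `track` element and WebVTT as the spec's mechanism for timed text alternatives. As a minimal illustrative sketch of how those pieces fit together (the file names and labels below are hypothetical, not from any of the linked discussions):

```html
<!-- Sketch: a video with caption, subtitle, description, and chapter
     text tracks. File names and labels are illustrative only. -->
<video controls width="640" height="360">
  <source src="lecture.webm" type="video/webm">
  <source src="lecture.mp4" type="video/mp4">
  <!-- Timed text tracks in WebVTT; kind tells the UA how to treat each -->
  <track kind="captions" src="lecture-en.vtt" srclang="en"
         label="English captions" default>
  <track kind="subtitles" src="lecture-fr.vtt" srclang="fr"
         label="Français">
  <track kind="descriptions" src="lecture-desc-en.vtt" srclang="en"
         label="Text descriptions">
  <track kind="chapters" src="lecture-chapters.vtt" srclang="en">
  <!-- Per the spec, this fallback is for legacy browsers only,
       not a substitute for accessible alternatives -->
  <p>Your browser does not support the video element.
     <a href="lecture.mp4">Download the video</a> or read the
     <a href="lecture-transcript.html">transcript</a>.</p>
</video>
```

Note that, as the spec text quoted above makes clear, the fallback content inside the element is for legacy browsers; accessibility alternatives belong in `track` elements, alternative media streams, or linked transcripts.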