DRAFT: Accessibility Features of SMIL

W3C Draft NOTE 26 July 1999

This Version:
Previous Version:
Latest Version:
Marja-Riitta Koivunen (mrk@w3.org)
Ian Jacobs (ij@w3.org)


This document summarizes the accessibility features of the Synchronized Multimedia Integration Language (SMIL), version 1.0 Recommendation ([SMIL10]). This document has been written so that other documents may refer in a consistent manner to the accessibility features of SMIL.

Status of this document

This document is a draft W3C Note made available by the W3C and the W3C Web Accessibility Initiative. This NOTE has not yet been jointly approved by the WAI Education and Outreach Working Group (EOWG), the WAI Protocols and Formats Working Group (PFWG), and the Synchronized Multimedia (SYMM) Working Group.

Publication of a W3C Note does not imply endorsement by the W3C Membership. A list of current W3C technical reports and publications, including working drafts and notes, can be found at http://www.w3.org/TR.

Table of Contents

1 Introduction

Multimedia presentations rich in text, audio, video, and graphics are becoming more and more common on the Web. They include newscasts, educational material, entertainment, and more. Formats such as SMIL 1.0 can be used to create dynamic multimedia presentations by synchronizing the various media elements in time and space.

Authors can make SMIL 1.0 presentations accessible to users with disabilities by observing the principles discussed in the "Web Content Accessibility Guidelines" [WAI-WEBCONTENT]. The Guidelines explain how to create documents that account for the diverse abilities, tools, and software of all Web users, including people with combinations of visual, auditory, physical, cognitive, and learning disabilities.

Dynamic multimedia presentations differ from less dynamic pages in some important ways that affect their accessibility to people with disabilities.

Part of the responsibility for making SMIL 1.0 presentations accessible lies with the author and part with the user's software, the SMIL player. Authors must include equivalent alternatives for images, video, audio, and other inaccessible media. They must design documents so that users can override author layout and style preferences when necessary. Authors must synchronize tracks correctly, describe relationships between tracks, provide useful default behavior, and mark up the natural language of content.

In turn, SMIL players must allow users to control document presentation to ensure its accessibility. The "User Agent Accessibility Guidelines" [WAI-USERAGENT] explain principles for creating accessible user agents. User control of style and layout as well as user agent configurability are central to these guidelines.

For instance, users with low vision must be able to select a large font size, whatever the author's original design. Users with color deficiencies must be able to specify suitable color contrasts, even if that means overriding the author's preferences.

Players must provide users access to author-supplied media objects, their accessible alternatives, or both. Users must also be able to turn alternatives on and off and control their size, position, and volume. For instance, users with both low vision and hearing loss must be able to select a large font size for text captions. Users might also want to specify how synchronized audio tracks are rendered, for instance by using a male voice for the auditory description to contrast with female voices in the audio track.

Since users with some cognitive disabilities, or people using combinations of assistive technologies such as a refreshable braille display and speech synthesis, may require additional time to view a presentation or its captions, players must allow them to speed up, slow down, or pause a presentation (as one can do with most home video players). Some users may require that time-sensitive information be rendered in a time-independent form altogether. For example, SMIL 1.0 allows authors to create links whose destination varies over time. Some users may not have enough time to select or even notice these links, so players should provide access to them in a time-independent manner. Multimedia players may also offer an index to time-dependent information in a time-independent form.

This Note describes the accessibility features of [SMIL10] and explains how authors and SMIL players should make use of them. Note. Recommendations for authors and SMIL players are made in accordance with [WAI-WEBCONTENT] and [WAI-USERAGENT].

2 Equivalent Alternatives

Multimedia presentations have two main types of equivalent alternatives: discrete and stream. Discrete equivalents contain no time references and have no intrinsic duration. In SMIL, discrete text equivalents are generally specified by attributes such as the alt attribute of the img element.

Stream equivalents, such as text captions or auditory descriptions, have intrinsic duration and may contain references to time. For instance, a text stream equivalent consists of pieces of text associated with a time code. Stream equivalents may be constructed out of discrete equivalents, for instance by using the par (parallel) or seq (sequential) elements and the timing attributes begin and end.

Text equivalents are fundamental to accessibility since they may be rendered visually, as speech, or by a braille device. In multimedia presentations, text stream equivalents must be synchronized with other time-dependent media.

The Web Content Accessibility Guidelines also require that, until user agents can automatically read aloud the text equivalent of a visual track, authors provide an auditory description of the important information of the visual track of a multimedia presentation. This benefits users who may not be able to read text or may not have access to software or hardware for speech synthesis or braille.

The following sections describe in more detail the SMIL features for specifying discrete and stream equivalents for video, audio, text, and other SMIL elements.

2.1 Discrete Equivalents

Authors specify discrete text equivalents for SMIL elements through the following attributes. Discrete text equivalents, when rendered by players or assistive technologies to the screen, as speech, or on a dynamic braille display, allow users to make use of the page, even if they cannot make use of all of its content. For instance, providing a text equivalent of an image that is part of a link will enable someone with blindness to decide whether to follow the link.

alt
For media objects (img, video, audio, textstream, etc.). Specifies a short text equivalent that conveys the function of the media object element. Alternative text may be rendered instead of media content, for instance when images or sound are turned off or not supported by the player.

longdesc
For media objects. Specifies a link to a long, more complete description of media content supplementing the description provided by the alt attribute. Authors should provide long descriptions of complex content, such as charts and graphs.

title
For media objects and most other SMIL elements. Provides advisory information about the element, such as the title of a video.

abstract
For media objects, par, and seq. Summarizes the content of media objects.

author
For media objects, par, and seq. Names the author of media objects.

The following example includes a video element that presents a dynamic graph to illustrate trends in Web commerce and privacy. The alt, title, and abstract attributes specify discrete equivalents that provide information with different granularity. The longdesc attribute designates a more complete text equivalent of the video presentation, with details about what information is being displayed in the graph, the units of the graph, etc. The long description itself might also include links back to anchors associated with key points of the presentation (not shown here) so that users can navigate back and forth.

<video src="rtsp://foo.com/graph.imf"
       title="Web Trends: Graph 1"
       alt="The number of online stores
            and consumers is increasing, but privacy
            is decreasing."
       longdesc="graph1-desc.html"
       abstract="The number of Web users, online stores, and
                 the influence of Web communities are
                 all steadily increasing while privacy for
                 Web users is slowly diminishing. This graph
                 explains the trends and Web technologies
                 that will most impact the future of
                 Web commerce."/>

2.2 Stream Equivalents

Two stream equivalent formats that promote accessibility are captions and auditory descriptions. A caption is a text transcript of spoken words and non-spoken sound effects that is synchronized with an audio stream. Captions benefit people who are deaf or hard of hearing. They also benefit anyone in a setting where audio tracks would cause disturbance, anyone whom ambient noise prevents from hearing the audio track, and anyone who has difficulty understanding spoken language.

An auditory description is a recorded or synthesized voice that describes key visual elements of the presentation including information about actions, body language, graphics, and scene changes. Like captions, auditory descriptions must be synchronized with other audio streams. Since users may have difficulty processing multiple audio tracks at once, auditory descriptions are generally scheduled to play during intervals of the sound track with no dialog. Consequently, long auditory descriptions may affect the timing of the original audio and video tracks since these intervals may be too short. Auditory descriptions benefit people with blindness or low vision. They also benefit anyone in an eyes-busy setting or whose devices cannot show the original video or visual media object.

Below we discuss in more detail how to associate captions and auditory descriptions with multimedia presentations in SMIL 1.0 in a manner that allows users to control the presentation of the alternative stream. We also examine how SMIL 1.0 supports multilingual presentations and how this affects stream equivalents for accessibility. For more information about synchronizing media objects, please consult the SMIL 1.0 specification ([SMIL10]).

2.2.1 Captions

In SMIL 1.0, captions may be included in a presentation with the textstream element. The following example plays an audio track, a video track, and a caption (via the textstream element) in parallel.

   <par>
     <audio      src="audio.rm"/>
     <video      src="video.rm"/>
     <textstream src="closed-caps.rtx"/>
   </par>

The limitation of the previous example is that the user cannot easily turn the caption on or off. Style sheets (in conjunction with markup such as an "id" attribute) may be used to hide the text stream, but only for SMIL 1.0 players that support the particular style sheet language. Note. In CSS, authors may turn off the visual display of captions using 'display: none' and turn them back on using 'display: block'.
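For players that support CSS layout, this approach can be sketched as follows. This is a minimal, hypothetical fragment; the region name "captext" is an assumption, and switching 'none' to 'block' would reveal the captions again:

```xml
<layout type="text/css">
  /* hide the region holding the caption text stream;
     'display: block' here would turn it back on */
  [region="captext"] { display: none }
</layout>
```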

Since user control of presentation is vital to accessibility, SMIL 1.0 allows authors to create presentations whose behavior varies depending on how the user has configured the player. When a SMIL element such as textstream has the system-captions test attribute with value "on" and the user has configured the player to support captions, the element may be rendered. Whether the element is actually rendered depends on other markup in the document (such as language support).

The following example is a TV news presentation that consists of four media object elements: a video track that shows the news announcer, an audio track containing her voice, and two text streams containing a stream of stock values and captions. All the elements are to be played in parallel due to the par element. The caption will only be rendered if the user has configured the player to support captions.

   <par>
     <audio      src="audio.rm"/>
     <video      src="video.rm"/>
     <textstream src="stockticker.rtx"/>
     <textstream src="closed-caps.rtx"
                 system-captions="on"/>
   </par>

The system-captions attribute can be used with elements other than textstream. Like the other SMIL test attributes (refer to [SMIL10], section 4.4), system-captions acts like a boolean flag that returns "true" or "false" according to the player configuration. Section 3.1 illustrates how system-captions can be used to specify different presentation layouts according to whether the user has configured the SMIL player to support captions.

Note. Authors should only use system-captions="on" for captions and system-captions="off" for caption-related effects such as layout changes. This allows players to distinguish accessibility captions from other types of content (which may allow them to avoid overlapping captions and other content automatically, for example).

2.2.2 Auditory Descriptions

In SMIL 1.0, auditory descriptions may be included in a presentation with the audio element. However, SMIL 1.0 does not provide a mechanism that allows users to turn player support for auditory descriptions on or off. Note. In CSS, authors may turn off auditory descriptions using 'display: none', but it is not clear what value of 'display' would turn them back on.
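An auditory description can nonetheless be scheduled alongside the other tracks. The following sketch is illustrative only; the file names and the begin time are hypothetical, chosen so that the description plays during a gap in the dialog:

```xml
<par>
  <video src="video.rm"/>
  <audio src="audio.rm"/>
  <!-- pre-recorded auditory description, timed to start
       during a pause in the main audio track's dialog -->
  <audio src="audio-desc.rm" begin="12s"/>
</par>
```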

2.2.3 Multilingual presentations and stream equivalents

SMIL 1.0 allows authors to create multilingual presentations with subtitles (which are text streams) and overdubs in another language (which are audio streams). Multilingual presentations themselves do not pose accessibility problems. Indeed, providing additional tracks (even in a different language) will probably help many users.

However, multilingual presentations intersect with accessibility: since subtitle and overdub streams may co-exist with text and audio streams provided for accessibility, authors of accessible multilingual presentations should be aware of how they interact. For instance, authors should lay out presentations so that captions and subtitles do not interfere with each other on the screen. Audio tracks should not overlap unless carefully synchronized.

In SMIL 1.0, the system-overdub-or-caption test attribute allows users to select (through the player's user interface) whether they would rather have the player render overdubs or subtitles. Note. The term "caption" in "system-overdub-or-caption" does not refer to accessibility captions. Authors must not use this attribute to create accessibility captions; use system-captions instead.

In the following example, the TV news is offered in both Spanish and English. If the user has configured the player to support both Spanish and overdubs, the Spanish audio track will be rendered. Otherwise the second audio track of the first switch element (the English audio track) will be rendered. Note that since there is only one set of captions (in English), it will be rendered when the user has configured the player to support captions.

   <par>
     <switch> <!-- audio -->
       <audio src="audio-es.rm"
              system-language="es"
              system-overdub-or-caption="overdub"/>
       <audio src="audio.rm"/>
     </switch>
     <video src="video.rm"/>
     <textstream src="stockticker.rtx"/>
     <textstream src="closed-caps.rtx"
                 system-captions="on"/>
   </par>

To add Spanish subtitles to the example, the author would specify a second textstream element. The first text stream will be rendered if the user has configured the player to support accessibility captions. The second textstream element will be rendered if the user has configured the player to prefer subtitles and Spanish.

   <par>
     <!-- audio section same as before -->
     <video src="video.rm"/>
     <textstream src="stockticker.rtx"/>
     <switch> <!-- captions or subtitles -->
       <textstream src="closed-caps.rtx"
                   system-captions="on"/>
       <textstream src="subtitles-es.rtx"
                   system-language="es"
                   system-overdub-or-caption="caption"/>
     </switch>
   </par>

Since captions include text descriptions of actions, sounds, etc. in addition to dialog, they can be more helpful than subtitles. Authors who provide captions need not provide subtitles in the same language since the two are so similar. The following example (based on the previous one) illustrates how to provide a text stream that may serve as either a caption or subtitle, without being rendered twice on the screen.

In the switch element, the three text streams are evaluated in this order: Spanish captions, then Spanish subtitles, then English captions. This design allows authors to reuse captions as subtitles and ensures that the text stream is not rendered twice when the user has configured the player to support both.

   <par>
     <!-- audio section same as before -->
     <video src="video.rm"/>
     <textstream src="stockticker.rtx"/>
     <switch> <!-- captions or subtitles -->
       <textstream src="closed-caps-es.rtx"
                   system-language="es"
                   system-captions="on"/>
       <textstream src="closed-caps-es.rtx"
                   system-language="es"
                   system-overdub-or-caption="caption"/>
       <textstream src="closed-caps.rtx"
                   system-captions="on"/>
     </switch>
   </par>

Note. In SMIL 1.0, values for system-overdub-or-caption only refer to user preferences for either subtitles or overdubs; there are no values for the test attribute that refer to user preferences for neither or both.

3 Layout and Style

3.1 Layout

Authors may specify the visual layout of SMIL 1.0 media objects through SMIL's own layout markup or with a style sheet language such as CSS [CSS1, CSS2]. In both cases, the layout element specifies the presentation information. The "Web Content Accessibility Guidelines" recommend style sheets for a number of reasons (refer to [CSS-ACCESS] for details): they are designed to ensure that the user has final control of the presentation, they may be shared by several documents, and they make document and site management easier. Style sheets may not be supported by all SMIL players, however. SMIL's own layout facilities allow authors to arrange rectangular regions visually (via the region element), much like frames in HTML.

The following example illustrates how to regain space when captions are turned off or not supported. In this example, the same layout is defined both with SMIL markup and CSS2 style sheets. Since both style sheets appear in a switch element, the SMIL player will use the CSS style sheet if supported, otherwise the SMIL style sheet. Note that the type attribute of the layout element specifies the MIME type of the style sheet language, here "text/css".

The style sheets in this example specify two layouts. When the user has chosen to view captions, they appear in a region (the "captext" region) that takes up 20% of available vertical space below a region for the video presentation (the "capvideo" region), which takes up the other 80%. When the user does not wish to view captions, the video region takes up all available vertical space (the "video" region). The choice of which layout to use depends on the value of the system-captions test attribute.

      <switch>
        <layout type="text/css">
          [region="video"]    { top: 0; height: 100% }
          [region="capvideo"] { top: 0; height: 80% }
          [region="captext"]  { top: 80%; height: 20%; overflow: scroll }
        </layout>
        <layout>
          <region id="video" title="fullsize video pane"
                  top="0" height="100%" fit="meet"/>
          <region id="capvideo"
                  title="video pane for use with caption window"
                  top="0" height="80%" fit="meet"/>
          <region id="captext" title="caption pane"
                  top="80%" height="20%" fit="scroll"/>
        </layout>
      </switch>

      <par>
        <switch> <!-- if captions off use first region, else second -->
          <video region="video" src="movie-vid.rm"
                 system-captions="off"
                 title="Video presentation of soccer match, 100% vert"/>
          <video region="capvideo" src="movie-vid.rm"
                 title="Video presentation of soccer match, 80% vert"/>
        </switch> <!-- if captions on render also captions -->
        <textstream region="captext" src="closed-caps.rtx"
                    system-captions="on"
                    title="Caption of soccer match, 20% vert"/>
      </par>

3.2 Style

The only style attribute that can be set in SMIL 1.0 is background-color, but without other color definitions it has little effect on accessibility.
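For instance, an author might give a caption region a solid background. The region name, position, and color below are hypothetical:

```xml
<region id="captext" top="80%" height="20%"
        background-color="black" fit="scroll"/>
```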

4 Navigation and Linking

SMIL 1.0 includes a number of interesting linking features, including HTML-like hyperlinks and image maps (as well as video maps). SMIL 1.0 also allows authors to create time-dependent links that are active only at certain times during a presentation (as defined by the author). To make these hyperlinks accessible, authors must provide textual information, and SMIL players should allow users to control the link rendering.

4.1 Accessible Image/Video Maps

To create an accessible image or video map, authors must describe the nature of each link in the map for users who cannot see or use the visual information. Authors provide the description via the title attribute on the a and anchor elements. This text description may be rendered by SMIL players on the screen or by assistive technologies as speech or dynamic braille.

Here is an example of a video clip with an associated map. Each link describes an active rectangular region of the video via the coords attribute.

<video src="http://www.w3.org/CoolStuff">
  <anchor href="http://www.w3.org/AudioVideo"
          coords="0%,0%,50%,50%"
          title="W3C Multimedia Activity"/>
  <anchor href="http://www.w3.org/Style"
          coords="50%,50%,100%,100%"
          title="W3C Style Sheet Activity"/>
</video>

Note that the anchor element is an empty element (has no content), while the a element has link content.

Until SMIL players are able to present this information to users on demand, authors should also make textual links available in addition to non-text links. Authors might want to control the presentation with the system-captions test attribute. In the following example, text links corresponding to those of the video map will be rendered when the user has configured the player to support captions. The example does not specify a particular screen layout.

  <par>
    <video src="http://www.w3.org/CoolStuff">
      <anchor href="http://www.w3.org/AudioVideo"
              coords="0%,0%,50%,50%"
              title="W3C Multimedia Activity"/>
      <anchor href="http://www.w3.org/Style"
              coords="50%,50%,100%,100%"
              title="W3C Style Sheet Activity"/>
    </video>
    <par system-captions="on">
      <a href="http://www.w3.org/AudioVideo">
         W3C Multimedia Activity</a>
      <a href="http://www.w3.org/Style">
         W3C Style Sheet Activity</a>
    </par>
  </par>

4.2 Accessible Time-dependent Links

The time-dependent linking mechanisms offered by SMIL 1.0 pose an accessibility challenge to both authors and players. The following example from the SMIL 1.0 specification illustrates time-dependent linking. In the example, the duration of a video clip is split into two time intervals: from 0-5 seconds and from 5-10 seconds. A different link is associated with each of these intervals.

<video src="http://www.w3.org/CoolStuff">
  <anchor href="http://www.w3.org/AudioVideo"
          title="W3C Multimedia Activity"
          begin="0s" end="5s"/>
  <anchor href="http://www.w3.org/Style"
          title="W3C Style Sheet Activity"
          begin="5s" end="10s"/>
</video>

Some users require more time than anticipated by the author to interact with the presentation. Therefore, SMIL players should allow users to access all links in a time-independent manner. Until SMIL players enable this, authors should make all time-dependent links available in a static form. This may be done in a variety of ways; for instance, authors might list all time-dependent links in a separate document (and link to it from the presentation). The static list of links should include information about when each link is active during the presentation. This type of catalog helps all users, allowing people to find information about all links associated with a particular media object, or about links active at a particular moment of the presentation.

4.3 Useful navigation mechanisms

Navigation mechanisms help all users but are particularly important for users with blindness or cognitive impairments who may not be able to grasp the structure of a page through visual cues. In addition to HTML-like linking mechanisms that may be used to create site maps and navigation bars, SMIL allows authors to create "temporal navigation bars" that allow users to navigate directly to important points in time of a presentation.

As an example, we first identify key points in a presentation that includes three interviews conducted sequentially (Joe, Tim, then Judy). Each segment is marked by an anchor (identified by the "id" attribute).

     <region id="video" top="0" height="100%" fit="meet"/>

<video region="video" src="http://www.w3.org/BBC"
       title="Future of the Web"
       alt="Interview with Joe, Tim, and Judy for BBC"
       abstract="The BBC interviews Joe, Tim, and Judy about
                 the Future of the Web. Joe and Tim talk about
                 social and technological impact. Judy
                 addresses the benefits to accessibility of
                 good design.">
  <anchor id="joe"
          begin="0s" end="5s"
          title="Joe interview on Web trends"/>
  <anchor id="tim"
          begin="5s" end="10s"
          title="Tim interview on Web trends"/>
  <anchor id="judy"
          begin="10s" end="60s"
          title="Judy interview on Web accessibility"/>
</video>

Authors might add a temporal navigation bar in parallel with the presentation. The navigation bar takes up the lower 10% of the presentation and consists of a photo of each of the interviewees that links to their interview. Selecting a link causes the player to play that part of the video. Refer to the section on opening new windows for information about where the interview will be played.

   <region id="video" top="0" height="90%" fit="meet"/>
   <region id="joe" top="90%" height="10%" fit="meet"/>
   <region id="tim" top="90%" height="10%"
                    left="35%" fit="meet"/>
   <region id="judy" top="90%" height="10%"
                    left="70%" fit="meet"/>

      <par>
        <video region="video" src="http://www.w3.org/BBC">
          <!-- anchors as in the previous example -->
        </video>
        <a href="#joe" title="Joe interview on Web trends">
           <img region="joe" title="Photo of Joe" src="joe-photo.png"/></a>
        <a href="#tim" title="Tim interview on Web trends">
           <img region="tim" title="Photo of Tim" src="tim-photo.png"/></a>
        <a href="#judy" title="Judy interview on Web accessibility">
           <img region="judy" title="Photo of Judy" src="judy-photo.png"/></a>
      </par>

4.4 Opening new windows

The show attribute on the a and anchor elements controls the behavior of the source document containing the link when the link is followed. The default value of the attribute is "replace", which means that the destination presentation replaces the current presentation (in the same window, in audio, etc.).

Two other values for the attribute, "new" and "pause", cause the destination presentation to appear in a new context (e.g., window). Opening new windows without warning may disorient some users with blindness or cognitive impairments and may simply bother others. To promote accessibility, authors should not cause new windows to open without warning. SMIL players should allow users to turn on and off support for opening new windows and orient users when new windows are created. Note. The SMIL 1.0 specification states that implementation behavior may vary once "replace" content has terminated. Some players may resume playing the content that was playing at the time the link was followed. Authors should therefore avoid navigation mechanisms (links) that rely on a particular "replace" behavior.
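When a new window is unavoidable, the warning can be carried in the link's advisory title. The following sketch is hypothetical (the icon file name and the title wording are assumptions):

```xml
<a href="http://www.w3.org/AudioVideo" show="new"
   title="W3C Multimedia Activity (opens a new window)">
  <img src="av-icon.png" alt="W3C Multimedia Activity"/>
</a>
```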

5 Adapting Content to User and System Settings

As mentioned earlier, authors of SMIL 1.0 presentations can define alternative designs based on user or system settings. Authors can test these settings through test attributes set on various elements.

Test attributes for captions, overdubs, and language are described in Section 2.2. SMIL 1.0 also includes attributes to test the speed of connection and some characteristics of the player. Authors may use these tests to tailor the content or style of a presentation according to the user's device or connection. These are the SMIL 1.0 test attributes that may be used with synchronization elements:

system-captions
Tests support for captions (see Section 2.2).

system-overdub-or-caption
Tests support for overdubs or subtitles (see Section 2.2).

system-language
Tests natural language preferences.

system-bitrate
Tests the approximate bandwidth required to render the element. Authors can use this, for example, to specify that by default, high-quality images not be shown over slow connections.

system-screen-depth
Tests the minimum depth of the screen color palette, in bits, required to display the element. This attribute controls the presentation according to the screen's capability to display images or video at a certain color depth.

system-screen-size
Tests the minimum screen size required to display the element. It can be used to control what will be shown to users with a certain screen size.

These attributes may be used, for example, to deliver content more appropriately for various devices and connections: if a connection is slow, the author may specify that images should not be downloaded. While these attributes may make some content more accessible, they may also overly constrain what a user can access. For instance, users may still want to download important images despite a slow connection. Authors should use these attributes conservatively, and players should allow users to override the restrictions built in by the author when necessary.
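As a sketch of this kind of tailoring, an author might offer the same image at several color depths and let the player pick the first one the screen can display. The file names below are hypothetical:

```xml
<switch>
  <img src="photo-24bit.png" system-screen-depth="24"/>
  <img src="photo-8bit.png"  system-screen-depth="8"/>
  <img src="photo-1bit.png"/> <!-- fallback for any screen -->
</switch>
```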

The following example delivers different qualities of video based on available bandwidth. The player evaluates each of the choices in the switch element in order and chooses the first one whose system-bitrate value does not exceed the speed of the connection between the media player and media server.

 <switch> <!-- video -->
   <video src="high-quality-movie.rm" system-bitrate="40000"/>
   <video src="medium-quality-movie.rm" system-bitrate="24000"/>
   <video src="low-quality-movie.rm" system-bitrate="10000"/>
 </switch>

6. To Learn More about Accessibility and SMIL

The first place to learn more about SMIL is the Recommendation itself [SMIL10]. The Synchronized Multimedia home page at the W3C Web site also includes information about SMIL tutorials, SMIL authoring tricks, examples of interesting presentations, player support for SMIL, and links to other sources of information about SMIL.

For more information about making SMIL presentations accessible, authors should consult the Web Content Accessibility Guidelines ([WAI-WEBCONTENT]) and the accompanying techniques document ([WAI-WEBCONTENT-TECHS]), which explains the guidelines in detail and with many examples. Although the techniques document emphasizes HTML and CSS, many of the principles and examples apply to SMIL as well.

Player developers should consult the User Agent Accessibility Guidelines ([WAI-USERAGENT]) and accompanying techniques document ([WAI-USERAGENT-TECHS]), which explains how to design accessible user agents, including synchronized multimedia players.

Developers of SMIL authoring tools should consult the Authoring Tool Accessibility Guidelines ([WAI-AUTOOLS]).

7. Index of SMIL 1.0 elements and attributes

The SMIL 1.0 elements and attributes discussed in this document are listed here, followed by links to their definitions in the SMIL 1.0 specification.

About the Web Accessibility Initiative

W3C's Web Accessibility Initiative (WAI) addresses accessibility of the Web through five complementary activities that:

  1. Ensure that the technology of the Web supports accessibility
  2. Develop accessibility guidelines
  3. Develop tools to facilitate evaluation and repair of Web sites
  4. Conduct education and outreach
  5. Conduct research and development

WAI's International Program Office enables partnering of industry, disability organizations, accessibility research organizations, and governments interested in creating an accessible Web. WAI sponsors include the US National Science Foundation and Department of Education's National Institute on Disability and Rehabilitation Research; the European Commission's DG XIII Telematics for Disabled and Elderly Programme; Telematics Applications Programme for Disabled and Elderly; Government of Canada, Industry Canada; IBM, Lotus Development Corporation, and NCR.

Additional information on WAI is available at http://www.w3.org/WAI.

About the World Wide Web Consortium (W3C)

The W3C was created to lead the Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability. It is an international industry consortium jointly run by the MIT Laboratory for Computer Science (LCS) in the USA, the National Institute for Research in Computer Science and Control (INRIA) in France and Keio University in Japan. Services provided by the Consortium include: a repository of information about the World Wide Web for developers and users; reference code implementations to embody and promote standards; and various prototype and sample applications to demonstrate use of new technology. To date, more than 320 organizations are Members of the Consortium. For more information about the World Wide Web Consortium, see http://www.w3.org/


Many people in W3C and WAI have given valuable comments to this document. The authors would like to thank Charles McCathieNevile, Philipp Hoschka, Judy Brewer, and the SYMM Working Group for their contributions.


A list of current W3C Recommendations and other technical documents can be found at http://www.w3.org/TR.

" Accessibility Features of CSS", I. Jacobs, J. Brewer, eds.
"Cascading Style Sheets, level 2", B. Bos, H. W. Lie, C. Lilley, and I. Jacobs, 17 May 1998.
"Cascading Style Sheets, level 1", H. W. Lie and B. Bos, 17 December 1996. Revised 11 January 1999.
"HTML 4.0 Recommendation", D. Raggett, A. Le Hors, and I. Jacobs, eds., 18 December 1997, revised 24 April 1998.
Synchronized Multimedia Integration Language (SMIL) Specification, Philipp Hoschka, 15 June 1998.
"Authoring Tool Accessibility Guidelines", J. Treviranus, J. Richards, I. Jacobs, C. McCathieNevile, eds.
"Web Content Accessibility Guidelines", W. Chisholm, G. Vanderheiden, and I. Jacobs, eds., 5 May 1999.
"Techniques for Web Content Accessibility Guidelines", W. Chisholm, G. Vanderheiden, and I. Jacobs, eds.
"User Agent Accessibility Guidelines", J. Gunderson and I. Jacobs, eds.
"Techniques for User Agent Accessibility Guidelines", J. Gunderson and I. Jacobs, eds.