Today's situation: Taylor is a deaf student who has an assignment to research the situation in Iraq. He goes to a major news web site to gather information. He finds video clips on the subject, but he doesn't know what is being said because the video is not captioned. [Note: Cable news programs on TV might also be inaccessible. Although there is a law requiring 100% captioning within a few years, right now approximately 35% of cable programming is captioned. There are no laws that address Internet access.]

Amber is a blind student who has an assignment to research changes in the stock market. She goes to a major financial news web site to gather information. She finds a page that describes an animated series of charts showing changes over the last year. Excitedly, she goes to those charts and listens to the narrative, which includes phrases like "Notice the repeated series of lows and highs in mid-year" and "Watch as the Dow Jones total is adjusted over a one-hour period." Since neither the data underlying the charts nor long descriptions of the charts are available, Amber cannot obtain the information she needs.

Tomorrow's situation with accessible SMIL: The major news and financial news web sites, recognizing the need to make information accessible, contract with the captioning and blind-access industries to (a) caption all audio materials, and (b) describe all visual materials. The access materials are stored in different parts of the world and brought together with SMIL. [Note: audio materials means audio only, the audio portion of video, or the audio portion of animations or other timed visual materials.]

As media is converted from television, the Line 21 captions are automatically extracted and re-synchronized with the digital video. [Note: MIT has a project with CNN in which the CNN news program is digitized and the captions are extracted as a transcript. http://www.nmis.org/ An unsynchronized transcript, while helpful, is not equivalent to "full and independent access."] For real-time media, stenographic means are used to provide real-time captioning. [Note: This is already being done by Gary Robson, http://www.cheetahcast.com. Others are coming out with similar solutions, but Gary was the first.]

In a future where automated recognition systems can handle audio and video as well as text, disabled users would be able to submit URLs of inaccessible sites to the recognition systems for reading. [This is actually Gregg Vanderheiden's scenario. He provides a nice, clear scenario of a blind person having the directions on the back of a package read aloud.]

Taylor, the deaf student from the scenario above, is now able to access the information in the audio portion of the video. He creates a synchronized multimedia report of his own, linking to the captioned video and adding his own sign language commentary to the presentation.

Amber, the blind student from the scenario above, is now able to access the information in the visual portion of the web site. She creates a synchronized multimedia report of her own, linking to the animated set of charts and adding her own audio commentary to the presentation.

A teacher wanting to give her deaf and hard-of-hearing students access to a web site that is not accessible uses SMIL to add captions to the audio information. A teacher wanting to give his blind and low-vision students access to a web site that is not accessible uses SMIL to add text and audio descriptions to visual information.
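
To picture how SMIL pulls these separately stored access materials together at playback time, here is a minimal, hypothetical SMIL 1.0 sketch (the server names and file names are made up) that plays a news clip in parallel with a remotely hosted caption stream and an audio description track. The system-captions test attribute lets the player show the captions only for users whose preferences ask for them.

    <smil>
      <head>
        <layout>
          <root-layout width="320" height="300"/>
          <region id="videoregion" left="0" top="0" width="320" height="240"/>
          <region id="captionregion" left="0" top="240" width="320" height="60"/>
        </layout>
      </head>
      <body>
        <par>
          <!-- the news clip itself, served by the news site (hypothetical URL) -->
          <video src="rtsp://news.example.com/iraq-report.rm" region="videoregion"/>
          <!-- caption stream hosted by the captioning service; selected only when
               the user's player preference for captions is turned on -->
          <textstream src="http://captions.example.org/iraq-report.rt"
                      region="captionregion" system-captions="on"/>
          <!-- audio description track hosted by the description service
               (played unconditionally in this simple sketch) -->
          <audio src="http://descriptions.example.org/iraq-report-desc.rm"/>
        </par>
      </body>
    </smil>

Because the video, captions, and descriptions are just URLs, each can live on a different server run by a different organization, yet the player presents them as one synchronized presentation.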