Presenter's Name: Ed Sims, Ph.D.

Affiliation: Vcom3D, Inc.

List your research goal(s) or the research question(s) you are trying to answer:

Our goal is to make text- and audio-based content available to Deaf persons in sign language at an affordable cost. Of particular interest is providing sign language interpretations of educational material for children in elementary and middle grades who have difficulty reading. Research questions related to this goal include:

1. How can sign language equivalents of text material best be created and distributed economically via the Web?

2. What are the relative advantages and disadvantages of representing information to be signed as a) natural spoken language text that has been marked up to simplify translation, b) gloss or phonetic descriptions of sign language, or c) stored animations of sign language?

3. Are there any related initiatives (e.g., semantic markup languages) oriented toward making Web content available in multiple spoken languages that may contribute toward providing sign language accessibility?

Describe or list the complex information that you are concerned about making accessible:

We wish to make text and spoken language of any type accessible to persons who, due to deafness, have difficulty learning to read. Our emphasis is on providing sign language accessibility to educational materials for elementary and middle-grade students.

Which user task are you studying? Provide a scenario.

Background. Children who are pre-lingually deaf (i.e., deaf at birth or deafened before developing spoken language skills) face significant challenges in learning to read. Over 90% of the parents of these children are hearing, and most of these parents do not develop skills in sign language. Their children therefore have very limited opportunities to learn language compared with their hearing counterparts. By the time they reach school age, they have missed opportunities to develop either spoken or sign language skills, and are therefore ill prepared to learn to read. A complicating factor in providing sign language translations is that sign languages use a completely different grammar than spoken languages, and include not only motions of both hands and arms but also facial expression and body posture. Furthermore, countries or regions that share a spoken language frequently use different sign languages: for example, American Sign Language is radically different from British Sign Language, and Mexican Sign Language differs from Spanish Sign Language.

Use Scenario.  Deaf students use a "Signing Avatar" accessibility agent to view sign-enabled educational Web sites in American Sign Language.  The students may either view full ASL translations of content, or ASL definitions of difficult new terms. 

Which modalities (haptic, aural, visual) and input or output devices are you using to address accessibility issues? List any thoughts you have about using multiple modalities to create accessible interfaces:

We are using the visual modality to make text or speech information available in sign language.  Although text is visual, it may not be accessible to persons whose first language is sign.  Specifically, an animated character or "avatar" uses facial expression as well as manual (hand/arm) signs to communicate information. 

In 5 sentences, how are you attempting to address your research goal(s) or question(s)? Either list the specific technologies you are using or provide a general description of how you are using the technologies.

We have developed the ability to synthesize animated sign language from low-bandwidth, text-based gloss or phonetic descriptions. These animations are represented and rendered using the Web3D Consortium's Humanoid Animation (H-Anim) and Extensible 3D (X3D) specifications. All of our work to date has been in creating American Sign Language (ASL) translations of English, but the technology is extensible to translations between other spoken and signed languages. We have developed an authoring tool that provides semi-automated translation to gloss and/or phonetic notation, which is then animated by a Signing Avatar user agent. We are currently researching the feasibility of creating sign language animations automatically from English text that has been marked up semantically to disambiguate word senses and anaphoric references.

Note: Gloss is a text description of the meaning of a signed utterance; phonetic notation describes the signing visually in terms that include handshape, motion, and facial expression.
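To make the gloss notation above concrete, the following is a minimal, purely illustrative sketch of the kind of word-to-gloss lookup a semi-automated authoring tool might start from. The dictionary entries, gloss conventions, and function names here are hypothetical examples for illustration, not Vcom3D's actual data or tool.

```python
# Toy English-to-ASL-gloss lookup (illustrative only).
# ASL gloss is conventionally written in upper case; here unknown words are
# marked with a hypothetical "fs-" prefix to indicate fingerspelling.
# Real translation must also reorder words for ASL grammar and attach
# non-manual markers (facial expression, body posture), which a flat
# word-by-word lookup like this cannot do.
GLOSS_LEXICON = {
    "science": "SCIENCE",
    "book": "BOOK",
    "read": "READ",
    "the": None,  # many English function words have no separate ASL sign
}

def text_to_gloss(sentence: str) -> list[str]:
    """Map each English word to a gloss token, fingerspelling unknown words."""
    glosses = []
    for word in sentence.lower().split():
        gloss = GLOSS_LEXICON.get(word, "fs-" + word.upper())
        if gloss is not None:  # drop words with no signed equivalent
            glosses.append(gloss)
    return glosses

print(text_to_gloss("Read the science book"))  # ['READ', 'SCIENCE', 'BOOK']
```

In a full pipeline, each gloss token would index a stored or synthesized animation segment (e.g., an H-Anim joint-rotation sequence), which is why a gloss stream can be transmitted at far lower bandwidth than video of a human signer.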

Include any visual or aural illustrations that you would like to use during your presentation:

1. Video of animated American Sign Language (ASL) synthesized from text.  signsci.mov

2. Screen capture from Signing Science Web site.  signsci.png

List resources that you will reference during your presentation

Humanoid Animation (H-Anim) Specification, http://www.web3d.org/x3d/specifications/h-anim_specification.html

Roush, D., Providing Sign Language Access to the Curriculum for Deaf Students, Proceedings of the Center on Disabilities Technology and Persons with Disabilities Conference 2004, http://www.csun.edu/cod/conf/2004/proceedings/200.htm

Signing Science Web Site, signsci.terc.edu

X3D Draft Specification Standards, http://www.web3d.org/x3d/specifications/x3d_specification.html