Early draft document for more brainstorming and review
This document includes some themes for further discussion and sample research problems related to the themes extracted from the first RDIG event: Making Collaboration Technologies Accessible for Persons with Disabilities.
Existing W3C WAI guidelines, such as WCAG 1.0, UAAG 1.0 and ATAG 1.0, are largely applicable to collaborative tools, although some additional guidelines may also be needed. Sample accessibility barriers in the current tools:
Research question: What additional guidelines are needed?
Collaborative tools often provide inadequate means of control when information in one medium needs to be transformed into an alternative medium. This is especially true when going from a spatial medium (e.g. visual) to a sequential medium (e.g. audio). Examples:
Research question: What kinds of controls are needed to best support alternative media in synchronous communication tools?
Social interfaces give users real-time information about the communities they belong to. Some important information may not be available to users with disabilities because tools offer it only in one medium. For instance, users who cannot see might not know who is talking, that someone just left the meeting room, or that others are looking at them through a camera. Examples of information that might be important:
Research question: What information is important in social interfaces? How to provide that information in alternative media?
Research question: What guidelines are needed to ensure that privacy-related information is also available in alternative media?
Communities may be able to provide accessibility information collaboratively. For instance, Gottfried had an example of the user community correcting captions in real time. Other similar examples:
Research question: What kinds of tasks are suitable for providing accessibility information collaboratively? What kinds of tools are needed?
Visualizations try to show connections and differences in huge amounts of information, often in innovative ways. For instance, in Alison's last slide users can notice the groups of dots and the dots with different colors almost immediately and ask for more information about the ones that look interesting.
It would be interesting to develop automatic audio descriptions of the visualized scenes based on what users are usually interested in. The visualizations might also be associated with a list of questions that users can ask. CORDA has some examples of automatically created descriptions from scientific data. However, it is often more useful for humans to be given comparisons rather than just exact values, e.g. mortality for lung cancer is about 5 units, while the next cancer types are less than 2 units, and the rest are less than 1/5 unit.
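As a minimal sketch of this kind of comparison-oriented description, the following hypothetical helper turns a label-to-value mapping into a short comparative summary instead of a plain list of exact values. The function name and the data are illustrative assumptions, not part of any existing tool:

```python
def comparative_description(values):
    """Turn a label-to-value mapping into a short comparative summary.

    Assumes all values are positive; ranks them and relates the largest
    value to the runner-up, as a human describer might do.
    """
    ranked = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    top_label, top_value = ranked[0]
    sentence = f"{top_label} is highest at about {top_value:g} units"
    if len(ranked) > 1:
        second_label, second_value = ranked[1]
        ratio = top_value / second_value
        sentence += (f"; that is roughly {ratio:.0f} times the next value,"
                     f" {second_label} at {second_value:g} units")
    return sentence + "."

# Illustrative data only (not real mortality figures):
data = {"lung cancer": 5.0, "colon cancer": 1.8, "other": 0.2}
print(comparative_description(data))
```

A real description generator would of course need domain knowledge to decide which comparisons matter, but even this simple ranking step yields a more listenable summary than reading the raw numbers.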
Research question: How could innovative visualizations be presented for people who don't see them?
The WAI WCAG WG is discussing "accessible graphics" and what that means. There are some guidelines for reading and writing text descriptions of complex data, and some research links about the accessibility of visualizations and scientific data:
Research question: What are the differences when creating graphics intended to be viewed, spoken or printed? How to integrate the information so that it is suitable for all these tasks?
Users with cognitive disabilities might often benefit from even more visualizations or images associated with the words in the text. A visualization might show the users in the virtual conference and change their appearance when they are talking or when they are waiting in line to ask questions. Other visualizations might help users understand the documents or slides that are currently being discussed or waiting to be discussed.
Research question: What other information could be visualized to help users with cognitive disabilities?
Some research also exists on using gestures, sound, touch, etc. to present spatial relations.
Gupta used information about the pages and some heuristics to understand the structure and the different information areas.
It was important that content is not lost in the page transformations, so that users can always get everything they want.
Research question: What kind of metadata vocabulary would be helpful to use in the pages themselves to make segmentation easier and more accurate? Are there changes that should be made to the languages themselves?
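As one point of comparison for such heuristics, the sketch below segments a page purely by its <h1>/<h2> headings using Python's standard HTML parser. This is deliberately simplistic; a real system like the one described above would combine many more cues (layout, link density, and any explicit metadata), and the class name is an assumption for illustration:

```python
from html.parser import HTMLParser

class Segmenter(HTMLParser):
    """Split a page into (heading, body text) segments at h1/h2 tags."""

    def __init__(self):
        super().__init__()
        self.segments = []              # completed (heading, text) pairs
        self._in_heading = False
        self._current = ["(preamble)", ""]  # text before the first heading

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            # A new heading closes the previous segment.
            self.segments.append(tuple(self._current))
            self._current = ["", ""]
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2"):
            self._in_heading = False

    def handle_data(self, data):
        # Route character data into the heading or the body of the segment.
        self._current[0 if self._in_heading else 1] += data

    def close(self):
        super().close()
        self.segments.append(tuple(self._current))

s = Segmenter()
s.feed("<h1>News</h1><p>Today it happened.</p><h2>Weather</h2><p>Rain.</p>")
s.close()
print(s.segments)
```

Even this toy version shows why explicit metadata would help: pages that misuse headings for styling, or that mark sections visually rather than structurally, defeat the heuristic entirely.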
It should be easy to adjust the services to different user needs and the devices used.
Research question: How to best adjust the services to different user needs and devices so that the user has the ultimate control? What kind of vocabulary (e.g. in CC/PP) is needed?
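One way to picture the needed vocabulary is as a profile of delivery capabilities and preferences matched against what a service can offer. The sketch below does this with plain dictionaries; the attribute names ("audio-output", "preferred-order") are illustrative assumptions, not taken from any published CC/PP vocabulary:

```python
def choose_media(profile, available,
                 default_order=("text", "audio", "graphics")):
    """Pick the medium the user prefers among those both supported
    by the user's device profile and offered by the service.

    profile   -- CC/PP-style attribute dict, e.g. {"audio-output": True}
    available -- set of media the service can deliver
    Returns the chosen medium, or None if nothing matches.
    """
    supported = {m for m in default_order
                 if profile.get(f"{m}-output", False)}
    # The user's stated preference order wins over the default order.
    for medium in profile.get("preferred-order", default_order):
        if medium in supported and medium in available:
            return medium
    return None

profile = {"audio-output": True, "text-output": False,
           "preferred-order": ["text", "audio"]}
print(choose_media(profile, {"audio", "text"}))
```

The point of the sketch is the split of responsibilities: the profile describes the user and device, the service describes what it can deliver, and a small matching step in between keeps the final choice with the user.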
If information is presented via different media, the media should adjust so that the user can concentrate on the content. For instance, having many services offer content simultaneously in audio is not acceptable; the services need to negotiate and let the user have the final control.
Research question: What kinds of mechanisms are needed in Web services to automatically negotiate which service has control of the media streams, and to let the user manually override that when necessary?
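A minimal sketch of such a negotiation mechanism: services request a shared audio channel, only one holds it at a time, the rest queue, and the user can override the arbitration at any point. The class and method names are assumptions for illustration, not an existing API:

```python
class AudioChannelArbiter:
    """Grant one service at a time exclusive use of the audio channel."""

    def __init__(self):
        self.holder = None   # service currently allowed to use audio
        self.queue = []      # services waiting for the channel

    def request(self, service):
        """Grant the channel if free, otherwise queue the service.
        Returns True if the service now holds the channel."""
        if self.holder is None:
            self.holder = service
        elif service != self.holder and service not in self.queue:
            self.queue.append(service)
        return self.holder == service

    def release(self, service):
        """Give the channel to the next waiting service, if any."""
        if self.holder == service:
            self.holder = self.queue.pop(0) if self.queue else None

    def user_override(self, service):
        """The user always has final control over who may use audio."""
        if self.holder is not None and self.holder != service:
            self.queue.insert(0, self.holder)   # preempted, waits first
        self.queue = [s for s in self.queue if s != service]
        self.holder = service
```

In a real multi-service setting the arbitration would have to work across service boundaries (which is exactly the open question above), but the user-override method illustrates the principle that automatic negotiation must never take the final decision away from the user.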
$Date: 2003/05/11 14:08:31 $ $Author: marja $