This Wiki page is edited by participants of the HTML Accessibility Task Force. It does not necessarily represent consensus and it may have incorrect information or information that is not supported by other Task Force participants, WAI, or W3C. It may also have some very useful information.

Canvas Accessibility Use Cases

From HTML accessibility task force Wiki
Revision as of 17:55, 8 July 2011 by Schepers (Talk | contribs)



These are use cases identified as unresolved for the Canvas 2D API. These are summaries, and do not necessarily have established consensus or priority.

Note: some of these issues are general graphics accessibility issues which should be solved in SVG as well.

Hit Testing

  • A user should be able to use a pointer device or keyboard to select a point on the drawing area and determine whether that point falls inside or outside a specific shape or object's region
  • A user should be able to activate links or other "active areas" by selecting them with a pointer device or keyboard
    • A user agent should be able to bind the results of hit testing to the associated canvas subtree element, and dispatch events to the DOM accordingly
  • A content author should be able to provide these capabilities without adding special custom code
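The hit-testing requirement above can be sketched in plain JavaScript. In a browser, the existing `CanvasRenderingContext2D.isPointInPath()` method answers the inside/outside question for the current path; the version below uses a self-contained ray-casting point-in-polygon test instead, plus a hypothetical registry (`registerHitRegion`, `hitTest` are illustrative names, not a standard API) that binds outlines to the ids of fallback elements in the canvas subtree so a hit can be redispatched to the DOM:

```javascript
// Ray-casting point-in-polygon test: count how many polygon edges a
// horizontal ray from (x, y) crosses; an odd count means "inside".
function pointInPolygon(x, y, vertices) {
  let inside = false;
  for (let i = 0, j = vertices.length - 1; i < vertices.length; j = i++) {
    const [xi, yi] = vertices[i];
    const [xj, yj] = vertices[j];
    if ((yi > y) !== (yj > y) &&
        x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// Hypothetical registry binding shape outlines to canvas-subtree element
// ids, so a successful hit can be turned into a DOM event dispatch.
const hitRegions = [];
function registerHitRegion(id, vertices) {
  hitRegions.push({ id, vertices });
}
function hitTest(x, y) {
  // Walk back to front so the most recently drawn (topmost) shape wins.
  for (let i = hitRegions.length - 1; i >= 0; i--) {
    if (pointInPolygon(x, y, hitRegions[i].vertices)) {
      return hitRegions[i].id;
    }
  }
  return null;
}
```

For example, after `registerHitRegion('slice-a', [[0,0],[10,0],[0,10]])`, a click at (2, 2) resolves to `'slice-a'` while a click at (9, 9) resolves to no shape, and the returned id tells the page which fallback element should receive the event.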

Magnification

  • A user should be able to "zoom in on" (magnify) a specific portion of a dynamic graphic
    • The user should be able to select a specific area to zoom in on, regardless of what that area contains
    • The user should be able to choose an appropriate "zoom level" or range of magnification to view the object in the proper context (e.g. they may wish to see objects around the focused object, as well)
  • A content author should be able to provide these capabilities with the same API as is used to draw the rendered shape, or with an easy API to maintain a parallel logical structure (such as a subtree or shadow-tree DOM element)
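The zoom behaviour described above is, at bottom, a coordinate transform. A minimal sketch, assuming the scene is redrawn after calling `ctx.setTransform(scale, 0, 0, scale, tx, ty)` (the function names here are illustrative): compute a translation that places the user-chosen focus point at the canvas centre at the requested magnification, and clamp the magnification to a sensible range so surrounding context remains visible.

```javascript
// Compute setTransform() arguments that magnify around a focus point.
// After applying { scale, tx, ty }, the focus point lands at the centre
// of the canvas, with surrounding objects still drawn around it.
function zoomTransform(canvasWidth, canvasHeight, focusX, focusY, scale) {
  const tx = canvasWidth / 2 - focusX * scale;
  const ty = canvasHeight / 2 - focusY * scale;
  return { scale, tx, ty };
}

// Keep the user-chosen zoom level within an author-defined range,
// so the view never loses all surrounding context.
function clampZoom(requested, min = 1, max = 8) {
  return Math.min(max, Math.max(min, requested));
}
```

For a 200×100 canvas with focus point (50, 50) at 2× zoom, this yields `{ scale: 2, tx: 0, ty: -50 }`, which maps the focus point to the canvas centre (100, 50).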

Explorability

  • The user should be able to select a specific shape or object regardless of what area that object is in
    • The user should be able to automatically track an object despite changes in its size, shape, or location
    • The user should be able to request additional information or metadata about that object (e.g. intrinsic characteristics like size, general shape, color, patterns or style, and extrinsic data like textual titles, labels, and descriptions, and possible relationship with other objects)
  • A content author should be able to provide these capabilities with the same API as is used to draw the rendered shape, or with an easy API to maintain a parallel logical structure (such as a subtree or shadow-tree DOM element)
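The "parallel logical structure" above can be as simple as a registry keyed by object id, holding both intrinsic characteristics and extrinsic metadata, and updated on every redraw so tracking survives changes in size, shape, or location. This is a sketch under the assumption that the author maintains such a registry alongside the drawing code (the record shape and function names are illustrative; in practice the data might live in the canvas's fallback DOM subtree):

```javascript
// Parallel logical structure: one record per drawn object, merging
// intrinsic data (bounds, color) with extrinsic data (label, description,
// relationships to other objects).
const sceneObjects = new Map();

function describeObject(id, info) {
  // Merge new information into any existing record for this object.
  sceneObjects.set(id, { ...sceneObjects.get(id), ...info });
}

// Tracking: every redraw reports the object's current geometry, so a
// query always reflects where the object is now.
function moveObject(id, bounds) {
  describeObject(id, { bounds });
}

function queryObject(id) {
  return sceneObjects.get(id) || null;
}
```

Assistive technology (or author script acting on its behalf) can then answer "what is this object?" on demand, e.g. `queryObject('bar-q1').label`, without re-deriving anything from pixels.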

Dynamic Focus

  • The user should be able to select important objects through a variety of modalities (including mouse, keyboard, and touch interfaces)
    • The user should be able to switch between multiple objects through a variety of modalities
    • The user should be able to choose the most important or immediate object from a collection of selectable objects
    • The content author should be able to alert the user of changes in priority among all selectable objects
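One way to realise the focus requirements above is a small focus manager over the set of selectable objects: cycling handles Tab/swipe-style movement between objects, and a priority field lets the author flag which object is most important right now. This is a sketch with illustrative names, not an established API:

```javascript
// Minimal focus manager for selectable canvas objects. next() cycles
// focus in registration order (e.g. on Tab or swipe); priorities let the
// author redirect focus to the most urgent object when things change.
class FocusManager {
  constructor() {
    this.items = [];   // { id, priority }
    this.index = -1;   // nothing focused yet
  }
  add(id, priority = 0) {
    this.items.push({ id, priority });
  }
  next() {
    // Wrap around so repeated presses visit every object.
    this.index = (this.index + 1) % this.items.length;
    return this.items[this.index].id;
  }
  setPriority(id, priority) {
    // Author-side hook: raise an object's priority when it needs attention.
    const item = this.items.find((it) => it.id === id);
    if (item) item.priority = priority;
  }
  focusMostImportant() {
    // Jump focus to the highest-priority object, e.g. after an alert.
    let best = 0;
    for (let i = 1; i < this.items.length; i++) {
      if (this.items[i].priority > this.items[best].priority) best = i;
    }
    this.index = best;
    return this.items[best].id;
  }
}
```

Pointer, keyboard, and touch handlers can all funnel into the same manager, so every modality sees a consistent notion of "the focused object".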

Others

See also Active Image Accessibility Use-cases for use cases with a different granularity. For convenience, the contents of that page are included below:

This page is for listing accessibility problems with active (probably script-generated) images that may need to be solved.

  • Low- or no-vision users may have difficulty reading text drawn into an image. Solutions may involve keeping around the original text, so it can be accessed by assistive technologies on demand.
  • Low- or no-vision users may have difficulty determining the connections between far-flung sections of a complex image, such as a graph, because they cannot easily assimilate the entire image's information at once. Solutions may involve annotating sections of an image with descriptions that can be accessed by assistive technology on demand.
  • Low-vision users using a magnifier to aid in resolving details can't see the entire application at once, and so don't know if something is happening that requires their attention in a part of the application that's not currently being magnified. Solutions may involve telling the magnifier about active areas, so it can alert the user and pan/zoom appropriately.
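The first solution above, "keeping around the original text", can be sketched as a thin wrapper over the drawing context that records every string drawn to the canvas, so the page can later expose those strings to assistive technology (for instance by mirroring them into the canvas's fallback content). The wrapper name and record shape are illustrative assumptions:

```javascript
// Wrap a 2D context so every fillText() call also keeps an accessible
// copy of the text and its position. `ctx` may be a real canvas context
// in a browser, or null when only the recorded text is needed.
function makeRecordingContext(ctx) {
  const drawnText = [];
  return {
    drawnText,
    fillText(text, x, y) {
      drawnText.push({ text, x, y });            // accessible copy
      if (ctx && ctx.fillText) ctx.fillText(text, x, y);  // real drawing
    },
  };
}
```

An assistive technology (or author script) can then read `drawnText` on demand instead of trying to recover the strings from rendered pixels.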