SVG Accessibility/People and Issues

From W3C Wiki

SVG accessibility issues, requirements, etc., grouped according to broad types of disability.

Note that there will be various crossovers here because one person may have several types of disability.

More usefully, we may be able to find approaches to accessibility that benefit many kinds of user at once - but we should be careful to identify and avoid solutions which help one kind of user at the expense of another.

Users who are blind

Typically a blind person will use a screen reader with speech and/or braille output. Input is usually keyboard or touch, but may include speech recognition and alternate input devices.

Requirements for users who are blind

  • Be able to determine the name of the SVG content.
  • Be able to access a description of SVG content.
  • Be able to identify the purpose of SVG content (graphic, widget, data representation etc.).
  • Be able to understand relationships expressed through SVG content (flowcharts, groups, hierarchies etc.).
  • Be able to access text within SVG content.
  • Be able to navigate through interactive SVG content in a logical order/way.
  • Be able to use interactive SVG content from the keyboard or by touch.
  • Be able to move in and out of SVG content without being trapped.
  • Be able to determine the purpose of interactive elements/controls within SVG content.
  • Be able to determine the values of SVG objects and controls.
  • Be able to use the appropriate language profile when listening to SVG content.
  • Be able to find styling (geometry, color) of elements that have semantic meaning.

Use cases for users who are blind

  • An economics student needs to study an SVG chart representing fluctuations in the Gross National Product over the past 50 years. The student is blind and uses a screen reader with speech and braille output.
  • An HR manager needs to use a resourcing application with an SVG interface that contains multiple interactive widgets for selecting resources and assigning them to project teams. The HR manager is blind and uses a touch screen device with a screen reader and speech output.
  • A meteorologist looks for information on a website that uses SVG icons to denote different weather patterns on a map, where the map can be manipulated to display more/less ground coverage. The meteorologist is blind and uses a keyboard and speech recognition, with a screen reader and speech output.
  • A chemist needs to review an interactive SVG of a chemical compound.
  • A blind geologist needs to understand an SVG of a phase diagram.

Text-based presentation

Typical approaches rely on a screen reader, or on a refreshable braille line (a display of typically 40 or 80 braille characters). Note that another use case for this information is image search - matching a text query to an image that has a textual representation is often more reliable than relying on aspects such as filenames or link text alone.

In order to present a navigable text version, key strategies include providing desc and title content, and structuring the image into groups of elements where relevant (with title and desc elements appropriate to the point in the structure).
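A minimal sketch of this structure (the chart content, text and coordinates are invented for illustration; title and desc are per the SVG specification):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 300" role="img">
  <title>GNP, 1970 to 2020</title>
  <desc>Line chart of Gross National Product over 50 years, rising overall
        with a dip in the early 1990s.</desc>
  <g>
    <title>Axes</title>
    <desc>Horizontal axis: year, 1970 to 2020. Vertical axis: GNP in billions.</desc>
    <line x1="40" y1="260" x2="380" y2="260" stroke="black"/>
    <line x1="40" y1="20" x2="40" y2="260" stroke="black"/>
  </g>
  <g>
    <title>Data</title>
    <desc>One series of yearly values.</desc>
    <polyline points="40,200 150,170 260,190 380,80" fill="none" stroke="black"/>
  </g>
</svg>
```

Each group carries its own title and desc, so a screen reader user navigating the structure gets a summary appropriate to that point in the tree.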

Haptics / sonification

Braille printers, braille displays, haptic-feedback screens and devices

There are various hardware devices which can add feedback that you can feel - for example through vibration or electrostatic interactions - to an image rendered on a screen.

In addition, images can be printed on braille devices or thermal plastic printers (which produce a 3D output and can layer objects on each other as if they were a stack of shapes) in order to provide a tactile version of the image. Note that the resolution of something that can be discerned by most people's hands can be fairly coarse.


When the information in a graphic cannot be readily converted to discrete text elements -- e.g. a very busy scatterplot or line chart -- a screen reader may be able to use audio tones to convey the same information.

Tree of graphic

Provide a navigable tree of the graphic. The graphic should have a description. Each branch of the tree should identify itself (role) and provide appropriate summary information. A user should be able to use tree controls to navigate the entire tree down to the leaves if desired. Leaves should describe themselves: for a chart's data items, a value tuple; for a chart's guide items, their scale value. Connectors should be navigable, describe what they connect, and provide any information about the connection, such as the weight or value of the connection.

The tree should follow a predictable form. For statistical charts, NCAM suggested the order: title, axes, legends, data, reference elements. Map order could be: title, scales, symbols, legends, map features and reference features. Technical drawing order could be: title, scales, view.

The structure of content in the document tree should be based on a logical structure/reading order, not on drawing order (which could instead be controlled with z-index).
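The suggested statistical-chart order can be sketched as a document skeleton (group contents elided; titles invented for illustration):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 300">
  <title>Chart title</title>
  <g><title>Axes</title><!-- axis lines, ticks, axis labels --></g>
  <g><title>Legends</title><!-- legend swatches and labels --></g>
  <g><title>Data</title><!-- data marks, in logical series order --></g>
  <g><title>Reference elements</title><!-- reference lines, annotations --></g>
</svg>
```

Here the source order is the reading order; the painting order happens to coincide, but the point is that the tree a user navigates is organized by meaning, not by when things were drawn.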

Users who are color blind


  • Meaning conveyed by style is perceivable to users.
    • Any information that is presented through color is also presented in another way that does not rely on color.
  • Hue contrast is sufficient for certain color disorders.
  • Focus can be perceived.

Use cases

A color blind user looking at a scatterplot where the color represents a continuous variable.

A color-blind (or blind, or monochrome high contrast) user has instructions from a non-color-blind user including "press the red button, not the green one".


Add label when color represents data

Add a text label when a color value represents a data value. The data value may be formatted.

Add title (tooltip) when color represents data

Add a title property to the item representing data, so the user can see a formatted data value when they mouse over it.
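Both techniques can be sketched together on a single data point (values and colors invented for illustration):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <g>
    <!-- The red fill encodes "Category A"; the label and title repeat
         the information so it does not rely on color alone. -->
    <circle cx="40" cy="60" r="4" fill="#d62728">
      <title>Category A: 12.5</title>
    </circle>
    <text x="48" y="63" font-size="8">12.5</text>
  </g>
</svg>
```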

Replace colors with patterns

Replace color coding with distinguishable fill patterns (e.g. hatching, dots, cross-hatching).
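For example, a hatched pattern fill can stand in for a solid category color (the pattern geometry here is invented):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <defs>
    <pattern id="hatch" width="4" height="4" patternUnits="userSpaceOnUse">
      <path d="M0,4 L4,0" stroke="black" stroke-width="1"/>
    </pattern>
  </defs>
  <!-- A bar that would otherwise be distinguished only by color -->
  <rect x="10" y="20" width="20" height="70" fill="url(#hatch)"/>
</svg>
```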

Add aural tone

Add an aural tone to the data item, so the user hears the tone when mousing over the data item.

Find style information

A user should be able to find, e.g., the color (or dominant color) of a particular element or object.

Users with low vision who use an AT

Use cases

People use a screen reader to assist them in reading text (see also cognitive disabilities).

People who use assistive input devices

Use cases

  • Voice input
  • "Modified" keyboards - e.g. switch interfaces, half-keyboards
  • Eye-trackers

Users with low vision who don't use an AT


  • SVG can be panned and zoomed
  • Text and graphics can be resized, ideally independently.
  • Luminosity contrast can be changed.
  • Typeface, font weight, font style can be changed.
  • Stroke thickness and color/contrast can be changed
  • Meaning conveyed by style is perceivable to users.
  • Should play well with high contrast mode.

Use cases

  • An older user who forgets their reading glasses wants to read a website.
  • A student with glaucoma (loss of peripheral vision) needs to interpret a scatterplot where the data points' fill color represents a category. The student needs to answer the questions: what is the maximum y value for each category, and which categories appear in the upper third of the chart? The data points are too small for him to distinguish their color at the native scale, and the horizontal grid lines are too faint for him to determine the y value.
  • A user with retinopathy (dark patches in field of view)
  • A user who reads yellow text on a black background better than other text/background color combinations.
  • A legally blind engineer needs to read a circuit diagram.


Choice of background and foreground/text colors

The background color will be used on the background of the graphic - so if there was originally an image, gradient or color on the background, it will be replaced by the background color. Text will be drawn in the foreground/text color. Guides (axes, scales, symbols) are drawn in the foreground color. Legends will use the background color and foreground color, with only the swatches and symbols not affected.

Text magnification

Text magnification will change the font size. A magnification of 1.4 would change a 10pt font to a 14pt font.

Bold text

Bold text would force all text font-weight to be bold.

Show borders

Show borders would add a border to data items, but not guide items. The border would be in the foreground color, or, if no foreground color is specified, black or white, whichever produces more contrast with the background.
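One way to sketch this is a style rule scoped to data items (the class name and geometry are hypothetical):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <style>
    /* Border in the foreground color; guide items are left untouched */
    .data { stroke: black; stroke-width: 1; }
  </style>
  <line x1="0" y1="90" x2="100" y2="90" stroke="#ccc"/> <!-- guide item -->
  <rect class="data" x="10" y="40" width="20" height="50" fill="yellow"/>
</svg>
```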

Add label when color/pattern/shape/style represents data

Add a text label to the item representing data, so the user can see a formatted data value.

Add title (tooltip) when color/pattern/shape/style represents data

Add a title property to the item representing data, so the user can see a formatted data value when they mouse over it.

Users with cognitive and neurological issues


  • User can manage distractions.
    • Autoplay of any moving content can be prevented.
    • Blinking or movement can be easily stopped.
    • Meaningful animations can be slowed down or stepped through with pause/play control.
    • Content can be zoomed to focus on only a relevant area
  • User can understand the connections between content
    • Layout helps user find important content.
    • Labels, legends and controls can be associated with the correct graphics, styled appropriately and rendered independent of "global" zoom/pan (see also low-vision users)
    • Flows/connections within the graphic can be clearly represented
  • User can understand essential information/access data in non-graphical/non-symbolic forms
    • Charts and diagrams may help most people understand, but not if spatial processing or abstract/analogical thinking is compromised!
  • User can avoid personal risk.
    • Flash can be prevented.
    • Content containing flash can be reliably warned about in advance and avoided.
    • Unexpected flash can be immediately stopped.
  • Audio-sensitive seizures
    • The triggering condition can be prevented.
    • Content containing the triggering condition can be reliably warned about in advance and avoided.
    • An unexpected triggering condition can be immediately stopped.
  • Be able to view an individual item (with semantic meaning) by subduing (graying out) competing parts of the same feature. A competing part of the same feature would refer to other lines in a line chart, other points in a scatter plot, etc.
  • Be able to highlight a label associated with an item.
  • Be able to extract data values associated with an item, via labels, tooltips or other means.
  • Be able to highlight connectors/relationships.

Use cases

  • A person wants to follow a map, but finds it difficult when the map isn't in the same direction they are already facing
  • A dyslexic user who finds black text on a white background disturbing and prefers black text on an antique-white background, or prefers a specific font.
  • A user with attention deficit disorder wants to stop moving/animated content in order to help them focus on data or text.
  • A person with compromised short-term memory who needs to identify something on a map, but repeatedly has to switch between the map and the legend to understand the map symbols.
  • A user wants to limit their view of a complex image to focus on a particular area and minimise distraction


Animation / Video / Audio timing control

The user agent should provide controls to pause and play animation, and also allow users to choose whether it plays automatically (with some notification if animation is available but paused by default). Users should be able to change the speed of animation and rewind/restart it as necessary.

Alternate presentation of data

Data for charts should also be available as tables or other structured text formats. Information in legends/axis labels should also be available when directly examining the data (e.g., as tooltips).

Meaningful user stylesheet control

Graphical effects (e.g. filters, gradients, decorative backgrounds) can be disabled; fonts and color/contrast can be changed to ease readability; any information conveyed by style is also presented in alternate ways.

Users with mobility impairments


  • User can access all information (e.g., extra data in title tooltips) regardless of input device.
  • User can navigate to all interactive controls and links within the graphic / SVG application.
  • The user's input options can be mapped to all behavior that has an interactive effect.
  • Make keyboard interaction efficient.
  • Address navigation strategies.
  • Accommodate multiple forms of interaction (speech, keyboard, touch, switch).
  • Address user feedback (progress, context).

Use cases

  • A user needs to review a science diagram of the water cycle. The science diagram has embedded videos and links to external content.
  • A keyboard-only user is studying a scatterplot where the precise data values of points are available as hover tooltips.
  • A user navigating with a head-motion joystick is trying to zoom or pan a dynamic data chart which changes the visible data based on the usual mouse/touch zoom and scroll actions.


Keyboard (or other input device) trigger of tooltips / hover effects

The user agent should provide means of accessing hover content without requiring pointing devices, and should make it easy to detect which elements have this hidden content.

Ensure interactive content can receive keyboard focus

By using <a> elements for SVG 1.1, or tabindex for SVG 2. Ideally, provide support for directional keyboard navigation where appropriate.
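Sketches of both approaches (ids and geometry invented for illustration):

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 200 100">
  <!-- SVG 1.1: wrapping in a link makes the shape keyboard-focusable -->
  <a xlink:href="#detail"><rect x="10" y="10" width="40" height="20"/></a>
  <!-- SVG 2: tabindex places the shape directly in the tab order -->
  <rect x="60" y="10" width="40" height="20" tabindex="0"/>
</svg>
```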

Use input device-neutral events to trigger actions

E.g., use the scroll event, not the mousewheel event, to capture and re-interpret scroll actions from various input devices.
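A sketch, assuming the SVG sits inside a scrollable container with id chart-viewport and a redraw function redrawVisibleData (both hypothetical); the scroll event fires for wheel, touch, keyboard and scrollbar input alike:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 300">
  <script><![CDATA[
    // Re-interpret scrolling of the surrounding container instead of
    // listening for the input-specific mousewheel event.
    var viewport = document.getElementById('chart-viewport'); // assumed ancestor
    viewport.addEventListener('scroll', function () {
      redrawVisibleData(viewport.scrollLeft); // hypothetical redraw
    });
  ]]></script>
</svg>
```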

Users who are Deaf or hard of hearing


Be able to get a non-aural form of aural information.

Use cases

A user needs to review a science diagram of the water cycle. The science diagram has embedded videos.


Embedded content that affects deaf users is audio and video. The track element must be a child of the embedded content (audio, video) and doesn't need to be addressed separately.

We should use recommendations from HTML5. Links for audio in SVG2 and HTML5

Links for video in SVG2 and HTML5



  • Users can understand content, navigation and available interactions.
  • Users are not confused
    • Style does not imply perceived affordance of non-actionable features.
    • Predictability / consistency of design.
    • Minimize reliance on user short-term memory.
    • Do not require specific physical characteristics.
  • Content encoded in a manner that permits machine transformation.
  • Adjustability of output
    • Visual presentation can be adjusted to meet needs of user.
  • Enable direct perception of output for people with a wide range of perception disabilities.