WebSchemas/Accessibility/Issues Tracker/OldContent

From W3C Wiki

Old Content

There were many tables, bits of information, and views created between the 0.6 and 0.7 versions of the specification. As we reached conclusions, this old content became dated and less useful. However, since we may want to refer to it in the future, I am parking this content on this page.

Access Mode and Media Feature Framework: a tabular view

Note that this section can be used in two ways. The first is to understand the mediaFeatures, and to have a sensory framework to understand them within. The second is to explain accessMode. Use this section as you wish.

A perusal of the properties above, especially mediaFeature and accessMode, gives you a set of pieces without a plan. There is an overall plan, which is explained in the table below. Before the details, though, I should explain:

  1. The origin of the framework: AfA for electronic learning resources
  2. This effort extended AfA to ebooks (richer content) and to physical books and media as well.

The framework held up well, but there are many attributes that get in the way of a simple model.

The first concept is access modes. These are the ways in which the intellectual content of a described resource or adaptation is communicated. There are four defined in the specification: visual, auditory, tactile, and textual. The input access modes are specified in the left column. There is also the concept of a refinement on an access mode. colorDependent, for example, says that colors, which may be hard to recognize for colorblind users, are present and significant in the visual appearance and intellectual information of the content.

The access modes at the top of the table (row 2, columns 4-7) are the access modes that adapted content is available in. The intersection of the two is the adaptation available to make that transformation. For example, an audio recording can be made available in textual form through captions or a transcript. It's "input at the left, transformation in the middle of the table, output access mode at the top."

It's worthwhile to try a few examples.

  1. A video with audio can be made available as purely visual/textual by sign language or captions.
  2. A book with mathematics (commonly presented as visual images) can be made accessible if the text is made available as text and the mathematics is made available through the long list of alternatives, from describedMath and longDescription to MathML and laTex.
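The "input at the left, transformation in the middle, output access mode at the top" reading can be sketched in code. Below is a minimal, hypothetical illustration using the value names from this draft; the plain-dict record shape, the TO_TEXTUAL map, and the fully_textual helper are my own assumptions for illustration, not part of the proposal.

```python
# Hypothetical metadata records for the two worked examples above.
# Property names (accessMode, mediaFeature) come from this draft; the plain
# dict record shape is an illustrative assumption, not a defined format.

video = {
    "name": "Lecture video",
    "accessMode": ["visual", "auditory"],         # input access modes
    "mediaFeature": ["captions", "signLanguage"]  # adaptations available
}

math_book = {
    "name": "Algebra textbook",
    "accessMode": ["textual", "mathOnImage"],
    "mediaFeature": ["longDescription", "describedMath", "MathML"],
}

# Which adaptations carry a given input access mode into textual form,
# following the relevant rows of the table.
TO_TEXTUAL = {
    "auditory": {"captions", "transcript"},
    "visual": {"alternativeText", "longDescription"},
    "mathOnImage": {"alternativeText", "longDescription",
                    "describedMath", "MathML", "laTex"},
}

def fully_textual(record):
    """True if every non-textual input mode has at least one adaptation
    that transforms it into textual form."""
    features = set(record["mediaFeature"])
    return all(mode == "textual" or features & TO_TEXTUAL.get(mode, set())
               for mode in record["accessMode"])

print(fully_textual(math_book))  # True: math covered by describedMath/MathML
print(fully_textual(video))      # False: the visual mode has no textual adaptation
```

The video fails the check because captions and sign language only adapt its auditory mode; its visual content would still need alternativeText or a longDescription.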

Note that math is in row 6 under visual (as this is how it was typically done in the past: inaccessible images). The qualified access modes and the italicized qualifier names are not access modes in the content (we just think of them as images), but calling them out as special content types puts the adaptation types (mediaFeatures) into a better organization. Likewise, putting the various braille mediaFeatures under tactile puts those values in a better context. Note that some ebook software also has the capability to present text in braille format with appropriate devices.

There is one other concept to bring up: mediaFeatures that are associated with a specific access mode. This is row 3 of the table. The first one, and the easiest to understand, is largePrint. Printed books or images will have a large print adaptation available, no matter what the input access mode was. Textual has a few, with displayTransformability and structuralNavigation calling out specific points of WCAG (cite the sections). These optional attributes will be true for any textual representation.

Finally, note that the mediaFeatures can be divided into three categories.

  1. Display or Transformative - restyling, adjusting layout, while staying within the same access mode. This is row 3 of the table.
  2. Augmentation or Content - adding captions, descriptions, alt-text to augment an accessMode to another accessMode. This is the bulk of the table, from row 4 column 4, down and to the right.
  3. Symbolic encodings - as noted above, MathML, ChemML and laTex are specific encodings of symbologies; they get their own content rows (along with nemethBraille).
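The three-way split above can be written down as a simple lookup table. This is only a sketch: the membership follows the prose above, and the category helper (which strips slash extensions) is an illustrative assumption.

```python
# The three mediaFeature categories described above, as a simple lookup table.
# Membership follows the prose; any value not listed here is unclassified.

MEDIA_FEATURE_CATEGORY = {
    # 1. Display / Transformative: restyle within the same access mode
    "largePrint": "display",
    "highContrast": "display",
    "displayTransformability": "display",
    "structuralNavigation": "display",
    # 2. Augmentation / Content: carry one access mode into another
    "captions": "augmentation",
    "transcript": "augmentation",
    "audioDescription": "augmentation",
    "alternativeText": "augmentation",
    "longDescription": "augmentation",
    "tactileGraphic": "augmentation",
    "tactileObject": "augmentation",
    "signLanguage": "augmentation",
    # 3. Symbolic encodings of specific notations
    "MathML": "symbolic",
    "ChemML": "symbolic",
    "laTex": "symbolic",
    "nemethBraille": "symbolic",
}

def category(feature):
    # Extensions such as largePrint/18 share the base value's category.
    base = feature.split("/", 1)[0]
    return MEDIA_FEATURE_CATEGORY.get(base, "unclassified")

print(category("largePrint/18"))  # display
print(category("MathML"))         # symbolic
```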

The hope is that, once you have seen the mediaFeatures in this organized fashion, they'll make more sense. This framework also leaves clear places where other mediaFeatures could be placed in the future.

Content Access modes and adaptations (columns 4-7: searchable access modes)

| Access Modes | | | Visual (V) | Auditory (A) | Tactile (B) | Textual (T) |
| | Refined Access Mode | Refinement name | +largePrint, +highContrast | +structuralNavigation | +haptic | +displayTransformability, +structuralNavigation |
| Visual, including photos, graphs and charts | | | ALWAYS | audioDescription | tactileGraphic, tactileObject, altText, longDescription | alternativeText, longDescription |
| | (textual) | textOnImage | | | braille | alternativeText, longDescription |
| | (mathematical) | mathOnImage | | audioDescription | tactileGraphic, tactileObject, nemethBraille | alternativeText, longDescription, describedMath, MathML, laTex |
| | (chemical) | chemOnImage | | audioDescription | tactileGraphic, tactileObject, nemethBraille | alternativeText, longDescription, ChemML |
| | (musical) | musicOnImage | | | musicBraille | |
| | (color) | colorDependent | captions | audioDescription | | alternativeText, longDescription |
| Auditory | | | signLanguage, captions (open) | ALWAYS | | captions, transcript |
| Tactile | | | | | ALWAYS | |
| Textual | | | | | | ALWAYS |

A few notes of future things that could be done.

  1. Looking at this framework, it's unclear that we need highContrast as a mediaFeature. It's really just a type of displayTransformability and belongs over in textual as such. Thoughts? (Two views... First, it does belong in Visual, as it pertains to how the content is created, and, as it is an image, it's immutable. Second, it does not belong on textual, as it's just an application of CSS displayTransformability.)
  2. Small consistency thought. Consider that all of the "on image" and colorDependent names would be clearer if the refinement name said "onVisual" at the end instead of "onImage" (a good idea for some point in the future... not worth acting on).


After the call on 10/7/2013, we agreed to try dividing the mediaFeatures by what the transformation targets (if you think about it, these content adaptations can be oriented either to the source (what is being adapted) or to what the content is adapted to). It's unclear whether we actually want to represent this as eight property names in use (it makes queries more difficult), but it is a good representational technique for grouping the properties. My hope is that we could join this back together into one attribute when it is all done, and just use divisions like those below as learning aids.

Enumerated View of mediaFeature

Property Expected Type Expected Values Description
accessMode Text
  • auditory
  • tactile
  • tactile/haptic
  • textual
  • visual
  • (consider adding "no" variants)
  • colorDependent
  • textOnImage
  • mathOnImage
  • chemOnImage
  • musicOnImage
  • other proposals (icon, chart)
An access mode through which the intellectual content of a described resource or adaptation is communicated; if adaptations for the resource are known, the access modes of those adaptations are not included. Note that source refinements of visual are listed under visual.
visualTransformFeature Text
  • highContrast (can add colors as extensions, such as highContrast/yellowOnBlack, greenOnBlack, whiteOnBlack and blackOnWhite)
  • largePrint (can add specific pointsize, as in largePrint/18 or just /CSSEnabled)
  • resizeText
  • displayTransformability
Transform features of the resource, such as accessible media and alternatives.
visualContentFeature Text
  • alternativeImage/captions
  • alternativeImage/signLanguage (with possibility of ISO 639 sign language code)
Content features of the resource, such as accessible media and alternatives.
auditoryTransformFeature Text
  • enhancedAudio - /noBackground, /reducedBackground or /switchableBackground
  • structuralNavigation (can have /tableofContents, /index, /headings, /tags, /bookmarks, /printedPageNumber)
Transform features of the resource, such as accessible media and alternatives.
auditoryContentFeature Text
  • audioDescription
  • (audioDescription/image)
  • (audioDescription/math)
  • (audioDescription/chem)
  • (audioDescription/color)
Content features of the resource, such as accessible media and alternatives.
tactileTransformFeature Text N/A
tactileContentFeature Text
  • braille and its extensions (/ASCII, /music, /math, /chem or /nemeth) (also consider /contracted and /gradeII)
  • (tactileGraphic/image)
  • (tactileGraphic/math)
  • (tactileGraphic/chem)
  • tactileObject
  • (tactileObject/image)
  • (tactileObject/math)
  • (tactileObject/chem)
Content features of the resource, such as accessible media and alternatives.
textualTransformFeature Text
  • largePrint (can have specific pointsize, such as /16 or the more open "/CSSEnabled")
  • highContrast (can add colors as extensions, such as highContrast/yellowOnBlack, /greenOnBlack, /whiteOnBlack and /blackOnWhite)
  • resizableText (and the extension /taggedPDF)
  • displayTransformability
  • structuralNavigation (can have /tableofContents, /index, /headings, /tags, /bookmarks, /printedPageNumber)
Transform features of the resource, such as accessible media and alternatives.
textualContentFeature Text
  • alternativeText
  • (alternativeText/image)
  • (alternativeText/math)
  • (alternativeText/chem)
  • (alternativeText/color)
  • captions
  • ChemML
  • laTex
  • longDescription
  • (longDescription/image)
  • (longDescription/math)
  • (longDescription/chem)
  • MathML
  • transcript
Content features of the resource, such as accessible media and alternatives.
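To make the eight-property split concrete, here is a hypothetical record using the property names above, together with a helper that folds the split properties back into a single mediaFeature list (the "join this back together into one attribute" idea). The record shape and the helper are illustrative assumptions, not part of the proposal.

```python
# The eight split property names from the enumerated view above.
FEATURE_PROPERTIES = [
    "visualTransformFeature", "visualContentFeature",
    "auditoryTransformFeature", "auditoryContentFeature",
    "tactileTransformFeature", "tactileContentFeature",
    "textualTransformFeature", "textualContentFeature",
]

# A hypothetical record for a textual resource with inaccessible math images.
record = {
    "accessMode": ["textual", "mathOnImage"],
    "textualTransformFeature": ["displayTransformability",
                                "structuralNavigation/tableofContents"],
    "textualContentFeature": ["longDescription/math", "MathML"],
}

def merged_media_features(rec):
    """Collapse the eight split properties into one sorted mediaFeature list."""
    merged = set()
    for prop in FEATURE_PROPERTIES:
        merged.update(rec.get(prop, []))
    return sorted(merged)

print(merged_media_features(record))
```

The merge is lossy in one direction only: the per-mode grouping can always be recomputed from a single list (each value's access mode is fixed), which is one argument for keeping a single attribute and using the split purely as a learning aid.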

Simplified View of mediaFeature

The simpler view of this follows, as I break this down into just three mediaFeature groups. Note that there is a common extension that is used to represent the refinement of the content types (see all of the /*OnImage values in access modes). These will be represented in the table below as

  • /*Refinement, but will be understood to be any of these as relevant
    • /image
    • /math
    • /chem
    • /color
    • /music

Note that, under mediaFeature, my legend for the access modes transformed by the augmentation is as follows (I overload the "Expected Type" column to represent the access modes that the augmentation transfers from and to).

  • Auditory
  • Tactile
  • teXtual
  • Visual
Property Expected Type Expected Values Description
accessMode Text
  • auditory
  • tactile
  • tactile/haptic
  • textual
  • visual
  • colorDependent
  • textOnImage
  • mathOnImage
  • chemOnImage
  • musicOnImage
  • other proposals (icon, chart)
An access mode through which the intellectual content of a described resource or adaptation is communicated; if adaptations for the resource are known, the access modes of those adaptations are not included. Note that source refinements of visual are listed under visual.
mediaFeature (for transformation) Text
  • highContrast
can add most common colors as extensions, such as highContrast/yellowOnBlack, greenOnBlack, whiteOnBlack and blackOnWhite, or just say /CSSEnabled if CSS is set up to allow these changes to be customized by the user
  • largePrint
can add specific pointsize, as in largePrint/18 or just /CSSEnabled
  • resizeText
either /CSSEnabled or the extension /taggedPDF if the PDF allows resize and reflow
  • displayTransformability
The document is set up for CSS display transformability
  • enhancedAudio
For prerecorded audio content that contains primarily speech in the foreground
  • /noBackground
  • /reducedBackground
  • /switchableBackground
mediaFeature (for navigation) Text
  • structuralNavigation
table of contents or similar resource to allow higher-level document navigation, which can be extended as /tableofContents, /index, /headings, /tags, /bookmarks, /printedPageNumber
mediaFeature (for augmentation) V->X
  • alternativeText
alternative text is provided for visual content (e.g., the HTML alt attribute)
  • /*Refinement
V->X
  • longDescription
descriptions are provided for image-based content and/or complex structures such as tables
  • /*Refinement
V->X
  • ChemML
  • laTex
  • MathML
The use of one of these specific ASCII/XML encodings for mathematics or chemistry. These can have /*Refinements specified, but it's rarely needed.
A->X
  • captions
  • transcript
The addition of synchronized text (closed captions) or a separate transcript to convey the meaning of the audio
A->V
  • alternativeImage
The presentation of audio or textual content as a visual presentation. Common extensions are:
  • /captions for captions (open) that are placed on the visual presentation
  • /signLanguage (with possibility of ISO 639 sign language code as an extension to that, as in /sgn-en-us)
V->A
  • audioDescription
Audio descriptions are available (e.g., via the HTML5 track element). Common extensions are for the various "onImage" refinements noted above
  • /*Refinement
V->T and X->T
  • braille
braille content or alternative is available (e.g., eBraille or print braille). This can have extensions for the different types of braille (/ASCII, /music, /math, /chem or /nemeth); also consider /contracted and /gradeII or other nomenclature... pretty open
V->T
  • tactileGraphic
tactile graphics have been provided, as described in the BANA Guidelines and Standards for Tactile Graphics
  • /*Refinement
V->T
  • tactileObject
tactile 3D objects have been defined and a 3D object or instructions to build one are available
  • /*Refinement
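Finally, the slash-extension convention used throughout these tables (braille/nemeth, largePrint/18, alternativeImage/signLanguage/sgn-en-us) can be handled by a tiny parser. This assumes values are plain slash-separated tokens; it is a sketch of the convention as used above, not a defined grammar.

```python
# A minimal parser for the slash-extension convention used in the tables
# above (e.g., "braille/nemeth", "largePrint/18"). Assumes values are plain
# slash-separated tokens; this is an illustrative sketch, not a defined grammar.

def parse_feature(value):
    """Split a mediaFeature value into its base name and extension chain."""
    base, *extensions = value.split("/")
    return {"base": base, "extensions": extensions}

print(parse_feature("braille/nemeth"))
print(parse_feature("alternativeImage/signLanguage/sgn-en-us"))
print(parse_feature("largePrint"))
```

Because extensions nest (the ISO 639 sign language code extends /signLanguage, which extends alternativeImage), a query engine could match on the base name alone or on any prefix of the extension chain.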