
W3C

Implementation Techniques for
Authoring Tool Accessibility Guidelines 2.0

Guideline 3: Support the author in the production of accessible content

W3C Working Draft 22 November 2004

This version:
http://www.w3.org/TR/2004/WD-ATAG20-TECHS-20041122/tech3
Latest version:
http://www.w3.org/TR/ATAG20/tech3
Previous version:
http://www.w3.org/TR/2003/WD-ATAG20-TECHS-20030314/tier3
Editors of this chapter:
Jutta Treviranus - ATRC, University of Toronto
Jan Richards - ATRC, University of Toronto
Matt May - W3C

Introduction to Guideline 3:

Actions taken at the author's initiative may result in accessibility problems. The authoring tool should include features that provide support and guidance to the author in these situations, so that accessible authoring practices can be followed and accessible web content can be produced.

This support includes prompting and assisting the author to create accessible web content (Checkpoint 3.1), especially for information that cannot be generated automatically, checking for accessibility problems (Checkpoint 3.2), and assisting in the repair of accessibility problems (Checkpoint 3.3). In performing these functions, the authoring tool must avoid including automatically generated equivalent alternatives or previously authored equivalent alternatives without author consent (Checkpoint 3.4). The authoring tool may also provide automated means for managing equivalent alternatives (Checkpoint 3.5) and provide accessibility status summaries (Checkpoint 3.6).

Accessibility-related documentation provides support and guidance to the author. The documentation must accommodate the various levels of author familiarity with web content accessibility issues. The checkpoint requirements include documenting the features that promote accessible content (Checkpoint 3.7), and ensuring that the documentation models accessible authoring practices (Checkpoint 3.8) and describes a workflow that results in accessible content (Checkpoint 3.9).


Checkpoints in Guideline 3:

3.1 Prompt and assist the author to create content that conforms to WCAG.
3.2 Check for and inform the author of accessibility problems.
3.3 Assist authors in repairing accessibility problems.
3.4 Do not automatically generate equivalent alternatives or reuse previously authored alternatives without author confirmation, except when the function is known with certainty.
3.5 Provide functionality for managing, editing, and reusing alternative equivalents.
3.6 Provide the author with a summary of accessibility status.
3.7 Document all features of the tool that support the production of accessible content.
3.8 Ensure that accessibility is modeled in all documentation and help, including examples.
3.9 Provide a tutorial on the process of accessible authoring.


ATAG Checkpoint 3.1: Prompt and assist the author to create content that conforms to WCAG. [Web Content Checkpoints Relative to WCAG]

Rationale: Appropriate assistance should increase the likelihood that typical authors will create WCAG-conformant content. Different tool developers will accomplish this goal in ways that are appropriate to their products, processes, and authors.

Executive Summary of Techniques:

In some authoring situations it may be necessary to prompt authors (e.g. through task automation, entry storage, etc.) to follow accessible authoring practices. This is especially true of accessibility problems that require human judgment to remedy, such as adding descriptions to images. In general, it is preferable to begin guiding the author towards the production of accessible content before accessibility problems have actually been introduced. Postponing checking (Checkpoint 3.2) and correcting (Checkpoint 3.3) may leave the author uninformed of accessibility problems for so long that, by the time the author is finally informed, the full weight of the accumulated problems may be overwhelming.

When information is required of the author, it is crucial that the information be correct and complete. This is most likely to occur if the author has been convinced to provide the information voluntarily. Therefore, overly restrictive mechanisms are not recommended for meeting this checkpoint.

Clarification of Term "Prompt":

The term prompt in this checkpoint should not be interpreted as necessarily implying intrusive prompts, such as pop-up dialog boxes. Instead, ATAG 2.0 uses prompt in a wider sense, to mean any tool-initiated process of eliciting author input (see the definition of prompting for more information).

Implementation Notes:

During implementation of this checkpoint, consideration should be given to the promotion and integration of the accessibility solutions involved, as required by Guideline 4. In particular, accessibility prompting:

Techniques for Success Criteria 1: When the actions of the author risk creating Web content that is not accessible (i.e. fails to meet the Web content checkpoint requirements at Level 1, 2, or 3) (e.g. an image is inserted, the author types an invalid element into a code view, the author initiates a page creation wizard, etc.), the tool must introduce the appropriate accessible authoring practice.

Technique 3.1.1: Use an appropriate prompting and assisting mechanism

3.1.1(1): Prompting and assisting for short text labels (e.g. alternate text, titles, short text metadata fields, ruby text for ideograms):

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.1.1(1a): This illustration shows an authoring interface for description reuse. It consists of a drop-down list that is shown with several short labels for the same image. Notice that one of the labels in the list is in a different language (i.e. French). The author must be able to create a new label if the stored strings are not appropriate. (Source: mockup by AUWG)

[Image: Screen shot demonstrating prompting for short labels]

Applicable to: code-level authoring functions.

Example 3.1.1(1b): This illustration shows a code-based authoring interface for short text label prompting. The drop-down menu was triggered when the author typed quotation marks (") to close the href attribute. (Source: mockup by AUWG)

[Image: Screen shot demonstrating a pop-up menu for selecting alt text]

3.1.1(2): Prompting and assisting for multiple text labels (e.g. image map area labels):

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.1.1(2): This illustration shows an authoring interface for image map area text label prompting. It consists of a list with two columns. In the right-hand column is the URL for each image map area. This can be used as a hint by the author as they fill in the text labels (left-hand column). A checkbox at the bottom provides the option of using the text labels to create a set of text links below the image map, as sketched below. (Source: mockup by AUWG)

[Image: Screen shot demonstrating prompting for image map labels]
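
A minimal sketch of that checkbox option follows (TypeScript against the browser DOM; the function name and insertion point are illustrative assumptions, not part of ATAG): the author-supplied area labels are reused to build a redundant row of text links after the image map.

    function appendTextLinksForMap(map: HTMLMapElement, doc: Document): void {
      const links = doc.createElement("p");
      map.querySelectorAll("area[href]").forEach((area, i) => {
        const a = doc.createElement("a");
        a.setAttribute("href", area.getAttribute("href")!);
        // The alt attribute holds the text label collected by the prompt;
        // fall back to the URL so no link is ever left without text.
        a.textContent = area.getAttribute("alt") || area.getAttribute("href")!;
        if (i > 0) links.appendChild(doc.createTextNode(" | "));
        links.appendChild(a);
      });
      map.parentNode?.insertBefore(links, map.nextSibling);
    }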

3.1.1(3): Prompting and assisting for long text descriptions (e.g. longdesc text, table summaries, site information, long text metadata fields):

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.1.1(3): This illustration shows an authoring interface for long text description prompting. A "description required" checkbox controls whether the rest of the interface is available. If a description is required, the author then has the choice of opening an existing description file or writing (and saving) a new one. (Source: mockup by AUWG)

[Image: Screen shot demonstrating prompting for long descriptions]

3.1.1(4): Prompting and assisting for form field labels:

Applicable to: code-level, WYSIWYG, and object-oriented authoring functions.

Example 3.1.1(4): This illustration shows a form properties list that allows the author to simultaneously set the field labels, tab order, form field placeholders, and accesskeys. In this example, two form field labels are missing, causing prompts to be displayed. (Source: mockup by AUWG)

[Image: Demonstration of a form labeling property list]

3.1.1(5): Prompting and assisting for form field placeholders:

3.1.1(6): Prompting and assisting for TAB order sequence:

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(6): This illustration shows two views of a "Set TAB Order" utility that lets the author visualize and adjust the TAB order of a document: as a mouse-driven graphical overlay on the screen and as a keyboard-accessible list.

[Image: Demonstration of a TAB ordering utility]

3.1.1(7): Prompting and assisting for navigational shortcuts (e.g. keyboard shortcuts, skip links, voice commands, etc.):

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(7a): This illustration shows a mechanism that detects repeating navigation elements and asks the author whether they want to add a skip navigation link. (Source: mockup by AUWG)

[Image: Example of a skip navigation link interface]

Applicable to: code-level authoring functions.

Example 3.1.1(7b): This illustration shows a code-based authoring interface suggesting accesskey values. Notice that the system suggests "m" because it is the first letter of the link text ("moon"). The letter "c" does not appear in the list because it is already used as an accesskey later in the document (for the link "camera"). (Source: mockup by AUWG)

[Image: Demonstration of an interface suggesting accesskeys]
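
The suggestion logic this example describes is simple to sketch (TypeScript; the function name is illustrative): candidate letters are drawn from the link text in order, skipping any key already assigned elsewhere in the document.

    function suggestAccessKeys(linkText: string, usedKeys: Set<string>): string[] {
      const suggestions: string[] = [];
      for (const ch of linkText.toLowerCase()) {
        // Only letters/digits qualify; skip keys already taken and duplicates.
        if (/[a-z0-9]/.test(ch) && !usedKeys.has(ch) && !suggestions.includes(ch)) {
          suggestions.push(ch);
        }
      }
      return suggestions;
    }

    // "m" is offered first for the link text "moon"; "c" is excluded
    // when "camera" already uses it:
    // suggestAccessKeys("moon", new Set(["c"])) -> ["m", "o", "n"]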

3.1.1(8): Prompting and assisting for contrasting colors:

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.1.1(8): This illustration shows an authoring interface for choosing a text color. The palette has been pre-screened so that sufficient contrast between the text and the current background color is assured. Color codes entered manually are also screened. (Source: mockup by AUWG)

[Image: Demonstration of a high-contrast palette filter]
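
Pre-screening of this kind needs a quantitative contrast measure. A minimal sketch follows (TypeScript), using the relative-luminance contrast ratio later standardized in WCAG 2.0; both the choice of formula and the 4.5:1 threshold are assumptions for illustration, since ATAG itself does not mandate one.

    function luminance([r, g, b]: [number, number, number]): number {
      const [lr, lg, lb] = [r, g, b].map((c) => {
        const s = c / 255; // linearize each 0-255 sRGB channel
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      });
      return 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
    }

    function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
      const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Keep only palette entries that reach the threshold against the background.
    const background: [number, number, number] = [255, 255, 255];
    const palette: [number, number, number][] = [[0, 0, 0], [0, 0, 204], [153, 153, 153]];
    const screened = palette.filter((c) => contrastRatio(c, background) >= 4.5);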

3.1.1(9): Prompting and assisting for alternative resources for multimedia (transcripts, captions, video transcripts, audio descriptions, signed translations, still images, etc.):

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.1.1(9): This illustration shows an authoring interface for embedding a video. The tool automatically detects whether captions, a video transcript, audio descriptions, signed translations, and a still image are available for the video. When an item is not found, the author has the option to locate the material or launch an authoring utility. (Source: mockup by AUWG)

[Image: Demonstration of a check for captions and descriptions]
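
One way to drive such a detection step is to look for companion resources alongside the video. The sketch below (TypeScript) assumes a purely illustrative file-naming convention and a fileExists() helper supplied by the tool's project model; neither comes from ATAG.

    declare function fileExists(path: string): boolean;

    function missingAlternatives(videoPath: string): string[] {
      const base = videoPath.replace(/\.[^.]+$/, ""); // strip the extension
      const wanted: [string, string][] = [
        ["captions", base + ".captions.xml"],
        ["transcript", base + ".transcript.html"],
        ["audio description", base + ".described.mp3"],
        ["still image", base + ".jpg"],
      ];
      // Each missing item can drive a "Locate..." or "Create..." prompt.
      return wanted.filter(([, path]) => !fileExists(path)).map(([name]) => name);
    }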

3.1.1(10): Prompting and assisting for Metadata:

3.1.1(11): Prompting and assisting for document structure:

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(11): This illustration shows a tool that detects opportunities for enhancing structure and alerts the author. (Source: mockup by AUWG)

[Image: Demonstration of prompting for structural information]

3.1.1(12): Prompting and assisting for tabular structure:

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(12): This illustration shows a tool that prompts the author about whether the top row of a table is a row of table headers. (Source: mockup by AUWG)

[Image: Screen shot demonstrating a system for automatically adding table heading markup]
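
If the author confirms, the markup change itself can be fully automated. A minimal sketch (TypeScript, browser DOM; the function name is an assumption): each td in the top row is rewritten as a th with scope="col" so the header relationship is exposed to assistive technology.

    function markTopRowAsHeaders(table: HTMLTableElement, doc: Document): void {
      const firstRow = table.rows[0];
      if (!firstRow) return;
      Array.from(firstRow.cells).forEach((cell) => {
        if (cell.tagName.toLowerCase() !== "td") return; // already a header
        const th = doc.createElement("th");
        th.setAttribute("scope", "col");
        while (cell.firstChild) th.appendChild(cell.firstChild); // move content
        firstRow.replaceChild(th, cell);
      });
    }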

3.1.1(13): Prompting and assisting for style sheets:

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(13a): This illustration shows a prompt that indicates that a heading has been misused to indicate emphasis. Use of style sheets is suggested instead, and a list of styles already used in the document is provided. (Source: mockup by AUWG)

[Image: Error message for a misused heading]

3.1.1(14): Prompting and assisting for clearly written text:

Applicable to: code-level authoring functions.

Example 3.1.1(14a): This illustration shows an authoring interface that indicates the reading level of a page and whether it exceeds a limit determined by the author's preference settings. (Source: mockup by AUWG)

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(14b): This illustration shows an authoring interface that prompts the author to enter an acronym expansion. (Source: mockup by AUWG)

3.1.1(15): Prompting and assisting for device independent handlers:

3.1.1(16): Prompting and assisting for non-text supplements to text:

Applicable to: WYSIWYG authoring functions.

Example 3.1.1(16): This illustration shows an authoring interface for prompting the author about whether a paragraph that contains many numbers might be made clearer with the addition of a chart or graph. (Source: mockup by AUWG)

[Image: Screen shot demonstrating a system that prompts for visual alternatives]

3.1.1(17): Prompting and assisting the author to make use of up-to-date formats:

Note: The preceding list is meant to cover techniques of prompting and assisting for many, but not all, of the most common accessible authoring practices.

Technique 3.1.2: Check all textual entries for spelling, grammar, and reading level (where applicable).
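
For the reading-level part of this technique (see also Example 3.1.1(14a)), one common measure is the Flesch-Kincaid grade formula. The sketch below (TypeScript) uses a rough vowel-group syllable heuristic rather than a dictionary; both the formula choice and the heuristic are illustrative assumptions.

    function countSyllables(word: string): number {
      const groups = word.toLowerCase().match(/[aeiouy]+/g);
      return Math.max(1, groups ? groups.length : 0);
    }

    function fleschKincaidGrade(text: string): number {
      const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
      const words = text.split(/\s+/).filter((w) => /\w/.test(w));
      const wordCount = Math.max(1, words.length);
      const syllables = words.reduce((n, w) => n + countSyllables(w), 0);
      // Flesch-Kincaid grade level:
      return 0.39 * (wordCount / sentences) + 11.8 * (syllables / wordCount) - 15.59;
    }

    // The tool can compare the result against the author's preferred limit
    // and raise a prompt when the limit is exceeded.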

Technique 3.1.3: Share non-text equivalents between authors (where applicable).

Technique 3.1.4: Provide multiple preview modes and a warning to authors that there are many other less predictable ways in which a page may be presented (aurally, text-only, text with pictures separately, on a small screen, on a large screen, etc.). Some possible document views include:

Applicable to: WYSIWYG authoring functions.

Example 3.1.4: This illustration shows a WYSIWYG authoring interface with a list of rendering options displayed. The options include "All" (i.e. render as in a generic browser), "text-only" (i.e. non-text items replaced by textual equivalents), "no styles", "no frames", and "grayscale" (used to check for sufficient contrast). (Source: mockup by AUWG)

[Image: An authoring tool with a drop-down menu of different rendering options]
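
The "text-only" rendering option above, for instance, can be produced by a simple transform over a copy of the document before preview. A minimal sketch (TypeScript, browser DOM): each image is swapped for its text equivalent, with a visible marker when none exists, which itself flags a problem to the author.

    function toTextOnlyPreview(doc: Document): void {
      doc.querySelectorAll("img").forEach((img) => {
        const alt = img.getAttribute("alt");
        const span = doc.createElement("span");
        span.textContent = alt !== null ? "[" + alt + "]" : "[image: NO TEXT EQUIVALENT]";
        img.parentNode?.replaceChild(span, img);
      });
    }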

ATAG Checkpoint 3.2: Check for and inform the author of accessibility problems. [Web Content Checkpoints Relative to WCAG]

Executive Summary:

Despite prompting assistance from the tool (see Checkpoint 3.1), accessibility problems may still be introduced. For example, the author may cause accessibility problems by hand coding or by opening content with existing accessibility problems for editing. In these cases, the prompting and assistance mechanisms that operate when markup is added or edited (i.e. insertion dialogs and property windows) must be backed up by a more general checking system that can detect and alert the author to problems anywhere within the content (e.g. attribute, element, programmatic object, etc.). It is preferable that this checking mechanism be well integrated with correction mechanisms (see Checkpoint 3.3), so that when the checking system detects a problem and informs the author, the tool can immediately offer assistance to the author.

Implementation Notes:

The checkpoints in guideline 4 require that implementations of checking be:

Techniques for Success Criteria 1: The authoring tool must always provide a check (automated check, semi-automated check or manual check) for each applicable requirement to conform to WCAG.

Technique 3.2.1: Automate as much checking as possible. Where necessary provide semi-automated checking. Where neither of these options is reliable, provide manual checking.

(a) Automated: In automated checking, the tool is able to check for accessibility problems automatically, with no human intervention required. This type of check is usually appropriate for checks of a syntactic nature, such as the use of deprecated elements or a missing attribute, in which the meaning of text or images does not play a role.

Applicable to: code-level authoring functions.

Example 3.2.1(a): This illustration shows a summary interface for a code-based authoring tool that displays the results of an automated check. (Source: mockup by AUWG)

[Image: Screen shot demonstrating automated checking with the results in a summarized list]

Applicable to: WYSIWYG authoring functions.

Example 3.2.1(b): This illustration shows an interface that displays the results of an automated check in a WYSIWYG authoring view using blue squiggly highlighting around or under rendered elements, identifying accessibility problems for the author to correct. (Source: mockup by AUWG)

[Image: Screen shot demonstrating automated checking in a WYSIWYG tool]

Applicable to: code-level authoring functions.

Example 3.2.1(c): This illustration shows an authoring interface of an automated check in a code-level authoring view. In this view, the text of elements with accessibility problems is shown in a blue font, instead of the default black font. (Source: mockup by AUWG)
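
A sketch of such a purely syntactic automated check (TypeScript, browser DOM; the Problem shape and the particular rules are illustrative): deprecated presentational elements and images lacking an alt attribute can both be found without any human judgment, and the result list can feed either a summary view as in Example 3.2.1(a) or in-place highlighting as in Examples 3.2.1(b) and (c).

    interface Problem { element: Element; message: string; }

    function automatedCheck(doc: Document): Problem[] {
      const problems: Problem[] = [];
      // Elements deprecated in HTML 4.01 in favor of style sheets.
      doc.querySelectorAll("font, center, applet").forEach((el) =>
        problems.push({ element: el, message: "Deprecated element: " + el.tagName.toLowerCase() }));
      // Images with no alt attribute at all (see also Technique 3.4.1).
      doc.querySelectorAll("img:not([alt])").forEach((el) =>
        problems.push({ element: el, message: "Image has no alt attribute" }));
      return problems;
    }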

(b) Semi-Automated: In semi-automated checking, the tool is able to identify potential problems, but still requires human judgment by the author to make a final decision on whether an actual problem exists. Semi-automated checks are usually most appropriate for problems that are semantic in nature, such as descriptions of non-text objects, as opposed to purely syntactic problems, such as missing attributes, that lend themselves more readily to full automation.

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.2.1(d): This illustration shows a dialog box that appears once the tool has detected an image without a description attribute. However, since not all images require a description, the author is prompted to make the final decision. The author can confirm that this is indeed an accessibility problem and move on to the repair stage by choosing "Yes". (Source: mockup by AUWG)

[Image: Screen shot demonstrating a semi-automated check]
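
The same pattern is easy to express in code. A minimal sketch (TypeScript; promptYesNo() stands in for whatever dialog mechanism the tool provides and is an assumption): the tool finds the candidates automatically, but a human makes each final call.

    declare function promptYesNo(question: string): boolean;

    function checkImageDescriptions(doc: Document): Element[] {
      const confirmed: Element[] = [];
      doc.querySelectorAll("img:not([longdesc])").forEach((img) => {
        const name = img.getAttribute("src") || "(unnamed image)";
        if (promptYesNo('Does the image "' + name + '" need a long description?')) {
          confirmed.push(img); // hand off to the repair stage (Checkpoint 3.3)
        }
      });
      return confirmed;
    }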

(c) Manual: In manual checking, the tool provides the author with instructions for detecting a problem, but does not automate the task of detecting the problem in any meaningful way. As a result, the author must decide on their own whether or not a problem exists. Manual checks are discouraged because they are prone to human error, especially when the type of problem in question may be easily detected by a more automated utility, such as an element missing a particular attribute.

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.2.1(e): This illustration shows a dialog box that reminds the author to check if there are any words in other languages in the document. The author can move on to the repair stage by pressing "Yes". (Source: mockup by AUWG)

[Image: Screen shot demonstrating a manual check]

Technique 3.2.2: Consult the Techniques For Accessibility Evaluation and Repair Tools [WAI-ER] Public Working Draft for evaluation and repair algorithms related to WCAG 1.0.

Techniques for Success Criteria 2: The authoring tool must inform the author of any failed check results prior to completion of authoring.

ATAG Checkpoint 3.3: Assist authors in repairing accessibility problems. [Web Content Checkpoints Relative to WCAG]

Executive Summary:

Once a problem has been detected by the author or, preferably, by the tool (see Checkpoint 3.2), the tool may assist the author to correct the problem. As with accessibility checking, the extent to which accessibility correction can be automated depends on the nature of the particular problems. Some repairs are easily automated, whereas others that require human judgment may be semi-automated at best.

Implementation Notes:

The checkpoints in guideline 4 require that implementations of correcting be:

Techniques for Success Criteria 1: The authoring tool must always provide a repair (automated repair, semi-automated repair or manual repair) for each applicable requirement to conform to WCAG.

Technique 3.3.1: Automate as much repairing as possible. Where necessary provide semi-automated repairing. Where neither of these options is reliable, provide manual repairing.

(a) Automated: In automated repairing, the tool is able to make repairs automatically, with no author input required. For example, a tool may be capable of automatically adding a document type to the header of a file that lacks this information. In these cases, very little, if any, author notification is required. This type of repair is usually appropriate for corrections of a syntactic or repetitive nature.

Applicable to: code-level authoring functions.

Example 3.3.1(a): This illustration shows a sample of an announcement that an automated repair has been completed. An "undo" button is provided in case the author wishes to reverse the operation. In some cases, automated repairs might be completed with no author notification at all. (Source: mockup by AUWG)

[Image: Screen shot demonstrating an automated repair]
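
The document-type example above might look like the following sketch (TypeScript; recordUndo() is an illustrative stand-in for the tool's undo machinery, not a real API): the repair runs without author input, but an undo entry keeps it reversible.

    declare function recordUndo(description: string, previousSource: string): void;

    function ensureDoctype(source: string): string {
      if (/^\s*<!DOCTYPE/i.test(source)) return source; // nothing to repair
      recordUndo("Added document type declaration", source);
      return '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"\n' +
             '  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n' + source;
    }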

(b) Semi-Automated: In semi-automated repairing, the tool can provide some automated assistance to the author in performing corrections, but the author's input is still required before the repair can be complete. For example, the tool may prompt the author for a plain text string, but then be capable of handling all the markup required to add the text string to the content. In other cases, the tool may be able to narrow the choice of repair options, but still rely on the author to make the final selection. This type of repair is usually appropriate for corrections of a semantic nature.

Applicable to: WYSIWYG authoring functions.

Example 3.3.1(b): This illustration shows a sample of a semi-automated repair in a WYSIWYG editor. The author has right-clicked on an image highlighted by the automated checker system. The author must then decide whether the label text that the tool suggests is appropriate. Whichever option the author chooses, the tool will handle the details of updating the content. (Source: mockup by AUWG)

[Image: Screen shot demonstrating a semi-automated repair]
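
In code, the division of labor is: the author supplies only the judgment and the plain text, and the tool performs the markup edit. A minimal sketch (TypeScript; promptForText() stands in for the tool's dialog mechanism and is an assumption):

    declare function promptForText(question: string, suggestion: string): string | null;

    function repairImageLabel(img: Element, suggested: string): void {
      const text = promptForText("Enter a short text label for this image:", suggested);
      if (text !== null) {
        img.setAttribute("alt", text); // the tool, not the author, edits the markup
      }
    }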

(c) Manual: In manual repairing, the tool provides the author with instructions for making the necessary correction, but does not automate the task in any substantial way. For example, the tool may move the cursor to the start of the problem, but since this is not a substantial automation, the repair would still be considered "manual". Manual correction tools leave it up to the author to follow the instructions and make the repair themselves. This is the most time-consuming option for authors and allows the most opportunity for human error.

Applicable to: code-level authoring functions.

Example 3.3.1(c): This illustration shows a sample manual repair. The problems have already been detected in the checking step, and the offending elements have been highlighted in a code view. However, when it comes to repairing the problem, the only assistance that the tool provides is a context-sensitive hint. The author is left to make sense of the hint and perform the repair without any automated assistance. (Source: mockup by AUWG)

[Image: Screen shot demonstrating manual repair advice]

Technique 3.3.2: Implement a special-purpose correcting interface where appropriate. When problems require some human judgment, the simplest solution is often to display the property editing mechanism for the offending element. This has the advantage that the author is already somewhat familiar with the interface. However, this practice suffers from the drawback that it does not necessarily focus the author's attention on the dialog control(s) that are relevant to the required correction. Another option is to display a special-purpose correction utility that includes only the input field(s) for the information currently required. A further advantage of this approach is that additional information and tips that the author may require in order to properly provide the requested information can be easily added. Notice that in the figure, a drop-down edit box has been used for the short text label field. This technique might be used to allow the author to select from text strings used previously for the alt-text of this image (see ATAG Checkpoint 3.5 for more).

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.3.2: This illustration shows a sample of a special-purpose correction interface. The tool supports the author's repair task by providing a description of the problem, a preview (in this case of the image missing a label), tips for performing the repair, possible repair options (archived from previous repairs), and other information (in this case the name of the image file). (Source: mockup by AUWG)

[Image: Screen shot demonstrating a page from a dedicated accessibility prompting checker]

Technique 3.3.3: Checks can be automatically sequenced. In cases where there are likely to be many accessibility problems, it may be useful to implement a checking utility that presents accessibility problems and repair options in a sequential manner. This may take a form similar to a configuration wizard or a spell checker. In the case of a wizard, a complex interaction is broken down into a series of simple sequential steps that the author can complete one at a time. The later steps can then be updated "on-the-fly" to take into account the information provided by the author in earlier steps. A checker is a special case of a wizard in which the number of detected errors determines the number of steps. For example, word processors have checkers that display all the spelling problems one at a time in a standard template with places for the misspelled word, a list of suggested words, and a "change to" word. The author also has correcting options, some of which can store responses to affect how the same situation is handled later.

In an accessibility problem checker, sequential prompting is an efficient way of correcting problems. However, because of the wide range of problems the checker needs to handle (i.e. missing text, missing structural information, improper use of color, etc.), the interface template will need to be even more flexible than that of a spell checker. Nevertheless, the template is still likely to include areas for identifying the problem (WYSIWYG or code-based, according to the tool), suggesting multiple solutions, and choosing between suggested solutions or creating new ones. In addition, the dialog may include context-sensitive instructive text to help the author with the current correction.

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.3.3: This illustration shows an example of a sequential accessibility checker, in which the special-purpose correction interface from Example 3.3.2 is supplemented with navigational controls for moving backwards and forwards through the list of repair tasks. (Source: mockup by AUWG)

[Image: Screen shot demonstrating a dedicated accessibility checker]
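
The control flow of such a checker reduces to a small loop. A minimal sketch (TypeScript; the SequencedProblem shape and the showStep() dialog are illustrative assumptions standing in for the tool's checker framework):

    interface SequencedProblem { message: string; repair: () => void; }
    declare function showStep(message: string, index: number, total: number): "repair" | "skip" | "back";

    function runChecker(problems: SequencedProblem[]): void {
      let i = 0;
      while (i < problems.length) {
        const choice = showStep(problems[i].message, i + 1, problems.length);
        if (choice === "repair") { problems[i].repair(); i++; }
        else if (choice === "skip") { i++; }
        else { i = Math.max(0, i - 1); } // navigate backwards
      }
    }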

Technique 3.3.5: Where a tool is able to detect site-wide errors, allow the author to make site-wide corrections. This should not be used for equivalent alternatives when the function is not known with certainty (see ATAG Checkpoint 3.4).

Technique 3.3.6: Provide a mechanism for authors to navigate sequentially among uncorrected accessibility errors. This allows the author to quickly scan accessibility problems in context.

Technique 3.3.7: Consult the Techniques For Accessibility Evaluation and Repair Tools [AERT] Public Working Draft document for evaluation and repair algorithms related to WCAG 1.0.

Techniques for Success Criteria 2: For accessibility problems for which an authoring tool provides only manual repairs, the repair instructions must be directly linked from the corresponding check.

ATAG Checkpoint 3.4: Do not automatically generate equivalent alternatives or reuse previously authored alternatives without author confirmation, except when the function is known with certainty. [Priority 1]

Techniques for Success Criteria 1: When the author inserts an unrecognized non-text object (the recognition criteria are left unspecified), the tool must not insert an automatically generated text equivalent (e.g. a label generated from the file name).

Technique 3.4.1: If the author has not specified an alternative equivalent, default to leaving out the relevant content (e.g. attribute, element, etc.), rather than including the attribute with no value or with automatically-generated content. Leaving out the attribute will increase the probability that the problem will be detected by checking algorithms. [STRONGLY SUGGESTED]
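
A minimal sketch of this technique at the serialization stage (TypeScript; the function name is illustrative): when no author-supplied label exists, the attribute is omitted entirely rather than written empty or filled with a generated value, so the gap stays visible to checking algorithms.

    function serializeImg(src: string, authorAlt: string | null): string {
      // (Attribute value escaping is omitted for brevity.)
      return authorAlt !== null
        ? '<img src="' + src + '" alt="' + authorAlt + '" />'
        : '<img src="' + src + '" />'; // no alt attribute at all
    }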

Techniques for Success Criteria 2: When the author inserts a non-text object for which the tool has previously authored equivalent alternatives (i.e. created by the author, tool designer, pre-authored content developer, etc.), but the function of the object is not known with certainty, the tool must prompt the author to confirm insertion of the equivalent. However, where the function of the non-text object is known with certainty (e.g. a "home button" on a navigation bar), the tool may automatically insert the equivalent.

Technique 3.4.2: If human-authored equivalent alternatives are available for an object (for example, through management functionality (ATAG checkpoint 3.5) and/or equivalent alternatives bundled with pre-authored content (ATAG checkpoint 2.6)), then the equivalent alternatives can be used in both semi-automated repair processes and automated repair processes as long as the function of the object is known with certainty. The function of an instance of an object can be considered to be known with certainty when:

Technique 3.4.3: Allow the author to store semantic role information for instances of objects.

Technique 3.4.4: If human-authored equivalent alternatives are available for an object and that object is used for a function that is not known with certainty, tools may offer the equivalent alternatives to the author as defaults in semi-automated repair processes, but not in fully automated repair processes.

Technique 3.4.5: Where an object has already been used in a document, the tool may offer the alternative information that was supplied for the first or most recent use as a default.

Technique 3.4.6: If the author changes the alternative content, the tool may ask the author whether all instances of the object with the same known function should have their alternative content updated with the new value.

ATAG Checkpoint 3.5: Provide functionality for managing, editing, and reusing alternative equivalents. [Priority 3]

Note: This checkpoint is priority 3 and is, therefore, not required to be implemented in order for a tool to conform to ATAG 2.0 at the "A" and "Double-A" levels. However, implementing this checkpoint has the potential to simplify the satisfaction of several higher priority checkpoints (ATAG checkpoint 3.1, ATAG checkpoint 3.2, and ATAG checkpoint 3.3) and improve the usability of the tool.

Techniques for Success Criteria 1: The authoring tool must always keep a record of alternative equivalents that the author inserts for particular non-text objects in a way that allows the text equivalent to be offered back to the author for modification and re-use if the same non-text object is reused.

Technique 3.5.1: Maintain a registry that associates object identity information with alternative information (this could be done with the Resource Description Framework (RDF) [RDF10]). Whenever an object is used and an equivalent alternative is collected (see ATAG Checkpoint 3.1), the object (or identifying information) and the alternative information can be added to the registry. In the case of short text equivalents, the alternative information can be stored in the document source. For more substantial information (such as video captions or audio descriptions), the information can be stored externally and linked from the document source. Several different versions of alternative information can be associated with a single object, as sketched below.
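
A minimal sketch of such a registry (TypeScript), keyed by a content hash so the same object is recognized even under different file names; the record shape, the hash function, and the in-memory Map are all illustrative assumptions, and the technique itself suggests RDF for the persistent form.

    interface AlternativeRecord { kind: "alt" | "longdesc" | "caption"; value: string; }
    declare function contentHash(objectBytes: Uint8Array): string;

    const registry = new Map<string, AlternativeRecord[]>();

    function storeAlternative(objectBytes: Uint8Array, record: AlternativeRecord): void {
      const key = contentHash(objectBytes);
      const records = registry.get(key) || [];
      records.push(record); // several versions per object are allowed
      registry.set(key, records);
    }

    function lookupAlternatives(objectBytes: Uint8Array): AlternativeRecord[] {
      return registry.get(contentHash(objectBytes)) || [];
    }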

Applicable to: code-level, WYSIWYG, object-oriented, and indirect authoring functions.

Example 3.5.1: This illustration shows a text equivalents registry viewer that a tool can include to allow the author to query and edit the various text equivalents stored in the registry. For maximum flexibility, the design takes into account multiple non-text objects of the same name, multiple types of text equivalents for each non-text object, and multiple versions of each text equivalent type. (Source: mockup by AUWG)

[Image: Illustration of a text equivalents registry editing tool]

Technique 3.5.2: Present stored alternative information to the author as default text in the appropriate field, whenever one of the associated files is inserted into the author's document. This satisfies ATAG Checkpoint 3.4 because the equivalent alternatives are not automatically generated and they are only reused with author confirmation.

Technique 3.5.3: If no stored association is found in the registry, leave the field empty.

Technique 3.5.4: The stored alternative information required for pre-authored content (see ATAG Checkpoint 2.6) may be part of the management system, allowing the alternative equivalents to be retrieved whenever the pre-authored content is inserted.

Technique 3.5.5: Tools may allow authors to make keyword searches of a description database (to simplify the task of finding relevant images, sound files, etc.). A paper describing a method to create searchable databases for video and audio files is available (refer to [SEARCHABLE]).

ATAG Checkpoint 3.6: Provide the author with a summary of accessibility status. [Priority 3]

Techniques for Success Criteria 1: The authoring tool must provide an option to view a list of all known accessibility problems (i.e. detected by automated check or identified by the author as part of a semi-automated or manual check) prior to completion of authoring.

Technique 3.6.1: Provide a list of all accessibility errors found in the content (e.g. selection, document, site, etc.).

Technique 3.6.2: Provide a summary of accessibility problems remaining by type and/or by number.

Technique 3.6.3: Store accessibility status information in an interoperable form using Evaluation and Repair Language [WAI-ER].
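
Techniques 3.6.1 and 3.6.2 combine naturally into one report. A minimal sketch (TypeScript), reusing the Problem shape from the checking sketch under Checkpoint 3.2; grouping by message text is an illustrative simplification:

    function summarize(problems: { message: string }[]): string {
      const byType = new Map<string, number>();
      problems.forEach((p) => byType.set(p.message, (byType.get(p.message) || 0) + 1));
      const lines = [problems.length + " known accessibility problem(s):"];
      byType.forEach((count, message) => lines.push("  " + count + " x " + message));
      return lines.join("\n");
    }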

ATAG Checkpoint 3.7: Document all features of the tool that support the production of accessible content. [Priority 2]

Implementation Notes:

The checkpoints in guideline 4 require that implementations of documentation be:

Techniques for Success Criteria 1: All features that play a role in creating accessible content must be documented in the help system.

Technique 3.7.1: Ensure that the help system can answer the following questions: "What features of the tool encourage the production of accessible content?" and "How are these features operated?".

Technique 3.7.2: Provide direct links to context-sensitive help on how to operate the features.

ATAG Checkpoint 3.8: Ensure that accessibility is modeled in all documentation and help, including examples. [Priority 3]

Techniques for Success Criteria 1: All examples of markup and screenshots of the authoring interface that appear in the documentation and help must model accessible Web content.

Technique 3.8.1: Include relevant accessible authoring practices in examples. [STRONGLY SUGGESTED]

Applicable to: code-level authoring functions.

Example 3.8.1: This illustration shows that the documentation for the input element in this code-level authoring tool makes use of the label element, in order to reinforce the routine nature of the pairing. (Source: mockup by AUWG)

[Image: Screen shot demonstrating a help system for the 'input' element]

Technique 3.8.2: In the documentation, ensure that all code examples pass the tool's own accessibility checking mechanism (see Checkpoint 3.2).

Technique 3.8.3: In the documentation, provide at least one model of each accessibility practice in the relevant WCAG techniques document for each language supported by the tool. Include all levels of accessibility practices.

Technique 3.8.4: Plug-ins that update accessibility features of a tool should also update the documentation examples.

Technique 3.8.5: Implement context-sensitive help for accessibility terms as well as tasks related to accessibility.

Technique 3.8.6: Provide a tutorial on checking for and correcting accessibility problems.

Technique 3.8.7: Include pointers to more information on accessible Web authoring, such as WCAG and other accessibility-related resources.

Technique 3.8.8: Include current versions of, or links to, relevant language specifications in the documentation. This is particularly relevant for languages that are easily hand-edited, such as most XML languages.

Technique 3.8.9: Provide links from within the accessibility related documentation to launch the relevant accessibility features.

ATAG Checkpoint 3.9: Provide a tutorial on the process of accessible authoring. [Priority 3]

Techniques for Success Criteria 1: A tutorial on accessible authoring with the authoring tool must be provided.

Technique 3.9.1: Document the sequence of steps that the author should take, using the tool, in order to increase the likelihood of producing accessible content. This should take account of any idiosyncrasies of the tool.

Technique 3.9.2: Explain the importance of accessibility for a wide range of content consumers, from those with disabilities to those with alternative viewers. Consider emphasizing points in "Auxiliary Benefits of Accessibility Features", a W3C-WAI resource.

Technique 3.9.3: Avoid referring to accessibility features as being exclusively for particular groups (e.g. "for blind authors").

Technique 3.9.4: In addition to including accessibility information throughout the documentation, provide a dedicated accessibility section.

