Important note: This Wiki page is edited by participants of the EOWG. It does not necessarily represent consensus and it may have incorrect information or information that is not supported by other Working Group participants, WAI, or W3C. It may also have some very useful information.


ATAG review

From Education & Outreach
Revision as of 12:51, 20 November 2013 by Shawn (Talk | contribs)



previous versions of this page with closed comments, etc:

ATAG Overview

Docs:

  • maybe add something about the motivation for ATAG -- the elevator pitch -- e.g., when tools help with accessibility, it saves work for millions of developers {Shawn}
  • comment {name}

ATAG at a Glance

Docs:

  • comment {name}

Selecting & Using

old doc: Selecting and Using Authoring Tools for Web Accessibility

hasn't been reviewed since 2002 - maybe should be checked carefully {Andrew}
I think there were some issues with it. We might not want to update and promote it. We might even want to unlink it?
Sharron [action] -- do you want to check with Judy on this? {Shawn}

  • Andrew is right, this page has a lot of very outdated information :-) That said, the purpose of the page is still valid, and a significant portion can be updated and reused. {Paul}
  • Introduction: there are now a number of tools that fully support production of accessible websites. While such an authoring tool may produce accessible content as configured (or even "out of the box" in some cases), specific implementations may still prevent the creation of accessible content. It's probably a good idea to mention that there are serious and sustained efforts by many companies to incorporate accessibility, and by many different communities to develop default CMS themes that are accessible (e.g., Drupal, WordPress). The list 1-7 is also still valid, in my opinion. {Paul}
  • Checklists for Authoring Tool Selection: These don't feel like "checklists" to me; they feel more like a list of suggested questions that someone selecting a new authoring tool should consider to help guide the selection process. I suggest a short narrative here describing how the ATAG and WCAG guidelines should be consulted in conjunction with these questions.
  • Evaluating software currently in use by an organization: it might be a good idea to suggest ways for an organization to assess "just how bad is my current authoring software?" This could be done internally via freely available assessment tools, or by one of the many competent consulting firms that specialize in this.
  • Selecting new or replacement software: In the United States, we have VPATs (Voluntary Product Accessibility Templates) that in theory explain how a vendor's software product meets Section 508 checkpoints. In the absence of such "identifying badges," how will individuals know how to assess each of these questions? There must be a reputable list somewhere that identifies products, plug-ins, themes, and so on that meet minimum basic accessibility checkpoints.
  • Reviewing software procurement practices: Many governmental organizations require that software purchases adhere to basic accessibility laws. However, accessibility is just one of many purchasing criteria, and in many cases the whole of accessibility is distilled down to a single checkbox indicating that an "accessibility assessment" has been completed by someone in the organization. Unfortunately, most organizations - even very large ones - do not have an individual with the responsibility (much less the authority) to champion accessibility in the same way as cost, performance, interoperability, or other criteria.
  • Questioning software vendors about product support: I think this section is perfect; I wouldn't change a thing.
  • Working around Limitations of Existing Authoring Tools: I think parts of this section can be combined with the "Evaluating software currently in use by an organization" section above. The results of an assessment can help determine what needs to be done to create accessible content.
  • Examples of strategies to work around limitations of existing authoring tools: Most of the examples cited are not strategies but tactical solutions that rely on time-consuming manual labor. There are many general updates needed in this section: references to "print-centric" authoring tools are stale, and many of the software examples are dated. References to document conversion processes are still relevant; there are plenty of "how-to" tutorials available now. Are missing DTDs still such a major issue that they need to be called out?
  • Product Reviews: I really like this idea, but the reviewed authoring tools page is embarrassingly out of date and should probably be removed.
  • Authoring Tool Conformance Evaluations is an old document that was useful when released in 2002, but it obviously took a lot of effort and may be out of scope to update. An alternative might be for tool owners to provide details to a database? {Andrew}
    Gathering implementations is part of the ATAG WG's CR work. I don't know if they have plans to provide conformance evals. If so, we might help with the EO aspects, but the main work would be under the ATAG WG, I think. {Shawn}
  • comment {name}

ATAG 2.0

ATAG 2.0 CR
Note: It's in Candidate Recommendation, so hopefully any suggestions are non-substantive copy edits.

Implementing ATAG

Implementing ATAG 2.0 Working Draft

  • Intent includes implementation notes — It looks like some of the "Intent" sections combine two things: 1. why this is important for accessibility, 2. info on implementation, conformance, and/or the requirement (for example: A212).
    It would be good to separate these into at least two separate sections. {Shawn}
  • Appendix B: Levels of Checking Automation - my first impression is that quite a lot rests on encouraging or facilitating authors to check their progress, and that, in any case, feedback is an integral part of any design or development activity. This appendix, with its discussion of automated and manual checking, has an important role in understanding how to implement ATAG 2.0, but I feel the title is underselling the content. I would propose that 'Automatic and Manual Checking' be the start of the title, with the concept of levels as a subhead or sub-clause if necessary. Is there a list that analyses which, or how many, success criteria can be evaluated automatically? {Suzette 8/2 or Aug 2nd}
  • It says that "The Working Group seeks feedback on the following points for this draft":
    • Is the overall document a useful resource for implementers to apply ATAG 2.0 to your product? {SR comment: yes there are very useful pieces of information within this document. However, it is probably less useful than it might be because of the need to follow the Guidelines format. This results in a narrative that is excessively wordy and hard to follow. I am not sure that software developers - toolmakers - would wade through all of the extraneous verbiage to get to the kernel of actionable information and implementation guidance. Other than a less wordy Quick Guide, I am not sure what to suggest.}
    • Are the sections describing the intent of each success criteria clear? {SR comment: Overall, yes, the intent sections are generally quite good. One question about the use of the term "More accessible," used in A.1.2.1 and A.1.2.2. Why say "more accessible" rather than simply "accessible"?}
    • Do you have any suggestions for examples that should be added, modified or removed? {SR comment: Yes, I would like to see more examples from courseware, from teaching and learning tools used by both students and teachers.}
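
The automated vs. manual checking distinction raised in the Appendix B comment above can be illustrated with a small sketch. This is a hypothetical example, not code from the ATAG documents: it machine-checks one condition that is fully automatable (an img element with no alt attribute) and separately collects cases where a tool can only flag content for human review (alt text exists, but its adequacy is a judgement call).

```python
# Hypothetical sketch (not from ATAG): illustrates why some checks are
# automatable while others require manual review by a person.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Separates machine-decidable failures from human-judgement cases."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []    # automatable: the attribute is simply absent
        self.needs_review = []   # alt present; appropriateness needs a human

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "(no src)")
        if "alt" not in attrs:
            self.missing_alt.append(src)
        else:
            self.needs_review.append((src, attrs["alt"]))

checker = ImgAltChecker()
checker.feed('<p><img src="logo.png">'
             '<img src="chart.png" alt="Sales chart"></p>')
print(checker.missing_alt)   # → ['logo.png']
print(checker.needs_review)  # → [('chart.png', 'Sales chart')]
```

The point of the sketch is that a tool can fully automate the first list but can only queue the second list for manual checking, which is exactly the spectrum the appendix describes.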