
Talk:Accessibility Conformance Testing for W3C

From Automated WCAG Monitoring Community Group
  • What will the work plan look like for phase 1? What about phases 2 and 3?
  • How does this work relate to digital publishing?

Comments by Jon Gunderson:

  1. I think it should focus more on the HTML5 and ARIA specs (e.g. the approach of the OpenAjax library) rather than on WCAG 2.0 techniques for the development of rules.
    • [Shadi] I think this makes sense. Test rules need to be more technology- and context-based than techniques typically are. Techniques often combine several rules. The objective of the rules is to point out conformance 'faults' (or potential faults) to WCAG requirements rather than to individual techniques. Anyone have suggestions on specific areas of the Work Statement where this should be further clarified?
  2. I think the first step would be to develop test cases for the rules. This would be more palatable to people with their own code bases, make it much easier to participate, and help people see the deficiencies in their rulesets.
    • [Shadi] This is an interesting approach. Indeed, a common "acceptance test" for rules would be very helpful. This is kind of implied with the "ACT Benchmark" but maybe not as explicitly - IMO it talks more about the spec than about the test cases and what these will look like. As to "first step", is this something that we want to declare at this stage already, or leave it to the group? My guess is that we will be working on several things in parallel anyway.
  3. I just looked at the differences between the two major open source evaluation libraries: the OpenAjax evaluation library and aXe. I believe it would be difficult for us to find a common JavaScript library of rules given the different use cases we have for our tools. Now throw in all the other companies with proprietary evaluation libraries, and it seems like it would be difficult to find any consensus.
    • [Shadi] But is this not exactly the point of the "ACT Framework" - to develop a common format for the rules so that they can be combined (and compared)? What are the specific hurdles here?
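The "common format" idea in the exchange above could be sketched roughly as follows. This is only an illustration, not the actual ACT Framework format: the rule shape, field names, and the example rule are all hypothetical. The point is that each rule maps to a WCAG success criterion and carries its own test cases, so any implementation (OpenAjax, aXe, or a proprietary engine) can run the shared test cases against its own evaluation engine.

```javascript
// Hypothetical sketch of a shared rule format with embedded test cases.
// Field names and the example rule are illustrative, not the ACT spec.
const imgAltRule = {
  id: "img-alt",
  successCriterion: "1.1.1", // the WCAG 2.0 success criterion this rule maps to
  evaluate(element) {
    // Pass when an <img> has a text alternative via alt or aria-label
    return element.hasAttribute("alt") || element.hasAttribute("aria-label");
  },
  testCases: [
    { html: '<img src="cat.jpg" alt="A cat">', expected: true },
    { html: '<img src="cat.jpg">', expected: false },
  ],
};

// An "acceptance test" runner: each tool supplies its own parser, but the
// rule's test cases are shared, so results become comparable across tools.
function runTestCases(rule, parse) {
  return rule.testCases.every(({ html, expected }) => {
    const element = parse(html); // engine-specific HTML parsing
    return rule.evaluate(element) === expected;
  });
}
```

A benchmark in this spirit would not require everyone to adopt one rule library; it would only require that each library can report pass/fail for the shared test cases.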


Comments by Jesse Beach:

  1. I'm beginning to believe that WCAG-based tests just don't work at the page level in a large application; it becomes impossible to track the issues back to the team that caused them...or virtually impossible without deep integration. The tests we can do at a feature/component/piece level become ever-more trivial and easier to cover in tools like linters. So, I'd love to hear your thoughts on this.
    • [Shadi] Not sure what is meant by "WCAG-based". I understand this to mean 'break up the broad coverage of success criteria into individual small tests'. I think this is in agreement with the proposed work - each rule covers a small aspect but they are combined to also catch broader aspects. For example, lack of a header cell in a table could be a test, which maps to the broader 1.3.1 success criterion (which, alone, would probably be too broad to report on).
  2. More and more, I've been moving towards linters as the appropriate intervention point. Like, if you put an onClick on a div element (or some custom element that renders down to a div), the IDE should raise a warning that a keyPress handler and a tabIndex should be added as well. This kind of warning falls below the level of a WCAG rule, but the cumulative effect of these types of linter checks is to eliminate the sloppy code that composes into bad AX.
    • [Shadi] I agree that such 'warnings' are incredibly useful. After all, there are only a few aspects that can be fully and conclusively tested by tools. There is much more potential in 'warnings' to support developers and testers/evaluators. But we will probably need to scope this carefully, as it greatly expands the volume of tests that need to be developed and vetted.
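The clickable-div warning described above could be sketched as a small linter-style check. This is a hedged illustration only: the node shape ({ tag, attributes }) and function name are assumed for the sketch, not the AST or API of any real linter.

```javascript
// Sketch of a linter-style check: a div with a click handler should also be
// keyboard accessible. The { tag, attributes } node shape is hypothetical.
function checkClickableDiv(node) {
  const warnings = [];
  const attrs = node.attributes;
  if (node.tag === "div" && "onClick" in attrs) {
    if (!("onKeyPress" in attrs) && !("onKeyDown" in attrs)) {
      warnings.push("div with onClick should also handle keyboard events");
    }
    if (!("tabIndex" in attrs)) {
      warnings.push("div with onClick should be focusable (add tabIndex)");
    }
  }
  return warnings;
}
```

Run individually, a check like this flags code well below the granularity of a WCAG success criterion; it is the accumulation of many such warnings that keeps the sloppy patterns out of the codebase.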