See also: IRC log
Wilco: Auto-wcag workshop will be in July in
Dortmund, Germany
... Focus will be on test cases and some of the in-progress documents.
<Wilco> https://www.w3.org/community/auto-wcag/wiki/Accessibility_Conformance_Testing_for_W3C#Goals
Wilco: Trying to clearly set the goals.
... Regarding Goal #4 on the development of rules: We want to make sure the
rules remain up-to-date and useful
SA: Agrees with the goals set here. Are there any things we want to specifically exclude from our goals? e.g. Interpretations of WCAG and what that means.
CP: If rules are technology agnostic, won't they miss
differences in browser handling? How will we take that into
consideration?
... Different browsers render the content differently, and have varying
support for HTML5, etc.
Wilco: This work is not limited to simple rules; we want rules to be applicable to any technology.
SA: The way WCAG uses the term technology-agnostic is because their requirements apply to any technology. Every test rule will apply to a specific technology.
Wilco: The framework should be technology agnostic. But there will have to be rules for specific technologies, though not necessarily limited to specific browsers.
<shadi> [framework - technology agnostic; rules - technology specific; accessibility support?]
Wilco: The question is how would the rules work where there is different browser support.
<annika> https://www.w3.org/community/auto-wcag/wiki/Accessibility_Support
Annika: Katie provided a wiki page on this topic, so it's worth looking at.
Wilco: That came down to wanting to be agnostic of implementation, but not sure about that any more. There is something to omitting rules for implementations that aren't well supported.
Wilco: For example, iOS has little support for table markup, so rules should probably point that out for that platform and let the developer know that there are additional accessibility considerations there.
Wilco: Proposes we provide a way to deal with AT accessibility support for newer HTML, ARIA and other standards.
MJ: There is danger in being too closely coupled with AT support, as that can change very quickly with bug fixes, new releases, etc., which we have no visibility into until they come out.
Wilco: 3 components of the project: We want to have a framework for the rules so that various tools can utilize them. We need to vet the rules so that they meet the quality we need. We also need a database to keep the rules.
SA: The rule benchmark sets a good framework. The rules definition would be more spec-like. The existing test rules that we've started can be used in the first phase to test this framework out.
<Wilco> # Tooling and development process - the points in the "Benchmark" and "Collection" deliverables make me immediately think of the tools and process(es) that we envision. Maybe these should be open questions for now? We want to encourage a community-driven process, which you outline in some of the wording already, but it needs to be emphasized, I think.
SA: How will we develop and test the rules? Use GitHub or something else?
Wilco: Not sure yet how to go about it. We
will have to have a representative set of pages that you test the rules on, use
the test tool and see how well the rules work and if there are any false
positives.
... If you write unit tests for code, you know what you expect the rule to do,
and those rules will be tested against our assumptions. That still doesn't catch
things we didn't anticipate, such as implementation differences we didn't think
about.
... We need to test against real web pages to make sure our assumptions are
correct.
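[Editor's note: the two kinds of rule validation discussed above can be sketched as follows. This is a minimal illustration only, using a hypothetical rule that flags `img` elements with a missing `alt` attribute; the names and structure are invented and do not reflect auto-wcag's actual rule format.]

```python
from html.parser import HTMLParser

class ImgAltRule(HTMLParser):
    """Hypothetical test rule: flag every <img> with no alt attribute
    (a simplification of one check related to WCAG SC 1.1.1)."""
    def __init__(self):
        super().__init__()
        self.failures = []  # positions of failing <img> elements

    def handle_starttag(self, tag, attrs):
        if tag == "img" and dict(attrs).get("alt") is None:
            self.failures.append(self.getpos())

def check(html):
    """Run the rule on an HTML snippet and return its failures."""
    rule = ImgAltRule()
    rule.feed(html)
    return rule.failures

# Unit tests encode our assumptions about what the rule should do...
assert check('<img src="logo.png" alt="Logo">') == []  # expected pass
assert len(check('<img src="logo.png">')) == 1         # expected fail

# ...but only testing against real pages reveals cases we didn't
# anticipate, e.g. decorative images marked up with role="presentation",
# which this naive rule would wrongly flag as a false positive.
```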
SA: Why not let both types of rule validation coexist?
Wilco: Yes, there is value in both.
SA: Testing with real web pages - how do we know we've tested enough web pages to make sure our assumptions are correct?
Annika: The differences between deliverables 1 and 3 are not entirely clear.
Wilco: We hope auto-wcag is where the rules get developed, and where validation of the rules is done to make sure their quality is sufficient.
Detlev: This is about finding rules and aggregating them to create a more sophisticated automated test. There will still be human checking needed, so to what extent will that be handled in this work?
Wilco: We do want to include both. Part of our rules will be automated, and part will prompt evaluators to complete the manual checks.
JB: Need to figure out where this work, potentially a task force, will land. The best option looks like a task force of the WCAG working group. Some work would continue in this community group as well.
General agreement from the group that a task force under WCAG is reasonable.
+1 to a task force under WCAG
SA: Sign-off on the rules should happen in the task force. The auto-wcag community group should keep doing the spec work so it remains more open to participation.
Anna: Introduction - accessibility specialist from the BBC. This is Anna's first meeting.
Detlev: Introduction - Been part of the Evaluation Methodology task force and EM working group. May not be able to commit time to actively working in the group, but wants to stay familiar with this group's work.
JB: Happy work is continuing to move along. There are others that want to join - someone from the U.S. Trusted Tester program, and someone from Zhejiang University as they are also working on test rules in China.
<shadi> NEXT MEETING: Tuesday 3rd May 14:00 UTC http://bit.ly/1WHNxiz
[End of minutes].