W3C

Accessibility Conformance Testing Teleconference

18 Jun 2020

Attendees

Present
Trevor, MaryJo, Wilco, Shadi
Regrets

Chair
MaryJo, Wilco
Scribe
Shadi

Contents


[no quorum today]

Results of CFC on integration of ACT rules into WCAG materials (Issue #446): https://github.com/w3c/wcag-act/issues/446

MJM: comment from KathyEng

SAZ: this section was supposed to get removed

WF: need another CFC then

https://github.com/w3c/wcag-act/edit/master/understanding-act-rules.html

SAZ: updates made to wrong branch!

MJM: will resend CFC once updated

How strict should our test cases be? https://github.com/w3c/wcag-act/issues/460

WF: reflecting discussion going on in the CG
... has been discussed here in the past too
... basically, how strict should we be on test cases?
... so far, CG has been not as strict with test cases
... not seek out all the edge cases
... understand the general idea of the test rule
... not become a bug tracker

SAZ: definitely agree not to aim for being totally exhaustive of all situations
... but comprehensive enough to ensure sufficient consistency among implementations

<trevor> scribe: trevor

shadi: We started the discussion about how this could potentially lead to limitless issues. And that perhaps this indicates too broad of a scope.
... Other cases that have been brought up include places where the spec is unclear or there are browser issues.
... I am wondering whether we have assumptions, accessibility support notes, or something else in place of additional tests
... Aim to be strict, but make exceptions on a case by case basis on where we should be lenient

<shadi> scribe: shadi

MJM: tend to agree, can't make test cases completely exhaustive
... do we need to document that it is not exhaustive?
... otherwise assume it is

TB: like Wilco's approach
... work on the general concept
... then drill down over time
... concerned about case-by-case basis
... but also classify issues quickly

WF: goal was to promote harmonization
... not define how every tool should work
... but bring them more closely together

SAZ: would hope we can agree on a minimal set
... and not have to keep coming back to it

TB: not convinced about "never coming back"
... can start with like 80% agreement

SAZ: issue is that we don't have a measure
... would not think 20% is acceptable
... does not show common agreement

WF: think we are way up in the 90%
... most issues are edge cases

[outlines example of fairly theoretical issue that was brought up, which in consequence excluded an implementation]

SAZ: 90% is already fairly benchmark-level for me

<trevor> scribe: trevor

shadi: So how do we avoid the perceived issue that we are only putting in test cases that benefit certain tools over others
... You may say it happens rarely, but perhaps another tool has it occur a lot, so it wouldn't be an edge case for them
... How can we objectively define an edge case

<shadi> scribe: shadi

WF: couple of arguments I've used in the past
... for example, when test case is not strictly about the rule
... for example, goes beyond the scope of the test rule
... also, when something is not consistently supported across the board

SAZ: ideal question would be objective data on how often an issue occurs
... but also important to note examples where difficulty in writing test cases was actually a reflection on the test rule itself

MJM: no harm in documenting test cases
... with note on accessibility support issues
... might be tools that actually check these

WF: what about other tools that do not meet these test cases?

MJM: may be ok that not all tools catch all

SAZ: wild idea to toss in
... have core test cases that have broad agreement
... and other test cases that others suggest but do not have sufficient consensus

WF: raises another thought
... ask for real world example

SAZ: like that, very much of "how often does that occur"

WF: like the idea of optional test cases

SAZ: could help maintain future work on test rules

WF: want to avoid rules becoming a benchmark of implementations, for example of accessible name

TB: kind of meta-rule to check some of that stuff?
... like rule for accessible name
... if you fail that, then you will likely fail other rules

WF: there is a test suite

<Wilco> https://www.w3.org/wiki/AccName_1.1_Testable_Statements

TB: how would we pull that into ACT?

WF: not sure that is needed

SAZ: think not, as long as test cases are based on real-world issues
... may or may not need to implement full AccName spec

WF: what about configuration of tools?

<Wilco> https://act-rules.github.io/implementation/axe-core

WF: some may not provide all tests by default

SAZ: need to disclose in the implementation report
... so that report output can be replicated

WF: do consider that already

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/06/18 14:26:06 $