W3C

Accessibility Conformance Testing Teleconference

09 Jul 2020

Attendees

Present
Trevor, MaryJo, KathyEng, Charu, Wilco
Regrets

Chair
MaryJo, Wilco
Scribe
trevor

Contents


Survey: Role attribute has valid value - https://www.w3.org/2002/09/wbs/93339/ACTValidRole/results

wilco: survey still open for another week
... discussing trevor's comment on the larger assumption.

trevor: don't have strong opinions so we can keep it

charu: A little confused about why the implicit role is not enough to satisfy it; for example, a button would have the implicit role button

wilco: The applicability is for elements that use the role attribute
... So it assumes you need that role attribute for the rule to evaluate

charu: Does the rule also check if the role is appropriate for the element

wilco: No, it just looks to make sure the role is valid
... Comment on how WCAG documentation on the role attribute is lacking: no Understanding or Technique documents.

kathy: For passed example 2, for role="button link", what is the actual role?

wilco: The first one that is recognized is used
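A minimal sketch of the fallback behaviour described here: when the role attribute contains multiple tokens, the first token that maps to a recognized role is used. The role set below is a simplified stand-in for the full WAI-ARIA role list, and the function name is illustrative only.

    # Sketch only: KNOWN_ROLES is a simplified stand-in for the full
    # WAI-ARIA role list; a real check would use the complete set.
    KNOWN_ROLES = {"button", "link", "checkbox", "navigation"}

    def used_role(role_attribute):
        # The first recognized token in the role attribute wins.
        for token in role_attribute.split():
            if token in KNOWN_ROLES:
                return token
        return None  # no valid token; the element's implicit role applies

    print(used_role("button link"))  # button
    print(used_role("bogus link"))   # link
    print(used_role("bogus"))        # None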

Implementations with mostly "cantTell" results - https://github.com/w3c/wcag-act/issues/458

<Wilco> https://act-rules.github.io/implementation/axe-core#id-80f0bf

wilco: Some recent implementations had mostly cantTell results, along with 2 inapplicable
... The question is: does that matter for counting as an implementation?
... This is mainly for automated testing, since manual is either a pass or fail
... The thing that is important is that it doesn't have any inconsistencies with the rule
... I think it fits within our goal of harmonization. What it doesn't do, from a task force angle, is show us whether the rule can be implemented or whether the implementor has the same understanding of the rule

kathy: I think it's okay, especially when we are seeing it on automated tool results. I expect we will have test cases where the result will be cantTell for the tools
... cantTell means that the tool tried to test

wilco: yes, it can't tell either way, but untested means the tool was never run
... Can we use cantTell implementations to determine if a rule is okay?

charu: My understanding was that inapplicable means it does not meet the applicability of the rule. cantTell, as I understand it, means it meets applicability but the tool can't determine the result

wilco: It means the tool is unsure, it may not be sure about the applicability or it may not be able to tell pass or fail
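A short sketch of the outcome values being contrasted here. The string values match the EARL outcome vocabulary used in ACT implementation reports; the comments only summarize the distinctions drawn in the discussion above.

    from enum import Enum

    class Outcome(Enum):
        PASSED = "passed"              # the tool determined the test case satisfies the rule
        FAILED = "failed"              # the tool determined the test case fails the rule
        CANT_TELL = "cantTell"         # the tool ran but could not decide either way
        INAPPLICABLE = "inapplicable"  # the test case does not match the rule's applicability
        UNTESTED = "untested"          # the tool was never run on the test case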

maryjom: Do any of the tools get the expected results of passed or failed?

wilco: One does; they have all the expected results as their actual results. I am guessing they are put in by hand: where they get cantTells they fill them in by hand, so it is partially automated.
... Several implementors have a similar methodology.

kathy: I do like the idea that if all of the automated tools report cantTell, then we should have something like a manual result to support the test case

maryjom: It seems odd to me that automated tools are including their manual tests in their output

wilco: Deque has a similar tool that asks users questions and gives pass or fail based on that.
... It makes sense for us to say that we want to see one implementation that can get all of the expected results.

<Wilco> https://act-rules.github.io/implementation/axe-core#id-80f0bf

wilco: can we agree that we need more than what is provided in that output

maryjom: Maybe they should have a note about it including automatic and manual

wilco: as an implementation it is fine, but it is not proof that the rule can be implemented
... So what do we do with tools that combine manual and automated
... Would we want it per test case or per rule

maryjom: I think we would want it for a rule
... At least they could report that it was a combination, and just document it that way

kathy: I am curious, if we had the tool indicate that, how the tool gets the tester to reach the conclusion of the test result

wilco: Yep, my next question would be what the tool gives the person: is it guided, or just "go look at WCAG"? It might not be a testing methodology
... If that's the case, then some of the semi-automated data we have today we may not want. We would have to somehow decide what the right amount of instruction is for the user.

trevor: I think just adding a way to see that it is semi-automated is in the right direction

wilco: We might want to have some documentation on when people can include their semi-automated results. I wouldn't feel comfortable with axe-core since it doesn't give clear steps on how to reach an answer.
... Overall: implementations with mostly cantTells are fine, but we want at least one implementation with correct results to show the rule can be implemented.

<Wilco> 1. It is OK for an implementor to have mostly cantTell result

charu: This is okay if we also have an implementation of the rule with correct results

<Wilco> 1. It is OK for an implementor to have mostly cantTell result and call it consistent with a rule

trevor: question from GitHub about the line between all cantTells and having a couple of extra inapplicables.

wilco: reasoning at the time was that exclusively cantTells isn't really an implementation
... A tool that reports only cantTells is useless, but a tool with some results has some value.
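A rough sketch of the distinction Wilco is drawing, assuming expected and actual outcomes are available per test case. The function names are illustrative, not the actual implementation-tracker logic, and the consistency check is simplified compared to the full ACT rules.

    # Illustrative only: simplified view of "consistent" vs. "complete".
    def is_consistent(results):
        # results: list of (expected, actual) outcome strings per test case.
        # cantTell and untested never contradict the expected result;
        # any other mismatch makes the implementation inconsistent.
        return all(actual in ("cantTell", "untested") or actual == expected
                   for expected, actual in results)

    def is_complete(results):
        # Free of cantTell and untested: every test case got a definite
        # outcome. The proposal below asks for at least one such
        # implementation before a rule is published.
        return all(actual not in ("cantTell", "untested")
                   for _, actual in results)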

<Wilco> 1. ACT TF needs at least one implementation free of cantTell and untested results before publishing the rule

+1

<maryjom> +1

<Wilco> +1

<kathyeng> +1

<cpandhi> +1

wilco: Do we then need to know if the results were automatic, semi, or manual?

maryjom: I think so

wilco: Why do we need to know that?

maryjom: It could call into question the results

wilco: Is that saying we don't trust their results?

maryjom: I hate to think that they would go through and put in the expected results

wilco: It's possible, but there is nothing we can do about that

maryjom: It's a little clearer when it is admitted that it was semi-automated or manual
... have to figure out where to display that

<Wilco> 2. ACT TF needs to know for each rule implementation whether the outcome was decided completely automatically, completely manually or a combination of automated and manual

trevor: I was thinking of where we declare the id, type, and consistency

<shadi> [+1 to MJM -- what is the concern with collecting that data about mode of testing?]

<Wilco> +1

+1

<maryjom> +1

<kathyeng> +1

<cpandhi> +1

wilco: Do we then need a declaration that the manual results were from a testing methodology

<Wilco> 3. ACT TF needs to know that manual outcomes are the result of a step by step test procedure.

<Wilco> 3. ACT TF needs to know that manual outcomes are the result of a step by step test procedure included in the implementation

<shadi> [wouldn't that be semi-automated, when the tool provides testing guidance (step-by-step or otherwise)?]

wilco: Should I add that it must be included in the tool

<Wilco> yes, that's semi-automated Shadi

<maryjom> +1

<cpandhi> +1

+1

<Wilco> +1

<kathyeng> +1

wilco: Is there anything else? I think we have just set some requirements for implementation data
... do we need to talk about the cantTells

maryjom: I don't think it matters, it is what it is

wilco: I think when we get to the implementation tracker we will need to have that discussion

kathy: When people register as implementors, do they give their type

wilco: Generally yes, but not for all of them

kathy: How do they demonstrate that they have a methodology?

wilco: We don't need to see it, but they may need to change their results to cantTell if they are relying on expert assistance
... This will go back to the CG to see how we can be provided this data. We will need some sort of statement from each implementor that integrates automated and manual results that they do have test procedures.
... Does this need to be a CFC or a survey? We should probably formalize this

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/07/09 14:09:22 $