W3C

Accessibility Conformance Testing Teleconference

05 Nov 2020

Attendees

Present
Daniel, Shadi, Carlos, Jean-Yves, Wilco, Kasper, KathyEng, Levon, EmmaJ_PR, JenniferC, MaryJo
Regrets

Chair
MaryJo, Wilco
Scribe
Jean-Yves

Contents


Everybody: round of introductions

Wilco: today is about accessibility support in relation to test cases.
... accessibility support for iframe is very chaotic. Nobody really follows the specs, possibly because they are ahead of the specs.
... thus, on the iframe rule, I couldn't rely on the specs. (sometimes they are focusable, sometimes not; some UAs only use title, others use other attributes; in VoiceOver you need to navigate into the iframe, other UAs don't; …)
... tried to figure out what is testable and works everywhere.
... but that creates a fragile rule. If browsers change, the rule needs an update. This is bad.
... the question is whether we do enough in Accessibility Support.

Kasper: the rule as currently written only works for specific browsers/ATs. I'd prefer the rule to be a description of what iframes should be, rather than of the current technical situation.
... then, the current situation becomes an implementation detail.
... instead of assuming that iframes are focusable (or not), we could use a more generic description like "iframes that can be navigated to by keyboard". The details then fall to the implementation.

<Wilco> Should ACT rules enforce consistency, even where accessibility support is inconsistent?

Wilco: the problem is that rules then do not necessarily enforce consistency. Each implementation/tool could test for a different UA and be "correct" even though they disagree…
... Is that OK?

Shadi: example?

<Levon> Seems like a pass with Safari noted as not supporting

Wilco: the name for an iframe. Concrete question: "do you pass or fail an iframe with aria-label?" In Safari it has no name, in other UAs it has one.

<EmmaJ_PR> This rule? https://act-rules.github.io/rules/cae760

https://github.com/act-rules/act-rules.github.io/pull/1464#discussion_r502245640 <= this discussion

Wilco: current version of the rule depends on "accessible name" which depends on the accessible name computation spec, which says aria-label is OK.
... But that is not supported in Safari.
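For illustration — a minimal sketch of the kind of ambiguous test case being discussed, not one of the rule's actual examples (the label text and src value are placeholders):

  <!-- Per the accessible name computation, this iframe is named "Advertisement" via aria-label. -->
  <!-- Per the discussion above, Safari/VoiceOver does not expose that name, so the result differs by UA. -->
  <iframe aria-label="Advertisement" src="/ad.html"></iframe>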

Emma: Is there some W3C spec specifying how iframe should work?

Wilco: there is. No browser is following it…

Emma: as devs, we try to work with the specs. We code to the specs because we can't really handle all the browsers' shenanigans.

Kasper: we've had a similar situation before, with the concept of ownership in the accessibility tree. Wilco realized that the specs are not very good. We first tried to make our own definition, but no browser follows that either. We ended up deciding that the accessibility tree is something external to the rule (an input aspect). The concept of ownership is thus provided by the UA.
... I think this is similar with the accessible name computation. Today we refer to the specs. We could instead consider the accessible name to be "the label exposed in the accessibility tree".
... the accessible name would thus be external. We wouldn't worry about UAs ignoring the specs.

Emma: this puts the challenge on devs to find different ways to provide labels for each browser.

Wilco: this puts the difficulty into the test cases, which is the main topic of today…
... for owned elements, we have mostly avoided test cases with ambiguous results. So no example that passes in one browser and fails in another (we would need to take sides by deciding whether it passes or fails).

<Zakim> shadi, you wanted to speak of ARIA-AT CG https://github.com/w3c/aria-at/

Wilco: if tool A tests an iframe and fails it because of aria-label, but tool B doesn't, they are both valid implementations but will disagree on some test cases. Should we avoid any test case with aria-label, or have another setup to record that a case has different results?

Shadi: I fear we go down a path where we document Accessibility Support. That would require a lot of maintenance work.
... would be interesting to explore the approach, maybe with additional links to ARIA-AT or other documentation.

Wilco: I see a few choices. Do we include test cases that we know pass on some UAs and fail on others? It means some implementations will be inconsistent if they want to take other browsers into account.

Emma: If there is a spec, the rule should pass against the spec, with a note that it is not respected everywhere.

Carlos: I agree with Emma. If we ignore the specs too much, the message we're sending is that specs can be ignored. This is bad. I would prefer to avoid test cases with different results depending on the UA, and document them if we really need to.

<EmmaJ_PR> +1

Carlos:  let us not create the burden of maintaining support.

Wilco: I see 2 suggestions. 1/ follow the specs and "ignore" UAs (acknowledge they differ and it might be OK, but we only test the specs). 2/ ignore the specs and test against what UAs do.

Daniel: Trying to document everything will be too much work. I'd prefer following the specs.
... acknowledge there is currently an issue with some UAs/ATs, but follow the specs. No need to use the edge test cases in the rule.

Wilco: from Audrey: RGAA is an example where they look strictly at what actually works. Each version lists the UAs/ATs they try to support and tests based on these.

Mary Jo: To me these are bugs in UA/AT if they don't follow specs. We should test to specs.

Mary Jo: if we find Accessibility Support issues, we contact vendors and report bugs.

<JenniferC> +1 with Mary Jo

Kathy: Trusted Tester does the opposite of RGAA. We test according to the specs. For iframe, we knew UAs did it differently. In the test report, the tester has to identify the environment. Differences in test results can be traced back to this.

<Zakim> Wilco, you wanted to bring in that specs tend to be following not leading

Wilco: my feeling is that specs follow implementations, but do not lead them. There are exceptions (ARIA is an example), but often they follow.
... ATs often do things intentionally because they think it is a better solution.

Shadi: agree with Wilco. But then which implementation do we follow, since they are different?
... WCAG doesn't say which method is good or bad. It lets testers note their environment. If we don't know the environment (UA, AT, …) we can't really know if something is OK.
... I'm concerned about maintenance if we try to follow implementations and have to change with them.

<Wilco> STRAW POLL: ACT rules should follow standards over implementations where those standards exist

<maryjom> +1

+1

<Carlos> +1

<JenniferC> +1

<shadi> +1 with a note/caution on accessibility support

<kathyeng> +1

<Kasper> +1

Emma: but if a large majority of UAs do the same thing, different from the specs, we can consider following that…

<Levon> +1

<EmmaJ_PR> +1

Wilco: what do we do with test cases? The purpose is to ensure tools are consistent.

<Wilco> STRAW POLL: In the case where browsers do not follow the standards, there should not be test cases that enforce the standards

Shadi: how many "not following" would trigger the question?

<Wilco> JY: For something like iframe harmonization, it is not something that can be harmonized because everyone does it differently. If the browsers aren't harmonized, what kind of harmonization could we get, except for the minimal one that currently exists? The rule becomes future-proof by following the spec.

Emma: in the iframe example, how many browsers follow the specs? How many would fail something that follows the specs?

Wilco: aria-label would be ignored in VoiceOver. iframes are not always focusable (only Firefox makes them always focusable, so in other browsers it is not an issue if they have no name).

Emma: what do the specs say about iframe focus?

Kasper: they should redirect focus to content.

Wilco: by the specs, I do not think iframe need an accessible name.
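For illustration — a minimal sketch contrasting the two naming techniques discussed, not from the rule's test cases (label text and src values are placeholders):

  <!-- title is the attribute that, per the discussion above, some UAs rely on exclusively for iframe names -->
  <iframe title="Related video" src="/video.html"></iframe>

  <!-- aria-label is accepted by the accessible name computation, but has the support gap noted above -->
  <iframe aria-label="Related video" src="/video.html"></iframe>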

<Levon> Could a landmark region be added to an iframe to force focus for browsers that don't support focus?

Emma: if everybody starts making them accessible (even if the specs say they are not), we can consider preempting the change to the specs.

Levon: would it be doable to add a landmark to make them focusable?

<Wilco> STRAW POLL: In the case where any known browsers do not follow the standards, there should not be test cases that enforce the standards

Wilco: no. Browsers still do some tricks on top of it… Even with tabindex…

<shadi> -1

<Kasper> 0

<JenniferC> 0

<EmmaJ_PR> 0

<Kasper> Keep in mind that Lynx exists https://www.lynxbrowser.com/

<maryjom> -1

Wilco: one quirky browser shouldn't be enough. We'll always find one… "Enough" "major" UAs/ATs should be needed to trigger skipping edge test cases.

<Kasper> Wrong link: https://lynx.browser.org/

<Carlos> -1

Mary Jo: sometimes new browsers pop up and have no accessibility support. We shouldn't skip all test cases because of them.

Wilco: How do we decide when an issue is "big enough" to skip test cases?

Kathy: If we don't include test cases that trigger accessibility support issues, would that mean no iframe with aria-label?

Wilco: yes.

<kathyeng> -1

Wilco: Shadi, how should we decide when to (not) include test cases with known support issues?

Shadi: (thinking)

Emma: as a CG, we should raise bugs more often when we find accessibility support issues.

<shadi> +1 to Emma

Wilco: we've reported issues against specs more than against browsers.

Mary Jo: We sometimes report bugs to browsers.

Wilco: time is up. We've framed the question well but have no real solution yet… Should we handle it on a case-by-case basis?

Shadi: Yes, I think it will depend on how we feel as a group.

<EmmaJ_PR> +1

Wilco: is it a responsibility of the TF? I do not think this is a CG responsibility.

Emma: the conversation could pop up when creating a rule.

<EmmaJ_PR> when deciding whether or not to create a rule

Mary Jo: the community group creates test cases and finds problems. The TF can open issues against browsers.

<EmmaJ_PR> :-)

Wilco: TF will figure out how to organize this. CG can come to TF in case of questions.

RESOLUTION: Task force will work out a process for answering questions on test case accessibility support

Summary of Action Items

Summary of Resolutions

  1. Task force will work out a process for answering questions on test case accessibility support
[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/11/06 09:48:34 $