W3C

Accessibility Conformance Testing Teleconference

12 May 2022

Attendees

Present
Will_C, thbrunet, trevor, kathy_, JennC, Daniel, ToddL
Regrets
Chair
Kathy
Scribe
JennC

Contents


ACT rules sheet and Survey Results

<Will_C> oops, be right back

aria-hidden has no focusable content - Todd, do you have an update?

Todd: Got through them I think.

Kathy: Ok, we'll work on that later. Audio element has no text alternative. We can talk about this later on as well. Will, did you want to talk about Meta element has no refresh display? I understand he may not have had enough time for this.

Meta viewport allows for zoom? Will says the pull requests are available. We'll go through this later.

Open ACT pull requests

Kathy: Let's switch to open pull requests. Daniel, this one for 'make sentence easier to process'?

Daniel: There are some old libraries around, but this is a small problem with the rule. A subordinate clause at the end of the sentence, and Jean-Yves has approved it. If someone can find out why the tests are failing, go ahead. That would be the only review here.

<Will_C> I'm back

Kathy: Ok, I'll leave this one without additional reviewers. Daniel: No, it doesn't seem to need more review. There are some old libraries that we're using, but I didn't touch them during my pull request.

Wilco: Please send me the link on Slack and I'll take a look at it.

Wilco - bypass block, remove duplicate test cases? Kathy: As reviewers, I'll put myself, Trevor, and Will.

Pull requests - object has name, adjust order of failed examples - Jennifer to review as well.

Tom: There's a question about decorative and non-decorative state. Kathy asks if decorative/non-decorative is considered a state. "State" may be the wrong word. If it's interactive, that implies that it's non-decorative... there's confusion about whether that content is intended.

Kathy: There's a conflict here, so I'll try to respond to that.

Tom: I reviewed the definitions that Helen mentioned, so maybe that's what's holding it up at this point. Kathy: I'll look after the meeting.

Update from ACT implementations on WAI website

Kathy: This is the update on ACT implementations on the WAI site.

<dmontalvo> https://deploy-preview-95--wai-wcag-act-rules.netlify.app/standards-guidelines/act/implementations/

<kathy_> https://deploy-preview-95--wai-wcag-act-rules.netlify.app/standards-guidelines/act/implementations/

Kathy: This is what Wilco has been putting together: an initial page talking about tools, methodologies, and automated test tools, and explaining the value of ACT consistency. The second page is related to how axe-core performed with each rule - which are consistent and not consistent with ACT. Each of the ACT rules is detailed even further: how consistent the results were for each of the examples within that rule. There are some rules where axe-core has two tests for that single rule, with results for each. Wilco, anything else I've missed?

Kathy: Wilco has sent out a question if these were understandable.

Wilco: I think you covered it. I would like to get this published, and feedback - are you ok with this? Any blockers? Anything we cannot put on the W3C website the way this is today. I would like to work iteratively on this.

Tom: It's the combination of the rules that is considered an ACT rule. Kathy: Yes, axe is using two procedures for one rule so if at least one is correct, it fits the rule.
... We add the rule codes, but I think what you say here is fine. I don't know if there's an easy way to combine this.
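The combining Kathy and Tom describe - treating a rule as covered when at least one of an implementation's procedures gives the expected result - can be sketched roughly in code. The function names, outcome labels, and exact consistency criteria below are assumptions for illustration, not the published ACT implementation scoring:

```python
# Hypothetical sketch of combining several test procedures' outcomes
# for a single ACT rule. All names and criteria here are assumptions.

def combine_outcomes(outcomes):
    """Combine outcomes from multiple procedures for one test case.

    If any procedure reports 'failed', the combined outcome is 'failed';
    otherwise 'passed' wins over 'inapplicable'.
    """
    if "failed" in outcomes:
        return "failed"
    if "passed" in outcomes:
        return "passed"
    return "inapplicable"


def is_consistent(expected, outcomes):
    """Rough consistency check for one example of a rule:

    - a 'failed' example must be flagged by at least one procedure;
    - a 'passed' or 'inapplicable' example must not be flagged by any
      (no false positives).
    """
    combined = combine_outcomes(outcomes)
    if expected == "failed":
        return combined == "failed"
    return combined != "failed"
```

Under this reading, two procedures that split a rule between them stay consistent as long as one of them catches each failed example and neither raises a false positive.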

Kathy: I have submitted similar feedback about following the results. For some of the rule details for axe, there is more than one procedure, e.g. for a Passed example. As far as what to do with a pass/fail result, I haven't seen that.

<dmontalvo> +1

Tom: Whatever is a pass is a pass, etc. - and it would be good to have that on the page.

Kathy: Is that a blocker? Tom: No, just a nice-to-have. Wilco: Agree, would be good to have a summary.
... Any other comments about the pages?

<dmontalvo> https://github.com/w3c/wcag-act-rules/issues/100

<dmontalvo> https://github.com/w3c/wcag-act-rules/issues/101

Daniel: I'd like to point people to some issues I've opened here.
... #100 - here we are suggesting a quality assessment, the 'why' - how ACT helps us compare how the tools tests for WCAG, and a little bit about scoring, but we are just giving information so it's easy to process as a resource, for now.
... We have "testing tools and methodologies" - that's too generic... how do we define it? Just a list of tools, or how they perform and implement the ACT rules? Also about "metrics" - we're providing what the tools are doing with the rules in a metric, systematic way. This is what we at W3C wanted to discuss, so we can address them.

Kathy: Daniel, would you prefer to hold off on publishing? Daniel: We can at least discuss those, comment, and provide suggestions. Or give people some time with this. It's not that we don't want to publish, but to add/change some more first.

Wilco: I want to say this is a draft - we have the standing agreement from the AG and ACT chairs that if this task force agrees, we can go live with this.

Kathy: As you said, Daniel, let's look at Issue #100 on the screen. It's about how the pages seem to be comparing tools?

Jenn: Thanks Wilco. I see that the tools for Siteimprove Alfa and Deque axe are there, and yes, I see them side-by-side. Inevitably people are going to compare the tools when they're listed like that, and that is life - I'm acknowledging this happens. I will check with Jean-Yves and my colleagues for their thoughts.

Daniel: We can be more cautious about avoiding this type of comparison of the tools themselves, not the results. To be specific, Shawn was the one who raised the concern. Can I ask that we have a few more days to discuss inside the pull request?

Kathy: Shawn says: "W3C should not be seen to compare and score each tool - think carefully how we can present the information clearly, as 'compare' and 'score' send up red flags." I don't think we need to involve W3C legal or Judy Brewer, but we need to discuss.

Daniel: I think we can resolve some of these concerns by adding more information and explanation. Wilco: Can you submit a proposal for this? Daniel: Yes, I will come up with a proposal. We can send a CFC for this.

Kathy: Ok, thanks

Rules Format and state

Kathy: turning over to Trevor regarding Rules Format and State.

Trevor: Let's get on the same page about the rules format and state; I'll walk through it today in case not everyone has gotten through it in GitHub. It started with Jean-Yves trying to write about context related to widgets - how they can be in different states, like pseudo-classes and then other, more complex ones. Jean-Yves ran into a problem, and there are three solutions within the ACT format that could fix the issue. 1) We play dumb and pretend [CUT]
... and allow users to simply test what is in the real world (the rule and its wording don't change). Problem - since we're not stating which states to test, users might end up with different results, and we want/need consistency. So this is where we're starting, but not where we want to go. 2) The solution is to actually specify the state in the Applicability... essentially, in the Applicability you have the specific states that relate to the rule (e.g. styling): "the link that's inside that text has to meet ONE of these state criteria." So there's a list of things to keep up with in the Applicability, but it's worth having available... and we'd need to explain what it means to include this (for others).
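Trevor's "solution 2" can be sketched roughly in code: list the states the Applicability covers, and require the target to meet at least one criterion in each of them. The state names, criteria, and data shapes below are hypothetical illustrations, not part of the ACT rules format:

```python
# Illustrative sketch only: hypothetical states and criteria for a
# "link in text is distinguishable" style of rule, checked per state.

STATES = ["default", "hover", "focus"]  # e.g. CSS pseudo-classes :hover, :focus


def link_is_distinguishable(styles_by_state):
    """styles_by_state maps a state name to that state's computed style.

    The link passes if, in every listed state, at least ONE
    distinguishing criterion (here: underline or a distinct color)
    is met - mirroring "has to meet ONE of these state criteria".
    """
    def meets_a_criterion(style):
        return style.get("underline", False) or style.get("distinct_color", False)

    return all(meets_a_criterion(styles_by_state.get(state, {})) for state in STATES)


# A link underlined in all three states passes; one with no cue
# in the hover or focus state fails.
ok = link_is_distinguishable({
    "default": {"underline": True},
    "hover": {"underline": True},
    "focus": {"underline": True},
})
```

The point of the sketch is that the set of states becomes explicit input to the test, which is exactly what gives different testers consistent results.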

Will: I think we should have them, but I feel like testers will look at it and disregard it - if they see pseudo-classes like the focus state and think "wut"... so I feel we should be clearer and plainer in the language, and less technical. Otherwise testers may use their own judgment instead.

Trevor: The CSS pseudo-classes are what we're talking about now, but 'states' like decorative, or required for input - is that truly a state? It's unclear, but up to us to define.

Will: I know that the definition of this is a separate thing, but feel that #1 as the solution is a better way of doing it. If #2, we have to describe it better.

<Will_C> sorry, i think #2 is better

Kathy: I'm coming at it from the Trusted Tester perspective and how a human would follow and experience the state itself - I'm saying the same as Will: make it understandable. But in the same sense, we can take the ACT rules wording and translate it to human language (manual testing), while also remembering the automated tools' wording. Can we avoid being that technical in the rules?

Trevor: I'd say the ACT rules are fairly technical, and it takes a tester to boil the language down to human language.

Daniel: I agree with that. It's better to worry about this from the beginning. The technical info needs to be there... but we can create another document, like an explainer... we have the common input aspects already defined - where these are used for the rules.
... We can use the common input aspects and how they relate.

Trevor: i.e. yes, "hover over the link with your mouse" or "put focus on it". If I have an expanded list to show more options, a menu, what are the types of language / wording that you'd like to see describe the behavior for the testers? Anything come to mind?
... How do we define the state?

Kathy: We already tell the tester to 'expand' and 'collapse' the menu for testing. But we need to understand the end goal of the JavaScript, and if it's going to perform a certain function... then how do we translate this for a human?

Trevor: I'd like to get JavaScript manipulations into this... but at the ACT rules level, the question is WHAT we call these manipulations. Pseudo-classes are like that; they can be considered states. But what does not fall under the pseudo-class umbrella? It could be a long list.

Kathy: I'll just comment that there are a few ACT rules that we don't do in Trusted Tester, because we don't have the tools to perform at that level, and that's ok. If we can't do the rule, we can't... but as the ACT Task Force, we shouldn't hold back from writing more definitions and explanations.
... Meta viewport stuff is an example. Outside of 2.1, there are a few rules that we don't do right now. I'm not sure we can without adding more tools to our process.

Trevor: You mean tools that test the code?

Kathy: Yes.
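The meta viewport rule mentioned above is a good example of a check that needs code-level tooling. As a rough sketch of what such a check looks at - the thresholds here follow a common reading of the rule (user-scalable=no, or a maximum-scale below 2, restricts zoom) and are assumptions for illustration, not the normative ACT test procedure:

```python
# Illustrative sketch: parse the content attribute of
# <meta name="viewport"> and flag values that block zooming.
# Thresholds are assumptions, not the normative ACT procedure.

def viewport_allows_zoom(content):
    """Return True if the viewport content string still allows zooming."""
    props = {}
    for part in content.split(","):
        if "=" in part:
            key, _, value = part.partition("=")
            props[key.strip().lower()] = value.strip().lower()

    # user-scalable=no (or 0) disables pinch zoom outright
    if props.get("user-scalable") in ("no", "0"):
        return False

    # a low maximum-scale caps zoom below a usable level
    max_scale = props.get("maximum-scale")
    if max_scale is not None:
        try:
            if float(max_scale) < 2:
                return False
        except ValueError:
            pass  # non-numeric values are ignored in this sketch
    return True
```

A human can only observe whether pinch zoom works on a device; a check like this inspects the markup directly, which is the kind of extra tooling Kathy is referring to.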

Trevor: A lot of the ACT rules that are going to involve state will be difficult for automated testing. Normally a human will test, change the state, and then run the test again. If we make a rule that can't be tested at all, then it's not useful to anybody, and not worth writing.

Kathy: In Trusted Testers, we do have instructions to test and re-test something. We do have instructions to 'do this' then test it, so maybe we need to explore a little bit more. Even if it can't be tested by TT right now, let's not stop writing the rule.

Trevor: Another thing to be careful about - avoid asking testers to test every possible state, as that's not possible for a human tester to do.

<dmontalvo> Wilco asks to have a resolution to publish the implementation pages in beta with the summary box hidden until we have a better text

Kathy: We are out of time to do that. Daniel: Ok, let's follow up offline.
... Thanks everyone, we'll send out an invitation for next week.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.200 (CVS log)
$Date: 2022/06/07 10:18:15 $