W3C

Accessibility Conformance Testing Teleconference

7 July 2022

Attendees

Present
ChrisLoiselle, Daniel, Helen, kathy, ToddL, Wilco
Regrets
-
Chair
Wilco
Scribe
JennC

Meeting minutes

ACT rules sheet

<Wilco> https://docs.google.com/spreadsheets/d/1OSkPFocXk4K3zYLnwS78WLsWO4PvE5yRcsauyefuIUI/edit#gid=0

Wilco: The meta element item just needs one more approval, then it's good to go. Wilco + Will.

Helen: Object element rendering non-text content has non-empty accessible name - I seem to keep getting more feedback, so Helen and Wilco are meeting later today to discuss.

Wilco: Trevor is away, but Jenn will review keyboard scroll after the meeting.

<ChrisLoiselle> https://github.com/ChrisLoiselle/act-rules.github.io/pull/1

Chris: I will review the pull request and get back to Wilco.

Open ACT pull requests

Wilco: There are a number of ARIA pull requests open, assigned to a few people - Tom and Trevor, I believe.

Tom: I saw one or two where changes were already made, and the changes have been updated.

<thbrunet> 1881 is the one that's still confusing.

Wilco: Looks like there are comments on those pull requests. The next step is to merge all of these pull requests into a separate branch for the move to ARIA 1.2, and send that out for a big call for review in the coming weeks, before the end of the month.

Wilco: Reviewing pull requests on GitHub... we don't have a lot to review here, though there are the Meta Refresh editorial updates in #1831. Jenn will look at this today.

Daniel: I managed to merge my pull request; it was editorial only and didn't have comments.

Difficulty with composing rules and accessibility requirements mapping

<Wilco> https://github.com/act-rules/act-rules.github.io/pull/1883/files

Wilco: Daniel's pull request is #1883

Wilco: Composite rules - I introduced this topic last week, along with a possible direction to go with it. I.e. if you have a composite rule - say, for keyboard trap - that is made up of two atomic rules: one that says use standard keys to escape a trap, and one that says use non-standard keys and describe how to get out of the trap in text. Those are the two ways to pass, so they are composed together.

Wilco: Even though the atomic rules don't have a mapping to success criteria - just because you fail one doesn't mean you fail the other. Jean-Yves suggested adding the success criteria mapping as "optional" ... it may be strange to say a requirement is optional, so we may need a better name. But it does solve some problems we're having. How do people feel about a note - you must report the failures or success criteria, but certain ones you[CUT]

Wilco: Any questions?

Helen: What I understand is - you can have a failure against the checkpoint, but it doesn't fail the checkpoint completely - i.e. it's an element of the checkpoint that fails?

Wilco: A failure under this rule can be reported as a failure under this SC, but it doesn't have to be.

Helen: i.e. an <a> tag without an href doesn't fail one SC but fails 4.1.2 Name, Role, Value?

Helen: I think you should take the 'people' element out of this, as it can all go pear-shaped. The complexity is too great - we are adding complexity to the rules because humans are complex. Subjectivity - there could be interpretations that are both valid.

Wilco: It gives testers flexibility to fail an item under 1.1.1 for example, but if you don't that's ok too. Not everyone has to.

Chris: The principles / guidelines / rules / SCs have parent-child relationships - instead of "optional", it could be seen as a cousin relationship, where one SC is failed but there are a lot of things related to that SC which are not total failures, and where these other considerations map back to the one element.

Tom: Is there a note in the rule that if you fail, you don't necessarily fail an SC, but there is work to do?

Wilco: Yes - colour contrast is a good example: when we write a rule for SC 1.4.3 at AA, you also fail the AAA SC. It's quirky that we require you to fail both SCs when the ACT rule is explicitly about the one. When a tool runs a check, it would be ok if it doesn't flag AAA.

Helen: It's inferred - failing AA fails AAA also.

Wilco: In this particular AA to AAA case, it's always a "failure to failure relationship". But maybe we can say that.
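For reference, this AA/AAA pairing is what the accessibility requirements mapping in an ACT rule's frontmatter currently captures. A minimal sketch, assuming the YAML conventions used in act-rules.github.io; the exact entries of any given contrast rule may differ:

  accessibility_requirements:
    wcag20:1.4.3: # Contrast (Minimum) (Level AA)
      forConformance: true
      failed: not satisfied
      passed: further testing needed
      inapplicable: further testing needed
    wcag20:1.4.6: # Contrast (Enhanced) (Level AAA)
      forConformance: true
      failed: not satisfied
      passed: further testing needed
      inapplicable: further testing needed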

Helen: Could it be an assumption in the rule?

Kathy: I like the cousin relationship, an acceptable SC that fails. With Trusted Testers, we cover the checks that all testers have to perform, and in that one test, we have a number of tests - not a separate test to cover standard and non-standard. We pass and fail under a single test. Where we have atomic rules without an SC, the Trusted Tester would have one.

Tom: In the SC mapping, any failure means the SC is not satisfied... can we say instead that "it may not pass" or "may not be satisfied"?

Wilco: That could largely cover it. That's what we might be saying.

Tom: Would this be put into a separate section?

Wilco: Yes - if we have an additional or 'related' accessibility requirements section, we could do it that way. That's a matter of working out the details, though. Am I right in thinking that we're all largely in favour of this idea? Does anyone not like this?

Helen: I don't like it. It makes me think too much. :) But it's good to ensure we're not over-egging the failures, overly complicating - but it covers the possibility of failures.

Wilco: With Jean-Yves, we recognise the struggle. We can say you can report one, or you can report all failures - it's all good. It gives us an 'out' on the ACT rules: we can't say it's always this success criterion, but on the other hand we can never say it's not that SC, when a lot of the time it is. How does everyone feel about this way of handling the atomic rules?

Daniel: I understand the approach, but what's the impact? As long as an implementer reports either some or none of those SCs, they're consistent. If a tool is going to conform to AAA, it does or doesn't include the AAA-related criteria. When it comes to testing, how does it work? Do we need to do something about it?

Wilco: If a Trusted Tester does not test for AAA, we accept that that's ok. We say it's a consistent implementation. If we call it 'additional' requirements, you need to report the AA as a failure, but if you don't also report AAA, that is also fine. It's actually more flexible than the process we had a few months back.

Daniel: Yes, I wanted to raise that question and I understand that.

Wilco: Many tools do report AAA, but maybe not so many manual reports do.

Daniel: Yes, sometimes they do and some don't. Just wanted to raise the issue. Yes, my first point was about consistency...

Wilco: We could get into the problem where methodology A and methodology B test the same, but one reports on additional SC. If we make this change, we are allowing for that.

Daniel: It's something to be mindful of in what we're saying; it could be a concern for some people in the future, and we need to be aware of that.

Wilco: I think we need to explain this in the rule - documenting and explaining the additional rules is important. And also to decide where we apply this - not a free-for-all, but where it actually applies.

Wilco: Next steps - I feel we need a proposal written for this. At which point, I'll look at my editor on this call - Kathy. Do you feel you could come up with some text for this that we can discuss? This would be for rules format 1.1, which we can start experimenting with.

Kathy: You had a question - the ARIA rules don't map to anything in WCAG right now, correct?

Wilco: Yeah. I think this relates to the test case examples. Yes.

<ChrisLoiselle> For Trusted Tester, https://section508coordinators.github.io/TrustedTester/auto.html is a good reference point and the "OR"

Kathy: Wondering if it could become one of the atomic rules in a composite rule, and ARIA 1.2.

Wilco: What we may want to have here is that the atomic rules need to have the composite rule listed as an additional requirement. I.e. the 2.1.1 example - the standard keyboard and non-standard keyboard rules, where we would have SC 2.1.1 as an additional requirement that could be failed.

Kathy: Ok.
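A hypothetical sketch of that direction, continuing the same frontmatter conventions: an atomic keyboard rule keeps its outcome mapping, but SC 2.1.1 is marked as an additional requirement that may, but does not have to, be reported as failed. The "secondary" key and its semantics below are made up for illustration; naming and details are exactly what the proposal still needs to settle.

  accessibility_requirements:
    wcag20:2.1.1: # Keyboard (Level A)
      # "secondary" is a made-up marker for this sketch: a failure of this
      # atomic rule may, but need not, be reported as a failure of SC 2.1.1.
      secondary: true
      failed: not satisfied
      passed: further testing needed
      inapplicable: further testing needed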

Wilco: In addition to this, is it too soon for me to start adopting the idea and creating some pull requests? Are we comfortable enough to start using the idea?

Helen: Let's see Kathy's text first, as it could get muddled having pull requests created. We need an agreement before we move forward.

Wilco: I personally would like to see movement first, as it's not too complicated - and I was hoping to do work in parallel, but it's ok if not.

Daniel: This could give us a chance to work on different perspectives, and see how it would be applied. But I understand concerns about working on them at the same time.

Wilco: Kathy, do you think you could get the proposal together next week? I can jump in to help if needed.

Report examples as untested when there is an open issue?

Wilco: One of our implementers, it does not matter who, has been reporting some of the test cases as "untested" - filtering out test cases they do not agree with, which gets them a partially consistent result rather than no result. There are issues open for those test cases, but the community group has a lot of issues to work through (and bandwidth is low) ... so one of the implementers is getting this partial consistency by saying things are 'un[CUT]

Wilco: Checking with this group if that is something we are ok with. We say that any untested test case status means it's partially consistent at best, not a fully consistent implementation. These are test cases they are deciding to report on.

Helen: If they are removing these test cases because there are issues with them, they should join a community group to push these things through. This is being done without telling us.

Kathy: is it being done on proposed rules or approved rules?

Wilco: It should only be on proposed rules - as we should not approve rules that have open questions on them. But it's a good question.

Tom: Maybe it's better to have 'untested' to show that you don't agree with the rule, showing that it was accounted for.

Wilco: There's no way to know or show the inconsistency - it doesn't show up on the report.

Tom: I would still like to see the inconsistencies.

Wilco: Technically we could, as we have the data. But in the past it's been because of potential bugs.

Wilco: At the very least we can make this a known thing - show if these are inconsistencies in the results.

Tom: I think we have a few that are untested for a reason. We've been inconsistent in the past, but we have reported it - even as a reminder that it's still an issue.

Kathy: Has it been suggested to the implementer to enter the inconsistent result, rather than untested result?

Wilco: They had asked us to provide an area to put an explanation as to why something was left untested, as that's difficult in EARL. But we could do that... their reason is that they don't want to show incorrect examples, and they are asking that we hide any incorrect or inconsistent examples from the rules. I think that goes far, and is difficult to do. But maybe we could.
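For context, implementation reports express each test-case result as an EARL assertion, usually serialized as JSON-LD; sketched here in the same YAML style purely for readability, with invented values. Property names loosely follow the EARL and Dublin Core vocabularies; the description field is the kind of place an explanation for an untested result could go, though how well current tooling supports that is the open question being raised.

  # Illustrative only; values below are hypothetical.
  "@type": Assertion
  subject: https://example.org/testcases/disputed-example.html   # hypothetical test case
  test: https://example.org/rules/xxxxxx                         # hypothetical rule reference
  result:
    "@type": TestResult
    outcome: earl:untested
    description: "Skipped pending an open issue in act-rules.github.io"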

Wilco: That was kind of their suggestion - that we hide those examples and don't consider them for implementation at all because they are contested. At which point, it would be fine to report them as untested. How do people feel about the idea of hiding or taking out test cases that are open/proposed?

Wilco: We could be explicit with a warning on that particular example.

Tom: Is there a way to add where the issue is, i.e. "which issue this test case is concerned with"? It seems more automatable to have them log this in their report.

Tom: They would at least have to link to an open issue

Wilco: Personally, I feel we should make sure issues get closed so we don't have to worry about this. Leave it for now and make sure we get our homework done... anything else could be more of a headache, and we need to close the issues anyway. So prioritize that rather than build something around this.

Tom: let's say we fix a test case, and then the implementer needs to flip the switch to test the case again. So - ok, as long as we're doing it on the hash.

Wilco: Potential resolution: We say "yes, this is acceptable and we want to prioritize the open issues"?

Helen: Yes to exploring the open issues, but no to an automatic hiding of the test cases. We want to keep people as honest as possible

Wilco: It can be temporary - it's ok to leave these as untested for now, as we know there's an issues backlog, but since we are working on those issues and cases, in future this should go away and should not be done again.

Helen: It's open source, but only to a point - I say yes, get them to agree to a policy moving forward, but no automation for hiding things.

<Wilco> proposed RESOLUTION: Accept current untested examples, going forward, do not skip examples with open issues

RESOLUTION: Accept current untested examples, going forward, do not skip examples with open issues

<dmontalvo> +1

HTML page title is descriptive (c4a8a4)

Wilco: We have three new surveys - they are open and due next week.

Wilco: The ones that were due last week and not discussed today - menu item had one reject; Chris, you had some comments. Do you want us to look through those?

Wilco: Talking about the implementations and the open issues you raised - we need to have a conversation next week and will get to it then.

Wilco: So lots of surveys to get through. We want to revisit Kathy's text. We'll talk next week.

Summary of resolutions

  1. Accept current untested examples, going forward, do not skip examples with open issues
Minutes manually created (not a transcript), formatted by scribe.perl version 192 (Tue Jun 28 16:55:30 2022 UTC).