W3C

– DRAFT –
ARIA and Assistive Technologies Community Group Weekly Teleconference

27 March 2024

Attendees

Present
Hadi, howard-e, IsaDC, James_Scholes, Joe_Humbert, jugglinmike, LolaOdelola, Matt_King, murray_moss, webirc, webirc54
Regrets
-
Chair
-
Scribe
jugglinmike

Meeting minutes

Review agenda and next meeting dates

Matt_King: Next meeting: Thursday April 4

Matt_King: Any requests for changes to the agenda?

Matt_King: Hearing none, we'll move forward with the agenda as planned

Matt_King: The first automation-specific meeting of the year will be on April 8. I will be setting up a W3C calendar event after this call

Upcoming app fix patch

Matt_King: There are two bugs that I've asked Carmen and howard-e to prioritize

howard-e: Fixes for those two are still on-track to be available on the "staging" server by tomorrow

Matt_King: The two bugs are referenced in the agenda, if anyone is interested

Resolve conflicts in toggle button results

Matt_King: Once again, I'm going to say "thank you" to jugglinmike and the other folks at Bocoup who worked on automation

Matt_King: I was able to use the bot to collect responses from NVDA for these tests, and it goes so fast

Matt_King: For NVDA, I found two tests where I got different responses from what IsaDC reported

IsaDC: At first, I thought this was a mistake in my results, but I consistently get that for multiple versions of NVDA

IsaDC: For me, NVDA says the word "blank" every time--in focus mode. I ran it three times to be sure

Matt_King: I didn't make sure I was running exactly the same version of NVDA as the bot

Matt_King: ...but I got the same responses as the bot

Matt_King: The NVDA bot used NVDA version 2023.3

IsaDC: I used NVDA version 2023.3.4

Matt_King: And I'm using NVDA version 2023.3 (followed by some kind of build number that we don't keep track of)

Matt_King: I'm curious: is it possible to update the version of NVDA?

jugglinmike: Today, it always uses 2023.3. As of the deploy scheduled for April 1, it will use the latest version of NVDA that we have packaged for this purpose (that is, built with PAC's plugin)

jugglinmike: We do not have AT version selection on the roadmap

Matt_King: I think AT version selection will be the next-highest priority feature following support for VoiceOver

Matt_King: In the meantime, it sounds like we may be observing a regression in NVDA

Radio test plan

Matt_King: For Radio, we already have IsaDC assigned, but we need another person to run the Radio Button Test Plan on Mac

murray_moss: I'm happy to volunteer

Matt_King: Awesome!

Matt_King: the "assign testers" menu has a lot of options, now--a lot of potential testers. It would be nice to have a type-ahead

James_Scholes: By the way, focus doesn't enter the menu when you open it

Matt_King: For NVDA, we have Alyssa assigned, and we also have the NVDA Bot assigned

Matt_King: For Test Admins, there is a button in the "status" column which is labeled "Manage NVDA Bot Run". That opens a dialog which allows you to re-assign the run to yourself

Matt_King: I just got a "502 Bad Gateway" error...

Matt_King: But I reloaded, and now, it's fine.

IsaDC: I've re-assigned the bot's run to myself

Matt_King: I've scheduled another NVDA Bot run; I'll assign that to Alyssa when it's complete

Hadi: I can run the test with JAWS

Matt_King: Great, thanks! I will un-assign Alyssa and assign you

Matt_King: Two weeks from today would be April 10. Is that a reasonable target, murray_moss and Hadi?

murray_moss: For sure

Hadi: Yes

Matt_King: And you, IsaDC?

IsaDC: Yes, sure

Matt_King: Okay! Then we have a plan for Radio

Matt_King: Right now, we have limited Bot usage to admins. I'm wondering if we want to make it possible for anybody to run with the bot, as we develop more confidence

Matt_King: That doesn't seem like it would be a big change to the user interface...

Hadi: Can you explain the new feature a bit more?

Matt_King: It runs the Test Plan and it fills in the AT Response text field for every command

Matt_King: You still need to run the test to double-check, but it removes the need to manually write the responses into the app

Matt_King: It's only available for NVDA at the moment
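
For readers unfamiliar with the feature, the following is an illustrative sketch of the kind of record the bot fills in for each command. The field names are hypothetical, not the app's actual schema.

  // Hypothetical shape of a bot-collected result for one command (TypeScript sketch).
  // The tester still reviews and confirms each response manually.
  interface BotCollectedResponse {
    atName: "NVDA";            // the bot only supports NVDA at the moment
    atVersion: string;         // e.g. "2023.3"
    command: string;           // the command the bot executed for this test
    atResponse: string;        // speech output captured and written into the AT Response field
    verifiedByTester: boolean; // false until a human double-checks it
  }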

James_Scholes: We can give you a tutorial if you like, Hadi

LolaOdelola: Bocoup wrote a blog post on this topic, for those interested https://bocoup.com/blog/aria-automation-launch

Matt_King: Is there any reason not to open this feature up to all testers, jugglinmike?

jugglinmike: I have some concerns about resource limitations in the GitHub Actions service, but to be honest, we may be risking overrunning those today

Matt_King: Let's make a note to investigate that further, then

Proposal to structure test queue primarily by test plan instead of AT/Browser combination

Matt_King: Right now, you can use any version of any screen reader you want when you're running the test in "Draft review"

Matt_King: As we saw earlier in this meeting, that sometimes causes issues, but that's actually pretty rare (and it's easy to resolve, besides)

Matt_King: But when a test plan reaches the "Recommended" phase, we'll run it for every new release of a screen reader from that point forward so that we can track interoperability over time

Matt_King: That means that when we're generating those reports, we need to be able to say that each one is for a specific version of, e.g., JAWS

Matt_King: So we'll need to be able to run with a specific version of a screen reader

Matt_King: But right now, there's nothing that tells Testers when/if they need to use a specific version of a screen reader (or a specific range, e.g. "NVDA 2023.3 or later")

Matt_King: That's why we'll be updating the design of the test queue to support this soon
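
As a sketch of the sort of AT version requirement the updated Test Queue design might need to record; the type and field names below are hypothetical, not the actual design.

  // Hypothetical representation of an AT version requirement for a report
  // on a "Recommended" test plan (TypeScript sketch); not the app's actual schema.
  type AtVersionRequirement =
    | { kind: "exact"; at: string; version: string }     // a single specific release
    | { kind: "minimum"; at: string; version: string };  // e.g. "NVDA 2023.3 or later"

  // The example range mentioned above, expressed in this form:
  const nvdaRange: AtVersionRequirement = { kind: "minimum", at: "NVDA", version: "2023.3" };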

Matt_King: While working on the design for this, I realized that the design of the Test Queue is pretty long in the tooth and just doesn't suit the way that we work

Matt_King: In this agenda item, I'd like to propose reorganizing the Test Queue by Test Plan, listing the AT/Browser combos within each plan

Matt_King: I think we're all aligned on the basics, and I think that Isaac from Bocoup can go forward on the proposal in the issue

Matt_King: The issue is here: w3c/aria-at-app#791

howard-e: No questions here

Matt_King: Well, if anyone has questions, please feel free to ask in that issue

Matt_King: This is in the design stage right now, but because we need it in place by the time we have four "Recommended" test plans in April, it's going to become a priority pretty quickly

Matt_King: Hearing no questions, we'll move on!

Define assertion verdicts

github: w3c/aria-at#1050

Matt_King: Vispero believes that it sounds odd to call the result of a "MAY" assertion a "pass" or a "fail"

Matt_King: I think I agree. We're interested in defining some terminology for how we describe when ATs do or don't "check the box" when it comes to "MAY" assertions

Matt_King: I've suggested "supported" and "not supported" instead of "passed" and "failed"

Matt_King: When it comes to "SHOULD" assertions, it seems like everyone is satisfied with the terms "pass" and "fail"
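
The proposed terminology can be summarized as a mapping from assertion priority to verdict labels; the mapping itself comes from the discussion above, while the code form (and the assumption that "MUST" keeps "passed"/"failed") is only illustrative.

  // Proposed verdict labels by assertion priority (TypeScript sketch).
  // MAY uses "supported"/"not supported"; SHOULD (and, presumably, MUST) keep "passed"/"failed".
  type AssertionPriority = "MUST" | "SHOULD" | "MAY";

  function verdictLabels(priority: AssertionPriority): [string, string] {
    return priority === "MAY" ? ["supported", "not supported"] : ["passed", "failed"];
  }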

Hadi: That sounds good to me

James_Scholes: When it comes to "SHOULD", I find myself flip-flopping between different viewpoints

Joe_Humbert: That sounds good to me

murray_moss: Me, too

LolaOdelola: My question is more about the use of "MAY"/"SHOULD"/"MUST" in general, and I know that conversation has already happened, so I don't want to re-tread those points

Joe_Humbert: I'd like to get feedback from developers and see if the framing helps them understand how well the different patterns are supported by the different ATs

LolaOdelola: Yeah! I also wonder if there's a bit of a conflict between what developers want and what AT vendors want

LolaOdelola: I also wonder how many developers would even be familiar with these more standards-facing terms like "SHOULD"

Matt_King: This is really for AT developers. I think web authors are unlikely to drill down into this level of granularity; they will probably be reviewing the test plan reports at the top level

Hadi: Regarding whether this is geared toward AT developers or web developers: while I occasionally see web developers who are interested in this level of granularity, I think that overall, they shouldn't have to be concerned with specific features

Hadi: I think they should just worry about implementing the APG and considering that as sufficient

Joe_Humbert: I agree that would be ideal, but I think that practically speaking, web developers have to be able to prove certain things in certain situations (e.g. when it comes to accessibility compliance)

Matt_King: We are planning to write a detailed explanation about how to interpret the reports. This will certainly be covered there, too

Joe_Humbert: That eases my concerns significantly

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).


All speakers: Hadi, howard-e, IsaDC, James_Scholes, Joe_Humbert, jugglinmike, LolaOdelola, Matt_King, murray_moss

Active on IRC: Joe_Humbert, jugglinmike, murray_moss, webirc54