Meeting minutes
ARIA and Assistive Technologies Community Group Weekly Teleconference
Review agenda and next meeting dates
https://
Matt_King: Requests for changes to agenda?
Matt_King: We'll switch the order of the topics to accommodate Joe, who will be arriving for the second half of this meeting
Matt_King: Any other topics to add?
Matt_King: Hearing none, we'll stick with the ones on the agenda
Matt_King: Next Community Group Meeting: Wednesday May 7
Matt_King: Next AT Driver Subgroup meeting: Monday May 12
Current status
Matt_King: We made some progress over the week
Matt_King: We got the conflicts resolved for radio group with roving tab index--thanks to IsaDC
Matt_King: That's now the 15th plan in candidate review
Matt_King: We have some updates on some of the other things that are on hold
Matt_King: The APG Task Force has a pending change for the vertical temperature slider
Matt_King: We know what the changes will need to be--we took the value out--so we could probably change the test plan partially (we only have to wait to update the references)
Matt_King: There's a change in the example, and that will change the number of arrow key presses required for all screen readers (taking it from three to two, consistently for all of them)
Matt_King: Actually, the only thing we could really update ahead of time is the CSVs, so it's probably not worth it. Let's wait for it to be published.
Matt_King: I'm hoping it will be a pretty quick change
Matt_King: The disclosure test plan, at least when it comes to JAWS, the changes won't be published until the July release of JAWS. There are still more fixes to do. We may be moving anything with a same-page link further out into the schedule. I haven't adjusted the schedule accordingly, yet
Matt_King: There's a pull request for the next "disclosure" test plan. It's waiting on my review, so it may be ready for the test queue next week
Issue 1214 - AT version recorded for reports where bots collected some of the responses
github: w3c/
james: We started talking about this at the very end of our last meeting
james: We briefly discussed the idea that we care what the bot was running with and we also care about what version the human was using
james: I think where we landed is that ideally both pieces of information would be stored, but that maybe the bot's version wouldn't be included in reports
Matt_King: I think there are two decisions to make here. The first: do we want to record both versions? The second: what should we show in the run history and the reports (based on decision 1)?
Matt_King: It seems like it might be simpler to say that one takes precedence over the other and that the human has the final say
Matt_King: Are there any downsides to having the human's version overwrite the bot's version?
Matt_King: It feels problematic to allow the bot's version to overwrite the human's version
james: I agree
Matt_King: So it's just a matter of: do we record both, or do we give precedence in some way?
<Carmen> jugglingmike: Overwriting is the more practical idea. Having a log of the events is something we have been talking about for the last few months.
Matt_King: If you visit the "reports" page and review the history, you can find the name of the tester and the date and time they completed their run
Matt_King: That wouldn't need to change if the human tester's version takes precedence
Matt_King: The only UI that would need to change is when the human tester opens a test plan run where the bot has done some work: we would want to make sure we're recording the version that the human is using. I suppose we can use the same warning prompt that we've already built--asking if the user wants to change it
Matt_King: It would be like the prompt that we get right now when an admin edits a report
Matt_King: It would also be displayed whenever a human opens a test plan run that was touched most recently by a bot
Carmen: makes sense to me!
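To make the precedence rule under discussion concrete, here is a minimal TypeScript sketch; the record shape and field names (botAtVersion, humanAtVersion) are illustrative assumptions, not the ARIA-AT App's actual schema:

```typescript
// Hypothetical shape for a test plan run's AT-version record.
// Field names are illustrative, not the ARIA-AT App's actual schema.
interface AtVersionRecord {
  botAtVersion: string | null;   // version the bot ran with, if any
  humanAtVersion: string | null; // version the human tester confirmed
}

// Both versions are kept (for the event log discussed above), but the
// human's version always takes precedence in run history and reports.
function reportedAtVersion(record: AtVersionRecord): string | null {
  return record.humanAtVersion ?? record.botAtVersion;
}
```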
Matt_King: This issue is in the ARIA-AT repository. You could move it to the ARIA-AT App repository so that you don't have to make a brand new issue
Issue 1240 - Reporting feedback on bot output and performance
github: w3c/
Matt_King: IsaDC raised this today
Matt_King: Joe was raising issues for VoiceOver bot problems
Matt_King: I changed the title on those three issues, from "Feedback:" to "VoiceOver Bot Feedback:"
Matt_King: I've done that a few times in the past as well; I don't know if we want to continue doing that moving forward
IsaDC: Those issues have an impact on our data
IsaDC: We haven't documented how to deal with them
Matt_King: When we get feedback on a test plan, the issues get linked to the test plan. And we have to close all the issues on a test plan in order to advance it in some circumstances
Matt_King: So this workflow creates issues that are tied to a test plan when they actually should be associated with a bot
Matt_King: I assume it's really helpful to have information about the specific test plan version and test when people are giving feedback on the bot
Carmen: It is. We use that information to replicate the issues
Carmen: Perhaps I can do something to move them?
Matt_King: You could edit the description of the issue and remove the HTML comments (which are hidden when the issue is rendered on the page)
Matt_King: If those get deleted, I don't know if it would delete the linking in real time, or if the link information is static
Carmen: I don't know either, but I can ask howard-e and get back to you
Matt_King: Sounds good. howard-e will know
Matt_King: This solution is a little hacky, but it's probably better than writing additional code
Carmen: Hopefully we will soon reach a world where consistency issues are no more!
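The linking Matt_King describes relies on metadata that GitHub hides inside HTML comments in the issue description. A minimal sketch of the edit being proposed, assuming the linking metadata lives entirely inside such comments (whether the app unlinks in real time is the open question for howard-e):

```typescript
// Remove hidden HTML comments (e.g. <!-- test-plan metadata -->) from an
// issue body. Assumes the linking metadata lives only inside comments;
// the comments are invisible when GitHub renders the issue, so deleting
// them does not change what readers of the issue see.
function stripHtmlComments(issueBody: string): string {
  return issueBody.replace(/<!--[\s\S]*?-->/g, '').trim();
}
```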
IsaDC: The other part of the concern was that, for example when we have an issue raised against the test plan itself (e.g. to fix the command), we respond to say that we're working on it. How does this work with the bots?
Carmen: I add it to the project, but I don't know if that would be visible to you. Would you like me to add a comment?
IsaDC: That would be lovely
Testing of Rating Radio Group
Matt_King: We have 8 NVDA conflicts and 4 VoiceOver conflicts
mmoss: I just finished the VoiceOver test a few minutes ago
mmoss: I ran the test a long time ago, and so I had some old results. I updated those, and there are no longer any conflicts
Matt_King: I'm going to mark it as "Final", then!
Matt_King: Now, we're just down to NVDA
Matt_King: Maybe "insert+tab" should be the only command
Matt_King: I'm kind of wondering how dean got the result that he reported
Joe: I had to change the results from the bot
Matt_King: I believe that what you did manually was correct, and it looks like the bot's command didn't match
Matt_King: The bot was behaving as though it were in focus mode
Joe_Humbert: No, that's the label for the group. If it was in the wrong mode, it would say something like "one star"
Matt_King: Oh, you're right! The bot was just in the wrong place...
Matt_King: I think there are actually two problems, here
Matt_King: There is a bot problem, and there is a problem with the test plan (I think we should remove the "insert+up arrow" from this test)
Matt_King: If we remove that command, there will be no more conflicts
Matt_King: "Insert+up arrow" isn't really a "read all information" in all contexts
Joe: That makes sense to me
Matt_King: I know we left it out in other test plans for this particular test--the "request information" test
Matt_King: So, test 14--you're right, it is here. "Insert + up arrow". My guess is that in that test plan, the "pizza crust options" are not all together and NVDA is separating them
Matt_King: Was it "disclosure navigation menu"? There was a whole bunch of stuff all on one line in that
Matt_King: I have definitely done this somewhere
Matt_King: It's strange that NVDA is not reading all of the disclosure buttons on the same line in the "disclosure navigation menu" (at least, when all of the buttons are collapsed)
Matt_King: There is some other plan where we removed "insert + up arrow". I remember performing the delete myself and discussing it with james and IsaDC
IsaDC: Just to confirm, I got the same results as Joe for "insert + up arrow"
IsaDC: I agree with removing this command
Matt_King: NVDA also offers "Insert + tab" to do this
Matt_King: I don't think that this is a failure of NVDA to do what it says it does for "insert+up arrow"
Matt_King: Okay, we're aligned. We'll fix this by removing "insert + up arrow". That will remove these conflicts, and that will complete this test plan
Re-run of JAWS for color viewer slider
Matt_King: We were doing only JAWS for this one. Two testers, Joe and Hadi, are 100% complete. There are four conflicts
Matt_King: Joe is getting the min and max output, and Hadi is not
Matt_King: Perhaps the JAWS version is different. We don't show version information on the "conflicts" page (we may want to change that--it has sometimes been an issue)
Matt_King: When we show these conflicts, it might be good for us--where we show the output--to also show the browser version and AT version
Matt_King: We could add AT version and browser version after the output column
IsaDC: Yes!
IsaDC: And on that note, there is no way for us to return to the test queue (other than pressing "back"). There's no button to go back to the queue
IsaDC: There are no breadcrumbs here (unlike in the reports)
Matt_King: It would be good to add some breadcrumbs there
Matt_King: I've also wanted the particular AT to be part of the title of the page. In this case, that would be "Conflicts in JAWS results for {name of the test plan}"
Carmen: I can write up an issue
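As a rough illustration of the two suggestions--version columns after the output, and an AT-specific page title--here is a TypeScript sketch; all names and example values are hypothetical, not the app's actual model:

```typescript
// Hypothetical row model for the conflicts view; adding atVersion and
// browserVersion after the output column would surface discrepancies
// like the JAWS min/max difference directly.
interface ConflictRow {
  tester: string;
  output: string;         // what the screen reader spoke
  atVersion: string;      // e.g. "JAWS 2025.2504" -- illustrative
  browserVersion: string; // e.g. "Chrome 124" -- illustrative
}

// The page title pattern Matt_King suggests, e.g.
// "Conflicts in JAWS results for Color Viewer Slider".
function conflictsPageTitle(atName: string, testPlanName: string): string {
  return `Conflicts in ${atName} results for ${testPlanName}`;
}
```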
Matt_King: Let's confirm the testers' versions of JAWS
Hadi: I was using version 2025.2504.89.40 etc (the latest version published in April)
Joe: I am now running the latest, today. I don't know what I was running when I was running this test plan. I would have to double-check
Joe: I may have been running a slightly older version
Matt_King: There's a possibility that they may have regressed support for min/max in the April release
Joe: I can re-run to double check. It shouldn't take very long--it's just a couple of tests, and it's just that keystroke
Matt_King: Great
Matt_King: It looks like the test queue is soon going to be empty. However, if there is a plan in the wings here, it could get merged and updated today if I get on top of my game sufficiently
Matt_King: That would be a disclosure test plan
Matt_King: There might be some value in a feature to mark a test plan as "on hold" in the test queue in order to prevent people from working on it
Matt_King: Like an admin function where we mark it as "on hold", and it disables... something. Perhaps the "continue testing" button, though I don't know if we want to completely block the ability to access the test. Perhaps just a warning that lets viewers know that the test plan is on hold...
Carmen: I can write an issue for that and present it to the team
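A minimal sketch of the warn-rather-than-block behavior being floated, in TypeScript; the flag and function names are hypothetical, not part of the app today:

```typescript
// Hypothetical admin flag on a test plan run in the queue.
interface QueuedTestPlanRun {
  title: string;
  onHold: boolean; // set by an admin to discourage new testing
}

// Warn rather than hard-block, as discussed: testers can still open the
// run, but they see a notice that the plan is on hold.
function continueTestingNotice(run: QueuedTestPlanRun): string | null {
  return run.onHold
    ? `"${run.title}" is on hold; check with an admin before testing.`
    : null;
}
```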
Matt_King: Anyway, I think we're still good to go forward with this report
james: I'm concerned that people won't know what it's conveying
james: e.g. "one" versus "won"
Matt_King: Okay, well, we will have this new "disclosure" test plan ready very soon. Is anyone available to take up more testing?
dean: I will do NVDA or VoiceOver; whatever you need
Joe: You can assign whatever to me
mmoss: I also have availability in the coming week
Hadi: I'm available to do JAWS testing on the disclosure plan when you have it ready
Hadi: I may not be able to join the Wednesday meeting, but if you notify me via e-mail, that should be fine
Carmen: We have an issue with the harness right now, and the bot is not working. I will send a message to the team when we know that it's fixed