Meeting minutes
Review agenda and next meeting dates
Matt_King: Next AT Driver Subgroup meeting: Monday October 20
Matt_King: Next CG meeting: Wednesday October 22
Matt_King: Requests for changes to agenda?
carmen: I'd like to discuss issue 1575 in the ARIA-AT App
Matt_King: Got it, we'll do our best to get to that, too
Current status
Matt_King: Apple has approved at least one of the "link" test plans, "link example 1". I went ahead and moved it to "recommended"
Matt_King: I think they expect to approve a couple more test plans in short order
Matt_King: That's a big deal!
Matt_King: It's the simplest test plan, but having one is better than none
Matt_King: We also advanced the second of two "switch" test plans to the "candidate" phase
Matt_King: So we're really close to having all the "switch" test plans done
carmen: Should we promote this milestone publicly?
Matt_King: I don't want to make too big of a deal about it right now. Maybe at TPAC
Matt_King: Coming next in "draft review" is the "two-state checkbox"
IsaDC: I implemented the latest changes you requested
Matt_King: Okay, that's probably about ready to merge. That should be ready for the test queue next week
Matt_King: Then, after that is the "quantity spin button" test plan
IsaDC: I've been working on that this past week
Question about Disclosure Navigation Menu Example
Matt_King: Did we raise an issue for this, and do we have an understanding of the root cause?
<ChrisCuellar> github: w3c/
ChrisCuellar: Yes, we did create an issue for this
ChrisCuellar: I think this is a test plan report that got hidden by our automated run filtering, initially
Matt_King: So maybe it is a valid run
ChrisCuellar: I think so, yes
ChrisCuellar: There were differences in the JAWS versions compared with the report that's already available
ChrisCuellar: There was a JAWS/Chrome completed in March of 2025
ChrisCuellar: And there's another that is incomplete. I asked howard-e to check the internal timestamp on when it was created, and he reported that it was created in September of this year
Matt_King: I vaguely remember saying that we would re-run this when the new version came out
ChrisCuellar: I think there was another bot run that JoeHumbert took over
Matt_King: Okay, that means we actually have work to do here on this one
Matt_King: We'll take this up.
Matt_King: As I recall, JoeHumbert is out for a week or two
Matt_King: It looks like there is a second JAWS Bot run that we could assign to somebody
ChrisCuellar: JoeHumbert took one but just needs to assign verdicts
Matt_King: You're saying that was done in September with the September release of JAWS
Matt_King: Do we have another JAWS tester who is willing to take on this second run?
IsaDC: It would have to be me, but I already have a lot of other test plans assigned to me
Matt_King: We need more JAWS testers in this group
carmen: I'll close the issue, then
Issue 1576: Proposal to open test runner in a new browser tab
github: w3c/
Matt_King: I raised this issue as a result of some of my frustration
Matt_King: But I wanted to check with people before deciding to do anything
Matt_King: When I open the test plan in the runner and return to the queue, the state of the queue is completely refreshed: all the disclosures are collapsed, and the filters are reset
Matt_King: It would be very complicated to have the test queue remember the state that it was in and reload that state whenever you return
Matt_King: I was thinking that a simpler solution is that whenever you do "start testing" or "continue testing" or "run as", that those open the runner in a new browser tab, and that completing the testing just closes the new tab
james: Let's not open a new window
james: If it was a link, people could opt to do that themselves
james: I feel like a list of links for "run as" and a list of links for "start testing" would be fine
ChrisCuellar: We can look into this. I like the suggestion of opening in a new tab
ChrisCuellar: This is a complaint of mine, as well: I would like to add more relative links in the test queue so that it's easier to navigate to specific places
ChrisCuellar: I think we can also encode the state of the test queue in the URL. You're right that it would be a little more work, but I think the solution can be multi-faceted
Matt_King: My current state of mind is: "What is the absolute simplest solution, first?" You know, the 80/20 rule
Matt_King: And you're right that the test queue performance is relevant because it does take a few seconds to load. Closing a browser tab is an instantaneous way to return to it
IsaDC: I think we could also use some breadcrumbs within the "conflicts" page
Matt_King: I always open the "conflicts" page in a new tab because it is a link
ChrisCuellar: In the issue, let's list the common scenarios that are most frustrating
james: The runner already has links; they just have role=menuitem
Matt_King: That means changing the UI element from a menu to a disclosure, and that takes away the typeahead...
Matt_King: Maybe ignore the "run as..." one for now and just focus on the "start testing" button
ChrisCuellar: This has all been helpful
Matt_King: We could consider changing "start testing" and "continue testing" from buttons to links
Running test plan for Tabs with Automatic Activation
Matt_King: This is just a quick check-in
Matt_King: Have we heard from Hadi?
IsaDC: No, but I remember he said that we could change his results if necessary
Matt_King: Okay, let's prioritize your other work above this. Maybe I'll do it in the meantime
Running test plan for Tabs with Manual Activation
Matt_King: On JAWS, that's IsaDC
Matt_King: Thank you very much for doing the work on this, dean
Matt_King: JoeHumbert is almost done; there's just one incomplete test at the very end
IsaDC: This is the one that we fixed
Matt_King: but we have 5 conflicting results
dean: They're pretty simple
dean: JoeHumbert had a side-effect that I didn't have
dean: I think the others are just checking the wrong box, judging from the output he reported
dean: I started another bot run and checked it this morning; I should be able to do those tomorrow
Matt_King: In test 4, you both had the same output, and in this one, JoeHumbert says it did not switch from "browse mode" to "focus mode", but dean says that it did switch
dean: But JoeHumbert isn't here today, so we can't resolve that right now
Matt_King: The next conflict is in test 13. It looks like the same output to me.
dean: I think I was running 25.2, but I would have to double-check
Matt_King: This is another one where the negative side-effects are different
Matt_King: You both had all the same assertion verdicts, it appears
Matt_King: I don't remember if there was a "tab panel boundary" assertion, but if it failed, then it seems like this negative side-effect would actually be equivalent...
Matt_King: I guess we need JoeHumbert present to clarify
Matt_King: Ah, this is for "down arrow"
Matt_King: We'll wait until JoeHumbert returns to address this. But it looks like we're really close except for these conflicts!
Matt_King: As for VoiceOver...
dean: JoeHumbert is done with that, and I should be able to complete the work on my part tomorrow
Matt_King: Fantastic, that's great! Thank you, dean
Running tests for Switch Example Using HTML Checkbox Input
Matt_King: This has JAWS, NVDA, and VoiceOver all available, including bot runs ready to be re-assigned
Matt_King: But everyone here is busy, so maybe we will need to wait until next week
Matt_King: It is a relatively short test plan
Updating reports to latest screen reader versions
Matt_King: Right now, we have five VoiceOver automated runs
Matt_King: If you go to the filter on the test queue labeled "Automated updates", you will find five test plans listed
Matt_King: I completed one of these last week, but I can't remember which
Matt_King: I wanted to get a sense of the workload. The primary difference I was seeing was in situations where the VoiceOver bot had collected the "hint text" information in the output but the human who previously ran the test plan omitted that information
Matt_King: I see that Elizabeth has raised an issue where they were getting output that differed from that of the bot
Elizabeth: I noticed it for one of the tests I was working on. Perhaps it was "down arrow with quick nav off". The response the bot recorded did not sound right, so I double-checked and found I was getting a different response
Matt_King: I wonder if this is a bot issue
ChrisCuellar: It could be a bot issue!
Matt_King: This is kind of an interesting case in the GitHub issue functionality. If there's a conflict, then you can raise an issue, and it automatically captures the output. I guess in the test runner, you do have that information...
Matt_King: I guess if you wanted to see the difference, ChrisCuellar, you would have to use the "Run as" feature
Matt_King: This is test 4
Matt_King: This is strange output. It seems as though the focus was in the wrong place
Matt_King: It looks like Elizabeth submitted the results. Elizabeth, you answered the assertions based on the VoiceOver output that was there, is that right?
Elizabeth: That's right. I was actually not sure what to do.
Matt_King: Thank you for raising this question
Matt_King: In this particular case--this is where the bot really produced an incorrect result. We actually do want to change the response and put in what VoiceOver is actually saying
Matt_King: It's really good that you raised this issue because we need to know that the bot is giving the wrong response
Matt_King: I think I might have seen something like this before, where the bot recorded the response to an arrow key as if Quick Nav was on when it should have been off
ChrisCuellar: so there may have been an issue with the "quick nav" toggling
Matt_King: Right
Matt_King: I wonder if "quick nav" was on in the prior test--in test 3
Matt_King: So yes, it could just be the setting of "quick nav" on and off which is causing the problem here
Matt_King: Elizabeth, could you please change the value of the "output" field and put in the response that you are observing from VoiceOver?
Elizabeth: Sure. I just wasn't sure based on some prior discussions
Matt_King: If it looks like the bot response is correct, we don't want to change it at all; we want the response to be formatted in the way that the bot would record it (just in case there's a material difference between how the bot would report it and how a human would record it). But if it looks incorrect, as it does in this case, then we should change it
Matt_King: We want to make sure that, as much as possible, we are recording responses in the way the bot would record them. In this case, we should just record the correct response as best as we can
Matt_King: We're out of time for this meeting, so we will include carmen's issue in the agenda for next week
Matt_King: Thank you everybody!