Meeting minutes
Review agenda and next meeting dates
https://
Matt_King: Any requests for changes to the agenda?
Matt_King: Hearing none, we'll stick with the agenda as scheduled
Matt_King: Next CG meeting: Wednesday October 8
Matt_King: Next AT Driver Subgroup meeting: Monday October 13
Current status
Matt_King: Since we last met, we've moved two test plans into candidate review
Matt_King: That's pretty awesome! We're up to 19 plans in candidate review and 4 plans in draft review
IsaDC: I'm working on a test plan for checkbox. It should be ready for review by Monday at the latest
Matt_King: By Wednesday, we'll have two more new test plans in the test queue
Matt_King: Hopefully, we'll make some progress today and get some test plans out of the test queue
App updates to make bot more available
ChrisCuellar: Just this morning, we released some big updates to how you manage bots
ChrisCuellar: Now, testers can start bot runs
ChrisCuellar: If you look at any test plan run, you should see a "Start bot run" button in the "actions" column
ChrisCuellar: Right now, each test plan run can only have one bot run associated with it. That's more of a UI limitation
ChrisCuellar: The idea is that testers will generally start off with a bot run
ChrisCuellar: We want to improve the workflow based on Matt_King's feedback. But for now, when you start a bot run, you will see the typical status information on the run. Also, the "Start bot run" button will change to a "Manage bot run" button. When the bot is done, you can use that new button to re-assign the test plan run to yourself
Matt_King: Right now, there is still an "assign yourself" button. If you use that, then your name will be on a test plan run, and you won't be able to assign a bot run to yourself
Matt_King: For now, in this current iteration, the "assign test plan to yourself" button isn't particularly useful unless you don't want to use the bot
dean: I tried to use it this morning, and I am now signed up twice for it, somehow. I don't understand what's going on with that
IsaDC: That shouldn't be possible. Is that a bug?
ChrisCuellar: Yeah, I'm seeing that now for VoiceOver. I think this is a bug
Matt_King: Is the "unassign yourself" button there?
dean: Yes
dean: I'll pause my work there for now
dean: I just un-assigned myself from one of them, and they're now both gone
ChrisCuellar: You should be able to use the workflow Matt_King described now
Joe_Humbert: The state looks corrupted now, though.
dean: Right now, I'm not assigned, but I do have a "manage VoiceOver bot run" button
Matt_King: When that run is done, you will find an option to assign yourself in the modal dialog which is opened by that button
dean: Okay, I will assign myself when it's done
Running Switch test plan
IsaDC: I e-mailed Elizabeth. She hasn't been well recently, but she is going to run it as soon as she can
Matt_King: Great!
Matt_King: The regular "switch" test plan (the one to which Elizabeth is assigned) has no open issues
Issue 1298: Feedback on test of Switch using HTML button
github: w3c/
Matt_King: I think we can close this
Matt_King: dean raised this issue
Matt_King: He is reporting that there is no VoiceOver output in a specific circumstance
Matt_King: Right, I think we were aligned here. There was no output, and we reported that in the appropriate test plan
dean: That's right
Matt_King: Okay, then I will close this
Running test plan for Tabs with Automatic Activation
IsaDC: Hadi needs to update his results on this one
Matt_King: right. We talked about this last time, during the meeting of September 18
Matt_King: I'm pretty sure we were aligned on Hadi making his report consistent with Louis's
Matt_King: But Hadi isn't here today, so I guess we can move on
Running test plan for Tabs with Manual Activation
Matt_King: We're just waiting on IsaDC for JAWS
IsaDC: We need one more tester for NVDA
Matt_King: Since dean signed himself up for VoiceOver, we're covered there
Joe_Humbert: I raised issue 1300 before I finished testing, so maybe it's not a bug
Joe_Humbert: I can't view my own results. It isn't allowing me to see my results, but I can see results from others
Joe_Humbert: I was at the tab panel, then it took me to the tab group, but the assertion expects that you arrive at the final tab
Joe_Humbert: So everything would pass if you did the key press one more time
dean: The bot finished, and I just re-assigned that test plan run to myself
dean: I went to start testing, and it gave me some output. It looks like it works
Joe_Humbert: But you didn't receive any feedback that you successfully assigned it to yourself
ChrisCuellar: That missing feedback may be a bug. There may have also been a lag
As for issue 1300
Matt_King: For JAWS we have three up arrows, and for NVDA, there is only one. Why is there only one?
Matt_King: I'm wondering about Test 4
Matt_King: I think that should also be three...
Joe_Humbert: Test 4 wouldn't matter because that's the one where it goes into tab panels
Matt_King: There's no way to get VoiceOver to do the right thing. I'm just saying that we should have the correct number regardless of VoiceOver's behavior
Joe_Humbert: How do we know what the correct number is if VoiceOver isn't behaving correctly?
Matt_King: Well, if you load that example, then by default, the first tab is selected
Matt_King: If you just tab to the link in the first tab and then use "ctrl+opt+left arrow", then the bug isn't present
Matt_King: But as soon as you activate any tab, then the bug appears
Matt_King: And it took three presses in the APG example
Joe_Humbert: So that's two tests that I have to re-run and two tests that dean may have to change
Matt_King: Yes, to record the appropriate (but incorrect) output
Joe_Humbert: IsaDC is going to change something, and that will remove some responses from your test plan run
Matt_King: You can skip test 2 and test 4 for now, and come back and finish them once IsaDC has made the changes
Matt_King: Thank you for catching that, Joe_Humbert! Given how VoiceOver is behaving, it's kind of amazing that you recognized the problem
Matt_King: This is exactly why we want the test plans to pass through "draft review" because IsaDC, James, and I can't catch everything on our own. This is the process working as intended
Updating reports to latest screen reader versions
Matt_King: So we now have this awesome capability: an "automated reports" tab in the test queue
Matt_King: We can instruct the bot to update any report that hasn't been run with the latest version of a given AT
Matt_King: There are 18 reports that we've completed with the September release of JAWS. I was noticing here that on the "automated reports update" tab, it seems like the only bot that is 100% up-to-date with the latest release version is JAWS
Matt_King: We might start with that. I don't want to run them all because that's going to create a lot of manual work all at once
Matt_King: I know there is a ".3" release of NVDA out. Do we have that on the list of bot updates to do?
Matt_King: There's also a VoiceOver update in macOS 15.6
ChrisCuellar: I think we just need to carve out some time for those. We can try to get that in as soon as possible
Matt_King: I would say VoiceOver is a higher priority than NVDA right now because that one is further behind and because I'm going to have more conversations with James Craig
ChrisCuellar: VoiceOver is now on 15.6.1, I think.
Matt_King: But it lists "version 15" on the UI
ChrisCuellar: This is a UI problem. We want to more clearly label what version is actually available. This is part of the "minimal versus exact" situation which has been confusing everyone for a long time
ChrisCuellar: It's a weird quirk of the system that causes it to be rendered in that way
ChrisCuellar: It's a problem that's specific to VoiceOver. We are running 15.6.1
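(To make the "minimal versus exact" confusion concrete: a minimum version such as "15" and an exact version such as "15.6.1" can both be rendered as "version 15" in the UI even though they compare differently. The sketch below shows ordinary dotted-version comparison; the function name and logic are illustrative assumptions, not code from the ARIA-AT app.)

```typescript
// Illustrative dotted-version comparison (an assumption, not ARIA-AT code).
// Returns a negative number if a < b, positive if a > b, zero if equal.
function compareVersions(a: string, b: string): number {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  const len = Math.max(pa.length, pb.length);
  for (let i = 0; i < len; i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0); // missing segments count as 0
    if (diff !== 0) return diff;
  }
  return 0;
}

// A minimum version "15" behaves like "15.0.0", so it sorts below the
// exact version "15.6.1" even though the UI may label both "version 15".
console.log(compareVersions('15', '15.6.1') < 0); // true
```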
Joe_Humbert: Every Apple OS is jumping to version 26
Matt_King: Okay. That's... interesting
Matt_King: So in that case, maybe we should run the VoiceOver bot
ChrisCuellar: That will use 15.6.1
Matt_King: Okay. That's kind of a nicer place to start because there are only seven reports which haven't been run. That seems like such a small number
IsaDC: Is it taking into account version 1 of the test plans?
ChrisCuellar: Should it be all of the tests?
Matt_King: Right now, the tab shows the reports that are out-of-date. It's only showing seven reports that are out-of-date and not on 15.6.1
ChrisCuellar: What should we be seeing?
Matt_King: Maybe we do have a lot of reports that have been run with 15.6.1 already
ChrisCuellar: Right now, we have something higher than 15 in the system. The reports it wants to generate new reports for are all on versions lower than 15
ChrisCuellar: Do you want to re-run the ones that were run on the lower versions of 15?
Matt_King: Yes
Matt_King: But if we run these seven, it will generate seven new reports. That's not a huge volume, so it will be a good "soft introduction" for the team
Matt_King: And when the version thing is fixed, it will probably show another 10 or 11 that are below 15.6.1, but above or at 15.0
ChrisCuellar: We might have to do some kind of small fix to make that comparison possible. I'll need to check with howard-e
ChrisCuellar: I think it might require a little bit of extra back-end work to achieve that. Just let us know how to prioritize it.
Matt_King: If we won't ever have that problem with JAWS...
ChrisCuellar: Yeah, I think this problem is unique to VoiceOver, but I don't fully understand why. That's why I want to speak with howard-e
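(A rough sketch of the report-update check under discussion, reusing the compareVersions sketch above; the Report shape and function name are assumptions for illustration, not the actual back-end schema.)

```typescript
// Hypothetical report shape; the real schema may differ.
interface Report {
  testPlan: string;
  atVersion: string; // AT version the report was last generated with
}

// Flag reports whose recorded version is older than the latest release.
function outOfDateReports(reports: Report[], latest: string): Report[] {
  return reports.filter(r => compareVersions(r.atVersion, latest) < 0);
}

// With latest = '15.6.1', reports recorded as '15.0' would be flagged in
// addition to those below 15.0, matching the 10 or 11 extra reports
// Matt_King expects to appear once the comparison is fixed.
```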
Matt_King: Next week, as part of the agenda, we'll be walking people through what we want them to do in this workflow. It's similar to "draft review", but with a little less work. Still, understanding the context is important here
CSUN and TPAC planning
Matt_King: There will be at least one CSUN proposal, titled "Who cares if your screen reader says the right thing?"
ChrisCuellar: Bocoup also submitted a proposal
Matt_King: Great. We'll discuss conferences more next time