Meeting minutes
Review agenda and next meeting dates
Matt_King: Requests for changes to agenda?
Matt_King: Hearing none, we'll keep the agenda as planned
Matt_King: No meeting Thursday June 26
Matt_King: Next meeting: Wednesday July 2
Matt_King: Next AT Driver Subgroup meeting: Monday July 14
Current status
Matt_King: No major changes to the status this week, though we did have some important updates
Matt_King: Still 15 in "candidate review" and 4 almost ready
Matt_King: Isa and louis have made good progress
Matt_King: The JAWS report for "roving tab index" is now up-to-date
Isa: there is still one conflict with slider
Matt_King: Yup; we'll discuss that later
Matt_King: We have tabs coming up, but as usual, some of the test plan turned out to be more complicated than Isa and I anticipated, so we're still getting that ready for people
Matt_King: I forgot to check with Isa about "vertical temperature slider". We have the app changes and APG changes in place, so it's now just a matter of making the small changes to the arrow commands
Isa: I think I can prioritize that so we can complete the report by the end of Friday
Matt_King: Great! Even though we're not having a meeting next week, we can still try to move things forward asynchronously
Matt_King: It was just a couple tests where we need to circle back with people
Isa: PAC will be closed the entire first week of July
Matt_King: Ah, okay. If we have enough people available, we can still make progress
Matt_King: Hopefully we can get everything we need from Isa and James lined up asynchronously so we can still make progress that week
Matt_King: Thanks for calling that out, Isa
Isa: We can also work on the other disclosure--the FAQ one
Matt_King: Oh, right. We can adjust our plan accordingly
Matt_King: In early July, we could end up with a lot of things ready for testing
App issue 1430 - Improvements to new experience for marking commands untestable
github: w3c/
Matt_King: This is a pretty big change!
Matt_King: I want to talk about what's changing in the testing experience
Matt_King: In every single command, you're now going to see another checkbox
Matt_King: Right after the "output" field, there is a new checkbox that you check if, for some reason, when you executed the command, the screen reader behaved in such a way that you can't answer the assertions
Matt_King: For example, if the screen reader was supposed to move to a button, but it didn't move to that button, you can't answer questions about the behavior of the button
Matt_King: If you mark it as "untestable", then you have to answer a question about why it was untestable
Matt_King: We automatically check the box which says that "there were negative side effects", and you need to designate why it was untestable, mark that reason as "severe", and then add a description with additional context
Matt_King: Isa and I did some testing before it rolled out and shared some feedback regarding three issues that we would like resolved
Matt_King: This issue is capturing one part of that feedback
Matt_King: Today, I want to talk through the details of issue #1430 and make sure that people are in agreement on the solutions to the problems it raises
Matt_King: In my issue, I first tried to make sure that we would be aligned on exactly what the problems are
Matt_King: There were three problems. First, we have a label on the checkbox that is kind of hard for screen reader users to understand (in fact, Isa pointed out that if the command was, say, the letter "b", it was hard to tell from the way the label was phrased whether "b" referred to the key you pressed or to a word in the label)
Matt_King: Then, after you check the box, we didn't have clear instructions about what you have to do next (which is to record what was untestable)
Matt_King: And third, if you submit the form without explaining what was untestable, there wasn't a clear message, and the focus didn't jump back to a location that was as helpful as it could be
Isa: I agree with all of that
Matt_King: On the first two issues (understanding what the label is and what it means), I feel like this is one of those situations where we ended up with a really long label because we wanted to be clear with testers about what it means. Once people understand it, they don't need such a long label
Isa: Right, it's overly verbose at that point
Matt_King: So I'm proposing that we use a shorter label along with a description. For the label, I simply wrote, "command is not testable", and for the description, it says exactly what that means
Matt_King: I'll copy that into the minutes, if it's helpful...
<Matt_King> Description: Executing 'COMMAND_NAME' affected behavior that made assertions untestable. If checked,
<Matt_King> then at least one severe negative side effect must be recorded below.
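The label-plus-description pattern Matt_King proposes could be wired up roughly as below. This is a hypothetical sketch only; the function and element names are invented here and are not the ARIA-AT app's actual code:

```typescript
// Hypothetical sketch: render a short checkbox label with a longer
// description attached via aria-describedby, so screen readers announce
// the label first and the detail on demand. Names are illustrative only.
function renderUntestableCheckbox(commandName: string): string {
  const id = "untestable-check";
  const descId = `${id}-desc`;
  return [
    `<input type="checkbox" id="${id}" aria-describedby="${descId}">`,
    `<label for="${id}">Command is not testable</label>`,
    `<p id="${descId}">Executing '${commandName}' affected behavior that ` +
      `made assertions untestable. If checked, then at least one severe ` +
      `negative side effect must be recorded below.</p>`,
  ].join("\n");
}
```

The point of the split is that the concise label is what testers hear on every focus, while the verbose explanation remains available through the description.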
Matt_King: Is this good?
Isa: Sounds good to me
Isa: Testers who are more familiar with the platform, or the ones who are going to be running the test: is it clear why we are checking the checkbox? Is that instruction clear?
louis: from the verbal discussion, it does make sense
Dean: Agreed
louis: of course, when we see it in person, we may have a different opinion. But conceptually, it makes sense
Matt_King: If you try to submit a form with a side effect recorded, but you didn't put a description of the side effect, does that result in an error? I didn't check that
Matt_King: There are two error conditions: one is that they didn't input any side effect, and another is that they designated a side effect but they didn't give it a description
Matt_King: Are we all aligned: when you press "submit" and there are three errors on the form, focus should return to the first one, right?
Isa: Yes, that sounds right to me
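The validation behavior agreed on above (two error conditions, with focus returning to the first error on submit) could be sketched like this. All of these names are hypothetical, assumed for illustration, and not taken from the actual app:

```typescript
// Hypothetical sketch of the two error conditions discussed above;
// none of these type or function names come from the ARIA-AT app itself.
interface SideEffect {
  severity: "severe" | "moderate";
  description: string;
}

interface UntestableForm {
  markedUntestable: boolean;
  sideEffects: SideEffect[];
}

// Returns error messages in document order; on submit, the app would
// move focus to the field associated with the first returned error.
function validate(form: UntestableForm): string[] {
  const errors: string[] = [];
  // Error condition 1: marked untestable but no side effect recorded.
  if (form.markedUntestable && form.sideEffects.length === 0) {
    errors.push("At least one severe negative side effect must be recorded.");
  }
  // Error condition 2: a side effect was recorded without a description.
  for (const effect of form.sideEffects) {
    if (effect.description.trim() === "") {
      errors.push("Each recorded side effect needs a description.");
    }
  }
  return errors;
}
```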
Matt_King: Then I think we can give this one the green light!
jugglinmike: Carmen is out today, but she will see the minutes
Re-run JAWS report for color viewer slider
Isa: louis did the heavy lifting here
louis: "shift+f" didn't give the right output
louis: so my results were still the same as Hadi's on that one
Isa: Joe_Humbert is the other tester
Matt_King: Joe is not here today
Matt_King: So louis's output matched Hadi's when he was going backwards
Matt_King: This is similar to what we saw in radio group
louis: Now, the interesting thing is that if I sort of go off-script and just tab back-and-forth, then it will start reading
Matt_King: We noticed "shift+f" (I believe) in the "roving tab index" radio group
Matt_King: Just in some weird edge cases, "shift+f" works differently from "f"
louis: It's the oddest thing. I'm not sure what's going on. Eventually, you can get it to read if you mess around with it enough, but that would still be considered a failure
James: I cannot reproduce that. It announces the min and max value every time for me
James: The wording and lack of pause is somewhat questionable, but that aside, it isn't giving me any issues
Matt_King: I wonder, for this, because Joe isn't here...
Isa: I wonder if this is another Windows 10 versus Windows 11 difference
louis: I'm on Windows 11
Matt_King: There is no issue number for this
Isa: I don't get min and max here on my machine
Isa: I do get the "slider" role
Isa: If I maximize, I still don't get the min and max
Matt_King: I'm getting the "min 0 max 255" with "shift+f" on Windows 11. I'm running a JAWS beta that I think is just after the May update
James: I am not running a JAWS beta, and I'm running Windows 10
louis: I got mine to work. I hit the "run test setup" button and then I toggle PC cursor off and on
Matt_King: I refreshed the page, the focus is on the "run test setup" button. I press "enter", and JAWS announces the link, and then I press "shift+f", and it actually says "color viewer group left right slider 128 min 0 max 255"
louis: I did exactly what you did, and I got "color view group left right slider 128"
louis: But if I refresh it, press "run test setup", toggle virtual PC cursor off and on, then press "shift+f", I get the expected output
louis: But it seems every time I look at this, I observe something different
Isa: It's inconsistent
Matt_King: Are we all on Chrome 137 point something?
louis: I am
Matt_King: I don't know what to think of the extra step that Louis is inserting in order to get it to work
Matt_King: I don't have to take that extra step, and neither does James
Isa: But I do need to insert the extra step
Isa: And I'm on default settings
louis: It depends. If I reload it, and I run it five times, then on the fifth try, I may get it to read
Matt_King: By the way, I did maximize the window with the example, but that didn't change the behavior
louis: I have my Chrome default to maximize
James: For ARIA-AT, the popup does not open maximized by default even if Chrome maximizes by default
louis: For what it's worth, I ran it both on default settings and my configuration. That didn't make a difference
Matt_King: I don't know why it says "group"
James: Because there are multiple sliders, so they're in a group
James: I just reproduced the bug
James: Is there something wrong with this example?
James: I'm on Windows 10
Matt_King: The other people who are not getting the announcement (consistently), they are on Windows 11
Isa: I don't think there's anything wrong with the example
Matt_King: When I "shift+tab", I don't hear "group"
Matt_King: I've done it now many times, and I'm getting the "min"/"max" announcement every time (on the test page specifically)
Matt_King: I want to figure out a path out of this hole
Matt_King: We have something that's flaky, but we don't know the conditions for the flakiness...
James: Is this in a pull request or on "main"?
Isa: It's on "main". It's a conflict
Isa: For the record, it's only with JAWS. For the others, the behaviors are met
James: NVDA doesn't read "min" and "max", though, so it fails
Isa: Yes, that's right
Matt_King: In the ARIA-AT test case, I can't get it to fail no matter what I do. I am on a slightly later build of JAWS, so that could be a factor. I'm on Windows 11 and the same version of Chrome
Isa: Now I got the min and max, just by re-opening the test page
jugglinmike: Is this untestable? A room full of people can't agree on the behavior, and that seems like a precondition to testing to me
Matt_King: I've got to figure out how to move us off of this topic, but I'm honestly feeling a bit stuck
Matt_King: Do we wait for the next version of JAWS? The next beta release is in July, which isn't that far away now. We could hang out for a while and see if this gets better
Matt_King: James is right to question the integrity of the test case itself, but we're not finding any problems there
Matt_King: I guess, for lack of a better option at this point in time, I'm kind of feeling like we should put this on ice until the July release
Matt_King: If there was a bug, Vispero would want the bug to be associated with the July release, anyway
Matt_King: I appreciate everyone's energy and enthusiasm in geeking out over a specific test case!
Matt_King: We're very close
Run of accordion test plan
Matt_King: We'll skip this for now
Conflicting results in Rating Radio Group
Matt_King: Dean had output that was completely different than the other testers, and I'm wondering if that came from Dean or from the bot
Matt_King: Do you think you could re-run the test, Dean?
Dean: Sure
Dean: This may be a version thing, too. I'll have to check
Isa: This is the issue with the laptop key
Dean: Ah, right, then I got a cheap external keyboard
Dean: That allowed me to do it, but I don't know why there was a conflict. That was a while ago, so I'll have to look again. I'll do that today
Matt_King: This might have been bot output and not your output, but if you could manually go to test 14 and 15 and re-run them and make sure that the output that is recorded is accurate, that will move us forward
Dean: I will do that