Meeting minutes
Review agenda and next meeting dates
https://
Matt_King: Requests for changes to agenda?
carmen: I wanted to share an update: the 502 errors should no longer appear. Please let us know if you experience them!
IsaDC: I have something to add about the quantity spin button
Matt_King: We can discuss that when we review the current status
Matt_King: Next week is TPAC, so this group won't meet. I'm tentatively planning for a meeting in two weeks
Matt_King: That will be November 19 at this time
Matt_King: The date for the next automation subgroup meeting is TBD
Current status
Matt_King: There are a couple of changes from this past week
Matt_King: It was kind of small in some ways, but still a big milestone for the project: we advanced two more test plans to the "candidate" phase
Matt_King: Apple is in the process of reviewing more test plans. They've added three more people in addition to James Craig--they're all in the system, now
Matt_King: We'll be meeting with some of these people in-person next week at TPAC
Matt_King: We hope to move forward by the end of the year, and we'll discuss one piece of feedback today
Matt_King: We moved "tabs with manual activation" forward to candidate, and we're very close on "tabs with automatic activation"
IsaDC: Should we remove the "min" and "max" from the test plans? Because "home" and "end"...
Matt_King: Ah, yes, I responded to that in an e-mail
James: This is specifically the tests for jumping to the minimum and maximum value, right?
IsaDC: That's right. Not the tests for the attributes with those names
Matt_King: The spin-button pattern in the APG suggested that you could use "home" and "end" keys to set values, but because it's in an edit field, those keys interfere with the behavior
Matt_King: Yesterday, the APG aligned on removing that guidance, and at the same time, they will make some adjustments on the guidance for "page up" and "page down"
Matt_King: We can just not test "home" and "end" for now. I think that's fine; the test plan will still be valid
IsaDC: As for "page up" and "page down"...
Matt_King: I don't remember if we have a test for those
IsaDC: I'm pretty sure we don't
Matt_King: We do have those tests for the slider, where those keys increment by 10 steps
Matt_King: We could make the test more general by just saying something like "multiple steps" instead of "10 steps"
IsaDC: I'm going to replace the tests for "home" and "end" with "page up" and "page down"
Matt_King: That sounds like a good plan
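Scribe note: a minimal sketch of the keyboard guidance under discussion, assuming a generic APG-style spin button. All names and values are illustrative; this is not the APG example's actual code.

    // Illustrative spin-button key handling (TypeScript/DOM); selector and
    // step values are hypothetical.
    const spinbutton = document.querySelector<HTMLElement>('[role="spinbutton"]')!;
    const min = 1, max = 31, bigStep = 5;
    let value = 1;

    function setValue(v: number): void {
      value = Math.min(max, Math.max(min, v));
      spinbutton.setAttribute('aria-valuenow', String(value));
      spinbutton.textContent = String(value);
    }

    spinbutton.addEventListener('keydown', (e) => {
      switch (e.key) {
        // Guidance the APG is removing: Home/End jump to min/max. In an
        // editable field, those keys are expected to move the caret instead,
        // which is the interference Matt_King describes.
        case 'Home': setValue(min); e.preventDefault(); break;
        case 'End': setValue(max); e.preventDefault(); break;
        // Guidance being adjusted: Page Up/Page Down move by multiple steps.
        case 'PageUp': setValue(value + bigStep); e.preventDefault(); break;
        case 'PageDown': setValue(value - bigStep); e.preventDefault(); break;
      }
    });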
Running test plan for Tabs with Automatic Activation
Matt_King: Last week, IsaDC reported that Hadi may be updating these things, but that hasn't happened yet
IsaDC: Right. He didn't reply to my latest e-mail, so I may go in and address them, myself
Matt_King: I may get to it first; I'm going to try to advance this one before next week
Running tests for Switch Example Using HTML Checkbox Input
Matt_King: Joe_Humbert has completed all of the testing. We're waiting on results from dean and mmoss
IsaDC: Neither dean nor mmoss are present today
Matt_King: Okay, so I guess we can't get an update there
Matt_King: Since there aren't any conflicts right now, there isn't any progress we can make on this today
Running plan for Checkbox Example (Two State)
Matt_King: We're in a similar state here. The full test plan has been run once (by Joe_Humbert), but we need a second test plan run for each of the three screen readers
IsaDC: I can take on JAWS
Matt_King: Okay, that would be helpful
IsaDC: If I have time, I'll try to take on one more. In that case, I guess I'll just assign the bot's results to myself
Matt_King: If there's any doubt, then just leave the run assigned to the bot so it's ready for others
IsaDC: Got it
Updating reports to latest screen reader versions
Joe_Humbert: Do I need to assign myself to this? Some of the tests with VoiceOver look incomplete
Matt_King: Yes
Joe_Humbert: It looks like there are two that I can do for VoiceOver; I'll get those done by next week
Joe_Humbert: If there is one JAWS test that is more important than the others, I can take that
Matt_King: I would say work through them in the order presented, if possible. I don't have a simple way of filtering just the ones you've worked on
Matt_King: You can do this through the "Manage Bot Run" dialog
Joe_Humbert: I was having trouble assigning multiple bot runs to myself
carmen: I can take an action item to look into this
Request for change to alert test plan
github: w3c/
Matt_King: This is feedback from James Craig at Apple
Matt_King: I wasn't able to test this prior to today's meeting, so we'll make the assumption that we can reproduce the behavior that James is reporting
Matt_King: It involves an earcon for the alert role
Matt_King: This is similar to test cases for JAWS and NVDA where we have a mode-switching test, and the mode switch is conveyed only via sound (with the default settings, anyway)
Matt_King: Both JAWS and NVDA have settings to convey that via speech (And thus capture it in a response), but we are not asking the tester to change their configuration for this specific test
Matt_King: We could treat this similarly to how we handle mode-switching tests. Those are currently not "bot testable" due to the sound. They could be made bot-testable if we changed the default settings. But that is a bit of a gap in our automatic verdict assignment capabilities
Matt_King: We can set aside that issue for now, but I want to focus on how we handle feedback for this one specific test
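Scribe note: for context, a minimal sketch of how an alert like the one under test is typically triggered (a button that fills a live role="alert" region). Element IDs are hypothetical; this is not the APG example's actual code.

    // Injecting content into a role="alert" live region is what prompts a
    // screen reader to announce it -- and, per James Craig's feedback, what
    // prompts VoiceOver's alert earcon. IDs below are hypothetical.
    const alertRegion = document.getElementById('alert-region')!; // <div role="alert">
    const trigger = document.getElementById('alert-trigger')!;    // <button>
    trigger.addEventListener('click', () => {
      alertRegion.textContent = 'Hello! This is an alert message.';
    });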
james: The assertion was classified as "MAY" due to feedback from Vispero and others
james: So, if we say that VoiceOver fails this assertion, that does not actually reduce their score
Matt_King: They are aligned with the assertion priority
james: They are saying that it could be changed...
Matt_King: I didn't read that as suggesting that they want the priority increased in order to approve it
Matt_King: We made a decision that's actually contrary to the intent of the ARIA specification and more aligned with real-world practice, where "alert" is misused. We hypothesize that it's misused most of the time
Matt_King: It's not a great rationalization, to be honest, but it is a practical one that matches JAWS and NVDA's design
Matt_King: It was intended to do exactly what VoiceOver is doing--to call your attention
James: I feel like that is part of the problem. If a web app wants to draw your attention, it shouldn't be the responsibility of the screen reader.
Matt_King: Yeah, that's true. It could be a browser responsibility. That is, practically speaking, a valid approach and something that I'm almost motivated to raise an ARIA issue for
James: There becomes an issue for me when too much is placed on the screen reader. Then, web authors have a lot less control. And it implies that the only people who would benefit from sounds are screen reader users
james: But if the app implements its own sound, then it doesn't have a way to turn off VoiceOver's sound
Matt_King: Right
Matt_King: Tabling that for now, in the same way that JAWS and NVDA make unique sounds when their mode switches, should we be marking this assertion as "supported"?
james: The wording is out of date with our current practices
Matt_King: Agreed, but we can address that separately
Matt_King: The VoiceOver help does include a feature that allows you to hear every sound, and it describes the meaning of the sound. We can validate that it is the appropriate sound
Joe_Humbert: I don't remember hearing a sound. I probably would have left a note regarding a sound
james: They could be alluding to a sound that was not present when the testing was conducted.
james: It's on them to be explicit about what they're pointing out. It's not clear what the sound is and when it was added.
james: We should verify those details
james: Does VoiceOver enable audio ducking by default?
IsaDC: Yes, it does
james: So it could be the case that the sound is subtle, and it was ducked
Joe_Humbert: So they're saying that this is either a new sound or a sound that people have to enable specific settings to hear
Joe_Humbert: If that's the case, that seems pretty extreme. If we, as professionals, don't know about this, then I can't say that even power users would know to do that
Matt_King: I'm testing this, now. It does make a sound that I do recognize as distinct from other VoiceOver sounds. I do think that it's making an "alert" sound in this situation
Matt_King: I've heard this sound before. It's different from the sound you get if macOS is prompting you in the background for a password or something. It's definitely more subtle than that one
Matt_King: I am using macOS 15.6.1 and whatever version of Safari came with that (perhaps version 18)
Joe_Humbert: I just did the same thing, and I did not hear anything
Joe_Humbert: And I'm on 15.7.1
Joe_Humbert: When I trigger an alert on the APG example page, I see it visibly open up, but I hear no sound effects, and VoiceOver says nothing
Joe_Humbert: This is the "alert" example
Matt_King: The last time, I used "VO + space-bar" to trigger it
Joe_Humbert: I used "enter"
Matt_King: Relative to the volume of the voice, the sound is not nearly as subtle as the "activation" sound. It's present, but it's underwhelming
Matt_King: I just tried it another way, and it's very consistent for me. Though you have to reload the page if you want to trigger it a second time
james: My goodness
Matt_King: I don't know if that is a Safari-specific thing. You should be able to trigger the alert many times in a row
Matt_King: Doesn't the alert disappear visually?
Matt_King: This isn't the "click", this is a dissonant chord sound
Joe_Humbert: The sound I'm hearing is probably the activation sound--two clicks with slightly different pitches
Joe_Humbert: I'm on Safari 26
Matt_King: I want to return to the hypothetical: let's say that everybody was getting the same result that I and James Craig observe. A distinct sound is played. For people who get that experience, should we say that the assertion "may" convey...
james: Do we say that the sound played by NVDA satisfies the assertion?
Matt_King: We do
james: So the answer to me is that, if VoiceOver conveys the role via a sound, then it should pass the test
james: However, I can understand the objection in the group here today. Because it does explicitly mention "inexperienced testers", but the people here today have the most experience, and they are not observing it
Matt_King: Could you comment with your experience and share your macOS and Safari versions when you do?
IsaDC: I can do that
IsaDC: I can test the braille behavior, as well
Matt_King: That would be great. Thank you
Matt_King: I have another sort of related question for us. If this were to change in the future (and they didn't play the sound), then a bot would not be able to detect that change. I can think of a couple ways of approaching this. In this particular case, since the non-audio case would not be detected by the bot (since the bot doesn't receive braille instructions, either), I think we need a way to designate that some tests always require a human tester
james: I think so, too
james: They are kind of pointing out that we are mainly testing speech, and that the project is not currently taking non-auditory feedback into account
james: There is a world in which the project addresses that in a truly holistic way. That's a huge lift, so I think in the meantime, we could have a flag that designates some tests as being "untestable by a bot"/"always needs human verification"
Matt_King: When we do automated testing, those assertions would need to be left as not set
James: Would they be marked only for VoiceOver, though?
Matt_King: Yes. The flag would need to be set at the command-assertion level. That becomes more difficult--not insurmountable, but a little bit
Matt_King: We could set it at the assertion level or at the command-assertion level
Matt_King: In both cases, I think it would have to be a new column in the CSV
james: I think this is another assertion exception
Joe_Humbert: I did re-test quickly, and I did experience it. The problem I found is that it plays almost concurrently with the activation sound, so unless you are specifically listening for it, you will miss it
james: And that somewhat goes against the intent of the ARIA specification design on this
james: But regarding the CSV, the flag could go after the ID of the assertion
Matt_King: Like a whole separate word
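Scribe note: a hypothetical sketch of what that could look like in a test CSV. The column names, assertion IDs, and the "humanOnly" token are all illustrative, not the project's actual schema.

    command,assertions
    ctrl_opt_right,"conveyRoleAlert humanOnly conveyNameAlert"

Here "humanOnly" appears as a separate word after the assertion ID "conveyRoleAlert", marking that one command-assertion pair as always requiring human verification.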
Matt_King: I'll raise an issue for this. It can hopefully make our bot reporting more accurate over time