W3C

– DRAFT –
ARIA and Assistive Technologies Community Group

02 July 2025

Attendees

Present
Carmen, howard-e, Joe_Humbert, Matt_King, mmoss
Regrets
-
Chair
-
Scribe
Carmen

Meeting minutes

TOPIC Meeting schedule

Matt: I am out from the 9th to the 16th of July. I will cancel meetings between those dates. We will meet again on July 24th. I will try to make the Automation meeting hosted by Mike.

For today's agenda, we will talk about our focus for July, as it will be light on the meeting side but with plenty of work to do; talk about the accordion; and introduce the plan for the Android testing pilot.

Are there additional topics?

Murray: I have one related to the test plan I have been looking at, about what to do when we can't select our OS version (related to the accordion).

Matt: How about we talk about it during the accordion topic? Can you please remind us?

Murray: sure thing

Current Status

Matt_King I have been keeping a tracker, and it seemed possible to get 23 plans into either candidate or recommended status. That turned out to be ambitious. Some of them had more technical challenges than we anticipated, even when we thought we chose simple ones. We did end up with 15, plus 4 that seem pretty close.

We have the accordion plus 3 others that are really close to done, but they have little things that we are working out, and they are mostly pretty small: the slider and the disclosure navigation menu. These are things we had to wait on from other actors. Those issues are now resolved, so we'll potentially get close to 20 in July.

But part of what I am trying to do is get us to some kind of landmark that is a significant share-out moment, and I think it's important for us to have some milestones we can talk about later in the year.

It will be important later in the year as we approach the TPAC conference, which is W3C's main annual sharing event. I think we could bring more attention to the project, and to have the impact needed to bring in more stakeholders, we need to be able to show what we are doing in meaningful ways.

We are approaching a significant place: this month, all 3 of the desktop screen readers will have bots working. We also finished writing the code for running automated reports when a new screen reader version is released.

When a new version comes out, we can automatically run those plans and see if anything changed in the results. This is the most important part of the foundation to make interoperability scalable.

So, putting that all together in the month of July is a discussion we've been having with the Bocoup team. It would be good to have a round number of test plans; 20 would be great. That's a third of the APG examples. So my thinking is to have them all run in an automated fashion, and I wonder if other people have thoughts about this and how we can use it to drive momentum.

@michael I think it's a huge milestone, but I am unsure how to leverage it.

Matt_King Some people can chew on it, think about it, and come back to the table later. It's not like we have to do just one thing, and it's not like we have to do it on a specific timetable. TPAC is in November.

I am hoping we can do something with Vispero related to this.

This wouldn't happen without all the people here. Thank you everybody!

Joe_Humbert To make this scalable, we need to make it easy for people to test and resolve conflicts. It's sometimes hard to wait 5 days or 2 weeks to get an answer about what the conflict is and how to resolve it. I've been on this project since the beginning, but if we get more people to volunteer to manually test, I think it would be good for them to have quick feedback.

Matt_King What if we had a Slack channel used by all testers, and designated test support?

Joe_Humbert Yeah, I just think we have done desktop and need to scale to mobile. To do this scaling and go beyond the APG examples, we need an easy way for people to do a little bit of volunteering. If I can only test 5 tests, then as long as you are running the same OS, it should be enough for me to run a couple of tests and split the rest up. I don't think a lot of people have a lot of time to give.

Matt_King We haven't talked about it. How about splitting a test plan among 10 people, so that we don't have only 1 person running the whole plan? Is that the idea?

Joe_Humbert Yes, I think people would be more inclined to help if it's 20 minutes of volunteering. A full test plan can deter people.

Matt_King That is definitely something to think about in terms of how we can make this more approachable and more scalable. We do have a very high onboarding and participation cost here.

That is something else for us to talk about - Carmen to create tracking for this

Carmen I remember W3C discouraging blogs, so I wonder if there is another medium to reach people?

Joe_Humbert I can brainstorm some ideas about a format that is short.

Matt_King We want the developer world and the blind community to become more aware of our work.

This month we are focusing on achieving the goals, and in August we can talk about sharing.

Accordion Test Plan

Issue 1262

<Matt_King> github: w3c/aria-at#1262

Joe_Humbert I was going through the testing with JAWS and NVDA. You go to the test page and run the page set up, and it just says to press space or enter. If you do that, nothing happens. I thought that was odd; something should be happening. It should be registering a keystroke. I did it a second time and pressed space twice, and it worked then. After that, it seemed to work if I only pressed it once.

I think it is an issue with the accordion clones; I couldn't test it or modify the code. I didn't dig too much because I had already spent more time than planned. I am assuming that with the accordion you should press enter and it should open or close.

Matt_King I did the test without pressing the set up button at all. So I expanded and collapsed the accordion manually, using tab to get there. And it always seemed to take only one press of the key.

Let's see here.

Joe_Humbert I think that is the expected behavior.

Doing one key press to collapse and expand.

Matt_King yes

Let's see, I am opening this from the issue. When I load this test page, if I just move focus to the navigate forward link and then press tab... it's expanded by default, and if I press enter it collapses. But if I run the test set up, it's on the billing address and collapsed, and if I just press enter -- it expanded it. No way.

Joe_Humbert I just ran the test set up without JAWS or anything, and pressing enter doesn't do anything. Even without the screen reader.

Matt_King OK, then your intuition was right: that kind of says it is related to the set up script.

To make this issue more comprehensible I am going to edit the issue title.

I will describe the experience in the title

Joe_Humbert I'd rather test the other outstanding stuff as I know Isa made a change to the set up (about pressing down) and it might be needed here too.

Matt_King Was that a test for which you had conflicts?

Joe_Humbert yes I believe it is that one.

Matt_King So that must be the source of that conflict. That is the one you put in as a side effect and Louis did not.

Joe_Humbert Mostly because I wanted to bring attention to it.

Matt_King We'll get the script fixed, and I suspect it will make the conflict go away.

Issue 1261

github: w3c/aria-at#1261

Matt_King This is another crazy one, Joe. What is the story here?

Joe_Humbert I was testing, and I think I accidentally didn't require focus mode. I think you need to manually turn it on. Actually, I think the issue is that it should put you in focus mode automatically, but you get a different output if you do manual focus mode. I am forgetting the names for these. Form input mode versus manually targeting mode gives you slightly different outputs, and I think that is the origin of the conflict.

Matt_King What was shocking to me: is 1251 the same but for JAWS?

Joe_Humbert no, I believe it is the same thing.

Matt_King so that is test 2

Joe_Humbert I think the difference in the output is that if you do it without manually going in and out of focus mode, it adds the heading level, but doing it the other way doesn't include some of the same information. For JAWS, it doesn't include heading level 2 [..]

Matt_King This is a very interesting discovery. It looks like the bot is doing it... When you say the bot results are the same as manual, what do you mean?

Joe_Humbert The bot results mimic going from focus to browse and back to focus. At least it matches that output. I believe it also matches the output of the other tester, and it made me worried, because the way you get into focus mode impacts what the AT will say and affects the tester results.

I also don't think manually going into browse mode and then focus mode is something a user would do unless there is a specific reason. It doesn't seem like a real-life scenario. Unless you are an advanced user; most users probably don't know how to do that.

Matt_King It is weird that two different screen readers have almost exactly the same inconsistent behavior; it might be too much of a coincidence.

Joe_Humbert It could be the computer that I am using. It would be good to have separate testing to confirm this is the case.

Matt_King I think Vispero could reproduce this if we had multiple people confirm it. If I do it the way the test is written, I only get "expanded" and the button name. And if I run the test set up but then exit by going to the virtual cursor and then focus mode and then click shift, I get the same result. But I don't have default settings going on here.

Joe_Humbert Shouldn't it be the same result?

Matt_King yes

Joe_Humbert I think in NVDA, when you manually run the test set up, it indicates that you have gone into focus mode, but otherwise it says you have gone into form mode. They might be treated differently? I don't know.

Matt_King It's interesting: I can't reproduce your results with JAWS right now with the version I am using. However, if I use my normal settings, I don't.

Joe_Humbert Whether you go in automatically or manually, you get the heading level information with default settings? I think that might cause a conflict, because Louis was not getting the heading information.

Matt_King I am on a JAWS beta, unsure of which version.

Joe_Humbert If I need to do more testing, please somebody tell me.

Matt_King I think we might need more people, and to look at versions. I noticed you were on two different browser versions. Thank you, Joe!

Joe_Humbert bye everybody!


Piloting android test runner prototype

Matt_King The last topic on the agenda is the Android prototype, and we have a wiki page that explains this prototype: what it is, how it works. Howard, I want to describe this and have you correct me.

Accordion Test Plan

@murray Sequoia is not selectable from the app

Carmen we can get it on the app

Matt_King It is Sequoia 15.5 that we would need to add

@murray that is what IT lets me use. I will check it out

Carmen Carmen can add this to the app

Piloting android test runner prototype

Matt_King This is a prototype experience where, instead of having to browse on your Android device and try to manually capture what TalkBack is saying, the idea is that you plug an Android device into your computer, set up the device in the usual way, and go on your desktop to the test page. And here is my question.

If I run the set up from my desktop, would it set it up on the Android device?

howard-e You would have a button to open it on the Android device instead of opening the test page in the computer browser

Matt_King It opens the pop up on the Android device?

howard-e correct

Matt_King You run the test, and all the output that TalkBack speaks is collected and copied to the clipboard after you close the example page or finish navigating through the example.

howard-e You can control+V it into the test page

Matt_King this is amazing!

howard-e excited about this too

Matt_King We have documentation about how it works and what you would have to do to set up the Android device. We are planning to run a study on this in a couple of weeks

howard-e Yes, still targeting the week of July 14th

Matt_King Does it work with Android + MacBook?

howard-e Yes, and with Linux

Matt_King I wonder at what point we are going to collect the browser version. Right now we collect the browser version from the test plan page, but it won't be the right version. Can we detect the browser version on the Android device?

howard-e yes you can

Matt_King We might want to change that part of the experience. Would we want to do it always? We need to think about when we detect the browser version. I don't know.

Carmen to create a follow-up about browser version

Matt_King Elizabeth, you had an Android device. I have to get one. Does anybody else have an Android device? Murray?

@murray I don't unfortunately.

Matt_King We have to see how it works in practice. Michael, do you have one, or can you find some people?


mfairchild I have an old one. I don't know if it runs the older Android versions

howard-e There is probably a restriction there. I can't remember it off the top of my head. I will mention it in the system requirements.

Matt_King Right now, for testing the prototype, what we want to know is if we are on the right track. It would be awesome to have your feedback on the experience. We don't care much about the results.

I think it would be good to send something to the email list so that Elizabeth and others can test it.

zakim: end meeting

Minutes manually created (not a transcript), formatted by scribe.perl version 244 (Thu Feb 27 01:23:09 2025 UTC).

Diagnostics

Maybe present: Matt, Murray, zakim

All speakers: Matt, Murray, zakim

Active on IRC: Carmen, howard-e, Joe_Humbert, Matt_King, mmoss