Meeting minutes
Review agenda and next meeting dates
https://
Matt_King: Requests for changes to agenda?
lola: When you are mentioning goals for the project, is there room to talk about ACD?
Matt_King: Sure, let's do that
Matt_King: Next CG meeting: Wednesday March 25
Matt_King: Next AT Driver Subgroup meeting: TBD
Current interop reporting status
Matt_King: I was able to advance the "two-state checkbox" test plan from "draft review" to "candidate"
Matt_King: I think I will be doing the same for the switch that's constructed from an HTML checkbox in the coming days
Matt_King: Those represent test plans 19 and 20 in "candidate review"
Matt_King: I should have other plans ready for people to test by next week
Axe-con and CSUN updates
Matt_King: Axe-con was the end of February, and CSUN was last week
Matt_King: I gave presentations at both
Matt_King: The presentation I gave at Axe-con is available online (as long as you registered for Axe-con, which is free to do) https://
Matt_King: My presentation at CSUN was not recorded, but I wish it was!
Matt_King: That was more about helping the community to understand what we're doing and why we're doing it. A few people expressed interest in joining, and a couple have already joined (although none are present today)
Matt_King: The presentation included five presenters. Coordinating it was quite a challenge!
Matt_King: The fact that we succeeded is a testament to the dedication of all the presenters
Matt_King: I'm really pleased with the content. I think it landed the message
Joe_Humbert: The presentation was really good. I was hoping more people would attend
Matt_King: I was hoping for more, as well, but we reached 50 or 60 people that we wouldn't have reached otherwise (plus the people they will talk to). Every step forward helps
mfairchild: I think CSUN went well and that this presentation had a good audience. I appreciated the interest that the presentation garnered.
Jill: Was there anything unexpected presented at CSUN? Or was it all about AI?
mfairchild: It was largely AI
Matt_King: mfairchild's work on benchmarking was novel and exciting!
Matt_King: The thing that stands out for me at CSUN now is how it has returned as a strong conference. It was hit hard by the pandemic, and for a couple of years, I thought it could potentially die. Last year's conference was very strong, and so the success of this year's conference has confirmed to me that it is truly back
ARIA-AT strategy discussion
Matt_King: I want to raise some questions. I don't have a specific goal for this discussion. I want to take a temperature from this group for the strategy and near-term goals for this project during the next few months
Matt_King: We have challenges due to recent changes. I want to talk about those challenges
Matt_King: Most of these meetings, most of the time, have been about writing the tests and running the tests. But there are additional considerations
Matt_King: I want to bring those up and learn how engaged people are with those topics. I'd like to use the conversation as a sort of springboard to learn how I can do a better job at moving the group forward
Matt_King: To begin, I'll summarize where I think we are and what I think the biggest challenges are
Matt_King: We began by conducting initial R&D ("buy versus build", "what needs to be built?", "what are the major hurdles to making screen-reader interop, and then broader AT interop, feasible?")
Matt_King: From 2020 to 2025, we developed methodologies and processes
Matt_King: We learned what to test, how to test it, how to reach consensus, and how to leverage that consensus
Matt_King: The skeleton for that is now quite well-refined
Matt_King: https://
Matt_King: The application has technical challenges and usability challenges. But the infrastructure is there and is mostly functional (barring a current problem with automation--I'll address that in a moment)
Matt_King: We identified the two largest hurdles to making the ARIA-AT project "web-platform-tests equivalent"
Matt_King: First: there were no standards governing the behavior of screen readers. We declined to write a standard in favor of something I call "test-driven consensus"
Matt_King: Second: automation via AT Driver. That is a draft specification in the W3C Browser Testing and Tools Working Group. It has three implementations: one for each of JAWS, NVDA, and VoiceOver
Matt_King: That allows us to test at scale, to repeat tests, and to keep data current
Matt_King: The biggest challenge we have right now is just a bug. The JAWS and NVDA Bots leverage the GitHub infrastructure to run tests. We believe something has changed in that space which causes them to fail
Matt_King: The other, larger challenge is that Meta (which had been providing seed funding) decided that they were done. They no longer want to provide any funding. Previously, we were able to have a significant number of people dedicated to the project.
Matt_King: For a little bit, I was super-nervous about that because I wasn't sure how we could continue. When we saw the writing on the wall, though, we tidied up the infrastructure quite a bit, which brought us to a reasonably stable point from which to continue the work
Matt_King: We're at a point where if we can build a volunteer engineering team, we should be able to at least keep it running.
Matt_King: One lead that I got at CSUN: I'm going to be setting up a meeting with some folks about establishing a funding mechanism so that, if we receive grants, we have an organization that can hold the money and pay it out
Matt_King: That can get complicated quickly, though, and I would like it to be as simple as possible
Matt_King: There are plenty of other challenges, but that does it for my overview
lola: We are going to have time to talk about ACD in a bit. Half of that project relies on ARIA-AT, so it's important to us that ARIA-AT is funded. I'm including some ARIA-AT work in the funding that I'm applying for. It doesn't approach the level of investment that Meta previously made, but it's not nothing, either. So you're not in this alone!
Matt_King: Let's coordinate a time to talk offline to find the best way to collaborate in the funding space.
Matt_King: Collaboration is absolutely essential for the success of a project like ours.
cyns: I've been working with lola on some of the ACD stuff. I'm very interested in the overlap with testing and automation. I think there's a lot we can do when it comes to ensuring that "Accessibility-supported" actually means something
Joe_Humbert: What is ACD?
lola: It stands for Accessibility Compatibility Data. It's a sister project to BCD and ARIA-AT. We're working with Mozilla to explore how to provide practical data on the actual behavior and implementation status of the various screen readers
lola: If folks want to know a bit more, I gave a talk at Axe-con this year. It's called "The Gaps We Inherit" and it covers the kinds of holes that ACD intends to address
Matt_King: Without ARIA-AT, there wouldn't be any screen reader data backed by consensus-based testing
<lola> The Gaps We Inherit: https://
lola: ACD can't exist without ARIA-AT or web-platform-tests (not in its current form, anyway)
Matt_King: I spoke with mmoss and elizabeth about the bot bugs because they expressed interest in learning about the engineering side of the project. Are you still interested in that work?
mmoss: Sounds good to me!
elizabeth: Sounds good to me, too
Matt_King: I want to learn the most interesting/exciting work for each person who is participating. I want to find where everyone believes they can add the most value.
Matt_King: I encourage anybody to reach out to me directly to individually talk about how you might contribute. I'm happy to have one-on-one meetings to help organize this team and set it up for more success!
Joe_Humbert: I'm wondering how we can make doing the work quicker. Is there a way that isn't so involved to write tests?
Joe_Humbert: Also, after talks with Adrian, I wonder if there is room for testing ATs beyond screen readers. Screen readers are important, but there are so many other types of ATs, and I'd like to see them recognized
Matt_King: One caution that I've received from people with a lot of years of experience in the W3C space is that you have to achieve some success before you expand too far. In other words, "don't bite off too much at once"
Joe_Humbert: I understand that. I'm saying that we should expand once we have a better cadence for testing
Matt_King: Agreed. I envision ARIA-AT enabling a more distributed workflow. In particular, I believe that the web-platform-tests project has a notion of "inform tests" that are landed with a lower threshold for consensus, with the expectation that they could be graduated to true conformance tests in the future
Matt_King: Thanks to everyone for their time, participation, passion, and hard work! Thank you for being here today!