Meeting minutes
W3C Accessibility Maturity Model Publication Update
Fazio: welcome to the larger group, introduction to the demonstration
Fazio: Introduction to demos from Benetech and Neha
CharlesL: Introduction of John Higgins
CharlesL: explained how Benetech used the maturity model, taking the spreadsheet and creating a tool to gather the data
CharlesL: also customized it for Title II
CharlesL: tested with four schools; they found it valuable
CharlesL: They were able to create some templates to streamline things like Procurement
CharlesL: added a glossary, made it more like a TurboTax-style interface, cross-checked for inconsistencies
janina: there will be a discussion after the presentation (not recorded)
John: trying to create a tool that is more self-explanatory, so we don't have to hold the schools' hands quite so much
John: What can we borrow from other people who have done the same thing, like healthcare and SOX
John: demoed a tool for a different maturity model with categories, proof points, and a progress bar; borrowed from this for the AMM
John: This tool was generated by AI in a day, with 5 hours of clock time
John: will allow for rapid feedback, remove barriers, get it past a wireframe
John: Each "Subject Area" (dimension) has a list of what is reviewed, what is the score, and the score for the dimension
John: top of the page has all subject areas/dimensions rolled up
John: each subject area/dimension has a details page, there are filters so you can look at stuff that hasn't been scored (for example)
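[Scribe note: a minimal sketch of the rollup being described, with invented names such as `ProofPoint` and `dimensionScore`; this is not the demoed tool's actual code.]

```typescript
// Illustrative data model only; the demo's real schema was not shown.
interface ProofPoint {
  id: string;
  score: number | null; // null = not yet scored (filterable in the details view)
}

interface Dimension {
  name: string; // a "Subject Area" such as Procurement
  proofPoints: ProofPoint[];
}

// Score one dimension by averaging its scored proof points.
function dimensionScore(dim: Dimension): number | null {
  const scored = dim.proofPoints.filter(
    (p): p is ProofPoint & { score: number } => p.score !== null
  );
  if (scored.length === 0) return null;
  return scored.reduce((sum, p) => sum + p.score, 0) / scored.length;
}

// Roll every dimension up into the single summary shown at the top of the page.
function overallScore(dims: Dimension[]): number | null {
  const scores = dims
    .map(dimensionScore)
    .filter((s): s is number => s !== null);
  if (scores.length === 0) return null;
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}
```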
John: API calls can go out and look at document inventories for example
John: some proof points are multiple choice questions, there are notes fields, and a place to attach documentation
John: continued demo
John: added admin interface for maintaining organizations, users, proofpoints, and assessments
How do you override false positives?
John: it will suggest a score based on the API run, but the user can override the suggestions
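[Scribe note: a minimal sketch of that override behavior, with hypothetical field names, since the demo's internals were not shown: the API-suggested score is only a default, and a user-entered score always wins.]

```typescript
// Hypothetical shape: the automated API run attaches a suggested score, and a
// reviewer may record an override (e.g. to dismiss a false positive).
interface ReviewedProofPoint {
  suggestedScore: number | null; // from the automated API run
  userScore: number | null;      // manual override, if any
  note?: string;                 // notes field shown in the demo
}

// The reviewer's judgment always takes precedence over the suggestion.
function effectiveScore(p: ReviewedProofPoint): number | null {
  return p.userScore ?? p.suggestedScore;
}
```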
Janina: You should propose this for a TPAC breakout. Can you say a little about how the tool was built? How did you choose the AI? How did the narrative and the spreadsheet get fed into the prompts?
John: I wasn't sure what was possible. I looked for the lowest barrier to entry to get a good result. Stuff is happening really quickly; what I tell you today might be obsolete in a month. This was created using Replit. We didn't have an existing code base, and Replit worked well for that, and it handled the deployment.
John: You can't leave assumptions in the logic; tell it what NOT to do in addition to what to do
John: prompts need to be verbose
John: could feed the entire spreadsheet into the prompts
Neha: Where did the questionnaire come from? At what stages did you use AI?
John: The questionnaire was generated by AI, but it is fairly naive.
John: aggregated questions that were being asked by schools
John: think the best path forward is to have it reviewed by SMEs
John: AI was basically the designer and the engineer
CharlesL: has not yet been reviewed for accessibility
John: instructed the AI to run axe on all changes, and not commit a change until axe passed
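[Scribe note: one way such a gate can be wired up, as a sketch only; the demo's actual setup was not shown. This assumes Playwright plus @axe-core/playwright and a hypothetical local dev URL.]

```typescript
// Run axe against the app and fail (blocking the commit/CI step) if any
// accessibility violations are found.
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

async function main(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("http://localhost:3000"); // hypothetical local dev URL
  const results = await new AxeBuilder({ page }).analyze();
  await browser.close();

  if (results.violations.length > 0) {
    for (const v of results.violations) {
      console.error(`${v.id}: ${v.help} (${v.nodes.length} node(s))`);
    }
    process.exit(1); // non-zero exit blocks the commit in a pre-commit hook
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```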
John: for privacy reasons, the school data was not fed back into the AI
Neha: can the questionnaire be simplified?
John: yes, we need to get the questionnaire in front of more people and see what they are struggling with
John: push it out, see what comes back, use that to drive changes
John: AI might not be able to fully represent users
Mark_Miller's questions were answered by previous discussions
John: There is nuance between "you are not doing the thing" and "you cannot prove that you are doing the thing"
Fazio: AMM is not normative; it's about measuring progress
<Jon_avila> Thank you for the presentation
Janina: would be useful to have a resolution for wide review
RESOLUTION: Maturity Model group is ready for wide review of the editors draft prior to note status
no objections