13:39:22 RRSAgent has joined #rqtf
13:39:26 logging to https://www.w3.org/2025/02/19-rqtf-irc
13:39:26 RRSAgent, make logs Public
13:39:27 please title this meeting ("meeting: ..."), jasonjgw
13:39:30 meeting: RQTF meeting
13:39:37 chair: jasonjgw
13:39:39 scribe+
13:39:41 present+
13:40:01 agenda+ Accessibility of machine learning and generative AI.
13:55:23 Joshue108 has joined #rqtf
14:01:18 janina has joined #rqtf
14:01:22 present+
14:04:44 zakim, next item
14:04:44 agendum 1 -- Accessibility of machine learning and generative AI. -- taken up [from jasonjgw]
14:06:52 JW: We need to discuss this
14:07:07 JW: We need a structure for the issues, and I'm happy to hear an update
14:07:35 Scribenick: Joshue108
14:08:04 https://w3c.github.io/ai-accessibility/
14:09:04 scott_h has joined #rqtf
14:09:11 JW: I've suggested a structure on the list
14:09:30 JW: We are not likely to identify all or even most relevant use cases
14:09:41 We can treat the cases we know about as examples and raise issues
14:09:41 present+
14:09:51 This would help with a conceptual framework
14:09:55 +1 to Jason
14:10:13 JW: This would help the document age well - without the need for constant updates
14:10:20 Leads to a more enduring approach.
14:10:23 q+
14:10:35 +1 from Janina and Scott
14:10:37 ack jan
14:10:59 JS: We can point to things in various domains and aspects
14:11:09 SH: Good point - it's too vast.
14:11:17 We can pick examples and case studies
14:11:24 JS: Can we brainstorm?
14:11:29 Re: Table of contents
14:11:51 JS: We have the person with a disability on the web trying to take advantage of AI-enabled content
14:11:51 stacey has joined #rqtf
14:11:55 present+
14:11:57 We need considerations related to that.
14:12:06 We have content creation being assisted by AI
14:12:16 We need to be aware of that - good for text
14:12:30 Problematic for images - what degree of confidence can we have?
14:12:38 Code generation is also to be included
14:12:50 Then the user interacting with various agents - smart travel.
14:12:56 What will make this helpful?
14:13:36 JS: How can AI bring accommodations?
14:13:46 Then there is the user interacting with their own systems.
14:14:17 I wouldn't mind if it was monitoring what I was doing and helping me be smarter about things I could do.
14:14:24 Or notify me from time to time?
14:14:42 Can vary by user agent. This is a rich vein of thinking.
14:14:52 Assistive Technologies should be looking at this.
14:15:15 SH: Saw some smart home/car videos that are interesting - immersive AI
14:15:31 SH: Very granular - and the a11y conversation is not there yet
14:15:48 JS: The standard agent for folks without a11y needs will not work
14:16:09 It needs to be considered, or it will be an inaccessible world all over again.
14:17:11 https://w3c.github.io/ai-accessibility/
14:17:45 Raja_Kushalnagar has joined #rqtf
14:18:20 JS: We are going to build the TOC shortly and get a FPWD out
14:18:31 JS: Would like to see this published soon
14:18:48 JS: Thanks Janina - these suggestions fit well into my analysis
14:19:18 We have ML in the authoring environment, as a provider of app functionality, and then its use to enhance a11y of the UI
14:19:37 q?
14:19:41 q+
14:19:56 JOC: Above comments from Jason White btw
14:20:09 JW: Hope this is agreeable
14:20:22 JW: Suggests working on these contents
14:20:49 JS: I want to move on with that job - I don't disagree with the categories.
Not sure it's the top level of the taxonomy
14:21:00 The user needs to be the centre of everything
14:21:31 JS: All the usual clichés need to be removed
14:22:04 JS: The hyperbole is exaggerated
14:22:05 LG AI video https://www.youtube.com/watch?v=gYRM00Oe2BM
14:22:15 JW: Cory Doctorow has a good article
14:22:34 JW: There will be good writing on AI a11y - Jason is also working on interesting stuff
14:22:50 The LG video shows AI integration across multiple devices and user agents, but accessibility is not considered
14:22:57 RK: There is a lot of potential with new ways of using AI - advantage and risk need to be considered
14:22:59 q+
14:23:02 ack jan
14:23:43 RK: The errors need to be considered - the disability tax, e.g.
14:24:28 Disability tax is the increase in work and effort because of a disability in your everyday life
14:24:47 RK: This needs to be authored in partnership
14:25:04 As a deaf person, voice detection and captions can be added.
14:25:22 But there is more work, as it is not in Sign - so there is another layer of translation
14:25:30 Additional work needs to be thought of
14:25:37 q?
14:25:41 ack janina
14:25:46 +1 to Raja
14:26:23 SSG: There is also a 'real' tax to have equity - the tools that are needed, medical care, etc.
14:26:57 SH: We have noticed this in Oz - things that are 'special' or disability-supported are more expensive
14:27:15 ack me
14:27:20 scribe+
14:27:38 Joshue108: +1 to Raja, agree on these scenarios
14:27:47 Joshue108: agree also with user first
14:27:56 Joshue108: This is our opportunity to flag this
14:28:10 Joshue108: the overlays are built by nerds who don't understand the user need
14:28:22 Joshue108: this is important to flag in this doc
14:28:55 JW: Jason, I'm hearing a consensus on the user perspective as the top level of the taxonomy
14:29:36 JW: I think that is reasonable to a large extent, but there are issues around authoring content
14:29:42 Error handling, for example
14:29:44 q+
14:29:47 ack me
14:29:58 q+ to say why can't we do both
14:30:04 q+
14:30:08 ack me
14:30:08 Joshue, you wanted to say why can't we do both
14:30:16 Joshue108: Why can't we do both?
14:30:25 Joshue108: Agree with Jason's points, very valid
14:30:50 Joshue108: AI is now a panacea, the devil is in the details; we're still in the sales stage
14:31:06 Joshue108: We should ask these larger questions
14:31:07 q?
14:31:14 ack jan
14:31:29 JS: My comments are pragmatic - I want to know the structure
14:31:42 Am concerned about scope/size
14:33:17 JOC: I don't see that the suggestions Jason made are antithetical to putting the user first
14:34:12 JW: I don't think user needs capture it well for this document
14:34:21 We have other areas to cover
14:34:28 JS: What is an example?
14:34:56 JW: Ok, regarding content creation - you are satisfying user needs for abstract users at that point
14:35:08 Under application development, that could be a heading
14:35:35 JS: Counter-argument - that sounds like a philosophical thing - it sounds like user needs to me
14:36:50 SH: This work started with talking about their perspective on AI - there are other things - remedial actions, how to engage with AI to support people with disabilities, and, to Josh's point, how we can trust it and its ability to support users with disabilities
14:37:07 SH: Those elements need to come in
14:37:17 JW: I have to agree with Janina on this
14:37:38 e.g. the authoring process does support people with disabilities -
14:37:41 JW: How?
14:38:03 JS: Difference between quality management
14:38:12 +1 to Jason - quality is critical here
14:38:26 JW: Yes, this will be error-prone tech for a long time
14:39:05 JS: Just to be clear - in the intro we can say this tech will be error-prone, some of this will produce content that won't work
14:39:11 I don't want to discuss that
14:39:19 JW: I think this is critical
14:39:21 q+
14:39:22 ack me
14:39:50 q+ to say that the potential scope of this is so big these conversations are important for us
14:40:03 RK: The question about authoring ..
14:40:18 Creating videos and content - how are we using the term authorship?
14:40:40 JS: I think Jason is thinking very broadly - doctoral work, cool app building, etc.
14:40:44 JW: Yes
14:41:49 SH: Who are we writing for?
14:41:59 JOC: Great question
14:42:15 SH: Devs and designers that engage with the user?
14:43:20 jasonjgw: Anyone trying to integrate AI into their applications; code, AT, content, etc.
14:43:30 jasonjgw: Highly relevant audiences for this work
14:43:45 jasonjgw: We want to influence those people in W3C and elsewhere
14:43:59 scott_h: To support a user need? Or other applications?
14:44:25 scott_h: So errors - about ML picking up errors in ML itself? Is it its own end?
14:44:43 jasonjgw: Well, that's useful, but in the end that needs to serve greater a11y
14:45:05 Raja_Kushalnagar: Currently no way for AI to catch its own errors
14:45:14 Raja_Kushalnagar: maybe someday, but not yet
14:45:40 jasonjgw: We have error handling as a major problem
14:45:55 scott_h: Yes, AI hallucinations ...
14:46:09 Raja_Kushalnagar: Showing to the user and the audience
14:46:48 Raja_Kushalnagar: Sometimes we'll see things -- in captions
14:46:48 Raja_Kushalnagar: Both people involved need to see them
14:46:52 scott_h: If AI generates someone with six fingers, a low-vision person might not catch it
14:47:13 We also have the potential for AI to produce trashy code that could be sold as accessible remediation
14:47:19 q+
14:47:19 jasonjgw: when AI is generating part or all of the UI ... not scripted -- all ML-generated
14:47:35 jasonjgw: what capability makes the user's a11y needs supported
14:48:07 jasonjgw: errors are an issue there too, but can it meet my cognitive needs, my AT needs
14:48:27 q-
14:49:12 Joshue108: also a11y remediation ...
14:53:25 jasonjgw: Two things we've identified:
14:53:48 jasonjgw: 1. Issues of accuracy in general -- accuracy of what the ML does
14:54:40 jasonjgw: 2. What are the capabilities of the ML when it's producing content and/or UI -- does it comprehend my a11y need and accommodate it?
14:56:53 zakim, end meeting
14:56:53 As of this point the attendees have been jasonjgw, janina, scott_h, stacey
14:56:55 RRSAgent, please draft minutes
14:56:57 I have made the request to generate https://www.w3.org/2025/02/19-rqtf-minutes.html Zakim
14:57:04 I am happy to have been of service, jasonjgw; please remember to excuse RRSAgent. Goodbye
14:57:04 Zakim has left #rqtf
14:58:16 janina has left #rqtf