16:59:52 RRSAgent has joined #aria-at
16:59:56 logging to https://www.w3.org/2025/04/23-aria-at-irc
17:01:31 mmoss has joined #aria-at
17:02:17 Matt_King_ has joined #aria-at
17:03:09 ChrisCuellar has joined #aria-at
17:03:20 zakim, start the meeting
17:03:20 RRSAgent, make logs Public
17:03:22 please title this meeting ("meeting: ..."), jugglinmike
17:03:34 meeting: ARIA and Assistive Technologies Community Group Weekly Teleconference
17:03:38 present+ jugglinmike
17:03:49 scribe+ jugglinmike
17:03:49 present+
17:03:49 Isa has joined #aria-at
17:03:49 present+ isaDC
17:03:49 present+
17:03:52 present+ Joe_Humbert
17:03:54 present+ dean
17:03:57 present+ james
17:04:22 present+ Matt_King
17:04:30 howard-e has joined #aria-at
17:04:34 present+ ChrisCuellar
17:04:51 Carmen has joined #aria-at
17:04:53 present+ howard-e
17:04:57 present+ Carmen
17:05:17 Topic: Review agenda and next meeting dates
17:05:20 https://github.com/w3c/aria-at/wiki/April-23%2C-2025-Agenda
17:06:48 Matt_King: Requests for changes to the agenda?
17:06:59 Matt_King: Hearing none, we'll stick with the agenda as planned
17:07:12 Matt_King: Next Community Group meeting: Thursday, May 1
17:07:21 Matt_King: Next AT Driver Subgroup meeting: Monday, May 12
17:08:28 jugglinmike: Valentyn of Vispero ought to be able to join us for that one
17:08:40 Topic: Current status
17:08:53 Matt_King: james, IsaDC, and I have been working on a schedule for upcoming test plans
17:09:26 Matt_King: There's a pretty solid plan through the end of June. Of course, circumstances may change, but I think we can have 25 test plans in "candidate" or "recommended" by the end of June. That's my target for now
17:09:40 Matt_King: That's based on the fact that we currently have four in "draft review"
17:10:10 Matt_King: We have seven more where the plan itself will be ready by the middle of June. That's probably the biggest stretch.
I don't know how much testing we'll be able to get done before the end of June
17:10:18 Matt_King: And we have 14 currently in "candidate review"
17:10:24 Matt_King: That all totals to 25
17:10:52 Topic: Testing of Rating Radio Group
17:11:02 Matt_King: There were no conflicts when I checked yesterday
17:11:32 Matt_King: For JAWS, it looks like we're all done. Thank you!
17:11:42 Matt_King: We can mark it as "done" because there are no conflicts
17:11:51 IsaDC: Will do
17:12:13 Matt_King: In that radio group, were you getting the set size spoken consistently?
17:12:29 IsaDC: Yes. I wonder if it has something to do with the version, because I used the April version
17:12:36 IsaDC: I was going to ask if I should re-test
17:12:50 Matt_King: Right, I think we'll get to that in a minute
17:13:06 Matt_King: NVDA is set up for Joe_Humbert and dean. Joe_Humbert is all done
17:13:12 dean: And I plan to finish this evening
17:13:25 dean: I'm just trying to get it done when I have time
17:13:26 Matt_King: No one has started VoiceOver yet. That's assigned to Joe_Humbert and mmoss
17:13:57 Joe_Humbert: I had the 15.4.1 update, which took a while last night, so I didn't get to the third screen reader
17:14:06 Matt_King: Do we have 15.4.1 in the system, IsaDC?
17:14:13 IsaDC: Yes, we added it last week
17:14:23 Matt_King: This test plan is going very smoothly. This is great!
17:14:37 mmoss: I should be able to complete this today.
17:14:49 Matt_King: We may be landing this one pretty quickly, then. That's awesome
17:14:58 Topic: Re-run of JAWS for Color Viewer Slider
17:15:36 Matt_King: This is assigned to Joe_Humbert and Hadi, and it looks like Joe_Humbert has done this one, too. Wow! But we don't have anything from Hadi yet
17:15:52 Matt_King: I think he said to give him a couple of weeks, so I think we're good there
17:16:06 Matt_King: So I guess there's no update there, other than that Joe_Humbert is done. Thank you, Joe_Humbert!
17:16:14 Topic: Issue 1221 - Conflicting JAWS results in radio group tests
17:16:23 github: https://github.com/w3c/aria-at/issues/1221#issue-2948183453
17:16:38 Matt_King: I commented on this yesterday, and I tested it with a beta version of JAWS which directly follows the April release
17:16:44 Matt_King: I put my information in the issue
17:16:53 Matt_King: I'm getting the same results that Joe_Humbert got in Windows 10
17:17:05 Matt_King: And then you got the same results in the radio group
17:17:14 IsaDC: I'm happy to change my results to match Joe_Humbert's
17:17:29 Matt_King: That would complete this one. Wow, we could end up being done with all the "radio groups" very soon
17:17:51 Matt_King: If that resolves everything, then you can go ahead and push this forward into "candidate review"
17:17:53 IsaDC: Sure
17:18:10 Topic: Issue 1212 - Sandbox testing of test plan changes
17:18:23 github: https://github.com/w3c/aria-at/issues/1212
17:19:13 james: This came about just because we've had a few instances where we updated a test plan and wanted to see how it manifested in the app, or where results unexpectedly were not carried over
17:19:58 james: The individual things which prompted it are worth discussing on their own, but essentially, we think it would be helpful to preview test plans in a version of the app that is authentic, without requiring merging a pull request
17:20:31 james: We currently don't have many roll-back opportunities after merging
17:21:19 james: For example, we have the "staging" environment and the "sandbox" environment, but they don't really reflect "production" to the degree that we can use them to reason about things. They lack the wealth of data in the "production" environment
17:21:45 Matt_King: If we had the ability to get the most recent test plan data into the staging environment, but to operate off of a branch...
17:22:12 Matt_King: I guess the main thing here is: once a test plan is merged into the "master" branch, the only way to correct problems is to merge a new version into the "master" branch
17:22:35 Matt_King: The problem is that we don't have an environment with all the prior results for the test plan, so we can't predict what merging will really do.
17:22:40 james: Yes
17:22:47 Matt_King: So this is really a data thing
17:23:05 ChrisCuellar: What was the original intention behind the "staging" environment versus the "sandbox" environment?
17:23:29 Matt_King: The "sandbox" environment was for developers to push to whenever they like. Like a "nightly" build
17:24:05 Matt_King: The "staging" environment was meant to give external stakeholders an opportunity to view something new while still using "sandbox" for ongoing work
17:24:21 Matt_King: We should be able to experiment with the data in the sandbox to any degree
17:24:33 Matt_King: I kind of wonder if maybe the data in staging should be closest to production
17:25:03 Matt_King: But james is saying that we don't have an environment that matches production and which is safe to mess around with
17:25:16 james: We also recognize that "staging" may have features which are not in production
17:25:53 james: I think the ideal would be the ability to have a copy of "production" on demand, and to have that copy be able to read from GitHub
17:26:32 Matt_King: I want to be careful not to build a massive new thing in order to solve an occasional problem
17:26:43 Matt_King: I think there may be a way to use "staging" to solve this problem
17:27:52 Matt_King: It could be (especially given the way we've been working lately) that we could do something so that the staging environment is essentially equivalent to production, except for the possibility that we push something new to it.
But we have the ability to go to the staging environment and say, "At this point in time, run this script to set the data in the staging environment to be equivalent to production"
17:28:19 Matt_King: ...and if we also had the ability to pull in test plans from a branch other than "master", and for you to choose which branch that is
17:29:00 Matt_King: In staging, we could have a feature that asks, "From which branch do you want to pull?" So if you have a pull request branch, you could pull from that branch and then work with it in staging, and it would be just like working with it in production
17:29:18 howard-e: It won't be exactly the same, though. There will be times when staging will have updates that go beyond production
17:29:52 Matt_King: Yes, but I think 90% (or even 99%) of the time, those changes are not related to the kind of functionality which affects how the test plans are going to be processed in the test queue (and things like that)
17:30:09 Matt_King: We don't often touch things like the code which controls how test results are copied
17:30:22 Matt_King: It feels like the velocity of change in staging is quite manageable
17:30:26 howard-e: Good point
17:30:41 howard-e: To re-share what I stated in the issue, I agree with all of this
17:31:18 Matt_King: I don't want to let the perfect be the enemy of the good. Even if we could get a "90%" solution...
17:31:42 james: I'd love to understand how often we need to share something on "staging" with an external entity. It doesn't seem like it happens very often at all
17:32:19 james: If this could be put in place, I would encourage us to always put changes through this flow in order to be diligent about testing our changes
17:32:42 Matt_King: We currently have the "preview" capability, and our process involves using it before we merge
17:33:17 james: Right, and I don't want to suggest that we should start looking at every single test in the app.
I think the preview is good for reviewing the underlying test plan itself
17:33:42 james: Most of the time, I think this kind of review should only take a few minutes at most
17:34:19 Matt_King: We could have staging serve two purposes. We can have it serve the purpose you're describing, james. We can also have it serve the purpose of staging new changes to the app, but only when the development team needs it for that purpose
17:34:40 Matt_King: In other words, we could go directly from sandbox to staging to production on a super-fast path almost all the time
17:35:26 Matt_King: Right now, we have the feature related to automated updating. When IsaDC and I were giving feedback, we did it in sandbox. When it's time to deploy that feature, do we always go through "staging", or do we go directly from "sandbox" to "production"?
17:35:31 howard-e: We always go through "staging"
17:36:15 howard-e: It generally takes a week to move from staging to production. It's a manual process that we run internally
17:37:06 Matt_King: I wonder if that adds much risk to the kind of previews that james is describing. If you make a change to the test plan, and it looks good in staging, you merge it, and it goes to production. If something isn't quite right, it could take a week to resolve
17:37:32 james: I think it adds quite a bit of cognitive overhead to need to have a sense of "What state is staging in?"
17:37:55 james: Also ambiguity. It somewhat undermines what I was trying to achieve when I raised this issue
17:39:49 Matt_King: I don't know how to do this without a whole new environment
17:40:19 james: Couldn't we make that environment much more ephemeral? Couldn't it happen in GitHub Actions? We only need it for a short time, and then we can throw it away
17:40:22 Matt_King: I don't know
17:41:06 james: Or, how easy is it to get the app up and running locally?
If everything is Dockerized, and all someone has to do is run "docker-compose up", then these concerns go away
17:41:33 howard-e: It is not Dockerized. It could be. While the operating instructions are minimal, it may be preferable to Dockerize
17:42:03 ChrisCuellar: I wonder if the Bocoup team can take this internally for discussion. It sounds like there are a lot of options, and I wonder if it would be helpful for us to consider it as a team and come back to you all with some recommendations
17:42:09 Matt_King: Yeah, why not?
17:42:26 Matt_King: I was actually a little surprised that building locally might be something PAC would prefer
17:42:45 Matt_King: Maybe making it possible for anybody to do that more readily might be good for the project overall in ways that I don't foresee
17:43:00 james: Dockerizing is something we do with other clients quite regularly
17:43:13 james: We'd have to figure out a way to share the latest SQL dump
17:43:47 Matt_King: I'm not familiar with the Dockerizing process, but I kind of wonder if others (e.g. Vispero) might benefit from that capability
17:44:46 Matt_King: Let's assign this issue to someone at Bocoup and remove the "agenda" label. When you are ready to discuss again, please add the "agenda" label back on
17:44:52 Topic: App issue 1382 - Candidate review improvement
17:45:04 github: https://github.com/w3c/aria-at-app/issues/1382
17:45:26 Matt_King: When someone goes to the "candidate review" page (e.g. James Craig from Apple), you have to tell them exactly where to go
17:45:53 Matt_King: It would be really nice if the tables on that page were sorted by the order of priorities that we care about, and if there is nothing for them to do, then there isn't a row there
17:46:08 Matt_King: So I'm proposing that we sort the rows in the top three tables
17:46:51 Matt_King: Those tables have rows for every plan.
If the vendor has approved, the "status" column says "approved", but those rows are still present (even though there is nothing for the vendor to do)
17:47:20 Matt_King: The summary table should always show that it is approved. But we treat the vendor tables as the "to do" list and remove rows as they are addressed
17:47:40 Matt_King: The second part is to prioritize this "to do" list according to target dates
17:47:55 Matt_King: I'd like to use that feature better, to set realistic targets for AT vendors
17:48:09 Matt_King: I was thinking of super-lightweight changes that we could make to speed things up, and this is what I came up with
17:48:27 howard-e: When it comes to omitting "approved", won't the reports be in a state of ambiguity?
17:48:33 Matt_King: I'm not talking about changing the table
17:48:40 howard-e: Right, but that isn't represented anywhere else
17:49:37 Matt_King: It would be represented in two other places; one is the "summary" table at the bottom
17:49:58 howard-e: I think what I want to discuss may be a separate issue. I'll raise a new issue for that
17:51:09 Matt_King: As I'm talking out loud about this (I only just raised the issue today), I'm thinking about how to make those first three tables on the "candidate review" page function more like a "to do" list for those representatives
17:51:28 Carmen: I wonder if we're trying to get the same page to work for distinct use cases, and that's creating friction
17:51:50 Matt_King: I think it's all one purpose: it's candidate review, and it's intended for AT developers
17:52:05 Matt_King: Is there another thing that someone thinks the "candidate review" page does?
17:52:27 Carmen: I thought people were also using it to understand the overall status of the test plans
17:53:11 Matt_King: The assumption is that the "candidate review" page primarily serves the AT vendors. Though the "summary" table is somewhat for us.
But it's also for them, to allow them to see where they stand relative to their peers
17:53:15 Carmen: I understand
17:53:39 Matt_King: Right now, we expose the information about each AT vendor's competitors
17:54:06 Matt_King: I think having all three on this one page is okay, but I didn't want to go so far as to suggest that we customize the page for each AT developer. That feels like a much bigger lift to me
17:54:50 Carmen: If they're saying it's complex for them, then maybe we remove the summary at the bottom
17:55:13 Matt_King: I actually love your thinking there, and I think it's kind of cool!
17:55:26 Matt_King: I don't want to make too much work, though. Maybe we can start talking about it as a task after this
17:55:37 Carmen: I can create an issue for us to revisit later
17:55:42 Matt_King: Great!
17:56:10 Topic: Issue 1214 - AT version recorded for reports where bots collected some of the responses
17:56:19 github: https://github.com/w3c/aria-at/issues/1214
17:56:54 james: This came from a discussion with jugglinmike
17:57:22 james: It revolves around a bot run happening, the bot run recording its browser and AT versions, and then a human taking those responses and providing their assertion verdicts
17:57:53 james: However, when this happens, the human's test plan run retains the original browser version and AT version, which may be inaccurate
17:58:19 Matt_King: When we add someone else's results, we have a pop-up dialog to confirm your AT version
17:58:37 Matt_King: Could we utilize that functionality? Or would we make it manual for the tester?
17:58:55 james: I was thinking that it would work the same as when they start their own test run
17:59:12 james: Right now, we're losing data in that we're not tracking which versions the human used
17:59:29 Matt_King: If a human is working with results following a bot run, do we care about the versions the bot used?
17:59:40 ChrisCuellar has joined #aria-at
18:00:15 james: I guess we care if the human needs to change the results that the bot recorded. We might care about discussing why that was. Or, more likely, we might want to know that it happened, so that we have visibility and so that the tester can raise red flags
18:00:47 Matt_King: Maybe when a report is worked on by both a bot and a human, we record both sets of browser and AT versions separately. That could even be part of the run history.
18:01:20 Matt_King: We're out of time, but we can continue during next week's meeting
18:01:35 Matt_King: Thanks to everyone for the excellent support! Talk to you next week!
18:01:39 Zakim, end the meeting
18:01:39 As of this point the attendees have been jugglinmike, ChrisCuellar, isaDC, mmoss, Joe_Humbert, dean, james, Matt_King, howard-e, Carmen
18:01:42 RRSAgent, please draft minutes
18:01:43 I have made the request to generate https://www.w3.org/2025/04/23-aria-at-minutes.html Zakim
18:01:50 I am happy to have been of service, jugglinmike; please remember to excuse RRSAgent. Goodbye
18:01:50 Zakim has left #aria-at
18:01:54 rrsagent, leave
18:01:54 I see no action items
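[Editor's note] During the discussion of issue 1212, james suggested that if the app were Dockerized, running it locally would reduce to a single "docker-compose up", with a shared SQL dump seeding the database. The following is a purely hypothetical sketch of what such a compose file could look like; every service name, image, port, and credential here is an assumption for illustration, not taken from the aria-at-app repository.

```yaml
# Hypothetical sketch only -- names, ports, and credentials are assumptions,
# not the aria-at-app project's actual configuration.
services:
  app:
    build: .                 # build the web app image from the repository root
    ports:
      - "3000:3000"          # serve the app on localhost:3000
    environment:
      PGHOST: db
      PGUSER: aria_at
      PGPASSWORD: example
      PGDATABASE: aria_at
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: aria_at
      POSTGRES_PASSWORD: example
      POSTGRES_DB: aria_at
    volumes:
      # Any *.sql files placed here (e.g. the shared production dump james
      # mentioned) are loaded automatically the first time the volume is empty.
      - ./dump:/docker-entrypoint-initdb.d
```

Under this assumed layout, "docker-compose up" would build the app, start Postgres, and seed the database from whatever dump had been shared into ./dump, approximating the ephemeral, production-like environment discussed in the meeting.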