W3C

– DRAFT –
ARIA and Assistive Technologies Community Group Weekly Teleconference

23 January 2025

Attendees

Present
ChrisCuellar, dean, howard-e, IsaDC, JamesScholes, Joe_Humbert, jugglinmike, Matt_King, MichaelFairchild, mmoss, RichardSteinberg, stalgia
Regrets
-
Chair
-
Scribe
jugglinmike

Meeting minutes

Review agenda and next meeting dates

Matt_King: Requests for changes to agenda?

mmoss: There was an issue that I filed to capture the agenda item request from two weeks ago. The one related to some of the results for the navigation menu example with results pertinent to Vispero and Apple. Issue number 1174

Matt_King: Ah, yes. Let's add that

Matt_King: Any other changes?

Matt_King: Hearing none, we'll stick with what we've got

Matt_King: Next Community Group Meeting: Wednesday January 29

Matt_King: Next AT Driver Subgroup meeting: Monday February 10

Current status

Matt_King: We have not set a goal for mid-year, yet

Matt_King: I typically set goals for every six months internally here at Meta, and I tie these public goals to that

Matt_King: We had a really aggressive goal for last year. I think we still need to be aggressive this year, to show progress. But I want to set something more realistic at least for the first half of this year

Matt_King: Of course, we have a lot of dependencies on people beyond this group when it comes to reaching those goals, but that's the nature of the project

Matt_King: This week, we had a meeting with Vispero and reviewed some plans that they haven't approved, yet. A couple of those are blocked by us right now. One is blocked by them

Matt_King: On our side, we're making changes to the "radio group" plan, and we have some issues to close in the three "menu button" plans that are in there (two of those, I'm going to get on the agenda for next week)

Matt_King: A dependency we have on Vispero is related to automation. As you all know, we have automation facilities for NVDA and VoiceOver, but not for JAWS. That's made it hard for us to keep the plans for JAWS up to date

Matt_King: Vispero have made a lot of improvements recently, though, and they'd like to see those improvements reflected in the plan

Matt_King: They may approve the plans as-is if their support for automation gets delayed further than they expect

Matt_King: That's the current status on what's in Candidate Review. I don't have any updates from Apple at this time

Joe_Humbert: Have you been able to secure a commitment from Apple, NVAccess, Freedom Scientific, etc. about a more timely review?

Joe_Humbert: A lot of us are volunteering our time to get things done, and it sits idle while we wait for those folks to do their part

Matt_King: I haven't been very aggressive about that. I would say some of that is more about me than it is about them. I've been struggling to make sure we're always meeting our end of the bargain on time

Matt_King: When they have feedback on things, and we have to make changes to test plans and run them through the review process again--sometimes the delays in that part have been because of me

Matt_King: I want to get better expectations, process, and more discipline around this. Part of that has really been dependent on me being able to dedicate more time

Matt_King: We've been making staff changes around here, so I'm hopeful that I'll be able to do that

Matt_King: 2024 was really hectic, but I think we have most of the technical barriers behind us

Matt_King: I hope to finally establish a better rhythm this year

Joe_Humbert: Thanks. I understand that there are things that are on us. Oftentimes, during the meetings that I'm able to attend, I hear that we're waiting on Vispero or on Apple. It would be nice if they had a commitment to get things done in a timely manner

Matt_King: I think we should revisit this conversation at least once a quarter, and I'll try to take more accountability for making that happen

Matt_King: If there are people (especially the vendors) that have thoughts on this, we can talk more offline

Testing for radio group

Matt_King: The issue Joe_Humbert raised related to the group label getting hidden--we now realize that has affected not just the current radio plan but also the other one

Matt_King: We made a list of changes that JamesScholes and IsaDC are working on. Hopefully we will get new versions of radio test plans ready by next week

Matt_King: From there, we can continue testing

Matt_King: Hopefully we'll be able to preserve a decent amount of test results of the plan that's already in Candidate Review, but due to the nature of the changes, we may lose quite a few results

Joe_Humbert: I skipped the ones where I could visually see the label was getting hidden. In those tests, especially in VoiceOver, it would ask you to do a key command twice, and would certainly put you in the wrong position. I skipped those during my testing. The rest of the tests, I did

Matt_King: That's it for radio. Hopefully we'll have more to say on this next week

Testing for remaining link plans

Matt_King: I was shocked to find so many conflicts here

Matt_King: It turns out to be an NVDA behavior where it reads a whole bunch of elements as being on the same line

Matt_King: That's a default behavior which oddly did not impact link example 1

Matt_King: This is a side-effect of how the "navigate forward/backward from here" links are added into the DOM

Matt_King: All of those side-effects would go away if we were to change the test case, so I filed an issue for that

Matt_King: My assumption is that, at minimum, if we put a paragraph tag or a "<BR>", that would separate them and prevent the side effect

IsaDC: I made those changes back in December, but the app did not update because the changes were in the examples but not the CSV files

JamesScholes: In the APG, these are all under one page in a table. The test cases we are using are very different

JamesScholes: The first is a "span" element. The second is an "img"

JamesScholes: separately, the app doesn't register updated test plan versions when you change the example (as opposed to when you change the CSV files)

Joe_Humbert: They can see the changes uploaded into the system, but the system isn't recognizing that it needs to update the code

Matt_King: Ah, but the app did not import the new reference file

IsaDC: Yes, I reported it already

howard-e: The problem here is that with the reference files, they are really just pointed to. The contents are not actually "diffed." I suggested that if there is a change to the reference file, the reference folder's timestamp should be updated as well

howard-e: We can also track the file contents directly. I suggest creating a new issue to manage that feature addition

Matt_King: We don't have to modify the timestamp, right? We can just add a suffix because that name is referenced from references.csv

Matt_King: If that works for you, IsaDC, then I would rather do that than spend more engineering time on this

IsaDC: The issue here is that the updates are not consistent. Because I rebuilt the whole test plan, it should include the new files

JamesScholes: I would encourage us to be a bit more intentional. Matt_King is right that the date indicates when we pulled in the example. It doesn't say when the example in the APG was updated, though.

JamesScholes: Rather than adding something that's vague and easy to overlook, I would support updating the time stamp

Matt_King: Yeah, because that import date really isn't that important

JamesScholes: It allows us to say, "well, we last imported this on this date"

IsaDC: If we change it now, then we'd be breaking that pattern

JamesScholes: So it's up to us if we want to change the meaning of that date

Matt_King: I'm changing my thinking, I think. The Git history is keeping track of our modifications. We should rely on the Git history. And we do have a commit that has a hash which says, "hey, this file changed"

Matt_King: When you say we "reference" the files but we don't "import" them--is the URL pointing directly to a file in GitHub?

howard-e: Yes, it would be pointing to a file in GitHub. It would be pointing to a commit

JamesScholes: So in this case, the app doesn't know to change the hash

JamesScholes: How do you know to update the hash for CSV files?

howard-e: Pretty much any change to the test folder that isn't "reference folder only" will force an update

JamesScholes: Could you read every file under the reference directory and hash it combined together?

howard-e: Creating a hash of these reference files seems like the easiest way to go. I think there are a couple things to explore here, and it likely involves hashing
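The combined-hash idea JamesScholes and howard-e describe can be sketched as follows. This is a minimal illustration, not the aria-at-app implementation (the app's actual language and file layout are not specified here): walk every file under the reference directory in a deterministic order and feed both each file's relative path and its contents into a single digest, so that editing, adding, renaming, or deleting any reference file changes the resulting hash.

```python
import hashlib
from pathlib import Path

def hash_reference_dir(root: str) -> str:
    """Combine every file under `root` into one SHA-256 digest.

    Both the relative path and the contents of each file feed the
    hash, so renaming a file or editing its contents changes the
    result. Files are visited in sorted order so the digest is
    deterministic regardless of filesystem traversal order.
    """
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(root).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()
```

Storing this digest alongside the test plan would let the app detect that a reference-only change (like IsaDC's rebuilt example files) requires a re-import, without relying on folder timestamps or manual renames.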

Matt_King: For this particular occurrence, IsaDC could make a copy of the timestamped reference folder. Just copy it and put something in its name that indicates a change...

JamesScholes: Could we change something elsewhere in the test plan in a way that is not significant?

Matt_King: Perhaps the reference row in the reference CSV

JamesScholes: Could we change something in the setup script, perhaps?

Matt_King: A copy of the reference folder with a slightly different name would address our problem right now. You would change the path in the "reference" row or the references.csv file

Matt_King: We could do that this one time, and howard-e could investigate a better solution for the future

IsaDC: If we are going to make these changes, could howard-e make them? I think that would be faster

JamesScholes: I can make them

JamesScholes: Could we rename the whole directory?

howard-e: The app would think of it as a new plan

JamesScholes: Have we made these updates to all of these plans?

Matt_King: The first one doesn't need to be changed because it doesn't exhibit the problem

Matt_King: There are no changes required to the HTML. The HTML changes have already been made

Matt_King: [reiterates steps for JamesScholes]

Matt_King: The net effect for people who are running this (I think it was Joe and Dean), the bot will have to re-evaluate, and then Testers can go in and review the responses

JamesScholes: Do you care if it's a rename or a copy?

JamesScholes: Git will follow change history through renames, but it will consider copied files to be completely new

JamesScholes: Currently, they all say "link.html". What if I rename them to "link_css.html" and "link_whatever.html"?

Matt_King: I like that. Git will follow changes across renames well

JamesScholes: Then that's what I'll do

Issue 1174 on disclosure nav testing

github: w3c/aria-at#1174

mmoss: The results appear to be stale at this point

mmoss: With current macOS 15.1.1, we're no longer getting the expanded attribute state change when it happens

mmoss: What is the process? Would we let the current results move through the process and eventually make their way to the APG site and then do a new round of testing?

Matt_King: This is related to the next topic

Matt_King: The report that we have out there right now... What is the Candidate Review status of these? Has either company approved them?

Matt_King: Actually, we have a discussion with Vispero ongoing about this one

Matt_King: Vispero has recognized some bugs in their behavior, and they are planning on making some changes

Matt_King: However, the specific thing that you're talking about was a VoiceOver behavior related to aria-expanded

mmoss: Right, some of the core functionality--the "musts" in the test plan

mmoss: At the time I tested it, they were met, but now, they are not met

Matt_King: So VoiceOver has regressed

mmoss: That's correct

Matt_King: So we could re-run this test plan for VoiceOver and update that. I think that would be a good thing to do before having more discussion with Apple on this plan. I don't think we've even touched on this plan with Apple

Matt_King: I believe we can just re-add this to the test queue for just VoiceOver.

Matt_King: We could replace that report by running with a later version of VoiceOver

Matt_King: Ideally (related to the next agenda item), when a new version of VoiceOver comes out, we will automatically re-run all Candidate and Recommended test plans and re-generate those results. We shouldn't have that problem when we have that capability

mmoss: I'm happy to run through the test plan again

Matt_King: Okay. We can add this to the test queue right now

IsaDC: I'll re-add the "disclosure navigation menu" test plan to the test queue, just for VoiceOver

Matt_King: Then we will re-run it

Matt_King: That's a good observation, mmoss. It's a bummer that there's been a regression

System behavior after adding new AT versions

github: w3c/aria-at-app#1162

stalgia: This feature has to do with refreshing reports for test plans that have a report completed for a previous AT version when a new AT version is added to the system

stalgia: We thought the best way to approach this is iteratively with a low-impact and minor extension to the interface

stalgia: We're thinking that when a new AT version is added to the app, there would be an additional element that appears whenever the AT is viewed in the "manage AT versions" disclosure

stalgia: that would allow the user to start testing with that version and tell them how many runs that would involve

stalgia: We wanted to give admins some control over when that whole slew of test plan runs will be created

Matt_King: I was thinking: in "data management", we have "report status". The button that opens the dialog will say something like "required reports complete" or something of that nature. Or "not complete" or "in progress"

Matt_King: I'm wondering if that's where we might want to indicate that this is in the data management

Matt_King: If you opened that dialog, you would see rows

Matt_King: (There is one thing about that table that I am finding a little difficult to use. That is: we don't group all the required rows together.)

Matt_King: I'm wondering if that might be more consistent with our current approach to managing data and reports

Matt_King: In the "Candidate Review" column, in the very first row, right now, for "action menu button", there's an "advance to recommended" button and then a "Required reports complete" button

Matt_King: I'm wondering if, when a new version is added, that could be changed to something like, "updates to required reports pending"

Matt_King: And then when you open the dialog, we have a table. We could add rows to the table. It's candidate, so it says, "or later". We could add a row with a specific NVDA version in the "AT" column (e.g. NVDA 2024.4.4 or whatever), and then in the "report" column (the far right column), it could say "update pending"...

Matt_King: I guess the problem with this approach, though, is that you'd have to press a lot of buttons.

stalgia: Right. That was the thinking behind a centralized action

Matt_King: What if it appeared just above the data management table for the administrators?

Matt_King: It would be nice if it was a heading that reflected both the number of reports and the screen reader

Matt_King: If there's just one button to run them all, that would be great

stalgia: We can get to work on a proposal

Matt_King: If some output is different than it was before, we'll need humans to intervene. I believe those should be automatically added to the test queue, and I think we have all of what we need in the test queue for that. My expectation is that anything requiring human review ends up in the test queue

stalgia: I think the system already has this notion of tracking historical verdicts like this

Matt_King: How would we know when it's complete? Would we have something else on the data management page? Should we have a log of what happened below that table?

stalgia: That's an interesting question. We can think on it and include a suggestion in our proposal

Minutes manually created (not a transcript), formatted by scribe.perl version 242 (Fri Dec 20 18:32:17 2024 UTC).

Diagnostics


All speakers: howard-e, IsaDC, JamesScholes, Joe_Humbert, Matt_King, mmoss, stalgia

Active on IRC: howard-e, Joe_Humbert, jugglinmike, Matt_King, mmoss