Meeting minutes
ARIA-AT AT Driver Subgroup Monthly Teleconference
Present: jugglinmike
Reviewing agenda and meeting dates
Next meeting will be Monday, Sept 8
Next BTT meeting will be on Weds, Aug 13
Revisiting the UI for assigning test plan runs
jugglinmike: there are two issues associated with this
#1242: Add functionality to reassign test plan runs to the bot
jugglinmike: I would like to take a step back before implementing these two issues and take time to unify the overall UI for re-assigning test plans and make sure the system rules are transparent for all users.
matt_king: Yes, that sounds good, and let's do everything we can to simplify this and make it a lightweight project. Let's also consider letting anyone who has write access to the database be able to use the bots. Let's reconsider the requirement for the bot to be closely coupled to a test run. I would like to not have to bother with re-assignment when a bot runs.
… We restricted bots to admins for good reasons initially, but it should be easier now to run a bot in the context of a test plan.
jugglinmike: Should non-admin testers be able to re-assign test plans to other non-admin testers?
matt_king: Why shouldn't they?
james_scholes: Historically, we tightly coupled the tester and a test plan. If you unassign someone, do you lose the data in the test run?
… it doesn't seem like we should have to delete the tester's data, even in situations where testers are unable to finish their test plans. Test plans should be less coupled to whoever started them, similar to the way GitHub issues are less coupled to whoever opened the issue initially.
… If we do make it easier to re-assign testers, though, we still need to think through the logic around handling browser and AT versions, etc.
matt_king: I don't want to step too far back here, because I don't want to create too much scope creep at this point in the year. There could be some simple changes that we can get in this year, though. We already have run history. Do we have requirements in the app currently that restrict situations where test plans are done with different AT versions? If so, then we shouldn't mess with that now. If not, then maybe we can work with that, at least, even though it's not ideal for AT vendors.
james_scholes: As long as the new assignee uses the same AT version or can use the bot to update the test results, that would be ideal. If the bot can save the tester time, that's the important thing.
matt_king: Data can go stale quickly if there's a new AT version available.
… it would be great if tests that can be re-assigned were marked as such and visible in the test queue. So it would be great if a tester could unassign themselves, keep the historical data in the run, and then another tester could assign themselves and use the bot to update the test plan if needed.
james_scholes: I can see testers leaning on this too heavily.
matt_king: That's ok, as long as we can filter those unassigned tests easily. The big thing for me is to be able to use the bot without having to "assign" the test to the bot.
james_scholes: Maybe you can request bot input at any time when you're running a test.
… What happens when I request bot input after I've already completed my own manual input?
matt_king: It might be unclear. Currently, testers cannot see other testers' input, even when tests are in a read-only state.
james_scholes: It seems like the only time you can see other testers' input is when things are surfaced as a conflict.
matt_king: You can impersonate other testers using the "Run as..." feature. Maybe there should also be a "View as..." option.
jugglinmike: How do you feel about testers being able to re-assign to other testers?
matt_king: I think testers should only be able to unassign themselves.
jugglinmike: We'll pick this back up at the next CG meeting.
Output normalization
jugglinmike: I'm still unclear about what the rules around spacing should be.
james_scholes: We shouldn't normalize punctuation. As far as spacing goes, there isn't a meaningful difference between one space and two. NVDA does have some differences here, although the information doesn't seem to be that valuable.
matt_king: The main thing for me is that we only use the normalized data when we compare new bot runs against historical data.
james_scholes: Normalized data should be ephemeral and not stored in the database.
… it's like the way rendering works in any CMS.
matt_king: I would like the UI to display the un-normalized output. I think that's important for an AT vendor to see. The unicode characters are an exception, that's definitely a bug.
jugglinmike: When it comes to saving the normalized data to the database... if we change the heuristic in the future, we would want to apply it retroactively, correct? That's an implementation detail though, I think.
… What about capitalization? Currently, we capitalize only the first letter, unless the entire word is all caps.
matt_king: Would it be a problem if we just lower-cased everything? Especially if this is only for comparison purposes?
ChrisCuellar: Would we want to know, for example, if one AT version rendered the word "html" all lower-case? Versus as all caps in the next version?
ChrisCuellar: Is that a difference in how it is spoken? And would we want to track a change in that case?
matt_king: I'm trying to think in terms of interoperability
matt_king: If I was writing a screen reader interoperability spec, I think the requirement is for screen readers to faithfully render the label. If the label were to include case and punctuation, then the screen reader shouldn't change that.
matt_king: If the screen reader decides at some point in time to convey the roles in lower case, but then later renders "Switch Button" with a capital S and a capital B, that could cause random breakages in a test.
matt_king: The only case I can think of for screen reader rendering requirement is in the rendering of a label
matt_king: We kind of know what part of the text is the label because we put it in apostrophes
matt_king: If something egregious happens, I think testers would probably note it
matt_king: Detecting case changes in the rendering of labels--that seems like a very unlikely event. It might be unnecessary scope in practice. It might be the kind of thing that, if we wrote a spec, could be an informative note (e.g. "A common expectation of screen readers is that they don't change labels")
matt_king: And in the future, there could be new tests written. It's not as though the data doesn't exist. Right now, in terms of getting to a workable normalization algorithm as efficiently as possible, I think we can just lower-case everything
matt_king: I don't think we get any real-world value from preserving any normalization
ChrisCuellar: We can make an issue to modify the algorithm that we recently deployed
ChrisCuellar: And we also need to preserve all punctuation--not just hyphens (which is what we recently deployed)
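The normalization rules discussed above (lower-case everything, treat spacing differences as insignificant, preserve all punctuation, and apply normalization only at comparison time rather than storing it) could be sketched roughly as follows. This is a hypothetical illustration, not the deployed algorithm; the function names are invented:

```python
import re


def normalize_output(raw: str) -> str:
    """Normalize AT output for comparison purposes only (never stored).

    Rules sketched from the discussion:
    - collapse runs of whitespace into a single space
    - lower-case everything
    - preserve all punctuation as-is
    """
    return re.sub(r"\s+", " ", raw).strip().lower()


def outputs_match(bot_output: str, historical_output: str) -> bool:
    # Compare the ephemeral normalized forms; the raw, un-normalized
    # strings are what the database stores and the UI displays.
    return normalize_output(bot_output) == normalize_output(historical_output)
```

Because normalization happens only at comparison time, a future change to the heuristic applies retroactively for free: re-running a comparison re-normalizes the stored raw output with the new rules.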
Accessibility of automation consistency reports
jugglinmike: We've been sharing the consistency reports with you, Matt. I'm concerned that they might not be very easy to read using an AT. I'm concerned about the diff viewer especially. We're using Github-flavored markdown for the report. The diffs render pretty well for sighted users. But since we're only using markdown, I'm not sure that this renders well for AT and screen reader users.
… I'm curious what your thoughts are here?
matt_king: Yeah, this is pretty hard to read. I would prefer to have this rendered in a table. Let's put this on the roadmap, though I think it might be a little lower priority. I like the proposed solution to render these the same way that GitHub does.
jugglinmike: Perfect! That does it for the agenda then.