W3C

– DRAFT –
ARIA and Assistive Technologies Community Group Weekly

07 September 2023

Attendees

Present
Hadi, howard-e, Isabel_Del_Castillo, James_Scholes, Joe_Humbert, jugglinmike, mfairchild, murray_moss, Sam_Shaw
Regrets
-
Chair
-
Scribe
jugglinmike

Meeting minutes

Matt_King: No meetings next week because of TPAC

Matt_King: The next meeting will be two weeks from today--September 21

Matt_King: I've updated the W3C calendar, so everyone's calendar should be up-to-date

Expectations for current automation user interface

<Sam_Shaw> jugglinmike: What I want to raise is that there is an aspect of this we haven't discussed and which isn't well supported in the design of the automation UI

<Sam_Shaw> jugglinmike: Basically, the case concerns when we use automation in the future to collect results for test plans we already have data for. In these cases, we expect that not all AT responses will match prior results. In that case, we expect a human tester to rerun the test, but only for the tests where the responses have changed

<Sam_Shaw> jugglinmike: That is not new and doesn't need changing. In our current UI, we have all been discussing that at that moment, the test admin would assign the test plan to a bot

<Sam_Shaw> jugglinmike: In that moment we don't have a way to get a second human being to verify those tests

<Sam_Shaw> Matt_King: Can we run it again and assign it to another person?

<Sam_Shaw> jugglinmike: Yes, that's what I had in mind, but it's a lot of extra work for a test admin

<Sam_Shaw> Matt_King: I guess my understanding is that this is an MVP, so we may run into issues. We still don't know how easy it will be for us to determine the difference between the bot's results and a human's results, or how often they won't match in substantive ways

<Sam_Shaw> James_Scholes: That sounds okay to me

Changes to unexpected behavior data collection and reporting

github: w3c/aria-at-app#738

Matt_King: Before we talk about the solution I've proposed, I want to make sure the problem is really well-understood

Matt_King: The status quo is: after you answer all the questions in the form about the screen reader conveying the role and the name, etc

Matt_King: ...there's a checkbox asking if other unexpected behaviors occurred. Activating that reveals a bunch of inputs for describing the behavior

Matt_King: In our reports, we have a column for required assertions, a column for optional assertions, and a column for unexpected behaviors

Matt_King: When we surface this data in the APG (but really, any place we want to summarize the data), it lumps "unexpected behavior" in a separate category all on its own

Matt_King: I see this as a problem for two reasons

Matt_King: Number one: we're going to have too many numbers (four in total) once we move to "MUST"/"SHOULD"/"MAY" assertions

Matt_King: Number two: people don't know how to interpret this information

Matt_King: To that second point, there's a wide spectrum of "unexpected behaviors" in terms of how negatively they impact the user experience

mfairchild: I agree that's a problem

Hadi: so we don't have a way to communicate "level of annoyance"?

Matt_King: That's right

Matt_King: We might consider encoding the "level of annoyance" (though taken too far, that could kill the utility of the feature)

mfairchild: This extends even to the AT crashing

Matt_King: As for my proposed solution

Matt_King: There are three levels of "annoyance": high, medium, and low.

Matt_King: And there are three assertions associated with every test: "there are no high-level annoyances", "there are no medium-level annoyances", and "there are no low-level annoyances"

Matt_King: That way, what we're today tracking as "unexpected behaviors" separate from assertions, would in the future be encoded and reported just like assertions

Matt_King: Ignoring the considerations for data collection, do folks present today think this is a good approach from a reporting standpoint?

Matt_King: Well, let's talk about data collection

Matt_King: Let's say you have excess verbosity. Maybe the name on a radio group is repeated

Matt_King: The way you'd say that on the form is you'd select "yes, an unexpected behavior occurred", then you choose "excess verbosity,"...

Matt_King: ...next you choose how negative the impact is ("high", "medium", or "low"), and finally you write out a text description of the behavior

Matt_King: It wouldn't be that each of the negative behaviors always fell into one specific bucket of "high", "medium" or "low". Instead, it's that each occurrence of an unexpected behavior would require that classification
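[Scribe's note: a rough sketch of the per-test record Matt_King describes, with each occurrence of an unexpected behavior carrying its own impact classification. Field names are hypothetical illustrations, not the actual aria-at-app schema.]

```javascript
// Hypothetical record shape for one test result (illustrative only;
// not the actual aria-at-app data model).
const exampleResult = {
  otherBehaviorsOccurred: true,
  unexpectedBehaviors: [
    {
      kind: 'Excessively verbose',   // chosen from a fixed list on the form
      impact: 'high',                // 'high' | 'medium' | 'low', per occurrence
      details: 'Radio group name repeated on every item',
    },
  ],
};

// Each occurrence is classified on its own, so the same kind of behavior
// can be rated differently in different tests. Reporting could then
// summarize a result by its most severe recorded impact.
function highestImpact(result) {
  const order = ['high', 'medium', 'low'];
  const found = order.find((level) =>
    result.unexpectedBehaviors.some((b) => b.impact === level)
  );
  return found ?? null;
}
```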

James_Scholes: I think it's a positive direction, but I wonder about how testers will agree on the severity of these unexpected behaviors

James_Scholes: I also wonder about sanitizing the plain-text descriptions

Matt_King: I don't think we need to collect a lot of information on descriptions; I expect it'd be pretty short for most cases

Matt_King: If you look at our data, these things are relatively rare. In a big-picture sense, anyway

Matt_King: If you look at it in the reports--in terms of what we've agreed upon--they're extremely rare.

Matt_King: I think that mitigates the impact of the additional work here

Matt_King: But I think we need to work toward really solid guidelines, and be sensitive to it during the training of new Testers

Hadi: My general concern is that once we start to categorize the so-called "annoyance" level, we might get into rabbit holes that we might not be able to manage

Hadi: I think anything beyond the required assertion should be considered an "annoyance"

Hadi: e.g. if the AT repeats the name twice. Or it repeats even more times. Or it reads through to the end of the page.

Hadi: Where do we say it crosses the line?

Matt_King: I'm not prepared today to propose specific definitions for "high", "medium", and "low". We could do that at the meeting on September 21

Matt_King: But I also don't think we need that to move forward with development work

<howard-e> https://datatracker.ietf.org/doc/html/rfc2119

jugglinmike: I'm having trouble thinking of what "MAY NOT" means. And, checking RFC 2119, it doesn't appear to be defined for normative purposes. I'll think offline about the impact this has on our design, if any

Matt_King: We could reduce it to just two: must not and should not. That could be a good simplification. I'm starting to like that, just having two and not having the third

mfairchild: I like that, too

mfairchild: but does this go beyond the scope of the project?

Matt_King: From an interoperability perspective, it's really clear when you can use a certain pattern with one screen reader but not with another

mfairchild: I agree. I think where we might get hung up is on the distinction between "minor" and "moderate" or "may" and "should"

mfairchild: I think that, from a perspective of interoperability, we're really focused on severe impediments

mfairchild: It's still subjective to a degree, but if we set the bar high enough, it won't be too distracting to actually make these determinations

Matt_King: There might be some excess verbosity that we don't include in the report

Matt_King: The best way for us to define these things is real-world practice

Hadi: Can you provide an example of "must not"?

Matt_King: For example, there's "must not crash." Or "must not change reading cursor position"

Hadi: For example, imagine you are in a form that has 30 fields. When you tab to each field, it reads the instructions at the top of the page (which is two paragraphs long), that's just as bad as a crash!

Matt_King: James_Scholes, would you be comfortable moving to just two levels: "MUST NOT" and "SHOULD NOT"?

James_Scholes: The fewer categories, the less nuance we'll have in the data. That makes the data less valuable, but it also makes it easier for testers to make these determinations. I support it

Matt_King: I'm going to make some changes to this issue based on our discussion today and also add more detail

Matt_King: By the time of our next meeting (on September 21), I hope we can be having a discussion on how we categorize unexpected behaviors

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).

Diagnostics

Maybe present: Matt_King

All speakers: Hadi, James_Scholes, jugglinmike, Matt_King, mfairchild

Active on IRC: howard-e, Joe_Humbert, jugglinmike, murray_moss, Sam_Shaw