Silver Task Force & Community Group

30 March 2021


ChrisLoiselle, Chuck, Francis_Storr, JakeAbma_, jeanne, Jemma, JF, johnkirkwood, Laura_Carlson, Lauriat, Makoto, mgarrish, MichaelC, sajkaj, sarahhorton, Sheri_B-H, SuzanneTaylor, ToddLibby
jeanne, Shawn

Meeting minutes


<ChrisLoiselle> I need to step away, sorry.

Updates to content templates

jeanne: Good news has been slowly happening over the last month: the Mobile Accessibility Task Force of AGWG has decided to start working on moving their WCAG SC

jeanne: to WCAG 3. They have looked at some of the support docs that are dated,

jeanne: for how to write guidelines, and have updated them.

<jeanne> https://docs.google.com/document/d/1Z3qUKZBZkZ17ghWXJouOTZDWFawZyGdlTAiQPd6HnVM/

jeanne: They wanted to share them back with us. Here's a link:

jeanne: A Google doc they made of our resource of templates for writing guidelines. They polished it, clarified it, and made it easier to use.

jeanne: I also wanted ... one of the important things about this is when you get down into part 5, where you are actually writing...

jeanne: ...the tabs. The tabs are correct, but some of the sub-headings under the tabs are no longer accurate. I'm looking for a volunteer to help look

jeanne: at what went into the fpwd for headings, and identify the variations in what got published. There were a number of changes at the last minute...

jeanne: We want to make sure the right things are in our support information. If you look at the "how to" and "methods" in the fpwd

jeanne: and compare the headings in each tab, note the variation in the headings only.

jeanne: Just a little tedious, but very valuable to do this. We can later decide which version we want. Does anyone want to help us with this?

michael: If you do this, don't fix things, just note things, as I will automate this where I can.

jeanne: Thanks! The goal is to identify the differences. Not a big job: 5 tabs on methods and 6 tabs on how-tos.

jeanne: Please email me if you can help. Would very much appreciate the help.

jeanne: One of the other items about this is one of the things that happened between writing the guidelines and publishing is we ended up losing the exceptions.

jeanne: We started talking about it in the planning meeting, and did not agree where the exceptions should go. Exceptions are things like... in Clear Words,

jeanne: certain specialized technical approaches for a specific audience were one kind of exception. They ended up not being put anywhere.

jeanne: When we discussed it yesterday, some thought they should go in methods, some thought outcomes; in the template they're in the "how to", so there are arguments for all those options.

jeanne: I want to get a sense from the group where they should go.

<JF> +1 to mcooper

Michael: Are the exceptions part of the normative content? If they are exceptions to the outcome, then they go in the outcome. If it's method-related, it should go in the "how to".

jeanne: Shawn you had a reason for putting them in the methods. Can you recap?

Shawn: It would make most sense to go into the tests themselves, but Michael makes a good point.

Shawn: Boils down to the shape of the exception, and it should go where it contextually makes the most sense.

jeanne: We'll need to write some advice, on where to put them.

Shawn: This will benefit from us building up some examples.

<Zakim> JF, you wanted to confirm that exceptions are part of the normative text

jf: A question about something Michael said. Are exceptions part of the normative content? Answering that would be helpful.

Michael: Relates to my question... we might want to say that all are normative, that none are, or that some are and some aren't, and that they look like x and y.

Michael: And there could be more than x and y. Defining the shapes will help inform us in the future.

jf: We are looking at it through similar lenses.

<Zakim> MichaelC, you wanted to say do we define a shape or shapes for the exceptions?

jeanne: Our next step would be to write up some examples of exceptions. I would like to ask any of the groups that are revising or working on new ones to start thinking about applicable exceptions.

jeanne: And how you might want to address them. Write them up as a couple of sentences or a use case so the group can use that to discuss.

Chuck: Did we have some before?

Jeanne: Yes, we did lose them, I've got an idea of where they ended up.

jeanne: I was putting them in templates. I didn't notice that there wasn't a place to put the exceptions. They are still in the original drafts.

jeanne: It didn't make it in, I think we can pull them from there. We have the original exceptions as originally written.

jeanne: We should put it on our radar to include in our next draft, and we can move on.

Critical errors

<ChrisLoiselle> I'm back. I can scribe Chuck.

jeanne: This is a continuation of Friday's discussion, for the benefit of individuals not present on Friday. The Conformance Options sub-group presented some use cases they developed

jeanne: to test WCAG 3 first draft to see if we have covered important examples. And how we cover them.

jeanne: They did a nice job. I recommend reading the minutes to review the draft of the use cases.

<sajkaj> https://www.w3.org/WAI/GL/task-forces/silver/wiki/March_Report_to_the_Silver_TF

jeanne: One of the key takeaways they gave us is that the critical errors need more substance and more guidance to help people decide whether or not a critical error applies.

jeanne: Seems like a section of writing we can do.

<ChrisLoiselle> ok

jeanne: We should start with questions.

jf: One of the questions that came up on Friday; I don't know if we got a resolution. Enumeration. Headers... "Consider a web page with 200 words." Is less than 200 words excluded?

jf: In a standard we can't have vague things. If that's the breakpoint, that's fine, but we can't leave it as "something like..."; it shouldn't require interpretation.

jf: I was concerned with the use of numbers. I don't know if we resolved that question. That's where we left off.

jeanne: That's... how do we determine that? That's what we are looking at, and what the group talked about as a way to address this problem without locking in the numbers.

<jeanne> https://www.w3.org/TR/wcag-3.0/#critical-errors

jeanne: What I want to do first off. In fpwd, we have critical errors section. I'll paste those errors in.

<jeanne> Errors located anywhere within the view that stop a user from being able to use that view (examples: flashing, keyboard trap, audio with no pause);

<jeanne> Errors that when located within a process stop a user from completing a process (example: submit button not in tab order); and

<jeanne> Errors that when aggregated within a view or across a process stop a user from using the view or completing the process (example: a large amount of confusing, ambiguous language).
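[Editor's note: the three pasted categories above could be modeled roughly as follows. This is an illustrative sketch only; the enum and function names are invented for this example and are not WCAG 3 text.]

```python
from dataclasses import dataclass
from enum import Enum, auto

class CriticalErrorKind(Enum):
    """The three categories of critical error pasted from the FPWD draft."""
    VIEW_BLOCKING = auto()     # e.g. flashing, keyboard trap, audio with no pause
    PROCESS_BLOCKING = auto()  # e.g. submit button not in tab order
    AGGREGATE = auto()         # e.g. a large amount of confusing, ambiguous language

@dataclass
class ErrorReport:
    kind: CriticalErrorKind
    view: str
    description: str

def blocks_view(errors, view):
    """A view is blocked if it contains any view-blocking critical error."""
    return any(e.kind is CriticalErrorKind.VIEW_BLOCKING and e.view == view
               for e in errors)
```

The sketch deliberately separates the category of an error from where it was found, since the minutes note that the same error type can matter at view, process, or aggregate scope.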

jeanne: The idea from Sarah was to have more information in each method of how to determine what the critical errors are.

<JF> how much is "a large amount"?

<Makoto> +1 to JF

jeanne: We had an interesting discussion on the exception that is guideline oriented, may need to move to a broader place.

<ChrisLoiselle> Chuck: We didn't come to resolution per se, but an understanding. They aren't boundaries of numbers, but would start the conversation.

<ChrisLoiselle> I.e. 200 words starts the use case , but is not a boundary.

<ChrisLoiselle> JF: It is a concern, we need to start and have that baseline and boundary.

jeanne: We agree. We are definitely in agreement, and the conversation is about HOW we do this. And to determine where we set the boundaries.

jeanne: I want to keep us focused on taking these examples and creating guidance for people to address.

jeanne: Let's start with the critical errors for the first one... flashing and keyboard trap.

jeanne: What are some of the edge cases that a tester would run into with flashing, for which they would need guidance?

<Zakim> Lauriat, you wanted to say how I think the tests within the overall flow/task/thing should define at least the first two types (non-interference and submit button example) of critical errors.

shawn: We talked about this before; framing it in the context of these 3 examples is helpful. The critical errors should be identified through specific aspects of the tests...

shawn: The 3 flashes: it doesn't matter where it is, it is a critical error. That doesn't address the 3rd situation, where errors in aggregate sum to a critical error.

shawn: There may be other places to identify errors of the second kind. We need to figure out how to identify errors of the 3rd kind.

jeanne: The use cases are very helpful for testing the boundaries and challenging the numbers. We have edge cases we can work within.

sajkaj: The question becomes: are numbers more or less out of bounds? Ideally we can write this without getting specific. But the ideal may not be practical. Best intention.

sajkaj: flashing is a deep dive case for us, it's more than a nuisance, it can be life threatening to some people. It magnifies it.

sajkaj: How do you know? If you host content from providers, what do you know about a movie produced in the past, and whether it contains flashing, when you are doing your best to avoid it?

sajkaj: In addition to directly created content, the kind of content hosted for 3rd parties... we also know that a solution is possible. Judy saw a demonstration from MIT labs.

sajkaj: Where it can be solved in the browser, but we can't say that. I don't know the answers, but those are some of the parameters.

jeanne: The Library of Congress, where they have a responsibility to preserve historic content. Some of it contains flashing; how do we handle that? That's a use case for the first error.

Suzanne: I have an edge case for flashing. Normally we think of one animation. I've been seeing failures for WCAG 3 that are based on an aggregate of things happening on screen.

Suzanne: Lots of activity on screen, pixels flashing, but nothing that appears to blink on its own. That's a good edge case. This kind of content is appearing more and more.

Suzanne: A 2nd comment, about the 3rd type. One option for the aggregate type of failure: each guideline would contain a description of what makes up a part of an aggregate error.

Suzanne: There would be a description in each guideline, and that would be a separate failure. That's a way we could possibly handle it.

jeanne: That's a good idea. Can you write it up yourself, and monitor the minutes to see if they are captured correctly?

shawn: Embedded content, with a test. So far we have talked about flashing as if its mere existence fails, as opposed to the examples of films and movies where the flashing is part of the content.

shawn: We put a warning on it (also games), so different kinds of tests for flashing can help with this. The Library of Congress wouldn't have any critical errors if they provided warnings.

<JF> embedded ads on a social media platform - how do you flag for potential flashing there?

shawn: As opposed to no warning at all, and then we subject it to a user when it's too late.

shawn: It should be the responsibility of the platform itself to filter these. They would just fail if they have flashing ads.

jeanne: These are good use cases. Taking it up to a meta level, how do we structure... some could be handled in the methods.

shawn: For flashing it would go into the tests. In the context of user navigating content, here are the tests that can help you decide if there's a critical error.

shawn: If there's flashing with no warning what-so-ever, it's a critical error. If there is flashing, but it's behind a warning that needs to be acknowledged, it's not ideal, but not a critical error.

jeanne: So it could be scored lower and still pass.

shawn: It would be flagged as "not awesome, be careful", but it helps the user decide how to proceed.

<JF> +1 to Janina

sajkaj: I'm almost comfortable with that, but have an issue with "is". What happens if you have so much content? How do you test for large amounts of content?

sajkaj: May be a warning of "maybe"...

sajkaj: We are prevented by our charter from offering.

jeanne: We can offer it, but we can't make it normative.

sajkaj: a tech solution is the best. This should be labeled solved, but in real time... not as manageable.

sajkaj: Suzanne's first use case is being handled with Media Queries Level 5. We have some content to send out, coming along. Michael's challenge is to determine where to post.

<Lauriat> You can programmatically detect that kind of flashing, so this actually should prove *easier* for the larger systems than the smaller ones.
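[Editor's note: Lauriat's point that this kind of flashing is programmatically detectable could be sketched as below. This is a rough illustration, not a WCAG-compliant flash analyzer: it only counts per-frame luminance swings against the familiar three-flashes-per-second idea, while real tools also account for flash area and red flashes. The threshold value is an assumption.]

```python
def flash_count_per_second(luminances, fps):
    """Rough flash counter over a sequence of per-frame luminance values
    (0.0 to 1.0). A 'flash' here is a pair of opposing luminance swings
    (up then down, or down then up) larger than `threshold`. Returns the
    maximum flash count observed in any one-second window of frames."""
    threshold = 0.1  # illustrative only; real thresholds depend on flash area etc.
    swings = []  # (frame_index, direction) for each large swing
    for i in range(1, len(luminances)):
        delta = luminances[i] - luminances[i - 1]
        if abs(delta) >= threshold:
            swings.append((i, 1 if delta > 0 else -1))
    # Pair opposing consecutive swings into flashes.
    flash_frames = []
    j = 0
    while j + 1 < len(swings):
        if swings[j][1] != swings[j + 1][1]:
            flash_frames.append(swings[j + 1][0])
            j += 2
        else:
            j += 1
    # Report the worst one-second window.
    best = 0
    for f in flash_frames:
        window = [g for g in flash_frames if f <= g < f + fps]
        best = max(best, len(window))
    return best
```

For a large catalog, a host could run something like this over sampled frames and flag items for warning labels, which is the infrastructure advantage Lauriat describes.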

<JF> prefers-reduced-motion works in the browser, but what of native apps?

sheri: In addition to flashing, pseudo-flashing, black and white optical illusions. They trigger migraines for me. Some content providers have this in their content.

sheri: I had experience in previous employment where an optical illusion was tweeted, and triggered a very bad reaction in a user. This can be really very serious.

jeanne: Thank you. We should spin up a group for migrating the flashing guideline. We use this example so often. We should start writing it and have more detail.

sheri: I'm happy to participate, but definitely resource constrained.

<Zakim> Lauriat, you wanted to answer sajkaj's example with a test

<ChrisLoiselle> Chuck: Maybe a warning for not yet evaluated on specific critical errors .

<JF> what is the potential downside of that Chuck?

shawn: I put in a note. Those that have large catalogs are better suited for automatically flagging content that could have flashing, and they would have the infrastructure.

<Zakim> Chuck, you wanted to ask about "not yet evaluated"?

jf: may not be a practical solution.

<ChrisLoiselle> JF: On flashing, say library of Congress, which states "warning" , what does that do for the end user? Does it solve the problem?

jf: Just posting a warning may be a get out of jail free card and may not solve the problem.

suzanne: Warnings should be offered case by case, but we shouldn't promote too many warnings; they may become diluted.

sheri: Like medication warnings: "this could be harmful..." you just ignore them after a while.

shawn: Since we are getting into weeds on warning mechanics, let's bring the discussion back... I think we can help identify critical errors structurally, we don't need to iron out here.

<Zakim> Lauriat, you wanted to bring up my point for warnings as structure proposal, not solid guidance.

sajkaj: Want to get on record... it's not as neat, but could be put on streaming services as well. They have agent plugins anyways.

sajkaj: They may be concerned with the requirement, but for the users where it is very serious, it could matter.

sajkaj: I think that we have to convince people that it's an issue. Maybe we should pick up pointers to stories and examples of where it exists.

sheri: some content in Japan 15 years ago.

sajkaj: some examples would be useful.

jeanne: 200 children ended up in the hospital for some flashing content.

<SuzanneTaylor> +1 to finding more/newer examples

jeanne: It can be a challenge in gaming, and is important for streaming platforms.

wilco: I think it's worth bringing up these solutions... we should encourage them. There are things that can be done at scale in the browser or in an extension.

<Lauriat> +1 to Wilco, a good opportunity for methods to illustrate

wilco: In other environments... the burden is being pushed to content authors. WCAG 3 may not be doing enough; I'd like to have that conversation.

jeanne: Yes, very useful!

<SuzanneTaylor> +1 to Wilco

jeanne: I'm coming out of this thinking that we need a group for flashing soon. We could ask AGWG. Most people here are already active in sub-groups.

jeanne: Maybe spin up a new group of AGWG participants.

<JF> the problem with Wilco's suggestion is that many of the browser vendors don't want us to be imposing on their software - ref: UAAG

jeanne: Any next steps? Another example? Bronze/Silver/Gold?

shawn: We talked about the first type of critical error, but we haven't really addressed the 2nd or 3rd. The 2nd is errors in a process that halt progress.

shawn: And the third error type is errors in aggregate: small issues pile up to the point of blocking the user.

wilco: Don't have a fix, but worth pointing out: WCAG 3 is focused on views. These seem like things that are much bigger, spanning multiple views or an entire site.

<Lauriat> +1 to Wilco, the reason I didn't want to center on views

wilco: We need to figure out how to deal with that. Things like this are problematic when they occur in a lot of places, but not specific to a view.

<JF> +1, and also beyond "web pages". Applicability to PDF, XR, native apps, etc.

jeanne: Interesting point. We are focused on views, but our intention was to focus on sites or products, going beyond web pages. It's interesting that what we ended up with is focused on views.

<Lauriat> +1 to JF's point, too, definitely.

jeanne: perhaps this discussion could help us move towards the goal we wanted.

sajkaj: I think that's exactly the reason and purpose for the examples; it's not the numbers. Yes, we need some sort of judgement, but these show up in a process

sajkaj: and fail the markup. But if it's a page of 200 words, and it's a 10 or 20 view process, could you live with that and pass it? Arguably yes. How do you write that up?

sajkaj: And ditto for the other examples. It also leads you to consider: if you have a 20 view or step process, and you have to make this judgement 15 times, you should probably not accept them.

<Lauriat> +1, I like that.

sajkaj: too much of a load in aggregate.

sheri: Janina said what I wanted, but channeling my peers: aggregate is important for COGA. Some individuals will be more impacted by a small aggregate of problems.

wilco: One way we might want to look at this: there might be different scopes of things we can test. Something like "title describes the page" is a very page-centric test.

wilco: makes sense to test one page at a time. Some tests may make more sense to be tested for a process, and some for a whole web site.

wilco: I proposed different docs of WCAG that approach things from different layers and scales.

shawn: Good to explore. Might benefit from examples, especially since this is a new proposal.

<Zakim> JF, you wanted to ask what the solution is to Sheri's comment? I understand the problem statement, I'm not sure what the solution is

jf: Sheri - you talked about the aggregate concern. How do we solve that, and how do we write up a requirement so that any kind of solution is scalable?

<sajkaj> Yours truly experiences the aggregate problem (or used to experience it) making airline reservations. I could go into gory detail!

jf: We can explain the problem, but I'm not sure how we can create guidance that we can test or measure.

jf: What is the right thing? That's my concern. At the end of the day, I need to know: what is too much, and what is sufficient and acceptable?

<jeanne> Suzanne had a proposal for addressing the aggregate

<SuzanneTaylor> Jeanne requested a write up on an idea related to Aggregate Critical Errors: Failure for “Aggregate” may be defined outside of the Outcomes, or be its own Outcome. Each Outcome might then have its own definition of an aggregate failure. The Aggregate Outcome could be something like, “Each process/task includes no more than x aggregate failures as defined in other Outcomes.”
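[Editor's note: Suzanne's proposed Aggregate Outcome could be prototyped along these lines. Purely illustrative: the outcome names, data shape, and threshold are placeholders, not WCAG 3 text.]

```python
def aggregate_critical_error(process_views, max_aggregate_failures):
    """Return True if a process accumulates more aggregate failures
    (summed across its views and across the per-Outcome definitions)
    than the threshold x from Suzanne's proposal."""
    total = sum(
        count
        for view in process_views
        for count in view["aggregate_failures"].values()
    )
    return total > max_aggregate_failures

# A hypothetical two-view checkout process, with per-Outcome tallies.
checkout = [
    {"view": "cart", "aggregate_failures": {"clear-words": 2, "contrast": 1}},
    {"view": "payment", "aggregate_failures": {"clear-words": 4}},
]
```

This keeps the per-Outcome definitions of "what counts as an aggregate failure" separate from the single process-level threshold, which is the de-coupling Shawn notes just below.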

shawn: Janina and wilco brought up some proposals that may explore those solutions.

sheri: I think David Fazio may have some ideas.

jeanne: <reads suzanne's comment>

shawn: It de-couples the specific test for the specific thing. So aggregates of different kinds can add up to a critical failure.

shawn: Next steps are to explore some of these and write up some examples. Especially in the aggregate that will best come from different examples and see how they mash up.

jeanne: to have examples for the guidelines we currently have so we can experiment with real data.

<JF> I am reminded of a phrase from our colleagues at EO: Critical for some, useful for all.

sarah: Wondered if the group has worked on the process/task concept.

jeanne: Yes.

sarah: That's another gap that the suggestions depend on a clear definition of process. I'm wondering if we should put on our to do list some work to define what that means.

<JF> +1 to Sarah

<johnkirkwood> +1

<jeanne> +1

<Lauriat> +1 https://www.w3.org/TR/wcag-3.0/#processes (defined by examples, rather than a definition)

<Zakim> sajkaj, you wanted to Sarah

jeanne: Shawn posted link, it's defined by examples and not an actual definition.

shawn: kind of "you know it when you see it". I like using existing guidance. The ones we have make a good set to start with.

shawn: Visual Contrast relates to comprehension: if you have a lot of text that's annoyingly low contrast, it builds up quickly. A good place to start, so we aren't starting from scratch.

jeanne: Bottom of hour, good place to wrap up.

jeanne: I encourage anyone who is interested in this conversation to write up some notes. Doesn't have to be anything big and formal, that we can talk about on Friday.

Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).


Maybe present: michael, sarah, Shawn, sheri, Suzanne, wilco