W3C

– DRAFT –
AGWG Teleconference

01 November 2022

Attendees

Present
alastairc, Ben_Tillyer, bruce_bailey, Caryn, Detlev, Francis_Storr, GreggVan, iankersey, JakeAbma, Jay, jeanne, Jennie, JenStrickland, kirkwood, Laura_Carlson, Lauriat, mbgower, MichaelC, Poornima, sarahhorton, ShawnLawtonHenry(first_part), ShawnT, SuzanneTaylor, ToddL, wendyreid
Regrets
Makoto
Chair
-
Scribe
jeanne, Jennie, mbgower, Rachael

Meeting minutes

Alastair: If anyone is available to scribe for hour 2, please let me know

<Zakim> AWK, you wanted to ask about survey closing

AWK: I know we have been closing surveys in advance of the meeting. This one was closed yesterday.

Alastair: It was a quarter of an hour ago, but it appears like it was yesterday

AWK: It can be hard to get time to review things
… I would love additional time.

Wendy R: Apologies - I didn't know I was meant to write a full report.
… I wasn't able to vote in the survey because I didn't have time to read all the proposals in the time I had
… Can anyone scribe?

<kirkwood> +1 to @@ regarding closing survey per previous deadline standard

Alastair: We appreciate that
… To Andrew's point: it is different for something like this because there is substantial content to review
… We will do our best to gather comments today
… We can share what we have today
… The results in that survey format are difficult to read through
… We are pasting into a separate document
… Apologies for that
… In general we are trying to stop the surveys about 30 minutes before the meeting to help structure the meeting
… There is a lot to get through, we will do our best

WCAG 2.2 status update

Alastair: We are slipping a bit
… We have implementations for most things
… We have got through 1 AAA with Funka
… If anyone has another site that is AAA to WCAG 2.2 please let us know
… Also, if you are interested in helping with testing, please let us know
… We will use the WCAG 2.2 meeting on Friday to make further progress on that

Alastair: Any questions on the WCAG 2.2 side of things?

<shawn> [ Shawn wonders about LFLegal for AAA ?]

Alastair: Do we have anyone new to the group that wants to introduce themselves?
… Any other topics you would like added to the potential topics list?

Subgroup check-ins

Francis: Issue Severity 2 subgroup met last week
… We were unsure on the functional needs information
… We had Joshua O'Connor join us
… He confirmed that there are a lot of functional needs
… We decided that, since this is a proof of concept
… to take the functional needs from EN 301 549
… to allow us to carry on
… looking at issue severity, and working out how they can go together
… We have another meeting tomorrow morning
… We are keeping the UK time (normal time for UK people)

Alastair: The calendar invite I have is on US time
… members should confirm the time

Juanita: Testing requirements as methods subgroup met
… We are making progress on the process types
… Hopefully we will have more to present in a few weeks

Jeanne: Silver task force is working on writing outcomes as user needs
… to see if this is a viable direction
… Last week's call was about granularity of outcomes
… We took a homework assignment
… Makoto did some image types
… We are writing outcomes for that

Alastair: Thank you

Wilco: There is a design WCAG3 subgroup that started last week
… They are working on mockups for the how-to and the methods
… They hope to have 1st drafts next week

Conformance conversation https://www.w3.org/2002/09/wbs/35422/conformance_models/

Alastair: We have been through several models in the last few weeks
… I will share my screen

<Rachael> Slide deck that Alastair is sharing: https://docs.google.com/presentation/d/1CvtUu-3h0wx-j-p3MHTsTAuEC3imTU936yDLrLVwISs/edit#slide=id.g17b55e5217a_0_0

Alastair: We will go through each option, and the comments on those options
… I will start with what has been already added to the survey
… Then we will open up for other comments

<Rachael> Comments in a google doc: https://docs.google.com/document/d/1Ekgz4ultg0X8rd5SiYwJ4jbkew77pEULPvIWQrmZLqU/edit#

Alastair: Commenters: please summarize your comments
… (reads through summaries of models, listed by option number on slide 2)
… (reads through summary of option 1)
… David M had some concerns
… (reads through David's comments)
… I assume he is talking about Silver and Gold level tests

Rachael: I think it probably applies to many of these because it is a base concept
… I think they will be repeated in several places

Alastair: We have discussed assertion based things in the past

<kirkwood> David’s concerns seem valid to me

Alastair: but it is still useful feedback

Alastair: Moving to Gregg's comments

GreggVan: the scores were yes or no.
… I refer to these comments in later reviews
… Several use "use of views"
… We tried this the last time around, and it doesn't work because there are an infinite number
… Example: toolbars pop up, expand and contract
… Second one: partial score, or good enough
… In the example it didn't show any of this
… This can be marked true if there are enough of them
… and it counts for conformance
… I think this is good for showing progress before conforming
… Or for things beyond conformance
… The third thing: user-needs based - which user?
… Users have diverse needs. We should talk about specs for the page
… We should talk about user needs, but most authors have no real understanding of the users
… You have to tell them what their thing needs to do
… Before they release a product they need to figure out if they pass
… The next one had to do with context
… and adaptive tests
… Again, this falls into the same category
… Say if this, then that - make it a conditional test
… The authors are not going to know what these contexts are
… And you don't want them defining the context
… That is not accessibility
… Extensibility means that multiple specifications are available for the same tests
… To say to the authors that there are multiple tests
… Pounds vs metric is fine because of conversions
… But different tests could come up with different results? You have to tell me which test to do
… And maybe it is compound if you need to meet all 3 tests
… Next: qualitative testing, which I think we need to have
… But for a basic level of conformance I don't think we can have it
… Once I ship something, it is frozen for a while. I need to know it is accessible
… I have to be able to defend it
… Then, user testing - love this
… It should always be done, but the results are dependent on the user used to test
… It is a process, not an outcome
… To say the testers have to be able to use it, I can find testers that cannot
… I have tested sites that fail WCAG, but some people that are blind can still use it
… So I worry about this
… Those are my concerns
… There is so much good stuff in these

Alastair: Bruce is next

Bruce: What I was trying to get at in regards to extensible
… When I first read it, I thought it would be easy to get to Silver
… The other options perhaps don't call this out as well
… I think I need to see more in order to get to Silver

Alastair: Rachael?

Rachael: The hardest part of this is shifting the decision making and prioritizing to the working group
… I think the tree structure would be an extensive lift
… having to create it
… I think the biggest gap is not motivating people between levels
… Something would need to be fixed with that to make this work

Alastair: Makoto said he wasn't sure if this was realistic or not
… (reads from Makoto's comments)

<alastairc> https://docs.google.com/document/d/1Ekgz4ultg0X8rd5SiYwJ4jbkew77pEULPvIWQrmZLqU/edit#

Alastair: This is in the shared folder as well
… Next is Rain's comment (reads Rain's comment)
… I think this is a good point, and might be something that could be improved and worked on
… Jeanne?

<kirkwood> +1 to Rain’s comment

Jeanne: In all of my comments, we don't know what the validity, reliability, and complexity are for all of these because we didn't test them
… option 1 has good ideas
… It is complex and needs to be simplified
… I like Bronze, Silver and Gold, but worry about using test types
… When we did the Silver research, we interviewed people doing Civil Rights in the US
… They felt this was a weakness of WCAG 2
… When the Civil Rights office was doing their own studies of WCAG for use in legal cases
… The lawyers thought it was a flaw that testing could be used to stratify

<kirkwood> +1 to Jeanne’s comment. regarding OCR. Found that as well with my interaction.

Jeanne: A, AA, and AAA
… That it was inherently not a civil rights issue
… It was more of a convenience, or cost issue
… A number of things may or may not have been used to decide
… Their concern was testing, and ease of testing

<GreggVan> ah ease of testing got it thanks

<bruce_bailey> @Jeanne -- is this DOJ OCR and is any of the OCR concerns with testing public facing?

Jeanne: Just because something is hard to test doesn't mean it should be AAA

Kirkwood: I have had this exact interaction as well and +1 to Jeanne

Alastair: OK Wilco

Wilco: I think it comes down to: I am not sure this is going to work well
… I think the 3 errors will be difficult
… I am not sure this is a good place to put this bar - seems like a policy question
… Knowing how many errors something has is very tricky

<bruce_bailey> IMHO there are few AAA SC that are at AAA (and not AA) because they are hard to test.

Wilco: ACT has steered away from this because we are not sure this can be done reliably and consistently

Alastair: I will put this as a topic
… It would be useful for people to think about how we talk about perfection being the
… enemy of the good
… It becomes a fail, and you cannot progress any further
… How do we meet requirements around those aspects not becoming a complete blocker to an organization making progress
… Michael Gower?

MichaelG: Like Gregg, I did not find the ranking very useful
… I made my own spreadsheet offline
… Option 1: main concern is the no more than 3 errors
… Page or site, that scale is pretty problematic

<mbgower> https://webaim.org/projects/million/

MichaelG: I think most people are familiar with the WebAIM Million, just for home pages
… They have done this every year for many years
… Average of 50.8 errors per page
… (reads stats)
… Remember, this is one page
… Lots of interesting information
… If we come up with an error system, we have to assess what is realistic
… My overall comment: I have spent several weeks to see if a site can pass 2.2 to AAA

<AWK> +1 to Mike Gower's point about grounding in reality for conformance.

MichaelG: and I am concerned that we are adding stuff
… and people cannot pass now with their home page
… I think we have to concentrate on how can we make more things automatically testable and understandable
… And for the WebAIM report - that is with their own automated testing tool
… And, how can we reward people for exceeding that?
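
An illustrative sketch (not presented in the meeting) of the scale problem Michael raises: a fixed error cap behaves very differently per page than per site. The 50.8 figure is the WebAIM Million average quoted above; the error cap and the site size are hypothetical examples.

    # Illustrative only: how a fixed "no more than 3 errors" cap scales.
    AVERAGE_ERRORS_PER_PAGE = 50.8   # WebAIM Million average quoted above
    ERROR_CAP = 3                    # the error threshold under discussion

    def within_cap(error_count: float, cap: int = ERROR_CAP) -> bool:
        # True when the error count stays within the cap
        return error_count <= cap

    # Per page: a typical home page already exceeds the cap many times over.
    print(within_cap(AVERAGE_ERRORS_PER_PAGE))                  # False

    # Per site: the same cap spread across many pages is stricter still.
    pages_in_site = 200                                         # hypothetical site size
    print(within_cap(AVERAGE_ERRORS_PER_PAGE * pages_in_site))  # False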

Alastair: That's everyone's comments in the survey on option 1

<Zakim> SuzanneTaylor, you wanted to say that when trying to address combined functional needs requiring, for example, 2 out of 5 granular SC (which were called "methods" in option 4) allows us to include, for example, 3 more granular success criteria, rather than having those items completely left out: slightly non-uniform accessibility may make a more accessible internet overall

<bruce_bailey> +1 to mg wrt need to give props for orgs *starting* with accessiblity

SuzanneTaylor: On the concept of not being allowed to say you need to choose 2 out of 5, or 4 out of 10
… When we are trying to address intersectional needs
… we might want to have other techniques like visual instructions, cartoon like instructions
… To do that, there would be so many more success criteria and people are already struggling with the number of success criteria
… If you have to do 2 out of 5 - and you are the users needing the other 3
… You will find sites that meet your needs
… Rather than leaving things completely out, we should not completely throw this out
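
An illustrative sketch (not presented in the meeting) of the "2 out of 5" selection logic Suzanne describes; the method names are hypothetical and are not proposed WCAG 3 wording.

    # Illustrative only: "meet at least m of n" granular outcomes for one guideline.
    def meets_threshold(results: dict, minimum: int) -> bool:
        # True when at least `minimum` of the granular outcomes are satisfied
        return sum(1 for satisfied in results.values() if satisfied) >= minimum

    instruction_methods = {
        "text instructions": True,
        "visual instructions": True,
        "cartoon-style instructions": False,
        "video walkthrough": False,
        "audio walkthrough": False,
    }
    print(meets_threshold(instruction_methods, minimum=2))   # True: 2 of the 5 are met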

GreggVan: One of the things that comes up all the time: a few small errors blocking conformance
… I think it is important to separate bugs from omits
… I don't see bugs being a thing for conformance
… especially for people that fix these bugs
… But omits are different
… I think we should admit that there are bugs
… But we try to repair them as we find them.
… Omits are people saying I know I am not doing everything

<Zakim> alastairc, you wanted to suggest something at silver / gold could over-ride the results at bronze, if it is robust enough (e.g. an industry standard on the same topic).

Alastair: Responding to Gregg - I am fairly sure I have heard about people
… having difficulties even if they are bugs
… Regarding building on top of bronze - I wonder if this would work in option 1's tree structure
… Could something in gold
… enable you to pass even if you have got a few bugs or whatever at the bronze level
… Maybe we could construct conformance if you are doing a more difficult process at the silver level
… to provide flexibility at the bronze level
… This has taken us 45 minutes for option 1
… Some of the comments are applicable to the others
… Any other comments about option 1?

Conformance option 2

Alastair: This was around tests and assertions
… (reads from slide 2 of slide presentation Exploring Conformance Models)
… (reads slide 4)
… moving on to the comments
… Gregg's comments are in the previous option

Bruce: I assume the thresholds are up for discussion everywhere

Alastair: Definitely true

Rachael: The combination of percentages and adjectival could hide issues within the complexity
… I think at the lower level it needs to be really clear
… I am also concerned that the COGA issues would be repeated

<ToddL> +1 w/Rachael

Rachael: Jeanne?

<kirkwood> +1 to Rachael’s comment

Rachael: (reads Rain's comment)

Jeanne: I think there are some good ideas. I like the scoring of the adjectival
… and separating it out at the end of the process
… I would like to see that go forward
… I think a lot of the other testing is being forced into binary/pass-fail
… I found it difficult to untangle how this would work
… I think it needs to be simplified, and be made consistent
… The bronze, silver, gold would probably be approved by Civil Rights but needs to be tested

Alastair: Gregg

Gregg: The comment on COGA - one of the major reasons for WCAG 3 -
… we couldn't figure out ways to get multiple groups' needs included
… COGA's is so much broader - we need to go beyond what we have known how to do
… 1. We make sure that all of the rest of the things we can't require
… get into the guidelines mixed in with everything else
… 2. They don't get put in a different place, like a AAA
… You could have the adjectival mixed in
… Is it there? Is it clear? Can be a higher adjectival in the same provision
… to make sure the COGA stuff gets in front of them
… People will consider them because they are right in front of their eyes

<Zakim> alastairc, you wanted to comment on the coga / assertions issue

Alastair: I have a similar suggestion to Gregg
… I think there is a danger that a lot of COGA requirements could end up not in the bronze conformance area
… but it doesn't need to be just the objective test
… It shouldn't be part of the conformance model that makes it so

Mbgower: I want to highlight things I like in this option
… 1. The idea that the focus is on automated, then human as needed
… 2. Using assertion rather than protocol - I am not sure if it is testing, but assertion is easier to grok
… I had a lot of difficulty understanding the range of measurements
… I also questioned the use of "very poor" and "poor"
… I would like to get to something like poor and insufficient to more clearly show progression

GreggVan: That is for large and small text.
… To answer his question about the measurements

Bruce: I think I made this comment on a couple of them
… I know what unacceptable means, but I don't know what poor means

<kirkwood> +1 to insufficient rather than ‘poor’ don’t like it either

GreggVan: Pass is passing
… The other 2 are not passing
… We talked about needing a way for people who haven't yet made it to measure progress towards passing

<bruce_bailey> +1 for need to measure progress towards passing

GreggVan: Example: had many poors before, now have only one poor - to help people see reports internally or externally

<bruce_bailey> +1 for rating below pass

<Zakim> mbgower, you wanted to say that 4.2.3 Recommendations has potential, but I worry about if it can scale

Mbgower: Section 4.2.3 recommendations
… That includes things that are beneficial but couldn't be required
… I think this has potential
… I worry about having some way that scales to how people follow looser guidelines
… as opposed to specific requirements

Alastair: That's all the comments on option 2
… Looking for a scribe change

Option 3

<Jennie> * Thanks Bruce!

Alastair: I collapsed very poor and poor into one level
… although I take Bruce's point about capturing improvement
… It was also trying to allow for some things not passing and that not instantly being a fail

Gregg: It was similar to 1, with the changes you talked about. I think there should be 2 below pass, because some have terrible sites and getting all the way up to pass is too daunting. It gives an opportunity to show progress.
… There may not be 2 levels below because we may not be able to think of 5 levels, but we may have to collapse

<bruce_bailey> +1 to idea that we need a couple adjectival levels below passing. I just do not think "poor" is one to use.

Alastair: [reads out Bruce's comment]

Alastair: Do we think adjectival is a useful way to progress?

Rachael: My primary concern is using adjectival at the test level. It can be effective to speed up, but it is hard to get consistency

<Wilco> +1

Rachael: We played with it using percentages in the FPWD and it didn't seem to work

Alastair: [Reads Rain's comment]

Jeanne: The idea of picking the lowest score for the outcome would probably not be valid.
… I spent a lot of time studying validity. It's much easier to understand, and the bronze/silver/gold would probably be accepted by civil rights experts.

Alastair: Just so I understand, there's a gated approach where you have to score at least "okay"

Alastair: Why wouldn't that be valid?

Jeanne: Validity is how well it reflects the real world.
… If you're always picking the lowest score, then you're always going to be dragged down.

<Detlev> I don't get it

Jeanne: When you look at the validity, it's going to be too low. You have to have some way of picking the middle. The average.
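
An illustrative sketch (not presented in the meeting) of the aggregation point Jeanne raises: taking the lowest adjectival score gates the result, while an average picks the middle. The numeric mapping and the ratings are hypothetical.

    # Illustrative only: hypothetical numeric mapping for adjectival ratings.
    SCALE = {"very poor": 0, "poor": 1, "okay": 2, "good": 3, "excellent": 4}

    outcome_ratings = ["good", "excellent", "good", "poor"]   # hypothetical test results
    scores = [SCALE[r] for r in outcome_ratings]

    gated = min(scores)                   # lowest score: 1 ("poor") drags the outcome down
    average = sum(scores) / len(scores)   # central tendency: 2.75, closer to the overall picture
    print(gated, average)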

Wilco: I'm essentially in agreement with Jeanne and Rachael

Alastair: This had skipping methods. The tests would be normative.

<Zakim> SuzanneTaylor, you wanted to say that lower level quality tiers may not be helpful - It's difficult to imagine a team re-writing alt text to move from very poor to poor, for example

Alastair: It's a lot to get through with the number of documents we had.

Suzanne: Multiple levels below pass would not be motivating.
… I don't think they'd alter their alt text to go from very poor to poor.

Gregg: If you think about it, even your proposal has 2 levels. You didn't get to pass. You got to poor.
… You used percentages. '75% is poor, not passing'
… when we talk about percentage, we have to be careful. Percentage of what? For contrast, paragraphs, wordings, letters?

<SuzanneTaylor> I didn't have any percents in my proposal, but my comments are not designed to steer toward my proposal

Gregg: I think they're solvable, but there can be weirdnesses

Jennie: Responding to the comment on needing more categories below pass, it's important to remember that people could be using these measurements in other ways.

<Zakim> bruce_bailey, you wanted to say mediocre sites are where the work is, i think

Bruce: I think the opportunity for growth is at the mediocre. That's where we need to juice the standards. To motivate people.

Bruce: I'm not sure we need to focus on rewarding people who are doing great jobs.

Option 4

Alastair: This is the badges option.
… the example is Access to audio
… At silver level you need to provide quality outcomes of the bronze levels
… At gold, you'd need to do more things, or gold-specific qualities, like atmospheric music descriptions.

Gregg: There's lots of things to like here.
… It restricts scoring to quality. It introduces badges.
… It still has views in it, which is a problem.
… It includes adaptive and extensible, which are problems I talked about before.
… We should talk about what should be true, not how they do it.
… I didn't understand what 'aggregate outcomes' meant as used.

Alastair: i'm not sure whose option this was.

Suzanne: The aggregate outcomes are a special kind of outcome where if you have 1 outcome, that's fine, but 20 starts to build up on cognitive load.
… One irritating bump in a road is okay. A bumpy road becomes exhausting.

Gregg: That was helpful.

Alastair: Bruce said this approach seems the most distinct.

Alastair: Rachael your concerns were similar to last time.

Alastair: [Reads Makoto's comment]

Alastair: [Reads Rain's comment]

<Ben_Tillyer> +1 to Rain's comment

Alastair: That sounds like an interesting shift I hadn't thought of.

Jeanne: This is an innovative approach. It has a lot of good ideas.

<kirkwood> +1 to Rain’s reframing language

Jeanne: I love the badges.
… I think it's more motivational.

<GreggVan> +1 to Rain comment about shifting language from what people can or cannot do - to what they need - or needs to be true about the content

Jeanne: It might motivate people who are just meeting WCAG today.
… In 5.1 the scoring test ideas were very strongly disparaged in issues filed on the FPWD.
… We probably shouldn't pursue that piece of it

<sarahhorton> +1 to focusing on guidance for designers and developers

Wilco: I feel like I don't fully understand the proposal.
… I really liked that if you tried to conform, it became 'required'. So it allowed WCAG 3 to be extensible. So if you make it accessible to kids, you declare it, and you are judged for that. Or for gaming.
… There is a bit of a risk to that, in that people may not always want to do more. So we need a strong baseline still.

Gregg: Just one thing. I misread the badges as being a step between silver and gold. I like badges being for particular provisions.
… Can we make something that encourages people to do steps in the right direction?
… I like that badges being at almost a provision level.

Option 5

Alastair: "Required or optimised"
… it uses captions as an example
… For gold, all methods pass.

Gregg: Highlights: required items bronze; silver and gold progress
… mixes required with optimized, which I like
… My concern is the part about 'pass conditional'. I'm not sure what that means.
… I couldn't find a full write-up

<SuzanneTaylor> +1 to optimal at the bronze level

Wendy: I can address if you want to go through comments first

Bruce: This is the one that was in the speaker notes only?

Alastair: Yes

Bruce: I could see how that could go into a Chinese menu approach, which could be very accurate. I do worry about complexity.

Rachael: I'd like to hear about 'pass conditional'. I'd like to see it fleshed out more.

Alastair: Rain wasn't sure how scoring worked.

Jeanne: I really liked this proposal. I want more details.
… I like the required/optimized categorization
… We need to keep a careful balance between 'required' and 'optimized'.
… I didn't like the bronze/silver/gold. I think there are better versions.

<sarahhorton> +1 to "required" concept as prerequisites to any conformance claim

Jeanne: I think this has great potential as a structural base

<ToddL> +1 w/Jeanne

Wendy: Apologies for not fleshing this out more.
… I can do that.
… To clarify pass conditional, it is separate from required.
… I envisioned all things between required and optimized. It helps address disability groups.
… If it's not present, it has an outsized impact on a user group.
… so it should be required.
… Pass conditional. I definitely see many where we have pass/fail. Something is present or not present.
… There are tests where nuance is required. That's where I see pass conditional coming in.
… Yes, this thing is here but there is an element of measurability. Yes, this image has alt text, but is this actually good?
… There is an element of subjectivity, but it allows us to assess quality.
… And I think it could be on a scale.
… It would allow implementers to set up their own schemes.
… I'd love to combine with some of the other proposals.
… I can definitely flesh this out more. Thanks for the feedback.
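
An illustrative sketch (not presented in the meeting) of how the categories Wendy describes might be represented: "required" outcomes are binary prerequisites, while a "pass conditional" judgement carries a quality value. The outcome names and the 0.0-1.0 scale are hypothetical.

    # Illustrative only: outcome names and the 0.0-1.0 quality value are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class OutcomeResult:
        name: str
        required: bool            # must be met before any conformance claim
        present: bool             # binary check: is the thing there at all?
        quality: Optional[float]  # "pass conditional" style judgement of how good it is

    results = [
        OutcomeResult("captions provided", required=True, present=True, quality=None),
        OutcomeResult("caption accuracy", required=False, present=True, quality=0.7),
        OutcomeResult("atmospheric descriptions", required=False, present=False, quality=None),
    ]

    required_met = all(r.present for r in results if r.required)
    print(required_met)   # True: every required outcome is present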

Best aspects to take forward

Alastair: the best outcome was 'some combination'. I think comments helped with that. What are the best things we can start with and take forward?

Laura: Option 2's simplicity is more like WCAG 2.x, continuing A, AA, AAA, but that might be an equity problem
… possibly a solution from a proposal from John Foliot.

Gregg: Best is Option 2. Wants to add badges from Option 4
… I'm leaning toward having only one level below pass
… Looking at Option 5 of mixing Required and Optimal
… explore the "criteria of evidence to support the assertion" from Option 6

Alastair: If we use the Badges, then we don't need the levels below Pass because that introduces two motivation systems

Bruce: Option 4 highest. Liked the changes to adjectival.

Suzanne: Haptics and 3D printable files are examples of different types of things at each level. Any of these is a great start, and we can put the concepts into whichever one we use as a base.
… I see it as a grid. Optimal tests are next to each other. If those setting policy for a specific organization, country etc, are not able to adopt the quality-type tests, they can use the true-false side of the grid for each level. This way the levels are not based on testing. Others might be able to use both sides of the grid.

Alastair reads from Makoto: After I'm done with this survey, I'm feeling that I'd like to see how the WCAG 3 Outcomes/Methods will look like to discuss the conformance model. If we can see how the existing SC in WCAG 2 and additional/new ones will look like in WCAG 3, we might be able to discuss the suitable conformance model more effectively. That's what I'm feeling now.
… Plus, they might need a lower level than Bronze to promote making digital contents more accessible in countries where web/digital accessibility has not become legal requirements

Alastair reads from Rain: Unclear on Model 5 so not marking as "don't want" or ranking. There may be something to the conformance that is being suggested here if it is more clearly defined.
… In general, for *all* of these, I think it would also be helpful to have the conformance rating based more on success with entire tasks or jobs within the product rather than units. Instead of trying to conform to a % of optimal methods, moving more towards, "users can complete all tasks with x-ease" or something more closely tied to success with the actual purpose of the product than with individual bits or components.
… Essentially, it would be great to see the conformance and scoring based more on *outcome* and less on *component interactions*.

<alastairc> Jeanne: Option 1 needs simplification but overall structure is excellent. Option 2 has a very good idea for scoring adjectival and for Bronze, Silver, Gold. Option 3 has a very good approach to organizing the testing. Option 4 badges are more motivating than any other proposal and should go forward. The edits to the intro to make them more designer and developer oriented should also go forward. Option 5 required and optimized is a simpler and easier to understand way of organizing the guidelines and outcomes.

Wilco: I like the opt-in requirements in Option 4
… I worry about lowering the bar. Full conformance should be full conformance, not lower the bar. There are other and better ways that should be in regulations not guidelines

<kirkwood> +1 to Wilco

<Jay> +1 to wilco "do not lower the bar"

Jeanne: -1 to Wilco and regulation. It wrecks harmonization and ultimately makes it harder for multi-national organizations

<jaunita_george> +1

Mike: This would be a good time for user studies on when things become barriers, "when a bump becomes a bumpy road".
… test for things that exceed the baseline
… testing to help define the baseline
… thinking about a lot of things

Alastair: Which of the options should we use as a base? We have support for 2 and 5
… and drawing in other things like the badges

Rachael: I agree with 2 and 5 and pulling ideas in.

<mbgower> To summarize what I said, can we do user studies that assess accessible qualities of sites across functional needs, including intersections

Rachael: Suzanne, would you like to explore badges on your own, or do you want to do it within 2 and 5?

Suzanne: I would like to work within 2 and 5 and develop two different proposals.

<mbgower> And can we work from that information to come up with computational tests that assess not just the baseline, but help assess attaining results beyond baseline?

Alastair: We can start subgroups in January

Rachael: We don't want to lose momentum on this. Survey next week on people who want to work on it.

Alastair: I have some themes that I think we can continue this discussion.
… Gregg's comments on Views need to be sorted out separately from this because they can be used in any of the options. It's not a requirement for this conversation

<mbgower> I found 2, 5 and 6 in particular had ideas worth incorporating

Alastair: the theme of COGA ending up as higher level assertions. Mike's comments can be a solution for it.
… I found @@ an attractive concept
… cumulative errors concept would also be helpful

<kirkwood> +1 to Gregg about COGA language

Gregg: I want to stop using the language that "COGA is pushed to a higher level". It has nothing to do with them being COGA, it has to do with ease of testing.
… it is good that COGA and other disabilities get some things that are difficult to test before the eyes of developers.

<alastairc> The adjectival rating was an attractive concept; I'd like to find out about people's previous experience.

Gregg: levels below. I am moving to the camp of only one level below.
… I didn't think that Badges were a way of acknowledging below.

Rachael: If you have proposals that aren't based on 2 or 5, please propose it. I don't want to close out ideas.

Alastair: If we started out with 2 and 5, what other ideas would be helpful
… to answer Gregg, I tend to use COGA as shorthand for the difficulty of making things testable.

<Zakim> SuzanneTaylor, you wanted to say yes badges were above in the initial proposal, but meeting comments last Tuesday resulted in a comment in the doc about using them for a path from a level below basic compliance to build up to basic compliance

Alastair: I thought using badges for lower rating was a soft thought

Suzanne: I can write a clean doc on badges and base it on comments.

Holiday scheduling - please check your calendars

<alastairc> https://www.w3.org/WAI/GL/wiki/Upcoming_agendas

Rachael: We will survey next week on US Thanksgiving holiday, and we will be on holiday the last two weeks in December.

Minutes manually created (not a transcript), formatted by scribe.perl version 192 (Tue Jun 28 16:55:30 2022 UTC).
