WCAG 3 Protocols

03 June 2022


Chuck, Jaunita_George, jeanne, JF, Le, mbgower, MichaelC, Poornima_Subramanian, Sheri_B-H
Jaunita George

Meeting minutes


Review/edit comparison table (30 minutes)

<Jaunita_George> https://docs.google.com/document/d/1gQ8k6Dkaxnl9fSY3hTbRzTGgdr-FTdlO5fmU5wPSI5E/edit

Work on a single proposal (30 minutes)

Review/edit comparison table (30 minutes)

<Jaunita_George> https://docs.google.com/document/d/1gQ8k6Dkaxnl9fSY3hTbRzTGgdr-FTdlO5fmU5wPSI5E/edit

jg: in the table above, we tried to compare the proposals and avoid paraphrasing

we want to see if we agree the details captured are accurate

then see if we can develop a harmonized protocol

if not, we'll just go to the AG with both proposals

to start, any changes on the points of comparison?

jf: my comments are responses to questions; maybe they should be incorporated

big point is that protocols must be vetted by AG WG

they can be externally developed, but AG WG vets before they are listed as approved

mg: does that mean we have to validate protocols in all human languages?

jf: review everything to send forward as proposal

yes, that may be hard for us to do across languages

<Zakim> mbgower, you wanted to say I think it is just one example and we have a scaling problem if we have to vet

but there are broad principles that apply across examples

mg: don't think the analogy is to tests, but to techniques

don't think we can vet everything

<Zakim> Chuck, you wanted to say I acknowledge the challenge, can we note it down as an issue to be reviewed later?

<Jaunita_George> https://docs.google.com/document/d/1UgoMz3OPyoEVLbU4uCU5F5K6aEM7E1rii6oCaWqQy50/edit#

ca: can we flag that as an issue, so we can move forward with the proposals?

jg: ^ we have draft editors' notes for each proposal

<Jaunita_George> https://docs.google.com/document/d/1W_5H0MCoKzGaD9XCxgzdqZ-1TiVCXHVipE_vNnG2DOQ/edit?pli=1#

jeanne: there are lots of concerns around scalability

<Jaunita_George> The first link is the Points for Protocol proposal and the second is the Evaluating Procedures proposal.

can you compromise on this point so we can harmonize the proposals?

jf: if we don't vet protocols, they can write whatever they want

<Zakim> jeanne, you wanted to ask if John has any interest in compromising on this item

we haven't answered whether protocols will be part of the scoring model

so don't know @@

can't have the fox guarding the hen house

<Chuck> MichaelC: A broad comment. What we are doing right now is identifying the accuracy of the columns. Maybe we identify that there is a difference. For one proposal or two, I'm seeing that there is a lot of commonality in visions of protocols, and some differences in how they might be applied.

<Chuck> MichaelC: They are questions that are separable from what a protocol is. Let's identify the points of similarity and differences.

<Chuck> MichaelC: We can handle them in the group discussion.

<Chuck> +1 to Michael

jg: circling back to the comparison, have added content to the points column

jf: a difference I see is evaluation proposal, protocol evaluates something

in the points proposal, it's more instructional

functioning like the COGA supplemental guidance

they illustrate using user stories, but final determination is @@

so I see protocols as used earlier in the production timeline in the points proposal

jg: so with these changes, does the points column look accurate?

jf: guess so based on what I know

jg: I pasted in the content that reflected changes; other items seemed to be more like comments

moving on to the evaluation column

ca: for clarification, we're checking that this mapping accurately describes the proposals?

<Zakim> Chuck, you wanted to ask for clarity on the purpose of this portion of the review.


jg: yes, it was an exercise Jeanne and I did; we don't want to mischaracterize anything

jf: I've been concerned that we use different meanings of the term "protocol", see MG as supporting

at some point someone is going to want to own the term "protocol"

jg: don't think we need to own terms

returning to the evaluating procedures column

<Zakim> mbgower, you wanted to say what we're talking about is similar, but the process/interpretation of how to use is to me under discussion

mg: this table helps to see similarities and differences

I think the main differences are in the process of using a protocol

I don't think it's unsolvable; it comes down to use cases

<Zakim> Chuck, you wanted to say that the merging is the next agenda item. This stage is to...

ca: just to refocus: at the moment, we're making sure the columns are accurate

<mbgower> column 2 is pretty thin for the definition 'what is a protocol'

lsmn: the document isn't clear enough to represent externally yet

<Chuck> acknowledged, we are moving into 2nd agenda.

Work on a single proposal (30 minutes)

jg: moving on to harmonization

let's look at similarities and differences

jf: think the evaluation proposal is about @@

but the points one is more about educating and incorporating work earlier in the process

and get points for having gone to that effort

lsmn: so the points protocol is a documented procedure that defines qualitative best practices

and the evaluation one is about fitting into scoring

jf: not a procedure, which can be too rigid, but e.g. a user story that explains the issue

that may not be measurable, but it's describable

lsmn: so it's more of a commitment, something the org is striving for

do they state how they're doing it?

jf: via a programmatically linked conformance statement

conceptually like an EPUB manifest file that has a bunch of metadata

making the commitment, and making it publicly via this statement, is what keeps them moving

gives people scope to question how they're doing it

lsmn: so in the points proposal, the conformance comes via the statement where they commit to it and outline how they're doing it

jf: or report on steps taken to achieve

<Chuck> MichaelC: The evaluating procedures one is a superset in this sense; that may not be clear enough in the documentation. In both proposals, the protocol defines what you are trying to do.

<Chuck> MichaelC: It does document. The point of difference is that in the evaluating one you might test against this. But the other view is that it is impractical.

<Chuck> MichaelC: We have an uncompleted discussion on what happens for minimal conformance. Simply adopting the protocol is enough for minimal. I interpret this as the same for "points". We go on for higher conformance levels. I hear them being different levels of detail or conformance for the same general thing.

<mbgower> +1 to what Michael just said

<Jaunita_George> +1

jg: so in both proposals, companies state what they're striving for

<Chuck> Le: Are you saying they are striving for the same thing, but minimal conformance is that we are trying to adopt?

<Chuck> MichaelC: At minimal conformance, you are making a public claim, but you only need to commit to it.

in the evaluating one, at minimal conformance, it's similar to the points one?

<Chuck> Le: Later conformance, that's where you define how you reach these goals.

ps: think we can have protocols that define what is needed, and define how they feed into the rating

jf: the conversation keeps coming back to evaluating

points is about providing guidance, evaluating is about evaluating outcomes

think @@ is oriented to UI designers, the other is oriented to engineers

ps: should make clear the audience for the protocol, that will clarify some of those questions

<Zakim> mbgower, you wanted to say that one of the challenges we have is trying to compare solutions when I think if we focused on the problem space, we'd see these two proposals are VERY similar

mg: we don´t need to view as a dichotomy

we´re all exploring ways to lead to WCAG conformance

<JF> struggling with the term "measure"

<Chuck> +1 to MG

we should take the best ideas, rather than filter at this stage

<Jaunita_George> +1 to MG

<mbgower> From the How do they fit column: "Protocols describe inputs such as documentation of steps, actions taken, date completed, conformance claims, etc. but do not necessarily measure outcomes."

<Chuck> MichaelC: I keep hearing "evaluating procedures" talking about "evaluating outcomes". It is about evaluating effort towards the outcome. That's a significant clarification. If you take away the evaluation, you are at the same thing.

jf: +1 to MG, but I cringe at measure

we can't measure; they're all contextual

lsmn: can we agree that if we could measure them, they'd be elsewhere in the guidelines?

jf: yes, it would be a test procedure

instead, we might provide test scripts

a machine can't test, e.g., alt text quality, but you can provide a decision tree to get to a good version

jg: looking to how to combine the protocols

the points one wants to give people credit for adopting, and @@

the evaluating one includes measuring their adoption

so it's a higher threshold

maybe we can combine in such a way that effectively the points one is a baseline

which is a public conformance statement

you can't do less than that and get conformance points

then the evaluation one takes it further and measures the adoption effort

orgs could choose which level they want to target

jf: how do you measure adoption effort?

especially where an org has a small web team

<Chuck> MichaelC: As best as I can characterize, we are in the proposal stage and have to work out details. You define the steps you follow, in addition to the guidance, and the steps to implement the guidance, and the check is to confirm the steps have been followed well.

<Chuck> MichaelC: The evaluation one has a baseline that does not have a requirement for self evaluation. That may persist in further discussions.

sbh: just want to point out that the maturity model already has established how to measure adoption

jg: maturity model might be a protocol or procedure

<Sheri_B-H> it's a good opportunity to link the two

jg: let's continue this discussion at the next meeting

we might be able to combine the proposals into a multi-tiered approach

<mbgower> https://docs.google.com/document/d/1twjaSude_5-1VdpFKPX1Bw_hA09cIvyzeP1h8PowSxo/edit?usp=sharing

mg: ^ I added commonalities, I think there are a lot

lsmn: +1, see more commonalities than differences

lsmn: see the evaluating one as adding a few more steps to an otherwise common proposal

a big difference is who can write a protocol that is valid for conformance

<Zakim> Chuck, you wanted to propose that MG and LE share what they perceive to be the commonalities next meeting.

jg: evaluating one provides requirements so self-documented protocols aren't spurious

<mbgower> Thanks for the discussion!

<Chuck> 6AM for me!

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).



Maybe present: ca, jg, lsmn, mg, ps, sbh