W3C

– DRAFT –
(MEETING TITLE)

08 June 2022

Attendees

Present
jspellman, mbgower, Rachael, thbrunet
Regrets
-
Chair
-
Scribe
Rachael

Meeting minutes

objective vs subjective document: https://docs.google.com/spreadsheets/d/1vLuxeNwdgkSS0wenjAcm9eQlYMSGtwCHP125QYbSOKI/edit#gid=0

https://github.com/w3c/silver/wiki/Scoping

reviewed wiki page

Jeanne is referring to https://www.notion.so/nomensa/633121a1bcc94120833db2df302d8794?v=818fb138d19f42648d6d784ca90de239

mbgower: walks through work to date.

Jeanne: Test reliability is trying to make outcomes more precise than the WCAG 2 success criteria. Currently testing out the process.

mbgower: Is what we are doing redundant with test reliability group?

Jeanne: Trying to avoid contradictory work.
… already know they could be more objective.

mbgower: There is not a lot of agreement that current SC are not objective.

Rachael: I think there is value in showing that the current guidelines have a certain level of subjectivity

<Wilco_> inter-rater reliability

Some people think objective = high inter-rater reliability
… may be a definition problem.

also intra-rater reliability: the same person gets the same result. Inter-rater reliability: many people test and all get the same result
… by objective we mean: is there any judgement call?

<mbgower> intra-rater reliability = can the same person get the same result (do the same thing)

<thbrunet> I think it's intra-rater rather than inner-rater.
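The agreement concepts discussed above can be made concrete: inter-rater reliability is commonly quantified with an agreement statistic such as Cohen's kappa. A minimal sketch for two testers, not from the meeting — the function name and pass/fail data are illustrative assumptions:

```python
# Illustrative sketch: Cohen's kappa, a common inter-rater reliability
# statistic for two raters judging the same items. All names and data
# here are hypothetical, not taken from the minutes.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items where both raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label counts
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two testers evaluate the same five pages against one criterion
tester_1 = ["pass", "pass", "fail", "pass", "fail"]
tester_2 = ["pass", "fail", "fail", "pass", "fail"]
print(round(cohens_kappa(tester_1, tester_2), 2))  # 0.62
```

Intra-rater reliability would use the same statistic, but comparing one tester's verdicts across two separate sessions rather than two testers' verdicts.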

<Francis_Storr> this reminds me of the Challenges with Accessibility Guidelines Conformance and Testing, and Approaches for Mitigating Them document: https://www.w3.org/TR/accessibility-conformance-challenges/

<Wilco_> -1

Rachael: can we get a list of agreed-upon terms for these conversations?

Michael adds a problem statement

<mbgower> We lack defined terms for discussing testing concepts, which makes it difficult for us to all be talking about the same thing – or agreeing we are.

Jeanne suggests we go back and look at the research from the Research and Development Working Group, symposium summary paper

Jeanne to provide link

<Francis_Storr> Reminds me of this information architecture article on defining things: https://abbycovert.com/writing/what-do-we-mean/ I'll dig through that and see what might be useful.

thbrunet: There are 3 things: 1. Completely objective, automatable. 2. Well defined through human intervention 3. Human interpretation

Rachael: Also maybe 4) implied (audio description has good timing, accessible name is a good name)

<jspellman> https://www.w3.org/TR/accessibility-metrics-report/

<jspellman> Web Accessibility Metrics Symposium

mbgower: Trying for a quick exercise to help move along work

wilco: May be test reliability more than this

Rachael: Maybe review and revise the categorization content; maybe write visible controls; need to work on terminology (both definitions and also terms for test types and scoping)

Wilco: the test reliability group may be able to take on the test terminology work.

https://www.notion.so/nomensa/633121a1bcc94120833db2df302d8794?v=818fb138d19f42648d6d784ca90de239

mbgower: Focus on a better set of terms.

testing concepts (objective, inter-rater reliability, etc) and test matrix terminology

<Wilco_> https://docs.google.com/document/u/1/d/1sugAtqie_x1XqHDZo1Im7ftDNllWeRV_ty4PULeoTV0/edit#heading=h.q6kvhdps0qv4

mbgower will continue to work on the exercise as a double check on the migration.

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).

Diagnostics

Succeeded: s/Jenne/Jeanne

Succeeded: s/inner-rater/intra-rater

Maybe present: Jeanne, wilco