W3C

- DRAFT -

Silver Community Group Teleconference

11 Dec 2018

Attendees

Present
Charles, kirkwood, jeanne, Lauriat, Makoto, JF
Regrets
Chair
Shawn, jeanne
Scribe
jeanne

Contents

Topics
    1. Conformance model working discussion: building up points
    2. Updates on the other prototype projects?
    3. Getting examples from Task Forces
Summary of Action Items
Summary of Resolutions

Conformance model working discussion: building up points

<Lauriat> Conformance super-drafty draft: https://docs.google.com/document/d/1wTJme7ZhhtzyWBxI8oMXzl7i4QHW7aDHRYTKXKELPcY/edit

The specific section we are working on is the Point System: https://docs.google.com/document/d/1wTJme7ZhhtzyWBxI8oMXzl7i4QHW7aDHRYTKXKELPcY/edit#heading=h.i2woik1lvd30

<Charles> https://docs.google.com/document/d/1lEkht-bhkaPMzOojWpDjZhbnGeckiYLh-J4-ze7cGS0/edit?usp=sharing

Charles: I created a set of heuristics based on Mandate 376, plus two more (in the document linked above).
... I am concerned about scoring when an individual heuristic from the set doesn't apply: how does that affect the score? For example, if a system doesn't accept voice input, then the voice input heuristic would not apply.

Shawn: This gets into the capabilities of the platform. For example, if an ATM doesn't have voice input, we need to highlight that it doesn't have that ability.

John: What if the system doesn't support it because the user doesn't have a microphone? That's a problem in a regulatory context.

Shawn: But it becomes a method where we can highlight that the method is missing.

JF: Web content is going to be the primary use of Silver.

Charles: I was thinking of the opposite case, where a particular user need doesn't apply because it isn't in the content; for example, content that is only text. Only when something on the page is actionable do we need the fine motor heuristic.

Shawn: People will need to get a zero if the heuristics don't apply.

Jeanne: We need a "does not apply" option, because if we give them a low score, that limits the final score.

JF: We need a way to prevent people from simply claiming that each category doesn't apply. Give them the basic heuristic test and have them document that the heuristic doesn't apply. Then give them 5 points in the category.

Shawn: Points only apply within categories.
... For example, you would need to get Bronze 10 times.

JF: In many ways this is going back to all-or-nothing: you either have Gold in every category or you don't.

Charles: It inspires people to aim higher to reach the higher level.

Jeanne: We can change this design -- it was a solution to the problem of leaving categories of disabilities behind.

John: Proposal: a minimum score in each category, plus a certain number of minimum points to achieve bronze, silver, or gold.
... Then we have an overall range to achieve the overall goal.
... Your site-wide score is a combination of the score in each category, each with a minimum that must be met, and the overall number of points.
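
A minimal sketch of this proposal in Python, for illustration only: the categories, the 5-point credit for a documented "does not apply" (from JF's suggestion above), the per-category minimum, and the level thresholds are all placeholder values, not agreed numbers.

  # Sketch: each category must clear a minimum score, and the overall
  # total determines bronze/silver/gold. All numbers are placeholders.

  # Per-category heuristic results; None marks a documented "does not apply".
  scores = {
      "vision": [3, 4, 2],
      "hearing": [4, 4],
      "voice-input": [None],   # e.g., the platform has no microphone
      "fine-motor": [2, 3, 3],
  }

  NA_CREDIT = 5         # JF's suggestion: document the N/A, get the points
  CATEGORY_MINIMUM = 5  # placeholder per-category floor
  LEVELS = [(40, "gold"), (30, "silver"), (20, "bronze")]  # placeholder totals

  def category_score(results):
      return sum(NA_CREDIT if r is None else r for r in results)

  def conformance_level(scores):
      totals = {cat: category_score(rs) for cat, rs in scores.items()}
      # No category may fall below the floor; this is what keeps a site
      # from leaving one group of users behind.
      if any(t < CATEGORY_MINIMUM for t in totals.values()):
          return None
      overall = sum(totals.values())
      for threshold, level in LEVELS:
          if overall >= threshold:
              return level
      return None

  print(conformance_level(scores))  # -> "silver" (9 + 8 + 5 + 8 = 30)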

Charles: Is it by page, by domain, or by sub-domain?

Jeanne: I think the organization decides.

Shawn: The tasks could be very different in different parts of a site.

Charles: How would they claim conformance?

Jeanne: For all practical purposes, that is in the VPAT. People choose to file VPATs for individual products or for parts of a site.
... People no longer file conformance claims with W3C; it's all VPATs.

Charles: We need a scoring range that is clear: 1-4, with a high and a low end, for example.
... Mathematically, a 1-4 range has no middle point, so we need to define a middle point, and that middle point is the threshold.
... Each user need has to have the same range.
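
One hedged reading of the middle-point idea, as a sketch: give every user need the same scale and derive the pass threshold from the middle of that range. The 1-5 scale below is illustrative only; the 1-4 range mentioned has no integer middle, which is exactly why a middle point has to be defined.

  # Every user need is rated on the same range, and the middle of that
  # range is the pass threshold. The scale here is illustrative only.
  SCALE_MIN, SCALE_MAX = 1, 5
  THRESHOLD = (SCALE_MIN + SCALE_MAX) / 2  # 3.0, the middle point

  ratings = {"text-alternatives": 4, "plain-language": 3, "fine-motor": 2}

  for need, rating in ratings.items():
      assert SCALE_MIN <= rating <= SCALE_MAX  # same range for every need
      print(need, "passes" if rating >= THRESHOLD else "fails")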

Jeanne: Just to be clear, are we talking about heuristic evaluation against the categories, or the individual tests?

Charles: We could apply heuristics against individual criteria, or we could apply a heuristic evaluation overall.
... This setting of a scoring range is against the whole site.
... The heuristics are against the entire product.

Jeanne: So the way I see it is that we would have heuristic tests for the minimum in each category. Then people can score more points by following the Methods. Some of those Methods would still have to have heuristic tests that are specific to the Method. Some of the proposals from COGA need specific heuristic tests.
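
A rough sketch of that structure, with hypothetical names and point values: heuristic tests set the category baseline, and Methods add points only when their Method-specific heuristic test passes.

  # Two-tier scoring: a heuristic baseline for the category, plus bonus
  # points from Methods, some gated on a Method-specific heuristic test.
  # Names and point values are hypothetical.
  def category_total(heuristic_baseline, methods):
      """methods: list of (bonus_points, method_heuristic_passed) pairs."""
      bonus = sum(points for points, passed in methods if passed)
      return heuristic_baseline + bonus

  # e.g. a baseline of 5 from the heuristic evaluation, plus two Methods,
  # one of which fails its specific heuristic test.
  print(category_total(5, [(2, True), (3, False)]))  # -> 7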

Charles: You would have to have more than one evaluator, and the evaluators need domain knowledge.

Jeanne: I don't think we can do that on a practical level -- that would increase the cost tremendously.

Charles: If we don't do that, then we have only one person's opinion. Two evaluators overlap by 80%.
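
For context on the 80% figure, a simple percent-agreement calculation between two evaluators (illustrative only; chance-corrected measures such as Cohen's kappa are stricter):

  # Percent agreement between two evaluators' pass/fail judgments.
  evaluator_a = [True, True, False, True, False]
  evaluator_b = [True, False, False, True, False]

  matches = sum(a == b for a, b in zip(evaluator_a, evaluator_b))
  print(f"{matches / len(evaluator_a):.0%}")  # -> 80%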

Shawn: We are taking the opinion of how something was made, declaring it, and owning it.
... If someone thinks it passes, then we give them the ability to say why.

Charles: That's why we need two, so that there is validity.

Shawn: This is a problem we have today, so we don't have to solve it completely. The goal is to give people a way to score it, even if the testing is less than perfect.

Charles: This is in answer to a problem that is part of what we said.

Jeanne: We have to be careful about requiring that for the heuristic claim that serves as the minimum in each category. I have no problem requiring two evaluators for "bonus"
... points in a Method, but we would get tremendous pushback if we required it for the minimum.
... I have no problem with people claiming they conform to a minimum for a category; that's what we have today with the VPAT.

Shawn: Let's get back to an example of scoring.

Updates on the other prototype projects?

MikeCrabb: No news on the prototype, but I did get a paper accepted that is based on the research we did last year. This is important.

Jeanne: Can you also do a view of the prototype that shows a list of guidelines sorted by tag? Mark Tanner has been asking for that, and I know we talked about it, but it would be helpful if we could show that.

Getting examples from Task Forces

Shawn: I talked with the chairs and staff contact yesterday about getting examples from the Task Forces.
... They suggested that we give them very specific examples that we want people to write.
... For example, give them a proposal or a success criterion and ask them to write it in plain language, following the plain language template.
... There was also a discussion of how to get people to write techniques; they suggested breaking them into smaller parts.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2018/12/11 15:30:39 $
