Silver Community Group Teleconference

30 Nov 2018


Present: Charles, jeanne, LuisG, AngelaAccessForAll
Regrets: Jennison
Chair: jeanne, Shawn
Scribe: LuisG


Point System and Conformance

<jeanne> Jeanne: Following up on Tuesday's meeting, where we were talking about people writing their own methods.

<jeanne> ... how would we assign points? People aren't going to do something if they don't get credit for it

<jeanne> ... what if we said that all methods had the same points?

<jeanne> Charles: It doesn't make sense in some scenarios

Charles: Ultimately, the goal is "did you meet the human need?" not "did you meet it in 8 different ways?"

<jeanne> Shawn: We don't want to penalize people for creating new methods, but we also don't want to encourage them to make up new methods, because most of the time, that isn't a good idea.

Shawn: If the points are tied to the human need, then the tests need to be tied to the human need.

<jeanne> Charles: We don't want people to game the system by just adding methods

<jeanne> ... we don't allow it into the document unless we validate that it meets a human need.

<jeanne> Shawn: But we need to allow people to create new methods

<jeanne> ... I think that may have been what David was trying to accomplish by creating a catch-all method

Jeanne: David wrote up an example; I put it on the Google Drive

<jeanne> David's proposal: https://docs.google.com/document/d/1UaYMTwcQv-4i6SbCne2o3JhPKgcK1QvS6R7VkVL5BqI/edit#heading=h.qjllof4t9wau

Jeanne: he included a long list of techniques that could be made into methods

<jeanne> Jeanne: I rather like it because it allows us to incorporate new Methods without having to negotiate points for each one. We could give a fixed number of points for the catch-all method

<jeanne> Shawn: I wouldn't want to penalize people

<jeanne> Jeanne: It wouldn't have to be fewer points than other existing Methods; it just doesn't need to be more than other Methods.

<jeanne> Shawn: David's example put a success criterion as the catch-all Method.

<jeanne> Jeanne: I don't think that's bad, as long as it isn't the phrasing of every Method. It's a useful place for a success criterion.

<jeanne> Shawn: Tweaking the Guideline for Robust, keeping the Guideline and moving the other advice into Methods, makes Robust a very straightforward conversion to Silver.

<jeanne> ... so 1.3.1 would move to Robust, because it boils down to "did we code it correctly?"

<jeanne> Charles: The fallback Method could be, "Is it Robust?"

<jeanne> Shawn: The SCs that are based around coding could be one Method of ensuring that your application or product supports assistive technology.

<jeanne> ... parsing could be another Method that is markup specific.

<jeanne> ... Name Role Value is a better example.

<jeanne> Shawn: The thing I don't like about a fallback Method is that it isn't a Method,

<jeanne> ... it doesn't tell you what to do

<jeanne> Jeanne: I don't think that is an insurmountable obstacle. It could just tell you that you are allowed to create your own Methods and this is how you validate that it worked.

<jeanne> Luis: I think we could restructure the fallback Method so that it is very generic. Take the example of using a kiosk: we could make it more generic.

<jeanne> Shawn: It's more of a generic test than a generic Method.

<jeanne> Luis: It could be generically phrased that you have to provide the information

<jeanne> Shawn: But what is the difference then between the guideline and the method?

<jeanne> Jeanne: It's a convenient place in the Information Architecture to include the test information and the point value.

Shawn: How would we have a scoring system for a user task? Instead of focusing on the page, focus on the task a user has on a page.

Jeanne: So let's say we have a hotel booking site. If we had a guideline that said "the user must be able to accomplish the purpose of the product," we could have methods that cover different types of testing.
... and accomplishment. People could get points for how well users accomplish the task they want. It could just be something that adds up to get to Silver or Gold level.
... after your granular component testing, you could do task accomplishment testing and get points for that
... The important part of that is that the organization needs to define what the tasks on the page should be.

Shawn: To pick up on that...if we have a hotel booking site. This isn't something where any user would go to one particular page to do a thing.
... they're trying to go through a flow to do specific tasks. For a hotel, someone could book a room at some location based on criteria they need
... by city, hotel chain, availability, etc.

Jeanne: They'd need to be able to browse locations, select a date range...

Shawn: The user doesn't say "I want to go and filter by date" they go to "find a hotel"
... the overall user story is "go to the site and book a hotel room based on the criteria you have"
... another would be "I need to go to the site to alter or cancel my reservation"
... or "based on my stay, I want to leave feedback"

Jeanne: And the developers would be the ones that determine that
... I think we could have a variety of methods. One would be the developer doing a cognitive walkthrough. Another could be a heuristic evaluation of how easy it is to do that.
... and get some people with disabilities to see how well they could use the site

Shawn: Setting aside this topic. During the design sprint, we were sketching out how a failure to provide alt text could be tied to how it affects a given task.
... this whole time I've been assuming the task would be expressed at the conformance level so you could say "all tasks for this site are conformant even though there are images without alt text that don't affect the task"
... or "the going in and leaving feedback part fails because alt text is missing for the star rating"

Charles: I think "task" has to be defined as part of this. For example, the user task could be something that isn't interactive
... instead of completing an objective. Their model of a task might be "confirm that the hotel room has a safe in it"; there could be an information-only task
... like if there is an FAQ on the site that has the information, but images all over the site show safes and the alt text doesn't convey that

Shawn: My confusion is with tying the tasks to a guideline instead. I'd like to explore that possibility a bit more. I think it could be extremely powerful
... I only have the disconnect between the actual guidelines and the tasks. Making sure you can express the meaning of a guideline via tasks.
... with the design sprint exercise, we were defining user success by doing a walkthrough with a given persona and saying "given this persona can't see the screen and needs visual info via other means"
... it would fail that task if the information wasn't provided via text

Jeanne: We could have it both ways. It would count against an alt text guideline and overall usability. The ability to accomplish what you wanted to do

Shawn: the bit I'm trying to understand is the IA and the expression in terms of conformance

Jeanne: Let's say someone has images on a menu without alt text, so people can't navigate
... that would mean they wouldn't get points (or would fail) for alt text
... they would also fail for "purpose of the site" so they fail in two places
... or if they did it well, they get points in both places

Luis: For tasks that could be single or multiple pages, would users get more points for doing navigation or forms well on multiple pages than if they did a single page?

Shawn: We could consider the task as one "entity" and all images, forms, navigation would be part of that one task entity instead of individual pages.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2018/11/30 20:09:18 $
