W3C

- DRAFT -

Silver Task Force & Community Group

17 Apr 2020

Attendees

Present
jeanne, Lauriat, Chuck, MichaelC, Fazio, KimD, sajkaj, AngelaAccessForAll, bruce_bailey, Rachael, Jan, kirkwood
Regrets
Chair
Shawn, jeanne
Scribe
chrisloiselle

Contents

Topics
1. Is anyone interested in working on a guideline on context-sensitive help?
2. Change to Zoom for Conference calls?
3. 3 or 5 or other adjective categories
4. Next step on conformance
5. Planning out the Essentials for Publishing
Summary of Action Items
Summary of Resolutions

Is anyone interested in working on a guideline on context-sensitive help?

<Fazio> crickets

I'll scribe!

<scribe> scribe:chrisloiselle

<jeanne> Contact jeanne if you are interested in working with a new AGWG member for a guideline on Context-Sensitive Help.

Change to Zoom for Conference calls?

<Chuck> +1 can switch to zoom, +1 happy to switch to zoom

+1 to zoom

<AngelaAccessForAll> -1 can't switch

<jeanne> +1 for Zoom

<Fazio> +1 to Zoom

<KimD> +1 to zoom

<bruce_bailey> +0 zoom is fine

<kirkwood> +1

<Lauriat> No strong feelings, as long as I don't need to install anything (not allowed at Google).

<sajkaj> +0

<Lauriat> Web client allowed.

<Fazio> Zoom has a browser version

<AngelaAccessForAll> I can do the web client on my phone!

<bruce_bailey> fwiw, my pc is pretty locked down and zoom has not been an issue

<sajkaj> There are also phone numbers -- audio only, of course

MichaelC: to Jeanne: Please send times you'd like Zoom and I can set those up, and also hold the WebEx to make sure Zoom works correctly.

Rachael: Not in queue, just present

<Zakim> JF, you wanted to ask why "five"?


3 or 5 or other adjective categories

Jeanne: On adjectival categories, what is the right number of categories?

DavidF: Five different categories are used to test executive function: average, below average, high, impaired, and moderately impaired. We could derive from that to a degree.

Jeanne: High would be above average.

Bruce: That speaks to the point that we are talking about websites, so "impaired" may not be a relatable description.

<Fazio> +1 to 5 and Chuck's comments

Chuck: I like 5 categories. Too many or too few would complicate efforts. 5 seems to be a goldilocks number of categories.

<Fazio> yes

<Chuck> Chris: With the US-based newspaper doc I shared in another call (not in front of me), 0 = did not meet all tests.

<Chuck> Chris: It wasn't -1 for not meeting. The point we were discussing, but couldn't finish, was how to score a heading that was programmatically on the page but not being used correctly.

<Chuck> Chris: An h2 was on the page, but it was just a dotted/dashed line, empty. The headings were there programmatically, but not used correctly. We were wondering if that was a 1 or a zero.

<bruce_bailey> 0 was for not meeting

<bruce_bailey> yes, -1 was just a way to note that testing/evaluation had not happened

ChrisLoiselle: This is what I had used in the Silver scoring proposal (Adjectival Scoring):
-1 - No testing done
0 - Does not pass all basic tests
1 - Adjusted score (see below)
2 - Passes all basic tests but not all advanced tests
3 - Adjusted score (see below)
4 - Passes all tests

So I marked the score as 0, in that it didn't pass all basic tests.
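The -1 to 4 scale described above can be sketched as a small lookup. This is purely an illustration of the proposal under discussion, not an agreed or final Silver scoring implementation; the labels come from the proposal text quoted in these minutes.

```python
# Illustrative sketch of the adjectival scoring scale from the Silver
# scoring proposal discussed above. Not an agreed standard.

ADJECTIVAL_SCALE = {
    -1: "No testing done",
    0: "Does not pass all basic tests",
    1: "Adjusted score",
    2: "Passes all basic tests but not all advanced tests",
    3: "Adjusted score",
    4: "Passes all tests",
}

def describe(score: int) -> str:
    """Return the adjectival label for a numeric score in -1..4."""
    try:
        return ADJECTIVAL_SCALE[score]
    except KeyError:
        raise ValueError(f"score must be in -1..4, got {score}")

# Example: the empty h2 discussed above was present programmatically but
# not used correctly, so it failed a basic test and was marked 0.
print(describe(0))  # Does not pass all basic tests
```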

<Fazio> This is why I think studying/benchmarking neuropsychology eval process would be helpful

<Fazio> There's also stimulus-target tests

DavidF: If we could benchmark the neuro-based tests and how distracted a user is across a variety of test areas, it would be beneficial.

Jeanne: Understanding how distracted people get is very important and would be interesting to review further if we write a guideline on preventing distractions.

DavidF: For example, with an image of a dog growling, a person might see that as negative. The person is then impaired due to the subconscious fear of the dog.

<Chuck> Chris: From a high level, this may have been talked about before, on the adjectival and scoring side. When I hear all this, it comes back to the System Usability Scale (SUS).

https://measuringu.com/sus/

<Chuck> Chris: <pasted> it talks to the 5 point scale we are discussing, high-low and everything in between. I don't know if we have pursued this to a degree comparing to usability field.

<Chuck> Chris: Progressively as Silver develops maybe we could explore this further. This talks about the usability aspect. Can the user get through what they need to? Also talks to adjectival.

<Chuck> Chris: Don't know if that has been talked to in detail.

<Chuck> Jeanne: Can you explain at high level?

<Chuck> Chris: It aligns with usability scale and talks to adjectival. I haven't used it in a long time, but I can research. I know it's a 5 point scale, and more about can a person get through a task... if so what was the stopping point. I have to read up on negative aspect of taking points away.

<Chuck> Bruce: Odd questions are positive, even questions are negative.

<Chuck> Jeanne: At first I was thinking I like strongly agree and disagree range, but that would not solve the problem of different testers scoring the same site differently.

<Chuck> Jeanne: At first I thought it was cool, but I don't think it helps. Having a 5-point scale is good, but I think the agree/disagree range is not a good approach, given the complaints. But open to other thoughts.

adjectival mapping: https://www.trymyui.com/sus-system-usability-scale, best imaginable, excellent, poor, worst imaginable.
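For reference, the odd/even scoring rule Bruce mentions is the standard published SUS computation: odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 score. A minimal sketch of that formula (nothing here is a Silver decision):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Standard SUS formula: odd-numbered items (positively worded)
    contribute (response - 1); even-numbered items (negatively worded)
    contribute (5 - response). The sum is multiplied by 2.5,
    giving a score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# "Strongly agree" (5) on every positive item and "strongly disagree" (1)
# on every negative item yields the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note this yields a single 0-100 number per tester; as Jeanne observes above, it would not by itself solve the problem of different testers scoring the same site differently.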

<Chuck> shawn: It would be worth trying to use the scale, because we can write tests like heading structure used on this page conveys the structure of the information accurately.

<Chuck> shawn: Or something to that effect.

<bruce_bailey> +1 to what Shawn is saying

<Chuck> shawn: I think we can make it work, I don't have a clear sense if it's beneficial, but it's worth experimenting. Maybe restating a couple of things that are in headings exploration.

<Chuck> Shawn: To see what would happen if we had statements like that. The challenge for that is the context, whether or not you are talking about tasks or where you are in this.

<Chuck> Shawn: Maybe it would work to our advantage.

<Fazio> Also no system is perfect. No matter what we decide it won't be foolproof

<Chuck> Chris take back over.

ShawnL: Difficulty is the context.

I'll take over Chuck :)

<Lauriat> Big +1 to Fazio

<Fazio> Likert scale is 5 I believe

Jeanne: To Jan on partial scores: What does your company do for partial scores?

<Chuck> +1 to experimenting

DavidF: I use a Likert scale, which is 1-5

Jeanne: Two different experiments: 1) take a WCAG success criterion and an advanced test and see where we are on current review; 2) take headings, write tests, and place them on the agree/disagree scale.

Rachael: I'll take the WCAG success criterion and advanced test experiment and review that further.

<Chuck> Chuck has no availability

<KimD> Some stuff I just found: Usability rankings: https://www.userfocus.co.uk/articles/prioritise.html - 4 levels (Critical, Serious, Medium, Low)

<KimD> FICO scores: https://www.experian.com/blogs/ask-experian/credit-education/score-basics/what-is-a-good-credit-score/ - 5 levels (Very Poor, Fair, Good, Very Good, Exceptional)

<KimD> Survey Monkey talks about Likert Scale: (https://www.surveymonkey.com/mp/likert-scale/)

Next step on conformance

<Fazio> I think we agreed on 5 Levels of compliance right?

Chuck: The EN functional criteria and how they impact us need to be addressed, as this question keeps arising.

<Chuck> awesome

ShawnL: Task-based assessments need to be aligned with people with disabilities. This speaks to Chuck's point on functional criteria. What a user is actually trying to do is very important.

<sajkaj> +1 to Shawn

<Fazio> ISO 9001 Scoring: When doing an internal audit, I use the same scoring key as the external independent audit giving them a numerical value from 1-5 (1 being Critical to 5 being in compliance). Each department is given an individual score and there is the cumulative score for the entire company. It is easy to understand and use.

ShawnL: Defining scope, e.g. an app, a flat website, or a sub-component of an app (the NYT crossword puzzle, for example). A declaration of scope as to what is being reviewed and what it was tested against.

<Fazio> WCAG is ISO certified

Wrapping up how something is tested and knowing the paths for reviewing against certain criteria.

Jeanne: The WCAG-EM scope definition does a good job. JF talked to a much smaller module vs. an entire-page review, or a many-page review.

ShawnL: Headings, for example: developing module scope and reviewing will be great, but does it serve the greater good? Tying the scope to the user experience is key, rather than where code lives on a certain page, module, or component.
... We discussed this with Wilco, but will need to go into more depth.

Jeanne: We can talk to that topic more next week and pursue further.
... Any further comment on what we need to work on next?

Planning out the Essentials for Publishing

<jeanne> Definition of Enough

Jeanne: A few updates were made on the definition of enough / essentials for publishing

<jeanne> https://docs.google.com/document/d/1tQHgVFaJYS1WWs9BKucZxWboMNVuclvdNqnQuzPbWwY/edit

We will talk more on Tuesday

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2020/04/17 18:59:50 $
