W3C

- DRAFT -

Silver Task Force & Community Group

24 Jan 2020

Attendees

Present
jeanne, Rachael, KimD, bruce_bailey, janina, Lauriat, Fazio, AngelaAccessForAll
Regrets
CharlesHall, Chuck
Chair
Shawn, jeanne
Scribe
janina, Lauriat

Contents

Topics
  1. Review draft of Methods for Visual Contrast
  2. updates from subgroups
  3. AG WG Survey
  4. quality rubric for Headings
Summary of Action Items
Summary of Resolutions

Review draft of Methods for Visual Contrast

<janina> scribe: janina

<Lauriat> Scribe: Lauriat

<jeanne> https://raw.githack.com/w3c/silver/conformance-js-dec/guidelines/explainers/visualContrast.html

Jeanne: Latest version of the visual contrast explainer
... From the Test and Audit tab...it 404s. Need to fix that.

<jeanne> https://raw.githack.com/w3c/silver/conformance-js-dec/guidelines/methods/Method-font-characteristic-contrast.html

<bruce_bailey> https://docs.google.com/document/d/1lmTpfgublIqRggMVbrwo55FMlyJo3Avp_TAvpuFttxI/edit#heading=h.bet2jy61ll3p

Bruce: Spotted an error in that lookup table. I still regard the doc as having the bulk of the work, and then you moved as much as possible into GitHub.
... We still have some things in the doc that aren't in the explainer or method, but I think it's as good as it can be.

Jeanne: Not sure what should go in or shouldn't.

Bruce: The doc has earlier drafts in it; the work that has had the most attention ends at the draft illustration.
... What you have in the GitHub space is all very acceptable, but Andy (not on today's call) may disagree.

Jeanne: I don't think people would need to know this much about the formula in the test.

<Fazio> isn't that the point of silver?

+1 from me on that.

Fazio: Other tests, probably simpler, do exist as well, coming from the video game space.

Bruce: Researching visual contrast?

Fazio: Contrast sensitivity, which includes visual contrast, yes.
... I found these through Posit Science's website.
... White papers, lots of resources available.

Bruce: I don't know how we turn that kind of raw research from video game space...

<KimD> is it this: https://www.brainhq.com/?v4=true&fr=y

Fazio: From neuroscientists, not the video game space, just used in that space.

Bruce: Got it. I think Andy draws from a lot of this research.

Fazio: More specifically bringing this up in the context of how to test visual contrast.

Jeanne: I also want to be cautious about stepping on companies who write tools; we can share preliminary tools and have others turn them into something more slick.
... Very glad that we have the test tool; it helps a lot to show how it can work.

<bruce_bailey> https://www.myndex.com/SAPC/

Bruce: Not clear where the numbers come from for the fonts.

<bruce_bailey> font sample is after number

Lauriat: Andy hard-coded the values for each of the fonts for the weighting against the formula, something he and I will look at making generally applicable across other fonts and styles.

<bruce_bailey> https://raw.githack.com/w3c/silver/conformance-js-dec/guidelines/methods/Method-font-characteristic-contrast.html

<bruce_bailey> see test tab

<Fazio> I think the change itself implies the need for new tests

<Fazio> but I hear your point

<Fazio> It's significant

Kim: I think it's so confusing to figure out whether something passes or not with the current, not-yet-smoothed-out version of this test that having it in here will cause more churn than prove helpful.
... Agreed that we can include something with a note warning about the in-progress nature of the test, but we may want to hold off.

Janina: +1

Bruce: The test is just eyedropping the colors and then comparing against the lookup table.

Jeanne: The test doesn't read like that, though.

Bruce: Easy enough to edit it, though.

Jeanne: We can also reference other documents, so for those who want to read up on the underlying science, they can.
... I added a couple of things above the lookup table that I thought we needed to use.

Bruce: We don't really need many of these, as the tool abstracts most of those away so you don't have to worry about that.

Jeanne: If you could update this in a document, I'll take that and replace what we have now.

Bruce: Will do.
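
A rough sketch of the test flow Bruce describes above: eyedrop the text and background colors, compute a contrast value, and compare it against a lookup table keyed by font size and weight. The math below is the familiar WCAG 2.x relative-luminance contrast ratio, used only as a stand-in; the SAPC/APCA formula in Andy's tool is different, and the lookup-table thresholds here are placeholders, not the real values.

    # Sketch only: WCAG 2.x contrast ratio stands in for the SAPC/APCA
    # formula, and the thresholds are placeholders, not the real table.
    def srgb_channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb):
        r, g, b = (srgb_channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        lighter, darker = sorted(
            (relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Placeholder lookup table: (min px size, min weight) -> required ratio.
    LOOKUP = [
        (24, 400, 3.0),   # large regular text
        (19, 700, 3.0),   # large bold text
        (0, 0, 4.5),      # everything else
    ]

    def passes(fg, bg, size_px, weight):
        ratio = contrast_ratio(fg, bg)
        for min_px, min_weight, required in LOOKUP:
            if size_px >= min_px and weight >= min_weight:
                return ratio >= required
        return False

    # Eyedropped values: dark gray text (#333333) on white, 16 px, regular weight.
    print(passes((51, 51, 51), (255, 255, 255), 16, 400))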

updates from subgroups

Jeanne: Nobody from alt text on the call, but we'll need to make sure we get an update soon on that.
... Clear words?

<jeanne> https://raw.githack.com/w3c/silver/conformance-js-dec/guidelines/methods/Method-plain-language-principles.html

Jeanne: We didn't have a meeting this week, but we got feedback from AG WG at the meeting, particularly about the method.
... Good feedback from someone working on a plan document. On code samples, I got one showing how to do more accessible in-line definitions.
... In the test tab, I got significant pushback around partially = .5 and how to tell the difference between different levels. Why not .2, .7, etc.?
... We can use those, this is a proof of concept to illustrate.
... We may also get help on the design aspects on this.

Hurray!
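
A small sketch of the partial-credit scoring in the proof of concept: each principle is scored 0 (not met), 0.5 (partially met), or 1 (met), and the scores are averaged. The principle names below are illustrative, not the method's wording, and finer-grained values like 0.2 or 0.7, as raised in the pushback, would slot in the same way.

    # 0 = not met, 0.5 = partially met, 1 = met (proof-of-concept values).
    scores = {
        "uses common words": 1.0,
        "uses short sentences": 0.5,   # partially met
        "defines unusual terms": 0.0,
    }
    overall = sum(scores.values()) / len(scores)
    print(f"Plain language score: {overall:.2f}")   # 0.50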

AG WG Survey

Jeanne: Not ready to go out, yet. If someone could help, I'd appreciate it!
... We don't need many questions, more of "This is what we wanted to accomplish with this section, did we? If not, what do we need?"
... Mostly to guide comments to stay at a high level.

Kim: I can help with that.

Thank you, both!

quality rubric for Headings

<jeanne> https://docs.google.com/document/d/1TgFWsggRNiUYU_N9GPCvU1KUhexiRWjYTelTKZPMAOE/edit#heading=h.2sf1t1ha25ip

<jeanne> #6 Rubric table

Jeanne: Looking at #6, the rubric table.
... We jumped around a bit, so I'd like to start filling in some of the blanks.
... The difference between great, lousy, and in the middle.

Lauriat: I'd stick with that and offer guidance for how to use the scale to express more nuanced cases, like almost-but-not-quite-great, or almost-lousy-but-has-some-value.

Kim: Is the point of the rubric more to give an example, or are we really defining what's in each category?

Jeanne: A bit of both.
... We want to show a way of scoring that's not just pass/fail.
... "Based on all these factors, here's how good my headings are."
... Then we can potentially say "under 50%, you fail", but we're not saying you have to have all of this perfect in order to pass.
... We can then say "This is how we're moving toward a more reality-based way of measuring."
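
To illustrate the scoring Jeanne describes for the headings rubric: each factor gets a rating from lousy to great, the ratings map to numbers, and the result becomes a percentage that could sit against a cutoff such as the "under 50% you fail" example. The factor names, numeric mapping, and cutoff below are illustrative only, not a defined scoring rule.

    # Illustrative mapping of rubric ratings to numbers; the factor names and
    # the 50% cutoff come from the discussion, not from a finished rubric.
    RATING_VALUE = {"lousy": 0.0, "in the middle": 0.5, "great": 1.0}

    ratings = {
        "headings describe their sections": "great",
        "heading levels reflect the structure": "in the middle",
        "no skipped heading levels": "lousy",
    }

    percent = 100 * sum(RATING_VALUE[r] for r in ratings.values()) / len(ratings)
    print(f"Headings score: {percent:.0f}%", "pass" if percent >= 50 else "fail")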

<bruce_bailey> @Jeanne, I just emailed you an MS Word version of the test for Visual Contrast, showing track changes to make it a little simpler

<janina> +1 to Shawn

Kim: We should really limit what we put into the cells so people don't get too caught up on those.

Lauriat: Agreed. Limited lists like that feel kind of like failure techniques, and I don't want to get caught listing those out, or people will fail in other ways and think they pass.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2020/01/24 19:59:55 $
