Silver Community Group Teleconference

08 Oct 2019


Present: janina, Lauriat, Cyborg, Makoto, CharlesHall, jeanne, Chuck, KimD
Chair: Shawn, jeanne
Scribe: Cyborg, Chuck


<Lauriat> trackbot, start meeting

<trackbot> Meeting: Silver Community Group Teleconference

<trackbot> Date: 08 October 2019

Updates from each group

<jeanne> scribe: Cyborg

Jeanne: we have people here from Color Contrast and Alt-text

Color Contrast

Chuck: 3 of us are on the call - i'm going to open up with statement

We propose changing the name of “Color Contrast” to “Visual Contrast” as a signal of a paradigm change from one about color to one about “perception of light intensity”. The reason for this change is that the understanding of contrast has matured and the available research and body of knowledge has made breakthroughs in advancing the understanding of “visual contrast”.

Conventional practice of color contrast for accessibility has been to convert colors to grey-scale and calculate the light ratio between two objects (such as words against their background), in an effort to make them stand out from each other. The proposed new guidance will more accurately model human visual perception of contrast and light intensity.

The goal is to improve understanding of the functional needs of all users, and more effectively match the needs of those who face barriers accessing content. This new perception-based model is more context dependent than a strict light ratio measurement; results can, for example, vary with size of text and the darkness of the colors or page.

This is what Chuck was reading

<Chuck> scribe: Chuck

<Lauriat> Working doc: https://docs.google.com/document/d/18CRsPyrpDHwt1fS-7IPR6iLyW2qr7LbdL-3lzzFiwRY/edit

Cyborg: We've identified user needs <lists various needs>
... Our template is not complete, but we've discussed in great depth. Test types are being identified through divergent process.
... There may be more test types that have been added since we began.
... We are next going to converge the large list of tests that we brainstormed.
... We'll finish converging before the November deadline.

<Cyborg> Andy: I'll make it as short as possible, details can come from questions

<Cyborg> Andy: One thing emerged from research that I was surprised with, has been in research for a while, but not applied, is how we perceive contrast in terms of spatial frequency.

<Cyborg> Andy: Stroke width of font has more to do with perception than color.

<Cyborg> Andy: international standards are not dealing with this enough. Also, as objects get smaller and thinner, the way the computer renders the font to the screen also lowers the contrast chosen by the designer

<Cyborg> Andy: if you turn anti-aliasing on, even if you have pure white screen and black text, you can move 20:1 contrast to less than 4:1 - using current WCAG math

<Cyborg> Andy: because of anti-aliasing and thin font. not even getting into how people perceive thin fonts.
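The "current WCAG math" Andy refers to is the WCAG 2.x relative-luminance contrast ratio. A minimal sketch of that calculation, showing how an anti-aliased edge pixel (here a hypothetical blended grey, #767676, standing in for thinned black text on white) measures far below the nominal 21:1 of pure black on white:

```python
def srgb_to_linear(channel: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.x relative luminance of an sRGB color."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG 2.x contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Pure black text on a pure white background: 21:1.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))
# A blended anti-aliased pixel of that same text (hypothetical grey
# #767676 against white) drops to roughly 4.5:1 under the same math.
print(contrast_ratio((118, 118, 118), (255, 255, 255)))
```

The formulas are the published WCAG 2.x definitions; the specific grey chosen to represent an anti-aliased edge is illustrative, not taken from Andy's data.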

<Cyborg> Andy: one important paradigm shift taken into account in new recommendations. another aspect is the concept of standard observer, or a baseline, from which we can determine how each functional need can be appropriately addressed.

<Cyborg> Andy: baseline environment in terms of perception and brightness of screen as a starting point from which changes can be built (universal point of assessing contrast)

<Cyborg> Andy: how do we then address each need? each impairment has individual functional needs, depending if related to acuity or related to contrast sensitivity (macular degeneration, field loss)

<Cyborg> Andy: conflicts between needs of different impairments. another reason to start from baseline and offset based on each impairment type, functional need, and individual person. no one size fits all solution.

<Cyborg> Andy: different ways to address this. author needs to do some things to provide user capacity to adjust settings without breaking the site

<Cyborg> Andy: for example, if you increase the size of the smaller font, without increasing the size of the font that is already big enough to read

<Cyborg> Andy: that's why customization is important.

<Cyborg> Andy: new algorithm in math - what is novel needs IP protection, will discuss with Jeanne afterwards - math is a different equation from current WCAG, based on perceptual models (some date to '70s, some new)

<Cyborg> Andy: some new research around spatial frequency is eye-opening. new math is addressing color vision deficiency in way that doesn't require pushing contrast farther than it needs to be. possible to accommodate color vision deficiencies in a way that doesn't throw everything else off or hurt people with other vision issues

<Cyborg> Andy: important to recognize where changes can cause harm to others and how to prevent/mitigate that

<Cyborg> Andy: math is modeling curved human perception of light and adaptation to light. e.g. how bright screen is.

<Cyborg> Andy: new math takes more factors into account. has a basic level for immediate implementation, somewhat migratory from WCAG, but also extensible as tech advances forwards, built-in additional functionality, such as light adaptation assessment for fine controls of human perception modeling, including for each impairment and functional need

<Cyborg> Andy: all supported by existing research. Also currently doing clinical research on this.

Cyborg: Hasn't been transcribed into the document yet. Chuck will transcribe. Will be ready to talk about on the 15th, not ready today.

<Cyborg> Jeanne: i'm interested in the tests.

<jeanne> +1 to writing a scalar test

<Cyborg> Andy: a number we're working on. One that comes directly from the math is a simple test between yes/no and scalar; the new algorithm lends itself well to a scale-based model. One test that is migratory uses the new algorithm to assess contrast between two colors, with a set of levels relative to a certain size and weight, based on thickness of stroke - what is the minimum contrast at that level

<Cyborg> Andy: step forward from what we're doing, migratory, but using new technology that is being developed
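Andy's new algorithm and its levels are not yet public, so as a sketch of the *shape* of such a size- and weight-dependent lookup, here are the existing WCAG 2.x thresholds (SC 1.4.3) expressed that way. The function name and pixel cutoffs are illustrative only:

```python
def wcag2_min_contrast(font_px: float, bold: bool) -> float:
    """Minimum required contrast ratio under WCAG 2.x SC 1.4.3 (AA).

    "Large text" is at least 18 point (~24 px), or at least
    14 point (~18.66 px) if bold; large text needs 3:1,
    everything else needs 4.5:1.
    """
    large = font_px >= 24 or (bold and font_px >= 18.66)
    return 3.0 if large else 4.5

print(wcag2_min_contrast(16, bold=False))  # body text: 4.5
print(wcag2_min_contrast(24, bold=False))  # large heading: 3.0
print(wcag2_min_contrast(19, bold=True))   # bold large text: 3.0
```

A scale-based model of the kind Andy describes would replace this two-step table with a continuous function of stroke thickness and size, but the lookup interface could stay similar.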

<Cyborg> Chuck: our divergent exercise generated large list of possible tests that are not yet filtered. lots of tests may not survive convergence test to filter it down. we've gone through all of the divergent exercise, we are still transcribing

<Cyborg> Andy: some tests may come together quickly, some may take more work, some are advanced and take scale and relationship between factors into account.

<Cyborg> Andy: difference between two colors are inter-related in tight manner to font size, weight and other aspects of visual process. those are tightly inter-connected. looking at contrast in isolation fails. that isn't how we perceive. and it doesn't address specific functional needs and their perception.

<Cyborg> Andy: how do we achieve each of these and the best way to do that? sent chart in a previous email about this granularity.

<Cyborg> Janina: can we see that email?

<Cyborg> Andy: yes.

<Cyborg> Andy: good with royalty free and public domain, but want to protect to ensure that.

<Cyborg> Jeanne: Andy, go through the email first.

<Cyborg> Janina: very fascinated by this analysis, want to thank you for it. how do we leverage what is relevant in mainstream. for example, auto adjustment to night/day mode. why not leverage that thinking and serve more people

<Cyborg> Janina: this sounds very promising

<Cyborg> Chuck: this is part of complexities of our review

<Cyborg> Andy: questions in terms of scope, algorithms have capacity to recognize ambient light (as sensors available). this is responsive design in terms of environment around you and impact on what you see on screen.

<CharlesHall> user agent support on ambient light: https://whatwebcando.today/ambient-light.html

<Cyborg> Andy: how do we get browser user agents to have API required to support this?

<Cyborg> Andy: contrast polarity - light over dark or opposite, depending on light in room.

<Cyborg> Jeanne: we are writing methods for user agents and browsers, awesome if you could write sample method. avoid words like "must" - we are suggesting user need and how to meet user need, not specific solution. not prescriptive. but users need ability to do x, and how that might be done.

<Cyborg> Jeanne: methods are optional.

<Cyborg> Andy: a possible method?

<Cyborg> Jeanne: yes.

<Cyborg> Jeanne: focus on color side of things

<Cyborg> Chuck: have asked Andy to construct new math as method that equates to what WCAG did with old math, and put that in.

<Cyborg> Andy: developing a lot of these aspects.

<Cyborg> Jeanne: what is doable by November?

<Cyborg> Cyborg: we are already working through a lot to get what we've discussed ready for November, my suggestion is to avoid taking on more commitments (such as user agent methods) for November

<Cyborg> Andy: padding aspect around text for 2.2 - could be included in Silver draft (Andy already has some work developed for 2.2 which we could include in Silver November draft)

<Cyborg> Andy: various factors all inter-connected as part of one visual world of perception, and model needs to take them all into account.

<Cyborg> Shawn: awesome work and exciting set of directions

<Cyborg> Andy: exciting to do this research. we often take vision for granted and as we get older, everyone will have some kind of impairment.

<CharlesHall> my update: [rough] silver outline added to [end of] Point of Regard doc: https://docs.google.com/document/d/1D1AczVDgSCgCci4t3sO-QV6VKaKIEyQ6zKgm0ocnFB8/edit?usp=sharing

<Zakim> CharlesHall, you wanted to discuss comment/question

<Cyborg> Charles: pasted it in and wanted to ask a question re: contrast. Fantastic, and I appreciate the depth of research and complexity of the topic, but at the same time, it's also true that everyone who reviews the first public working draft will expect that this will be simpler. Is there a way to compartmentalize this research so that there is a brief list of user needs,

Cyborg: Yes, yes and yes, we are trying to work through what Silver can look like in terms of simplicity. Part of the tension that exists...
... is between the depth of new research and the existing material. Andy has brought a lot to the table. It's important
... to deal with both at the same time. Both exist; we are working through that.

<Zakim> janina, you wanted to say yes to WoT but not IoT

Point of Regard

<Cyborg> Charles: fleshing out user needs and methods for point of regard

<CharlesHall> https://docs.google.com/document/d/1D1AczVDgSCgCci4t3sO-QV6VKaKIEyQ6zKgm0ocnFB8/edit?usp=sharing

Cyborg: We developed a methodology/process that may be helpful and I'd like to share with Charles.

<Makoto> Updated "ALT text" doc by Jennifer, Cybele and Makoto: https://docs.google.com/document/d/12MbzzePklkoITusRM7gQyKUHtx6Ygsnd8RYQbIZ-qhM/edit?usp=sharing

Alternative Text

<Cyborg> Makoto: Jenn and I took 1.1.1 at Access U and Cybele joined us. Cybele did tons of work to help us. Yesterday, went through functional needs section.

<Cyborg> Makoto: (reading from document about functional needs).

<Cyborg> Makoto: we think there are several grey areas in WCAG that can be addressed in Silver. loaded some of those grey areas in test section where some experts say pass, others say fail. image of text, ...decorative image

<Cyborg> Makoto: we are looking at what Silver can offer and next Tuesday we will present document to get feedback and another online meeting next week and we are meeting in Toronto in person in later October (A11YTO conference).

<CharlesHall> sorry. i have to drop off the call a minute early. thanks all.

Cyborg: You captured it really well, Makoto. I echo Makoto... we are trying to see the value proposition of Silver, plain language, developing more user needs
... Emphasis on COGA. Also looking at tests and methods for where there are gray areas, deficiencies in current WCAG, false passes, etc.
... Would love feedback on ways we can improve Silver, better quality alt text.

Reminder to vote on the charter

<Cyborg> Jeanne: very exciting

<Cyborg> Janina: very interesting work, thank you

<Lauriat> trackbot, end meeting

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/10/08 14:32:40 $
