W3C

– DRAFT –
Low Vision Task Force

16 December 2021

Attendees

Present
AlastairC, jon_avila, Laura_Carlson, Sam, Glenda, ?others. Guests: Gordon Legge, Yingzi Xiong.
Regrets
Shawn, ?others
Chair
Jon
Scribe
alastairc

Meeting minutes

Brief introduction of the group -- taken up

jon_avila_: Introduces session and LVTF

IRC, statement on minutes (the meeting will not be recorded) -- taken up

Short introduction of 2 minutes for each person who has signed up to overview their work before the main presentation -- taken up

GL: I'm a screen reader and magnification user. Psychology professor at the University of Minnesota.
… One thing that has come out is the MNREAD test.
… this project is called "Designing Visually Accessible Spaces": software for architectural scenes that flags hazards, things which would be an issue for people with low vision.

Yingzi: Associate professor, a lot of work with people with low vision. Digital reading is necessary, but difficult. Researching fonts, sizes, etc.
… for this project, trying to simulate how certain text looks when you have low vision.
… also looking at hearing impairments.
… Happy to answer questions.

(Shares slides)

Yingzi: Low vision simulation, starting with Contrast Sensitivity Function (CSF)
… showing a curve, set over fading black & white lines.

Yingzi: For someone with low vision, the red curve represents what they can see.
… the key manipulation is to detect features which would not be visible.

Yingzi: We do that by comparing the curves, and calculating the attenuation ratio at each sensitivity.

(showing example)
… a couple of features to point out:
… 1) People with low vision often have both acuity & contrast reduction.
… we've all encountered poor contrast of text; the algorithm takes both factors into consideration. The change of peak is contrast, the reduction in width is acuity.
… three examples, same acuity but different contrast. Then same contrast with different acuity.
… as GL mentioned, different people have different combinations.
… so how to create a representation beyond just an educational simulation?
… We can parametrize the simulation. That can provide an individualised accessibility evaluation.
… We applied this to some of the icons provided.
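
(For illustration only: a minimal sketch of this kind of CSF-ratio filtering, assuming a grayscale image and two CSFs given as functions of spatial frequency in cycles per degree. The names and structure are placeholders, not the presenters' implementation.)

    import numpy as np

    def simulate_low_vision(image, csf_normal, csf_low, pixels_per_degree):
        """Attenuate each spatial frequency by the ratio of the two CSFs."""
        h, w = image.shape
        fy = np.fft.fftfreq(h, d=1.0 / pixels_per_degree)  # cycles/degree
        fx = np.fft.fftfreq(w, d=1.0 / pixels_per_degree)
        freq = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)

        # Attenuation ratio: 1 where the low-vision CSF matches the normal CSF,
        # near 0 at frequencies the low-vision observer cannot resolve.
        ratio = np.clip(csf_low(freq) / np.maximum(csf_normal(freq), 1e-6), 0.0, 1.0)

        spectrum = np.fft.fft2(image - image.mean())  # filter contrast, keep the mean
        filtered = np.real(np.fft.ifft2(spectrum * ratio))
        return np.clip(filtered + image.mean(), 0.0, 1.0)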

Yingzi: There are 6 icons, battery full etc. (See slide)

Sam: It's a limited set, focused on the unique ID of each one.
… (reads slide comments in "evaluate accessibility of icons")

Yingzi: They are useful to try, with the 3 battery examples that look very similar.

Sam: The message looks different because its outline is different. The email/recording icons are hard to differentiate.

Yingzi: I applied a simulation at ~20/40 acuity, and 1.60 log unit for contrast.
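
(Aside, not from the talk: a rough reading of those two parameters, assuming standard Snellen and log-contrast-sensitivity conventions.)

    import math

    snellen_ratio = 40 / 20               # 20/40 vision
    logmar = math.log10(snellen_ratio)    # ~0.30 logMAR
    log_cs = 1.60                         # log contrast sensitivity
    contrast_threshold = 10 ** -log_cs    # ~0.025, i.e. ~2.5% threshold contrast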

Sam: This is fantastic. I've been trying Gaussian blurring, which is interesting, but low-contrast large icons don't look that different when blurred.
… so you could conclude that they are appropriate, which is concerning; there is a contrast limit.
… would love to see what happens with both the horizontal & vertical shift for low-contrast icons.
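
(A sketch of the point Sam is making, under the assumption that the blur tool works roughly like a Gaussian filter: blur alone models the acuity loss (the horizontal shift) but not the contrast loss (the vertical shift), which can be crudely approximated by scaling contrast toward the mean. Hypothetical code, not Sam's tool.)

    from scipy.ndimage import gaussian_filter

    def blur_and_attenuate(image, sigma_px, contrast_scale):
        # Gaussian blur approximates the acuity (horizontal) component only.
        blurred = gaussian_filter(image, sigma=sigma_px)
        # Scaling toward the mean adds a crude contrast (vertical) component,
        # which is what makes low-contrast icons fail rather than pass.
        mean = blurred.mean()
        return mean + contrast_scale * (blurred - mean)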

GL: General comment on your classification of the 3 questions: I would add a time variable. You want fast & accurate as well as just accurate.
… choosing 20/40 & 1.6, that's the boundary for low vision. The question is how far into low vision you'd like to proceed. 20/40 isn't very demanding.

jon_avila_: Traditionally, the group has focused on things which would be in the area of 20/70 to 20/100. WCAG didn't have a font minimum, weight, etc.
… there's no contrast max either.
… we do want to provide guidance that helps at 20/40, but we have to be careful how we frame it.
… not to say that it helps everyone with low vision.
… what I liked was that a blur doesn't look like what I see. Things don't look blurry to me, but low resolution.
… that's a common misconception.

GL: We're trying to present visibility, not subjective experience.

Yingzi: (Shows "evaluate accessibility of fonts")
… when getting to 20/80, only the last two lines become visible.
… in the study they were balanced for x-height, but in this demo it is just done with font size.
… that is something we control for in the font-study, based on x-height. We were looking at complexity, and the letter spacing of the font.
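
(Aside, not from the presentation: equating fonts on x-height rather than nominal size amounts to choosing each font's size so that its x-height hits a common target. A hypothetical sketch; the ratio values are approximations.)

    def size_for_xheight(target_xheight_px, font_xheight_ratio):
        # font_xheight_ratio is the font's x-height divided by its em size
        # (roughly 0.45-0.55 for common faces; exact values differ per font).
        # Returns the nominal font size that yields the target x-height.
        return target_xheight_px / font_xheight_ratio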

Sam: I've been looking into this for a while. It was interesting to compare the virtual to physical. In general, for simulations, it seems appropriate for minor degrees. For more severe simulations it seems like the assumptions break down.
… the degree of variability seems to increase, so I've backed away from simulating that.
… for WCAG, I'm assuming that they don't adapt it; then it should work for people with some degree of vision loss.
… without having to adapt things. In higher degrees of vision loss, other things come into play.
… that's why I focused on 20/40.

jon_avila_: Agree with the time aspect. A large IT company thought that text alone would be sufficient. They did a study with internal users; I don't think they took into account time or fatigue.
… When there's an affordance that clues you in visually and then you read the text, that is easier. We need good research that accounts for those other factors.

GL: We did a study with a layout of hyperlinks, which did speak to that ease of navigation.

Yingzi: Interested in Sam's comment on real tasks, and real-time consideration of tiredness and when people give up.

Presentation from Gordon Legge and Yingzi Xiong on simulating low vision -- taken up

AC: We've considered it as an ecosystem, ranging from author responsibility through adaptation and personalisation to assistive technology.

GL: We've considered it from the point of view of someone who needs to tune things. How does the viewing distance interact with display size, with zoom, with configuration?

Sam: Wondering about your aspirations for the software and algorithm. Some aspects are for sale; would this algorithm be licensable? Or could it be available for the world, with software for sale?

GL: The software is on github, we'd like a software firm to take it on.
… not interested in selling, want it to be used.
… so far it is out there as our demo of what can be done.
… we're academics, not entrepreneurs!

Sam: That's the fence I sit on!

GL: Would like it to be used and useful. Not a turnkey bit of software.

jon_avila_: If, hypothetically, we wanted a requirement for icons, how would we go about that? Use the software to analyse icons, to work backwards to the stroke width (etc.) that makes it a measurable thing?

GL: For the architectural software, if you had a 3D model of a space, for that software you have to identify the viewing position. Is this visible to someone at 16"? Viewing parameters come into play.
… we also have another layer, which looks at critical features and assesses if they would still be visible. So more than a simulated view, it flags things which might not be visible at a given distance and impairment.
… what Yingzi was showing is the equivalent for online content.
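
(A hypothetical sketch of how that kind of "work backwards" check might look for icons: simulate each icon, then flag pairs whose simulated versions are no longer distinguishable. The function names and threshold are assumptions, not the group's or the presenters' tooling.)

    import itertools
    import numpy as np

    def confusable_pairs(icons, simulate, threshold=0.98):
        """icons: dict of name -> grayscale array; simulate: e.g. simulate_low_vision."""
        simulated = {name: simulate(img) for name, img in icons.items()}
        flagged = []
        for (a, im_a), (b, im_b) in itertools.combinations(simulated.items(), 2):
            # High correlation after simulation means the pair may be confusable.
            corr = np.corrcoef(im_a.ravel(), im_b.ravel())[0, 1]
            if corr > threshold:
                flagged.append((a, b, corr))
        return flagged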

<Zakim> alastairc, you wanted to ask about testing with low vision.

<jon_avila_> we can hear you typing

AC: What happens if the person using it doesn't have good vision?

GL: We were thinking that it would be someone with regular sight, so they could work out what it is like with low vision. Someone with low vision doesn't need a simulation.
… you could have a test-bed of symbols or fonts etc.
… it will depend on various factors, but if you can standardise the parameters it would help.
… but then still requires a human to evaluate particular content.

Yingzi: Do people who see these icons see what a person with 20/40 vision sees?

Sam: The extent to which the viewer's vision makes a difference: with the blurring experience, if you present the result at quite a large size (e.g. twice as big), you find that it no longer matters (as much) what the assessor's vision is like.
… minor variations in the viewing conditions don't matter as much, nor does the monitor size. When you use the algorithm to control the distance etc., and blow the results up on the screen, those things don't matter anymore.
… so the assessor has a simpler task.

<Glenda> Thought: Gordon said that they are using this in testing architectural elements that will pose dangers (like can you see the stairs). Can’t we use that same bit of logic to begin testing non-text elements on a digital screen?

Sam: eventually distance/size does matter, but not in regular use.

Sam: For icon design, it's a bit dry to talk about 1.5px strokes etc. It's interesting to us, but better to provide the algorithm to the world and get them to understand it better.
… would then advocate every designer/developer to run it in their browser, and evaluate it for themselves.
… also provides an empathic experience.

<Glenda> Another Thought: Seems like we could teach Artificial Intelligence to say “Hey, this icon is hard to read” and highlight the bits that can’t be seen. If all the bits pass (on the icon)…great. If some of the bits don’t pass…it would take human analysis to determine whether that bit of the icon is critical for understanding what that icon is.

Sam: the enthusiasm really helps.

GL: We have the same wish.

jon_avila_: I really appreciate you doing this. It would be great to put it in the hands of real people, e.g. AR on your phone. Go to a school, look at the stairs and realise that they need lines painted on.
… People would get it more.

GL: In the architectural context, environmental variations are tricky, you need photo-realistic input.

Glenda: We spoke a couple of years ago; very excited about the architectural aspect. IAAP are doing a built-environment project.

Glenda: Could have different levels, e.g. higher contrast at higher levels.

Sam: Would love to connect again, and perhaps I can get a developer on this to make it in-browser.

GL: Would love to get it going. We aren't computer scientists or vendors.

Sam: We've got good uptake on the blurring tool, e.g. (big company) is getting their designers to use it.
… but your algorithm would be much better.

<Zakim> alastairc, you wanted to see what the options are

AC: Could use this for working out better guidelines, and could ask/require 'authors' to assess their own interfaces with it.

Glenda: Could potentially semi-automate this.

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).