<janina> trackbot, start meeting
<trackbot> Meeting: Silver Community Group Teleconference
<trackbot> Date: 13 December 2019
<johnkirkwood> regrets
<janina> https://raw.githack.com/w3c/wcag/conformance-challenges-working-draft/conformance-challenges/
<jeanne> Janina: This is going to a survey on Tuesday
<jeanne> ... there is a new issue from Jason White on 3rd party content for Conformance. Issue 993
<janina> https://github.com/w3c/wcag/issues/993
<jeanne> https://docs.google.com/document/d/1gfYAiV2Z-FA_kEHYlLV32J8ClNEGPxRgSIohu3gUHEA/edit
jeanne: we left off on Developing
Tests - number 6, New Tests
... adding a note in the document to "add examples from alt
text about breaking down the types of code to check for"
... Makoto made a note: "We should keep only true/false tests in
mind as long as they work. We should adopt tests other than
true/false only if true/false is not sufficient; otherwise it
becomes a usability issue rather than an accessibility issue."
Cyborg would typically push back on the "usability issue, not an accessibility issue" framing, arguing that usability issues are inherently accessibility issues
<jeanne> Jeanne: Usability issues become an accessibility issue in aggregate. When a person with a cognitive or fatigue disability cannot accomplish a task because of the "usability" problems, it becomes an accessibility barrier.
jeanne: I wrote up notes on some
tests that have been suggested in research, etc. over the
years
... we have true/false - the condition exists or not; common
WCAG tests
... then scale tests, if there is a scale condition like 1-5 or
percentage, the important thing with scale/rubric test is
testers need an idea of what belongs in what range
... they're not making up their own scale/range, we give
guidance on that
... the rubric is a variation of scale test where the
conditions can be subjective...so they can be placed on a
scale
... I found some different types of rubrics and a rough example
rubric, but the Clear Words group has been working on a
rubric
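The three test types described above could be sketched roughly as follows. This is a minimal illustration only; the function names and the 0..1 normalization are assumptions for the example, not anything from the Silver drafts.

```python
def true_false_test(condition_met: bool) -> float:
    """Classic WCAG-style binary test: the condition exists or it does not."""
    return 1.0 if condition_met else 0.0

def scale_test(value: float, lo: float, hi: float) -> float:
    """Scale test: map a measured value onto a defined range (e.g. 1-5),
    so testers use the given range rather than inventing their own."""
    value = max(lo, min(hi, value))  # clamp into the defined range
    return (value - lo) / (hi - lo)

def rubric_test(level: int, max_level: int) -> float:
    """Rubric test: a tester picks the rubric level that best matches a
    possibly subjective condition; normalize the level to 0..1."""
    return level / max_level
```

Usage: `scale_test(3, 1, 5)` places a mid-range rating at 0.5, and `rubric_test(2, 4)` does the same for the middle level of a five-level rubric.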
<CharlesHall> (sorry joined late) please share the doc url being reviewed?
https://docs.google.com/document/d/1gfYAiV2Z-FA_kEHYlLV32J8ClNEGPxRgSIohu3gUHEA/edit
Developing Tests section
note: jeanne listed a few others from the document that I wasn't able to put in the minutes
jeanne: anyone have suggestions
for other kinds of tests or thoughts on how I'm defining
them?
... I did an example; this might relate to what janina was
suggesting for user needs for alt text. It was noted that blind
folks need text that serves an equivalent purpose; cognitive
users need text in plain language
... I think it's saying we can test whether the accessible name or
alt text is present, but not whether it serves the equivalent
purpose
<CharlesHall> Task analysis may be an appropriate test type for a guideline like multiple ways
<CharlesHall> an additional type to those in the doc
jeanne: instead of me reading through all the definitions of a rubric for alt text quality, etc. do we want to talk about that? any ideas or suggestions?
<jeanne> 4: Text describes the purpose of the image within the context of the surrounding material. It is succinct and is written using plain language techniques. See Clear Words guideline for specifics on plain language.
<jeanne> 3: Text describes the purpose of the image within the context of the surrounding material, but it is not succinct or it is not written in plain language.
<jeanne> 2: Text describes the purpose of the image, but it is not in the context of the page. (For example, a painting of a famous person has a different description depending on whether it is in a history context or an art appreciation context.)
<jeanne> 1: Text does not describe the purpose of the image, but it does describe what the image is.
<jeanne> 0: Text does not describe the image.
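The alt-text rubric pasted above could be encoded as a cascade of tester judgments, highest level first. The function name and boolean parameters are illustrative only; each boolean stands for one judgment a human tester would make.

```python
def score_alt_text(describes_purpose: bool, in_context: bool,
                   succinct: bool, plain_language: bool,
                   describes_image: bool) -> int:
    """Return the rubric level (0-4) for a piece of alternative text,
    checking levels from best to worst and returning the first match."""
    if describes_purpose and in_context and succinct and plain_language:
        return 4  # purpose, in context, succinct, plain language
    if describes_purpose and in_context:
        return 3  # purpose in context, but not succinct or not plain
    if describes_purpose:
        return 2  # purpose described, but not in the context of the page
    if describes_image:
        return 1  # describes what the image is, not its purpose
    return 0      # does not describe the image
```

For example, alt text that describes the purpose in context but is verbose would score 3.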
jeanne: this is more an example
of how a rubric could work
... let me see if I can find the COGA one
jeanne: here is one for Clear Language / Clear Words; we haven't decided what we're going to call it
jeanne: When it gets to the
rubric it gets complicated. We give three options of how to
evaluate it. We don't want it to get to the point where a
person needs to count the number of types of verbs, etc.
... for example, for "using proper grammar"
Substantially = 1: Passes a grammar checker; errors improve clarity.
Partially = 0.5: Some errors.
Limited = 0: Little to no effort has been made.
janina: I'm fine with this as guidance. I wouldn't want to lose the hierarchy of preferences...we're listing some basic rules of communication and only diverge if there is a good concept to communicate that requires more
jeanne: this is a way of doing a
more complex rubric...the alt text one was a very simple table
of various conditions and their worth...this one is a more
complex table with multiple conditions and various values for
different levels
... there are a variety of ways to construct a rubric, and it
took us a while to find one that worked for this particular
example
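A multi-condition rubric like the Clear Words one could be aggregated as sketched below. The level values (1, 0.5, 0) come from the minutes; the condition names and the averaging rule are assumptions for illustration, since the group has not decided how scores combine.

```python
# Level values from the Clear Words rubric discussed in the minutes.
LEVELS = {"substantially": 1.0, "partially": 0.5, "limited": 0.0}

def rubric_score(ratings: dict) -> float:
    """Combine per-condition ratings into a single 0..1 score.
    Averaging is one possible combination rule, not a decided one."""
    values = [LEVELS[rating] for rating in ratings.values()]
    return sum(values) / len(values)

# Hypothetical example: a tester rates two conditions.
ratings = {"proper grammar": "substantially", "short sentences": "partially"}
```

Here `rubric_score(ratings)` would yield 0.75 for one "Substantially" and one "Partially" rating.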
<jeanne> We will be meeting next week, Tuesday and Friday
Present: jeanne, janina, KimD, AngelaAccessForAll, LuisG, CharlesHall
Regrets: Mary_Jo, Bruce, Shawn
Scribe: LuisG
Found Date: 13 Dec 2019