<jeanne> Meeting: Silver Conformance
<scribe> scribe: janina
js: Louis, I'm not recalling what you were going to write ...
Louis: How might we score a user's ability to define colors and other customization?
<jeanne> https://raw.githack.com/w3c/silver/conformance-js-dec/guidelines/
js: Believe I've made all changes from last week ...
<KimD> § 3.5 Sampling
js: Believe we left off at different sample sizes ...
js: Don't want us to spend time reading, so maybe we return to it ...
js: All sites under 100 pages (including dynamic) need to test all pages
... These would be auto tests; and 10 of those 100 need in-depth testing
<Fazio> manually or automated?
Makoto: Would test perhaps 40 pages
js: In the U.S. that would currently cost between $200K and $300K
... That's prohibitive
... We can work on appropriate phrasing; the concept is to apply all available auto tests
Louis: We probably want to indicate that common elements (footers, navbars, etc) need only be tested once
<Fazio> +1
js: Would people be comfortable referring to EM Secs 2 & 3 for how to select sample sets?
<KimD> +1 to common elements comment
Louis: Yes, no reason to reinvent
df: Concerned about how to auto-test plain language
... Template structure can remain the same, but the content changes; so testing language remains important
Louis: Purpose of sampling is not to find every issue, but to identify patterns of problems
js: Concerned about the cost for testing every page, especially for small companies
Louis: Wonders how this might affect flashing content
df: It's about unique content; the more infrequently it appears, the more sampling we might need to get at any issues
Louis: So different scores depending on what pages are sampled?
df: So, perhaps a certain number from each category of content?
<KimD> Sites or products from 100-1000 pages or screens may:
js: Let's now talk on the 100-1000 range of pages
<KimD> - In-depth test of 40 samples, 15 selected and 25 random
js: I could see doing the same
<KimD> - Automated test of all pages or screens
js: We could say 10%; and discuss selected vs random
... I've done my fair share of testing for companies that wanted to test everything, and the truth is that one doesn't learn much new after a short time
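The 100-1000 page proposal above (in-depth test of 40 samples, 15 selected and 25 random, per KimD's note) can be sketched in code. This is an illustrative sketch only, not anything the group has adopted; the page names and the `choose_sample` helper are hypothetical.

```python
import random

def choose_sample(all_pages, selected, n_random=25):
    """Combine hand-selected key pages with a random draw from the rest.

    Hypothetical helper illustrating the '15 selected + 25 random'
    sampling idea for sites in the 100-1000 page range.
    """
    pool = [p for p in all_pages if p not in set(selected)]
    return list(selected) + random.sample(pool, min(n_random, len(pool)))

# Assumed example site of 500 pages
site = [f"page-{i}" for i in range(500)]
key_pages = site[:15]  # e.g. home, login, checkout, common templates
sample = choose_sample(site, key_pages)
len(sample)  # 40: 15 selected plus 25 random
```

The random portion guards against cherry-picking only well-maintained pages, while the selected portion guarantees that primary work flows are always covered.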
df: Does EM give guidance on how to identify primary work flows?
<KimD> WCAG-EM: https://www.w3.org/TR/WCAG-EM/
js: We're going to need to give that job to the organizations themselves; we can't possibly generalize sufficiently
... EM does explain how to figure it out, and it's well written
js: I have a rewritten point system proposal
... Please read and provide feedback
... Key is scoring by guideline, not by method
... It supports setting a minimum for each disability category, and is normalizable to treat each disability group equally
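The scoring idea above (score per guideline, a minimum floor per disability category, normalized so each group counts equally) could be modeled along these lines. All guideline names, scores, and the 0.5 floor below are invented for illustration; they are not part of the actual proposal.

```python
from statistics import mean

# Hypothetical guideline scores, grouped by disability category
scores = {
    "vision":    {"text-alternatives": 0.9, "contrast": 0.7},
    "hearing":   {"captions": 0.8},
    "cognitive": {"plain-language": 0.6, "consistent-nav": 0.9},
}

MINIMUM = 0.5  # assumed per-category floor, for illustration only

def category_scores(scores):
    # Averaging within each category normalizes the result so every
    # disability group carries equal weight, regardless of how many
    # guidelines it contains.
    return {cat: mean(gs.values()) for cat, gs in scores.items()}

def passes(scores, minimum=MINIMUM):
    # Conformance requires each category to meet the minimum,
    # so a site cannot trade one group's access against another's.
    return all(s >= minimum for s in category_scores(scores).values())
```

Scoring by guideline rather than by method keeps the score stable even when different test methods are used to evaluate the same guideline.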
Present: jeanne, janina, Fazio, LuisG, Makoto, KimD
Scribe: janina