if any links have been put up, could you put them up again please? thanks
<Lauriat> Wilco's issue summary from github: https://github.com/w3c/silver/issues/40
<Lauriat> Language of page prototype: https://docs.google.com/document/d/18JyGF-AK8Qgq7DPyVlDYmxoj6814rORxuCf0l0oSb7U/edit
is that true when we're looking at outliers with disabilities? if 6/10 can understand and 4 can't - don't we need to ask why the 4 can't?
are we getting at methods or at verification of testing?
you're cutting out...
Shawn is cutting out
maybe start that sentence again
that paragraph
is there something in the testing that moves accessibility towards edge users? (or intersectional users)
if it is about 50+% pass, or even 80+% pass, who is being left out?
on the task-based assessment...
i also thought we were moving away from pass/fail...?
sorry, task-based assessment applies to all my comments...
so that is tricky. at least up here we have the concept of undue hardship in our human rights legislation.
i wonder with task-based assessment if this is possible to measure in other non-linear ways, such as - for brainstorming purposes - a heat map of barriers that arise during the task-based assessment?
what do you mean Charles? tasks accounting for functional needs...can you give an example?
with cognitive, it is tricky...there are so many kinds and not a clear functional need
task-based assessment and cognitive barriers is going to get tricky
so given this, could we use cognitive and intersectional cognitive-sensory disabilities to test our task-based assessment?
i still worry about marginalizing those who are most marginalized within disability community if we use majority-rules tests...just a thought for future when building task-based assessment.
and there is always the issue of how to verify user testing - is it replicable? was there external review? etc. (the issue of results fabrication or poor methodology)
<Charles> understood on marginalized. i think what we need to do is to determine if the task evaluation method works, then determine if it works for everyone. we have to start somewhere.
can we spend another session just on task evaluation?
<Lauriat> Probably a good idea to spend another session just on task evaluation, yes.
<Lauriat> trackbot, make minutes
<trackbot> Sorry, Lauriat, I don't understand 'trackbot, make minutes'. Please refer to <http://www.w3.org/2005/06/tracker/irc> for help.
<Lauriat> trackbot, end meeting
This is scribe.perl Revision: 1.154 of Date: 2018/09/25 16:35:56
Check for newer version at http://dev.w3.org/cvsweb/~checkout~/2002/scribe/
Guessing input format: Irssi_ISO8601_Log_Text_Format (score 1.00)
Default Present: johnkirkwood, Charles, LuisG, JF, KimD, jeanne, kirkwood, Cyborg, mikeCrabb, Shawn, Lauriat, AngelaAccessForAll, Makoto, JanMcSorley
Present: johnkirkwood Charles LuisG JF KimD jeanne kirkwood Cyborg mikeCrabb Shawn Lauriat AngelaAccessForAll Makoto JanMcSorley
No ScribeNick specified. Guessing ScribeNick: Cyborg
Inferring Scribes: Cyborg
WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth
Found Date: 22 Jan 2019
People with action items:
WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.
WARNING: IRC log location not specified! (You can ignore this warning if you do not want the generated minutes to contain a link to the original IRC log.)
[End of scribe.perl diagnostic output]