W3C

- DRAFT -

Silver Community Group Teleconference

19 Jul 2019

Attendees

Present
jeanne, Chuck, CharlesHall, Lauriat, shari, JF, jan, LuisG, Rachael
Regrets
Bruce, Makoto, Angela, Chris, KimD
Chair
jeanne, Shawn
Scribe
Chuck, JF, Rachael

Contents


<Chuck> scribe: Chuck

<JF> scribe: JF

JS: Nobody has provided any feedback on the Pros and Cons work

any reason why?

JS: Also want to introduce Jim Allan - former chair of UAAG, and current co-chair of the Low Vision TF

SL: seems Jim is in IRC, but not on the call
... as far as measuring the proposals to see if they meet our needs... go back to the requirements document

pick out what is required for the conformance model

and then draw up ways to test the conformance model to see how each meets those requirements

<CharlesHall> +1 for using the requirements as a framework to identify (at least some of) the pros and cons

SL: also feel there is value in a Pros and Cons exercise, to capture things that may not map to specific requirements we've identified

JS: We have a number of requirements spread over multiple work efforts

don't want to lose any of that

SL: agreed, we don't want to lose that, but if we go back to the initial documents we have a way to test each proposal

JS: Alastair also sent out a proposal - sent the link to the list previously
... we reviewed this document on Tuesday

<Lauriat> Silver conformance testing measures: https://docs.google.com/document/d/13A8mGMnQujfEVqcw_LmAUYT8DDq_qW0TNcHxmCHd0io/edit

JS: although we were light on the last bit, as we were running out of time
... What do others think? Should we return to the feasibility work we did last summer?

CA: sounds good to review this, as some members are newer than that effort

<Lauriat> Wiki conformance link: https://www.w3.org/WAI/GL/task-forces/silver/wiki/Main_Page#Silver_Conformance

JS: have a central location to track all proposals - see the main page of the wiki

SL: this is from last year

<Lauriat> 1. We took each of the measurement factors and agreed to a 1-3 ranking for each one. With only 3 choices, we felt we could set clear rules of what was a 1, what was a 2 and what was a 3. It avoided the extra confusion of trying to compare a factor that was being measured 1-3 with one that is being measured 1-10.

<Lauriat> 2. We wrote the rubric (rules) of what constitutes a 1, 2, or 3 for each measurement.

<Lauriat> 3. We ranked all the WCAG SC for each measurement (severity of user need, effectiveness, ease of implementation, ease of testing and a few others). With three people contributing, I think it took us an hour to do all the WCAG SCs.

<Lauriat> We learned a great deal from that exercise that we would not have done from picking 3 SCs. I thought that measuring the severity of user need would be a no-brainer, and I was wrong. It was actually very difficult to find any SC that was not critical to some group.

JS: we took all the things we could think of, and tried to weight the SCs

we laid them out in a spreadsheet - it was messy (it's on the wiki)

we took everything we could think of, and then we tried to rank them all

we didn't put a lot of time on it, but we asked what was more important

we found it very difficult to rank by severity of user need, but found it easier when we looked at techniques, though that varied vs. remediation

LG: we tried to do it as a task-based activity, and found that even that could be complex

but then it gets difficult, because then the question becomes which components are tested

<CharlesHall> i think the 1,2,3 came from this: https://docs.google.com/document/d/1ccKlaPMaVvazbSqMPgttvMesy9D0KAjGY01pAQES2K0/edit#heading=h.bkywq1b8i6xo

<CharlesHall> 1 = impossible; 2 = inconvenient; 3 = technically fail but not needed

<Lauriat> JF: A question about 1, 2, 3 and where it came from vs. 1-5 or something else. For me, it keeps coming back to multipliers for these sliding scale aspects.

JS: we tried to look at that, and create multipliers

and then we tried to apply that across some content, to see if it reflected reality - looking at the underlying work of what could be measured and how
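A purely illustrative sketch of the weighted-scoring idea described above (the factor names come from the exercise, but the multiplier values and example ratings are assumptions, not the group's actual numbers):

# Illustrative only: 1-3 rubric ratings per factor, combined with assumed multipliers.
FACTOR_MULTIPLIERS = {
    "severity_of_user_need": 3.0,  # assumed weights, not from the minutes
    "effectiveness": 2.0,
    "ease_of_implementation": 1.0,
    "ease_of_testing": 1.0,
}

def weighted_score(rubric_ratings):
    """Combine per-factor 1-3 rubric ratings into a single weighted score."""
    return sum(FACTOR_MULTIPLIERS[factor] * rating
               for factor, rating in rubric_ratings.items())

# Example: one success criterion rated against the rubric.
print(weighted_score({
    "severity_of_user_need": 3,
    "effectiveness": 2,
    "ease_of_implementation": 1,
    "ease_of_testing": 2,
}))  # 16.0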

LG: we looked at different factors, at one point we were looking at a P1 - P4 score

but narrowed it down to 3

asking the question: how would a company look at this in their ticketing system?

CA: so the 3-point range was just a stake in the ground, so that we could start

JS: yes, we were trying to create real data. That's why I thought ranking by user-needs would be a no-brainer

<Lauriat> JF: Is it hard or is it impossible? Do you think it's impossible or that it would just take more time?

JS: we found that most SCs were a P1 for one user group or another, so it became a lot harder to do - so we [inaudible]

SL: we found it was also contextual

<CharlesHall> +1 to context. which is where the task-based idea originated

in some instances it may be easier, in others it might be harder

which I believe is why we want to try and tie this to tasks

LG: it's a little easier in a usability test, because it is what the user is doing

but testing this as a tester would be harder, and automated testing will be extremely difficult

you won't be able to have full automated testing

<Lauriat> JF: Any discussion with the ACT TF around rules?

SL: at a previous TPAC, we had a great session with the ACT TF

we worked through a number of issues for at least a couple of hours

it seemed extremely helpful

which is why we started to work on content migration by starting with tests first

SL: was there anything else here, or should we move to the next topic?

JS: as we go through these great ideas, we narrow things down and separate things out, then we do the feasibility evaluation, where we roughly run through the existing requirements (including some of the newer ones) to see if the results are rewarding what we want to reward
... we should do an internal impact test to see if what we have is realistic

so let's do a quick and dirty with each (however many proposals) that make it to round 2

JS: we discussed this last week, and doing the pros and cons exercise before we do the quick and dirty

and then we can use Alastair's proposal, which has two parts

separating goals to measures, then test against real world examples

but we should always be checking back to ensure we are heading where we want to go

<LuisG> JF: Are we heading where we want to go?

JS: Alastair's proposal is complex, but it's a really good starting point

<LuisG> .. in an email from Charles that I replied to today, the question really is... where do we want to be?

<LuisG> .. answering that question kind of answers other questions. Charles, it sounded like you intended the next version of the guidelines to be preventative in nature. We're teaching... and I agree that's a larger goal we share.

<LuisG> .. but some of that work is happening in education and outreach

<LuisG> .. we need a way of measuring progress

<LuisG> .. Charles said that he saw it as kind of being a way to prevent problems from being created in the first place. We're all tackling the problem from different angles. What angle are we attacking it from?

<LuisG> Lauriat: I think the point is very valid on how different people are looking at it differently

<LuisG> .. ultimately people need to remediate things that are completely broken as well as something that's already meeting guidance and making sure new products meet it

<LuisG> JF: I agree, but when I think of the work the ACT task force is doing, it's creating nuclear tests. The building blocks of what we're trying to evaluate.

<LuisG> .. when you look at some of the other activities, they're building new SCs for WCAG 2.2 by identifying needs and articulating a requirement. Right now, everything is pass/fail.

<LuisG> .. at some level, everything will be pass/fail

<LuisG> .. either you met the requirement or you didn't meet the requirement

<LuisG> .. to me, I want a collection of pass/fail requirements that roll up to what we want.

<LuisG> .. I understand that some of the work was to take subjectivity of scoring and apply a measure to it

<LuisG> Lauriat: When we worked with them, it seemed the atomic rules had a pass/fail, but some things still had subjectivity. Whether an image needs alt text beyond an empty string and, if so, whether or not it provides an equivalent.

<LuisG> .. how do you lead someone through testing something so it's done consistently by two people

<LuisG> JF: The nuclear building blocks could be a starting point

<LuisG> Lauriat: We need to define the user's needs, the next step is

<LuisG> .. "how do we make sure the user need is met"

<LuisG> .. the things they've done around different rules and steps that are reused across tests are going to be needed in how we build up the tests

<LuisG> JF: If some tests are easy to do and others are hard, do we give more value to the hard test or the easy test?

<LuisG> Jeanne: That's one of the proposals. And we need to evaluate whether doing that gets us where we want to go.

<LuisG> .. from my point of view, we want conformance to be more consistent with the reality that users experience.

<LuisG> .. that's one of the goals
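A similarly illustrative sketch of rolling up atomic pass/fail results, with an assumed per-test difficulty weight to show the open question raised above (the test names and weights are hypothetical, not from any ACT rule set):

# Illustrative only: hypothetical atomic tests, pass/fail results, assumed weights.
atomic_results = [
    {"test": "image-has-accessible-name", "weight": 1.0, "passed": True},
    {"test": "accessible-name-is-equivalent", "weight": 3.0, "passed": False},
]

def rollup(results):
    """Weighted fraction of atomic pass/fail tests that passed."""
    total = sum(r["weight"] for r in results)
    passed = sum(r["weight"] for r in results if r["passed"])
    return passed / total if total else 0.0

print(rollup(atomic_results))  # 0.25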

<LuisG> I actually need to take off for the airport soon if someone could take over

JS: my big picture is always: what are the issues identified for users, and are we addressing them?

different members have different vantage points

CH: prevention is also a measure we want to apply - we apply it during the creation phase, not just at the beginning and end

<Lauriat> JF: I agree with that. I hear that as content creation and launch, and it also has to apply post-launch.

<Lauriat> JF: I need the ability to go back six months later and re-verify. That's one of the reasons I've brought up date-last-performed documented as well.

JS: what we are talking about is how to evaluate the different proposals

CA: have a question about the evaluation - is this a self-evaluation?

JS: whoever wants to evaluate should - and it's not just pros and cons

each of us scans these, and based on the proposals we like or don't like

we can combine the good stuff, and then we don't have to keep going back to discuss it

<CharlesHall> request: can we have one document with the name of and gist of the proposal, in which to list the pros and cons of each? perhaps with our initials?

so I think it will save us a lot of time to do this exercise

SL: to bring it to a more meta level, we want to define steps so that we can test the conformance proposals against our needs

but at least we'll have consistent tests

so that when we evaluate it's an 'equal' evaluation

CA: will try and do this exercise

JS: it's important to understand what you don't understand, because then we can see that as a hole

<Rachael> scribe: Rachael

Lauriat: From here, do you want to go back to what is under goals to measures, or was that covered Tuesday?

<Lauriat> https://w3c.github.io/silver/requirements/

Jeanne: We discussed them, but let's look at the requirements document to see if anything should be added to the list.

Lauriat: There are 3 different sections. I don't think we need to go through the conformance section.

<Lauriat> Support the needs of a wide range of people with disabilities and recognize that people have individual and multiple needs.

Lauriat: I will copy in any that I think are affected.

<Lauriat> Support a measurement and conformance structure that includes guidance for a broad range of disabilities. This includes particular attention to the needs of low vision and cognitive accessibility, whose needs don't tend to fit the true/false statement success criteria of WCAG 2.x.

<Lauriat> Be flexible enough to support the needs of people with disabilities and keep up with emerging technologies. The information structure allows guidance to be added or removed.

Jeanne: we can add to Alastair's list. It was a start, but he's OK with us building it.

Lauriat: Having it be as simple as possible to understand still applies.

Jeanne: I think we will be a while till we get there.

Lauriat: I'm breaking these out into different sections in the list. Design principles that are applicable and requirements that are applicable.
... Creation process is about our process, so not likely needed in the requirements. I think the one thing that applies is being data and evidence based.


<Lauriat> trackbot, end meeting

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/07/19 19:03:38 $
