W3C

- DRAFT -

Silver Community Group Teleconference

25 Jun 2019

Attendees

Present
Lauriat, jeanne, Makoto, JF, Cyborg, AngelaAccessForAll
Regrets
Denis, Luis, Kim
Chair
Shawn, Jeanne
Scribe
Cyborg

Contents

    Topics
        Schedule (U.S. holiday)
        Review milestones & timeline
        Review current conformance state
    Summary of Action Items
    Summary of Resolutions

<jeanne> scribe: Cyborg

Schedule (U.S. holiday)

Shawn: U.S. holiday on July 4, Canada on July 1
... next week is a wash - after this Friday, meet again on July 9 (Tues)

<JF> +1 to July 9

<jeanne> +1

Jeanne: we need to talk about milestones and timelines from the meeting with Alastair; some agreement is needed from this group before it goes in the Charter

<Zakim> JF, you wanted to comment on @ - Tests

Review milestones & timeline

Jeanne: welcome back Kelsey - very excited to get her back, a hard and prolific worker

Kelsey Callister: was at Baylor University, looking for work in UX

Jeanne: AGWG meeting with chairs on Monday - Alastair did a week 5 timeline to get to CR

<jeanne> https://docs.google.com/spreadsheets/d/1X5tc7HJ0jIY_u3bLj7iTbxO1fxCeX1-SigpYE42Vz90/edit?ts=5d10d93c#gid=0

<jeanne> By week schedule

<jeanne> Milestones for Silver https://www.w3.org/WAI/GL/task-forces/silver/wiki/Major_Milestones_for_Silver

Jeanne: updated Silver milestones based on that
... rather than an editor's draft for the Charter, we will publish something we can get comments on - with guidelines, methods, and tests - so people can see how conformance would work with real examples, in October
... public editor's draft in Jan 2020
... developing new Silver content in March 2020, with the next CSUN, and a year doing that. then candidate recommendation, maintenance, and responding to comments
... a year for responding to comments.

Alastair: a couple of assumptions and explanations. it is in 3 columns: content is self-explanatory, conformance model on the left, based on the idea that we would have different people working on different things. we do need sample content to test the sample conformance with - guidelines and methods to test the conformance model. once we get past the first editor's draft, it is difficult to say how long things will take; it may involve the working group past that point.
... this was a first pass to get something down in detail, and it is easier to answer questions now.

<Zakim> alastairc, you wanted to mention a few of the assumptions in the timeline

John: looking at Nov 25 and Dec 2, 2019 - don't believe there is enough time for that, we don't even have a model at this point. there are discussions and drafts and people interested in getting involved.
... what does Jeanne mean about cutting off work? if cognitive walkthroughs are not completed, are they jettisoned?

Jeanne: we would work on guidance the same as AGWG, a small number at a time: complete them, editor's draft, heartbeat drafts. anything that has not made it into the working draft would be postponed to the next version. the next version starts right after CR, so they can start on that work immediately, even before this version goes to Rec.

John: appreciate the year to deal with comments, but that is very aggressive. migration from 1.0 to 2.0 was 18 months.
... crystal ball is cloudy.

Alastair: it will become clearer. we have 400-odd reasonable techniques in 2.x; we could migrate those to Silver (6 months to a year).
... finger in the wind, but best detection we have available.

John: under promise, but over deliver.

Alastair: we have that discussion lined up in the next meeting.

Jeanne: we shouldn't promise any Silver dates beyond the length of the Charter

John: in the Wiki page, depending on the length of the Charter, we have Silver Rec in 2022.

Jeanne: if candidate recommendation goes longer, it goes longer.

John: agrees that we get to candidate recommendation and then some flexibility after that.

Kelsey: who are we anticipating comments from?

Jeanne: general public, W3C. 2.0 got 1000 public comments. we don't know what it will look like, but it could be a lot.

Review current conformance state

Jeanne: let's talk about conformance. take a look at what we currently have on conformance. a lot of people are repeating work we already did. some creative new ideas. would like to give everyone a chance to get caught up on work that has been done.
... 2 major phases of working on conformance: a year ago last summer, a subgroup did basic work on the structure of conformance, and from September to November, once we had the IA solid, we did more work on conformance, bringing the two together. here's what we have already done, and a recap of some of the proposals which have come out this week. one of the things Cybele and I worked on this weekend was boiling down 35 pages of emails on work we did last year
... to turn it into a digestible summary

<jeanne> https://docs.google.com/document/d/1wklZRJAIPzdp2RmRKZcVsyRdXpgFbqFF6i7gzCRqldc/edit#heading=h.sevi88jq0fiq

Jeanne: put a lot of work into keeping the Wiki main page up to date so things can always be found, including a lot of the old work. but this is the latest of the conformance design; nothing really changed, but it is polished, organized, and easier to read
... so people not involved with Silver can get caught up in broad strokes
... we set a series of goals for conformance, coming out of the design sprint - score cards and rubrics, solving the problem of "substantially meets", and ways people with large sites can show Silver conformance
... access supported, flexible method of claiming conformance.
... number of issues still outstanding. but let's focus on point system that is ...

<jeanne> How do we set up a point scoring system that will be transparent, fair, and motivate or reward organizations to do more?

Jeanne: how do we maintain a system that is current and protected from gaming? migration? methodologies? lots of issues, plus others raised this week.
... in November, we put together the IA and how the conformance prototype would work with it: flattening the structure of 2.x to guidelines and methods. methods include tests, examples, and instructions. a tagging engine to find things more easily, and an API to extract info for your own purposes
... migrating to Silver - WCAG principles become tags, guidelines and SC become guidelines, technology-specific criteria will move to methods, techniques will move to methods, and understanding becomes part of the Guideline Explainer.
... A, AA, AAA levels are deleted; Silver conformance is overall for a product or project, not for a specific SC or guideline
... automated and manual testing, rubrics and distance from mean, task completion, etc. [Jeanne walks through the document.]
... scoring system - when I wrote this, we had rough work and alternate proposals from John and Bruce. will update that.
... levels not by SC, but overall for project or product as defined by org

John: I'll let you finish

Jeanne: what Cybele and I did this weekend was take the work we did on the points system and bring it up to date, to show how a point system can work

<jeanne> https://docs.google.com/spreadsheets/d/1sOQ6odaK43pV4VfHSFPXAV7Ry1KlcGDxbyWy2Vb1d-s/edit#gid=0

Jeanne: this is not something that the general public would see; this is done in the background. it is public info - we are not hiding it - but it is not in the face of users. but on the legal and regulatory side, there are people who are very interested in ensuring that the system is transparent and fair, and what goes on at particular levels is not as transparent and fair as what people are asking for
... a lot of the work from last summer is on how we set up a point system that is transparent and fair

<jeanne> https://docs.google.com/document/d/1ccKlaPMaVvazbSqMPgttvMesy9D0KAjGY01pAQES2K0/edit#

Jeanne: here is the explainer for the spreadsheet
... the first thing we started with was ranking user needs; we took the headings content, which is the most complete starting guideline. we ranked user needs and immediately ran into a problem, which is that all of them were critical. 3 is most important, 2 middle, 1 low, with ways to move that to guideline points. one thing added is looking at reputation points, but we haven't worked that out in detail yet. but where we had a lot of power and were clear on how to do this
... was when we got to individual methods. when I was trying to move the spreadsheet we did last summer into this one, it took 4 tries. how do we express this? formulas, factors, many repetitions. the realization was that every score has three components: information from the guideline, information from the method, and information from the test ("information" here is really a math factor)
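
[Illustrative sketch, not from the meeting materials: assuming the three components combine multiplicatively - the minutes do not record how they combine, and the names below are hypothetical - a single method score might look like this, in Python:]

    # Sketch only: one score composed from guideline, method, and test
    # information; the multiplication and factor names are assumptions.
    def method_score(guideline_factor: float,
                     method_factor: float,
                     test_factor: float) -> float:
        return guideline_factor * method_factor * test_factor

    # e.g. a guideline ranked 3 (most important), a fully effective
    # method (1.0), and a passing test (1):
    print(method_score(3, 1.0, 1))  # 3.0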

<Kelsey> What are reputation points...would that be a score for the organization itself, rather than the digital product we're scoring?

Jeanne: the guideline needs the most work. when we look at the methods, there are 3 aspects: ease of implementation (easy vs. hard), effectiveness (does it work for all the user needs - can be full or partial, and relates to quality), and does it allow customization (bonus)
... a method for customizing headings could be on the browser side. Wayne Dick did a presentation on how people with low vision use customized style sheets to remove white space around headings; that would be a browser method

John: as a content author, I have no control over what browser is used

Jeanne: the headings method for customization would not apply to the content author, I will add that...
... we had a number of tests, as a proof of concept, to put numbers in here. we took Bruce Bailey's idea that instead of numbering things as each worth 1, we use an order of magnitude change between them, to amplify differences and make them more visible (see the sketch below)
... [Jeanne walks through the spreadsheet.]
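
[Worked example of the order-of-magnitude idea, assuming ranks map to powers of ten; the exact numbers in the spreadsheet are not captured in these minutes:]

    # Sketch only: linear ranks 1, 2, 3 become 1, 10, 100, so the gap
    # between adjacent ranks is amplified and easier to see.
    linear_ranks = [1, 2, 3]
    magnitude_scores = [10 ** (rank - 1) for rank in linear_ranks]
    print(magnitude_scores)  # [1, 10, 100]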

John: lots of concerns with the proposed scoring mechanism - it rewards methods as opposed to outcomes.

Jeanne: to correct you - it is a result of tests, which are a result of outcomes

Shawn: effectiveness is incorrect as it is written; as proposed, it looks like it is associated with methods

Jeanne: div with ARIA may not be correct

Shawn: the point around it is less than for that example than that example is for the first place

John: for a screen reader user, this is not capturing whether the semantic structure is correct

Jeanne: that is the quality test, the rubric that we did 2 weeks ago

John: how will that have an impact on scoring?
... scoring seems to happen at the page level; how do we bring that up to a master score? how can I get a 72, instead of 100 or 1?
... what if it works for some people but not all?
... addressing the language of a page is easy to do, but the language of content inside of a page is harder to do.
... two SC, linked: one is hard to do, one easy to do - what is the relationship between them in terms of points?
... we need to look at this in a holistic way. when we assign points, where did 100 come from? how do we make that determination?

Jeanne: these are all really good questions; we need to work on that. I'm trying to bring you up to date on where we are.

John: I wasn't part of the work that happened, but the focus seems to be on methods, and that is not the right thing to be testing and scoring.

Jeanne: we will discuss that in the conformance group
... we will address proposals today and Friday
... we spent a lot of time last summer looking at rubrics and formulas in order to rank SC, but with everything we tried - 35 pages of emails - any time we said this piece of guidance is more important than that, we ended up with a bias against the group whose need wasn't ranked as important.

<Zakim> Lauriat, you wanted to touch on the point system and progress so far.

Jeanne: we tried to rank by priority of user need, but there is always some group for whom that need is critical. and we did experiment with it here; we did not see a way to rank needs, because it wasn't fair to people with some disabilities.
... given the regulatory need for this, we couldn't rank needs.

<JF> From Jeanne's roll-up of comments and previous work: Techniques tell you if a particular solution was used; out of a possible infinite number of ways to do it, a technique describes one solution. That's fantastic if you're a developer and want to know common ways to do something correctly. It's not useful if you want to know if something is conformant, because not using the technique doesn't actually tell you if something is non-conformant.

Alastair: there will be some kick-off for the conformance work; some people continue what we've worked on so far, and if people have an alternate idea, they run with that for a few weeks and we see what we get to

<JF> (cont.) Rules do just that. If a rule is applicable it will tell you if (part of) an accessibility requirement is non-conformant.

<alastairc> Cybele: Will we be able to get to the Google Doc explainer? Have some concerns that we should test our assumptions before people work on various things.

<JF> A HUGE +1 for Cybele's point re: assumptions

<Lauriat> +1

Kelsey: with the outcome of not being able to rank criteria based on user needs, how do we avoid that? how is that practical at an organizational level?
... is that a roadblock?

Jeanne: having a filter or tag of priorities, using easy checks as a guide - here are the things you can do first because they are easy to implement. it's not ranked by impact on a disability; they have a big impact and are easy to do.

Kelsey: I thought the new scoring is a motivator for updates; how does that align with all criteria being equally prioritized?

<Zakim> alastairc, you wanted to suggest conformance group starts with requirements overview, and then pursues 2(ish) different models.

Jeanne: trying to get rid of inherent bias. but when you look at the report, the disability best served by level A is people with no vision; low vision is AA, which says people with no vision are a higher priority than people with low vision. that is the bias we are working against. trying to take what we did last summer and move it forward from November
... to where we got to in November on conformance.

Kelsey: that no vision vs. low vision example is helpful to understand

Jeanne: kick-off on Friday on who is working on what guideline. if you know you are working on a guideline, please work on it; we NEED that information for conformance, we DESPERATELY need the info at the guideline, test, and method level
... to have real content to test

John: lots of assumptions here, and a lot of them concern me; questions have been raised that have not been directly answered.

Jeanne: once you see the previous work laid out, you see the assumptions, and you realize how much we need data - guidelines, methods, and tests - in order to test assumptions, instead of just debating
... this is why writing guidelines is PARAMOUNT
... to test assumptions

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/06/25 14:31:12 $
