Silver Conformance Meeting

13 Aug 2019


Present: AngelaAccessForAll, Jeanne, Kim, janina, JF, Luis, Jennison
Regrets: Peter, Bruce, Makoto


Continuing Goals to Measures (new proposals to review)

Jeanne: Technology Neutral
... Technology Neutral guidance that will be supported by methods and tests. Did that throughout requirements.
... The goal is the technology specific guidance will go into method, which is equivalent to technique. More neutral guidance will go into core guidelines.

JF: Where do existing criteria fit in?

Jeanne: Will become guidelines. Technology specific will go into methods. That's what we're calling core guidelines so it's easier to understand. Propose we take 4 existing, 2 from COGA and take through our process and have them give approval.

JF: Is that what the morning group is doing?

Jeanne: It is, but it's how we need to evaluate around conformance.

JF: When will we talk about scoring?

Jeanne: Until we know what we're scoring, it doesn't make sense to score. What are the issues, what do we have to do, how do we measure, what's on the issue list and what should be there. Then we can evaluate the different proposals.
... We haven't figured out things around the scoring system. Then it will be easier.

JF: WCAG 2.0, WCAG 2.1, eventually 2.2. We have bronze today since we want it to equal 2.1 A and AA. We need to capture and attach to scoring, but we don't have a unit of measure. What is the unit of measure? Can think about when migrating the existing criteria. Gives us a minimum to work off.

Jeanne: That's one way to address, but talked about it with others, and we need to address issues. If we score the wrong thing, we'll need to redo the point system. Need to address the bigger issues first. Then the unit of measure when we know what we're measuring, why, and the major issues we need to address.

<jeanne2> Take 4 existing success criteria and 2 new guidance proposals from COGA or Low Vision Task Force through the content development process including Guideline and supporting Methods and tests.

<jeanne2> Demonstrate our work product with AGWG???, Low Vision and Cognitive Task Forces. If they give approval of our work to date around technology neutral, that will be considered a success.

Jeanne: Do we want approval from AGWG?

Janina: Don't know about formal approval. Communicating is smarter.

Jeanne: Definitely communicating. We report progress on our group. May be broader package that they address.

Kim: What would approval from them give us?

Jeanne: Disapproval may cost us time since we may have to redo things. We have to do anyway, since it's part of how we're meeting the requirements. There are people that think deeply about the issue. Maybe approach them individually.

Kim: Not sure what we have to share.

JF: The 4+2?
... I think it would be smart to socialize in a larger group before another 4+2. Make sure it looks and feels right. Don't see a downside to taking to larger group.
... More coming down the pike. Let's make sure what we've done with 4+2 is comfortable for everyone.

Jeanne: Maybe leave that as a question and move on.

Kim: Don't have a strong feeling, but wanted to know if pro or con either way.

Jennison: Maybe if no strong conviction, tend to agree with John: Give to them and let them take a look.

<LuisG> should include

Jeanne: Quick IRC poll: How many think we should include them, and how many think we shouldn't?

<LuisG> Jennison: should include

<jeanne2> JF: Should

<Kim> +0 - fine either way

<janina> +0

<jeanne2> +0


Jeanne: We'll put it in as a note to larger group, and make sure they agree as well.
... Are people okay with the rest of it?

<jeanne2> Proposed: Take 4 existing success criteria and 2 new guidance proposals from COGA or Low Vision Task Force through the content development process including Guideline and supporting Methods and tests.

<jeanne2> Demonstrate our work product with AGWG, Low Vision, and Cognitive Task Forces. If they give approval of our work to date around technology neutral, that will be considered a success.

<jeanne2> +1

<LuisG> +1

<Kim> +1

Jeanne: Quick poll


3.5 Readability

<jeanne2> The core guidelines are understandable by a non-technical audience. Text and presentation are usable and understandable through the use of plain language, structure, and design.

<jeanne2> Proposed: Take the 4 existing success criteria and 2 new guidance proposals (from Test of 3.4 Technology Neutral)

<jeanne2> Demonstrate to Cognitive Task Force and ask for their feedback. If they give approval that will be considered a success.

Jeanne: Any comments or questions?

Luis: Seems like it's more about content rather than the method.

Jeanne: The conformance side is that the methods and how we test and score are understandable by a non-technical audience.

Luis: Procedure is the same. Doesn't sound like about conformance; more content.

Jeanne: Should we remove this one?

Luis: If covered somewhere else, it might be out of scope. If it is in scope, leave it in. If focusing on conformance, it seems out of scope.

Jeanne: I didn't make this list; we're working from the joint group's work. Noting that this is about content, and we should move on.

Jennison: We talk about readability. During calls, someone mentioned grade level. With plain language, is it assumed that there's a grade level attached to it?

Luis: That's just one way people analyze it.
... There's other ways of measuring it.

JF: I remember that the COGA task force were adamant that grade level isn't the way to measure it.
... They have a rigorous algorithm, but not the end-all, be-all way.


<jeanne2> Regulatory Environment

<jeanne2> The Guidelines provide broad support, including

<jeanne2> Structure, methodology, and content that facilitates adoption into law, regulation, or policy, and

<jeanne2> clear intent and transparency as to purpose and goals, to assist when there are questions or controversy.

Jeanne: Any questions on explanation of what the requirement is?

<jeanne2> Proposed:

<jeanne2> Demonstrate the point system proposal to a group of 5? Regulatory stakeholders including at least 3 countries and 2 accessibility legal experts.

<jeanne2> Measure success by general approval, with no objections. Not all suggestions for improvement need to be implemented, but there cannot be any “I can’t live with that” objections.

Jeanne: Is this enough to be credible?

JF: Five isn't a bad number, but the more feedback on regulatory and legal is better.

Jennison: When we choose countries, or explore areas, it wouldn't be US, Canada, and Europe. Maybe Japan, India, or somewhere else. At least one "developing" country that has accessibility legislation to make sure we're covering all situations.
... Don't need to nail down 5, if 6 or 7, it's not a bad thing.

Jeanne: Was thinking we could ask Makoto.
... Contacts in Korea and Brazil.

JF: Check with Lisa Seeman for Israel since they have strong regulatory there.

Kim: Perhaps say at least 5, but no more than 10, so we have more flexibility.

Jeanne: Good to get legal experts outside the US.
... I'd like to look at issues list.
... Anyone want to look at motivations and scope?

Luis: I could try. Need to check.

Jennison: Happy to help.

Reviewing the issues we need to solve in Conformance

Jeanne: I'd like to get started identifying the issues we still need to solve.

<jeanne2> https://docs.google.com/document/d/1wklZRJAIPzdp2RmRKZcVsyRdXpgFbqFF6i7gzCRqldc/edit#heading=h.32hyahymjpj8

Jeanne: This is the document we worked on in May that was a summary of where we were in conformance. This was before the conformance subgroup and before proposals.
... I wanted to make sure we have a robust list of issues. I don't expect everyone to know this week, but I want to look at what's there and see if we covered the major things we want to include.
... If you haven't reviewed this recently, I'd be grateful if you did. This is a summary of work from last November plus the CSUN face-to-face meeting.
... The first issue that the group at that time agreed that the meta-goal: How do we make the conformance better aligned with people with disabilities, knowing that they have different experiences?
... Comments that people can have conformant sites that aren't accessible.

<Kim> 2. Will the model of architecture we are proposing address the needs identified?

<Kim> 3. What measurements should we encourage?

<Kim> 4. How do we set up a point scoring system that will be transparent, fair, and motivate or reward organizations to do more? There is an experiment with a point scoring spreadsheet. That is not intended to be used by regular users, only accessibility policy experts, regulators, and lawyers. (Bruce recommends a proof of concept that is more exaggerated (order of magnitude) to develop the concept, then refine it later. )

<Kim> 5. How do we maintain a point system so it stays current, but is protected from "gaming"?

<Kim> 6. How do we set up methodologies for task-based assessment that can be used across a breadth of websites and products? The nuance of defining a task (granularity, paths, whether different multiple paths are more accessible to certain disabilities)

<Kim> 7. How do we migrate people from WCAG 2.x to Silver from a compliance viewpoint? (for example, should Bronze level equal WCAG 2.0 or WCAG 2.1?)

<Kim> 8. How do we decide what are minimums that organizations must meet? Should that just be the non-interference success criteria of WCAG 2.x or are there more?

<Kim> 9. Should we require points be spread over categories of user needs? What list of user needs should we use?

<Kim> 10. How do we draw a line between "equivalent experience" and not identical experience? The example is a Map application where the complexity of the visual experience is too overwhelming to express in text equivalent.

<Kim> Exceptions These are use cases where organizations make a good-faith effort to make their site accessible and it still has problems. If we have a list of use cases, we can address them.

<Kim> - "Substantially conforms" came out of the Silver research where companies had a generally accessible site, but it was so large or updated so quickly that it wasn't possible to guarantee that it was 100% conformant. Facebook was an example of a site that was literally impossible to test because it was updated tens of thousands of times per second.

<Kim> - "Tolerance" is a different concept of a less-than-ideal implementation but no serious barriers. I think we could collect those "less than ideal" examples when we write the tests for the user need. How we would flag them as "less than ideal" and refer people to better methods seems like a solvable problem.

<Kim> - "Accessibility Supported" is another slice of this problem, where organizations code to the standard, but it doesn't work because of some bug or lack of implementation in the assistive technology. We have discussed noting the problem in the Method, and then tagging the Method for the assistive technology vendors to know they have a problem, or make it easy for SME's to file bugs against the AT (or user agents, or platforms, etc.)

<Kim> - Where something conforms, but the users are still not able to go through the task or get the information they need.

<Kim> - Being dependent on an external vendor and you can't fix it until the vendor fixes it.

<Kim> - A Map application where the complexity of the visual experience is too overwhelming to express in text equivalent.

<Kim> - Global exceptions that currently in WCAG are on an SC-by-SC basis. For example, in 2.1 SC 1.1.1 there is an exception condition that: “If non-text content is a test or exercise that would be invalid if presented in text” but there is not a similar exception under 2.1 Guideline 1.2 for Time-Based Media. This is a problem since videos could also be used for testing, and adding captions or audio description might invalidate the test.

<Kim> -- Conformance in Silver should include an exception for situations where conformance would be a fundamental alteration or otherwise invalidate the activity.

<Kim> -- There might be a similar exception related to security concerns.

<Kim> -- There might be a similar exception related to real-time events.

Luis: User generated content? Third-party content: Even if you could analyze Facebook, there are things that users could post that the company shouldn't be held responsible.

Janina: I take that as implicit, but doesn't hurt to make explicit.

Jennison: Company X is using third-party vendor content that they can't make accessible.

<Kim> Potential addition: User Generated content: For sites like social media sites where users generate the content. The site can provide the tools to make it accessible, but should not be held responsible for the content generated by users.

Luis: Against that being an exception since that's a choice a company made.

<jeanne2> 3rd party content: When there are no other commercial options for the 3rd party content.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/08/14 00:24:44 $
