[reviewing slides] and introduction to Silver for observers
<mikeCrabb> Hi, does anyone know what the call-in details are for the TPAC meeting today?
<Lauriat> Presentation, starting at Slide 22: https://docs.google.com/presentation/d/1V_nYD27N6kx8gRha0rrdQK8aKyvg7kKXu6rs44We7IU/edit#slide=id.g44e0248110_0_0
<Charles> I have a meeting conflict for a while, so I am going to leave the audio on low in the background and keep IRC open. Lurking mode.
[conversation about ways that Silver can include tests]
Anne: ACT is moving away from telling people a test procedure and toward rules for what the results of the test should be.
Amy: When we were doing the framework for digital musical instruments, we talked about functional needs -- that keeps people from only doing the items that are only for blind people.
Anne: The Monitoring decision was divided into seven functional disability areas.
Audrey: I disagree with dividing it by disability or functional disability. People were asking what disabilities have a greater priority than other disabilities.
Amy: Accessibility has to be
    built in, rather than something bolted on. When you design with
    priorities for disabilities (A, AA, AAA), it encourages
    accessibility bolted on at the end.
    ... be aware that you have to be able to disable motion,
    because people forget about this.
<Charles> wherever possible, we have tried to avoid naming a disability or disability category. instead, the hope is to reference the human need, like: “cannot see”. where necessary, the human needs combined, like “cannot see or hear”
This is the write-up on usability testing from the Silver Design Sprint
Amy: I used a scale for testing where if you drop it 10 times and it doesn't break, it scores more highly than something you drop once and it breaks.
Shawn: Did I do it well enough
    that I can move on? Through the range of "I want my users to
    have an awesome experience"
    ... We are moving toward a task basis. We may have to rewrite
    everything to go there, to avoid the problem of having to do
    usability testing of every component. But today we have
    component level testing. We have to bridge the gap.
    ... Do we keep an element-focused conformance model, or do we
    rewrite everything to be task-based and let go of all the
    valuable existing guidance we have? I hope we can find a way to
    bridge this and not have to go to one extreme or another.
    ... the way we do it today is run the AXE library so that we
    get feedback at each step of the task.
Anne: But what about the other
    10,000 pages? You can test a maximum of 10 tasks.
    ... I want to see both -- people should also get points from
    passing an automated test of their 10k pages
Wilco: What is the problem we are trying to solve?
Shawn: Sites that have technical conformance but users with disabilities can't use them.
Anne: Will every task failure in a usability test be a violation?
Shawn: In the example we did at the Design Sprint, if it doesn't work for every user, then it would be more of a usability issue. I run into it often where a reported bug turns out to be a usability issue.
Anne: I worry that it will be
    watered down because we are doing usability. If the boundaries
    are not clear, then it will be a problem.
    ... I am worried about blurred lines.
Wilco: how do you expect this to work?
Jeanne: We expect to have a guideline for alt text. Then there would be methods that would have test results.
Wilco: Writing test results will be harder than writing procedures
Shawn: Gives an example of alt text in Google Docs, which doesn't use a DOM that the existing Techniques apply to.
Wilco: Alt text, for graphics, does it have an accessible name?
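Wilco's question ("for graphics, does it have an accessible name?") is the shape of a typical atomic check. A minimal sketch of that idea, assuming a simplified dict-based element model and a deliberately abridged name computation (the real ARIA accessible-name algorithm has many more steps):

```python
# Hypothetical, simplified sketch of an accessible-name check for
# images. The Element model (a plain dict) and the name precedence
# below are illustrative assumptions, not the real ARIA algorithm.

def accessible_name(element: dict) -> str:
    """Return a simplified accessible name for an element.

    Precedence here (aria-labelledby text > aria-label > alt) loosely
    mirrors the ARIA spec but omits most of the real computation.
    """
    if element.get("aria-labelledby-text"):
        return element["aria-labelledby-text"].strip()
    if element.get("aria-label"):
        return element["aria-label"].strip()
    return element.get("alt", "").strip()

def image_has_accessible_name(element: dict) -> bool:
    """Atomic check: applicable to img elements; passes when the
    accessible name is non-empty."""
    return element.get("tag") == "img" and accessible_name(element) != ""
```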
Wilco: ACT rules format, we know how to write a rule and what it means
After the break, we want to see some of the ACT rules.
ACT Rules
<Wilco> https://w3c.github.io/wcag-act/act-rules-format.html
Wilco: Section 4 describes
    rules
    ... atomic rules that test specific parts of a web page
    ... composite rules that combine atomic rules
    ... accessibility requirements are for organizations that have
    to follow internal or local standards.
    ... In Silver, passing a rule might give you points. More
    points at 100% and maybe fewer for partial conformance.
    ... aspects under test are the things you would have to
    examine to run the test
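The atomic/composite distinction and the idea of points for passing can be sketched roughly as follows. This is an illustrative model only, not the ACT Rules Format itself; the function names and the 10-point scale are assumptions:

```python
from typing import Callable, Dict, List

# A test subject is modeled as a plain dict, and an atomic rule maps a
# subject to pass/fail. Both are simplifications for illustration.
AtomicRule = Callable[[Dict], bool]

def composite_all(rules: List[AtomicRule], subject: Dict) -> bool:
    """A composite rule combining atomic rules: here, all must pass."""
    return all(rule(subject) for rule in rules)

def silver_points(results: List[bool], max_points: int = 10) -> int:
    """Assumed Silver-style scoring: full points at 100% passing,
    proportionally fewer for partial conformance."""
    if not results:
        return 0
    return round(max_points * sum(results) / len(results))
```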
Shawn: Do you include specific rules from other specifications in these rules?
Wilco: You have to declare your sources. If you don't have access to particular tools or if you are functionally disabled, you may not be able to do this test.
Shawn: The HTML spec requires an
    alt attribute, the ARIA spec has a role of menu and it can only
    have child elements (for example), so it doesn't need to be in
    the accessibility guidelines.
    ... Did you code it correctly?
Wilco: We don't do that.
Anne: We are discussing whether we should spellcheck for ARIA?
Wilco: When you develop a product, you should specify the accessibility for that platform
Shawn: We want to have the company developing the platform specify as much of the accessibility as possible and we should reference it.
Wilco: we want to test by a
    procedural system and not a hierarchical system, which is why
    we don't have a composite of composites
    ... so a common use is that people set up either/or atomic
    rules, and the composite rule passes if either one passes.
    ... applicability MUST be described objectively, unambiguously
    and in plain language
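The either/or composite pattern can be sketched with a hypothetical pair of atomic rules for images (these two rules are illustrative assumptions, not rules from the ACT repository):

```python
# Either/or composite sketch: the composite passes if any one of its
# atomic rules passes. Both atomic rules below are hypothetical.

def img_has_text_alternative(el: dict) -> bool:
    """Atomic rule 1: the image has a non-empty alt text."""
    return bool(el.get("alt", "").strip())

def img_is_decorative(el: dict) -> bool:
    """Atomic rule 2: the image is explicitly marked decorative."""
    return el.get("alt") == "" or el.get("role") in ("presentation", "none")

def composite_passes(el: dict) -> bool:
    """Either/or composite: passes if any atomic rule passes."""
    return img_has_text_alternative(el) or img_is_decorative(el)
```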
Anne: We prioritize, unambiguous and link out to definitions.
Wilco: We say "visible" and link
    to a very specific definition of "visible"
    ... because it is objective doesn't necessarily mean it is
    automatable.
    ... what you should look at is objective, but the purpose of
    the element may be subjective
    ... all of the expectations must be true
    ... the logic of the expectations of composite rules must be
    spelled out.
    ... rules always have edge cases. The edge cases have to be
    included in the rules, so the rule may not be 100% accurate,
    and that's ok in many cases. It needs to be transparent.
    ... if there are major accessibility support concerns, that
    should be included in the rule.
Shawn: How do you keep it up to date?
Wilco: Rules get out of date.
    They are informative so they can be updated.
    ... I have been looking at a project that looks at tests that
    are being run actively on assistive technology.
Shawn: Can you link to the
    bugs?
    ... that would flag it so that it can be specific, targeted and
    can have a bug filed against it.
Wilco: Test cases
    ... accuracy. Accuracy is difficult, because things change all
    the time.
Wilco shows a rule from auto-wcag.github.io for 4.1.2
Jeanne asks if we can link to it in our Silver demo.
Wilco: yes
Shawn: How do you score it?
Wilco: It maps to WCAG, how you score it is up to you. There are different reporting systems. Most of the rules only tell you if you fail, they don't tell you if you passed.
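Wilco's point that rules map to WCAG while scoring is left to the implementer, and that most rules only report failures reliably, can be sketched as a small outcome summary. The outcome names loosely follow the EARL/ACT conventions (passed, failed, inapplicable, cantTell); the summary shape itself is an assumption:

```python
from collections import Counter

def summarize(outcomes: list) -> dict:
    """Count rule outcomes for a report. Only 'failed' counts against
    the test subject; anything not confirmed failing is reported as
    passed, inapplicable, or cantTell rather than assumed passing."""
    counts = Counter(outcomes)
    return {
        "failed": counts.get("failed", 0),
        "passed": counts.get("passed", 0),
        "inapplicable": counts.get("inapplicable", 0),
        "cantTell": counts.get("cantTell", 0),
    }
```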
Shawn: How to write a rule that requires human judgement?
Wilco: Mostly we don't, but one person is working on a rule that has a human judgement.
[shows]
Wilco: We have to make the problems as small as we can get them, and then resolve the interpretations.
Anne: One of the rules was written by the Norwegian government agency, which then started assessing fines for the organizations that didn't comply.
Wilco: Over lunch I talked about
    adding methods that could earn points for using accessible
    content management systems and IDEs.
    ... If you can encourage browser vendors to make focus visible
    or to override the problems with single-key shortcuts, then you
    shift responsibility away from authors and toward user agents
    and AT.
Title: Silver TPAC Meeting Day 1
Present: Charles, jeanne, anne_thyme, Wilco, audrey, RedRoxProjects_, shawn
Scribe: jeanne