See also: IRC log
wilco: shadi, please walk us through the proposal
shadi: Wilco and I will be looking back at the previous work statement from the WCAG WG
... some alignment is still needed on the deliverables section
... the deliverables description contains more details about the approach and the relationships with other groups
... not much is new, we can look at the deliverables definition a bit more
wilco: what is left to do on this? on communication?
shadi: yes, the question is about how to
reuse the CG mailing lists, or a task force list, etc. Cleanup is needed
... another question is about the participants: who really wants to participate here?
... this has not been looked at by the WCAG chairs and group
judy: It would be good for them to review it. The key question will be resource allocation
... so far we still don't have anything clear, but no harm in asking for feedback
... about the further description of the deliverables, I want to make
sure they're clear enough, as well as other sections of the statement
... the question of what layer becomes a Rec, etc.
... shadi, how do you think that maps to other Rec track practices?
shadi: we need to keep an eye on other groups, which may have a different setup and different naming
judy: how would this map to resources from other groups?
<shadi> [[https://www.w3.org/testing/browser/]]
judy: I'm curious how these deliverables can map to all the resources that have been provided in the area of a11y testing (see the recent announcement from Microsoft)
<Judy> [Judy posting several links from Cynthia Shelley at Microsoft]
<Judy> Accessibility Test Automation using WebDriver and UIA went open-source today.
<Judy> Blog Post: https://blogs.windows.com/msedgedev/2016/05/25/accessibility-test-automation/
<Judy> Github: https://github.com/MicrosoftEdge/A11y
<MoeKraft> I have seen this.
<shadi> [[https://w3c.github.io/webdriver/webdriver-spec.html]]
judy: curious whether there are relationships or opportunities between the current draft and the various things already out there
shadi: seems compatible. From a conceptual perspective, having a spec that defines tests and test rules, together with a framework, doesn't seem to create issues
judy: a lot of the demand I was hearing from members is about having an authority for the definition of rules
... if you look at this picture of what the normative layers are, do you think that it is OK?
<Ryladog_> +1 for authority in the definition of WCAG 2 (+) rules
moe: currently our tooling is based on the OpenAjax ruleset with some extensions
... when I heard about this group, my first question was about how to
leverage the work of OpenAjax
<MoeKraft> https://fae.disability.illinois.edu/rulesets/
judy: yes, big question. I think originally we intended to leverage this work
katie: I think that's what the W3C can bring to the table, what is considered an authoritative set
judy: does this approach do that for you?
katie: yes
wilco: from my perspective, the framework will be the standard, but we need flexibility for the authoritative set
... we need to frequently test the accuracy of the rules
... we need the flexibility of not having them as specs themselves
... moe, is it going to fit with your organisation or do you expect rules to become specs?
judy: it's been a pretty clear argument that the rules would be pretty fluid and would be hard to standardise
... I think this would fit
... you asked if this is something the WCAG WG should be looking at and I think it's the right time
shadi: I think this may not be very clear
in the doc.
... regarding John Gunderson's comment, we can do more about explaining the vocabulary
... the ruleset will be technology dependent
... regarding your comment on authoritativeness, the rules are approved by the WG
... they conform to the definition spec and have been validated
... we could have a snapshot list every 6 months or so, or we can have a different way of signaling that on GitHub
... the authoritativeness has to be defined: how is it going to be defined?
... how do we label the rules?
wilco: we can do without labelling for now
... but there will be some rules that have been vetted by the WG
judy: an authoritative set that will evolve, not a static set
... it has been done for the supporting techniques
... no reason why it wouldn't fit
... it requires a lot of discipline to keep publishing these updates
... wrt John's point about technology specificity, that makes sense to me
... broadly, on the question of whether it's feasible within W3C to develop a standardised definition and fill it in with progressively vetted things
... yeah, I believe this is consistent with stuff WCAG has already done
... the techniques are informative, but there is a defined process for how they are vetted
... that is a viable approach IMO
shadi: it's a similar process to the one used for the techniques and features
... the difference is that the development can happen outside the task force
... the work can be done externally, then can be demoed to the group for
approval
judy: can we call that question right now?
... we may need to do that on the list as well
wilco: OK, I will set it up
judy: does anybody here object to the deliverables as outlined in the wiki subpage?
wilco: we can do that within the next week
<MoeKraft> I'm still digesting deliverables. I will respond to Wilco's email.
wilco: if there's any feedback we can still address it, otherwise we can take it to the WCAG WG
<Ryladog_> +1
<MoeKraft> 0
<shadi> +1
+1
<Wilco_> +1
annika: I had a few comments on the wiki
... most of them are minor or already covered
... like the procedure for regular updates to the rule set
<Judy> +1 specifically to Annika's request to specify regular updates
<annika> Question on table with responsibilities: Why is the benchmark tool developed by auto-WCAG and the Rules suite frontend by ACT TF? In my opinion it would make more sense to do it the other way around.
<shadi> https://www.w3.org/community/auto-wcag/wiki/Talk:ACT_Deliverables
wilco: I think it should be available as a W3C resource, so it makes sense for it to be developed by the TF
... as for the tool, it can be done as an open collaboration, so it doesn't have to be closed to the TF
... I don't want to be the implementor of the specification that I develop myself
annika: then I misunderstood what the tool was
... I thought the tool was part of the deliverable
wilco: the idea is that we would develop the benchmarking method
... the tool would be an implementation of the method, which would actually do the analysis of the rule
... it would pull a number of pages, run the evaluation on those pages automatically, and the results would be checked manually
... you need a rule implementation and some way to manually compare the results of the rules to the evaluation
annika: it's a way of testing an implementation?
wilco: yes
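[A minimal sketch of the benchmarking flow Wilco describes above: fetch a set of sample pages, run an automated rule implementation over them, and leave the outcome to be compared against a manual evaluation. The page URLs, the run_rule() helper, and the CSV layout are illustrative assumptions, not the ACT TF's actual method or tool.]

    # Illustrative only: hypothetical names, not part of the ACT deliverables.
    import csv
    import urllib.request

    SAMPLE_PAGES = [
        "https://example.org/",  # placeholder test pages
        "https://example.com/",
    ]

    def run_rule(rule_id, html):
        """Hypothetical automated rule implementation.

        Returns 'pass', 'fail', or 'cantTell' for the given page source."""
        # A real implementation would parse the DOM and apply the rule's test steps.
        return "cantTell"

    def benchmark(rule_id, pages, out_path):
        """Run one rule over the sample pages and record results for manual review."""
        with open(out_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["page", "rule", "automated_outcome", "manual_outcome"])
            for url in pages:
                html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
                # The last column stays empty; a human reviewer fills in the manual
                # result so automated and manual outcomes can be compared for accuracy.
                writer.writerow([url, rule_id, run_rule(rule_id, html), ""])

    benchmark("example-rule", SAMPLE_PAGES, "benchmark-results.csv")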
shadi: annika raises a good point.
... not sure if it needs clarification, but please send feedback
... wilco, not sure where the frontend has to be developed. It could be developed in auto-wcag or somewhere else, as long as the TF oversees it
... we can leave it open
... also, it's good that you're talking to John Gunderson, I'd like some clarification on the question in his email
wilco: if I understand correctly, they've gone beyond WCAG testing and have done testing for ARIA best practices
<Zakim> shadi, you wanted to ask about link to authoring practices
wilco: so there are other rules you could run and other specs you could test against, it doesn't have to be WCAG
... I think that's his point
... our scope would be WCAG, but you could apply it to other areas
... does that clarify it for you?
shadi: yes
... we can probably describe this a little more
... focus primarily on strict WCAG first, then best practices or other
areas later on
moe: Most of our rules align with OpenAjax; we do have some rules for ARIA, and some rules about ARIA roles
... there are times when we find that for a given role some inappropriate state or properties are associated
... it's not only best practices
... about my 0, I just want some more time to read the document
judy: I'm reassured when people spend the time to carefully read and review it :-)
... really good to give it some scrutiny, everyone else is welcome to do
so!
wilco: annika, I will have a closer look at your comments and update the deliverables doc accordingly
... shadi, you'll update the work statement
... after that, we can email the group to review it.
... final thoughts?
moe: good work! I'm happy to be part of the team
annika: no further comment ;-)
judy: once it gets under discussion in WCAG, we might see a brief lag for a few weeks, don't be discouraged by that, and thanks for all the work
shadi: wilco and I will be doing cleanup, thank you everybody and looking forward to comments
romain: I appreciate the good work! looking forward to the reviews...
wilco: ok, thank you, talk to you next week for the technical meeting!