wilco: got a request from SiteImprove
... Romain says he’s not available
... [musings about whether kids is "considered harmful" or not]
... not sure if we should be moving the meeting
kathy: what about Wed or Fri?
wilco: we should set up a new survey
... it wasn’t super helpful the last time, but maybe things changed
skotkjerra: sure, would be good
wilco: Mary Jo, can you create a survey?
shadi: should we go through point by point?
wilco: maybe a high level overview
shadi: the question is "how inviting do we really want to
be?"
... obviously we want to make it as easy as possible for people to
contribute
... but it’s not for the general public
... we want submitters to be able to carry through the whole process
... the current process might be a bit daunting
... there can be some light tweaks, maybe an illustration or two
... at the end of the day, we do want a process where people have to
commit when they want to submit new rules
... the other aspects are about missing things, like file name
conventions, implementation manifest, etc
... (which is a small JSON file that the submitter self-declares)
... more conceptually, the format for the manifest is not there yet
... we’re working on it in parallel, so our answer will basically be
"please be patient"
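[scribe note: a sketch of what a self-declared implementation manifest might look like, for context. The format is explicitly not finalized yet; every field name below is an illustrative assumption, not an agreed format.]

```python
# Hypothetical implementation manifest, self-declared by a submitter.
# The real format is still under discussion in the TF; all field names
# here are assumptions for illustration only.
import json

manifest = {
    "tool": "ExampleChecker",      # hypothetical tool or methodology name
    "version": "1.2.0",
    "vendor": "Example Org",
    "implementedRules": [          # rule IDs the submitter claims to cover
        {"ruleId": "img-accessible-name", "status": "complete"},
        {"ruleId": "keyboard-trap", "status": "partial"},
    ],
}

print(json.dumps(manifest, indent=2))
```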
wilco: can we formulate a response
wilco: 1. it should be a strict process to ensure quality
and 2. we’re working on it
... Shadi, do you want to work on that?
skotkjerra: (missed by scribe)
shadi: it is a little bit high level, it’s a general
process. auto-wcag has a more detailed version of that
... it gives you more details, step-by-step
... this process is a little bit more decoupled from the actual tooling
(github or something else)
... this review process is expected to be implemented in a more detailed
manner by a specific group (like auto-wcag)
wilco: sounds fine
... I haven’t really looked at it from the perspective of an individual
organization
shadi: the work we’re doing in this TF is gonna raise
expectations
... but these are foundations for other groups to do the work
... there’s a bit of confusion about what the TF does, what auto-wcag
does, etc
... there’s an expectation that people will come here and know exactly
how to submit a rule
wilco: this describes what the TF should do when a group submits rules, right?
shadi: the review process says the WG is responsible, and
may delegate to a TF
... we need signoff by the WG
... we need to confirm with the AG WG. once rules go in the process,
there will be things coming in to sign off
... I’ll draft a response
wilco: this one is also from Shadi…
... you’ve already done a little update to the UI
... has everyone seen the update?
<Wilco> https://w3c.github.io/wcag-act-rules/
shadi: the idea is that we’re starting to have some example
rules
... it seems good to have a nicer display
... what I wanted to get into with this issue isn’t just a styling
update
... when we look at the rules themselves, we have the rule number, the
rule name, the WCAG SC
... it’s a static list
... we need to start displaying implementations, etc
... to see which tools or methodology implement a rule
... maybe some filtering functionality too
... it’s mostly wishful thinking on how to actually present this
information
... I was looking around at other groups, and the Web Platform Testing
stuff is interesting
... something like Web Platform Docs
... there can be many ways to organize the data (by rules, by
implementation, etc)
... we probably want a listing by rules by default
wilco: what is the data we need?
... we clearly need separate implementation files, where implementors
can say what they implemented
... we need to indicate which ones are draft (flag capability)
... we should start looking at groups as well
shadi: WDYM by "which is draft"? the status of the rule?
wilco: yes
shadi: on auto-wcag we have separate lists, 1 for drafts
and 1 for finalized
... my expectation is that we’ll only have finalized rules here
... maybe we just need separate lists (1 draft list, 1 obsoleted list, 1
master list, etc)
wilco: ok, makes sense
... I think we need a better way to do test cases as well
... right now you can’t really track specific test cases
... I have 4 features we definitely want:
... 1. definition files
... 2. rule groups
... 3. lists
... 4. better test cases
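[scribe note: the "separate lists" idea from earlier (draft / finalized / obsoleted) could also be derived from one master list with a status field. A minimal sketch, assuming hypothetical rule names and field names:]

```python
# Sketch of the lists idea: one master list of rules, each tagged with a
# status, with the draft/finalized/obsoleted views derived from it.
# Rule names and field names are illustrative assumptions.
rules = [
    {"id": "rule-1", "name": "Image has accessible name", "status": "finalized"},
    {"id": "rule-2", "name": "No keyboard trap", "status": "draft"},
    {"id": "rule-3", "name": "Blink element not used", "status": "obsoleted"},
]

def rules_with_status(rules, status):
    """Return the sub-list of rules currently in the given status."""
    return [r for r in rules if r["status"] == status]

draft_list = rules_with_status(rules, "draft")
finalized_list = rules_with_status(rules, "finalized")
print([r["id"] for r in draft_list])  # → ['rule-2']
```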
shadi: I’m not exactly sure what you’re talking about, are you trying to design the data structure behind?
wilco: I’m trying to define the list of things we want, so we can create issues and start working on them
shadi: I’m not sure
... I don’t know what you mean by each of these items
... ok, so (1) is about defining the JSON manifest?
skotkjerra: I’m confused too :-)
shadi: we start to develop a data structure
wilco: should I come up with a list of tickets that we need to work on, so we can review that next week?
shadi: +1
skotkjerra: +1
wilco: anything else about this?
shadi: who’s going to do that?
wilco: we can divide up the work
wilco: this one confuses me a little bit
... unfortunately Kasper isn’t here
skotkjerra: Kasper’s point is "how do we deal with the need
for additional input to be able to say if a test passes or not"
... let’s say you’re able to perform some steps automatically, but then
require more information
... Kasper is suggesting you can return a "failed" outcome + a list of
questions that can be exposed
... if we design it the way he suggests, you can still say it’s a failed
outcome if the list isn’t processed
... that probably doesn’t help clarify your confusion?
wilco: it sounds like we need a way to track which expectations are not completed?
skotkjerra: the alternative is to split this in ever more
granular rules
... so you’re never in the situation where you need input to later steps
... I think specifically Kasper is thinking of situation where you need
manual reviews of information gathered by automated checks
... but I’ll ask Kasper to clarify
shadi: I think I kind of get it, but this is gonna be very tool-specific
wilco: I agree, this seems too much into the details
shadi: we’re working right now on a rule on keyboard
trap...
... for all elements that can receive focus, you can navigate out of them
... you can imagine a process where you check if focus is trapped, that
would be an example of tool+human cooperation
... or it could conceivably be done fully automatically
... or completely manual
skotkjerra: this was inspired by what the Norwegians do
... in the keyboard trap case, the tool will present you an element and
let you navigate to it
... then ask you if the focus moved out of this element
... if yes/no, it prompts more questions, etc
... it’s a matter of approach
shadi: if you do just the checking if an element is focusable, that in itself isn’t an a11y check
skotkjerra: I see a boundary between tool-supported and semi-automated
shadi: the smallest possible a11y check is "go to the
element, check that the focus isn’t trapped"
... if you have a tool that only reports the element, without checking
the focus, the outcome is "cannot tell"
... this can be later refined by another tester, human or tool, that
will come to the conclusion "fail" or "pass"
skotkjerra: that’s Kasper’s point
shadi: is he advocating against the "cannot tell" value?
skotkjerra: no, he’s advocating for a list of questions that the tool will ask to get to the "fail" or "pass" conclusion
shadi: but this is tool-dependent
<scribe> [continued discussion on the "keyboard trap" example]
shadi: together with the outcome you can have info, do you want something more structured?
skotkjerra: yes, but I’ll ask Kasper
wilco: my only question is if this isn’t too implementation specific?
skotkjerra: good question, we don’t want to be too prescriptive
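[scribe note: a sketch of Kasper’s idea as relayed by skotkjerra — a provisional outcome plus the open questions a human must answer before it can become a definitive pass/fail. All structure and wording here are illustrative assumptions, not an agreed format.]

```python
# Sketch of the "outcome + list of questions" idea. A semi-automated
# check returns a provisional outcome; once all questions are answered,
# it resolves to pass/fail. Field names are illustrative assumptions.
result = {
    "rule": "keyboard-trap",
    "outcome": "cantTell",  # EARL-style value; "failed" in Kasper's variant
    "openQuestions": [
        "Can you move focus into the highlighted element with Tab?",
        "Can you move focus out of it again using only the keyboard?",
    ],
}

def resolve(result, answers):
    """Turn a provisional outcome into pass/fail once all questions are answered."""
    if len(answers) < len(result["openQuestions"]):
        return result["outcome"]  # questions remain: keep provisional outcome
    return "passed" if all(answers) else "failed"

print(resolve(result, [True, True]))  # → passed
```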
Kathy: not sure if this is related or not, but one thing I
was confused about was about rule/rulegroups
... where and how are we deciding to group rules?
wilco: that’s actually the next topic :-)
Kathy: wasn’t it what Stein Erik was defining? rules belonging to the same group
skotkjerra: rule groups don’t really work if the rules are too atomic
Kathy: the atomic rule would be that an automated tool can identify all the elements that are focusable, and then we have to check the focus
wilco: right, and you shouldn’t break this down
... about the keyboard trap case, you could have a group:
... 1. with the tab key you can navigate to all the components in the
page
... 2. you need instructions on the page on how to get to all the
components in the page
wilco: I think we already sort of looked at it, and I think yes,
they do
... the reason I think is twofold
... there are properties that only exist for rule groups (like how many
rules need to pass to pass the group)
... that alone pretty much answers the question
... I think another thing I found when working on a rule for video is
that rules may apply to different SCs in a rule group
... to meet level A for video it’s sufficient to have a text
alternative in the file
... to pass AA, you need to have audio descriptions available
... to square that circle, you’d put both of them in a group, where the
audio description rule will pass both A and AA, and the text alternative
would pass A
... I think rule groups are starting to shape up quite well
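[scribe note: a sketch of Wilco’s video example — a group where each rule lists the conformance levels it can satisfy, and the group passes at a level if some passing rule covers that level. Rule IDs and field names are illustrative assumptions.]

```python
# Sketch of the video rule group: the audio-description rule satisfies
# both A and AA, the text-alternative rule satisfies only A. The group
# passes at a level if at least one passing rule covers that level.
# All names here are illustrative assumptions.
video_group = {
    "name": "video-alternatives",
    "rules": [
        {"id": "video-text-alternative", "satisfiesLevels": ["A"]},
        {"id": "video-audio-description", "satisfiesLevels": ["A", "AA"]},
    ],
}

def group_passes_at(group, level, passing_rule_ids):
    """True if any passing rule in the group satisfies the given level."""
    return any(
        level in rule["satisfiesLevels"]
        for rule in group["rules"]
        if rule["id"] in passing_rule_ids
    )

# A text alternative alone meets level A but not AA:
print(group_passes_at(video_group, "A", {"video-text-alternative"}))   # → True
print(group_passes_at(video_group, "AA", {"video-text-alternative"}))  # → False
```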
... I need to start putting up a description for the rule group entity
... anything else we need to talk about related to rule groups?
... then I think we’re through the agenda!
<Wilco> https://github.com/w3c/wcag-act/issues/158
wilco: now that we’re using "applicability" and
"expectations", we no longer really have a procedure
... so I was suggesting to rename the section to "Test definition"
skotkjerra: I think it’s a good idea to change it, not sure about "Test definition"
wilco: I like "definition" since it’s a known testing term
... which I think well describes what this is now
skotkjerra: OK, I like it
shadi: +1
maryjom: +1
wilco: OK, I will create a pull request
... thanks everyone!