<Rachael> scribe+
scribe+
wilco: generic rules, from ACT
perspective, don't require a particular technology but the
examples need to be in something
... whether HTML or PDF or md
... being specific and clear requires technical
definitions
... there seems to be a consistent need for "generic"
testing
Rachael: there is a need for tech
specifics but there are places that aren't tech dependent
... example of wording or diacritics
... second, need the generic to sit above the tech-specific. It
will take time to write all the technologies
... one level up from the tech that is still very specific and
generic will be helpful
... third, coverage for all rules that is not strictly HTML
alastairc: do we need to use ACT
Rule format or adapt or take a different approach for
generic
... top-down test point of view like a user
... ACT rule is most granular level
... what is needed in between?
... partly an education thing for the group but think there is
a need for something not quite so strict as ACT rule
wilco: first topic, not everything we test requires a technology
<alastairc> To re-phrase one of my points a little, a lot of the group are used to testing top-down, as a user, rather than going through the details of the content / technology.
wilco: makes sense in this case
helen: we need a clear definition
of when we need to be device-specific
... example: flashing depends on device and size for pass/fail
alastairc: want the process of
breaking down the provision but not be tech-specific
... or use ACT rules when tech-specific
... testing of content and assign pass/fail vs. testing with
AT
rachael: can we get to a middle
ground
... example "block of text" definition. can be written at a
higher level with flexibility that could be more detailed in
examples
... for wcag3 we need coverage
wilco: with enough examples get a good understanding
rachael: can say ACT is not going to provide coverage but still need some kind of test process
wilco: why are the requirements not generic, testable statements?
rachael: that would triple the number of requirements we have
alastair: in wcag2, we have techniques with little test procedures. what were the issues that led to ACT rules?
helen: people are seeing a
problem when they haven't started
... getting tied up in the weeds
... for starting point, focus on getting rules written and get
coverage
rachael: probably need to acknowledge if we're not going for full coverage but will get pushback
alastairc: there isn't full coverage for WCAG 2
<Wilco> scribe+
<Wilco> kathy: On text and wording, someone had already drafted an ACT rule
<Wilco> ... I think it was a good first rule. As we reviewed my comments there were technical difficulties. They tried to copy terms, but they said they didn't have the technical knowledge. Going through that it was something I didn't think it was in line with the requirement
<Rachael> +1 to this process drives better requirements
<Wilco> ... It stirred good conversation. I think because they tried to draft the rule it created good conversations, which was worth it
<Wilco> Alastair: My point was that the number of rules, we may be better to say 2 or 3 rules per provision, or timeboxed in some way, rather than trying full coverage
<Wilco> ... With WCAG 2 it was also hard to write complete failures
<Wilco> Kathy: The test instructions in WCAG 2 techniques weren't too helpful. They were too generic and it created a need to be more specific
<Wilco> ... I'd lean towards more specific rather than staying in the middle.
<Wilco> ... too generic leaves room for people to come up with their own interpretation. That's what led to ACT
scribe+
wilco: would be happy to have the
group write very specific rules to think through the edge
cases
... with definitions
... think that can work
<Zakim> Rachael, you wanted to ask the risk of a generic technique
wilco: a time box exercise in finding edge cases and taking generic language
rachael: time box exercise, maybe
1 or 2 rules per requirement
... WCAG 3 original design was to have a generic technique, an
HTML technique and then build more
Present: Sage, Rachael, Helen