<shadi> scribe: shadi
WF: agenda https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/TPAC_2018
<steve> what is CR?
WF: out of date because of CR issues that came up
CR = Candidate Recommendation
... suggest to address these issues first
SAZ: Shadi, staff contact for ACT TF
Katie: Knowbility
KathyEng: AccessBoard
KathyWahlbin: TPG
Raf: AccessibilityOz
MJM: IBM
Wilco: Deque
Romain: Daisy
Kasper: Siteimprove
Anne: Siteimprove
Steve: W3C (observer)
Stephane: Orange (observer)
<Wilco> https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/TPAC_2018
Audrey: Access42 (observer)
<Wilco> https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/index.php?title=TPAC_2018&action=edit
[going over agenda and adjusting]
Anne: still some things on the list i sent that are not addressed
Romain: did we address comment from Alistair during the previous call
Wilco: think it relates to
existing issue
... we'll look at it
<romain> https://lists.w3.org/Archives/Public/public-wcag-act/2018Oct/0014.html
<anne_thyme> Anne's list of concerns for CR, sent October 17: https://lists.w3.org/Archives/Public/public-wcag-act/2018Oct/0014.html
Shadi: maybe if we have time, look back at work in Auto-WCAG CG
and what we can learn from it
... for example additional guidance in or around the spec
<anne_thyme> New section in spec: https://w3c.github.io/wcag-act/act-rules-format.html#accessibility-requirements
https://github.com/w3c/wcag-act/issues/290
Shadi: coming from the initial
idea of "outcome has to be consistent with the accessibility
requirement"
... think adding logic into the mapping seems a different
approach
... we have definition of outcome
... think the mapping is a finite number of combinations
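[Sketch, not from the meeting: a minimal TypeScript outline of the finite outcome-to-requirement mapping being discussed; the type names and values are hypothetical, not taken from the ACT Rules Format.]

    // Hypothetical sketch: ACT outcomes are a finite set, so the mapping from a
    // rule outcome to a statement about the accessibility requirement can be
    // enumerated instead of being a free-form text field.
    type Outcome = "passed" | "failed" | "inapplicable";
    type RequirementResult = "not satisfied" | "further testing needed";

    // One possible fixed mapping for a rule that is a partial check of a requirement.
    const partialCheckMapping: Record<Outcome, RequirementResult> = {
      failed: "not satisfied",          // a failed rule always fails the requirement
      passed: "further testing needed", // a partial check cannot prove satisfaction
      inapplicable: "further testing needed",
    };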
KathyWahlbin: definition of outcome does not address confidence levels of machine learning
Katie: Silver might also use thresholds
Shadi: shouldn't that be the expectation?
Katie: might not be the expectation but more the threshold
Wilco: disagree with using
expectation for threshold
... think it limits us from being able to aggregate
Kasper: encountered this issue
when developing the output of our tool
... we list in our rules if it is a partial test
... if the rule is a fail, then the requirement fails
... if it passes and the rule is a partial test, the
requirement is "cannot tell"
... not sure we need all these differentiators
... just a flag if the rule is a partial check
... failure of the rule is always a failure of the
requirement
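[Sketch, not from the meeting: Kasper's aggregation in TypeScript, assuming a hypothetical per-rule "partial check" flag.]

    // Hypothetical sketch of the aggregation Kasper describes.
    type RuleOutcome = "passed" | "failed" | "inapplicable";
    type RequirementOutcome = "passed" | "failed" | "cantTell";

    function requirementOutcome(ruleOutcome: RuleOutcome, isPartialCheck: boolean): RequirementOutcome {
      if (ruleOutcome === "failed") {
        return "failed"; // failure of the rule is always a failure of the requirement
      }
      // a passing (or inapplicable) partial check cannot prove the requirement is met
      return isPartialCheck ? "cantTell" : "passed";
    }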
Anne: like Shadi's approach
... can put it directly under the requirement
... and immediately see how it maps to the rule
https://github.com/w3c/wcag-act/issues/290#issuecomment-432943569
... can help manual
evaluators to know what they need to do next
... without reading the details of the rule
Wilco: think this approach is
very restrictive
... for example video
... could be video embedded through other technologies
... need to know if other technologies are used
Katie: how would it then look like?
Wilco: for example, "passes if there are no additional technologies used"
KathyW: why can't this be as a sub-rule?
Kasper: this is exactly what
composite rules are for
... adding this to atomic rules would be mixing concerns
Wilco: disagree because composite
rules aren't at that level of aggregation
... can't do that level of aggregation
Kasper: we should have that functionality
Wilco: propose to have it there
Shadi: disagree with having that in the mapping
... we also have an assumptions section
Katie: how would you account for other technologies?
Wilco: in the aggregation logic
Shadi: think that's not clean
Kasper: opens the door for wrong use
KathyW: what if it is an atomic
rule of a composite rule?
... may mean rewriting the logic
Wilco: don't think we should do that
KathyW: maybe we need different types of composite rule
KathyE: may need a third-tier
rule "criteria rule", that combines different rules
... to map to a success criterion
... for example one composite and one atomic rule combined
together determine the success criterion
Anne: agree with that proposal,
this is also what the Norwegian agency is asking for
... for tool writers it is written only once
... but for people doing manual checks they need this reminder
every time
<anne_thyme> scribe: anne_thyme
<scribe> scribenick: anne_thyme
Kathy: Can we come back to how to aggregate results for success criteria from rules. What is the big difference between Wilco's and Shadi's suggestion?
Wilco: What Shadi is proposing is
a set list of mappings between outcomes and aggregations
... What I am suggesting is a set of constraints for writing
the mapping
... The constraints could be: The mapping has to be objective,
not use X, Y or Z. So limiting what information you can use, so
you cannot do additional testing
Kathy: Testing in automated tools
can be black and white, in manual testing there can be other
inputs you want, things that are not rule based
... so I think we have to be careful in saying we don't want
anything else
... I could be looking for human input, I could be looking for
patterns or context to get a level of confidence on whether
this passed or failed
Romain: Can we move the machine learning discussion up, so that we talk about this before the requirements mapping?
Kasper: This is not limited to machine-learning, though. It could be that I sent it to 5 people, and 4 says passed, 1 says failed
Kathy: If we start looking at the way a human is processing to find out if something passes or fails, they take in a lot of different information to find patterns. The same applies to machine learning. It is different from the black and white results in today's manual testing
Romain: The rule describes the perfect processing, with 100% confidence. The implementation might come up with a different result. It's an implementation decision, that could be documented for the result
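[Sketch, not from the meeting: one way an implementation could aggregate several human or machine verdicts into a single outcome, as in Kasper's 4-out-of-5 example; the function name and threshold are hypothetical implementation choices.]

    // Hypothetical sketch: aggregate several verdicts on the same test target
    // into one outcome, using an implementation-chosen confidence threshold.
    type Verdict = "passed" | "failed" | "cantTell";

    function aggregateVerdicts(verdicts: Verdict[], confidenceThreshold = 0.8): Verdict {
      const decided = verdicts.filter((v) => v !== "cantTell");
      if (decided.length === 0) return "cantTell";
      const passRatio = decided.filter((v) => v === "passed").length / decided.length;
      if (passRatio >= confidenceThreshold) return "passed";
      if (1 - passRatio >= confidenceThreshold) return "failed";
      return "cantTell"; // not confident enough either way
    }

    // Kasper's example: 4 of 5 reviewers say passed, 1 says failed -> "passed" at threshold 0.8.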
Kathy: Making a satisfied/not
satisfied decision has to take a lot of input from many
different rules, whereof some might be manual reviews
... I am getting stuck when it comes to this level of using the
spec
Kasper: I think a third rule type would help formalise how to combine different inputs. A kind of "requirement rule"
Kathy: You can do this to a certain level, but for machine learning it will start to learn from all of the input data that we put in there, so we cannot write down the rules necessarily
Kasper: It could describe the
quality of the input data, doesn't have to describe the
process
... In the case of video elements, you can't in a composite
rule look for something that is different from the test
targets. You could look for something like this in a different
type of rule
Wilco: Why can't we have that in the requirements mapping? I think we need the flexibility
Kasper: I don't want Turing
complete atomic rules
... where the entire rule is implemented in the requirement
mapping
... The open requirements mapping seems to me like a hack to
cater for a specific type of rules more than a general
solution
Shadi: I hear from several
people, both the "machine learning side", the manual testing,
and even from Wilco, that there is a need for a mapping
... and from Kasper that there is a need for a birds-eye
view
... both Kathys have mentioned that we need information on how
to aggregate to a requirements level
... I am not saying we need another level of rules, but we need
another level of logic
Wilco: Is this within
scope?
... And is this the question about composite rules using
composite rules, or is this something else?
... I am worried about adding complexity
... and about encoding procedures into rules
Kathy: There is nothing easy right now to help aggregate results. Maybe we need this documented. I think this would address everything we have talked about
Anne: I think this is related to the questions we got for how to aggregate in the Output Data Format. We took the Output Data Format out of the spec now, but I think the need for guidance is still there, and the same that we have heard voiced before
Wilco: I think that it is within scope to get to a satisfied/not satisfied result for a success criteria
Shadi: I think that it might have been there all along in the part about "consistency to a requirement", even though that came apart when we started unravelling it
Kasper: Our Section 2 states that we are only trying to test non-compliance
Kathy: A failed rule doesn't necessarily mean a non-satisfied SC
Kasper: With composite rules, this should actually be true
<shadi> https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/ACT_Overview_-_What_is_ACT
Kathy_: If you have composite rules that can include other composite rule, this logic is not necessarily there
Romain: The part of the spec Kasper was quoting has been changed
<romain> latest editor's draft: https://w3c.github.io/wcag-act/act-rules-format.html#scope
Shadi: I think the question we
are wrestling with is "where do we put this information about
how a rule maps to a SC"
... We need to look at what should go into this aggregation
logic
Wilco: I am concerned that this would have to be something that has to be maintained someplace central
Anne: To me this mapping seems like the next logical step in harmonizing accessibility testing, since it's something that we all need to have anyway in our implementations
Romain: The benefit of separating it out into another layer like Kathy suggested would also be that you could have different mapping layers for different standards, but reuse the individual rules
Katie: Each organisation could then decide on which mapping to use
Kathy: This mapping seems like it
should be done as part of the SC documentation, and not be a
part of this framework
... This is a part that is missing in the SC definitions
today
Shadi: I agree that this is what
is missing. This work is trying to address that there is
missing information and differing interpretations
... and we are coming to this work from the bottom up
... I don't think the working group will make these
interpretations
... Coming to a decision on "this is a complete test for this
SC" will be a difficult consensus to reach, and there will have
to be interaction with AGWG on this
... and how do we phrase it so that we don't define the
internal workings of a tool
Kathy: That's why we do it on a
SC level
... We could still have gaps in the coverage of a SC
... if we have the individual rules and the SC level, it is up
to the tools to find out how to implement it
Katie: Are we taking in the fact that the non-interference SC have to pass for anything to be conformant?
Wilco: We are not looking at the
conformance level. We are looking at satisfying SC
... There is a little more to conformance than that. There is
also the "conforming alternative" part.
... Yes, the non-interference SC also have to be true on the
page even if there is a conforming alternative version.
Katie: ... and on the home
page
... These are higher level requirements.
Audrey: RGAA mostly doesn't look at alternative conforming versions
Kathy_: Trusted Testers looks first for a conforming alternative version, before testing the non-conformant content
Audrey: Clarification: RGAA tests for conforming alternative AND the non-interference SC
Mary Jo: In the VPAT there is no place to report these things. WCAG doesn't say anything on how the high-contrast mode should work.
Wilco: Satisfying a SC, as I
understand it, is that you try really hard to find a problem,
and if you can't then it's satisfied. WCAG however doesn't tell
you how hard you have to try
... Learning WCAG testing, you learn more, and gradually learn
to find more errors
... I don't know if we can decide when you have tested enough
to know if a SC is satisfied. It depends on how much effort you
put into finding errors
... Is that how you understand it too?
Mary Jo: Yes
Kathy: Yes
... that is why I suggested including a SC layer - and for each
technology, e.g. HTML, PDF etc.
... it is hard, that is probably why it hasn't been done
Wilco: This gives me reason to
think that this is out of scope
... This is probably for AGWG to decide. I don't know if we can
answer this question
... The reason why we are coming back to this is because there
is a need, especially from manual testers, to know when you
have done enough testing
Kathy_: When we are testing, we
are using samples. We don't test every single element on every
single page
... I think sampling is a different concern
... Sampling would be different across different testers.
... But the outcomes from following our approach would lead to
knowing whether a SC is satisfied or not.
Wilco: It sounds to me like there is a level between the SC and the rules, by which you can decide whether you have tested enough. That level is specific to a technology. And it may vary between implementations.
Katie: So it could be up to
companies e.g. to include tests for deprecated elements or new
ARIA features, and the ACT SC layer should be the minimum
set.
... Because all companies don't want to have the same tool,
they want to stand out
Kathy: I think what we are coming
back to the interpretation of the WCAG. All organisations have
their own interpretation, their own methodology, because it is
not clear how to meet the SC
... Like with the Evaluation Methodology, we could have another
document that is the interpretation
... I think having this other document would really help. When
e.g. developing Trusted Testers. It would have to be technology
specific.
... There were reasons not to include it for the SC.
Anne: I think the question of "enough" testing is two things: "Enough" rules, and "enough" content. I'm worried that if we don't define what "enough" rules are, how do we know if someone has properly implemented ACT rules...
Shadi: I start to think this
might be out of scope, though needed
... There is both the question of "how do I test for a SC?" and
then the "I fired all of these rules on my page, what do I do
with it, what do I miss still?"
... We could define some patterns. Rule mapping that could be
quite open, but building on Kasper's idea of focusing on the
patterns for failures.
... Example: Composite rule that tests for if there is a title
on the page and if it is descriptive, and a separate atomic
rule that tests that there is only one title. And then all of
it is collected another layer, mapping it to the SC
... So this mapping rule would have a small description and a
collection of test cases passing and failing and then
suggesting the rules that could be used for this.
... And it could have a Limitations section, e.g. for video
describing that "if there are other types of video than the
ones considered...". So we don't go the full way, but base it
on test cases, so that when you run your methodology, these are
the results you should get.
https://auto-wcag.github.io/auto-wcag/rules/SC1-1-1-image-has-name.html
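[Sketch, not from the meeting: what the SC-level mapping layer Shadi describes could contain, expressed as a TypeScript structure; all field names are hypothetical.]

    // Hypothetical sketch of an SC-level "mapping rule": a short description,
    // suggested rules, shared test cases, and a limitations section.
    interface RequirementMapping {
      requirement: string;             // e.g. "WCAG 2.1, SC 2.4.2 Page Titled"
      description: string;
      suggestedRules: string[];        // IDs of atomic and composite rules
      limitations: string[];           // e.g. technologies not covered
      testCases: { description: string; expected: "passed" | "failed" }[];
    }

    const pageTitledMapping: RequirementMapping = {
      requirement: "WCAG 2.1, SC 2.4.2 Page Titled",
      description: "Pages have exactly one title, and it is descriptive.",
      suggestedRules: ["page-has-descriptive-title", "page-has-single-title"],
      limitations: ["Content embedded through other technologies is not covered."],
      testCases: [
        { description: "Page with one descriptive <title>", expected: "passed" },
        { description: "Page with two <title> elements", expected: "failed" },
      ],
    };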
Shadi: One proposal was to have a
"partial" flag
... Then there was a want for additional information in
this
... then there was the need for a place to put it
... then there was a concern that we were limiting
implementations too much
Kathy: Test cases should not be at the rule level. I need something at a higher level
Shadi: That's why it could be in
this layer
... Does this get us out of the issue?
Wilco: It does, but it sounds to
me like major scope creep
... I don't think any of us ever set out to fully define a
SC
Shadi: I'm not saying that. It could also be a partial check
Wilco: We already do that
Kasper: No, not at a high enough
level
... We cannot combine disjoined test targets in composite
rules
<romain_> scribenick: romain
shadi: let's start again from the
beginning: do we have a concrete example?
... what are we missing?
... is that an exceptional case?
wilco: I'm saying rules are not enough to satisfy an SC
shadi: we were saying the outcome
has to be consistent with the a11y requirement
... "consistent" could include several things
wilco: I was thinking that if the
outcome is "fail" the SC cannot be satisfied
... and we don't know if the outcome is "pass"
shadi: so we're back to a partial
flag
... I thought you said it doesn't satisfy some use cases (like
the video example)
wilco: there's more to it, I'm more inclined to say we're not dealing with saying something about satisfying SC
shadi: so we don't need partial flag because everything is partial…
Kasper: we can test other stuff
than WCAG SC in ACT Rules
... for best practices, for example, a failure of the rule may
mean you pass the criteria
shadi: I heard in the morning Wilco wanted more description
Wilco: I said partial doesn't make sense, as everything is partial
shadi: OK, everything is partial.
so we can define a direct mapping
... if a rule fails, the a11y requirement is always not
met
Wilco: not sure it's true
... if a rule fails, for WCAG it means the SC isn't satisfied,
but it might mean something different for instance in
Silver
... that's why I prefer "it needs to be consistent"
shadi: WCAG is the current model we know
Wilco: but our scope is broader
than WCAG
... I still have a strong preference for the way the last
published draft is written ("consistency")
... the proposal in issue #290 is a middle ground
shadi: what do people think about that proposal?
<shadi> https://github.com/w3c/wcag-act/issues/290#issuecomment-431038446
anne_thyme: I'm just not in favor
of another free-form text field
... I want to be able to take the output data and know what it
means without going back to the rule
... whether it's a full test or partial test or some kind of
mapping, I don't really care
... but it needs to be consistent in the output data
shadi: there's a need for a definitive mapping to requirements, but maybe it's out of scope for our group
Wilco: the proposal was a way to resolve some of your concerns of having a free-form mapping, without having a limited list
shadi: but what's the issue with a constrained list?
Wilco: because there are different things than WCAG, different ways to map to rules
shadi: can you give an example?
Wilco: I'm thinking of Silver
yatil: the idea is that you have
tests, and pass/fail the tests, or are there considerations
outside the test that makes you see the outcome of the test
differently?
... I'm trying to see what's the use case for this
Wilco: there might be
requirements that say like "80%" of your images need this or
that
... it can't be an expectation because expectations are at the
target level
anne_thyme: so a rule can be at the target level, and another rule looks at the page level
Wilco: not the way it is defined today
yatil: I think that's the crux of
the matter
... you should not fall in the trap that WCAG did, and try to
be as atomic as you can to define the problems and the
tests
... then define how the SC is met at another level
... for me it feels like overthinking
... it should be possible to take the test and use them
differently
Kasper: if we can say in composite rules that "at least one" of the rule should pass for a test target, then we can also say "at least 80%"
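[Sketch, not from the meeting: Kasper's point in TypeScript, assuming a hypothetical composite expectation evaluated over the collection of per-target outcomes.]

    // Hypothetical sketch: if a composite expectation can require "at least one
    // passed", it can just as well require "at least 80% passed".
    type Outcome = "passed" | "failed" | "inapplicable";

    function atLeastRatioPassed(outcomesPerTarget: Outcome[], ratio: number): boolean {
      const applicable = outcomesPerTarget.filter((o) => o !== "inapplicable");
      if (applicable.length === 0) return true; // nothing applicable, nothing to fail
      const passed = applicable.filter((o) => o === "passed").length;
      return passed / applicable.length >= ratio;
    }

    // A Silver-style "80% of your images" requirement would use ratio 0.8.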
shadi: I'm hearing you say describing overall pass/fail for a SC
is out of scope
... but yet I feel that the proposal is trying to put that back
in again
Wilco: I'm not specially happy with this proposal, just trying to find a compromise
<maryjom> https://github.com/w3c/wcag-act/issues/290
shadi: if Silver has a scoring system, Kasper is saying this might be solved by how composite rules are defined
Wilco: yes, we need to change the
way how composite rules reuse results from atomic rules
... because right now we're only allowing 2 layers
shadi: so checking an image is already a composite rule, so you'd need another composite rule to add the "80%" requirement
Kasper: you can always flatten the composite rules, you don't really need layered composite rules
shadi: one use case down!
romain: it can be a maintainability issue, but technically it makes sense
Wilco: in a way it contradicts
what expectations are supposed to do…
... because they're intended to be assertions on single
elements, not on a collection of elements
shadi: not in composite rules?
anne_thyme: [quotes the related part of the spec]
shadi: so should we have the discussion about composite rules first?
Wilco: the thing that would need
to change is that applicability of a composite wouldn't
necessarily be the union of the atomic rules
... atomic could test a target, and the composite would test
the collection of targets
... it feels kinda hacky, but I guess it works
shadi: but not more hacky than the proposal in #290…
Wilco: right
... do we want to explore this approach?
... we need to figure out how to reword applicability and/or
expectations for composite rules
... does that sound like the right way to go?
romain: we have to review these
sections anyway, since it's not clear right now
... the definition scope of composite rules is broader than the
definition scope of the atomic rules
... so applying an atomic component of a composite rule just
isn't defined for every target of the composite rule
Kasper: [gives an example]
anne_thyme: I'm also having a
hard time understanding this, let me try to rephrase
... would it work if we say something like "the rule passes if
the atomic rules are either inapplicable or pass"?
romain: yes, something like that
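[Sketch, not from the meeting: Anne's rephrasing as TypeScript, for one test target of a composite rule; names are hypothetical.]

    // Hypothetical sketch: "the composite rule passes if the atomic rules are
    // either inapplicable or pass" for a given test target.
    type Outcome = "passed" | "failed" | "inapplicable";

    function compositeOutcome(atomicOutcomes: Outcome[]): Outcome {
      if (atomicOutcomes.some((o) => o === "failed")) return "failed";
      if (atomicOutcomes.every((o) => o === "inapplicable")) return "inapplicable";
      return "passed"; // no failures, and at least one atomic rule applied
    }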
Kasper: if we consider the set of test targets rather than all individual test targets, how is that supposed to work?
Wilco: that's why I think the applicability has to change too
Kasper: right, makes sense
Wilco: we're opening up to other problems
anne_thyme: is it a problem with Silver?
romain: not really, we want to be compatible with any kind of a11y specification
maryjom: there has to be some
sort of aggregation rule
... where you take all of these composite rules and/or atomic
rules and put them together to see whether it passes or
fails
... then you can gather the number and look at the percentage
of passes
Kathy: it can even be more than the page level, like sets of pages
Wilco: it looks like the very
first proposal we had
... having a very loose definition of what it meant to be
consistent to an SC
anne_thyme: the definition of "consistent" is different for different people, that can be confusing
romain: consider the use case of
checking that all the pages in a website or EPUB must have a
title
... you cannot do that as a composite rule today
anne_thyme: yes, you can define that as an atomic rule that is applicable to the set of web pages
romain: yes, but then you reinvent the wheel and cannot reuse the existing "has-title" atomic rule
Kathy: you want to use the result of atomic rule in a broader composite rule [gives an example with page titles], I don't think we can do that in ACT right now
[discussion on how ACT can implement the use case]
Kathy: right, so you can do that as an atomic rule, but not as a composite rule
Wilco: so you mean the atomic rule is an HTML page, and the composite rule applies to the collection of pages?
Kasper: yes
anne_thyme: how you implement a
rule, whether you use an atomic rule or not, is not really an
issue for the ACT format
... not sure there's an issue with that
Kasper: right, it's an issue of duplication
<Jey> taking over scribing :)
<shadi> scribe: Jey
<shadi> scribenick: Jey
wilco: do we need to re-write
applicability of composite rules
... creating an issue to tackle the above.
... problem statement - we want to be able to write an
expectation on a collection of outcomes, rather than individual
ones
everyone: various attempts at defining the problem statement...
shadi: perhaps we should rethink 'union'? why can't the expectation and applicability of composite rules be the same as for atomic rules?
romain: we made it that way to keep the scope constrained
shadi: [example on atomic and composite rule, to help understand the above constraint]
romain: perhaps, we could say the
applicability for composite rules is either a UNION or a
collection of test targets...
... a different perspective, more examples to narrow
down...
shadi: Q to romain, in `epub` is there a requirement to run a given check across a page and a book, which then can be represented as a composite and atomic rule combination...
kathy: points out example on plural in the requirement `webpages`.
wilco, romain, kathy, anne: [discuss example about set of pages vs page level tests]
kathy: makes a point that even for manual testing, the current representation of composite rule does not help.
anne & kasper: contend that it can be done with two atomic rules
anne: if there are more than a few ways of failing a SC, we do not have a composite rule, composite rules only represent combination of atomic rules to satisfy a SC.
shadi: [more examples]
... from wcag the expectation is that you have to run through
each page...
romain: is sad that the spec does not allow for greater interoperability...
shadi: alistair also reckons that
the lego model of building on atomic rules would lead to more
complexity
... the other option is where we are at, and look at possibly a
re-write of a majority of rules when Silver kicks in
romain: we should perhaps broaden the applicability and expectation of composite rules
kasper: from an automation perspective, if we are able to reuse results from each atomic rules to derive an expectation for a composite rule, it could work.
shadi: other than automation, is there any caveats?
kasper: in manual, perhaps it can be overhead to compare results and aggregate things manually
wilco: composite rules, by definition do not have test aspects. Is that enough to define applicability for composite rule?
kasper: disagrees...
... [examples]
... if we give access for composite rules to test aspects as
well as results for atomic rules, we should be able to solve
for the problem
kathy: clarification on how results are used between the composite and atomic rules?
kasper: [examples to answer the above q]
wilco: proposal - the
applicability for composite rules needs to be updated to not
just allow UNION, but detach it in such a way that we can define
what best works to define the applicability
... this solves most of the examples we discussed, and also
solves for how we map it to a SC.
... composite rules can only use the outcomes from atomic
rules
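[Sketch, not from the meeting: one reading of the proposal in TypeScript. The composite rule defines its own applicability (here, every page in a collection) rather than the union of the atomic rules' applicability, and its expectation uses only atomic-rule outcomes; the rule ID and types are hypothetical.]

    // Hypothetical sketch of the proposal for composite-rule applicability.
    type Outcome = "passed" | "failed" | "inapplicable";

    interface AtomicResult {
      ruleId: string;   // e.g. the hypothetical "html-page-has-title"
      pageUrl: string;  // the test target of the atomic rule
      outcome: Outcome;
    }

    // Applicability: the full set of pages (an EPUB or a website), not just the
    // pages the atomic rule happened to apply to.
    function collectionHasTitles(pagesInScope: string[], atomicResults: AtomicResult[]): Outcome {
      for (const page of pagesInScope) {
        const result = atomicResults.find(
          (r) => r.ruleId === "html-page-has-title" && r.pageUrl === page,
        );
        // in this sketch, a missing or failed per-page result fails the collection
        if (!result || result.outcome === "failed") return "failed";
      }
      return pagesInScope.length > 0 ? "passed" : "inapplicable";
    }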
anne: not allowing to stack composite rules to map to SC is limiting
wilco: but flattening can be the answer
anne: this will only add complexity, as we flatten.
<inserted> scribenick: kathy_
<scribe> scribe: kathyeng
anne: composite rules for aggregating SC but composite rules can't use composite rules is a concern
romain: flattening complex composite rules may be more complex than composites of composite rules
kasper: use case of a composite rule for page title; an additional composite rule for the SC in EPUB could include that composite rule
romain: increases re-usability
wilco: concern of building hierarchies but ok with it.
kasper: limit levels to 2 or 3
kathyw: clarify composite rules for pass and fail using atomic rules
anne: more examples needed
kathyw: example 13 is OR.
kasper: implicit AND between expectations
<Wilco> https://auto-wcag.github.io/auto-wcag/rules/SC1-3-5-autocomplete-valid.html
wilco: these are ANDs in the
example
... between expectations. all must be true to pass
... expectation 1 is an OR.
kasper: it can be written in different ways
wilco: do what makes sense and is readable
kasper: separating expectations with AND helps readability
wilco: agrees clarification of AND needed. also need a new proposal for composite rules
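[Sketch, not from the meeting: how the implicit AND between expectations, with an OR inside a single expectation, could look in TypeScript; the token lists are illustrative only, not the actual autocomplete rule.]

    // Hypothetical sketch: all expectations must hold for a target to pass
    // (implicit AND), while one expectation can contain an OR internally.
    type TestTarget = { tokens: string[] };

    const knownTokens = ["name", "email", "tel"];     // illustrative subset only
    const knownModifiers = ["shipping", "billing"];   // illustrative subset only

    // Expectation 1 (an OR): a single known token, OR a known modifier followed
    // by a known token.
    function expectation1(t: TestTarget): boolean {
      return (
        (t.tokens.length === 1 && knownTokens.includes(t.tokens[0])) ||
        (t.tokens.length === 2 &&
          knownModifiers.includes(t.tokens[0]) &&
          knownTokens.includes(t.tokens[1]))
      );
    }

    // Expectation 2: the attribute is not empty.
    function expectation2(t: TestTarget): boolean {
      return t.tokens.length > 0;
    }

    // Implicit AND between expectations: all must be true to pass.
    function targetPasses(t: TestTarget): boolean {
      return expectation1(t) && expectation2(t);
    }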
kathyw: switch between 2.0 and 2.1
maryjo: can we use a non-specific version of wcag?
wilco: links go to a version. change to 2.1?
ok with group
shadi: informative notice
wilco: create clarity that rules are normative or informative
shadi: a rule is always
informative
... an informative rule is based on a best practice
maryjo: add that clarification
shadi: a rule is an informative resource to support conformance checking against an acc std or a best practice
maryjo: second part is
confusing
... most rules are just one way to meet
shadi: is it based on wcag or a best practice
wilco: techniques are informative. even failure techniques.
shadi: if a company has a rule
for left - aligned text, flag it as a best practice not a
compliance issue. when based on a normative req, if rule fails,
fail wcag.
... second paragraph is a rule mapping.
... rules are always informative. add that to description
anne: is she asking for a disclaimer on each rule?
wilco: yes i think so
kathyw: glenda is asking which is clearly required by wcag and which is best practice (informative)
shadi: wilco will open an issue
wilco: is an action required for glenda
shadi: identify best practice rules with a label/flag
maryjo: "sufficient technique" or "advisory" to use wcag terms
anne: best practice rules - "related" to acc req
romain: what if a rule is used for more than one accessibility req?
wilco: acc req 1.1.1 - normative
shadi: unique page title example - epub (required), wcag (best practice)
romain: keep this separate from rule for easier maintenance?
shadi: better to have it in one place for review/approval
romain: for a company's rules, it would be easier if they only want wcag or a level of wcag
wilco: wcag doesn't have best practices
shadi: next to acc req list is it required or a best practice
anne: skip to nav is just one way, but norway requires it.
shadi: if a company changes an informative to normative, indicate the change
anne: sharing best practices is helpful
wilco: only show what's required and what's not required
anne: there is wcag compliance and great accessibility
kathyw: some best practices are not great for all
shadi: this framework is the
least possible threshold to share in a common format. next is
to get community to agree.
... community must agree to rules
wilco: there is a lot of confusion around accessibility. best practices linked to wcag is confusing
shadi: avoid sc's or reference an sc but label as not a requirement are the two options
wilco: object to identifying accessibility req in accessibility req section if it is not required
shadi: it is clearer if it is stated that it's not a req
anne: explains why rule exists
wilco: can it be identified in a different section?
anne: there is no section, people
will be inconsistent. there is no background section
... the background section is not clear about what goes there.
... for each sc, these are the required rules and these are
best practices
wilco: put related wcag somewhere
else
... separate from normative
shadi: propose a different section for related wcag
kathyw: 7.2 accessibility requirement would be blank for best practices?
wilco: yes
shadi: for unique page titles - nothing under acc reqs and acc mapping. under another section (related) list best practices
kathyw: many clients have best
practice and don't distinguish from wcag and will list their
acc reqs in the wrong section
... it would be helpful to be able to find the best practices
related to a SC
anne: best practices don't have to be used by everyone
shadi: list required vs optional clearly
wilco: outcome from rules impacts SC. for best practice, there is no accessibility mapping to SC.
anne: conformance testing and testing to improve accessibility are different. best practices are helpful for part 2.
kasper: what's stopping people from just listing their own requirements?
shadi: if I take the wcag advisory technique for unique page titles as a requirement of mine and make it an acc req, is that ok?
wilco: techniques can be an accessibility requirement
shadi: really??
keep best practices in a separate place for conformance testing
wilco: for each company, that has their own accessibility reqs, list them as acc reqs
shadi: both glenda and stein erik have good points.
wilco: mixing best practices and accessibility reqs will get confused
maryjo: 'good practice' section
anne: autowcag gets confused when no acc req is listed. identify no req mapping for clarity.
wilco: confusion of what's wcag and what's not exists. do not reference wcag if it's not a requirement. be very clear.
kathyw: best practice atomic rules list no wcag req. clearer to enter 'best practice'.
wilco: that is ok.
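[Sketch, not from the meeting: how a best-practice rule could be labelled so it is not mistaken for a WCAG requirement, as discussed above; the field names are hypothetical, not the spec's.]

    // Hypothetical sketch: a best-practice rule lists no normative accessibility
    // requirement and carries an explicit label plus a separate "related" list.
    interface RuleMetadata {
      id: string;
      accessibilityRequirements: string[]; // normative mappings only (empty here)
      bestPractice: boolean;
      related: string[];                   // informative references, kept separate
    }

    const uniquePageTitles: RuleMetadata = {
      id: "page-titles-are-unique",
      accessibilityRequirements: [],       // not required by any WCAG SC
      bestPractice: true,
      related: ["WCAG 2.1, SC 2.4.2 Page Titled (advisory)"],
    };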