See also: IRC log
CSUN plans: nothing formal has been confirmed.
Thursday afternoon at 4:20: discussion of this work at CSUN.
Kathy will start an email identifying who is presenting and when.
This task force is partially funded and supported through WAI-ACT.
This is the fifth or so in a series of projects set up like this, a fairly common approach.
Typically there are invited experts, but the funding will mostly be under the W3C process.
Shadi can provide more information if requested; details are available at the link above in this thread.
Should the final report include all of the techniques or just the results?
<Detlev> Has there been a recent version of the Methodology? If so, can you post the URL?
Eric: We don't want to rewrite the WCAG techniques, but what do we want here?
... Do we want to check every page for every element, technique, component?
<kerstin> Detlev's Mail here: http://lists.w3.org/Archives/Public/public-wai-evaltf/2012Feb/0055.html
Eric: 4.2 clarifies this. Detlev proposes not to prescribe a particular order.
... Once we have identified a problem, we don't need to identify it again on other pages
<Detlev> Here's my proposal: http://lists.w3.org/Archives/Public/public-wai-evaltf/2012Feb/0055.html
Mike: Would change the wording to make it clear that if people are evaluating on an 'exception' basis, they know it is different. "Targeted" doesn't seem to capture that.
Shadi: Depends on the aim of the evaluation. If just yes/no, then it's enough to identify an error on one page and not address it again. However, if we want a more quantitative approach, we would indicate how prevalent the issue was.
Shadi: If we want to talk about how close to compliant a page is, then we'll need more details in addition to the yes/no.
Kathy: Agrees with Shadi. We need to decide yes/no, but also need to decide on frequency, e.g., one missing alt text vs. missing alt text throughout the site. When working with development teams, it's important to identify where the errors are occurring.
Eric: Earlier discussion - we want to know where errors are occurring.
Kerstin: On huge pages, evaluators should look
for more than a few errors. Finding one failure and then moving on to another
page isn't enough.
... Doesn't see why we should have to identify where every error occurs, e.g., navigation bars
<Mike_Elledge> In that instance we just say "All pages--Logo does not have alt text."
Eric: Believes we have agreement here. We can say in the report that the evaluator should give several examples and indicate that the problem may occur elsewhere on the site.
Elle: Totally agrees, from a development team standpoint, errors should be specifically identified. However, they have an associated document specifying the location and extent of errors. Global vs. regional elements in the test cases.
Elle: Global and regional elements may also be referred to as "common" elements.
Vivienne: "Site wide issues" is another term for
items that occur on a number of pages.
... Usually points out where the problems are, but also how to correct the issues. Different levels of evaluation and recommendations.
Eric: This is important for audits and reporting.
... ISO testing schemes include what is wrong, but also what the repair possibilities would be.
Vivienne: Pointing to best practices is helpful. As practitioners we need to be proactive about recommendations.
Eric: Maybe this should go into the reporting section, rather than section 5.
<Elle> Vivienne, I agree totally, we request remediation support with all our audits (saves time) and I evaluate against our own corporate best practice
<Zakim> shadi, you wanted to say could have both approaches in the document
Vivienne: Maybe this should also be included in scope, for example, whether we are going line by line or highlighting various items on each page.
Shadi: Likes the concept of identifying the
global elements that would only be evaluated once. The target of the evaluation
should be clear from the beginning and in the reporting section.
... Options 1) yes/no - don't evaluate for that again if found on one page; 2) yes/no, plus more info about the nature of the error on more pages; and 3) adding recommendations for repair. We could describe all three in this document.
Kathy: Working with clients has similar
experiences to Vivienne. Have common elements and issues, and then refers back
to the original place where the issue was identified. That way the info is
associated with each page. Some common issues are applicable to a specific set
of pages, but not the whole site.
... Reporting options usually depend on the level of accessibility expertise within the development team. Repairs might not be needed, as long as the issues are identified, if the team knows how to address them.
Eric: It should be clear from the start what level of evaluation is needed.
<Elle> Kathy, great point
Kathy: Yes, this could save costs over time.
Kerstin: Agrees with Shadi and Kathy's add-ons. We should write use cases for the three options, for comparative testing purposes.
Detlev: Clarification of the draft text: when an issue is found multiple times, we should have a place to note it, but we still need to confirm whether each page does or does not have that problem. Each page should be checked for everything. For example, the language declaration should be checked on every page, rather than assuming that a shared template sets it for every page.
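Detlev's point above can be illustrated with a minimal sketch: verify that every sampled page declares a document language, rather than assuming a shared template sets it. This uses only the Python standard library; the function name and the sample pages are hypothetical, standing in for a real crawled page sample.

```python
from html.parser import HTMLParser

class LangChecker(HTMLParser):
    """Records whether the <html> element carries a non-empty lang attribute."""
    def __init__(self):
        super().__init__()
        self.has_lang = False

    def handle_starttag(self, tag, attrs):
        if tag == "html":
            # attrs is a list of (name, value) pairs
            lang = dict(attrs).get("lang") or ""
            if lang.strip():
                self.has_lang = True

def page_declares_language(html_source: str) -> bool:
    """Check a single page for a declared document language."""
    checker = LangChecker()
    checker.feed(html_source)
    return checker.has_lang

# Hypothetical sample: each page is checked individually, even though
# both might come from the same template.
pages = {
    "home": "<html lang='en'><body>Welcome</body></html>",
    "contact": "<html><body>No language declared</body></html>",
}
failures = [name for name, src in pages.items() if not page_declares_language(src)]
```

Running every page through the same check, instead of stopping after the first pass, is what surfaces `failures` such as the "contact" page here.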
<kerstin> I'll write it in an email
Eric: Suggesting repairs could be more than the evaluator can handle; there might be many ways to repair a site, and one repair could cause a problem somewhere else.
... We could just point to the W3C website where the problem is described.
Vivienne: Don't have to suggest a repair unless you know what it would be. But best practices from the W3C should be included. We shouldn't always be expected to be experts in development.
<agarrison> I would be more comfortable to point them back to WCAG 2.0 Techniques, rather than suggest repairs.
Elle: If a company site owner requests recommended repairs, the accessibility expert should set the expectation about the level of recommended repairs. Has had experience with one repair causing another problem.
Eric: Recommendations are usually expected, but pointing to W3C is good.
<Elle> excellent point, Kathy
<Elle> for what it's worth, our company depends on the development support found in remediation guidance with audits (makes a cheaper remediation project in the end as compared to teaching devs on the job)
Kathy: Often gives suggestions for repair, but Kathy has a development background. Since there's more than one way to fix a problem, suggestions are given in that context, and she works with developers to come up with the best solution. Accessibility experts might not have an answer, but they can reiterate what needs to be done to address the problem without providing the specific code to fix it.
Eric: Evaluation section - do we want to say that people need to go into the depths of the techniques or refer them to W3C?
<Elle> agree with kerstin
Vivienne: Recommends being as specific as possible. Don't assume that people will know how to use the WCAG docs, although it is much more work for the evaluator.
... For example, titles vs. labels: the evaluator needs to identify the sufficient techniques.
Detlev: Even though using sufficient techniques is best, it may not be practical. These can't be checkpoints. The evaluator would ask whether all images have alternative text, but reporting every technique would be too burdensome.
Eric: Will work Detlev's draft into the next version.
<Zakim> shadi, you wanted to say we should include guidance on how to *use* the techniques
<agarrison> For replicable results, we do need to all be checking the same thing - the easiest way to achieve this might be to base things on sufficient techniques and failure conditions.
Shadi: Beyond indicating pass/fail, this methodology should guide the user on what the problem is from an accessibility standpoint and point to the WCAG 2 technique to fix it.
<Elle> agreed, Allistair