W3C

WCAG 2.0 Evaluation Methodology Task Force Teleconference

13 Jun 2013

See also: IRC log

Attendees

Present
Vivienne, Martijn, Detlev, Eric, Sarah, Liz, Richard, Peter, Mike, Katie, Moe, Shadi, Tim
Regrets
-
Chair
Eric
Scribe
Sarah

Contents


<ericvelleman> http://www.w3.org/WAI/ER/conformance/ED-methodology-20130611

<MartijnHoutepen> diff version: http://services.w3.org/htmldiff?doc1=http%3A%2F%2Fwww.w3.org%2FWAI%2FER%2Fconformance%2FED-methodology-20130219&doc2=http%3A%2F%2Fwww.w3.org%2FWAI%2FER%2Fconformance%2FED-methodology-20130611

eric: 'common functionality' to 'core functionality' discussion

<Vivienne> Eric, do you have a link to the disposition of comments?

http://www.w3.org/WAI/ER/conformance/comments-20130226 disposition of comments

detlev: thinks that Step 3 does not include enough dynamic states

vivienne: additional comments in email from Ryladog(?) should be added to disposition of comments

<Vivienne> Sarah, that was comments from Giorgio Brajnik

eric: will add to comments doc

<Detlev> fine!

<MartijnHoutepen> +1

<Liz> +1

<Vivienne> +1

<ericvelleman> +1

<Mike_Elledge> +1

eric: proposing to add these to the disposition of comments even though the public comment period is over

+1

<MoeKraft> +1

<Detlev> yes!

<richard> +1

<ericvelleman> http://www.w3.org/WAI/ER/conformance/comments-20130226

eric: Disposition of Comments ID #3: replace 'common functionality' with 'core functionality'; this is also mentioned in a couple of other places in the comments. There will be a survey to decide this.

peter: appreciates the example of going from 'common' to 'core'. Is the expectation that all comments will have a proposed resolution in the next survey?

eric: yes, proposed resolutions will be in the survey linked to the editor draft. Eric will group more of the comments for the survey.

<ericvelleman> <http://lists.w3.org/Archives/Public/public-wai-evaltf/2013May/0046.html>

eric: agenda point 3: design support evaluation versus conformance evaluation

<Vivienne> http://lists.w3.org/Archives/Public/public-wai-evaltf/2013May/0046.html

detlev: three different items on goals (basic, detailed, and in-depth reports); also initial vs. final evaluation of finished content. Maybe reorder and rephrase the report names.

peter: agrees with the two primary uses, but the three choices are the wrong reports to have. Having use cases dictate the report type is useful.
... still concerned with the word 'conformance', since we aren't looking at every page
... talk about sampling, problems found, and confidence in the results found.

shadi: it would be an accurate representation of how well the site conforms overall.
... agrees with use-case-driven reports. How about when a product owner wants to do an interim review to find out how the site is progressing?

<korn> +1 to that use case

shadi: +1 use case

<MartijnHoutepen> +1

<Detlev> "Define the Goal of the Evaluation" -> 1. Development support evaluation 2. Conformance evaluation?

vivienne: agrees with Detlev. might want to do a quick pass during development (basic report), but also could have an assessment of functionality after it's completed. wants to emphasize that this protocol is useful during development too.

katie: conformance discussion - it's okay that this protocol is associated with conformance, but we want to say that it's for the pages tested.

peter: concern is that under the language of WCAG conformance, you can only make a claim if it's perfect. We need language that is specific to the parameters of the sample.

<Ryladog> I agree with this being associated with conformance for the pages tested. WCAG conformance is per page

<ericvelleman> :-)

detlev: agrees with peter not to focus so much on conformance. The value is in showing that the site is pretty darn good. 'Define the goal of evaluation' and the report types are a bit of a mismatch.
... development support evaluations and full/formal evaluations
... evaluation of a legacy site is another variation: they know the site is bad, but still want to do the test

vivienne: can't the product owner still say that the site was evaluated according to WCAG-EM with the specific sample?

<Detlev> Conformance claims according to WCAG 2.0 apply to individual pages anyway...

peter: asks for clarification of Vivienne's comment

vivienne: if I evaluate the website according to WCAG-EM and we produce a report with the problems identified, and then the site is retested later with a new sample after the problems were fixed, can't they make a claim that the website, for that sample, conformed to WCAG level A, AA, or whatever?

peter: can't claim conformance for the site from the collection of pages

<Ryladog> You can say for the pages tested on that date

<shadi> +1 to Eric

eric: can't ever say everything is perfect, but you can say that the pages tested are good.

peter: have to look at every page before conformance for the site can be claimed.

vivienne: when a very indicative sample of pages is chosen and the testing/retesting is done, they should be able to make a claim about the site based on the sampling.

eric: we can say something like this, but it is all about the confidence in the sampling. maybe we need to adjust/clarify this in the draft

richard: agrees with parking this discussion since it's not on the agenda

shadi: need to provide a template that evaluators can fill out to indicate the number of pages, how many pages were sampled, etc. A 50-page review vs. a 1-page review will give a strong indication of the confidence in the sampling.

eric: 'confidence' needs to be expanded in the document

detlev: basic and detailed evaluation report types; support evaluation could be used for legacy sites

<Zakim> shadi, you wanted to talk about when an evaluation is done, and how rigorous it is

shadi: use cases for evaluation - evaluations done at different points in time. Is there another dimension regarding the 'depth' of the evaluation?
... agrees with detlev and peter that having use cases is good, but what about variations other than just having a basic report? Depth vs. the point in time when the evaluation takes place.

detlev: true for different levels of depth or to show that the site is terrible, but normally the aim is to cover the main templates and functionality. Want to avoid making this too complex. Likes the idea of a score for level of conformance.

peter: likes shadi's third use case related to regression, i.e., this site has improved a little, a lot, or has gone backwards. likes the idea of showing progress.

eric: agenda #4. To sample or not to sample - <http://lists.w3.org/Archives/Public/public-wai-evaltf/2013May/0028.html>

vivienne: we cover it fairly well by saying that sometimes the sample is the whole website, but when it's not feasible, then a sample is appropriate.
... recommends researching the effect of sample size and its agreement with results from evaluating every page.

martijn: this is covered well already in the document. likes the way it is in the doc right now.

richard: just came through a series of evaluations where it was cheaper to do the whole site, but there are formulae for determining the proper sample size [an illustrative formula appears below]. The evaluator has to do a cost-benefit analysis of conducting the evaluation on the whole site vs. coming up with a sampling strategy.
... when doing a random selection, you'll end up with some pages that you had already pre-selected, which is okay.
... evaluator needs to decide on the sampling/whole site eval strategy
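[Scribe's illustration: Richard did not name a specific formula on the call. As one assumed example only, a commonly cited sample-size calculation for a simple random sample is Cochran's formula with a finite-population correction:

\[
n_0 = \frac{z^2 \, p(1-p)}{e^2},
\qquad
n = \frac{n_0}{1 + \frac{n_0 - 1}{N}}
\]

where z is the z-score for the desired confidence level, p is the expected proportion of pages with problems, e is the acceptable margin of error, and N is the total number of pages on the site. Whether such a formula is worth applying is part of the cost-benefit judgement Richard describes.]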

sarah: agrees with leaving the sampling up to the evaluator - and leaving the doc alone.

eric: will make an editor draft, survey, etc. for the next meeting, which will be on June 27 (not June 20). Meeting in two weeks.

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2013/06/19 13:01:52 $