W3C

WCAG 2.0 Evaluation Methodology Task Force Teleconference

05 Jan 2012

Agenda

See also: IRC log

Attendees

Present
Shadi, Samuel, Martijn, Kathy, Liz, Eric, Denis, Detlev, Mike, Vivienne, Katie, Sarah, Tim
Regrets
Kerstin, Alistair
Chair
Eric
Scribe
Shadi

Contents

Welcome
Methodology overall reactions
Face-to-face meeting
Section 4
Summary of Action Items

Welcome

EV: happy new year!

EV: new version of the methodology is online
... changes to sections 3 and 4
... addressed comments from the WCAG group
... getting more flesh, but it still needs more work
... want to specifically look at scope and size of the sample today
... counting the minimum number of pages comes to 25, but it could lead up to 5000
... 5000 is unrealistic, but 5 may be too little

Methodology overall reactions

ME: confused by the use of the term "resource"
... also wondering about getting hung up on trying to define a specific number of pages
... maybe better to define ranges
... because it could vary depending on context

<Detlev> agree

ME: for instance, goal and cost, etc.

EV: used "resource" but did not introduce it
... could discuss other options too

DB: we use the term "template", usually ~4 pages
... "template" because they are representative
... like pages with forms, multiple heading levels, etc

<Detlev> absolutely agree

DB: we try to find the most important issues for the developers to fix
... usually in an iterative process

<vivienne> I absolutely agree with Denis' approach, and I do much the same

DB: smaller evaluation first, but a more complete one later on
... the initial sample is small enough that it can be done by manual evaluation too
... not dependent on a large-scale automatic tool

<Mike_Elledge> +1 to Denis' comment

EV: we have a description of "template"
... but just counting those, we end up with several pages already

<ssirois> I cannot disagree with dboudreau on that one since i do work the same way (at the same place! :P)

EV: we also do not clearly describe the role of tools
... for example to help select pages

DB: many of our clients may not have large websites
... a thorough review is up to 2 hours per page
... for small websites it can be only 4 pages, but sometimes up to 8 pages
... because sets can be overlapping
... like data tables may appear together with forms and such

EV: we also have a preliminary review
... could that cover the first phase?

DF: also in a similar situation with the types of websites
... Vivienne had also mentioned the pages with overlapping content
... we also add specific "elements" to a sample
... for example only evaluating the tables in a page because the other aspects have already been reviewed

<dboudreau> @shadi - this is exactly what we do when we have larger clients that can actually afford to start with a preliminary

SAZ: agree with what was said, it leads to a dynamic approach
... but we need to keep the preliminary review separate from the conformance evaluation

<Detlev> sure - states and elements have to be exactly determined

KW: agree with Denis and Detlev
... but with more complex sites, particularly more application-type ones, we find that task-based approaches are more useful
... maybe we need different scenarios for the different site sizes rather than a single limit

EV: example of a scenario?

KW: for example, for a retail store you could have scenarios like "user registering" or "registered users checking out", etc.
... you still have repetitiveness of pages, but often the content is different

ME: agree with everyone
... scenarios and tasks rather than a hard limit on pages

KHS: need to have something measurable and repeatable
... maybe need a range to give people a goal to shoot for

EV: range seems to make sense

<dboudreau> adding one page for every x pages makes no sense if we had 2000 pages of simple content from the same template
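
[To make the "range rather than a hard limit" idea concrete, here is a minimal sketch; the function, the per-x-pages step, and the cap are illustrative assumptions, not anything agreed in the meeting. The cap is what keeps a 2000-page site built from a handful of templates from inflating the sample.]

    import math

    def sample_size_range(total_pages, templates, step=100, cap=50):
        # Hypothetical heuristic: start from the distinct templates,
        # add one page for every `step` pages of content, and cap the
        # result so large but uniform sites do not inflate the sample.
        low = templates
        high = min(templates + math.ceil(total_pages / step), cap)
        return low, high

    # A 2000-page site built from 4 templates:
    print(sample_size_range(2000, 4))  # (4, 24)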

DF: with small websites, a small number of pages is usually enough
... also, if you look long enough you will surely find an issue somewhere
... need to focus on the things that are vitally important

EV: looking at error margins later in the document
... will come back to it later on
... seems there is a tendency towards fewer pages

<Detlev> blackboxing (local copies?) will often not work with dynamic sites

VC: wondering if we need to consider proof of the state of the website at the time of evaluation

<Ryladog> date and hour of each page tested - not the site

VC: do people mirror the website?

<Mike_Elledge> Definitely need to say when the site/which version/etc. is tested

EV: pages can change very quickly
... may need to cover this in the evaluation report

VC: wondering if we are going to do some form of proof of what was tested, maybe as part of the scope

SAZ: the sample size should not only grow with site size but also with complexity
... Detlev had mentioned adding "elements" like tables or forms, etc., to the sample

DF: many pages could also impact the score, especially if the score is based on the number of pages tested
... for example by diluting a single instance of a significant barrier
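
[As a worked illustration of the dilution effect described here, assume a naive score of "pages without failures divided by pages tested"; the scoring formula is an assumption for illustration, not part of the methodology.]

    def naive_score(pages_tested, pages_failing):
        # Naive per-page pass rate: a single severe barrier weighs
        # less and less as the sample grows.
        return 1 - pages_failing / pages_tested

    print(naive_score(5, 1))   # 0.8  -> one barrier in a 5-page sample
    print(naive_score(50, 1))  # 0.98 -> the same barrier in a 50-page sample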

EV: what about proof?
... should we demand that people mirror the pages they test?

DF: it did not work very well for dynamic pages
... it is of limited value

<dboudreau> +1 to detlev

DF: but you may also use screenshots
... what is essential is the date and time of the testing
... arguments over the results are not very frequent

DB: similar experience to Detlev's
... we do not always record the pages; the recordings are hardly ever used
... but it is still a good idea to suggest it

EV: may be possible to suggest it but may not be easy to require it as part of the methodology

ME: we do not save the pages for the reasons mentioned
... but we do take screenshots to keep a sense of what the page looked like

EV: we save the code locally
... for occasional queries

KW: in addition to the screenshot also helpful to have the original code
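
[Since several participants keep only the served code, a screenshot, and the date and time of testing rather than mirroring the site, a minimal sketch of that kind of evidence capture might look as follows; the helper name and file layout are hypothetical, and screenshots would need a separate browser tool.]

    import urllib.request
    from datetime import datetime, timezone
    from pathlib import Path

    def archive_page(url, out_dir="evidence"):
        # Save the HTML as served, stamped with the UTC date and time,
        # so the report can state exactly what was tested and when.
        Path(out_dir).mkdir(exist_ok=True)
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        name = url.replace("://", "_").replace("/", "_")
        with urllib.request.urlopen(url) as response:
            html = response.read()
        target = Path(out_dir) / f"{stamp}_{name}.html"
        target.write_bytes(html)
        return target

    # archive_page("https://example.org/checkout")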

Face-to-face meeting

http://www.w3.org/2012/10/TPAC/

SAZ: no critical mass yet
... but really important to consider TPAC

<vivienne> It's all far from Australia!

SAZ: could co-locate meeting with WCAG WG and/or other relevant groups

Methodology overall reactions

Section 4

EV: do we need to test alternative versions?

DF: at least some aspects of the alternative versions
... for example that you can get to the alternative version, or switch it on and off, etc.
... also need to ensure that the content is equivalent
... but no need to test the content twice

<vivienne> I think that Detlev's point of equivalency of content is the important aspect

TB: WCAG has a definition of "accessibility supported"
... how does that come into play?

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2012/01/17 07:47:23 $