08 Aug 2011


See also: IRC log


Regrets: Jutta T.
Scribe: Jan Richards


JS: What we need is to produce a report...
... of our CR implementations...
... whether each implementor has met it or not
... We have a new testing framework...
... It will store test questions and then record yes or no

JR: Can we just put the SCs in?

JS: Depends who is doing the testing


B.1.1.1 Content Auto-Generation After Authoring Sessions (WCAG):

Authors have the default option that, when web content is automatically generated for publishing after the end of an authoring session, it is accessible web content (WCAG). (Level A to meet WCAG 2.0 Level A success criteria; Level AA to meet WCAG 2.0 Level A and AA success criteria; Level AAA to meet all WCAG 2.0 success criteria)

Note: This success criterion applies only to automatic processes specified by the authoring tool developer. It does not apply when author actions prevent generation of accessible web content.

<jeanne> [discussion of the email]

Cherie: This really only covers the implementor

<jeanne> Cherie: The testers aren't going to test to verify, they are going to test to find the flaws.

<jeanne> JR: It wasn't meant that way - I meant that you only have to test the SC that apply to your tool for the level you want to check.

<jeanne> ... Why do the extra work of getting AA test cases if you are only going to test for A.


<jeanne> cherie: I will want to check for all so I know where we are.

Cherie: There is value in partial results....

Tim: Possible that an implementor might test for a mixture of levels

Cherie: Not going in assuming anything

<jeanne> Jeanne: We will need to test for all SC, since we need to have two implementations of every SC.

<jeanne> JR: Would you test it iteratively? Test A, then test AA.

Cherie: Certainly won't test them all at A, then AA, then AAA

<jeanne> Cherie: If I test an example for A, I would test it again for AA right then; I would not go through the whole loop. I might do the hardest and back down, or start with the easiest. It will depend on the SC.

JR: What about test considerations?

Cherie: OK, but it is very important not to step outside the normative requirements

JS: WCAG had lots of diffs between evaluators that needed to be sorted out

<jeanne> Jeanne: The WCAG CR testing had to resolve every discrepancy from the testers, so clarifications in advance would be helpful

Cherie: Also, an implementor would always start with atomic tests

<jeanne> ... feature by feature.

<Tim> http://www.w3.org/QA/WG/2005/01/test-faq#good - what makes a good test?


<jeanne> JR: The applicability conditions are at the start of Part A and Part B and in the conformance section, so they are not close to the individual SCs and may be overlooked.

Tim: Use of the term "accessible web content" ...

JS: There has been a sub-group working on testing for a few months...
... They now have a beta version of a testing framework
... We give an HTML page with test instructions and links to test examples etc. then at the bottom are outcome buttons

Tim: Link to framework?

JS: Not yet

<scribe> ACTION: JR to update the structure of the implementation report [recorded in http://www.w3.org/2011/08/08-au-minutes.html#action01]

<trackbot> Created ACTION-352 - Update the structure of the implementation report [on Jan Richards - due 2011-08-15].

Summary of Action Items

[NEW] ACTION: JR to update the structure of the implementation report [recorded in http://www.w3.org/2011/08/08-au-minutes.html#action01]
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2011/08/08 21:01:58 $

Agenda: http://lists.w3.org/Archives/Public/w3c-wai-au/2011JulSep/0037.html