W3C

- DRAFT -

#testing breakout @ TPAC 2011

02 Nov 2011

See also: IRC log

Attendees

Present
Wilhelm, PeterH, DavidB, JamesG, PLH, Kris, ArtB, Laszlo, EdO, Mark, MikeSmith, JohnJ, JacqueD, JanL, MattW, CyrilC, Russell
Regrets
Chair
Wilhelm
Scribe
ArtB

Contents


<scribe> Chair: James

Date: 2 November 2011

<scribe> Scribe: ArtB

<scribe> Agenda: TPAC Testing - a Practical Session

JG: welcome everyone
... start with current state

… in the various WGs

… I know about WebApps and HTML WGs

… want to look at testing formats

… and the procedures for gathering tests

… Would like to know about areas for improvement

KK: how many are familiar with Hg/Mercurial?

… and submitting tests?

… And how to create W3C tests?

… [ Not very many ]

… Can talk about how to push test cases to W3C

… We don't use Hg at Msft

… but it's pretty easy

… Can search for Mercurial

<JohnJansen> http://mercurial.selenic.com/

AB: http://www.w3.org/2008/webapps/wiki/WorkMode#CVS_and_Mercurial

KK: the HTML WG has a wiki of resources

… we use public-html-testsuite

<stearns> http://wiki.csswg.org/test/mercurial

… HTML wiki: http://www.w3.org/html/wg/wiki/Testing

… after getting Mercurial

… must create a ~/.hgrc file

<jgraham> http://www.w3.org/html/wg/wiki/Testing

… or ini file for other OS's

… May need to adjust proxy settings if behind a firewall

… after install, Hg should be in $PATH

… the verbs are pull and push

… to get a local copy, use pull
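
[ Editor's note: a minimal sketch of the setup described above; the user details, proxy host, and repository path are placeholders. ]

A sample ~/.hgrc (Mercurial.ini on Windows); the [http_proxy] section is only needed behind a firewall:

    [ui]
    username = Your Name <you@example.org>

    [http_proxy]
    host = proxy.example.org:8080

Typical commands for getting a local copy of a test repository and pushing a submission back (repository path illustrative):

    hg clone http://dvcs.w3.org/hg/html tests
    cd tests
    hg pull -u                        # update an existing clone
    hg add my-new-test.htm
    hg commit -m "Submit my-new-test"
    hg push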

<krisk> http://www.w3.org/html/wg/wiki/Testing/Submission/

<jgraham> http://dvcs.w3.org/hg/

AB: mirror is: http://w3c-test.org//

KK: the Hg root has tests and specs

… so not just about tests

… after a test is pushed to the server, it is stored in a source control system

… can do some complex stuff on backend e.g. PHP

<plh_> --> http://www.w3.org/wiki/TestInfra/goals Test infrastructure goals

… Several WGs are using it

… there is a `resources` folder

… it includes some sample tests

<jgraham> http://w3c-test.org/resources/

JG: there are some sample HTML5 API tests

… e.g. http://w3c-test.org//resources/apisample.htm

… designed to integrate with internal testing resources/frameworks/tools

… can make them talk to each other a bit

KK: expect people to pull tests and run them internally

… W3C server can't support lots of browser vendors running their tests on w3.org resources

Jan: is it possible to select specific test cases for a spec?

… so it's not clear to me how to find specific test cases

JG: for HTML test suite, the directory names give a hint

… the CSS WG is working on something more sophisticated

PL: we are working on a bug tracker for the tests

… want to support test case management

… including metadata

… want to link test cases to specific parts of the spec

… it is up and running now

… but still needs some work

Jan: how about optional vs. mandatory parts of the spec?

PL: we want to support that

… we need to mark up the spec to facilitate test cases pointing to specific parts of the spec

… If want to browse the spec, want to be able to see what tests are available

… We also use an annotation system

… so that results from tests runs are available to someone browsing the spec

Mark: is there any type of overall plan?

… and some data about coverage?

… Also, is there some convergence for tools?

JG: yes, there is work toward tool convergence

… at least at the harness level

JG: re the 1st question, that's pretty complicated

Mark: want to understand the plan for ex, HTML5

KK: we try to facilitate, but no hard rules

… it's up to the WG participants re what will actually get done

… We have a structure to enable lots and lots of tests

… We have some features with thousands of tests and other features with only tens of tests

Mark: for CSS 2.1, is the test suite done?

PL: one reason for the tests is to get to REC

… the other reason is to determine if everyone implements the spec correctly

… we try to use a software development process

Mark: are there specific milestones?

PL: not really, we continue to improve

DB: as bugs are found, new tests are added

PL: as we hit milestones, we snapshot the test suite

… and add new tests if we need to

KK: we need to get a handle on how close the test suite is to the implementations

… and that can be hard to determine

… For example, Web Workers is already supported in several browsers

… and mostly interoperable, so there is a question of how much testing effort is needed


… The work done depends on the WG participants

PL: when a spec is early, it's a bit iffy to write tests

… but it would be useful too

JG: eventually must write tests so it makes sense to start early

LG: agree converging test frameworks is important

… need to eliminate outside dependencies

KK: yes, we ran into that with some old DOM tests (from NIST)

JG: we have a framework now

… that is getting convergence

… it is platform independent

… it does depend on JavaScript

… but no DOM dependencies

KK: we need to be careful about adding features to the framework

AB: how many WGs are using testharness?

PLH: HTML, WebApps, WebPerformance

John: it was relatively easy for some tests

LG: I think DAPI agreed to use testharness

… but they don't have many tests yet

KK: I think we need to make sure WGs use testharness

JD: what about rendering tools?

JG: there are RefTests and manual tests

… testharness is a JS api

… reports results via a callback
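
[ Editor's note: a minimal sketch of wiring harness results into an internal system via the completion callback; sendToInternalFramework is a placeholder, not part of testharness.js. Vendors usually put this kind of hook in their own copy of testharnessreport.js. ]

    add_completion_callback(function(tests, harness_status) {
        // One record per test; each Test object exposes name, status, message
        var results = tests.map(function(t) {
            return { name: t.name, status: t.status, message: t.message };
        });
        // Hand the records to a vendor-specific reporter (placeholder)
        sendToInternalFramework(results);
    });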

PLH: we have a framework on the test server to collect test results (HTML WG)

PL: we can import results from other formats

… the format can be XML or whatever

… we also have an XHR API
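
[ Editor's note: a hypothetical sketch only; the endpoint URL and payload shape are placeholders, not the actual result-collection API being described. ]

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/api/results", true);    // placeholder endpoint
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify({
        test: "/path/to/test.htm",             // placeholder test path
        status: "PASS",
        useragent: navigator.userAgent
    }));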

JD: need a good format

John: testharness is really for JS APIs

… there are challenges for visual matches

KK: yes, e.g. font variability


KK: if there are any questions, please send them to public-test-infra

… that's the list for the Testing IG

[ James gives a quick tutorial on how to create a test with testharness.js ]
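
[ Editor's note: a minimal example of the kind of test walked through here; the assertions are illustrative, and the script paths assume the /resources/ directory mentioned earlier. ]

    <!DOCTYPE html>
    <meta charset="utf-8">
    <title>Sample testharness.js test</title>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <div id="log"></div>
    <script>
    // Synchronous test: passes if the function returns, fails if it throws
    test(function() {
        assert_equals(document.title, "Sample testharness.js test");
    }, "document.title matches the title element");

    // Asynchronous test: assertions run inside t.step(); t.done() ends it
    var t = async_test("setTimeout callback runs");
    setTimeout(function() {
        t.step(function() {
            assert_true(true, "callback was invoked");
        });
        t.done();
    }, 0);
    </script>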

Russ: what about images?

JG: most use RefTests

… and then can compare rendering

KK: the biggest problem with rendering is fonts

… RefTests give expected output

… the reference files have "ref" in the file name

… at runtime, if they render the same, the test passes

DB: a RefTest is an assertion of pixel equality between the test and its reference

… want to exclude external factors like fonts, margins, etc.
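
[ Editor's note: an illustrative RefTest pair; the file names and the exact convention for linking a test to its reference vary between suites. The test passes if both files produce identical renderings. ]

    <!-- green-square.htm (test file): paints a square via a stylesheet -->
    <!DOCTYPE html>
    <title>RefTest example</title>
    <style>div { width: 100px; height: 100px; background: green; }</style>
    <div></div>

    <!-- green-square-ref.htm (reference file): paints the same square by a
         simpler, independent route -->
    <!DOCTYPE html>
    <title>RefTest example reference</title>
    <div style="width: 100px; height: 100px; background: green;"></div>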


Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2011/11/02 19:12:25 $
