IRC log of testing on 2011-10-28

Timestamps are in UTC.

16:25:38 [RRSAgent]
RRSAgent has joined #testing
16:25:38 [RRSAgent]
logging to http://www.w3.org/2011/10/28-testing-irc
16:25:55 [plh]
plh has changed the topic to: Wait, we're trying to figure out the logistics
16:30:29 [plh]
plh has changed the topic to: Wait, we're trying to figure out the logistics. Current room is too small.
16:32:09 [MichaelC]
MichaelC has joined #testing
16:33:18 [JohnJansen]
JohnJansen has joined #testing
16:36:50 [MichaelC_SJC]
rrsagent, make log world
16:37:21 [MichaelC_SJC]
meeting: Browser testing meeting
16:37:22 [krisk]
krisk has joined #testing
16:37:29 [MichaelC_SJC]
agenda: http://lists.w3.org/Archives/Public/public-test-infra/2011OctDec/0014.html
16:37:46 [bryan]
bryan has joined #testing
16:37:49 [MichaelC_SJC]
chair: Wilhelm_Andersen
16:37:58 [bryan]
present+ Bryan_Sullivan
16:37:59 [MichaelC_SJC]
scribeNick: MichaelC_SJC
16:39:12 [plinss]
plinss has joined #testing
16:39:28 [fantasai]
fantasai has joined #testing
16:39:57 [MichaelC_SJC]
present+ Wilhelm_Andersen
16:40:02 [MichaelC_SJC]
topic: Introductions
16:40:24 [MichaelC_SJC]
wa: testing helps everybody
16:40:30 [MichaelC_SJC]
figure out how to make best possible test suites
16:40:32 [plh]
Wilhelm: I'd like to figure out how to make the best possible test suite, how to make the Web better
16:41:05 [MichaelC_SJC]
I work for Opera as testmonkey, test manager
16:41:17 [MichaelC_SJC]
in various parts
16:41:37 [simonstewart]
simonstewart has joined #testing
16:41:52 [MichaelC_SJC]
present+ James_Graham
16:41:56 [MichaelC_SJC]
jg: also work for Opera
16:42:36 [MichaelC_SJC]
<missed the rest>
16:42:39 [MichaelC_SJC]
present+ Elika_Etemad
16:42:46 [MichaelC_SJC]
ee: also known as fantasai
16:42:51 [MichaelC_SJC]
work on testing in CSS WG
16:43:03 [MichaelC_SJC]
present+ Jason_Leyba
16:43:07 [MichaelC_SJC]
jl: work on testing in Google
16:43:18 [MichaelC_SJC]
want to improve the ecosystem so it all works better
16:43:41 [MichaelC_SJC]
present+ Simon_Stewart
16:43:42 [dobrien]
dobrien has joined #testing
16:44:02 [MichaelC_SJC]
ss: created Webdriver, working Selenium
16:44:16 [MichaelC_SJC]
very aware of the differences between browsers, would love to sort it out
16:44:26 [MichaelC_SJC]
present+ Kris_Krueger
16:44:38 [MichaelC_SJC]
kk: worked in testing at Microsoft
16:44:44 [MichaelC_SJC]
more recently on Web standards
16:45:06 [MichaelC_SJC]
present+ John_Jansen
16:45:14 [MichaelC_SJC]
jj: also at Microsoft
16:45:19 [MichaelC_SJC]
interested in automation, test suites
16:45:26 [MichaelC_SJC]
present+ Peter_Linss
16:45:32 [MichaelC_SJC]
pl: co-chair of CSS WG
16:45:44 [MichaelC_SJC]
have contributed extensively to that test suite
16:45:59 [MichaelC_SJC]
and working on test shepherd for <missed>
16:46:05 [MichaelC_SJC]
present+ Mike_Smith
16:46:22 [MichaelC_SJC]
ms: work for W3C, staff contact to HTML WG
16:46:32 [MichaelC_SJC]
work on testing for HTML, extensive contributions to framework
16:46:46 [MichaelC_SJC]
present+ Alan_Stearns
16:46:48 [MichaelC_SJC]
as: working for Adobe
16:47:11 [MichaelC_SJC]
interested in tests working across browsers
16:47:27 [MichaelC_SJC]
present+ Narayana_Babu_Maddhuri
16:47:31 [MichaelC_SJC]
nm: represent Nokia
16:47:44 [MichaelC_SJC]
nm: learn what's up
16:47:54 [MichaelC_SJC]
present+ Duane_O'Brien
16:48:05 [MichaelC_SJC]
do: <missed>
16:48:15 [MikeSmith]
https://browserlab.adobe.com/en-us/index.html <- Adobe BrowserLab
16:48:17 [MichaelC_SJC]
present+ Charlie_Scheinost
16:48:26 [MichaelC_SJC]
cs: represent Adobe
16:48:36 [TabAtkins_]
TabAtkins_ has joined #testing
16:48:53 [MikeSmith]
RRSAgent, make minutes
16:48:53 [RRSAgent]
I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html MikeSmith
16:48:53 [simonstewart]
Ken_Kania
16:49:09 [MikeSmith]
RRSAgent, make logs public
16:49:19 [MichaelC_SJC]
present+ Ken_Kania
16:49:28 [MichaelC_SJC]
kk: work for Google, WebDriver
16:49:36 [MichaelC_SJC]
bs: AT&T, mobile data services
16:49:57 [MichaelC_SJC]
interoperability in various fora
16:50:08 [MichaelC_SJC]
want to understand the challenges browser vendors have in automation
16:50:16 [MichaelC_SJC]
and how to leverage tools in repeatable continuous framework
16:50:33 [MichaelC_SJC]
to certify new devices as they come out, get updated, etc.
16:51:00 [MichaelC_SJC]
present+ Jeff_Hammel
16:51:05 [MichaelC_SJC]
jh: Mozilla, test automation
16:51:15 [MichaelC_SJC]
present+ Clint_Talbert
16:51:27 [MichaelC_SJC]
ct: Mozilla, testing
16:51:36 [MichaelC_SJC]
present+ Tab_Atkins
16:51:41 [MichaelC_SJC]
ta: Google, work on Chrome
16:51:58 [MichaelC_SJC]
not as closely involved in testing, but have worked in CSS on some
16:52:11 [plh]
present+ Michael_Cooper
16:52:43 [plh]
mc: involved in WAI, staff contact for PF, developing ARIA. We're struggling in testing; hoping to contribute to the test framework
16:52:55 [plh]
... we have requirements that we'd like to bring as well
16:53:27 [MichaelC_SJC]
present+ Philippe_Le_Hégaret
16:53:44 [MichaelC_SJC]
plh: W3C, Interaction Domain, lots of your favourite groups
16:54:00 [jhammel]
jhammel has joined #testing
16:54:15 [MichaelC_SJC]
want a common framework, common way to write tests
16:54:27 [MichaelC_SJC]
topic: Agenda Overview
16:54:39 [MichaelC_SJC]
wa: first, want browser vendors to introduce how they do testing
16:55:04 [MichaelC_SJC]
then, presentations of a few testing approaches
16:55:32 [MichaelC_SJC]
finally, discussion of how to write tests for different types of functionality
16:55:46 [MichaelC_SJC]
90% of tests cover whether something is rendered to screen in a particular way
16:55:56 [MichaelC_SJC]
or script returns an expected result
16:56:04 [MichaelC_SJC]
or user fills out a form and a certain result occurs
16:57:02 [MichaelC_SJC]
topic: WebDriver API
16:57:54 [MichaelC_SJC]
ss: WebDriver is an API for automation of WebApps
16:58:03 [MichaelC_SJC]
developer-focused, guides people to writing better tests
16:58:10 [MichaelC_SJC]
Merged with Selenium a couple years ago
16:58:41 [MichaelC_SJC]
fairly simple, load page, find element, perform actions like focus, click, read, etc.
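<for illustration, a minimal sketch of the load/find/act flow ss describes, written against the present-day selenium-webdriver JavaScript bindings, which postdate this meeting; the URL and selectors are invented>
  const {Builder, By, until} = require('selenium-webdriver');

  async function exampleTest() {
    // load page, find element, perform actions -- the flow described above
    const driver = await new Builder().forBrowser('firefox').build();
    try {
      await driver.get('http://example.com/login');                     // load page
      const field = await driver.findElement(By.name('user'));          // find element
      await field.sendKeys('alice');                                    // type into it
      await driver.findElement(By.css('button[type=submit]')).click();  // click
      await driver.wait(until.titleContains('Welcome'), 5000);          // read result
    } finally {
      await driver.quit();
    }
  }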
16:59:07 [MichaelC_SJC]
kk: does it simulate user input at driver level, or elsewhere?
16:59:21 [MichaelC_SJC]
ss: in past user interactions were done by simulating events in DOM
16:59:30 [MichaelC_SJC]
but browsers inconsistent in how they handle those
17:00:01 [MichaelC_SJC]
when they do what etc.
17:00:12 [MichaelC_SJC]
so events at script level not feasible
17:00:17 [MichaelC_SJC]
so do events at OS level
17:00:31 [MichaelC_SJC]
that is high fidelity but terrible machine utilization
17:00:46 [MichaelC_SJC]
and wastes developer's time
17:01:14 [MichaelC_SJC]
so now, allow window not to have focus and send events via various OS APIs
17:01:31 [MichaelC_SJC]
but OS not designed to send high fidelity user input to background window
17:01:47 [MichaelC_SJC]
so now, Opera and Chrome pump events into event loop of browser
17:01:56 [MichaelC_SJC]
<scribe not sure that was caught right>
17:02:23 [MichaelC_SJC]
Webdriver has become a de facto standard for browser automation
17:02:30 [MichaelC_SJC]
most popular open source framework
17:03:03 [MichaelC_SJC]
as can be seen by job postings requiring familiarity with it
17:03:11 [MichaelC_SJC]
has reasonable browser support
17:03:26 [MichaelC_SJC]
Opera, Chrome, and Android add-on, Mozilla starting
17:03:34 [MichaelC_SJC]
uses Apache2 license
17:03:40 [MichaelC_SJC]
business-friendly license
17:04:13 [MichaelC_SJC]
nm: tried on mobile browsers?
17:04:21 [MichaelC_SJC]
ss: yes, in various <lists>
17:04:37 [MichaelC_SJC]
it's a small team
17:04:48 [MichaelC_SJC]
covering wide range of browsers and platforms
17:04:57 [MichaelC_SJC]
see 3 audiences for automation
17:05:06 [MichaelC_SJC]
1) App developers are vast majority
17:05:19 [MichaelC_SJC]
need to test applications
17:05:34 [MichaelC_SJC]
hard to get developers to write tests, and can only get them to write to one API when you get it at all
17:05:47 [MichaelC_SJC]
first audience for WebDriver
17:05:50 [MichaelC_SJC]
2) browser vendors
17:06:26 [MichaelC_SJC]
desire to automate their testing as much as possible
17:06:40 [MichaelC_SJC]
bs: how does WebDriver relate to QUnit?
17:07:00 [MichaelC_SJC]
ss: <didn't catch details>
17:07:12 [MichaelC_SJC]
bs: so Webdriver isn't a framework, it's an API for automating events
17:07:20 [MichaelC_SJC]
ss: clearly a browser automation API
17:07:35 [MichaelC_SJC]
e.g., understand Opera runs 2 million tests / day with this
17:07:44 [MichaelC_SJC]
3) Spec authors
17:07:56 [MichaelC_SJC]
some specs can be articulated entirely in script
17:08:00 [MichaelC_SJC]
and tested that way
17:08:09 [MichaelC_SJC]
others need additional support, this provides that
17:08:39 [MichaelC_SJC]
ee: more spec testers than authors?
17:08:55 [MichaelC_SJC]
ss: yes, those focusing on test aspects
17:09:09 [MichaelC_SJC]
ss: user perspective
17:09:23 [MichaelC_SJC]
it's a series of controlled APIs
17:09:37 [MichaelC_SJC]
to interrogate the DOM
17:09:44 [MichaelC_SJC]
execute script with elevated privileges
17:10:08 [MichaelC_SJC]
and provide APIs to interact, so not just read-only
17:10:47 [MichaelC_SJC]
jj: <question missed>
17:11:06 [MichaelC_SJC]
ss: <answer missed>
17:11:28 [MichaelC_SJC]
jj: avoids cross origin vulnerability?
17:11:30 [MichaelC_SJC]
ss: yes
17:11:43 [MichaelC_SJC]
bs: good, some complicated scenarios
17:11:50 [MichaelC_SJC]
ss: implementer view
17:12:07 [MichaelC_SJC]
neutral to transport and encoding
17:12:09 [MichaelC_SJC]
provide JSON
17:12:24 [MichaelC_SJC]
which means clients can handle it immediately
17:12:33 [MichaelC_SJC]
also have released JavaScript APIs
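<for reference, an illustrative find-element exchange in the style of the JSON wire protocol (documented at the JsonWireProtocol wiki linked later in this log); the selector and element handle are invented>
  POST /session/{sessionId}/element
  {"using": "css selector", "value": "#login"}

  response:
  {"sessionId": "{sessionId}", "status": 0, "value": {"ELEMENT": "0"}}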
17:12:41 [MichaelC_SJC]
ss: Security
17:12:44 [JohnJansen]
My question was regarding the bypass of the x-origin security restriction
17:13:04 [MichaelC_SJC]
ss: automation and security are opposite concerns
17:13:10 [JohnJansen]
answer: the jscript still honors that restriction, though webdriver itself ignores it.
17:13:24 [MichaelC_SJC]
generally, build support into browser
17:13:31 [MichaelC_SJC]
and enable it via an additional component
17:13:52 [MichaelC_SJC]
or command line features
17:14:39 [MichaelC_SJC]
ss: Demo
17:15:26 [MichaelC_SJC]
<shows short script, then executes>
17:16:30 [MichaelC_SJC]
kk: how does Opera do it?
17:16:39 [MichaelC_SJC]
ss: Watir on top of WebDriver
17:17:10 [MichaelC_SJC]
ss: API designed to be extensible
17:17:32 [MichaelC_SJC]
expose capabilities via a simple interface or casting
17:17:49 [MichaelC_SJC]
jj: How are visual verifications handled?
17:18:09 [MichaelC_SJC]
ss: can take a screenshot, platform-dependent
17:18:28 [MichaelC_SJC]
Opera has extended with ability to get hash of the screenshot
17:19:07 [MichaelC_SJC]
attempt to capture entire area described by DOM, not just viewport
17:19:35 [MichaelC_SJC]
deals with difficulties like fixed positioning etc.
17:19:43 [MichaelC_SJC]
but very browser specific
17:19:54 [MichaelC_SJC]
jj: human comparison mechanism?
17:19:54 [charlie]
charlie has joined #testing
17:19:59 [MichaelC_SJC]
ss: in google, teams of people do that
17:20:13 [MichaelC_SJC]
we just provide the mechanism
17:20:31 [MichaelC_SJC]
don't want to over-prescribe how to process images, as state of the art continually changes
17:20:39 [MichaelC_SJC]
bs: to compare layout between different browsers
17:20:47 [MichaelC_SJC]
capture screens, or query position of elements?
17:20:49 [MichaelC_SJC]
ss: can do both
17:20:54 [MichaelC_SJC]
can get location of an element
17:20:57 [MichaelC_SJC]
and size
17:22:01 [MichaelC_SJC]
bs: how about different screens sizes
17:22:18 [MichaelC_SJC]
interested in specifically how things rendered in various circumstances
17:22:59 [MichaelC_SJC]
ss: the locatable interface can provide various types of measures
17:24:20 [MichaelC_SJC]
kk: differences among browsers are wide for many reasons
17:24:29 [MichaelC_SJC]
it's part of the landscape
17:24:51 [MichaelC_SJC]
ss: was able to use same tests using same APIs
17:24:59 [MikeSmith]
q?
17:25:22 [MichaelC_SJC]
at rendering level can be different
17:25:41 [MichaelC_SJC]
plh: platform AAPIs use similar services
17:25:52 [MichaelC_SJC]
hope e.g., ARIA can use WebDriver
17:26:04 [MichaelC_SJC]
ss: have looked at AAPIs, can look at elements by ARIA role etc.
17:26:20 [MichaelC_SJC]
on relationship to AAPIs
17:26:25 [MichaelC_SJC]
sometimes they're enough, sometimes not
17:26:46 [MichaelC_SJC]
one of the next big things in hybridized apps, part native and part Web
17:26:49 [MichaelC_SJC]
may need to use AAPIs to test
17:27:07 [MichaelC_SJC]
plh: think ARIA can be tested using this
17:27:59 [MichaelC_SJC]
ss: have applied WebDriver to native app testing using AAPIs
17:28:22 [MichaelC_SJC]
kk: there has been a path starting with MSAA
17:28:25 [Zakim]
Zakim has joined #testing
17:28:29 [MichaelC_SJC]
rrsagent, do not start a new log
17:28:50 [MichaelC_SJC]
ss: AAPIs are extremely low-level
17:29:02 [MichaelC_SJC]
q+
17:29:20 [MichaelC_SJC]
e.g., a combobox is represented as a few different controls together
17:29:35 [MichaelC_SJC]
kk: developers create all kinds of crazy things
17:30:28 [MichaelC_SJC]
so UI automation allows patterns
17:30:33 [MichaelC_SJC]
mc: can speak to AAPI from WebDriver
17:30:41 [MichaelC_SJC]
ss: Webdriver sits on top of AAPI
17:31:02 [MichaelC_SJC]
but because of script interface, could talk back and forth a bit
17:31:19 [MichaelC_SJC]
wa: Opera has a layer "Watir" on top of WebDriver
17:31:45 [MichaelC_SJC]
<shows sample>
17:32:13 [MichaelC_SJC]
test file looks like a manual test, e.g., a human could interact with it
17:32:23 [MichaelC_SJC]
<demos manual execution of test>
17:32:40 [MichaelC_SJC]
<that can also be executed using the script showed previously>
17:33:07 [MichaelC_SJC]
for each test file, there's a block in the automation script
17:33:40 [MichaelC_SJC]
ss: WebDriver similar
17:33:51 [MichaelC_SJC]
nm: <missed>
17:34:17 [MichaelC_SJC]
ss: <answer related to webelement.gettext>
17:34:22 [MichaelC_SJC]
jj: why the wrapping in Watir?
17:34:33 [MichaelC_SJC]
wa: was done before projects had merged
17:34:43 [MichaelC_SJC]
now doesn't matter as much
17:34:56 [MichaelC_SJC]
plan to submit Opera set of tests to HTML WG for official test suite
17:35:07 [MichaelC_SJC]
but want them in a format other browser vendors could use
17:35:18 [MichaelC_SJC]
Opera uses Ruby bindings, Mozilla uses Python bindings
17:35:33 [MichaelC_SJC]
need to automate in all browsers, Webdriver seems way to go
17:35:43 [MichaelC_SJC]
for official W3C tests, question of what language binding to use?
17:35:50 [MichaelC_SJC]
ss: JavaScript is hugely known
17:36:14 [MichaelC_SJC]
Python is the other one being explored by Mozilla and Chrome
17:36:31 [MichaelC_SJC]
also is "politically unencumbered"
17:36:57 [MichaelC_SJC]
vs some other candidates out there
17:37:08 [MikeSmith]
I vote for Javascript
17:37:09 [MichaelC_SJC]
wa: how complete are JS bindings?
17:37:39 [MichaelC_SJC]
js: still finalizing
17:37:57 [MichaelC_SJC]
kk: <something detailed>
17:38:07 [MichaelC_SJC]
js: API stable
17:38:33 [MichaelC_SJC]
loading script within browser is the part that still needs working on, to get around sandbox
17:38:50 [MichaelC_SJC]
it's usable now, but have debugging etc. to do
17:39:32 [MichaelC_SJC]
ss: so maybe Python preferable?
17:39:50 [MichaelC_SJC]
jg: having dependency on core could be a big stability issue
17:39:58 [MichaelC_SJC]
<^ not sure that's scribed right>
17:40:14 [MichaelC_SJC]
kk: dangerous to build on things that are changing
17:40:32 [MichaelC_SJC]
otoh, need bindings to be something that's available on all targets
17:40:59 [MichaelC_SJC]
ss: normally test and browser communicate like a client / server
17:41:07 [MichaelC_SJC]
can do over a web socket
17:41:23 [MichaelC_SJC]
and run test on machine independent of browser
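<a sketch of that client/server split, again with the selenium-webdriver JavaScript bindings: only the server URL changes relative to a local run, and the test script can live on any machine; the server address is invented>
  const {Builder} = require('selenium-webdriver');

  // inside an async function: the browser runs wherever the remote
  // WebDriver server is; the test runs here
  const driver = await new Builder()
      .usingServer('http://test-lab.example.org:4444/wd/hub')
      .forBrowser('opera')
      .build();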
17:41:42 [MichaelC_SJC]
wa: was able to test a mobile device on a different continent this way
17:42:15 [MichaelC_SJC]
plh: if we set up a test server on W3C site, could you allow it to just run tests at you?
17:42:23 [MichaelC_SJC]
ss: can connect from browser to a test server
17:42:26 [MichaelC_SJC]
so in theory, this works
17:42:29 [MichaelC_SJC]
but security concerns
17:42:37 [MichaelC_SJC]
need a manual intervention to put browser in testing mode
17:43:38 [MichaelC_SJC]
mc: have to trust W3C server from security POV
17:43:47 [MichaelC_SJC]
we need to be careful about how we allow tests to be contributed
17:44:11 [MichaelC_SJC]
<general view of usefulness of this approach>
17:44:36 [MichaelC_SJC]
as: <missed>
17:44:53 [JohnJansen]
as: is there support for IME? how good is it?
17:45:04 [MichaelC_SJC]
ss: support varies by platform as we prioritize development
17:45:40 [MichaelC_SJC]
<mentions wherefores and whynots>
17:45:49 [MichaelC_SJC]
do support internationalized text input
17:45:59 [MichaelC_SJC]
for testing I18N but could be used to test other stuff
17:46:27 [MichaelC_SJC]
do: how well documented is JS API?
17:46:32 [MichaelC_SJC]
ss: fairly extensive
17:46:39 [jhammel]
http://code.google.com/p/selenium/wiki/JsonWireProtocol
17:46:57 [krisk]
krisk has joined #testing
17:47:08 [MichaelC_SJC]
Facebook developed PHP bindings using this documentation
17:47:25 [ctalbert_]
ctalbert_ has joined #testing
17:47:27 [MichaelC_SJC]
Selenium stuff hosted under the Software Freedom Conservancy
17:47:57 [MichaelC_SJC]
can use w/o the open source stuff, but also handy to use the open source stuff
17:48:09 [MichaelC_SJC]
wa: Just started Browser Testing and Tools WG
17:48:31 [jhammel]
http://www.w3.org/2011/08/browser-testing-charter
17:48:33 [MichaelC_SJC]
primary goal is to standardize Webdriver API at W3C
17:48:38 [jhammel]
(i think)
17:48:47 [MichaelC_SJC]
welcome you all to join to make this happen
17:49:15 [MichaelC_SJC]
also want to explore whether all browser vendors can handle official test suites using Webdriver API
17:49:27 [MichaelC_SJC]
ss: aware of support from Google, Opera, Mozilla
17:49:40 [MichaelC_SJC]
explicit non-support from Microsoft, Apple, Nokia, HP
17:50:07 [MichaelC_SJC]
also support from RIM
17:50:22 [MichaelC_SJC]
plh: would Microsoft be able to accommodate tests using this?
17:50:25 [MichaelC_SJC]
kk: depends
17:50:40 [MichaelC_SJC]
standardization of the API will help a lot
17:51:03 [MichaelC_SJC]
<Another link for the WG is http://www.w3.org/testing/browser/>
17:51:46 [MichaelC_SJC]
also need tests structured in certain ways we can work with
17:51:51 [fantasai]
kk: having the tests be self-describing is very important. If I was a TV browser vendor that doesn't support webdriver, I would want to be able to leverage the W3C tests as well
17:52:10 [MichaelC_SJC]
jg: tests always structured so you could run manually, though would be ridiculous to do so with them all in practice
17:52:24 [MichaelC_SJC]
ms: first thing we need is a spec
17:52:49 [MichaelC_SJC]
doesn't matter where editors draft hosted, can do at W3C
17:52:58 [MichaelC_SJC]
IP commitments kick in when we publish a Working Draft
17:53:18 [MichaelC_SJC]
ss, wa: ready to move right away on that
17:53:27 [MichaelC_SJC]
kk: W3C would own code?
17:53:33 [MichaelC_SJC]
ss: W3C would maintain spec
17:53:38 [MichaelC_SJC]
and a reference implementation
17:53:52 [MichaelC_SJC]
but there could be other implementations
17:54:19 [MichaelC_SJC]
mc: reference implementation doesn't necessarily have to be W3C
17:54:25 [MichaelC_SJC]
plh: spec is most important for W3C
17:54:39 [MichaelC_SJC]
ss: all Google testing in some way related to WebDriver
17:55:38 [MichaelC_SJC]
bs: supported in mobile?
17:55:42 [MichaelC_SJC]
ss: chrome and android
17:55:51 [MichaelC_SJC]
wa: also opera for mobile
17:56:01 [MichaelC_SJC]
bs: so other platforms is just lack of implementation?
17:56:11 [MichaelC_SJC]
ss: right; Nokia and Apple haven't implemented
17:56:20 [MichaelC_SJC]
just need a driver
17:57:06 [MichaelC_SJC]
kk: support IE6? want to get rid of that
17:57:15 [MichaelC_SJC]
ss: drop support when usage drops below a certain level
17:57:40 [MichaelC_SJC]
plh: support from Microsoft for Webdriver API will help HTML WG a lot
17:58:17 [MichaelC_SJC]
jj: even if Opera submits tests and HTML adopts, they're self-describing so still testable manually
17:59:30 [MichaelC_SJC]
plh: what does Nokia think?
17:59:41 [MichaelC_SJC]
nm: Nokia not really interested
17:59:58 [MichaelC_SJC]
focused on Webkit stuff
18:00:07 [MichaelC_SJC]
today is first time hearing about it
18:00:13 [Tim]
Tim has joined #testing
18:01:05 [MichaelC_SJC]
ss: it's not just about testing a spec, it's about ensuring users can use content in your browser
18:01:35 [MichaelC_SJC]
so that market force should drive interest even if internal interest is elsewhere
18:01:53 [MichaelC_SJC]
nm: how is performance?
18:02:03 [MichaelC_SJC]
ss: rapid on Android, but slow on emulator
18:02:16 [MichaelC_SJC]
iPhone is fast directly and in emulator
18:02:25 [MichaelC_SJC]
<something else> fast
18:02:30 [MichaelC_SJC]
nm: <missed>
18:02:41 [jhammel]
^ pixel verification
18:02:42 [MichaelC_SJC]
ss: haven't seen a lot of pixel verification on mobile devices
18:03:59 [MichaelC_SJC]
<scribe having a hard time hearing or understanding remainder of discussion>
18:04:23 [MikeSmith]
agenda: http://lists.w3.org/Archives/Public/public-test-infra/2011OctDec/0014.html
18:04:39 [dobrien]
Could we get the minutes updated again as well please?
18:05:01 [MichaelC_SJC]
jj: propose not requiring webdriver in first version of test suite
18:05:34 [MichaelC_SJC]
rrsagent, make minutes
18:05:34 [RRSAgent]
I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html MichaelC_SJC
18:05:38 [bryan]
Scribenick: bryan
18:05:59 [bryan]
Topic: Testing IE
18:06:38 [bryan]
kk: To walk thru testing of IE
18:07:15 [bryan]
... shows slides "Standards and Interoperability"
18:07:39 [fantasai]
IE testing diagram: Standards, Customer Feedback, Privacy, Accessibility, Performance, Security
18:07:57 [fantasai]
(these are pictured as hexagrams around a central "Internet Explorer" label)
18:08:26 [bryan]
... IE testing has various chunks as shown on the slide (slides to be shared)
18:08:27 [fantasai]
"Internet Explorer Testing Lab" w/ photo
18:08:31 [fantasai]
IE5 -> IE10
18:08:36 [fantasai]
948 Workstations
18:08:37 [fantasai]
119 servers
18:08:42 [fantasai]
1200 virtual machines
18:08:45 [fantasai]
remotely configurable
18:08:52 [fantasai]
152 versions of IE shipped every "Patch Tuesday"
18:09:03 [fantasai]
Green Lab Initiative saves ~218 tons of CO2/Year
18:09:24 [bryan]
... IE testing lab using a lot of machines with a lot of IE versions tested every week
18:09:57 [fantasai]
"Standards Engagement"
18:10:00 [fantasai]
ECMA
18:10:05 [fantasai]
TC39 (ECMAScript 5)
18:10:06 [fantasai]
W3C
18:10:10 [fantasai]
- CSS
18:10:12 [fantasai]
-WebApps
18:10:14 [fantasai]
-HTML
18:10:15 [fantasai]
-SVG
18:10:15 [simonstewart]
Slides for the webdriver notes: https://docs.google.com/present/edit?id=0AVrYfCxRNKUGZGc5Nm1ocGhfNzFnaGd2bmZnYw
18:10:17 [fantasai]
-XML
18:10:32 [fantasai]
cycle diagram: Testing -> spec editing -> implementations -> (loop back to Testing)
18:11:00 [fantasai]
"Standard Contributions"
18:11:02 [fantasai]
- Spec editing
18:11:04 [fantasai]
-co-chairing
18:11:11 [fantasai]
- test case contributions to W3C and ECMA
18:11:13 [bryan]
... encourage standards engagement and participation in various groups
18:11:17 [fantasai]
-- 14623 tests submitted
18:11:25 [fantasai]
-- across IE8/IE9/IE10 features
18:11:29 [fantasai]
- hardware (Mercurial server)
18:11:33 [fantasai]
- IE Platform Preview Builds
18:12:42 [bryan]
... have contributed a lot of tests and hardware
18:13:13 [bryan]
... preview builds allow early access and feedback
18:13:20 [fantasai]
"IE10 Standards Support"
18:13:49 [fantasai]
CSS2.1, 2D Transforms, 3D Transforms, Animations, Backgrounds and Borders, Color, Flexbox, Fonts, Grid Alignment, Hyphenation, Image Values (Gradients), Media Queries, Multi-column, Namespaces, OM Views, Positioned Floats, Selectors, Transitions, Values and Units
18:13:58 [fantasai]
DOM Element Traversal, HTML, L3 Core, L3 Events, Style, Traversal and Range
18:14:01 [fantasai]
ECMAScript 5
18:14:02 [fantasai]
File Reader API
18:14:04 [fantasai]
File Saving
18:14:05 [fantasai]
FormData
18:14:07 [fantasai]
Geolocation
18:14:17 [bryan]
... IE10 will support a lot of standards: CSS, HTML5, Web APIs, ... http://ietestdrive.com
18:14:25 [fantasai]
HTML5 appcache, async canvas, drag and drop, forms and validation, structured clone, history API, parser, sandbox, selection, semantic elements, video and audio
18:14:28 [fantasai]
ICC Color profiles
18:14:31 [fantasai]
Indexed DB
18:14:32 [fantasai]
Page Visibility
18:14:35 [fantasai]
Selectors API L2
18:14:37 [fantasai]
SVG Filter Effects
18:14:41 [fantasai]
SVG standalone and in HTML
18:14:42 [bryan]
... also look at the IE blog
18:14:43 [fantasai]
Web Sockets
18:14:45 [fantasai]
Web Workers
18:14:46 [fantasai]
XHTML/XML
18:14:49 [fantasai]
XMLHttpRequest L2
18:14:54 [fantasai]
"Items for Discussion"
18:15:03 [fantasai]
* WG Testing Inconsistent
18:15:09 [fantasai]
- when are tests created? Before LC? CR?
18:15:11 [fantasai]
- When are tests reviewed?
18:15:13 [fantasai]
- vendor prefixes
18:15:22 [fantasai]
- 2+ implementations passing tests required for CR?
18:15:22 [MichaelC_SJC]
q?
18:15:25 [fantasai]
* Review Tools (none)
18:15:25 [bryan]
... issues are inconsistent testing across WGs
18:15:26 [MichaelC_SJC]
q-
18:15:36 [fantasai]
Note -- that's not quite true anymore, plinss wrote one for csswg :)
18:15:54 [bryan]
... when tests are created e.g. related to last call or earlier
18:16:15 [bryan]
... soft rules for how a spec is allowed to progress are maybe not enough
18:16:44 [bryan]
plh: these are soft rules currently
18:16:51 [MichaelC_SJC]
q+ to say I now believe tests need to be ready by Last Call
18:17:55 [bryan]
jj: test tools recently developed have helped with consistency, flushing out remaining inconsistencies is a goal
18:18:32 [bryan]
... different test platforms result in different tests as submitted to W3C
18:19:09 [bryan]
Michael_Cooper: experience has convinced that tests should be available by last call
18:19:29 [bryan]
Kris_Krueger: why would this not be a rec across W3C?
18:19:37 [bryan]
plh: its not easy to enforce
18:19:46 [bryan]
... some WGs will complain
18:20:09 [bryan]
jj: amping up the expectations on testing will help
18:20:25 [bryan]
mc: it should be the rule, with exceptions allowed
18:20:25 [MikeSmith]
q?
18:20:50 [MichaelC_SJC]
ack me
18:20:50 [Zakim]
MichaelC_SJC, you wanted to say I now believe tests need to be ready by Last Call
18:21:05 [bryan]
Elika_Etemad: implementations are needed to see how tests are working
18:21:19 [bryan]
James_Graham: the process does not map to browser development reality
18:22:00 [bryan]
Elika_Etemad: its difficult to say when spec development is done thus making a hard deadline
18:22:35 [bryan]
John_Jansen: problems often cause the specs to move backward
18:22:59 [bryan]
Elika_Etemad: CR is the "test the spec" phase, not the "fix bugs in browsers" phase
18:23:40 [bryan]
... having to move CR back due to bugs is an issue, we need an errata process to allow edits in CR
18:23:56 [bryan]
plh: we are not here to fix the W3C process
18:24:34 [bryan]
John_Jansen: the more times you go thru the circle (edit/implement/test) the better, and also the earlier
18:24:59 [bryan]
James_Graham: when we implement we write the tests... test suites should not be closed
18:25:19 [fantasai]
James_Graham: The state of the spec is irrelevant to when we write tests
18:25:56 [bryan]
Mike_Smith: the Testing IG is scoped broadly, perhaps too much so. The IG will decide what its products will be, e.g. a best practice on when test suites are developed.
18:26:19 [bryan]
... writing this down even if we do not fix the process will help others avoid the same mistakes of the past
18:26:29 [bryan]
... it will still have some value
18:26:54 [bryan]
Wilhelm_Andersen: how do you run tests, what is automated, is development in-house?
18:27:05 [bryan]
Kris_Krueger: write our own tests
18:27:12 [bryan]
plh: from JQuery?
18:27:41 [bryan]
Kris_Krueger: no, customer feedback is also considered
18:28:04 [charlie_]
charlie_ has joined #testing
18:28:08 [bryan]
... e.g. Gmail support provides feedback
18:28:42 [bryan]
... have a lot of automated tests, ship every Tuesday, and get quick feedback from users/developers
18:29:41 [bryan]
Narayana_Babu_Maddhuri: is there any review of the test cases to determine whether a test is valid, any validation of the test results?
18:30:09 [bryan]
plh: the metadata of the test log should clarify what is being tested
18:30:29 [bryan]
Kris_Krueger: pointing to where the test relates to the spec is helpful
18:30:57 [bryan]
plh: we cannot force metadata into tests, but we can encourage this info to help ensure test value clarity
18:31:20 [bryan]
Narayana_Babu_Maddhuri: good reporting would be helpful
18:32:10 [bryan]
plh: knowing e.g. what property works across devices and platforms is a goal, and matching tests to specs would support that
18:32:56 [bryan]
James_Graham: knowing why something is failing is sometimes difficult, dependencies are not clear and why the test failed is unclear
18:32:57 [plh]
[lunch]
18:33:00 [MichaelC_SJC]
== Lunch break is 1 hour ==
18:44:21 [dobrien]
dobrien has joined #testing
19:06:49 [jimevans]
jimevans has left #testing
19:25:46 [shepazu]
shepazu has joined #testing
19:29:46 [charlie]
charlie has joined #testing
19:35:54 [plh]
plh has joined #testing
19:35:55 [stearns]
stearns has joined #testing
19:36:39 [MikeSmith]
MikeSmith has joined #testing
19:37:52 [krisk]
krisk has joined #testing
19:38:06 [MichaelC_SJC]
MichaelC_SJC has joined #testing
19:38:19 [plinss]
plinss has joined #testing
19:38:58 [jhammel]
jhammel has joined #testing
19:39:07 [ctalbert_]
http://people.mozilla.org/~ctalbert/automationpresentation/Automation.html
19:39:14 [JohnJansen]
JohnJansen has joined #testing
19:39:19 [plh]
Topic: Testing Firefox
19:39:22 [simonstewart]
simonstewart has joined #testing
19:39:33 [krisk_]
krisk_ has joined #testing
19:39:45 [krisk_]
Firefox Testing Presentation
19:40:02 [krisk_]
clint: Tools automation lead at Mozilla
19:40:23 [krisk_]
Clint: overview of their testing
19:40:40 [krisk_]
Grown over the years
19:40:52 [krisk_]
Test Harnesses
19:41:10 [fantasai]
"Automation Structure: Test Harnesses"
19:41:14 [fantasai]
- C++ Unit
19:41:21 [krisk_]
C++ unit testing, XPCShell, not too interesting for this group
19:41:22 [fantasai]
- XPCShell (javascript objects)
19:41:25 [fantasai]
- Reftest
19:41:26 [fantasai]
-Mochitest
19:41:30 [fantasai]
-UI Automation Frameworks
19:41:34 [fantasai]
- Marionette
19:42:20 [krisk_]
Mochitest - tests DOM stuff
19:43:01 [krisk_]
New UI automation framework - Marionette
19:43:20 [krisk_]
Reftest drill down
19:43:25 [bryan]
bryan has joined #testing
19:43:56 [fantasai]
"Reftest: style and layout visual comparison testing"
19:44:09 [fantasai]
Reference: <p><b>This is bold</b></p>
19:44:18 [fantasai]
Test: <p style="font-weight: bold">This is bold</p>
19:44:49 [fantasai]
clint: The test and the reference create the same rendering in different ways.
19:44:58 [fantasai]
clint: Then we take screenshots and compare them pixel by pixel
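<for reference, Mozilla reftest manifests pair each test with its reference, one comparison per line; == means the two renderings must match, != that they must differ; these file names are invented>
  == font-weight-bold.html font-weight-bold-ref.html
  != green-background.html red-background-ref.html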
19:45:22 [fantasai]
clint: Mochitest is an HTML file with some javascript in it.
19:45:30 [fantasai]
clint: One of the libraries it pulls in is the SimpleTest library.
19:45:45 [charlie]
charlie has joined #testing
19:45:47 [fantasai]
clint: It has the normal asserts: ok, is, stuff to control whether asynchronous or not
19:46:03 [fantasai]
clint: This other file here (in this example) turns off the geolocation security prompts
19:46:18 [fantasai]
clint shows a geolocation test
19:46:38 [jhammel]
^ http://mxr.mozilla.org/mozilla-central/source/dom/tests/mochitest/geolocation/test_allowWatch.html
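<a skeletal Mochitest, to show the shape of such a file rather than the linked geolocation test itself; the SimpleTest include paths follow mozilla-central convention, and the assertions are trivial placeholders>
  <!DOCTYPE HTML>
  <html>
  <head>
    <title>Minimal Mochitest example</title>
    <script type="application/javascript" src="/tests/SimpleTest/SimpleTest.js"></script>
    <link rel="stylesheet" type="text/css" href="/tests/SimpleTest/test.css"/>
  </head>
  <body>
  <script type="application/javascript">
  SimpleTest.waitForExplicitFinish();        // run asynchronously
  window.addEventListener("load", function() {
    ok(document.body, "document has a body");          // boolean assert
    is(1 + 1, 2, "is() compares actual to expected");  // equality assert
    SimpleTest.finish();
  }, false);
  </script>
  </body>
  </html>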
19:46:55 [fantasai]
plh: How does this route around the security checks?
19:47:03 [fantasai]
clint: uses an add-on
19:47:28 [fantasai]
clint: has a special powers api
19:48:23 [fantasai]
"Marionette: Driving Gecko into the future"
19:48:40 [fantasai]
This is a mechanism we can use to drive any gecko-based application either by UI or by inserting script actions into its various script contexts.
19:48:43 [fantasai]
How it works -
19:48:46 [fantasai]
1. socket opened from inside gecko
19:48:54 [fantasai]
2. Connect to socket from test harness, either local or remote
19:49:00 [fantasai]
3. Send JSON protocol to it
19:49:07 [fantasai]
4. Translates JSON protocol into browser actions
19:49:09 [simonstewart]
uses webdriver json protocol streamed over sockets directly
19:49:13 [fantasai]
5. Send results back to harness in JSON
19:49:21 [jhammel]
wiki page: https://wiki.mozilla.org/Auto-tools/Projects/Marionette
19:49:29 [jhammel]
(WIP)
19:50:35 [fantasai]
clint: We run all of these tests on every checkin on every tree we build on.
19:50:46 [fantasai]
clint: Goes into a dashboard
19:51:01 [fantasai]
slide: shows screenshot of TinderboxPushLog
19:52:15 [fantasai]
wilhelm: Can we steal your Mochitests? What do we need to do to do so?
19:52:23 [fantasai]
clint: Check them out of the tree and see how well they run in Opera
19:52:37 [fantasai]
clint: Some of the stuff we did, e.g. special powers extension,
19:52:53 [fantasai]
clint: but it's now a specific API (used to be scattered randomly throughout tests)
19:52:56 [TabAtkins_]
TabAtkins_ has joined #testing
19:53:08 [fantasai]
clint: If you had something similar and named it specialpowers, then you could use that to get into your secure system
19:53:12 [fantasai]
clint: So should be possible.
19:53:26 [fantasai]
clint: A lot of tests we have in the tree are completely agnostic; don't do anything special at all, should work today
19:53:35 [jhammel]
mochitests are at http://hg.mozilla.org/mozilla-central/file/tip/testing/mochitest
19:53:40 [fantasai]
wilhelm: Are there plans to release these tests to geolocation wg?
19:53:48 [fantasai]
clint: I think they already did. The guy who wrote the tests is on that WG
19:54:07 [fantasai]
kk: ... they're hard-coded to use the Google service. If you don't use it, they don't run...
19:54:11 [fantasai]
kk: Not too many though
19:55:30 [MichaelC_SJC]
MichaelC_SJC has changed the topic to: Browser testing face-to-face meeting 28 October 2011 http://lists.w3.org/Archives/Public/public-test-infra/2011OctDec/0014.html
19:55:49 [fantasai]
some discussion of sharing tests
19:56:02 [fantasai]
Alan: I think WebKit is using some Mozilla reftests, but not using them as reftests
19:56:24 [fantasai]
kk: I'm fine w/ reftests. But of course won't work for everything.
19:56:24 [bryan]
bryan has joined #testing
19:56:43 [fantasai]
kk: CSS tests we wrote are self-describing.
19:56:56 [fantasai]
Alan: do you have automation?
19:56:59 [fantasai]
kk: Yes
19:57:34 [fantasai]
rakesh: Do you run the tests every day?
19:57:39 [fantasai]
clint: Every checkin
19:57:46 [fantasai]
clint: Different trees run different numbers of tests.
19:58:06 [jhammel]
https://tbpl.mozilla.org/
19:59:02 [fantasai]
clint: Our goal is to have test results back within 2 hours. Right now we're averaging 2.5hrs
19:59:44 [fantasai]
fantasai: You're responsible for watching the tree and backing out if you broke something.
20:00:22 [fantasai]
discussion of test coverage
20:01:38 [fantasai]
discussion of subsetting tests during development
20:02:14 [fantasai]
wilhelm: How much noise do you have?
20:02:21 [fantasai]
clint: Don't know about false positives
20:02:37 [fantasai]
clint: Probably not many; once we find one, we check for that pattern elsewhere
20:03:01 [jhammel]
orange factor, for tracking failures: http://brasstacks.mozilla.com/orangefactor/
20:03:05 [fantasai]
clint: Thing we really have is intermittent failures
20:03:15 [fantasai]
clint: We're trying really really hard to bring it down
20:04:00 [fantasai]
clint: Used to be on every checkin you'd get, on average, 8 intermittent failures
20:04:06 [fantasai]
clint: we pushed it down to 2
20:04:11 [fantasai]
clint: And then we added the Android tests
20:04:21 [fantasai]
clint: trying to bring it down again
20:04:32 [fantasai]
duane: Can I instrument Marionette today in FF7?
20:04:43 [fantasai]
clint: No, code we're depending on now is landing currently on Nightly
20:04:49 [fantasai]
clint: Released probably... May?
20:04:59 [fantasai]
clint: Depending on work done by Developer Tools group
20:05:08 [fantasai]
clint: They have a remote debugging protocol they're implementing
20:05:26 [fantasai]
clint: Will be really nice; decided this would be great to piggyback on. Don't need two sockets in lower-level Gecko.
20:05:33 [fantasai]
clint: So won't be available until that's released.
20:05:54 [fantasai]
clint: Currently in a project repo... land in Nightly in ~2.5 weeks
20:06:14 [fantasai]
plh: Marionette is only for Fennec, not for the desktop version?
20:06:27 [fantasai]
clint: For Fennec right now. Planning to go backwards and use for Desktop as well.
20:06:33 [fantasai]
clint: My goal is to move all our infrastructure towards that
20:08:14 [fantasai]
kk asks about reducing orange
20:08:42 [fantasai]
clint: It's mostly a one-by-one effort of fixing the tests
20:09:21 [simonstewart]
Interesting comment about avoiding using setTimeout in tests
20:09:49 [fantasai]
kk: Are you going to take Mochitests into W3C? Anything preventing you?
20:10:10 [fantasai]
clint: Nothing right now. We'd have to clean them up and make them cross-browser. Good for everyone, not opposed, just a matter of finding people and time
20:10:49 [fantasai]
jgraham: there's a bug on making testharness.js look like Mochitest to Mozilla
20:11:47 [fantasai]
topic: Testing Opera
20:11:52 [fantasai]
"This looks vaguely familiar"
20:12:10 [fantasai]
wilhelm: Say a few words about testing at Opera
20:13:00 [fantasai]
jgraham: We have a mainline, which is supposedly always stable, and then when we're developing a feature, it gets branched and at some point tests start passing (that's the yellow, b/c out of sync with mainline) and then we merge and that becomes mainline
20:13:08 [fantasai]
diagram shows mainline with six green dots going forward
20:13:16 [fantasai]
branch goes off, two red dots, one yellow
20:13:22 [fantasai]
arrow from mainline to green dot on feature branch
20:13:28 [ctalbert_]
The wiki page we(mozilla) wrote that details our "lessons learned" from fixing intermittently failing tests is here: https://developer.mozilla.org/en/QA/Avoiding_intermittent_oranges
20:13:29 [fantasai]
arrow from green dot back to green dot on mainline
20:13:46 [fantasai]
jgraham: ...
20:13:56 [fantasai]
jgraham: Our setup's a bit different
20:14:15 [fantasai]
jgraham: All the tests are in subversion in their own repository that's separate from the code. It's just a normal webserver: Apache, PHP
20:14:29 [fantasai]
jgraham: When you ask for tests to be run, they get assigned from the server and we send them out to a couple hundred virtual machines
20:14:36 [fantasai]
jgraham: not quite MSFT's setup
20:14:42 [fantasai]
jgraham: And then we store every result of every test
20:14:57 [fantasai]
jgraham: I think you just store whether all the tests passed... we store, in this build, this test passed.
20:15:03 [fantasai]
jgraham: We have a huge database of this information
20:15:16 [fantasai]
jgraham: Theoretically we can delete stuff, but we store everything.
20:15:32 [fantasai]
jgraham: In a mainline build from yesterday, we ran quarter of a million tests
20:15:50 [fantasai]
jgraham: That's not quarter million files -- it's 60,000 files, some of which produce multiple results
20:16:03 [fantasai]
jgraham: e.g. some tests from HTML5 test in W3C, one file might produce 10,000 results
20:16:22 [fantasai]
jgraham: Typically it's a JS thing and it just runs a bunch of code and at the end it has some results
20:16:27 [fantasai]
jgraham: Dumps them to the browser in some way
20:16:37 [fantasai]
jgraham: The way we do that right now is pretty stupid, so I won't talk about it
20:16:54 [fantasai]
slide: Visual tests, JS tests, Unit tests, Watir tests, Manual tests :(
20:17:01 [fantasai]
jgraham: System was designed 7 years ago or sth
20:17:13 [fantasai]
jgraham: For visual tests, you just take a screenshot, and then we store the screenshot.
20:17:22 [fantasai]
jgraham: Someone manually marks whether that screenshot was a pass or fail.
20:17:35 [fantasai]
jgraham: Don't do that. You have to do it once per test, and then once any time anything changes very slightly
20:17:55 [fantasai]
jgraham: e.g. introduce anti-aliasing test, have to re-annotate all tests
20:18:02 [fantasai]
jgraham: this format is deprecated
20:18:17 [fantasai]
wilhelm: We have 20,000 tests on 3 different Opera configurations...
20:18:41 [fantasai]
wilhelm: We want to kill these tests and use reftests instead
20:18:49 [fantasai]
jgraham: Oh, reftests should be on that list too
20:19:04 [fantasai]
jgraham: Recently we implemented reftests, and we're actively trying to move tests to reftests.
20:19:22 [fantasai]
jgraham: You can't test everything with reftest, but when you can it's much better
20:19:40 [fantasai]
Alan: Do you keep track of when the reference file bitmap changes?
20:20:31 [fantasai]
Alan: What if both the reference and the test change identically such that the test should fail but doesn't?
20:21:00 [fantasai]
plinss: In the case of the CSSWG when we have a fragile reference, we have multiple references that use different techniques
20:21:25 [fantasai]
jgraham: We have a very lightweight framework we used to use for JS tests. Only allowed one test per page.
20:21:46 [fantasai]
jgraham: Easy to use, but required a lot of convoluted logic for each pass/fail result.
20:21:49 [dawagner]
dawagner has joined #testing
20:21:51 [fantasai]
jgraham: For new test suites, we're using testharness.js
20:22:02 [fantasai]
jgraham: similar to Mozilla's MochiKit
20:22:14 [fantasai]
jgraham: Unit tests are C++ level things not worth talking about here
20:22:23 [fantasai]
jgraham: When things need automation, we use Watir -- discussed this morning
20:22:31 [fantasai]
jgraham: When all else fails, we have manual tests
20:22:38 [fantasai]
wilhelm: Notice that the monkey looks really unhappy
20:22:58 [fantasai]
jgraham: For the core of Opera, we schedule a test day and just run tests
20:23:05 [fantasai]
plh: How many manually tests do you have?
20:23:15 [fantasai]
wilhelm: around 2000 before, less now...
20:23:25 [fantasai]
wilhelm: Probably spend about a man-year on manual tests per year
20:23:43 [fantasai]
wilhelm: Say some things about challenges we have, things we need to take into account when writing tests internally and for W3C
20:23:50 [fantasai]
wilhelm: First thing is device independence
20:24:10 [fantasai]
wilhelm: We run 3 different configurations of Opera: Desktop profile, Smartphone profile, and TV profile
20:24:26 [fantasai]
wilhelm: Almost every time someone requests a build, it will be tested on those three profiles
20:24:55 [fantasai]
wilhelm: We notice that if you have a static timeout in your test, e.g. wait 2s before checking result, that will break on stupid profile with low resources
20:25:24 [fantasai]
wilhelm: On some platforms we automatically double or triple it, and we hope it works, but it's not really good solution
20:25:49 [fantasai]
jgraham: How do you deal with ... ?
20:26:07 [fantasai]
clint: we time out our tests after a set time period and mark it as failed
20:26:27 [fantasai]
wilhelm: Main assumption is don't depend on device size or speed -- test will randomly fail.
20:26:39 [fantasai]
wilhelm: Brings me to the next problem: random
20:26:54 [fantasai]
wilhelm: If you have so many tests and even small percentage fail randomly, going to spend man-years investigating those failures
20:27:26 [fantasai]
wilhelm: When we add new configurations, when we steal tests from source of unknown quality, we spend many man-years stamping out randomness in the tests
20:27:35 [fantasai]
wilhelm: The more complex the test, the more likely to randomly fail
20:27:45 [fantasai]
wilhelm: Simplest tests are JS.
20:27:54 [fantasai]
wilhelm: For imported tests from random sources, could be very bad
20:28:02 [fantasai]
wilhelm: Then comes visual tests
20:28:12 [fantasai]
wilhelm: Sometimes complexity is needed, but if can simplify will do that
20:28:31 [fantasai]
wilhelm: We have a quarantine system: run 200 times on test machines first to make sure it's good
20:28:38 [fantasai]
wilhelm: Still, sometimes things slip through.
20:28:45 [fantasai]
wilhelm: We steal your tests. Thank you.
20:28:54 [fantasai]
slide: jQuery, Opera, Chrome, Microsoft, mozilla, W3C
20:29:11 [fantasai]
wilhelm: Keeping in sync with the origin of the test is difficult
20:29:25 [fantasai]
wilhelm: When someone updates a test elsewhere, we don't automatically get that
20:29:42 [fantasai]
wilhelm: When we muck about with the test to get it to work on our system, we have to maintain patches
20:29:44 [Zakim]
Zakim has left #testing
20:29:53 [fantasai]
wilhelm: If we fix bad tests, sometimes easy to contribute back, but sometime not
20:30:10 [fantasai]
wilhelm: Automating tests to use our Watir scripts, can also become a problem.
20:30:13 [Zakim]
Zakim has joined #testing
20:30:16 [fantasai]
wilhelm: Our current approach is not usable
20:30:24 [fantasai]
wilhelm: need a better way for us all to keep in sync
20:30:47 [fantasai]
kk: This is why we have submitted and approved folders
20:31:02 [fantasai]
jgraham: The problem from our POV is really... part of it is a version control problem on our end
20:31:11 [fantasai]
jgraham: Don't have a good way to keep our patches separate from upstream changes
20:31:29 [fantasai]
jgraham: If we have W3C tests, and we pull a new version, don't have a way to say "these are bits we changed to make it work on our version"
20:31:43 [fantasai]
jgraham: ... reporting and script file separate
20:32:11 [fantasai]
jgraham: if we pull some tests from Mozilla, say, and they're JS engine tests and they update them, if we try to merge them... someone has to work out how to do that by hand. It's kind of a nightmare.
20:32:19 [fantasai]
wilhelm: Last thing about randomness, esp imported
20:32:27 [fantasai]
wilhelm: Some tests rely on external tests.
20:32:31 [fantasai]
wilhelm: Great when we only had a few tests
20:32:39 [fantasai]
wilhelm: But now it's a problem. Servers go down, etc.
20:32:52 [fantasai]
wilhelm: Conclusion there is: don't do that. :)
20:32:55 [fantasai]
wilhelm: That's it!
20:33:33 [fantasai]
jhammel: Wrt upstream tests, standardizing on formats and standardizing on process
20:33:41 [fantasai]
wilhelm: We set up time at 3:15 today to discuss this exact issue
20:33:53 [fantasai]
mc: You say you have to fix tests to work on your product.
20:34:07 [fantasai]
mc: Question is how do you separate fixing test to be not random, vs. making them work on a particular product
20:34:20 [fantasai]
jgraham: When we pull in tests, we try not to change anything to do with the test.
20:34:32 [fantasai]
jgraham: We don't require the tests to pass to be in our system.
20:34:41 [fantasai]
jgraham: The thing we need to change is, can this test report back to our servers.
20:34:47 [fantasai]
jgraham: But external tests are usually not designed that way.
20:35:00 [fantasai]
wilhelm: I think testharness.js approach is good, because those are separated.
20:35:27 [krisk_]
That is the end of Opera's presentation
20:36:02 [krisk_]
The next person up is Peter from HP on a CSS WG update (10 minutes)
20:36:25 [krisk_]
Then a discussion on rendering tests for about 1 hour
20:36:34 [MichaelC_SJC]
topic: Testing in the CSS WG
20:37:44 [krisk_]
test.csswg.org
20:37:55 [krisk_]
has lots of information on CSS WG testing
20:38:23 [krisk_]
Tests are 'built' from XML into multiple formats - HTML, XHTML, etc...
20:39:46 [krisk_]
Test harness is a wrapper around the tests that are loaded in an iframe
20:40:07 [krisk_]
It loads first the tests that have the fewest results so far
20:41:11 [krisk_]
The harness has a filter for spec section, etc..
20:41:55 [krisk_]
The harness has meta-data description for each of the tests
20:42:01 [stearns]
test format requirements: http://wiki.csswg.org/test/css2.1/format
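<for reference, the metadata conventions from that format document look roughly like this in a test file's head; the title, file names, and assert text here are invented>
  <title>CSS Test: margins collapse through an empty block</title>
  <link rel="author" title="Example Author" href="mailto:author@example.org">
  <link rel="help" href="http://www.w3.org/TR/CSS21/box.html#collapsing-margins">
  <link rel="match" href="margin-collapse-001-ref.html">
  <meta name="flags" content="">
  <meta name="assert" content="Adjacent vertical margins collapse through an empty block.">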
20:44:04 [krisk_]
The harness also has test results that can be shown for each of the browser/engine versions
20:46:26 [krisk_]
Build process has requirements that will be improved over time - metadata, ref test, title, etc...
20:47:06 [krisk_]
Adding meta-data helps review process, though most submitters don't like to add this data
20:48:00 [krisk_]
Multiple refs for the same test exist and a negative ref test as well
20:49:12 [krisk_]
You can have two ref tests if the spec has two different results - for example margin collapsing
20:49:55 [krisk_]
If a ref test can't be used then in some cases a self-describing test works
20:50:25 [plinss]
http://test.csswg.org/annotations/css21/
20:50:43 [krisk_]
Spec annotations are used that map back to the annotated spec
20:51:54 [krisk_]
The annotated spec has total tests and results for each section of the spec
20:51:59 [ctalbert_]
ctalbert_ has joined #testing
20:52:03 [krisk_]
Now on to the test review system
20:52:32 [krisk_]
http://test.csswg.org/shepherd/
20:53:20 [krisk_]
Very tight coupling to the css test metadata
20:54:09 [krisk_]
Tracks history and other information about a test case
20:55:31 [krisk_]
jgraham: is this tied to the test file?
20:55:50 [krisk_]
peter: no it's possible to have this information in another file
20:56:28 [krisk_]
jgraham: can this handle a case when multiple files are used to create a lot of tests
20:57:09 [krisk_]
peter: yes we have the same issue for the media query test cases
20:57:36 [krisk_]
Wilhelm: So does CSS still use visual non-ref tests?
20:58:05 [krisk_]
fantasai: for CSS3 we require ref-tests, so no
21:01:52 [krisk_]
peter: The system is built to save time and automate parts
21:02:11 [krisk_]
peter: for example when a test is approved it is moved from submitted to approved
21:02:47 [krisk_]
Michael: Does the system have access control checks for approval?
21:02:50 [krisk_]
peter: yes
21:03:28 [MichaelC_SJC]
topic: Testing Chrome
21:03:28 [krisk_]
Ken: Chrome Testing Information
21:03:36 [simonstewart]
kk: works on the chrome automation team
21:03:52 [simonstewart]
kk: not an automation group in the same sense as mozilla
21:03:56 [simonstewart]
chrome depends on webkit
21:04:08 [krisk_]
kk is not krisk
21:04:09 [simonstewart]
webkit layout tests, pixel-based tests
21:04:16 [simonstewart]
kk == ken_kania
21:04:28 [simonstewart]
kk: DOM tree dump tests
21:04:44 [simonstewart]
kk: not got a lot of insight into the specifics of the webkit tests. Focuses mainly on the chrome browser
21:04:50 [simonstewart]
kk: couple of layers of testing
21:05:02 [simonstewart]
kk: lowest layer is the c++ browser tests
21:05:34 [simonstewart]
kk: probably more than other browsers do. Special builds of Chrome which will run C++ in the UI thread
21:05:41 [simonstewart]
kk: relatively low level, though
21:05:56 [simonstewart]
kk: beyond those, there is the UI test framework, based on the automation proxy (AP)
21:06:06 [simonstewart]
kk: ap is pretty old, but is an ipc mechanism
21:06:11 [simonstewart]
kk: very much internal facing
21:06:23 [simonstewart]
those tests are still fairly low level, despite being called UI tests
21:06:42 [simonstewart]
kk: higher than that, Ken's team work on something called the chrome bot
21:06:50 [simonstewart]
kk: runs on real and virtual machines
21:07:19 [simonstewart]
kk: keeps a large number of sites in a cache; often used for crash testing. Also includes tests that perform random UI actions
21:07:30 [simonstewart]
kk: a little bit smarter than pure random, but that's the gist
21:08:08 [simonstewart]
kk: QA-level tests. Tests that are done by manual testers. Piggyback off the UI test automation framework. Things like creating bookmarks, installing extensions, etc.
21:08:34 [simonstewart]
kk: break down manual testing into two parts. First, app compat: push a new release of Chrome and check that sites continue to work. Second, testing Chrome at the UI level
21:08:40 [simonstewart]
Most of the ui is "based on the web"
21:08:49 [simonstewart]
For the chrome specific native widgets there are manual tests
21:08:59 [simonstewart]
kk: app compat depends on webdriver
21:09:16 [simonstewart]
kk: lots of google teams depend on webdriver to verify that sites work.
21:09:50 [simonstewart]
kk: guess that at a high level, the testing strategy tends to be developer focused.
21:10:06 [simonstewart]
kk: devs should write the tests in whatever tool and harness is most expedient for their purpose
21:10:28 [simonstewart]
kk: piggyback a lot on the fact that Chrome does rapid releases. Four channels released to users (canary, dev, beta, stable)
21:10:37 [simonstewart]
kk: different release schedules
21:10:47 [simonstewart]
kk: depend a lot on user feedback from the canaries
21:11:10 [simonstewart]
kk: that's the gist of it
21:11:17 [simonstewart]
tab: sounds good to me
21:11:30 [simonstewart]
jhammel: do chrome do performance testing?
21:11:45 [simonstewart]
kk: we do. Using the AP and the ui testing framework mentioned earlier
21:11:50 [simonstewart]
http://build.chrome.org
21:11:55 [simonstewart]
to view the tests that have been run
21:12:21 [simonstewart]
plh: do you run jQuery tests?
21:12:30 [jhammel]
^ correction: http://build.chromium.org
21:12:37 [simonstewart]
kk: not really. webkit guys might, and we pick that up
21:13:00 [simonstewart]
krisk_: do you create tests and feed them back
21:13:08 [simonstewart]
TabAtkins: we don't do much, but we do
21:13:16 [simonstewart]
krisk_: is that because it doesn't fit with the systems
21:13:35 [simonstewart]
TabAtkins: the ways we write and run tests isn't really compatible with the existing w3 systems.
21:13:43 [simonstewart]
TabAtkins: would like to change that!
21:14:14 [simonstewart]
TabAtkins: some tests are HTML/JS, which might be used where possible. Doesn't happen that regularly
21:14:23 [simonstewart]
krisk_: how do you know that you're interoperable?
21:14:42 [simonstewart]
TabAtkins: in terms of webkit stuff, it's a case of testing being done by different browser vendors
21:15:10 [simonstewart]
kk: lots of c++ tests that are specific to chrome
21:15:10 [jhammel]
simonstewart: np :)
21:15:14 [simonstewart]
krisk_: v8?
21:15:24 [simonstewart]
TabAtkins + kk: v8 team live in europe. Who knows?
21:15:53 [simonstewart]
wilhelm: Opera also has legacy stuff. New tests written in a way that (in theory) is usable outside. Can Chrome do the same thing?
21:16:09 [simonstewart]
TabAtkins: will agitate for that. Involved in spec writing rather than active dev, so might be tricky
21:16:28 [simonstewart]
wilhelm: This is a great forum to raise those issues. Opera happy to share with Chrome if Chrome does the same :)
21:16:41 [simonstewart]
krisk_: do chrome try and pass a bunch of the w3c test suites?
21:16:59 [simonstewart]
TabAtkins: yes. Some of them might be integrated into the Chromium waterfall. Some of them might be run by hand
21:17:21 [simonstewart]
??: does anyone know about WebKit testing
21:17:38 [simonstewart]
TabAtkins: the people I'd like to ask aren't around
21:18:18 [simonstewart]
WebKit does seem to take in test suites from Mozilla. They're running against a bitmap that's different from the Moz rendering
21:18:33 [simonstewart]
TabAtkins: we don't have a good infrastructure for ref tests
21:18:51 [simonstewart]
TabAtkins: the test infrastructure people _do_ want to fix that
21:19:10 [simonstewart]
TabAtkins: every time a new port is added to webkit, there are more pixel tests. Provides pressure to do better
21:19:26 [simonstewart]
plh: any other questions?
21:19:37 [simonstewart]
15 minute break coming up
21:20:11 [simonstewart]
RRSAgent, make minutes
21:20:11 [RRSAgent]
I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html simonstewart
21:20:49 [bryan]
bryan has joined #testing
21:20:55 [bryan]
Info available from webkit: https://trac.webkit.org/wiki
21:21:05 [bryan]
also see http://www.webkit.org/quality/testing.html
21:31:12 [plinss]
plinss has joined #testing
21:31:18 [JohnJansen]
JohnJansen has joined #testing
21:31:18 [dobrien]
dobrien has joined #testing
21:31:29 [charlie]
charlie has joined #testing
21:35:29 [krisk_]
Next agenda Item jgraham talking about testharness.js
21:35:38 [MichaelC_SJC]
scribe: krisk_
21:36:02 [MichaelC_SJC]
topic: testharness.js
21:36:21 [fantasai]
scribenick: fantasai
21:36:24 [TabAtkins_]
TabAtkins_ has joined #testing
21:36:30 [fantasai]
jgraham: testharness.js is something I wrote to run tests.
21:36:36 [fantasai]
jgraham: It runs JS tests specifically
21:36:48 [fantasai]
jgraham: It's a bit like Mochitest, or QUnit which jQuery uses, or various things
21:36:54 [plh]
--> http://w3c-test.org/resources/testharness.js testharness.js
21:36:56 [fantasai]
jgraham: Every JS framework has invented its own testharness
21:37:01 [fantasai]
jgraham: This has slightly different design goals
21:37:15 [fantasai]
jgraham: The overarching goal is that it's something we can use to test low-level specs like HTML and DOM
21:37:23 [fantasai]
jgraham: So it can't rely on lots of HTML and DOM :)
21:37:47 [fantasai]
jgraham: The design goals were to provide some API for writing readable and consistent tests
21:37:50 [fantasai]
in JS
21:38:00 [fantasai]
jgraham: Our previous harness at Opera, as I mentioned, didn't result in very readable
21:38:04 [fantasai]
tests
21:38:13 [fantasai]
jgraham: The other is to support testing the entire DOM level of behavior
21:38:24 [fantasai]
jgraham: There are 2 test types: asynchronous tests and synchronous tests
21:38:32 [fantasai]
jgraham: the second is purely syntactic sugar
21:38:52 [fantasai]
jgraham: Another design goal was to allow possibility of the test to have multiple assertions, and all have to be true for test to pass
21:39:04 [fantasai]
jgraham: typical example might be checking that some node has a set of children.
21:39:18 [fantasai]
jgraham: Might want to first test for any children before testing that 4th child is a <p>
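(A minimal sketch of that ordering, using the test() and assert_* API described below; the element id and structure here are hypothetical, not jgraham's actual example:)

    test(function() {
      var node = document.getElementById("container");  // hypothetical element
      // If this first assertion fails it throws, so the test stops here
      // and the fourth-child assertion below is never evaluated.
      assert_true(node.children.length >= 4, "node has at least four element children");
      assert_equals(node.children[3].tagName, "P", "fourth child is a <p>");
    }, "Node has the expected children");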
21:39:41 [fantasai]
jgraham: Multiple tests per file was a requirement; learning from Opera's 1/file, which was painful for test writers and discouraged many tests
21:39:50 [fantasai]
jgraham: ... runs everything in try-catch blocks
21:39:57 [fantasai]
jgraham: One feature of that is that every bit of the test is like a function, basically
21:40:03 [fantasai]
jgraham: it tries to handle some housekeeping.
21:40:16 [fantasai]
jgraham: if you have 1000 tests in a file, nice if you can time out those tests individually
21:40:31 [fantasai]
jgraham: Uses setTimeout(); can override that if you want, e.g. if running on slow hardware
21:40:44 [fantasai]
jgraham: and a design goal was easy integration with browsers' existing test systems
21:40:55 [fantasai]
jgraham: Should be easy to use on top of MochiKit or whatever you use for reporting results
21:41:01 [fantasai]
jgraham: next thing I thought I'd do is go through creating a test.
21:43:19 [fantasai]
jgraham's text editor:
21:43:26 [fantasai]
<script src="resources/testharnessreport.js"></script.
21:43:36 [fantasai]
<script src="resources/testharness.js"><script>
21:43:40 [fantasai]
<div id="log"></div>
21:43:59 [fantasai]
jgraham: By default testharnessreport.js is blank. It's for you to integrate into your testing system.
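(A sketch of the kind of hook a vendor might drop into testharnessreport.js, assuming testharness.js's add_completion_callback API; the postMessage reporting target is made up:)

    add_completion_callback(function(tests, harness_status) {
      // Collect each test's name, status, and failure message...
      var results = [];
      for (var i = 0; i < tests.length; i++) {
        results.push({name: tests[i].name,
                      status: tests[i].status,
                      message: tests[i].message});
      }
      // ...and hand them to the embedding test system (hypothetical target).
      window.parent.postMessage(JSON.stringify(results), "*");
    });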
21:44:13 [fantasai]
jgraham: the order is not at the moment relevant
21:44:35 [fantasai]
jgraham: we might later check in testharness.js that testharnessreport.js was included
21:44:47 [fantasai]
added to file:
21:44:54 [fantasai]
(at the top)
21:45:02 [fantasai]
<title>Dispatching custom events</title>
21:45:05 [fantasai]
(at the bottom)
21:45:06 [fantasai]
<script>
21:45:15 [fantasai]
var t = async_test("Custom event dispatch");
21:45:19 [fantasai]
</script>
21:45:34 [fantasai]
jgraham: Each test has a number of steps, and each step is a function that gets called
21:45:58 [fantasai]
jgraham: It gets called inside a try-catch block, and we can check if the test failed. We don't put anything as top-level code.
21:46:03 [fantasai]
(added at the bottom)
21:46:07 [fantasai]
t.step(function() {
21:46:23 [fantasai]
(ok, that's too much to type)
21:46:37 [fantasai]
jgraham: Here it's adding an event listener before the second step
21:46:57 [fantasai]
jgraham: When it gets called, it'll call this other function here, which will run this other step, which is another function. Can get a bit verbose.
21:47:20 [fantasai]
jgraham: There's a convenience method that will make this easier.. all documented in testharness.js
21:47:41 [fantasai]
jgraham: Simple assert_equals() with value we get, value we expect, and then you can optionally have a string that describes what it is you're asserting.
21:47:53 [fantasai]
jgraham: At this point everything we want done is done, so we say t.done();
21:48:19 [fantasai]
jgraham: If you load this in a browser, because we have div#log, it will show whether it passes or fails and what assert failed
21:48:25 [plh]
--> http://w3c-test.org/webapps/ElementTraversal/tests/submissions/W3C/Element-childElementCount.html Example of testharness.js
21:48:35 [fantasai]
jgraham: That's all
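(The demo reconstructed as one file, a sketch assuming the async_test/t.step/t.step_func/assert_equals/t.done API walked through above; the listener and event details are illustrative, not the exact code typed in the room:)

    <!DOCTYPE html>
    <title>Dispatching custom events</title>
    <script src="resources/testharness.js"></script>
    <script src="resources/testharnessreport.js"></script>
    <div id="log"></div>
    <script>
    var t = async_test("Custom event dispatch");
    t.step(function() {
      var target = document.getElementById("log");
      // t.step_func wraps the listener so assertion failures inside it
      // are caught and reported by the harness instead of being lost.
      target.addEventListener("custom", t.step_func(function(e) {
        assert_equals(e.type, "custom", "event type seen by the listener");
        t.done();  // everything we want done is done
      }), false);
      var ev = document.createEvent("Event");
      ev.initEvent("custom", true, true);
      target.dispatchEvent(ev);
    });
    </script>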
21:49:07 [fantasai]
jj: Is there an id on the steps, so that you can say you failed step 4 of test foo?
21:49:18 [fantasai]
jgraham: If there's demand, there could be a second argument there.
21:49:37 [fantasai]
jj: would be nice to know where it failed so I can set a breakpoint there
21:49:51 [fantasai]
jgraham: If you get a huge number of tests per file, it's usually auto-generated
21:50:14 [fantasai]
jgraham: if it's failing in an assert, then it'll tell you which assert failed
21:50:51 [fantasai]
plh shows his example
21:51:13 [fantasai]
plh: everything shown here is generated by testharness.js
21:51:53 [fantasai]
jgraham: There's a failure in this, and it seems everyone fails that.
21:51:58 [fantasai]
plh: Bug in testharness.js
21:52:19 [fantasai]
jj: What's the easiest way to debug the test? Is there an error in the test, an error in testharness.js, or an error in the browsers?
21:52:31 [fantasai]
jgraham: There are various types of assertions. Usually corresponds to webIDL
21:52:37 [fantasai]
jgraham: But what's in webIDL isn't always the same
21:52:52 [fantasai]
kk: It's pretty well-written, only 700 lines or so
21:53:51 [fantasai]
clint: If it's synchronous, you don't have to do t.step()
21:54:00 [fantasai]
jgraham: A test that is synchronous implicitly creates a step
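(The synchronous form as a minimal sketch: test() runs its function as one implicit step, so assertions go straight in. The property checked is borrowed from plh's childElementCount example above:)

    test(function() {
      // Runs inside an implicit step: an assertion failure here is
      // caught by the harness and recorded against this one test.
      assert_true("childElementCount" in document.documentElement,
                  "childElementCount is exposed on elements");
    }, "Synchronous test example");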
21:54:17 [fantasai]
wilhelm: Opera currently uses this tool for all the new tests that we write. Can others use this?
21:54:20 [fantasai]
clint: Yeah, I think so
21:54:28 [fantasai]
kk: There used to be some NUnit thing or something that W3C had
21:54:33 [jleyba]
jleyba has joined #testing
21:54:34 [fantasai]
kk: Was in IE, but some browsers couldn't run it.
21:54:39 [fantasai]
kk: Very complicated
21:55:22 [fantasai]
[server problems]
21:58:53 [fantasai]
plinss: Are tests grouped by section into files?
21:59:23 [fantasai]
jgraham: In this case, it checks reflection section, plus section of each part of the spec that defines a reflected attribute
22:00:24 [fantasai]
topic change
22:00:39 [fantasai]
wilhelm: plh wanted to talk about test harness, fantasai wanted to talk about syncing problem
22:00:50 [MichaelC_SJC]
topic: How should we organize public test suites so that they are as easy as possible to contribute to and reuse?
22:03:34 [fantasai]
http://w3c-test.org/framework/
22:03:52 [fantasai]
MikeSmith: This is an instance of the framework peter demoed
22:04:15 [fantasai]
Mike: I'm going to show you what has been added here to make it easier for test suite maintainers to add data to the system.
22:04:25 [fantasai]
Mike: There's this area called Maintainer Login
22:04:42 [fantasai]
Mike: It'll give you an http_auth, which authenticates against W3C's user database
22:04:52 [fantasai]
Mike: Email me if you want access to the system
22:05:08 [fantasai]
Mike: Once you go in there you'll see 2 options: add metadata, change metadata
22:05:17 [fantasai]
Mike: Can add a specification
22:05:33 [fantasai]
Mike: one early piece of feedback I got was they have tests they want to run that are not associated with a spec.
22:05:47 [fantasai]
Mike: So in this instance of the system, it's not a requirement to have a spec for your test suite
22:06:01 [fantasai]
Mike: You can give it an arbitrary ID as long as it's not a duplicate
22:06:05 [fantasai]
Mike: Title of the spec
22:06:08 [fantasai]
Mike: URL for the spec
22:06:18 [fantasai]
Mike: It expects you'll point it to a single-page version of the spec
22:06:31 [fantasai]
Mike: If you have a multi-page spec, don't point it at the TOC. You need the full version of the spec.
22:06:39 [fantasai]
Mike: Could change later, but initially set up this way 'cuz easier
22:06:45 [fantasai]
Mike: This will get added to the list here
22:07:13 [fantasai]
Mike: Next thing you can do is needed if you want to do what Peter was demoing earlier, which was associating testcases with specific sections of the spec -- or specific IDs in the spec
22:07:22 [fantasai]
Mike: Structured around the idea that you put your IDs per section
22:07:38 [fantasai]
Mike: But some WGs, like the WOFF WG, are putting assertions at the sentence level
22:07:53 [fantasai]
Mike: They don't actually have section titles, so needed to accommodate that too
22:08:02 [fantasai]
Peter: Alan and fantasai did some work on that, too.
22:08:14 [fantasai]
Peter: Shepherd tool will be able to parse out spec to find test anchors
22:08:26 [fantasai]
Peter: and then can report testing coverage of the spec, so this is something we will automate
22:08:59 [fantasai]
Alan: What fantasai and I worked out was based on WOFF work, but will be simpler for spec editors. A bit harder to automate, though
22:09:08 [fantasai]
Mike: This part adds spec metadata.
22:09:16 [fantasai]
Mike: Instead of a form to fill out, it lists existing specs in the system
22:09:27 [fantasai]
Mike: once you go here, if there's already data in the system, it'll show you the data already in the system
22:09:35 [fantasai]
Mike: otherwise it'll show you generated data
22:10:06 [fantasai]
Mike: This parses the spec and pulls out the headings. If it looks ok, you press submit
22:10:13 [fantasai]
Mike: It'll put these section titles into the database.
22:10:25 [fantasai]
Mike: If you have IDs below the section title level, then you'll have to use a different way to get it into the DB
22:10:31 [fantasai]
Mike: You might have to get me to do it for now :)
22:10:45 [fantasai]
Mike: Those steps are optional right now.
22:10:53 [fantasai]
Mike: What is necessary is going in and giving info about the test suite itself.
22:10:57 [fantasai]
Mike: you can give it an arbitrary ID
22:11:05 [fantasai]
Mike: Title, longer description
22:11:12 [fantasai]
Mike: to explain the test suite better
22:11:33 [fantasai]
Mike: base URL of where your test suites are stored
22:11:47 [fantasai]
Mike: Difference from the CSS one is that it requires format subdirectories
22:11:50 [fantasai]
plinss: it's optional
22:12:05 [fantasai]
Mike: This one doesn't expect subdirectories. Expects all tests in this one directory
22:12:16 [fantasai]
Mike: If you have separate subdirectories...
22:12:32 [fantasai]
Mike: Need to make different test suites or ...
22:12:39 [fantasai]
Mike: Simplest case you have all tests in one directory
22:12:51 [fantasai]
plinss: The code's actually a lot more flexible wrt formats. We'll talk offline.
22:13:11 [fantasai]
MikeSmith: Then you have contact information for someone who can answer questions about test suites
22:13:16 [fantasai]
MikeSmith: Then you indicate format of the test suite
22:13:56 [fantasai]
MikeSmith: Then you have a list of flags, you can select which ones indicate optional tests
22:14:09 [fantasai]
MikeSmith: There are ways to add flags to the system
22:14:16 [fantasai]
MikeSmith: No ui for it, so contact me
22:14:23 [fantasai]
MikeSmith: Last thing you then do is upload a manifest file
22:14:28 [fantasai]
MikeSmith: You have to have a test suite
22:14:33 [fantasai]
MikeSmith: You select a test suite
22:14:48 [fantasai]
MikeSmith: and then what I have it do right now is that you need to point it to the url for a manifest file, and it'll grab that and read it in
22:14:59 [fantasai]
MikeSmith: Right now two forms of manifest files that it will recognize
22:15:27 [fantasai]
MikeSmith: second one here is just a TSV that expects path/filename, references, flags, links, assertions
22:15:37 [fantasai]
MikeSmith: links are the spec links
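(A hypothetical illustration of that TSV form, columns in the order just listed; the real file separates columns with tabs, shown here as aligned spaces, and every value is made up:)

    path/filename            references                   flags  links        assertions
    event-dispatch-001.html  event-dispatch-001-ref.html         #events      Custom events reach listeners on the target
    reflection-001.html                                   dom    #reflection  Reflected attributes return the content value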
22:15:56 [fantasai]
MikeSmith: The other big change is, I was talking with some people e.g. annevk and ms2ger
22:16:09 [fantasai]
MikeSmith: the format they're using is just listing the filenames
22:17:14 [fantasai]
MikeSmith: it marks support files as support files
22:17:32 [fantasai]
kk: Mozilla guys wanted to know what files were needed to pull to run a test case
22:17:58 [fantasai]
plinss: In the CSSWG, the large manifest file with metadata -- that gets built by the build system
22:18:09 [fantasai]
MikeSmith: This form expects the full filename, not just the extensionless filename
22:18:15 [fantasai]
MikeSmith: Because that's what they had
22:18:25 [fantasai]
MikeSmith: Once you have that, you should be able to get your test cases into the test database
22:18:31 [fantasai]
MikeSmith: and it'll show up on the welcome page
22:18:40 [fantasai]
MikeSmith: Long way to go on this.
22:18:56 [fantasai]
MikeSmith: Goal when I started on this was to get it to the point where I didn't have to manually do INSERT in SQL to get specs into the database
22:19:06 [fantasai]
MikeSmith: What would be really nice is if ppl start using this and getting more test suites in there so that we can ..
22:19:20 [fantasai]
plinss: But right now only limited set of ppl can contribute to that code
22:19:26 [fantasai]
MikeSmith: I created two groups in our database
22:19:37 [fantasai]
MikeSmith: I created a group for developers -- anyone who wants to contribute to framework
22:19:48 [fantasai]
MikeSmith: That'll give you write access to hg repo for the source code for this
22:19:58 [fantasai]
MikeSmith: Take a look at source code and see problems, send me patches or I'll give you direct access
22:20:17 [fantasai]
MikeSmith: Second thing is if you want to have access to use this UI to submit test suite data, I'll have to add you to a particular group
22:20:32 [fantasai]
fantasai: how is this code related to plinss's code?
22:20:36 [fantasai]
MikeSmith: It's forked from that.
22:20:42 [fantasai]
MikeSmith: I've just been pulling the upstream changes
22:20:48 [fantasai]
MikeSmith: been able to merge everything without it breaking.
22:20:58 [fantasai]
MikeSmith: Think it's in good enough shape that we could port it back upstream
22:21:08 [fantasai]
plinss: This system and the Shepherd share a lot of the same base code
22:21:26 [fantasai]
plinss: Lots of things from the Shepherd system I was going to port back into this system, and then pull your stuff in too
22:21:51 [fantasai]
plinss: Mike also has code that ties into the testharness.js code, and will automatically submit results from that
22:22:08 [fantasai]
MikeSmith: If you go to enter data, it gives you some choices about whether you want to run full test suite or not
22:22:32 [fantasai]
MikeSmith: There's a button here that will pull automatic results where possible
22:22:54 [fantasai]
MikeSmith: Be careful, this will submit the data publicly!
22:23:18 [fantasai]
jgraham: Not saying it's a bad idea, but from our POV, we're not going to use it offline.
22:23:31 [fantasai]
(Brian was talking about trying out the system privately offline)
22:24:10 [fantasai]
plinss: The system tracks who's submitting the data. By login if you're logged in, by IP if not
22:24:23 [fantasai]
Brian: Privacy is useful
22:24:34 [fantasai]
plinss: goal is for pulling data from as many sources as possible
22:25:01 [fantasai]
wilhelm: fantasai wanted to talk about keeping things in sync
22:26:06 [dobrien]
Is someone scribing? I can't keep up on the iPad
22:26:16 [ctalbert_]
This is the writeup that we are planning to set up at Mozilla for the CSS tests specifically: https://wiki.mozilla.org/Auto-tools/Projects/W3C_CSS_Test_Mirroring
22:28:48 [krisk_]
Mozilla has a way to move tests from mozilla -> w3c -> mozilla
22:29:39 [ctalbert_]
wilhelm: how will this cope with local patches?
22:29:45 [krisk_]
fantasai: The master copy only lives in one place...
22:30:02 [ctalbert_]
jgraham: probably not a problem with the css tests
22:30:03 [krisk_]
fantasai: approved is the master in w3c
22:30:28 [dobrien]
dobrien has joined #testing
22:30:30 [krisk_]
fantasai: submitted is the master for submissions
22:30:41 [ctalbert_]
jgraham: opera is thinking of having the master from w3c which is intact, and our checkout from that master will have the local patches, and when we pull we'll rebase our patches atop the w3c master
22:30:59 [ctalbert_]
this should be possible now that hg is in use on both the w3c side and our (opera) side
22:31:11 [ctalbert_]
fantasai: we'll probably have to do something similar
22:31:33 [ctalbert_]
jhammel: is there a technical limitation to not have people editing the w3c tests
22:31:36 [ctalbert_]
fantasai: no
22:31:44 [krisk_]
fantasai: this is only for css, which doesn't seem to have this problem
22:31:51 [ctalbert_]
jgraham: probably make it a commit hook
22:31:57 [ctalbert_]
ctalbert_: agreed
22:32:39 [ctalbert_]
peter: if someone pushes to the approved directory without actually being approved then the system just automatically denies them
22:32:48 [ctalbert_]
that may be incorrect ^ (scribe error)
22:33:28 [ctalbert_]
wilhelm: might be an idea to split test suites down at lower granularity levels so that you can have test suites with different levels of maturity
22:33:37 [ctalbert_]
jgraham: don't think that would make a difference tbh
22:34:00 [ctalbert_]
peter: our repo would keep all the data from all the suites in the repo so that our build system could build any version of them from any suite
22:34:17 [ctalbert_]
wilhelm: are there other things we can do to make it easier to contribute test suites?
22:34:44 [ctalbert_]
fantasai: one problem on the mozilla side - there's no place to put tests that should go to the w3c - we depend on a manual process to sort out which should be submitted and then it is done later
22:34:55 [ctalbert_]
fantasai: these tests just sit in a random place and are forgotten
22:35:11 [ctalbert_]
fantasai: once we have a directory that goes to w3c and we tell the reviewers, then it will help quite a bit.
22:36:08 [ctalbert_]
fantasai: the basic idea is to make it obvious what developers need to do with a test to indicate that it is appropriate and ready for w3c; then it should "just happen"
22:36:46 [ctalbert_]
jgraham: we have a similar problem. it's hard to surface those tests and bugfixes without a policy and a place for those tests
22:37:31 [ctalbert_]
peter: if we have a standard format among the test writers then it will be easier to help developers to upload the tests to the w3c. If the developers have to convert the tests it's too difficult and people won't expend the effort to make it happen
22:38:09 [ctalbert_]
krisk_: sometimes it depends on the editors as to when they allow tests into the spec, and you find that tests sometimes lag the spec by quite a bit
22:38:58 [ctalbert_]
fantasai: we found that with the css - the person writing the spec is often nominally tasked with also writing the test suite but because the skill sets are different and the spec editor is usually swamped, then the tests get neglected
22:39:16 [ctalbert_]
fantasai: we really need a dedicated person to manage these tests and testing effort for each spec
22:39:30 [ctalbert_]
MikeSmith: is there some way to motivate people to do that?
22:39:51 [ctalbert_]
MikeSmith: maybe we should publicly track the testsuite owner?
22:40:12 [ctalbert_]
fantasai: we can do that, but the burden is on getting resources for that, really.
22:40:34 [ctalbert_]
MikeSmith: yeah, the question is how do you encourage managers to allow their people to spend time on w3c work
22:41:14 [ctalbert_]
MichaelC_SJC: you might be able to convince your company to do that, but we also need to have the working group chairs understand that this needs to happen
22:42:06 [ctalbert_]
jgraham: if we have them already in an interoperable format then it's pretty easy, but for our existing tests that are in a different format, we aren't going to spend the time to convert them
22:42:33 [ctalbert_]
fantasai: we might just have a place at w3c to take those tests, and just post them publicly and have someone else do the conversion work
22:42:52 [ctalbert_]
jgraham: I suspect that's a wide problem
22:43:03 [MichaelC_SJC]
q+ to ask how much should there be a "W3C format" vs how much does W3C framework need to format (nearly) any format?
22:43:40 [ctalbert_]
krisk_: if you get in the habit of submitting stuff as you're doing development, that seems reasonable.
22:44:12 [ctalbert_]
krisk_: keeping things not super complex is a win, and being consistent will pay dividends
22:44:32 [fantasai]
fantasai^: Because for Opera it may not be valuable to do the conversion, but e.g. Microsoft might want those tests, and decide that the cost of converting is less than the cost of rewriting tests from scratch, so to them it'll be worth it to do the conversion
22:44:49 [ctalbert_]
fantasai: thanks, I'm not too good at this :/
22:44:58 [ctalbert_]
(scribe note ^)
22:46:10 [ctalbert_]
wilhelm: the more I think of this, the more I realize that facilitating the handover of tests is a full time job
22:46:46 [MichaelC_SJC]
ack me
22:46:46 [Zakim]
MichaelC_SJC, you wanted to ask how much should there be a "W3C format" vs how much does W3C framework need to format (nearly) any format?
22:46:53 [ctalbert_]
wilhelm: if we could get every browser vendor to commit one person to do this work on their team then that would be good.
22:47:12 [ctalbert_]
fantasai: the problem is that right now, people haven't adopted the w3c formats internally
22:47:20 [ctalbert_]
it will be less work once that happens
22:47:39 [ctalbert_]
it's not w3c's responsibility to convert your tests to the w3c format
22:47:53 [ctalbert_]
fantasai: you can write a conversion script to convert your test to w3c format
22:48:07 [ctalbert_]
better to do that than to have w3c accept all the different formats
22:48:45 [ctalbert_]
jgraham: the problem is that many of these harnesses are not built for portability
22:49:10 [ctalbert_]
MichaelC_SJC: the problem with a common format (and I may be wrong) is that you run into things you can't test
22:49:36 [ctalbert_]
jgraham: if we run into that, then in that case maybe we can find some lightweight format for those tests, or in that case maybe we use a different type of harness
22:49:56 [ctalbert_]
scribe: ctalbert has to step out
22:49:58 [ctalbert_]
fantasai: ^
22:50:09 [fantasai]
...
22:50:25 [fantasai]
kk: If you can write it with testharness.js, do that. If not, try reftest, if not, try self-describing test
22:50:57 [fantasai]
kk: In your case you have the difficulty of needing a screenreader or something
22:50:58 [fantasai]
...
22:51:15 [fantasai]
jgraham: If you can get ppl to contribute in one format, at least you solve the problem once per platform rather than once per test
22:51:24 [fantasai]
mc: I can agree with the idea that there's a hierarchy of goodness
22:51:41 [fantasai]
mc: The framework should have at least the possibility of hooking in new formats
22:51:44 [fantasai]
general agreement
22:52:14 [fantasai]
wilhelm: For the Watir cases, we noticed areas where we'd want to add tests for something very obscure and specific. What we've done is add support at a low level in Opera and use an API
22:52:20 [fantasai]
wilhelm: Such things could be later added to WebDriver
22:52:43 [fantasai]
Alan: For tests where there isn't a w3c version, but browsers have something, is there a list of most-wanted specs that need tests on the w3c site
22:52:57 [fantasai]
fantasai: All of them? :)
22:53:27 [fantasai]
Alan: We were talking about poking ppl, committing ppl to translating browser tests to w3c tests
22:53:28 [bryan]
bryan has joined #testing
22:53:36 [fantasai]
Alan: Would be more successful at getting resources if we have a specific list of things we need
22:53:41 [fantasai]
jj: Also possibility to ask specific people.
22:53:50 [fantasai]
jj: Rather than saying, please all submit tests for HTML5
22:53:58 [fantasai]
jj: Say, can you submit tests for WebWorkers
22:54:10 [fantasai]
jj: need a specific ask to get things done
22:54:49 [fantasai]
jj: It might not cause an immediate surge in test submissions, but for me, coming from outside, the idea of submitting tests seemed impossible. Didn't know where to submit them, figured they'd be rejected, didn't know what a reftest was, etc.
22:54:54 [fantasai]
jj: So the process was hard, and the ask wasn't specific
22:55:01 [fantasai]
jj: Better way to get things done is asking
22:55:11 [fantasai]
jj: Would like Opera to submit WebWorker tests
22:55:20 [fantasai]
wilhelm: Can I get that in writing so I can show it to my manager?
22:55:38 [fantasai]
Alan: Identify the tests, see who has those tests, then request them
22:56:02 [fantasai]
plh: We've been working on testing framework a little bit, but part of the task is also going out there in the wild and finding tests and getting them to W3C
22:56:18 [fantasai]
plh: Need to get to the point where we have the framework and start asking for tests
22:56:27 [fantasai]
Alan: Use framework to identify areas, since it annotates the spec
22:56:42 [fantasai]
jj: We have no idea how much coverage those 47 tests have -- the number isn't meaningful from a coverage perspective
22:56:53 [fantasai]
jj: 1 is better than 0, but maybe 100 is needed, not 47
22:57:13 [fantasai]
simonstewart: Test coverage is a negative thing: it only says what's not covered, not how well the covered areas are tested
22:57:31 [fantasai]
jj: Even if you say you have 100% on that normative statement, still doesn't tell you if you got all the edge cases
22:57:40 [fantasai]
jgraham: At the moment for HTML we have nothing, though.
22:58:02 [fantasai]
jgraham: We have our tests organized by section in the repo, but it's not explicit
22:58:17 [fantasai]
jgraham: Being able to say per normative statement, do we have a test for this, is pretty nice
22:58:26 [plh]
--> http://www.w3.org/2011/10/timer.html (annoying) timer
22:58:39 [fantasai]
jgraham: If you look somewhere, there's an annotation per sentence in the spec showing tests for section X
22:58:48 [fantasai]
jgraham: But that's really complicated, because spec isn't marked up to make that easy
22:59:01 [fantasai]
jgraham: and testing dozens of disconnected statements
22:59:22 [fantasai]
kk: The problem we're struggling with is not how we get perfect coverage. There's a spec, and there's no coverage.
22:59:33 [fantasai]
kk: Browsers all have this feature, and they don't work the same. So having some is a good start.
22:59:54 [fantasai]
Bryan: If you look at most of WebAPIs near LC or at LC, only 1/3 have tests available
23:01:38 [jhammel]
fantasai: set up a process for getting tests from *your* organization to w3c, and *going forward*, you should write w3c-submittable tests *and* submit the tests. Once that is in place, we can go back and convert legacy tests
23:02:17 [jhammel]
fantasai: we need to get the webkit people to commit to this
23:02:36 [jhammel]
fantasai: you can require that when checked into repo, they become reftests
23:03:01 [jhammel]
fantasai: plan going forward is to convert to reftest
23:03:33 [jhammel]
jgraham: if you're comparing to something bitmap-based, it may take 2x time, but it will save time going forward
23:03:40 [fantasai]
fantasai^: Because then the number of legacy tests that are not w3c-formatted stops growing, and we can work on making that number smaller
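(A sketch of what a converted reftest pair might look like, using the link rel="match" convention the CSS WG's format uses; the filenames and markup are made up. The test and its reference must render identically while exercising different code paths:)

    <!-- square-001.html (test): draws a green square via the feature under test -->
    <!DOCTYPE html>
    <title>Borders paint as a solid square</title>
    <link rel="match" href="square-ref.html">
    <p>Test passes if there is a green square below.</p>
    <div style="width: 0; height: 0; border: 50px solid green"></div>

    <!-- square-ref.html (reference): same rendering by simpler means -->
    <!DOCTYPE html>
    <title>Reference</title>
    <p>Test passes if there is a green square below.</p>
    <div style="width: 100px; height: 100px; background: green"></div>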
23:06:54 [MichaelC_SJC]
topic: Additional Items
23:07:40 [fantasai]
example of a test that has to be self-describing: This tests that the blurring algorithm produces results within 5% of a Gaussian blur
23:07:43 [fantasai]
http://test.csswg.org/source/contributors/mozilla/submitted/css3-background/box-shadow/box-shadow-blur-definition-001.xht
23:08:06 [cion]
cion has joined #testing
23:09:00 [cion]
cion has left #testing
23:09:43 [fantasai]
bryan: We developed a number of specs for device APIs
23:10:05 [fantasai]
bryan: We recognize these APIs are quite sophisticated, and it'll take some time, but we're continuing the development of these capabilities for web runtimes
23:10:29 [fantasai]
bryan: We have developer program, global ... ecosystem
23:10:41 [fantasai]
bryan (from AT&T): wanted very briefly ...
23:10:55 [fantasai]
bryan: show you these links to the specs, the APIs, but more importantly the test framework
23:11:03 [fantasai]
bryan: Test framework is based on QUnit
23:11:43 [fantasai]
bryan: Pulls in a file from a test directory, which has the list of tests associated with this particular API.
23:11:50 [fantasai]
bryan: Tests individual JS files in the same directory
23:11:57 [fantasai]
bryan: will run them one by one
23:12:08 [fantasai]
bryan: This is packaged up as a widget file, which is available for download
23:12:17 [fantasai]
bryan: So we can run all the tests for example using this widget framework.
23:12:34 [fantasai]
bryan shows pie charts of results
23:12:44 [fantasai]
bryan: Automatically uploaded and made available to vendor
23:12:54 [fantasai]
plh: Say 1000 tests for core web standards?
23:12:57 [fantasai]
bryan: No, for the APIs
23:13:24 [fantasai]
bryan: What comes for underlying platform is inherently tested by that community
23:13:30 [fantasai]
bryan: We need to cover device variation
23:13:42 [fantasai]
bryan: identify things that we reference
23:13:49 [fantasai]
bryan: We have individual tests for these, test scripts
23:14:07 [fantasai]
bryan: this is more than an acid-level test, but not what we hope to see from W3C in the long run
23:14:20 [fantasai]
bryan: We don't want to develop and maintain this level of detail in WAC. Want to leverage W3C test suites
23:15:17 [fantasai]
bryan: If you look at the tests, you can see for example the geolocation test suite, which we reference.
23:15:25 [fantasai]
bryan: We want to auto-generate the tests as widget
23:15:37 [fantasai]
jj: So if the test suite changes, do you update your widget?
23:16:51 [fantasai]
bryan: Our goal is to create frameworks where we can pull in tests and run them in this runtime environment without having to necessarily maintain the tests ourselves
23:16:59 [fantasai]
bryan: We would benefit from a common test framework
23:17:02 [mouse]
mouse has joined #testing
23:17:10 [fantasai]
bryan: What exactly these tests are is basically just a JS procedure
23:17:44 [fantasai]
bryan: We test existence of methods, call qunit functions for pass/fail, not necessarily married to this format, but it was the most common one at the time we developed this.
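(A sketch of the existence-and-pass/fail style bryan describes, assuming the QUnit globals of the era -- test(), ok(), equal(); geolocation is picked as the example API because it's referenced above:)

    test("geolocation API is exposed", function() {
      // Existence checks, reported as pass/fail through QUnit assertions.
      ok(navigator.geolocation, "navigator.geolocation exists");
      equal(typeof navigator.geolocation.getCurrentPosition, "function",
            "getCurrentPosition is a method");
    });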
23:18:05 [fantasai]
bryan: So to summarize our goal is to have the scalability to support this widget-based ecosystem across dozens of devices across the world
23:18:10 [fantasai]
bryan: So we have to have scalability
23:18:20 [fantasai]
bryan: To depend on the core standards as something we don't spend a lot of effort on
23:18:22 [mouse]
mouse has left #testing
23:18:26 [fantasai]
bryan: Duplicate things that eventually come from W3C.
23:18:37 [fantasai]
bryan: We'd like to see this developed at W3C so we can directly leverage it.
23:20:13 [fantasai]
fantasai comments on how this shows having a few common formats is better than having w3c accept many similarly-capable formats -- it better supports reuse of the tests
23:20:50 [MichaelC_SJC]
rrsagent, make minutes
23:20:50 [RRSAgent]
I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html MichaelC_SJC
23:21:48 [fantasai]
Topic: Conclusions and Action Items
23:21:55 [fantasai]
1. Vendors commit to running W3C tests
23:22:07 [fantasai]
2. Vendors push internally to adopt W3C test formats
23:23:14 [fantasai]
plh says W3C should make it easier for vendors to import suites
23:23:27 [fantasai]
fantasai: what does that entail?
23:23:52 [fantasai]
plh: make guidelines for WG
23:24:01 [fantasai]
jgraham: I feel the problem is more on our side than on W3C side
23:24:16 [fantasai]
wilhelm, jgraham: but of course, using hg instead of cvs is important for tests
23:24:30 [fantasai]
wilhelm: W3C should commit resources to get tests from vendors
23:24:41 [fantasai]
plh: start with webapps
23:24:58 [fantasai]
wilhelm: Any conclusions on WebDriver discussion?
23:25:12 [fantasai]
wilhelm: We commit to work on the spec, and get that into our browser
23:25:19 [fantasai]
plh: MS and Apple should look into that
23:25:52 [fantasai]
Mike: normal people at apple are interested, but they're not the ones who sign off on things
23:26:25 [fantasai]
kk: Using testharness.js seems to me a very low-hanging fruit, rather than writing a whole bunch of APIs
23:26:26 [jhammel]
"not buy Apple" would be more effective
23:26:50 [fantasai]
wilhelm: There should be a spec that talks about it, for the IP stuff, we need to get a spec out so there's less risk for those implementing
23:27:05 [fantasai]
jgraham: There was some discussion, but no decision, about which bindings W3C would accept tests in
23:27:09 [fantasai]
wilhelm: I'd list that as an open issue
23:28:05 [fantasai]
MikeSmith: We want to follow up with testing IG, [other group]
23:28:30 [fantasai]
MikeSmith: Spec discussion would go to [... mailing list ...]
23:28:44 [fantasai]
wilhelm: Dumping ground for non-W3C-format tests
23:29:30 [fantasai]
kk: You can put whatever you want in submitted folder
23:29:48 [MikeSmith]
public-browser-tools-testing@w3.org
23:29:53 [fantasai]
jgraham: It would be nice, if ppl dump random test suites in random formats, to separate those out from things that would be approved in roughly their current form
23:29:55 [MikeSmith]
http://lists.w3.org/Archives/Public/public-browser-tools-testing/
23:30:23 [fantasai]
kk: We should have an old_stuff directory
23:30:29 [fantasai]
jgraham: And encourage people to dump stuff there
23:30:32 [MikeSmith]
for the Testing IG, http://lists.w3.org/Archives/Public/public-test-infra/ and public-test-infra@w3.org
23:31:20 [fantasai]
plh: We can associate a repo with the testing IG, and then anyone in that IG can push to the repo
23:32:01 [plh]
ACTION: Mike to create mercurial repositories for Web Testing IG and Browser Tools WG
23:32:03 [fantasai]
fantasai: Should be clear that dumping things here is not the same as submitting to an official W3C test suite
23:32:26 [fantasai]
bryan: Should also have a wiki that documents what's there
23:32:33 [ctalbert_]
TabAtkins_: I accidentally locked myself on the patio, could you come rescue me?
23:32:50 [fantasai]
jj: Right, should be clear these are not submitted for review; they're there, and someone can take them and convert them and submit them
23:32:59 [MikeSmith]
http://www.w3.org/wiki/Testing
23:33:00 [fantasai]
jgraham: Come up with a prioritized list of things that need tests
23:33:05 [fantasai]
jj: anything that's in CR? :)
23:33:10 [fantasai]
plh: I'll take an action item to do that
23:33:19 [fantasai]
ACTION plh: make a list of things that need tests
23:33:32 [fantasai]
bryan: Need a list of what's available, what are the key gaps, what do we need to get there
23:33:41 [fantasai]
kk: Identify specs that are in a bad situation.
23:34:22 [fantasai]
fantasai: Also want to track not just what needs testing, but ask vendors whether they have tests for any of these.
23:34:29 [fantasai]
fantasai: Can then go pester people to submit those tests
23:35:15 [fantasai]
ACTION MikeSmith: Create repos for testing IG and testing framework group
23:35:46 [fantasai]
plh: Need places to dump tests for groups that don't have repos atm
23:35:54 [fantasai]
plh: more and more groups have their own test repo
23:36:39 [plh]
ACTION: plh to convince the geolocation WG to use mercurial for their tests
23:36:56 [fantasai]
3. Vendors commit to finding a person to facilitate submission and use of W3C tests
23:37:48 [fantasai]
wilhelm: need to make a formal request to each organization
23:38:04 [fantasai]
bryan: Someone should pull together format descriptions and include the guidelines
23:38:23 [plh]
--> http://www.w3.org/html/wg/wiki/Testing/Authoring/ Authoring Tests
23:38:33 [bryan]
bryan has joined #testing
23:39:53 [fantasai]
discussion of where to collect this information
23:39:56 [plh]
--> http://www.w3.org/testing/ Testing
23:40:42 [fantasai]
jgraham: should be in a place not specific to a given working group
23:41:57 [fantasai]
...
23:42:07 [fantasai]
plinss: There's a lot to be gained by standardizing metadata
23:42:32 [fantasai]
jgraham: hard to do the CSS way for an HTML test
23:42:40 [fantasai]
jgraham: Could have n ways to do it, where n is a small number
23:43:05 [fantasai]
Alan: It would be nice to have everything on a wiki so we don't have to go through a staff member
23:43:12 [fantasai]
Alan: What if this page was a redirect to a wiki?
23:43:26 [fantasai]
jgraham: Could have that page be a link to a wiki
23:43:40 [fantasai]
MikeSmith: I like redirect idea, minimizes work I have to do :)
23:43:47 [bryan]
bryan has left #testing
23:44:35 [fantasai]
wilhelm: So when should we meet again?
23:44:45 [fantasai]
jj: I think we should definitely make this a regular meeting.
23:44:48 [bryan]
bryan has joined #testing
23:44:56 [fantasai]
jj: Seems like everyone in every WG is going to be solving the same problems
23:45:48 [fantasai]
...
23:45:54 [fantasai]
plh: WebDriver will be under browser tools WG
23:46:12 [fantasai]
mc: Who's "we"?
23:46:17 [fantasai]
wilhelm: I don't know, but this crowd is great.
23:46:24 [fantasai]
plh: We can put under the IG
23:46:44 [fantasai]
fantasai: We can at least say we'll meet again next TPAC
23:46:54 [fantasai]
plh: Would be in France next year
23:48:18 [fantasai]
fantasai: Since not everyone will be travelling to TPAC, would we want to do another place at a different time as well?
23:48:27 [fantasai]
jj: Does everyone agree we should meet?
23:48:32 [fantasai]
kk: Depends on deliverables.
23:48:45 [fantasai]
MikeSmith: If we meet 6 months from now, when would that be?
23:48:50 [fantasai]
?: April
23:49:22 [fantasai]
mc: Just want to be sure who the "we" is the invite would go out to
23:49:49 [fantasai]
wilhelm is designated in charge
23:50:37 [fantasai]
Meeting closed.
23:50:42 [fantasai]
RRSAgent: make minutes
23:50:42 [RRSAgent]
I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html fantasai
23:50:44 [fantasai]
RRSAgent: make logs public
00:31:24 [glg]
glg has joined #testing
00:31:57 [glg]
glg has left #testing
00:48:08 [montezuma]
montezuma has joined #testing
01:08:54 [Zakim]
Zakim has left #testing
01:16:51 [arronei_]
arronei_ has joined #testing
01:17:26 [dobrien]
dobrien has joined #testing
01:18:02 [dobrien]
dobrien has left #testing
01:43:41 [gfddg]
gfddg has joined #testing
01:43:55 [gfddg]
gfddg has left #testing
03:14:31 [plinss]
plinss has joined #testing
03:16:18 [stearns]
stearns has joined #testing
03:20:12 [charlie]
charlie has joined #testing
05:05:44 [shepazu]
shepazu has joined #testing