16:37:21 meeting: Browser testing meeting
16:37:29 agenda: http://lists.w3.org/Archives/Public/public-test-infra/2011OctDec/0014.html
16:37:49 chair: Wilhelm_Andersen
16:37:58 present+ Bryan_Sullivan
16:37:59 scribeNick: MichaelC_SJC
16:39:57 present+ Wilhelm_Andersen
16:40:02 topic: Introductions
16:40:24 wa: testing helps everybody
16:40:32 Wilhelm: I'd like to figure out how to make the best possible test suites, how to make the Web better
16:41:05 ... I work for Opera as test monkey, test manager
16:41:17 ... in various parts
16:41:52 present+ James_Graham
16:41:56 jg: also work for Opera
16:42:39 present+ Elika_Etemad
16:42:46 ee: also known as fantasai
16:42:51 ... work on testing in the CSS WG
16:43:03 present+ Jason_Leyba
16:43:07 jl: work on testing at Google
16:43:18 ... want to improve the ecosystem so it all works better
16:43:41 present+ Simon_Stewart
16:44:02 ss: created WebDriver, working on Selenium
16:44:16 ... very aware of the differences between browsers, would love to sort them out
16:44:26 present+ Kris_Krueger
16:44:38 kk: worked in testing at Microsoft
16:44:44 ... more recently on Web standards
16:45:06 present+ John_Jansen
16:45:14 jj: also at Microsoft
16:45:19 ... interested in automation, test suites
16:45:26 present+ Peter_Linss
16:45:32 pl: co-chair of the CSS WG
16:45:44 ... have contributed extensively to that test suite
16:45:59 ... and working on the Shepherd test review tool for it
16:46:05 present+ Mike_Smith
16:46:22 ms: work for W3C, staff contact to the HTML WG
16:46:32 ... work on testing for HTML, extensive contributions to the framework
16:46:46 present+ Alan_Stearns
16:46:48 as: working for Adobe
16:47:11 ... interested in tests working across browsers
16:47:27 present+ Narayana_Babu_Maddhuri
16:47:44 nm: represent Nokia; here to learn what's up
16:47:54 present+ Duane_O'Brien
16:48:05 do: https://browserlab.adobe.com/en-us/index.html <- Adobe BrowserLab
16:48:17 present+ Charlie_Scheinost
16:48:26 cs: represent Adobe
16:49:19 present+ Ken_Kania
16:49:28 kk: work for Google, on WebDriver
16:49:36 bs: AT&T, mobile data services
16:49:57 ... interoperability in various fora
16:50:08 ... want to understand the challenges browser vendors have in automation
16:50:16 ... and how to leverage tools in a repeatable, continuous framework
16:50:33 ... to certify new devices as they come out, get updated, etc.
16:51:00 present+ Jeff_Hammel
16:51:05 jh: Mozilla, test automation
16:51:15 present+ Clint_Talbert
16:51:27 ct: Mozilla, testing
16:51:36 present+ Tab_Atkins
16:51:41 ta: Google, work on Chrome
16:51:58 ... not as closely involved in testing, but have worked on some CSS tests
16:52:11 present+ Michael_Cooper
16:52:43 mc: involved in WAI; staff contact for PF, developing ARIA
16:52:50 ... we're struggling with testing; hoping to contribute to the test framework
16:52:55 ... we have requirements that we'd like to bring as well
16:53:27 present+ Philippe_Le_Hégaret
16:53:44 plh: W3C, Interaction Domain, lots of your favourite groups
16:54:15 ... want a common framework, a common way to write tests
16:54:27 topic: Agenda Overview
16:54:39 wa: first, want browser vendors to introduce how they do testing
16:55:04 ... then, presentations of a few testing approaches
16:55:32 ... finally, discussion of how to write tests for different types of functionality
16:55:46 ... 90% of tests check that something is rendered to the screen in a particular way
16:55:56 ... or that a script returns an expected result
16:56:04 ... or that a user fills out a form and gets a certain result
16:57:02 topic: WebDriver API
16:57:54 ss: WebDriver is an API for automation of web apps
16:58:03 ... developer-focused; guides people toward writing better tests
16:58:10 ... merged with Selenium a couple of years ago
16:58:41 ... fairly simple: load page, find element, perform actions like focus, click, read, etc.
16:59:07 kk: does it simulate user input at the driver level, or elsewhere?
16:59:21 ss: in the past, user interactions were done by simulating events in the DOM
16:59:30 ... but browsers are inconsistent in how they handle those
17:00:01 ... when they do what, etc.
17:00:12 ... so events at the script level are not feasible
17:00:17 ... so we did events at the OS level
17:00:31 ... that is high fidelity but terrible machine utilization
17:00:46 ... and wastes developers' time
17:01:14 ... so now, we allow the window not to have focus and send events via various OS APIs
17:01:31 ... but OSes are not designed to send high-fidelity user input to a background window
17:01:47 ... so now, Opera and Chrome pump events into the event loop of the browser
17:02:23 ss: WebDriver has become a de facto standard for browser automation
17:02:30 ... most popular open source framework
17:03:03 ... as can be seen by job postings requiring familiarity with it
17:03:11 ... has reasonable browser support
17:03:26 ... Opera, Chrome, and an Android add-on; Mozilla starting
17:03:34 ... uses the Apache 2 license
17:03:40 ... a business-friendly license
17:04:13 nm: tried on mobile browsers?
17:04:21 ss: yes, on various
17:04:37 ... it's a small team
17:04:48 ... covering a wide range of browsers and platforms
17:04:57 ss: see 3 audiences for automation
17:05:06 ... 1) app developers are the vast majority
17:05:19 ... need to test applications
17:05:34 ... hard to get developers to write tests, and can only get them to write to one API when you get it at all
17:05:47 ... first audience for WebDriver
17:05:50 ... 2) browser vendors
17:06:26 ... desire to automate their testing as much as possible
17:06:40 bs: how does WebDriver relate to QUnit?
17:07:00 ss:
17:07:12 bs: so WebDriver isn't a framework, it's an API for automating events
17:07:20 ss: clearly a browser automation API
17:07:35 ... e.g., understand Opera runs 2 million tests / day with this
17:07:44 ... 3) spec authors
17:07:56 ... some specs can be articulated entirely in script
17:08:00 ... and tested that way
17:08:09 ... others need additional support; this provides that
17:08:39 ee: more spec testers than authors?
17:08:55 ss: yes, those focusing on test aspects
17:09:09 ss: user perspective
17:09:23 ... it's a series of controlled APIs
17:09:37 ... to interrogate the DOM
17:09:44 ... execute script with elevated privileges
17:10:08 ... and provide APIs to interact, so not just read-only
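(A sketch of the flow ss describes: load a page, find an element, act on it, read state back. This uses the present-day selenium-webdriver bindings for Node.js purely as an illustration, not the 2011 API; the URL and locator are invented:)

  // npm install selenium-webdriver; a matching driver binary must be on PATH
  const {Builder, By, until} = require('selenium-webdriver');

  async function example() {
    const driver = await new Builder().forBrowser('firefox').build();
    try {
      await driver.get('http://example.com/search');       // load page
      const box = await driver.findElement(By.name('q'));  // find element
      await box.click();                                   // focus/click
      await box.sendKeys('browser testing');               // type
      const rect = await box.getRect();                    // location and size
      console.log(await box.getAttribute('value'), rect);  // read state back
      await driver.wait(until.titleContains('results'), 5000);
    } finally {
      await driver.quit();
    }
  }
  example();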
17:10:47 jj:
17:11:06 ss:
17:11:28 jj: avoids cross-origin vulnerability?
17:11:30 ss: yes
17:11:43 bs: good, some complicated scenarios
17:11:50 ss: implementer view
17:12:07 ... neutral to transport and encoding
17:12:09 ... provide JSON
17:12:24 ... which brings clients that can handle it immediately
17:12:33 ... also have released JavaScript APIs
17:12:41 ss: Security
17:12:44 jj: My question was regarding the bypass of the x-origin security restriction
17:13:10 ... answer: the JavaScript still honors that restriction, though WebDriver itself ignores it.
17:13:04 ss: automation and security are opposite concerns
17:13:24 ... generally, build support into the browser
17:13:31 ... and enable it via an additional component
17:13:52 ... or command line features
17:14:39 ss: Demo
17:16:30 kk: how about Opera?
17:16:39 ss: Watir, on top of WebDriver
17:17:10 ss: the API is designed to be extensible
17:17:32 ... expose capabilities via a simple interface or casting
17:17:49 jj: How are visual verifications handled?
17:18:09 ss: can take a screenshot; platform-dependent
17:18:28 ... Opera has extended it with the ability to get a hash of the screenshot
17:19:07 ... attempt to capture the entire area described by the DOM, not just the viewport
17:19:35 ... deals with difficulties like fixed positioning etc.
17:19:43 ... but very browser-specific
17:19:54 jj: human comparison mechanism?
17:19:59 ss: at Google, teams of people do that
17:20:13 ... we just provide the mechanism
17:20:31 ... don't want to over-prescribe how to process images, as the state of the art continually changes
17:20:39 bs: to compare layout between different browsers,
17:20:47 ... capture screens, or query position of elements?
17:20:49 ss: can do both
17:20:54 ... can get the location of an element
17:20:57 ... and its size
17:22:01 bs: how about different screen sizes?
17:22:18 ... interested specifically in how things are rendered in various circumstances
17:22:59 ss: the locatable interface can provide various types of measures
17:24:20 kk: differences among browsers are wide, for many reasons
17:24:29 ... it's part of the landscape
17:24:51 ss: was able to use the same tests using the same APIs
17:25:22 ... at the rendering level they can be different
17:25:41 plh: platform AAPIs (accessibility APIs) use similar services
17:25:52 ... hope e.g. ARIA can use WebDriver
17:26:04 ss: have looked at AAPIs, can look at elements by ARIA role etc.
17:26:20 ... on the relationship to AAPIs:
17:26:25 ... sometimes they're enough, sometimes not
17:26:46 ... one of the next big things is hybridized apps, part native and part Web
17:26:49 ... may need to use AAPIs to test those
17:27:07 plh: think ARIA can be tested using this
17:27:59 ss: have applied WebDriver to native app testing using AAPIs
17:28:22 kk: there has been a path starting with MSAA
17:28:50 ss: AAPIs are extremely low-level
17:29:20 ... e.g., a combobox is represented as a few different controls together
17:29:35 kk: developers create all kinds of crazy things
17:30:28 ... so UI Automation allows patterns
17:30:33 mc: can we speak to the AAPI from WebDriver?
17:30:41 ss: WebDriver sits on top of the AAPI
17:31:02 ... but because of the script interface, it could talk back and forth a bit
17:31:19 wa: Opera has a layer, "Watir", on top of WebDriver
17:32:13 ... a test file looks like a manual test, e.g., a human could interact with it
17:33:07 ... for each test file, there's a block in the automation script
17:33:40 ss: WebDriver is similar
17:33:51 nm:
17:34:17 ss:
17:34:22 jj: why the wrapping in Watir?
17:34:33 wa: it was done before the projects had merged
17:34:43 ... now it doesn't matter as much
17:34:56 ... we plan to submit Opera's set of tests to the HTML WG for the official test suite
17:35:07 ... but want them in a format other browser vendors could use
17:35:18 ... Opera uses Ruby bindings, Mozilla uses Python bindings
17:35:33 ... need to automate in all browsers; WebDriver seems the way to go
17:35:43 ... for official W3C tests, the question is what language binding to use?
17:35:50 ss: JavaScript is hugely known
17:36:14 ... Python is the other one, being explored by Mozilla and Chrome
17:36:31 ... it is also "politically unencumbered"
17:36:57 ... vs some other candidates out there
17:37:08 ... I vote for JavaScript
17:37:09 wa: how complete are the JS bindings?
17:37:39 js: still finalizing
17:37:57 kk:
17:38:07 js: the API is stable
17:38:33 ... loading script within the browser is the part that still needs working on, to get around the sandbox
17:38:50 ... it's usable now, but we have debugging etc. to do
17:39:32 ss: so maybe Python is preferable?
17:39:50 jg: having a dependency on core could be a big stability issue
17:39:58 <^ not sure that's scribed right>
17:40:14 kk: dangerous to build on things that are changing
17:40:32 ... otoh, the bindings need to be something that's available on all targets
17:40:59 ss: normally the test and the browser communicate like a client / server
17:41:07 ... can do it over a web socket
17:41:23 ... and run the test on a machine independent of the browser
17:41:42 wa: was able to test a mobile device on a different continent this way
17:42:15 plh: if we set up a test server on the W3C site, could you allow it to just run tests at you?
17:42:23 ss: can connect from the browser to a test server
17:42:26 ... so in theory, this works
17:42:29 ... but there are security concerns
17:42:37 ... need a manual intervention to put the browser in testing mode
17:43:38 mc: have to trust the W3C server from a security POV
17:43:47 ... how we allow tests to be contributed needs to be handled carefully
17:44:53 as: is there support for IME? how good is it?
17:45:04 ss: support varies by platform, as we prioritize development
17:45:49 ... we do support internationalized text input
17:45:59 ... for testing i18n, but it could be used to test other stuff
17:46:27 do: how well documented is the JS API?
17:46:32 ss: fairly extensive
17:46:39 http://code.google.com/p/selenium/wiki/JsonWireProtocol
17:47:08 ... Facebook developed PHP bindings using this documentation
17:47:27 ... the Selenium stuff is hosted under the Software Freedom Conservancy
17:47:57 ... you can use it w/o the open source stuff, but it's also handy to use the open source stuff
17:48:09 wa: We've just started the Browser Testing and Tools WG
17:48:31 http://www.w3.org/2011/08/browser-testing-charter
17:48:33 ... the primary goal is to standardize the WebDriver API at W3C
17:48:38 (i think)
17:48:47 ... welcome you all to join to make this happen
17:49:15 ... also want to explore whether all browser vendors can handle official test suites using the WebDriver API
17:49:27 ss: aware of support from Google, Opera, Mozilla
17:49:40 ... explicit non-support from Microsoft, Apple, Nokia, HP
17:50:07 ... also support from RIM
17:50:22 plh: would Microsoft be able to accommodate tests using this?
17:50:25 kk: depends
17:50:40 ... standardization of the API will help a lot
17:51:46 ... we also need tests structured in certain ways we can work with
17:51:51 kk: having the tests be self-describing is very important. If I were a TV browser vendor that doesn't support WebDriver, I would want to be able to leverage the W3C tests as well
17:52:10 jg: tests are always structured so you could run them manually, though it would be ridiculous to do so with all of them in practice
17:52:24 ms: the first thing we need is a spec
17:52:49 ... it doesn't matter where the editor's draft is hosted; can do it at W3C
17:52:58 ... IP commitments kick in when we publish a Working Draft
17:53:18 ss, wa: ready to move right away on that
17:53:27 kk: W3C would own the code?
17:53:33 ss: W3C would maintain the spec
17:53:38 ... and a reference implementation
17:53:52 ... but there could be other implementations
17:54:19 mc: the reference implementation doesn't necessarily have to be W3C's
17:54:25 plh: the spec is most important for W3C
17:54:39 ss: all Google testing is in some way related to WebDriver
17:55:38 bs: supported on mobile?
17:55:42 ss: Chrome and Android
17:55:51 wa: also Opera for mobile
17:56:01 bs: so other platforms are just lacking an implementation?
17:56:11 ss: right; Nokia and Apple haven't implemented
17:56:20 ... just need a driver
17:57:06 kk: support IE6? we want to get rid of that
17:57:15 ss: we drop support when usage drops below a certain level
17:57:40 plh: support from Microsoft for the WebDriver API will help the HTML WG a lot
17:58:17 jj: even if Opera submits tests and HTML adopts them, they're self-describing, so still testable manually
17:59:30 plh: what does Nokia think?
17:59:41 nm: Nokia is not really interested
17:59:58 ... focused on WebKit stuff
18:00:07 ... today is the first time hearing about it
18:01:05 ss: it's not just about testing a spec, it's about ensuring users can use content in your browser
18:01:35 ... so that market force should drive interest even if internal interest is elsewhere
18:01:53 nm: how is performance?
18:02:03 ss: rapid on Android, but slow on the emulator
18:02:16 ... iPhone is fast directly and in the emulator
18:02:41 nm: how about pixel verification?
18:02:42 ss: haven't seen a lot of pixel verification on mobile devices
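(A sketch of what the JSON wire protocol linked above looks like in practice: every command is an HTTP request carrying JSON, which is why clients in any language can speak it. The local endpoint, port, capabilities, and page are assumptions for illustration:)

  // Node.js sketch; assumes a driver speaking the legacy JSON wire protocol
  // is listening locally, e.g. on port 9515. All values are invented.
  const base = 'http://localhost:9515';

  async function wireExample() {
    // create a session
    let res = await fetch(base + '/session', {
      method: 'POST',
      body: JSON.stringify({desiredCapabilities: {browserName: 'chrome'}})
    });
    const {sessionId} = await res.json();

    // navigate the browser
    await fetch(`${base}/session/${sessionId}/url`, {
      method: 'POST',
      body: JSON.stringify({url: 'http://example.com/'})
    });

    // find an element; a zero "status" means success in this protocol
    res = await fetch(`${base}/session/${sessionId}/element`, {
      method: 'POST',
      body: JSON.stringify({using: 'css selector', value: 'h1'})
    });
    const {status, value} = await res.json();
    console.log(status === 0 ? value.ELEMENT : 'lookup failed');
  }
  wireExample();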
18:05:01 jj: propose not requiring WebDriver in the first version of the test suite
18:05:38 Scribenick: bryan
18:05:59 Topic: Testing IE
18:06:38 kk: will walk through testing of IE
18:07:15 ... shows slides, "Standards and Interoperability"
18:07:39 IE testing diagram: Standards, Customer Feedback, Privacy, Accessibility, Performance, Security
18:07:57 (these are pictured as hexagons around a central "Internet Explorer" label)
18:08:26 ... IE testing has various chunks as shown on the slide (slides to be shared)
18:08:27 "Internet Explorer Testing Lab" w/ photo
18:08:31 IE5 -> IE10
18:08:36 948 workstations
18:08:37 119 servers
18:08:42 1200 virtual machines
18:08:45 remotely configurable
18:08:52 152 versions of IE shipped every "Patch Tuesday"
18:09:03 Green Lab Initiative saves ~218 tons of CO2/year
18:09:24 ... the IE testing lab uses a lot of machines, with a lot of IE versions tested every week
18:09:57 "Standards Engagement"
18:10:00 ECMA
18:10:05 - TC39 (ECMAScript 5)
18:10:06 W3C
18:10:10 - CSS
18:10:12 - WebApps
18:10:14 - HTML
18:10:15 - SVG
18:10:15 Slides for the WebDriver notes: https://docs.google.com/present/edit?id=0AVrYfCxRNKUGZGc5Nm1ocGhfNzFnaGd2bmZnYw
18:10:17 - XML
18:10:32 cycle diagram: Testing -> spec editing -> implementations -> (loop back to Testing)
18:11:00 "Standards Contributions"
18:11:02 - spec editing
18:11:04 - co-chairing
18:11:11 - test case contributions to W3C and ECMA
18:11:13 ... encourage standards engagement and participation in various groups
18:11:17 -- 14623 tests submitted
18:11:25 -- across IE8/IE9/IE10 features
18:11:29 - hardware (Mercurial server)
18:11:33 - IE Platform Preview builds
18:12:42 ... have contributed a lot of tests and hardware
18:13:13 ... preview builds allow early access and feedback
18:13:20 "IE10 Standards Support"
18:13:49 CSS 2.1, 2D Transforms, 3D Transforms, Animations, Backgrounds and Borders, Color, Flexbox, Fonts, Grid Alignment, Hyphenation, Image Values (gradients), Media Queries, Multi-column, Namespaces, OM Views, Positioned Floats, Selectors, Transitions, Values and Units
18:13:58 DOM: Element Traversal, HTML, L3 Core, L3 Events, Style, Traversal and Range
18:14:01 ECMAScript 5
18:14:02 File Reader API
18:14:04 File Saving
18:14:05 FormData
18:14:07 Geolocation
18:14:17 ... IE10 will support a lot of standards: CSS, HTML5, Web APIs, ... http://ietestdrive.com
18:14:25 HTML5: appcache, async, canvas, drag and drop, forms and validation, structured clone, history API, parser, sandbox, selection, semantic elements, video and audio
18:14:28 ICC color profiles
18:14:31 Indexed DB
18:14:32 Page Visibility
18:14:35 Selectors API L2
18:14:37 SVG Filter Effects
18:14:41 SVG standalone and in HTML
18:14:42 ... also look at the IE blog
18:14:43 Web Sockets
18:14:45 Web Workers
18:14:46 XHTML/XML
18:14:49 XMLHttpRequest L2
18:14:54 "Items for Discussion"
18:15:03 * WG testing is inconsistent
18:15:09 - when are tests created? before LC? CR?
18:15:11 - when are tests reviewed?
18:15:13 - vendor prefixes
18:15:22 - 2+ implementations passing tests required for CR?
18:15:25 * Review tools (none)
18:15:25 ... the issue is inconsistent testing across WGs
18:15:36 Note -- that's not quite true anymore, plinss wrote one for csswg :)
18:15:54 ... when tests are created, e.g. related to Last Call or earlier
18:16:15 ... soft rules for how a spec is allowed to progress are maybe not enough
18:16:44 plh: these are soft rules currently
18:17:55 jj: test tools recently developed have helped with consistency; flushing out the remaining inconsistencies is a goal
18:18:32 ... different test platforms result in different tests as submitted to W3C
18:19:09 Michael_Cooper: experience has convinced me that tests should be available by Last Call
18:19:29 Kris_Krueger: why would this not be a requirement across W3C?
18:19:37 plh: it's not easy to enforce
18:19:46 ... some WGs will complain
18:20:09 jj: raising the expectations on testing will help
18:20:25 mc: it should be the rule, with exceptions allowed
18:21:05 Elika_Etemad: implementations are needed to see how tests are working
18:21:19 James_Graham: the process does not map to browser development reality
18:22:00 Elika_Etemad: it's difficult to say when spec development is done, which makes a hard deadline difficult
18:22:35 John_Jansen: problems often cause the specs to move backward
18:22:59 Elika_Etemad: CR is the test-the-spec phase, not the fix-bugs-in-browsers phase
18:23:40 ... having to move CR back due to bugs is an issue; we need an errata process to allow edits in CR
18:23:56 plh: we are not here to fix the W3C process
18:24:34 John_Jansen: the more times you go through the circle (edit/implement/test) the better, and the earlier the better
18:24:59 James_Graham: when we implement, we write the tests... test suites should not be closed
18:25:19 James_Graham: the state of the spec is irrelevant to when we write tests
18:25:56 Mike_Smith: the Testing IG is scoped broadly, perhaps too much so. The IG will decide what its products will be, e.g. a best practice on when test suites are developed.
18:26:19 ... writing this down, even if we do not fix the process, will help others avoid the mistakes of the past
18:26:29 ... it will still have some value
18:26:54 Wilhelm_Andersen: how do you run tests, what is automated, is development in-house?
18:27:05 Kris_Krueger: we write our own tests
18:27:12 plh: from jQuery?
18:27:41 Kris_Krueger: no; customer feedback is also considered
18:28:08 ... e.g. Gmail support provides feedback
18:28:42 ... we have a lot of automated tests, ship every Tuesday, and get quick feedback from users/developers
18:29:41 Narayana_Babu_Maddhuri: is there any review of the test cases to determine whether a test is valid, any validation of the test results?
18:30:09 plh: the metadata of the test log should clarify what is being tested
18:30:29 Kris_Krueger: pointing to where the test relates to the spec is helpful
18:30:57 plh: we cannot force metadata into tests, but we can encourage this info to help ensure test value clarity
18:31:20 Narayana_Babu_Maddhuri: good reporting would be helpful
18:32:10 plh: knowing e.g. what property works across devices and platforms is a goal, and matching tests to specs would support that
18:32:56 James_Graham: knowing why something is failing is sometimes difficult; dependencies are not clear, and why the test failed is unclear
18:33:00 == Lunch break, 1 hour ==
19:39:07 http://people.mozilla.org/~ctalbert/automationpresentation/Automation.html
19:39:19 Topic: Testing Firefox
19:39:45 Firefox Testing Presentation
19:40:02 clint: Tools automation lead at Mozilla
19:40:23 clint: overview of their testing
19:40:40 ... grown over the years
19:41:10 slide: "Automation Structure: Test Harnesses"
19:41:14 - C++ unit tests
19:41:21 clint: C++ unit testing and XPCShell are not too interesting for this group
19:41:22 - XPCShell (JavaScript objects)
19:41:25 - Reftest
19:41:26 - Mochitest
19:41:30 - UI automation frameworks
19:41:34 - Marionette
19:42:20 clint: Mochitest tests DOM stuff
19:43:01 clint: new UI automation framework: Marionette
19:43:20 clint: reftest drill-down
19:43:56 slide: "Reftest: style and layout visual comparison testing"
19:44:09 Reference: "This is bold" (the slide's markup did not survive the log)
19:44:18 Test: "This is bold" (same rendering, produced with different markup)
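(The slide's actual markup was not captured; a minimal sketch of such a test/reference pair, with the exact elements assumed for illustration:)

  <!-- test.html: produces bold text via a CSS declaration -->
  <!DOCTYPE html>
  <title>reftest example (test)</title>
  <p style="font-weight: bold">This is bold</p>

  <!-- ref.html: produces the identical rendering via the <b> element;
       the harness screenshots both pages and compares pixel by pixel -->
  <!DOCTYPE html>
  <title>reftest example (reference)</title>
  <p><b>This is bold</b></p>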
19:44:49 clint: The test and the reference create the same rendering in different ways.
19:44:58 clint: Then we take screenshots and compare them pixel by pixel.
19:45:22 clint: Mochitest is an HTML file with some JavaScript in it.
19:45:30 clint: One of the libraries it pulls in is the SimpleTest library.
19:45:47 clint: It has the normal asserts: ok, is, stuff to control whether it is asynchronous or not
19:46:03 clint: This other file here (in this example) turns off the geolocation security prompts
19:46:18 clint shows a geolocation test
19:46:38 ^ http://mxr.mozilla.org/mozilla-central/source/dom/tests/mochitest/geolocation/test_allowWatch.html
19:46:55 plh: How does this route around the security checks?
19:47:03 clint: uses an add-on
19:47:28 clint: has a special powers API
19:48:23 "Marionette: Driving Gecko into the future"
19:48:40 This is a mechanism we can use to drive any Gecko-based application, either by UI or by inserting script actions into its various script contexts.
19:48:43 How it works:
19:48:46 1. Socket opened from inside Gecko
19:48:54 2. Connect to the socket from the test harness, either local or remote
19:49:00 3. Send JSON protocol to it
19:49:07 4. Translates the JSON protocol into browser actions
19:49:09 (uses the WebDriver JSON protocol streamed over sockets directly)
19:49:13 5. Send results back to the harness in JSON
19:49:21 wiki page: https://wiki.mozilla.org/Auto-tools/Projects/Marionette
19:49:29 (WIP)
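(For illustration, the transport described above is a raw socket into Gecko carrying JSON both ways, with each message framed as "length:json". The port below is Marionette's usual one, but the command name and payload are hypothetical:)

  // Minimal Node.js client sketch for a Marionette-style JSON socket.
  const net = require('net');

  const sock = net.connect(2828, 'localhost', () => {
    const cmd = JSON.stringify({
      name: 'executeScript',               // hypothetical command name
      parameters: {script: 'return 1+1;'}  // hypothetical payload
    });
    sock.write(cmd.length + ':' + cmd);    // length-prefixed JSON frame
  });

  sock.on('data', (buf) => {
    console.log('reply from Gecko:', buf.toString());  // JSON result
    sock.end();
  });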
19:50:35 clint: We run all of these tests on every checkin, on every tree we build on.
19:50:46 clint: It goes into a dashboard
19:51:01 slide: shows screenshot of TinderboxPushLog
19:52:15 wilhelm: Can we steal your Mochitests? What do we need to do to do so?
19:52:23 clint: Check them out of the tree and see how well they run in Opera
19:52:37 clint: Some of the stuff we did, e.g. the special powers extension,
19:52:53 clint: but it's now a specific API (it used to be scattered randomly throughout tests)
19:53:08 clint: If you had something similar and named it specialpowers, then you could use that to get into your secure system
19:53:12 clint: So it should be possible.
19:53:26 clint: A lot of tests we have in the tree are completely agnostic; they don't do anything special at all, should work today
19:53:35 mochitests are at http://hg.mozilla.org/mozilla-central/file/tip/testing/mochitest
19:53:40 wilhelm: Are there plans to release these tests to the Geolocation WG?
19:53:48 clint: I think they already did. The guy who wrote the tests is in that WG
19:54:07 kk: ... they're hard-coded to use the Google service. If you don't use it, they don't run...
19:54:11 kk: Not too many though
19:55:49 some discussion of sharing tests
19:56:02 Alan: I think WebKit is using some Mozilla reftests, but not using them as reftests
19:56:24 kk: I'm fine w/ reftests. But of course they won't work for everything.
19:56:43 kk: The CSS tests we wrote are self-describing.
19:56:56 Alan: do you have automation?
19:56:59 kk: Yes
19:57:34 rakesh: Do you run the tests every day?
19:57:39 clint: Every checkin
19:57:46 clint: Different trees run different numbers of tests.
19:58:06 https://tbpl.mozilla.org/
19:59:02 clint: Our goal is to have test results back within 2 hours. Right now we're averaging 2.5 hours
19:59:44 fantasai: You're responsible for watching the tree and backing out if you broke something.
20:00:22 discussion of test coverage
20:01:38 discussion of subsetting tests during development
20:02:14 wilhelm: How much noise do you have?
20:02:21 clint: Don't know about false positives
20:02:37 clint: Probably not many; once we find one, we check for that pattern elsewhere
20:03:01 Orange Factor, for tracking failures: http://brasstacks.mozilla.com/orangefactor/
20:03:05 clint: The thing we really have is intermittent failures
20:03:15 clint: We're trying really, really hard to bring that down
20:04:00 clint: It used to be that on every checkin you'd get, on average, 8 intermittent failures
20:04:06 clint: we pushed it down to 2
20:04:11 clint: And then we added the Android tests
20:04:21 clint: trying to bring it down again
20:04:32 duane: Can I instrument Marionette today in FF7?
20:04:43 clint: No, the code we're depending on is landing on Nightly currently
20:04:49 clint: Released probably... May?
20:04:59 clint: Depends on work done by the Developer Tools group
20:05:08 clint: They have a remote debugging protocol they're implementing
20:05:26 clint: Will be really nice; we decided this would be great to piggyback on. Don't need two sockets in lower-level Gecko.
20:05:33 clint: So it won't be available until that's released.
20:05:54 clint: Currently in a project repo... lands in Nightly in ~2.5 weeks
20:06:14 plh: Marionette is only for Fennec, not for the desktop version?
20:06:27 clint: For Fennec right now. Planning to go backwards and use it for desktop as well.
20:06:33 clint: My goal is to move all our infrastructure towards that
20:08:14 kk asks about reducing orange
20:08:42 clint: It's mostly a one-by-one effort of fixing the tests
20:09:21 interesting comment about avoiding using setTimeout in tests
20:09:49 kk: Are you going to take Mochitests into W3C? Anything preventing you?
20:10:10 clint: Nothing right now. We'd have to clean them up and make them cross-browser. Good for everyone, not opposed; just a matter of finding people and time
20:10:49 jgraham: there's a bug on making testharness.js look like Mochitest to Mozilla
20:11:47 topic: Testing Opera
20:12:10 wilhelm: I'll say a few words about testing at Opera
20:13:00 jgraham: We have a mainline, which is supposedly always stable, and then when we're developing a feature, it gets branched, and at some point tests start passing (that's the yellow, b/c out of sync with mainline), and then we merge and that becomes mainline
20:13:08 diagram shows mainline with six green dots going forward
20:13:16 branch goes off, two red dots, one yellow
20:13:22 arrow from mainline to green dot on feature branch
20:13:28 The wiki page we (Mozilla) wrote that details our "lessons learned" from fixing intermittently failing tests is here: https://developer.mozilla.org/en/QA/Avoiding_intermittent_oranges
20:13:29 arrow from green dot back to green dot on mainline
20:13:56 jgraham: Our setup's a bit different
20:14:15 jgraham: All the tests are in Subversion, in their own repository that's separate from the code. It's just a normal webserver: Apache, PHP
20:14:29 jgraham: When you ask for tests to be run, they get assigned from the server and we send them out to a couple hundred virtual machines
20:14:36 jgraham: not quite MSFT's setup
20:14:42 jgraham: And then we store every result of every test
20:14:57 jgraham: I think you just store whether all the tests passed...
20:15:03 jgraham: ... we store: in this build, this test passed. We have a huge database of this information
20:15:16 jgraham: Theoretically we can delete stuff, but we store everything.
20:15:32 jgraham: In a mainline build from yesterday, we ran a quarter of a million tests
20:15:50 jgraham: That's not a quarter million files -- it's 60,000 files, some of which produce multiple results
20:16:03 jgraham: e.g. some tests from HTML5 testing in W3C; one file might produce 10,000 results
20:16:22 jgraham: Typically it's a JS thing, and it just runs a bunch of code and at the end it has some results
20:16:27 jgraham: Dumps them to the browser in some way
20:16:37 jgraham: The way we do that right now is pretty stupid, so I won't talk about it
20:16:54 slide: Visual tests, JS tests, Unit tests, Watir tests, Manual tests :(
20:17:01 jgraham: The system was designed 7 years ago or so
20:17:13 jgraham: For visual tests, you just take a screenshot, and then we store the screenshot.
20:17:22 jgraham: Someone manually marks whether that screenshot was a pass or a fail.
20:17:35 jgraham: Don't do that. You have to do it once per test, and then once any time anything changes very slightly
20:17:55 jgraham: e.g. introduce anti-aliasing, and you have to re-annotate all tests
20:18:02 jgraham: this format is deprecated
20:18:17 wilhelm: We have 20,000 of these tests on 3 different Opera configurations...
20:18:41 wilhelm: We want to kill these tests and use reftests instead
20:18:49 jgraham: Oh, reftests should be on that list too
20:19:04 jgraham: Recently we implemented reftests, and we're actively trying to move tests to reftests.
20:19:22 jgraham: You can't test everything with a reftest, but when you can, it's much better
20:19:40 Alan: Do you keep track of when the reference file bitmap changes?
20:20:31 Alan: What if both the reference and the test change identically, such that the test should fail but doesn't?
20:21:00 plinss: In the CSSWG, when we have a fragile reference, we have multiple references that use different techniques
20:21:25 jgraham: We have a very lightweight framework we used to use for JS tests. It only allowed one test per page.
20:21:46 jgraham: Easy to use, but required a lot of convoluted logic for each pass/fail result.
20:21:51 jgraham: For new test suites, we're using testharness.js
20:22:02 jgraham: similar to Mozilla's Mochitest
20:22:14 jgraham: Unit tests are C++-level things, not worth talking about here
20:22:23 jgraham: When things need automation, we use Watir -- discussed this morning
20:22:31 jgraham: When all else fails, we have manual tests
20:22:38 wilhelm: Notice that the monkey looks really unhappy
20:22:58 jgraham: For the core of Opera, we schedule a test day and just run tests
20:23:05 plh: How many manual tests do you have?
20:23:15 wilhelm: around 2000 before, less now...
20:23:25 wilhelm: Probably spend about a man-year on manual tests per year
20:23:43 wilhelm: I'll say some things about challenges we have, things we need to take into account when writing tests internally and for W3C
20:23:50 wilhelm: The first thing is device independence
20:24:10 wilhelm: We run 3 different configurations of Opera: desktop profile, smartphone profile, and TV profile
20:24:26 wilhelm: Almost every time someone requests a build, it will be tested on those three profiles
20:24:55 wilhelm: We notice that if you have a static timeout in your test, e.g. wait 2s before checking the result, that will break on the stupid profile with low resources
20:25:24 wilhelm: On some platforms we automatically double or triple it, and we hope it works, but it's not really a good solution
20:25:49 jgraham: How do you deal with ... ?
20:26:07 clint: we time out our tests after a set time period and mark them as failed
20:26:27 wilhelm: The main lesson is: don't depend on device size or speed -- the test will randomly fail.
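(The static-timeout trap wilhelm and clint just described, and the usual fix: poll or listen for the condition instead of sleeping a fixed 2 seconds, and let a harness-level timeout catch genuine failures. A generic sketch; the function names are invented:)

  // Fragile: assumes every device finishes within 2 seconds.
  //   setTimeout(checkResult, 2000);

  // More robust: re-check until the condition holds.
  function waitFor(condition, onReady, interval) {
    (function poll() {
      if (condition()) {
        onReady();
      } else {
        setTimeout(poll, interval || 100);  // re-check; no fixed deadline
      }
    })();
  }

  // usage: waitFor(function() { return resultIsReady(); }, checkResult);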
20:26:39 wilhelm: Which brings me to the next problem: randomness
20:26:54 wilhelm: If you have so many tests and even a small percentage fail randomly, you're going to spend man-years investigating those failures
20:27:26 wilhelm: When we add new configurations, when we steal tests from a source of unknown quality, we spend many man-years stamping out randomness in the tests
20:27:35 wilhelm: The more complex the test, the more likely it is to randomly fail
20:27:45 wilhelm: The simplest tests are JS.
20:27:54 wilhelm: For tests imported from random sources, it can be very bad
20:28:02 wilhelm: Then come visual tests
20:28:12 wilhelm: Sometimes complexity is needed, but if we can simplify, we will do that
20:28:31 wilhelm: We have a quarantine system: run a test 200 times on test machines first to make sure it's good
20:28:38 wilhelm: Still, sometimes things slip through.
20:28:45 wilhelm: We steal your tests. Thank you.
20:28:54 slide: jQuery, Opera, Chrome, Microsoft, Mozilla, W3C
20:29:11 wilhelm: Keeping in sync with the origin of the test is difficult
20:29:25 wilhelm: When someone updates a test elsewhere, we don't automatically get that
20:29:42 wilhelm: When we muck about with the test to get it to work on our system, we have to maintain patches
20:29:53 wilhelm: If we fix bad tests, sometimes it's easy to contribute back, but sometimes not
20:30:10 wilhelm: Automating tests to use our Watir scripts can also become a problem.
20:30:16 wilhelm: Our current approach is not usable
20:30:24 wilhelm: we need a better way for us all to keep in sync
20:30:47 kk: This is why we have submitted and approved folders
20:31:02 jgraham: The problem from our POV is really... part of it is a version control problem on our end
20:31:11 jgraham: We don't have a good way to keep our patches separate from upstream changes
20:31:29 jgraham: If we have W3C tests, and we pull a new version, we don't have a way to say "these are the bits we changed to make it work on our version"
20:31:43 jgraham: ... reporting and script file separate
20:32:11 jgraham: if we pull some tests from Mozilla, say, and they're JS engine tests and they update them, if we try and merge them... someone has to work out how to do that by hand. It's kind of a nightmare.
20:32:19 wilhelm: Last thing about randomness, especially imported tests
20:32:27 wilhelm: Some tests rely on external resources.
20:32:31 wilhelm: Great when we only had a few tests
20:32:39 wilhelm: But now it's a problem. Servers go down, etc.
20:32:52 wilhelm: The conclusion there is: don't do that. :)
20:32:55 wilhelm: That's it!
20:33:33 jhammel: Wrt upstream tests: standardizing on formats and standardizing on process
20:33:41 wilhelm: We set up time at 3:15 today to discuss this exact issue
20:33:53 mc: You say you have to fix tests to work on your product.
20:34:07 mc: The question is how you separate fixing a test to not be random vs. making it work on a particular product
20:34:20 jgraham: When we pull in tests, we try not to change anything to do with the test.
20:34:32 jgraham: We don't require the tests to pass to be in our system.
20:34:41 jgraham: The thing we need to change is: can this test report back to our servers.
20:34:47 jgraham: But external tests are usually not designed that way.
20:35:00 wilhelm: I think the testharness.js approach is good, because those are separated.
20:35:27 That is the end of Opera's presentation
20:36:02 The next person up is Peter from HP with a CSS WG update (10 minutes)
20:36:25 Then a discussion on rendering tests for about 1 hour
20:36:34 topic: Testing in the CSS WG
20:37:44 test.csswg.org
20:37:55 ... has lots of information on CSS WG testing
20:38:23 Tests are 'built' from XML into multiple formats - HTML, XHTML, etc.
20:39:46 The test harness is a wrapper around the tests, which are loaded in an iframe
20:40:07 It serves first the tests that have the fewest results so far
20:41:11 The harness has a filter for spec section, etc.
20:41:55 The harness has a metadata description for each of the tests
20:42:01 test format requirements: http://wiki.csswg.org/test/css2.1/format
20:44:04 The harness also has test results that can be shown for each of the browser/engine versions
20:46:26 The build process has requirements that will be improved over time - metadata, reftest, title, etc.
20:47:06 Adding metadata helps the review process, though most submitters don't like to add this data
20:48:00 Multiple refs for the same test exist, and negative reftests as well
20:49:12 You can have two reftests if the spec allows two different results - for example, margin collapsing
20:49:55 If a reftest can't be used, then in some cases a self-describing test works
20:50:25 http://test.csswg.org/annotations/css21/
20:50:43 Spec annotations are used that map back to the annotated spec
20:51:54 The annotated spec has total tests and results for each section of the spec
20:52:03 Now on to the test review system
20:52:32 http://test.csswg.org/shephard/
20:53:20 Very tight coupling to the CSS test metadata
20:54:09 Tracks history and other information about a test case
20:55:31 jgraham: is this tied to the test file?
20:55:50 peter: no, it's possible to have this information in another file
20:56:28 jgraham: can this handle the case where multiple files are used to create a lot of tests?
20:57:09 peter: yes, we have the same issue for the media query test cases
20:57:36 Wilhelm: So does CSS still use visual non-ref tests?
20:58:05 fantasai: for CSS3 we require reftests, so no
21:01:52 peter: The system is built to save time and automate parts
21:02:11 peter: for example, when a test is approved it is moved from submitted to approved
21:02:47 Michael: Does the system have access control checks for approval?
21:02:50 peter: yes
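(Pulling together the format requirements linked above, a skeleton CSS WG test might look like the following; the property, spec anchor, author, and reference filename are examples, not taken from the log:)

  <!DOCTYPE html>
  <html>
   <head>
    <title>CSS Test: margin collapsing between siblings</title>
    <link rel="author" title="Example Author" href="mailto:author@example.org">
    <link rel="help" href="http://www.w3.org/TR/CSS21/box.html#collapsing-margins">
    <link rel="match" href="margin-collapse-001-ref.html">
    <meta name="flags" content="">
    <meta name="assert" content="Adjoining vertical margins of sibling blocks collapse.">
   </head>
   <body>
    <p>Test passes if there is a single green square below.</p>
    <div style="width: 100px; height: 100px; background: green"></div>
   </body>
  </html>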
21:03:28 topic: Testing Chrome
21:03:36 kk: works on the Chrome automation team
21:03:52 kk: not an automation group in the same sense as Mozilla
21:03:56 ... Chrome depends on WebKit
21:04:08 (kk here is Ken_Kania, not krisk)
21:04:09 ... WebKit layout tests, pixel-based tests
21:04:28 kk: DOM dump-tree tests
21:04:44 kk: doesn't have a lot of insight into the specifics of the WebKit tests; focuses mainly on the Chrome browser
21:04:50 kk: a couple of layers of testing
21:05:02 kk: the lowest layer is the C++ browser tests
21:05:34 kk: probably more than other browsers do. Special builds of Chrome which will run C++ in the UI thread
21:05:41 kk: relatively low-level, though
21:05:56 kk: beyond those, there is the UI test framework, based on the automation proxy (AP)
21:06:06 kk: the AP is pretty old, but it is an IPC mechanism
21:06:11 kk: very much internal-facing
21:06:23 ... those tests are still fairly low-level, despite being called UI tests
21:06:42 kk: higher than that, Ken's team works on something called the Chrome bot
21:06:50 kk: runs on real and virtual machines
21:07:19 kk: keeps a cache of a large number of sites. Often used for crash testing. Also includes tests that perform random UI actions
21:07:30 kk: a little bit smarter than pure random, but that's the gist
21:08:08 kk: QA-level tests: tests that are done by manual testers. They piggyback off the UI test automation framework. Things like creating bookmarks, installing extensions, etc.
21:08:34 kk: break down manual testing into parts. First, app compat: push a new release of Chrome and it continues to work. Second, testing Chrome at the UI level
21:08:40 ... most of the UI is "based on the web"
21:08:49 ... for the Chrome-specific native widgets there are manual tests
21:08:59 kk: app compat depends on WebDriver
21:09:16 kk: lots of Google teams depend on WebDriver to verify that sites work.
21:09:50 kk: I guess that at a high level, the testing strategy tends to be developer-focused.
21:10:06 kk: devs should write the tests in whatever tool and harness is most expedient for their purpose
21:10:28 kk: we piggyback a lot on the fact that Chrome does rapid releases. 4 channels are released to users (canary, dev, beta, stable)
21:10:37 kk: different release schedules
21:10:47 kk: depend a lot on user feedback from the canaries
21:11:10 kk: that's the gist of it
21:11:17 tab: sounds good to me
21:11:30 jhammel: does Chrome do performance testing?
21:11:45 kk: we do, using the AP and the UI testing framework mentioned earlier
21:11:50 http://build.chromium.org, to view the tests that have been run
21:12:21 plh: do you run the jQuery tests?
21:12:37 kk: not really. The WebKit guys might, and we pick that up
21:13:00 krisk_: do you create tests and feed them back?
21:13:08 TabAtkins: we don't do much, but we do some
21:13:16 krisk_: is that because it doesn't fit with the systems?
21:13:35 TabAtkins: the ways we write and run tests aren't really compatible with the existing W3C systems.
21:13:43 TabAtkins: would like to change that!
21:14:14 TabAtkins: some tests are HTML/JS, which might be used where possible. Doesn't happen that regularly
21:14:23 krisk_: how do you know that you're interoperable?
21:14:42 TabAtkins: in terms of the WebKit stuff, it's a case of testing being done by different browser vendors
21:15:10 kk: lots of C++ tests that are specific to Chrome
21:15:14 krisk_: V8?
21:15:24 TabAtkins + kk: the V8 team lives in Europe. Who knows?
21:15:53 wilhelm: Opera also has legacy stuff. New tests are written in a way that (in theory) is usable outside. Can Chrome do the same thing?
21:16:09 TabAtkins: will agitate for that. I'm involved in spec writing rather than active dev, so it might be tricky
21:16:28 wilhelm: This is a great forum to raise those issues. Opera happy to share with Chrome if Chrome does the same :)
21:16:41 krisk_: does Chrome try to pass a bunch of the W3C test suites?
21:16:59 TabAtkins: yes. Some of them might be integrated into the Chromium waterfall. Some of them might be run by hand
21:17:21 ??: does anyone know about WebKit testing?
21:17:38 TabAtkins: the people I'd like to ask aren't around
21:18:18 WebKit does seem to take in test suites from Mozilla. They're running against a bitmap, which is different from the Mozilla rendering
21:18:33 TabAtkins: we don't have a good infrastructure for reftests
21:18:51 TabAtkins: the test infrastructure people _do_ want to fix that
21:19:10 TabAtkins: every time a new port is added to WebKit, there are more pixel tests. Provides pressure to do better
21:19:26 plh: any other questions?
21:19:37 15 minute break coming up
21:20:55 Info available from WebKit: https://trac.webkit.org/wiki
21:21:05 also see http://www.webkit.org/quality/testing.html
21:35:29 Next agenda item: jgraham talking about testharness.js
21:35:38 scribe: krisk_
21:36:02 topic: testharness.js
21:36:21 scribenick: fantasai
21:36:30 jgraham: testharness.js is something I wrote to run tests.
21:36:36 jgraham: It runs JS tests specifically
21:36:48 jgraham: It's a bit like Mochitest, or QUnit which jQuery uses, or various things
21:36:54 --> http://w3c-test.org/resources/testharness.js testharness.js
21:36:56 jgraham: Every JS framework has invented its own test harness
21:37:01 jgraham: This has slightly different design goals
21:37:15 jgraham: The overarching goal is that it's something we can use to test low-level specs like HTML and DOM
21:37:23 jgraham: So it can't rely on lots of HTML and DOM :)
21:37:47 jgraham: One design goal was to provide an API for writing readable and consistent tests in JS
21:38:00 jgraham: Our previous harness at Opera, as I mentioned, didn't result in very readable tests
21:38:13 jgraham: The other is to support testing the entire DOM level of behavior
21:38:24 jgraham: There are 2 test types: asynchronous tests and synchronous tests
21:38:32 jgraham: the second is purely syntactic sugar
21:38:52 jgraham: Another design goal was to allow the possibility of a test having multiple assertions, all of which have to be true for the test to pass
21:39:04 jgraham: a typical example might be checking that some node has a set of children.
21:39:18 jgraham: You might want to first test for any children before testing that the 4th child is a particular element
21:39:41 jgraham: Multiple tests per file was a requirement; learning from Opera's one-per-file, which was painful for test writers and discouraged many tests
21:39:50 jgraham: ... it runs everything in try-catch blocks
21:39:57 jgraham: One feature of that is that every bit of the test is like a function, basically
21:40:03 jgraham: it tries to handle some housekeeping.
21:40:16 jgraham: if you have 1000 tests in a file, it's nice if you can time out those tests individually
21:40:31 jgraham: Uses setTimeout(); you can override that if you want, e.g. if running on slow hardware
21:40:44 jgraham: and a design goal was easy integration with browsers' existing test systems
21:40:55 jgraham: Should be easy to use on top of Mochitest or whatever you use for reporting results
21:41:01 jgraham: the next thing I thought I'd do is go through creating a test.
21:43:19 jgraham's text editor:
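(The editor contents were not captured in the log; below is a minimal reconstruction of the kind of async test jgraham walks through, with the event and assertion invented for illustration:)

  <!DOCTYPE html>
  <title>example async test</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <div id="log"></div>
  <script>
  var t = async_test("document fires DOMContentLoaded");
  t.step(function() {
    // first step: attach a listener; step_func is the convenience wrapper
    // that runs the callback as a further step inside a try-catch
    document.addEventListener("DOMContentLoaded", t.step_func(function(e) {
      assert_equals(e.type, "DOMContentLoaded", "event type");
      t.done();  // everything we want done is done
    }), false);
  });
  </script>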
21:45:34 jgraham: Each test has a number of steps, and each step is a function that gets called
21:45:58 jgraham: It gets called inside a try-catch block, and we can check if the test failed. We don't put anything in as top-level code.
21:46:37 jgraham: Here it's adding an event listener before the second step
21:46:57 jgraham: When it gets called, it'll call this other function here, which will run this other step, which is another function. It can get a bit verbose.
21:47:20 jgraham: There's a convenience method that will make this easier... all documented in testharness.js
21:47:41 jgraham: A simple assert_equals() with the value we get, the value we expect, and then optionally a string that describes what it is you're asserting.
21:47:53 jgraham: At this point everything we want done is done, so we say t.done();
21:48:19 jgraham: If you load this in a browser, because we have div#log, it will show whether it passes or fails, and which assert failed
21:48:25 --> http://w3c-test.org/webapps/ElementTraversal/tests/submissions/W3C/Element-childElementCount.html Example of testharness.js
21:48:35 jgraham: That's all
21:49:07 jj: Is there an id on the steps, so that you can say you failed step 4 of test foo?
21:49:18 jgraham: If there's demand, there could be a second argument there.
21:49:37 jj: would be nice to know where it failed so I can set a breakpoint there
21:49:51 jgraham: If you get a huge number of tests per file, it's usually auto-generated
21:50:14 jgraham: if it's failing in an assert, then it'll tell you which assert failed
21:50:51 plh shows his example
21:51:13 plh: everything shown here is generated by testharness.js
21:51:53 jgraham: There's a failure in this, and it seems everyone fails it.
21:51:58 plh: Bug in testharness.js
21:52:19 jj: What's the easiest way to debug a test? Is the error in the test, in testharness.js, or in the browsers?
21:52:31 jgraham: There are various types of assertions. They usually correspond to WebIDL
21:52:37 jgraham: But what's in WebIDL isn't always the same
21:52:52 kk: It's pretty well written, only 700 lines or so
21:53:51 clint: If it's synchronous, you don't have to do t.step()
21:54:00 jgraham: A test that is synchronous implicitly creates a step
21:54:17 wilhelm: Opera currently uses this tool for all the new tests that we write. Can others use this?
21:54:20 clint: Yeah, I think so
21:54:28 kk: There used to be some NUnit or something that W3C had
21:54:34 kk: It was in IE, but some browsers couldn't run it.
21:54:39 kk: Very complicated
21:55:22 [server problems]
21:58:53 plinss: Are tests grouped by section into files?
21:59:23 jgraham: In this case, it checks the reflection section, plus the section of each part of the spec that defines a reflected attribute
22:00:39 wilhelm: plh wanted to talk about the test harness; fantasai wanted to talk about the syncing problem
22:00:50 topic: How should we organize public test suites so that they are as easy as possible to contribute to and reuse?
22:03:34 http://w3c-test.org/framework/
22:03:52 MikeSmith: This is an instance of the framework peter demoed
22:04:15 Mike: I'm going to show you what has been added here to make it easier for test suite maintainers to add data to the system.
22:04:25 Mike: There's this area called Maintainer Login
22:04:42 Mike: It'll give you an http_auth dialog, which authenticates against W3C's user database
22:04:52 Mike: Email me if you want access to the system
22:05:08 Mike: Once you go in there you'll see 2 options: add metadata, change metadata
22:05:17 Mike: Can add a specification
22:05:33 Mike: one early piece of feedback I got was that people have tests they want to run that are not associated with a spec
22:05:47 Mike: So in this instance of the system, it's not a requirement to have a spec for your test suite
22:06:01 Mike: You can give it an arbitrary ID, as long as it's not a duplicate
22:06:05 Mike: Title of the spec
22:06:08 Mike: URL for the spec
22:06:18 Mike: It expects you'll point it to a single-page version of the spec
22:06:31 Mike: If you have a multi-page spec, don't point it at the TOC. You need the full version of the spec.
22:06:39 Mike: Could change later, but it was initially set up this way 'cuz it's easier
22:06:45 Mike: This will get added to the list here
22:07:13 Mike: The next thing you can do is needed if you want to do what Peter was demoing earlier: associating testcases with specific sections of the spec -- or specific IDs in the spec
22:07:22 Mike: Structured around the idea that you put your IDs per section
22:07:38 Mike: But some WGs, like the WOFF WG, are putting assertions at the sentence level
22:07:53 Mike: They don't actually have section titles, so we needed to accommodate that too
22:08:02 Peter: Alan and fantasai did some work on that, too.
22:08:14 Peter: The Shepherd tool will be able to parse the spec to find test anchors
22:08:26 Peter: and then it can report testing coverage of the spec, so this is something we will automate
22:08:59 Alan: What fantasai and I worked out was based on the WOFF work, but will be simpler for spec editors. A bit harder to automate, though
22:09:08 Mike: This part adds spec metadata.
22:09:16 Mike: Instead of a form to fill out, it lists existing specs in the system
22:09:27 Mike: once you go here, if there's already data in the system, it will show you the data already in the system
22:09:35 Mike: otherwise it'll show you generated data
22:10:06 Mike: This parses the spec and pulls out the headings. If it looks ok, you press submit
22:10:13 Mike: It'll put these section titles into the database.
22:10:25 Mike: If you have IDs below the section title level, then you'll have to use a different way to get them into the DB
22:10:31 Mike: You might have to get me to do it for now :)
22:10:45 Mike: Those steps are optional right now.
22:10:53 Mike: What is necessary is going in and giving info about the test suite itself.
22:10:57 Mike: you can give it an arbitrary ID
22:11:05 Mike: Title, longer description
22:11:12 Mike: to explain the test suite better
22:11:33 Mike: base URL of where your test suites are stored
22:11:47 Mike: A difference from CSS is that that one requires format subdirectories
22:11:50 plinss: it's optional
22:12:05 Mike: This one doesn't expect subdirectories. It expects all tests in this one directory
22:12:16 Mike: If you have separate subdirectories...
22:12:32 Mike: Need to make different test suites or ...
22:12:39 Mike: Simplest case: you have all tests in one directory
22:12:51 plinss: The code's actually a lot more flexible wrt formats. We'll talk offline.
22:13:11 MikeSmith: Then you have contact information for someone who can answer questions about the test suite
22:13:16 MikeSmith: Then you indicate the format of the test suite
22:13:56 MikeSmith: Then you have a list of flags; you can select which ones indicate optional tests
22:14:09 MikeSmith: There are ways to add flags to the system
22:14:16 MikeSmith: No UI for it, so contact me
22:14:23 MikeSmith: The last thing you then do is upload a manifest file
22:14:28 MikeSmith: You have to have a test suite
22:14:33 MikeSmith: You select a test suite
22:14:48 MikeSmith: and then what I have it do right now is that you need to point it to the URL for a manifest file, and it'll grab that and read it in
22:14:59 MikeSmith: Right now there are two forms of manifest file that it will recognize
22:15:27 MikeSmith: the second one here is just a TSV that expects path/filename, references, flags, links, assertions
22:15:37 MikeSmith: the links are the spec links
22:15:56 MikeSmith: The other big change is, I was talking with some people, e.g. annevk and Ms2ger
22:16:09 MikeSmith: the format they're using is just listing the filenames
22:17:14 MikeSmith: it marks support files as support files
22:17:32 kk: the Mozilla guys wanted to know what files were needed to pull to run a test case
22:17:58 plinss: In the CSSWG, the large manifest file with metadata -- that gets built by the build system
22:18:09 MikeSmith: This form expects the full filename, not just the extensionless filename
22:18:15 MikeSmith: Because that's what they had
22:18:25 MikeSmith: Once you have that, you should be able to get your test cases into the test database
22:18:31 MikeSmith: and they'll show up on the welcome page
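(A hypothetical row for the TSV manifest form MikeSmith describes, with tab-separated columns of path/filename, references, flags, links, assertion; the filenames, flag, spec anchor, and assertion text are invented:)

  margin-001.html	margin-001-ref.html	ahem	box.html#collapsing-margins	Adjoining vertical margins collapse.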
22:20:58 MikeSmith: I think it's in good enough shape that we could port it back upstream
22:21:08 plinss: This system and the Shepherd share a lot of the same base code
22:21:26 plinss: There are lots of things from the Shepherd system I was going to port back into this system, and then pull your stuff in too
22:21:51 plinss: Mike also has code that ties into the testharness.js code, and will automatically submit results from that
22:22:08 MikeSmith: If you go to enter data, it gives you some choices about whether you want to run the full test suite or not
22:22:32 MikeSmith: There's a button here that will pull automatic results where possible
22:22:54 MikeSmith: Be careful, this will submit the data publicly!
22:23:18 jgraham: Not saying it's a bad idea, but from our POV, we're not going to use it offline.
22:23:31 (Brian was talking about trying out the system privately offline)
22:24:10 plinss: The system tracks who's submitting the data. By login if you're logged in, by IP if not
22:24:23 Brian: Privacy is useful
22:24:34 plinss: the goal is to pull data from as many sources as possible
22:25:01 wilhelm: fantasai wanted to talk about keeping things in sync
22:26:06 Is someone scribing? I can't keep up on the iPad
22:26:16 This is the writeup of what we are planning to set up at Mozilla for the CSS tests specifically: https://wiki.mozilla.org/Auto-tools/Projects/W3C_CSS_Test_Mirroring
22:28:48 Mozilla has a way to move tests from mozilla -> w3c -> mozilla
22:29:39 wilhelm: how will this cope with local patches?
22:29:45 fantasai: The master copy only lives in one place...
22:30:02 jgraham: probably not a problem with the css tests
22:30:03 fantasai: approved is the master in w3c
22:30:28 dobrien has joined #testing
22:30:30 fantasai: submitted is the master for submissions
22:30:41 jgraham: Opera is thinking of having a mirror of the w3c master, which stays intact; our checkout from that master will have the local patches, and when we pull we'll rebase our patches atop the w3c master (scribe: sketch below)
22:30:59 this should be possible now that hg is in use on both the w3c side and our (Opera) side
22:31:11 fantasai: we'll probably have to do something similar
22:31:16 wilhelm: how does this handle local patches?
22:31:33 jhammel: is there a technical limitation that prevents people from editing the w3c tests?
22:31:36 fantasai: no
22:31:44 fantasai: this is only for css, which doesn't seem to have this problem
22:31:51 jgraham: probably make it a commit hook
22:31:57 ctalbert_: agreed
22:32:39 peter: if someone pushes to the approved directory without actually being approved then the system just automatically denies them
22:32:48 that may be incorrect ^ (scribe error)
22:33:28 wilhelm: might be an idea to split test suites down to lower granularity levels so that you can have test suites with different levels of maturity
22:33:37 jgraham: don't think that would make a difference tbh
22:34:00 peter: our repo would keep all the data from all the suites in the repo so that our build system could build any version of them from any suite
22:34:17 wilhelm: are there other things we can do to make it easier to contribute test suites?
22:34:44 fantasai: one problem on the mozilla side - there's no place to put tests that should go to the w3c - we depend on a manual process to sort out which should be submitted, and then it is done later
22:34:55 fantasai: these tests just sit in a random place and are forgotten
22:35:11 fantasai: once we have a directory that goes to w3c and we tell the reviewers, then it will help quite a bit.
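(scribe note: a minimal sketch of the mirror-and-rebase flow jgraham described at 22:30:41, assuming Mercurial's bundled rebase extension is enabled in hgrc; the repository URL is hypothetical, not Opera's actual setup)
  # one-time: clone the w3c master as a local mirror
  hg clone http://w3c-test.example.org/hg/tests/ local-tests
  cd local-tests
  # ...local patches are committed here on top of the upstream history...
  # recurring: fetch new w3c changesets and replay the local patches on top
  hg pull --rebase   # --rebase is provided by the rebase extension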
22:36:08 fantasai: the basic idea is to make it obvious what developers need to do with a test to indicate that it is appropriate and ready for w3c; then it should "just happen"
22:36:46 jgraham: we have a similar problem. it's hard to surface those tests and bugfixes without a policy and a place for those tests
22:37:31 peter: if we have a standard format among the test writers then it will be easier to help developers upload the tests to the w3c. If the developers have to convert the tests it's too difficult and people won't expend the effort to make it happen
22:38:09 krisk_: sometimes it depends on the editors as to when they allow tests in, and you find that tests sometimes lag the spec by quite a bit
22:38:58 fantasai: we found that with the css tests - the person writing the spec is often nominally tasked with also writing the test suite, but because the skill sets are different and the spec editor is usually swamped, the tests get neglected
22:39:16 fantasai: we really need a dedicated person to manage the tests and testing effort for each spec
22:39:30 MikeSmith: is there some way to motivate people to do that?
22:39:51 MikeSmith: maybe we should publicly track the testsuite owner?
22:40:12 fantasai: we can do that, but the burden is on getting resources for that, really.
22:40:34 MikeSmith: yeah, the question is how do you encourage the managers to allow their people to spend time on w3c work
22:41:14 MichaelC_SJC: you might be able to convince your company to do that, but we also need to have the working group chairs understand that this needs to happen
22:42:06 jgraham: if we have tests already in an interoperable format then it's pretty easy, but for our existing tests that are in a different format, we aren't going to spend the time to convert them
22:42:33 fantasai: we might just have a place at w3c to take those tests, and just post them publicly and have someone else do the conversion work
22:42:52 jgraham: I suspect that's a widespread problem
22:43:03 q+ to ask how much should there be a "W3C format" vs how much does the W3C framework need to support (nearly) any format?
22:43:40 krisk_: if you get in the habit of submitting stuff as you're doing development, that seems reasonable.
22:44:12 krisk_: keeping things not super complex is a win, and being consistent will pay dividends
22:44:32 fantasai^: Because for Opera it may not be valuable to do the conversion, but e.g. Microsoft might want those tests, and decide that the cost of converting is less than the cost of rewriting tests from scratch, so to them it'll be worth it to do the conversion
22:44:49 fantasai: thanks, I'm not too good at this :/
22:44:58 (scribe note ^)
22:46:10 wilhelm: the more I think of this, the more I realize that facilitating the handover of tests is a full-time job
22:46:46 ack me
22:46:46 MichaelC_SJC, you wanted to ask how much should there be a "W3C format" vs how much does the W3C framework need to support (nearly) any format?
22:46:53 wilhelm: if we could get every browser vendor to commit one person on their team to do this work, that would be good.
22:47:12 fantasai: the problem is that right now, people haven't adopted the w3c formats internally
22:47:20 it will be less work once that happens
22:47:39 it's not w3c's responsibility to convert your tests to the w3c format
22:47:53 fantasai: you can write a conversion script to convert your tests to the w3c format
22:48:07 better to do that than to have w3c accept all the different formats
22:48:45 jgraham: the problem is that many of these harnesses are not built for portability
22:49:10 MichaelC_SJC: the problem with a common format (and I may be wrong) is that you run into things you can't test
22:49:36 jgraham: if we run into that, then maybe we can find some lightweight format for those tests, or maybe we use a different type of harness in that case
22:49:56 scribe: ctalbert has to step out
22:49:58 fantasai: ^
22:50:09 ...
22:50:25 kk: If you can write it with testharness.js, do that. If not, try a reftest; if not, try a self-describing test (scribe: sketch below)
22:50:57 kk: In your case you have the difficulty of needing a screenreader or something
22:50:58 ...
22:51:15 jgraham: If you can get people to contribute in one format, at least you solve the problem once per platform rather than once per test
22:51:24 mc: I can agree with the idea that there's a hierarchy of goodness
22:51:41 mc: The framework should have at least the possibility of hooking in new formats
22:51:44 general agreement
22:52:14 wilhelm: For the Watir cases, we noticed areas where we'd want to add tests for something very obscure and specific. What we've done is add support at a low level in Opera and use an API
22:52:20 wilhelm: Such things could be later added to WebDriver
22:52:43 Alan: For tests where there isn't a w3c version, but browsers have something, is there a list on the w3c site of most-wanted specs that need tests?
22:52:57 fantasai: All of them? :)
22:53:27 Alan: We were talking about poking people, committing people to translating browser tests to w3c tests
22:53:28 bryan has joined #testing
22:53:36 Alan: We would be more successful at getting resources if we have a specific list of things we need
22:53:41 jj: Also the possibility to ask specific people.
22:53:50 jj: Rather than saying: everyone, please submit tests for HTML5
22:53:58 jj: say: can you submit tests for Web Workers?
22:54:10 jj: You need a specific ask to get things done
22:54:49 jj: It might not cause an immediate surge in test submissions, but for me, before moving from outside to inside, the idea of submitting tests seemed impossible. Didn't know where to submit them, figured they'd be rejected, didn't know what a reftest was, etc.
22:54:54 jj: So the process was hard, and we weren't being specific
22:55:01 jj: A better way to get things done is asking
22:55:11 jj: I would like Opera to submit WebWorker tests
22:55:20 wilhelm: Can I get that in writing so I can show it to my manager?
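(scribe note: for readers new to the formats kk ranks at 22:50:25 -- a minimal sketch of a testharness.js script test. The /resources/ paths and the attribute being checked are illustrative, not from the discussion. A reftest, by contrast, is an ordinary page paired with a reference page that must render identically, typically linked with <link rel="match">, and a self-describing test states its pass condition in prose on the page itself.)
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <script>
  // A synchronous test: it passes unless an assertion throws.
  test(function() {
    var div = document.createElement("div");
    div.id = "example";
    // the id IDL attribute should be reflected in the content attribute
    assert_equals(div.getAttribute("id"), "example");
  }, "id is reflected between IDL attribute and content attribute");
  </script>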
22:55:38 Alan: Identify the tests, see who has those tests, then request them
22:56:02 plh: We've been working on the testing framework a little bit, but part of the task is also going out there in the wild, finding tests, and getting them to W3C
22:56:18 plh: We need to get to the point where we have the framework and start asking for tests
22:56:27 Alan: Use the framework to identify areas, since it annotates the spec
22:56:42 jj: We have no idea how much coverage those 47 tests provide -- the number isn't meaningful from a coverage perspective
22:56:53 jj: 1 is better than 0, but maybe 100 is needed, not 47
22:57:13 simonstewart: test coverage is a negative thing. It'll only say what's not covered, not how well the covered areas are tested
22:57:31 jj: Even if you say you have 100% on that normative statement, it still doesn't tell you if you got all the edge cases
22:57:40 jgraham: At the moment for HTML we have nothing, though.
22:58:02 jgraham: We have our tests organized by section in the repo, but it's not explicit
22:58:17 jgraham: Being able to say per normative statement, do we have a test for this, is pretty nice
22:58:26 --> http://www.w3.org/2011/10/timer.html (annoying) timer
22:58:39 jgraham: If you look somewhere, there's an annotation per sentence in the spec showing tests for section X
22:58:48 jgraham: But that's really complicated, because the spec isn't marked up to make that easy
22:59:01 jgraham: and you're testing dozens of disconnected statements
22:59:22 kk: The problem we're struggling with is not how to get perfect coverage. There's a spec, and there's no coverage.
22:59:33 kk: Browsers all have this feature, and they don't work the same. So having some coverage is a good start.
22:59:54 Bryan: If you look at most of the Web APIs near LC or at LC, only 1/3 have tests available
23:01:38 fantasai: set up a process for getting tests from *your* organization to w3c, and *going forward*, you should write w3c-submittable tests *and* submit the tests. Once that is in place, we can go back and convert legacy tests
23:02:17 fantasai: we need to get the webkit people to commit to this
23:02:36 fantasai: you can require that when tests are checked into the repo, they become reftests
23:03:01 fantasai: the plan going forward is to convert to reftests
23:03:33 jgraham: if you're comparing to something bitmap-based, it may take 2x the time, but it will save time going forward
23:03:40 fantasai^: Because then the number of legacy tests that are not w3c-formatted stops growing, and we can work on making that number smaller
23:06:54 topic: Additional Items
23:07:40 example of a test that has to be self-describing: This tests that the blurring algorithm produces results within 5% of a Gaussian blur
23:07:43 http://test.csswg.org/source/contributors/mozilla/submitted/css3-background/box-shadow/box-shadow-blur-definition-001.xht
23:08:06 cion has joined #testing
23:09:00 cion has left #testing
23:09:43 bryan: We developed a number of specs for device APIs
23:10:05 bryan: We recognize these APIs are quite sophisticated, and it'll take some time, but we're continuing the development of these capabilities for web runtimes
23:10:29 bryan: We have a developer program, global ... ecosystem
23:10:41 bryan (from AT&T): wanted very briefly ...
23:10:55 bryan: show you these links to the specs, the APIs, but more importantly the test framework
23:11:03 bryan: The test framework is based on QUnit
23:11:43 bryan: It pulls in a file from a test directory, which has the list of tests associated with this particular API.
23:11:50 bryan: It tests individual JS files in the same directory
23:11:57 bryan: will run them one by one
23:12:08 bryan: This is packaged up as a widget file, which is available for download
23:12:17 bryan: So we can run all the tests, for example, using this widget framework.
23:12:34 bryan shows pie charts of results
23:12:44 bryan: Automatically uploaded and made available to the vendor
23:12:54 plh: Say 1000 tests for core web standards?
23:12:57 bryan: No, for APIs
23:13:24 bryan: What comes with the underlying platform is inherently tested by that community
23:13:30 bryan: We need to cover device variation
23:13:42 bryan: identify things that we reference
23:13:49 bryan: We have individual tests for these, test scripts
23:14:07 bryan: this is more than an acid-level test, but not what we hope to see from W3C in the long run
23:14:20 bryan: We don't want to develop and maintain this level of detail in WAC. We want to leverage W3C test suites
23:15:17 bryan: If you look at the tests, you can see for example the geolocation test suite, which we reference.
23:15:25 bryan: We want to auto-generate the tests as a widget
23:15:37 jj: So if the test suite changes, do you update your widget?
23:16:51 bryan: Our goal is to create frameworks where we can pull in tests and run them in this runtime environment without having to necessarily maintain the tests ourselves
23:16:59 bryan: We would benefit from a common test framework
23:17:02 mouse has joined #testing
23:17:10 bryan: What exactly these tests are is basically just a JS procedure
23:17:44 bryan: We test the existence of methods and call qunit functions for pass/fail; we're not necessarily married to this format, but it was the most common one at the time we developed this. (scribe: sketch below)
23:18:05 bryan: So to summarize, our goal is to have the scalability to support this widget-based ecosystem across dozens of devices across the world
23:18:10 bryan: So we have to have scalability
23:18:20 bryan: and to depend on the core standards as something we don't spend a lot of effort on
23:18:22 mouse has left #testing
23:18:26 bryan: We don't want to duplicate things that eventually come from W3C.
23:18:37 bryan: We'd like to see this developed at W3C so we can directly leverage it.
23:20:13 fantasai comments on how this shows having a few common formats is better than having w3c accept many similarly-capable formats -- it better supports reuse of the tests
23:20:50 rrsagent, make minutes
23:20:50 I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html MichaelC_SJC
23:21:48 Topic: Conclusions and Action Items
23:21:55 1. Vendors commit to running W3C tests
23:22:07 2. Vendors push internally to adopt W3C test formats
23:23:14 plh says W3C should make it easier for vendors to import suites
23:23:27 fantasai: what does that entail?
23:23:52 plh: make guidelines for WGs
23:24:01 jgraham: I feel the problem is more on our side than on the W3C side
23:24:16 wilhelm, jgraham: but of course, using hg instead of cvs is important for tests
23:24:30 wilhelm: W3C should commit resources to get tests from vendors
23:24:41 plh: start with webapps
23:24:58 wilhelm: Any conclusions on the WebDriver discussion?
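(scribe note, catching up from bryan's session above: a minimal sketch of the QUnit-style existence test bryan described at 23:17:44, assuming qunit.js is loaded in the page; the geolocation API and all names here are only illustrative)
  // QUnit 1.x-style test: check that a device API exists before exercising it
  module("device APIs");
  test("geolocation API is present", function() {
    ok(navigator.geolocation, "navigator.geolocation exists");
    equal(typeof navigator.geolocation.getCurrentPosition, "function",
          "getCurrentPosition is callable");
  });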
23:25:12 wilhelm: We commit to work on the spec, and get that into our browser
23:25:19 plh: MS and Apple should look into that
23:25:52 Mike: normal people at Apple are interested, but they're not the ones who sign off on things
23:26:25 kk: Using testharness.js seems to me very low-hanging fruit, rather than writing a whole bunch of APIs
23:26:50 wilhelm: There should be a spec that covers it; for the IP reasons, we need to get a spec out so there's less risk for those implementing
23:27:05 jgraham: There was some discussion, but no decision, about which bindings W3C would accept tests in
23:27:09 wilhelm: I'd list that as an open issue
23:28:05 MikeSmith: We want to follow up with the testing IG, [other group]
23:28:30 MikeSmith: Spec discussion would go to [... mailing list ...]
23:28:44 wilhelm: Dumping ground for non-W3C-format tests?
23:29:30 kk: You can put whatever you want in the submitted folder
23:29:48 public-browser-tools-testing@w3.org
23:29:53 jgraham: It would be nice, if people dump random test suites in random formats, to separate those out from things that would be approved in roughly their current form
23:29:55 http://lists.w3.org/Archives/Public/public-browser-tools-testing/
23:30:23 kk: We should have an old_stuff directory
23:30:29 jgraham: And encourage people to dump stuff there
23:30:32 for the Testing IG, http://lists.w3.org/Archives/Public/public-test-infra/ and public-test-infra@w3.org
23:31:20 plh: We can associate a repo with the testing IG, and then anyone in that IG can push to the repo
23:32:01 ACTION: Mike to create mercurial repositories for the Web Testing IG and the Browser Tools WG
23:32:03 fantasai: Should be clear that dumping things here is not the same as submitting to an official W3C test suite
23:32:26 bryan: Should also have a wiki that documents what's there
23:32:50 jj: Right, should be clear these are not submitted for review; they're there, and someone can take them, convert them, and submit them
23:32:59 http://www.w3.org/wiki/Testing
23:33:00 jgraham: Come up with a prioritized list of things that need tests
23:33:05 jj: anything that's in CR? :)
23:33:10 plh: I'll take an action item to do that
23:33:19 ACTION plh: make a list of things that need tests
23:33:32 bryan: Need a list of what's available, what the key gaps are, and what we need to get there
23:33:41 kk: Identify specs that are in a bad situation.
23:34:22 fantasai: Also want to track not just what needs testing, but ask vendors whether they have tests for any of these.
23:34:29 fantasai: Can then go pester people to submit those tests
23:35:15 ACTION MikeSmith: Create repos for the testing IG and the testing framework group
23:35:46 plh: Need places to dump tests for groups that don't have repos atm
23:35:54 plh: more and more groups have their own test repo
23:36:39 ACTION: plh to convince the geolocation WG to use mercurial for their tests
23:36:56 3. Vendors commit to finding a person to facilitate submission and use of W3C tests
23:37:48 wilhelm: need to make a formal request to each organization
23:38:04 bryan: Someone should pull together the format descriptions and include the guidelines
23:38:23 --> http://www.w3.org/html/wg/wiki/Testing/Authoring/ Authoring Tests
23:38:33 bryan has joined #testing
23:39:53 discussion of where to collect this information
23:39:56 --> http://www.w3.org/testing/ Testing
23:40:42 jgraham: should be in a place not specific to a given working group
23:41:57 ...
23:42:07 plinss: There's a lot to be gained by standardizing metadata
23:42:32 jgraham: it's hard to do it the CSS way for an HTML test
23:42:40 jgraham: Could have n ways to do it, where n is a small number
23:43:05 Alan: It would be nice to have everything on a wiki so we don't have to go through a staff member
23:43:12 Alan: What if this page was a redirect to a wiki?
23:43:26 jgraham: Could have that page be a link to a wiki
23:43:40 MikeSmith: I like the redirect idea, it minimizes the work I have to do :)
23:43:47 bryan has left #testing
23:44:35 wilhelm: So when should we meet again?
23:44:45 jj: I think we should definitely make this a regular meeting.
23:44:48 bryan has joined #testing
23:44:56 jj: Seems like everyone in every WG is going to be solving the same problems
23:45:48 ...
23:45:54 plh: WebDriver will be under the browser tools WG
23:46:12 mc: Who's "we"?
23:46:17 wilhelm: I don't know, but this crowd is great.
23:46:24 plh: We can put it under the IG
23:46:44 fantasai: We can at least say we'll meet again at the next TPAC
23:46:54 plh: That would be in France next year
23:48:18 fantasai: Since not everyone will be travelling to TPAC, would we want to meet at another place at a different time as well?
23:48:27 jj: Does everyone agree we should meet?
23:48:32 kk: Depends on deliverables.
23:48:45 MikeSmith: If we meet 6 months from now, when would that be?
23:48:50 ?: April
23:49:22 mc: Just want to be sure who the "we" is that the invite would go out to
23:49:49 wilhelm is designated in charge
23:50:37 Meeting closed.
23:50:42 RRSAgent: make minutes
23:50:42 I have made the request to generate http://www.w3.org/2011/10/28-testing-minutes.html fantasai
23:50:44 RRSAgent: make logs public