See also: IRC log
I'm dialing in right now... in case someone else dials into the conf call
If no one dials in then this can just be on IRC
Let's wait a few minutes - maybe David or Aryeh will participate
<jgraham> Did DST in the US change or something? I thought this was an hour later…
yes, we are an hour ahead (spring forward)
<jgraham> OK, it doesn't change 'til next weekend here
I'll have to make a note for next year and send a reminder out
Hope you are OK Mike
Let's get going
Agenda Item #1: Check for any bugs on approved tests
I see no new bugs on the list
<Mike5_> krisk: thanks, yeah, I am doing fine… I just got back to Tokyo today after being away for a week
Note that I updated the canvas security tests
and ran find . -exec grep "test.w3.org" '{}' \; -print in the approved folder
which shows no results (as expected)
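The check above amounts to a recursive text search for the old server name. A minimal Python sketch of the same idea (the helper name and directory argument are illustrative, not from the transcript):

```python
import os

def find_references(root, needle="test.w3.org"):
    """Walk `root` and list files that still mention the old server name."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if needle in f.read():
                        hits.append(path)
            except OSError:
                pass  # unreadable file; skip it
    return hits

# e.g. find_references("approved/") should return [] once the rename is done
```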
<plh> Plh: we're in discussion with Vodafone on their test suite
<plh> ... they have a set of tests that they are interested in contributing
<plh> ... but it's based on their own framework, with the tests generated by the framework itself
<Mike5_> hey krisk
<Mike5> Sorry for the echo/noise
<Mike5> I can hear krisk fine
That is good to hear about a new participant
Agenda Item #2: Approve the Google A/V Tests
I updated 66 tests and added a third parameter
So we are good except for a few tests that have a few bugs
I'll move them into the approved directory
<plh> We did the redirect for test.w3.org now, so it might have broken some cross-domain pages
I did a find/grep and I see no more 'hits' so we are all set at this point with the server name change
Now some of the A/V tests have a '_manual' variant, which I'm not going to move since they are duplicates
Not sure why they were added
see http://w3c-test.org/html/tests/submission/Google/video/events/event_canplay.html and http://w3c-test.org/html/tests/submission/Google/video/events/event_canplay_manual.html
You can see a comment from Simon Pieters about this as well at http://lists.w3.org/Archives/Public/public-html-testsuite/2011Mar/0003.html
If someone wants them to be moved to the approved folder, feel free to speak up
Moving on to Agenda Item #3: HTML5Lib parser tests
<plh> Plh: I did check with Rigo and he said it was fine
Looking at the list we are all OK
jgraham, did you see that thread about this question?
<jgraham> I more or less have them ready to push
<plh> Plh: we can have tests under the MIT license in the HTML test suite
Once they are pushed into the w3c server I'll take a peek
<jgraham> I expect to do this in the next few days
<plh> we'll need to make sure not to lose the license information
<plh> and whatever attribution there needs to be
<jgraham> not to loose?
<plh> :)
<jgraham> Ah :)
Plh, can you take a peek once they are pushed to make sure all is correct?
<plh> I suggest making a separate directory?
It's your call plh
<plh> with a file indicating that all tests in the directory are under MIT license
seems best to have you set up these requirements (I'm not a lawyer)
<plh> it seems that we're not forced to have proper attribution, but it would be nice
any more comments about the HTML5lib parser tests?
<plh> plh: we'll need to figure out how to run the tests within the framework
<plh> ... I'm hoping the testing project will help there
OK, let's move on to the next agenda item
<jgraham> plh: The stuff I have runs the tests using javascript
<jgraham> in a browser using testharness.js
Agenda Item #4: Open discussion on test approval process (per feedback from Google/Mozilla)
<Mike5> for some suggestions, see http://www.w3.org/wiki/Testing/Requirements#Test-case_review … in particular, "allow anyone to easily give feedback on tests, not just named reviewers or people with W3C accounts" (which is a suggested requirement from jgraham)
I think we have been doing an OK job at keeping up with the backlog that builds up
Now I understand that Aryeh is not happy with the time to get his set of tests approved
<plh> Kris: we still need to figure out a way to get Aryeh through...
The main issue is that no process is going to get a few thousand tests reviewed quickly
<Mike5> true
<jgraham> presumably that is why Mozilla are pushing for a default-accept model
<plh> one possibility here: let's separate them into chunks and give those chunks a deadline for review
<jgraham> Since, I assume, they believe that would concentrate effort on the areas that actually need it
<jgraham> i.e. the tests that implementors have problems with
I think when we do get a bunch of tests we do need to get them reviewed, and in this case reviewing and accepting a 'chunk' of tests seems appropriate
Another issue arises when we have hundreds of tests per page
All the tests on a page need to be correct before any can get approved
-or- the tests with bugs need to be commented out
<plh> so we have 9 files of tests to review. which ones should we start with?
I'd pick one that is not too controversial
<plh> one problem is that those tests contain more than just the HTML5 spec
I'll take an action item to send to the list one 'set' to start to review and work on getting approval
<plh> ok
Yes, that and any other feedback that comes up will need to be taken into account
So the other issue is the btoa/atob tests that he submitted
<jgraham> FWIW I don't think that dividing up the tests will take a significant burden off the reviewers
<jgraham> Almost all of the review there is ensuring his code is correct
<jgraham> The actual tests are just tables of element/attribute/type
<plh> do you have an other approach?
<jgraham> Not really
<jgraham> If the tests were split differently, one could review all the string reflection tests then the url ones, and so on
One approach would be to create a script that generates tests
Then you can review the html files that get generated over time
<jgraham> ?
For example a perl script that took parameters or had arrays of attributes built in could output individual html tests
<jgraham> Yes, one could do that
these individual tests could be reviewed/approved more quickly in smaller chunks
The issue today is that all this data and logic are tied together...
<jgraham> Although reviewing the javascript code would still be most of the effort
<jgraham> and the tests would likely run much slower
It really comes down to a trade off
if you want a test approved fast vs slow
surely a single test can be reviewed a lot more quickly than a big complex page that has lots of tests bundled together
Though I don't think we should go ask him to rewrite all his logic...
<jgraham> As I understand it he has a relatively small amount of logic that covers a large number of tests
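The generator approach floated above could look something like the sketch below: a build-time script that expands a table of element/attribute pairs into one small testharness.js HTML file each, so reviewers can approve them in chunks. This is hypothetical - the attribute table, file names, and template are illustrative, not Aryeh's actual test data:

```python
import os

# Template for one generated reflection test; doubled braces escape
# Python's str.format so the JS function body survives formatting.
TEMPLATE = """<!DOCTYPE html>
<title>Reflection: {element}.{attribute}</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<{element} id="el" {attribute}="x"></{element}>
<script>
test(function() {{
  assert_equals(document.getElementById("el").{attribute}, "x");
}}, "{element}.{attribute} reflects its content attribute");
</script>
"""

# Illustrative cases only - the real data would come from the spec tables.
CASES = [("a", "title"), ("img", "alt"), ("input", "name")]

def generate(outdir="generated"):
    """Write one small HTML test file per case; return the file names."""
    os.makedirs(outdir, exist_ok=True)
    for element, attribute in CASES:
        path = os.path.join(outdir, f"reflect_{element}_{attribute}.html")
        with open(path, "w", encoding="utf-8") as f:
            f.write(TEMPLATE.format(element=element, attribute=attribute))
    return sorted(os.listdir(outdir))
```

Reviewers would then sign off on the generator script once and skim the small generated files, rather than auditing one large page that bundles everything.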
Now with the btoa/atob tests we should wait till the bug is resolved...
<plh> seems ok to me
<jgraham> The problem is that relatively small amount of logic is still enough effort to review that volunteers have been sparse
<jgraham> And then there are lots of tedious tables to check
<jgraham> It is not hard to understand why people are not queueing to do this in their free time
<jgraham> I think it is fine to review the atob / btoa tests now
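For context, the behaviour those tests exercise is the window.btoa/window.atob base64 conversion pair; a rough Python analogue (the helper names are mine, not from the test suite):

```python
import base64

def btoa(s):
    """Rough analogue of window.btoa: base64-encode a Latin-1 string."""
    return base64.b64encode(s.encode("latin-1")).decode("ascii")

def atob(s):
    """Rough analogue of window.atob: decode base64 back to a string."""
    return base64.b64decode(s).decode("latin-1")

# Round-trip: atob(btoa(x)) should give back x for any Latin-1 input.
```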
let's move forward with a 'chunk' and see how it goes
It's past the meeting time; shall we adjourn?
<plh> for atob, Kris is waiting on the chairs to decide
<plh> and yes, it's fine to adjourn
feel free to review the tests...
Just because a test doesn't end up in the approved folder doesn't mean it has no value
e.g. non-normative tests
Present: krisk, Plh, Mike5_, Mike5
Agenda: http://lists.w3.org/Archives/Public/public-html-testsuite/2011Mar/0028.html
Date: 22 Mar 2011
Minutes: http://www.w3.org/2011/03/22-htmlt-minutes.html