See also: IRC log
We can wait a few minutes in case folks from Intel attend
good morning plh
<plh> good morning
We are waiting a bit to see if the folks from Intel attend the meeting
<plh> ok
<jgraham> Is it morning in plh land?
<plh> it is
They seem interested in the 'sandbox' tests
<plh> Boston, MA
<jgraham> Yes, I knew that…
<jgraham> Although I guess you might have been almost anywhere
plh it's 11am
<plh> it's 6 hours behind Europe
<jgraham> Right, I had just forgotten that you were in Boston
<plh> btw, I'm trying to convert philip taylor canvas to use testharness.js
<plh> is it of interest here?
<jgraham> Yes
<jgraham> Didn't Ms2ger already do that?
Yes - note Ms2ger did a lot of this already
<plh> oh he did
<plh> but our approved tests don't reflect that
though a bug exists that causes a lot of tests to time out...
including approved tests
<plh> http://w3c-test.org/html/tests/approved/canvas/
<plh> those don't use testharness.js...
<plh> where are Ms2ger's tests?
take a peek at http://www.w3c-test.org/html/tests/submission/PhilipTaylor/canvas/2d.gradient.object.invalidcolour.html
Do you see how it times out?
<plh> hum, that test requires manual checking anyway
<plh> there are 10 of those or so
Here is the approved one http://www.w3c-test.org/html/tests/approved/canvas/2d.gradient.object.invalidcolour.html
<plh> I'll have a look
In August I recall talking about this on IRC and thought Ms2ger was looking at fixing it
It looks like a bug that could be fixed
Also note that it impacts a large number of tests that have been converted to use testharness.js
In the case of http://www.w3c-test.org/html/tests/submission/PhilipTaylor/canvas/2d.gradient.object.invalidcolour.html
It would seem possible to not have this be manual, since the approved test ends up writing 'passed' to the page
<plh> ah, my version works :)
really?
that is great
Do you have other updates that are not pushed to Hg?
<plh> yes, but I was more conservative than Ms2ger in the transformation
<plh> nope, no other updates yet
so how do we want to move forward?
<plh> I'll keep looking at ms2ger files
scribe: since it seems that Ms2ger has the submitted tests partially converted
plh do you plan on submitting a patch?
<plh> yes
For the approved ones?
<plh> yes, for the approved ones
<plh> I'm trying to generate a preliminary implementation report for tpac
I have some data as well that I was thinking about presenting
In an effort to help allow the spec to enter CR
<jgraham> plh: Based on what data?
I was just going to have a chart based on how many tests have at least two browsers passing
<plh> jgraham, based on the approved tests
scribe: and not document who specifically passed/failed
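The two-browser metric described above could be computed roughly as follows. This is a hypothetical sketch: the function name, result format, and the sample data are all invented for illustration; real inputs would come from actual runs of the approved tests.

```javascript
// Hypothetical sketch: given per-browser results for each approved test,
// count how many tests pass in at least `minBrowsers` browsers.
// The results object below is invented sample data.
function countInteroperableTests(results, minBrowsers) {
  return Object.keys(results).filter(function(test) {
    var passes = results[test].filter(function(r) {
      return r.status === "PASS";
    }).length;
    return passes >= minBrowsers;
  }).length;
}

var results = {
  "2d.gradient.object.invalidcolour.html": [
    { browser: "Firefox", status: "PASS" },
    { browser: "Chrome", status: "PASS" },
    { browser: "IE", status: "FAIL" }
  ],
  "video.canPlayType.html": [
    { browser: "Firefox", status: "PASS" },
    { browser: "Chrome", status: "FAIL" },
    { browser: "IE", status: "FAIL" }
  ]
};

console.log(countInteroperableTests(results, 2)); // logs 1
```

Note this aggregate deliberately reports only the count, not which browsers passed or failed each test, matching the "don't document who specifically passed/failed" point above.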
<jgraham> That's not going to cover much
<jgraham> Seems it would be really useful to have a measure of which parts of the spec actually have coverage
<jgraham> As well as which parts have implementations passing the tests we do have
<jgraham> And some qualitative data where coverage isn't great
<jgraham> e.g. Section X: no tests, but caniuse.com says 3 browsers implement this already
I was also going to go back a few years and show similar data
I think it's important to focus on all the features that browsers have added/implemented in the past 3-4 years
<jgraham> I think I disagree with that
Stuff like the History API, Canvas, and HTML Audio/Video are all good examples of features that have been added and that are highly interoperable in current browsers
<jgraham> At least, if the goal is to find non-interoperability (which it should be), then older stuff is at least as valuable
<plh> in terms of determining the coverage, what data do we have?
Yes and I recall that the 2014 Plan is to remove stuff from the HTML5 spec that is not interoperable
<jgraham> krisk: That sounds like a very dubious plan to me
Well another way to view the problem is that the spec is huge...
<jgraham> Another way to view the problem is that the platform is large
<jgraham> And interconnected
<jgraham> small specs leave interoperability gaps at the edges
and we do have tests that cover big chunks of the spec, and the tests we do have don't show a lot of interop issues
E.g. canvas, html5 video/audio, parser
<jgraham> Parser is the great success story :)
So it would seem to be important to have the WG agree that these parts of the spec are stable enough to enter CR
Then we can start to look at other areas of the spec that don't have tests and are viewed as not interoperable
Make sense?
<jgraham> More or less, I think
Sorry if I am typing too fast...
I suspect the co-chairs (plh can correct me) do want to see some more data from the testing task force
I think it would not be unreasonable to take the table of contents
scribe: e.g. http://dev.w3.org/html5/spec/single-page.html
and for each section have data...
For example
4.8.2 The iframe element (http://dev.w3.org/html5/spec/single-page.html#the-iframe-element)
<jgraham> I think that would be excellent
<jgraham> If we understand what the limitations of the data are
Then list out what we know today... e.g. implemented in browsers: sandbox is only implemented by WebKit, IE, and FF17
I would expect this to generate a lot of discussion
For example, this part of the spec has the 'seamless' attribute, which I think only Chrome supports
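The per-section report krisk sketches above could be generated along these lines. This is a hypothetical sketch only: the section ids, test paths, and function names are invented for illustration; a real report would crawl the repository and the spec's table of contents.

```javascript
// Hypothetical sketch: map tests to spec ToC sections and summarize
// coverage per section. All ids and paths below are invented examples.
function coverageBySection(sections, tests) {
  return sections.map(function(section) {
    var matching = tests.filter(function(t) {
      return t.section === section.id;
    });
    return {
      id: section.id,
      title: section.title,
      testCount: matching.length,
      hasCoverage: matching.length > 0
    };
  });
}

var sections = [
  { id: "the-iframe-element", title: "4.8.2 The iframe element" },
  { id: "the-canvas-element", title: "4.8.11 The canvas element" }
];

var tests = [
  { path: "approved/canvas/2d.gradient.object.invalidcolour.html",
    section: "the-canvas-element" }
];

var report = coverageBySection(sections, tests);
console.log(JSON.stringify(report, null, 2));
```

Sections that come out with `testCount` 0 (here the iframe section) are exactly the gaps where jgraham's qualitative notes (e.g. caniuse.com data) would be attached.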
<jgraham> http://dvcs.w3.org/hg/html/rev/40ed9a085839 <- I just pushed some script scheduling tests that Opera had previously released in a more useless location
plh do you think this is what the co-chairs need for the director to have the spec enter CR?
<jgraham> (that was a little off topic, I know)
<plh> kris, yes
<jgraham> (hopefully I got the most up-to-date versions of everything)
Nice, are these all HTML5 tests?
<jgraham> In what sense?
e.g. http://www.w3c-test.org/html/tests/submission/Opera/script_scheduling/044.html
This looks like a WebApps DOM event test
scribe: not that I looked super closely at the test
<jgraham> Well the mutation event ones are hard to classify of course
<jgraham> But that's exactly my point about tests that span the gaps between specs :)
Though script execution is a very key part of interop
<jgraham> (in general this testsuite was designed to find bugs; it wasn't written with the goal of being a spec testsuite)
Makes total sense
OK, let's cover the agenda!
I don't think the Intel folks will be attending
I think the sandbox tests are interesting, since it seems like Firefox will be supporting this attribute soon as well.
jgraham are you interested in these tests?
<jgraham> Well not more than any other tests
<jgraham> So, "yes", I guess :)
That covers the 'sandbox' part of the agenda
Now let's look at new test submission
Ms2ger seems to have submitted tests from WebKit and Adam Barth
http://dvcs.w3.org/hg/html/rev/eed0adf06401
see -> http://www.w3c-test.org/html/tests/submission/WebKit/Location/
plh, I'm not a lawyer, but what does this mean: http://www.w3c-test.org/html/tests/submission/WebKit/Location/LICENSE?
<jgraham> Seems to be a 2 clause BSD license
<plh> it seems compatible to me
<plh> but I'm not a lawyer
<plh> I could ask
plh, can you just check (not right now) that Adam is part of the HTML WG and confirm that this license is equivalent to the current W3C test suite license?
<plh> Adam isn't part of the wg
e.g http://www.w3.org/Consortium/Legal/2008/04-testsuite-license.html
<plh> but that shouldn't be a problem I think
and http://www.w3.org/Consortium/Legal/2008/03-bsd-license.html
<jgraham> The tests seem to be broken anyway
<plh> and I'll ask for the license
<jgraham> By which I mean, they seem to rely on the WebKit test harness
Yes, Ms2ger's hg comment indicates they need to be updated/fixed/converted to use testharness.js
The other test submission was the approval for the Intel audio/video tests http://dvcs.w3.org/hg/html/rev/e98fc1cd0ea4
The last new submission was the one mentioned above from Opera (script scheduling)
http://dvcs.w3.org/hg/html/rev/40ed9a085839
Moving on in the agenda - bugs on approved tests
No new bugs other than a few more odd dom viewer bugs (e.g. https://www.w3.org/Bugs/Public/show_bug.cgi?id=19306)
Shall we adjourn?
<plh> ok
<jgraham> sure
Meeting adjourned
Scribe: krisk
Date: 09 Oct 2012
Present: jgraham, krisk, plh
Minutes: http://www.w3.org/2012/10/09-htmlt-minutes.html