W3C

- DRAFT -

HTML Testing Task Force Teleconference

27 Jul 2010

Agenda: http://lists.w3.org/Archives/Public/public-html-testsuite/2010Jul/0009.html

See also: IRC log

Attendees

Present
krisk, plh, jgraham
Regrets
Chair
Kris
Scribe
plh

Contents

Topics
    1. bugs in approved tests
    2. Review Current Tests Posted To List For Approval
    3. Philip Taylor's canvas tests
    4. Opera's and Microsoft's getElementsByClassName tests
    5. Discuss test runner/harness
    6. Discuss test input and test results xml formats
Summary of Action Items

<krisk> OK

<krisk> let's start then

<scribe> scribeNick: plh

bugs in approved tests

Kris: None

Review Current Tests Posted To List For Approval

<krisk> Lets move on to #2

Philip Taylor's canvas tests

<krisk> So we have Philip Taylor's first batch of tests (fallback, type, size)

<krisk> I looked at the first set and they seem fine except for the type.delete test

http://test.w3.org/html/tests/submission/PhilipTaylor/canvas/type.delete.html

<krisk> So the spirit of the WebIDL is to ensure the DOM objects are like JS Objects

Kris: it seems that we're all wrong
... and the test is wrong

<jgraham> Hmm? I'm not sure why that's the "spirit" of WebIDL

<krisk> The test tests that you can't delete the HTMLCanvas object

<jgraham> If browsers interoperably implement one behaviour I would expect that to be retained

delete window.HTMLCanvasElement;

<krisk> but delete is supported by all JS objects

<jgraham> Any js object can be [[DontDelete]]

<jgraham> or the ES5 equivalent

http://dev.w3.org/2006/webapi/WebIDL/#delete

<jgraham> I don't see a problem with this test

Kris: do we want to support JS as it was in 2006 or align with ES5?
... ES5 is forcing you to delete the object

<jgraham> the ES5 equivalent is [[Configurable]]=false

Kris: there is no [[DontDelete]] option

<jgraham> See 8.12.7

<jgraham> Of ES5

pointer?

<jgraham> The delete operator calls that with Throws=false (in non-strict mode)

<jgraham> http://www.ecmascript.org/docs/tc39-2009-043.pdf

thanks

<krisk> see page 40

<jgraham> Yes 8.6.2 explains what [[Configurable]] means

so, on delete window.HTMLCanvasElement

one needs to throw a TypeError exception

if [[Configurable]] isn't true

<jgraham> No

<jgraham> Only if Throw is true

oh yes

<jgraham> (which it isn't unless you are in strict mode)

<jgraham> (which the test case isn't)
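
For illustration (not from the meeting): a minimal sketch of the ES5 behaviour under discussion, assuming an implementation marks window.HTMLCanvasElement as [[Configurable]] = false. The assertions below are an assumption, not Philip's actual test.

  // Non-strict mode: delete on a non-configurable property
  // returns false and the property is retained.
  var deleted = delete window.HTMLCanvasElement;   // false
  var retained = "HTMLCanvasElement" in window;    // true

  // Strict mode: the same delete must throw a TypeError (ES5 11.4.1).
  (function () {
    "use strict";
    try {
      delete window.HTMLCanvasElement;
    } catch (e) {
      // e instanceof TypeError
    }
  }());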

<krisk> so I think we should align to ES5

does the 2D spec need to say anything about [[Configurable]]?

<jgraham> WebIDL needs to be updated to ES5

<jgraham> But it is clear how [[DontDelete]] maps in this case

I agree

<krisk> OK

so, I would expect the canvas 2D spec or HTML5 to say that the property cannot be deleted

<krisk> so then the test should check that [[Configurable]] == false

there is probably a general statement in HTML5 indicating that you can't delete any of the properties

<krisk> so then we just need to check that the HTML5 spec states this so that it's clear

<jgraham> I assume that WebIDL says that interface objects in general cannot be deleted so that every spec doesn't need to say it for every interface

oh yes it does

in section 4.5

If a host object implements an interface, then for each attribute defined on the interface, there MUST be a corresponding property on the host object:

# The name of the property is the identifier of the attribute.

# If the attribute is declared readonly, the property has attributes { DontDelete, ReadOnly }. Otherwise, the property has attributes { DontDelete }.

so we're fine

<krisk> ok then we should assume that at some date WebIDL will change and be updated to use [[Configurable]] and not [[DontDelete]]

<krisk> Any other feedback on the first set of tests?

Resolution: fallback, type and size canvas tests are approved

<krisk> I can move them into the approved folder

Opera's and Microsoft's getElementsByClassName tests

<krisk> I'll keep the same structure

<krisk> Seems like these should be changed so they can fit into the harness

<jgraham> Anne wrote the Opera tests. I think he plans to convert them to the new test harness and submit them

<krisk> I can do the same for the MSFT tests

<krisk> great

<jgraham> (if Anne doesn't do it soon, I will do it; it doesn't look like much work)

<krisk> Now in theory once this is done they will look like http://test.w3.org/html/tests/submission/Opera/resources/apisample.htm

<krisk> So we just need to combine this with the test runner http://test.w3.org/html/tests/harness/harness.htm

<krisk> Though since these don't need manual verification they should run automatically

<jgraham> It should be easy to combine the harness with a runner. It provides hooks to get callbacks when the tests are complete

<jgraham> It's the same way the output is implemented

<krisk> yep
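
Not discussed verbatim in the meeting, but as a rough sketch of the hook jgraham describes, assuming the harness exposes an add_completion_callback entry point (the hook name, the shape of the test objects, and the postMessage hand-off are assumptions here):

  // In a page running automated tests under the shared harness:
  add_completion_callback(function (tests) {
    // Package up name/status pairs and hand them to the runner page,
    // e.g. via postMessage to the embedding frame.
    var results = [];
    for (var i = 0; i < tests.length; i++) {
      results.push({ name: tests[i].name, status: tests[i].status });
    }
    parent.postMessage(JSON.stringify(results), "*");
  });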

<krisk> we can approve them once they get moved into the proper harness

<krisk> jgraham do you agree?

<jgraham> Yeah

Discuss test runner/harness

Discuss test input and test results xml formats

<krisk> currently the test runner outputs plain text

<krisk> I'd like to convert this to xml, or JSON (jgraham's feedback)

<krisk> jgraham do you really want JSON or can you live with xml?

<jgraham> I can live with XML but I would prefer not to

<jgraham> I think for key-value pairs JSON is a better format

<jgraham> That's essentially exactly what we have here

<jgraham> It is also easier to work with from code

<krisk> I think either would work just fine, though XML would seem to be easier to validate that the results don't have a typo/error

<krisk> though let's just choose JSON and move on

I don't think this is going to be an issue

if you don't give me correct results, I'll simply complain :)

<jgraham> Yeah, if we need a "validator" it should be a few lines of custom code

<jgraham> in python or javascript or whatever

as long as the results go through my transformation steps, I'll be happy

<krisk> OK let's talk about the data - to make sure we all agree

so, I need a clear result for each test

either pass, or fail

<krisk> the first section of data should contain UA info

is there another state?

like not applicable?

<jgraham> kris had "not implemented"

<jgraham> I don't understand the use case though

<jgraham> We don't really have optional features

for that section, I need a simple string to identify the agent

OS info might be good

<jgraham> That is typically included in the UA string, I think

<krisk> userAgent, browsername, date, submitter, Testsuitename

<krisk> so then we want userAgent, browsername, date, submitter, Testsuitename, and OS

Testsuitename?

<jgraham> why submitter?

<krisk> TestSuiteName == HTML5

<jgraham> Why testsuitename?

<jgraham> If it is a constant

<jgraham> As Philip pointed out, we probably care about the version of the testsuite that was used

<krisk> that is true it should not change - so then userAgent, browsername, date

what's the difference between userAgent and browsername?

<krisk> do we agree

<krisk> some browsers for compat place stuff in the UA to make them look like other browsers

<jgraham> As long as there is a 1:1 mapping between the complete UA string and the browser, it is not a problem

<krisk> We should keep both - since browser name makes more sense than a UA

<jgraham> Well I don't mind if it is easy to extract using js

<krisk> Lets move on to the next part - data for each test

<krisk> so we have URI, Featurename, specref, result

do we have specref?

or even feature name?

<krisk> I think we just need URI, Featurename, result

<jgraham> I think URI:result is fine

<krisk> I think we should have feature name - since parts of HTML5 are much farther along and will be interoperable long before other parts

<jgraham> But maybe URI:[result, message] is better

<jgraham> We can get the feature names from the URI

agreed. if we have featurename available somewhere, I can find it based on the URI anyway

<jgraham> No need to submit it each time

<krisk> That will work, we can pull the feature name from the URI

I was thinking that I might need to do an extra classification anyway to make the results more comprehensive

<krisk> I think using the URI will work, e.g. http://test.w3.org/html/tests/approved/audio/audio_001.htm == audio test

so, we need pass, fail? do we want to differentiate between the fail states? (timeout, not implemented, crashed, come back later, etc.)?

<krisk> using a simple split on '/'
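
For example, a minimal sketch of pulling the feature name out of a test URI with a split on '/' (the fixed directory layout is an assumption):

  function featureFromURI(uri) {
    // "http://test.w3.org/html/tests/approved/audio/audio_001.htm" -> "audio"
    var parts = uri.split("/");
    return parts[parts.length - 2];
  }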

<jgraham> I think timeout is like fail for conformance tests

<krisk> the next part is the results part: pass, fail, not implemented

<jgraham> It might be different for browser vendors so it is worth distinguishing at the harness level

<krisk> I don't want timeout

<jgraham> but not in the presentation of the results

<krisk> The reason for not implemented is that we are going to have some features not implemented

<krisk> for example SVG in HTML -

so, you'll expect the harness to recognize if the feature was implemented or not?

<jgraham> I don't really like the idea of not implemented

<krisk> if a browser doesn't implement SVG in HTML that's different than one that does a very poor job at implementing SVG in HTML

<jgraham> It seems likely to be inconsistent between different parts of the testsuite

<jgraham> I disagree that that is different

<jgraham> From the user's point of view it is a feature that you cannot rely on in that browser

the boundary between not implemented and failure isn't clear to me

<krisk> so then we should just stick to pass/fail?

if a function returns a wrong result, is it a failure or not implemented?

<jgraham> I think we stick to pass/fail

<jgraham> Anything else is too complex

<krisk> Well the browser vendor that submits the result can just say it's not implemented

<krisk> surely they would know if a feature is implemented or not

<krisk> I was thinking of the case for css2.1 run-ins

yes, but one might say it's not implemented, and another one might say it's a failure

<jgraham> My imagined process here doesn't involve people editing the results files by hand

I agree; sticking to fail/pass seems a lot easier

<krisk> even if you don't implement css2.1 run-ins you end up passing about 1/2 the cases

<krisk> which is a little misleading

oh, that's different indeed. so you want to be able to say "not implemented" on tests that you actually pass?

<jgraham> If we end up with a data-presentation problem, we can compile a separate list of features that UAs claim to not implement

<jgraham> It doesn't need to be tied to the test harness

<jgraham> and indeed I don't see how it can be (without manually editing the results files) in the case you mentioned

<krisk> Ok then let's just have the test results be pass/fail and then we can list features that are not implemented when the data doesn't make sense

<krisk> like the css2.1 run-in case

I don't mind having a "blacklist" on the side

i.e., if you really insist, sure I'll pretend you don't pass the tests :)

<jgraham> heh

<krisk> Ok then for the test results part we have pass, fail

<krisk> and we can add a not implemented list to the report when it becomes a problem
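
Pulling the decisions together, a hedged sketch of what a submitted results object might look like (property names such as browserName are assumptions; only the UA info fields and per-URI pass/fail results were actually agreed):

  var submission = {
    userAgent: navigator.userAgent,
    browserName: "ExampleBrowser",   // assumed field name and value
    date: "2010-07-27",
    results: {
      "http://test.w3.org/html/tests/approved/audio/audio_001.htm": "pass"
      // ... one pass/fail entry per test URI
    }
  };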

ok

<krisk> The test input part - I assume this should be easy

<krisk> just uri and type

type?

<krisk> type == manual verification || type == automatic verification

<jgraham> That seems to be a property of the testcase

<jgraham> So we don't need to submit it with the results

<krisk> This is the input not the output

I think Kris is saying the same thing

<jgraham> OK

<krisk> At first I didn't think this would be needed at all

<krisk> and we could just use plain text

<krisk> though when you run the tests you don't want to run a manual test, wait for an automatic test to run, run a few more manual tests, wait for another automatic test to run, etc...

<krisk> it should just let you run the manual tests as one bunch

makes sense to me
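
A minimal sketch of the test input list under that scheme, grouping by type so the manual tests can be run as one bunch (the property names and example URIs are assumptions):

  var testInput = [
    { uri: "http://test.w3.org/html/tests/approved/example/auto_001.htm", type: "automatic" },
    { uri: "http://test.w3.org/html/tests/approved/example/manual_001.htm", type: "manual" }
  ];

  function byType(list, type) {
    var out = [];
    for (var i = 0; i < list.length; i++) {
      if (list[i].type === type) { out.push(list[i]); }
    }
    return out;
  }

  var manualTests = byType(testInput, "manual");        // run these as one bunch
  var automaticTests = byType(testInput, "automatic");  // run these unattended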

<krisk> any other agenda items?

<krisk> I have 2

<krisk> I'll send out another set of canvas tests to review (Philip's tests)

<krisk> since we want to keep making progress on these tests

<krisk> also I'll be submitting some tests for http://dev.w3.org/html5/spec/Overview.html#dom-document-getselection

<krisk> so for the next meeting we should be able to approve some more tests and have further progress on the harness and test runner

<krisk> any others?

ok, sounds good

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.135 (CVS log)
$Date: 2010/07/27 16:20:41 $
