W3C

ERT WG

17 May 2006

See also: IRC log

Attendees

Present
Johannes, CarlosV, Shadi, Chris, Jim, CarlosI
Regrets
Nick, David, Charles
Chair
Shadi
Scribe
Jim

Contents


blanket statements

<shadi> http://lists.w3.org/Archives/Public/public-wai-ert/2006May/0054

SAZ: blanket statements proposal; two requirements: make a more compact view of what has been tested for humans, and do not lose information on exactly what has been tested
... two use cases: blanket statements to be more compact, and also blanket statements to claim conformance
... The second type is used in P3P and RDF CL, which both describe content for labelling

JL: I think the proposal is fine

CV: I don't see how it works with blanket statements

oops sorry carlos

CI: It's pretty much the idea I have too

SAZ: I want to include more semantics of what is included
... I think we can always pick up vocab from elsewhere for the conformance bits

CV: The issue is whether we can use regexps in URIs, or something similar, to allow describing more than a web-content-collection does

SAZ: Initially we did want more semantics of what's in a subdomain, for a more comprehensive class, but I think that is beyond the basic "what has been tested" area, and we should wait until we can see how much we can re-use from other vocabularies once they have been reviewed
... the webContentCollection is very good for compactness when sites are crawled automatically.
... So you can query the report for what the first URL was, or what was really tested.

CV: You want the start URL, and what was actually tested.
... I'm concerned the general claim may not match the list

SAZ: It's not a problem if one tool only tests some, as long as it's recorded, it doesn't matter if e.g. the css isn't tested

CV: In our tool we have many more configuration options than just a URL, so not everything may be tested, but it gives the wrong impression

SAZ: So you're saying there are missing parameters, e.g. link level to show how many beyond the first page.

CV: I'm concerned it's ambiguous.

JL: I don't see a problem with a slight level of ambiguity; there will always be the overall list of what was actually tested

CI: We're just adding another level for human consumption; you don't have to trust it and can use the complete list

SAZ: Let's say I use tool A and test everything under example.org/* and go just 1 level deep
... tool B does the same but goes 2 levels deep
... tool B will produce a lot more; is that a problem with the results?

CI: a regexp suggests there are no other parameters like link level
... if a regexp isn't enough we should add more properties
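[Scribe note: the point above, that a URL regexp alone says nothing about other crawl parameters such as link level, can be sketched as follows. This is a hypothetical Python illustration; the `in_scope`, `pattern`, and `max_depth` names are assumptions for the sketch, not part of any EARL vocabulary.]

```python
import re

def in_scope(url, pattern, max_depth, depth_of):
    """Return True if a crawled URL falls within the tested scope.

    pattern   -- regular expression describing the content collection,
                 e.g. r"^http://example\.org/"
    max_depth -- how many link levels beyond the start page were followed
    depth_of  -- mapping from URL to its link depth from the start URL
    """
    if not re.match(pattern, url):
        return False
    return depth_of.get(url, float("inf")) <= max_depth

# Tool A follows 1 link level, tool B follows 2: same pattern, different scope.
depths = {
    "http://example.org/": 0,
    "http://example.org/a.html": 1,
    "http://example.org/a/b.html": 2,
}
pattern = r"^http://example\.org/"
print(in_scope("http://example.org/a/b.html", pattern, 1, depths))  # False (tool A)
print(in_scope("http://example.org/a/b.html", pattern, 2, depths))  # True  (tool B)
```

Two tools sharing the same regexp can thus report different sets of tested resources, which is why the regexp alone is ambiguous without extra properties.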

CV: another scenario: you have one tool that only manages to find part of the content under example.org/* and another that finds all of it, and you get a pass with the first.
... I don't think there should be something that can be interpreted ambiguously

SAZ: I propose we wait until the RDF-CL and P3P reviews come back, along with Chaals's webformats access-control PIs

Comments on WCAG 2.0 "conformance" and "baseline"

JL: I'm concerned that the baseline is set against technologies that have no conformance criteria of their own.

<CarlosI> I strongly share the same concerns

JL: If they don't say what a conformant User Agent for e.g. JavaScript is, then it's essentially meaningless

JK: The same issues arise with HTML as well, e.g. with LINK/ABBR, which are not actually widely supported

<JohannesK> Jim: they are, but not in IE :-)

SAZ: How do you (an author) identify that there actually are UAs with adequate support for a baseline?

<CarlosI> Basic use case: CSS in the baseline. Could I use display:none knowing it doesn't work properly?

SAZ: How exactly does saying HTML/JavaScript help with a baseline? For example, if you are using document.write then you need to ensure it meets the appropriate guideline
... So if you focus on a feature, you can usually find guidelines that reflect on it.

JL: I don't think there are guidelines that cover everything within scripting
... It matters more for JavaScript because there are no conformance requirements on what a conformant JavaScript User Agent is

SAZ: So I think it is related to the previous point about finding a conformant UA for a particular baseline

CI: if we have CSS in the baseline: CSS has a very different range of support across UAs; does that mean you cover the different implementations, or only a perfect UA?

SAZ: Could they be relying on conformant UAs in their baselines?

CI: Are there any CSS 2.0 conformant UAs?
... I think we shouldn't rely on a baseline that doesn't exist!

SAZ: So the 3rd point is that you're relying on a conformant browser, even though in reality there are almost certainly no conformant UAs; even HTML 4.01 support has bugs and issues

CI: Conformance runs into the problem that too few UAs are actually conformant, and too few people use the ones that are; we should keep that in consideration
... even if 90% of UAs were conformant, that doesn't mean most people use them
... What is JavaScript?

SAZ: How easy was it to understand the baseline model?

CI: Easy to understand, but hard to put into practice

CV: The concept is flawed and tough to put into practice

CR: I think they had no choice but to have a baseline

SAZ: Baselines might work well in the intranet environment, but not on the public web
... In an intranet you know every sort of browser

JK: I don't think the baseline concept is that flawed; it's just a matter of defining what I assume to be in the baseline

<CarlosI> Actually, I think people are already applying the "baseline concept" in intranet environments

JK: If I have a controlled environment then I could use more than HTML, but without the controlled environment I can only use HTML

SAZ: Do you think there's enough guidance on how to choose a baseline for public websites?

JK: I don't think it's clear, but for a public sector website the baseline should be very low.

SAZ: I'll summarise to the list and then people will add comments!

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.127 (CVS log)
$Date: 2006/05/17 15:28:02 $