Minutes of 4 December 2000 ERT WG Face 2 face - joint meeting with PF WG


In the room:

Timothy Springer, Len Kasday, Mike Williams, Susan Morrow, Sharon Laskowski, Harvey Bingham, Wendy Chisholm, Charles McCathieNevile, Dave Pawson, Jon Gunderson, and Daniel Dardailler.


On IRC: William Loughborough (aka Geeze), Sean Palmer (aka Sean or SBP)


These minutes are a combination of notes taken in IRC by a participant in the room and comments from those attending via IRC.

Scribes: Daniel Dardailler, Charles McCathieNevile, Wendy Chisholm as well as IRC input from Al Gilman, Sean Palmer and William Loughborough

Summary of action items and resolutions


LK: Please tell us: name, organization, location, relevant work, what you want from this meeting

susan morrow, macromedia, sfo, dreamweaver, learn and contribute

len kasday, temple, chair, good first start for ADL

mike williams, macromedia, sfo, flash, accessibility of flash

charles MN, australia, wai, ADL

tim springer, ssb, sfo, build ert accessibility, incorporating ADL in product

al gilman, cochair pf, back from w3c ac, ert take lead

harvey bingham, retired, feel like becoming PWD (eyes, hearing, failing, can't remember what else :-)

jon gunderson, U illinois, uuag chair, learn ERT, ADL, PF. UUAG will consider more repair stuff in the future

sharon laskowski, nist, heard from al, len, focus so far on usability, worked on log format, looking for activity with this expertise

wendy, wai, staff contact for er/wcag, hope to leave with clear idea of where headed so we can create some tools

dave pawson, rnib, uk, wai, daisy, validation, repair

daniel dardailler, wai, france, interested in adl for w3c conformance in general

<dd> ok, we're done around the table

<dd> bill, sean, could you say something about you

<Sean> Sean B. Palmer: University of Sussex (CPES UG), U.K., WAI ERT/PF/GL WGs. Here to develop ADL. Semantic Web/XML/RDF/XHTML developer

<Geeze> William Loughborough resident geezer from Smith-Kettlewell Institute of San Francisco. Interested in marrying usability/accessibility and the Semantic Web



ADL - Accessibility Description Language

len: adl used to record check, both manual and auto

len: what do people expect from ADL

cmn (charles): ability to capture several evaluations (similar to css cascade)

<Sean> c.f. Len's original requirements (i.e. expectations) at


dd: is it also used for accessibility of language (e.g. smil) ?

<Geeze> Not so much a formal "language" as a collection of tools/methods for expressing assertions/evaluations concerning accessibility and conformance to guidelines therefor.

<Geeze> Should be a requisite part of *ALL* Web documents!

<Sean> And hence enabling us to point and repair according to those expressions

tim: record who did it, what it says, both used as input/output

dave: is the output a rating ?

tim: not just that

<Geeze> "AN" output is a rating. There is no "THE" output.

<Sean> In general we want to create a rich metadata description language that can describe the accessibility of languages, and also point into these languages specifically if possible. This should also enable the repairing of the documents

<Sean> People should be able to make accessibility assertions about their documents, and accessibility tools authors should be able to utilize ADL for pointing out accessibility errors and repairing them.

<dd> bill: why not a formal language ?

<Geeze> too restrictive and huge of an undertaking - leads to enmirement in details rather than obtaining results.

<Geeze> insufficiently diverse.

<dd> we're now looking at charles' RDF page http://www.w3.org/1999/11/conforms/

<Geeze> A collection of such items rather than a formal language is what I covet.

<dd> but it uses rdf, so is it formal?

<Sean> Note how it compares with myself/Mr. Loughborough's example at: http://xhtml.waptechinfo.com/subadl/

<Geeze> but it's not "ADL" nor is it itself a language but just use of existing languages/techniques - already in place.

<Sean> RDF provides a framework that can allow us to create vocabularies

<Sean> How we define and use those vocabularies is the important part

<dd> agree

<Geeze> If we focus on creating a "language" instead of using its precepts to utilize existing mechanisms we are viewing the "hole" rather than the "donut".

tim: ability to produce human readable report (using XSLT e.g.)

<Geeze> right on!

<Sean> That's one of many requirements and possibilities

<dd> we're now looking at the requirement section in Sean's ADL summary http://www.mysterylights.com/xhtml/adl/

<Sean> http://www.mysterylights.com/xhtml/adl/#p31

<Geeze> decentralization and diversification are inevitable/central to success. More than one way to skin this cat.

<Sean> Look at 3.2 - it's more generic: http://www.mysterylights.com/xhtml/adl/#p32

susan: how use in authoring environment? important not to look at pages but entire sites. also, sites that are dynamically generated. what does that mean in terms of inserting code?

<Sean> ADL could be dynamically generated along with pages

<Geeze> E.g., Bobby/WAVE/A-prompt and many others do similar things in somewhat different ways. This diversity will be


<Sean> ADL could/should be an integral part of a well designed site

cmn: pull pieces together dynamically and make sure alternatives exist for those pieces. address contents of database plus template that's used to pull stuff together.

<Sean> Pull what stuff together? Be specific

<Geeze> client side proxy is !important last in "cascade".

LK: fixed content page look at template.

dd: how do communicate to human that human judgement is required? in a way that they can relate to it.

susan: there are people that don't know anything about accessibility or HTML (or other langs)

<Sean> Too many!

susan: to be as widely implemented as possible, needs to be understandable by wide audience.

<Sean> It should be automatic

davep: implement circularly behind the scenes.

<Geeze> They've opted to participate by using a "participating" authoring tool that has involved them in making human judgements which is flattering.

<Sean> If editing tools had ADL type systems built in, and required metadata etc...

cmn: assume test suite. i went w/rdf since can associate w/that suite rather than official version.

<Sean> Like the UWIMP system, we aim to just have people type in stuff about their site, and out pops a profile

<dd> what's UWIMP?

<Sean> I'll let Mr. Loughborough field that one!

<Geeze> Universal Web Index Management Program

<dd> URI?

<Sean> Not yet. http://uwimp.com/ under construction

<Geeze> http://uwimp.com but it won't be up for about three days. Just bought the domain name!

susan: re: cmn proposal: say that someone has a long pg, they get back 100 errors, how will the lang help a tool help them decide importance.

<Sean> Based on WCAG levels, I suppose...

cmn: does not prioritize. if you want to, do that on implementation. e.g. may have a tool that only describes images. can say, this is what i can help you do.

<Sean> If there were more levels: A1/A2/A3/A4/A5/A6, it would be easier to assign priorities to accessibility.

cmn: another tool might pick p1's out of WCAG.

<Geeze> User choice as to how usable/accessible a site should be.

cmn: depends on tool implementing and talking w/author.

<Sean> Yes.

jg: how many people use templates from AT vs. use own.

susan: customize templates.

<Sean> If people hear about accessibility, then they should be able to look it up and see that they can use our tools, the W3Cs tools.

jg: "using for data?" "using for layout?" what about tool telling problems while designing template.

<Geeze> "templates" from AT are actually widely exchanged sort of like music mp3 files. Show me yours, I'll show you mine, etc.

susan: first, if they are a pro designer iterative design. usually have a design that they have to repair. re: AT (dreamweaver) we don't interfere w/the design process. we could design something that lays on top of that. to do that in the authoring environment that would be inappropriate.

<Sean> Most templates aren't even valid XHTML. Even Amaya!!!

<Geeze> E.g. use a little program that makes image map design easier "within" one's authoring environment.

cmn: take the 1st image in your doc if you turn it off, still able to use the pg.

sharon: checklist need to go through.

LK two methods: 1 batch mode other do creative process.

JG: not to interfere w/creativity, some techs more access than others. if waiting for post-design comments, they will complain: "if i used this diff tech when i started out, would be ok and not have to redesign."

ag: want to capture MVC; the model captures it all. give user a filtered summary - that should be the "view" in which they review results.

ag; benefits of formality:

  1. can use evaluation from one author, repair from another. Interoperability.
  2. drive view constructions.

sharon: i agree w/al. i want to add: interactivity during repair is important.

  1. as susan said, part of design is interaction.
  2. management of repair process. able to talk about progress.

<Sean> Mainly a) Asserts accessibility, b) Makes assumptions about accessibility, c) Makes repairs as far as I'm concerned

<wendy> are these requirements sean?

<Sean> Yes. They are the core requirements.

daniel: sean why is making repair a requirement?

<Sean> To make assertions about site and page accessibility is the main one

<Geeze> caveat that "makes repairs" is under user control.

<wendy> how is making the repair in adl a requirement?

<Sean> That is a strongly advised option

<Sean> If you make assertions about accessibility, why not repair based on those assertions?

dd: understand, not sure agree. should it include process assertions? when in fact a format?

tim: input into a repair tool.

<Geeze> ATAG revisited - the timing/extent of "repairs" must be author-controlled to be usable.

<Sean> Otherwise, what else would we use ADL for. Except to say this page is WCAG 1.0 AA

dd: can say where eval comes from. manual, auto, etc. name the credentials.

cmn: An ADL description identifies 5 errors. Use a repair tool to fix two of them and generate some more ADL that says they're fixed.
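CMN's evaluate, repair, re-assert cycle can be sketched in a few lines; the record fields, checkpoint names, and statuses here are purely illustrative, not a defined ADL vocabulary:

```python
# Sketch of the evaluate -> repair -> re-assert round trip CMN describes.
# Field names and the "WCAG-1.1" checkpoint label are hypothetical.

def repair(errors, fixable):
    """Mark errors whose id is in `fixable` as repaired; return new ADL-like records."""
    updated = []
    for err in errors:
        fixed = err["id"] in fixable
        updated.append({**err, "status": "fixed" if fixed else "open"})
    return updated

# An ADL description identifies 5 errors...
adl = [{"id": i, "checkpoint": "WCAG-1.1", "status": "open"} for i in range(5)]

# ...a repair tool fixes two of them and emits updated assertions.
adl2 = repair(adl, fixable={0, 1})

remaining = [e for e in adl2 if e["status"] == "open"]
print(len(remaining))  # 3 errors still open
```

The point of the sketch is that the repaired state is itself expressed as more ADL, so a later tool can pick up where this one left off.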

<Geeze> just pointing stuff out is instructional and that's a main goal - training in methodology. (optimized: change the existing ADL to acknowledge the changed status)

<Sean> What exactly is ADL for? It's an accessibility description language, by definition. O.K., so what are we using it for?

dave: bldg on charles idea of modularity. potentially end up w/100's of reported bad points in code.

<Sean> To assert accessibility, and then allow repairs based on that!?

dp: if prioritize, then can show only important ones. requires matrix of relative importance needed.

<Sean> That can easily be built in.

dp: if 155 errors are all equal, won't do any.

<Geeze> hence a "user template" of what to report/repair/ignore.

dp: particularly needed if modularized.

<Sean> Just say "you did this error 155 times"

<Sean> That could be a user template option

dp: misinterpreted not same error 155 x's, but 155 errors.

<Geeze> there's no jail term for ignoring or brownie points for following the pointers to "errors".

dp: where to start.

<Geeze> we've started.

<Sean> So what if we get a list of 155 errors. If we can automatically repair them all, it doesn't matter.

cmn: 1. agree that in an implementation, good to prioritize.

<Geeze> in fact Josh Krieger started it with Bobby .1

cmn: another model: HotDog. it has access checking. they have a structure view of doc that id's errors.

<Geeze> so will Flash.

cmn: always available. use it as you go. if you add 500 imgs to a site.

<Sean> Yes. WCAG need to say exactly how important each checkpoint is. Some checkpoints are more/less important to some people than others. There is no fixed table. We have to allow the user to make some sense of it: the user of ADL that is...

cmn: you might add 500 IMG elements, then assign images or do each as you go.

<wendy> wc: (just to irc) we have priorities in wcag.

<Sean> We need more layers

cmn: important that design decisions be made when being implemented. author can say "don't hassle me" or "help me as I go".

<Geeze> IMO those "layers" might be user-defined rather than by some "central authority"

tim: if incorporate priority into ADL.

<Sean> Some people may say that with 500 images, I'm not going to be able to write 500 longdescs, so they choose to ignore that. William is right.

tim: can point to external resource. as author of tool can say, for "webman" we'll prioritize that way. bob, will have a diff priority. if we have 10 standard tests, we can point to wcag, or our description.

<wendy> sean - ok. any comments b4 you go?

<Sean> I have one main comment

sharon: separate description of accessibility from user interface.

harvey: images might have multiple uses.

<Sean> ADL should be a tool that people can add to their sites to enable automated tools and humans alike to see how accessible that site is. They should then also be able to repair it to their needs so that they can access the site how *they* (the user) need it. That is a lot of scope, but it is definitely needed.

harvey: if description attached to image, fix it once then all places point to it should be able to tell has been fixed.

<Sean> I'll be back later, thanks.

susan: people get back huge reports, but the majority of stuff they should look at is not in the reports, b/c we couldn't evaluate the code. it is difficult to decide what to look at. even just priority 1 checkpoints can be overwhelming. hard to give feedback as to what to focus on.

<Geeze> sounds like a "retrofit" viewpoint. Must focus on authoring tool reminders

al: geeze made good point: alerting on the fly is a user option. ATAG discussed this. has addressed. what do you tell the designer first? when have many errors. my ideal: have demographics. not only that violated rule A, but that there is a pattern.

<Geeze> the journey of 1000 errors begins with one mistake

al: 67% of violations of checkpoint 3 look like this. therefore, something in reporting function. other errors spread over variety of patterns. we don't have that knowledge base. design description lang so a tool can use that along w/ the data it finds.

<Geeze> I think we DO have a form of that "knowledge base" via WebWatch and resources like it.

<wendy> (just on irc) of errors made via WCAG?

LK: if we want to tell the person the priorities, it doesn't have to be in the ADL file that states the errors; a separate source tells how to categorize and prioritize. e.g. 508 might have their own. that is pointed to by the tool.
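LK's separation can be sketched directly: the error list stays neutral, and a pluggable priority scheme (here plain dicts with made-up checkpoint numbers and rankings) is what the tool points to:

```python
# Sketch: neutral error list + external, swappable priority schemes.
# Checkpoint ids, node paths, and priority tables are all illustrative.

errors = [
    {"checkpoint": "1.1", "node": "/html/body/img[3]"},
    {"checkpoint": "3.5", "node": "/html/body/p[2]"},
    {"checkpoint": "1.1", "node": "/html/body/img[7]"},
]

wcag_priorities = {"1.1": 1, "3.5": 2}        # one authority's ranking
section508_priorities = {"1.1": 1, "3.5": 3}  # another authority may rank differently

def prioritize(errors, scheme):
    """Sort errors by whichever scheme the tool has been pointed to."""
    return sorted(errors, key=lambda e: scheme.get(e["checkpoint"], 99))

top = prioritize(errors, wcag_priorities)[0]
print(top["checkpoint"])  # checkpoint 1.1 ranks first under this scheme
```

Swapping `wcag_priorities` for `section508_priorities` reorders the same error list without touching the ADL file, which is exactly the decoupling being proposed.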

<Geeze> I used to hate spell-check notification, learned to love it. same could happen with ATAG conformant tools.

dave p: another complexity: if I minimize author workload, but they say, "i have a graphical site, bump up priority of a particular group."

dave P: therefore weight tests based on audience needs. W3C more equal weightings.

CMN that's len's example. 508 vs. WCAG.

<Geeze> "audience needs" and one's own most frequent transgressions.

LK this is additional.

tim: thought about a lot. trick is attribution: attribute chunks of ADL to people. if I attribute that SSB conforms and point to 508, access board points back to my doc and says valid relative to 508.

daveP: ref from rather than w/in.

cmn: statement of what tests use is not integral to test results.

tim: questions about # of tests. important to have commonalities between errors. would like to see each one flagged as individual violation, esp. as input into repair tool. also, supports need for summary stats in ADL output.

<Geeze> "individual flags" would be *your* option but not mandatory for all.

[10:45] break.

LK: people having interesting discussions during break. share points from break?

daniel: my feeling is that we should decide if ADL (EDL) is raw data. anything inferred should not be there - it should be in a diff file or processed elsewhere. the eval format should just have the result of the eval. we need to discuss what data model is used. len and i drew a simple rdf graph where we have a resource that has a statement about it: both a description of the error and the method used to eval. i.e. an rdf triplet: URI, problem, method, author of description. start w/ a simple data model then map to rdf. don't need to put in ADL that it is "A" or AA - that can be inferred. if this file has 3000 images missing alt, don't put that in the file, infer it.
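Daniel's raw-data model can be sketched with bare (subject, predicate, object) triples; the property names below are hypothetical, since no ADL/EDL vocabulary had been defined at this point:

```python
# Sketch of the raw-data model: a resource, statements about it, and the
# evaluation method/author. Predicate names are made up for illustration.

page = "http://example.org/page.html"

statements = {
    (page, "hasProblem", "missing-alt"),
    (page, "evaluatedBy", "http://example.org/people#dd"),
    (page, "evaluationMethod", "manual"),
}

# The raw file carries only statements like these; a conformance level
# ("A", "AA") is inferred by a processor, never stored in the file.
def infer_level(statements, subject):
    problems = {o for s, p, o in statements if s == subject and p == "hasProblem"}
    return "A" if not problems else "none"

print(infer_level(statements, page))  # a recorded problem blocks level A
```

The separation matters: the same raw statements can feed different processors (report generators, repair tools, conformance checkers) without the file committing to any one interpretation.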

<Geeze> not just "errors" but also pointers to conformances?

dave P: clarification: you are describing ADL as output of checking process. my thought was ADL was what does the process.

DD: ADL Is the lang.

daveP; the lang of the report?

DD: YES. storing raw data of eval. e.g. want output of HTML validator to be ADL. today it is just a human report. hence EDL - evaluation description lang.

LK shall we try to get consensus? should we create a lang that includes other types of validation?

<Geeze> and other items than "validations"

sharon: clarification: meta info about a product, e.g. B. Shneiderman: this site works best with software X. does raw data include meta info about accessibility, what user tests were done, etc.

DD: could be. same type of statement. have URI, say something about it. could say "it's not usable," using usability method Y. see diff levels of rdf and schema. have overall schema that describes the statement. the descrip points to another schema, e.g. for WCAG, or whether used human testing, or heuristic, or ...

<Geeze> hb Jove, I think you've got it.

DD for HTML validation have only one method. using parser Z

sharon: this is a small step in that direction. this is beyond what we discussed this a.m.

<Geeze> Giant step!

davep: scope?

dd: boundaries are: designing a lang that assesses...



dd: list of assertions of fragments

LK continue going around the table

sean (on the phone now): making a framework to eval docs not just accessibility. accessibility one use?

dd yes

sean: beyond WAI?

DD: yes, we need in W3C to formalize conformance. e.g. test suite to test SMIL conformance. could generate EDL from test suite.

<Geeze> Hoo Ray!

DD the framework is simple.

<Geeze> WAI will wag the W3C dog in this matter.

DD work detail of accessibility specifics out later.

sean: sounds good.

susan: i talked w/dave during the break. about priority wcag checkpoints that we could not test for. e.g. p1, shift in natural lang.

mike: cmn and I discussed flash

<Geeze> the author can "test for" that.

mike: what we're working on in future.

dd: i also discussed w/mike at break. flash objects can have unique id.

<Geeze> and longdescs?

mike: pointing into flash files is something working on. want to make files searchable. like xpath.

mike: not be a longdesc. right now, no. you can assign a flash object w/in anything in HTML doc.

cmn: this HTML fragment is a description for this flash object.

dd external file?

mike: yes.

cmn: write it in rdf.

tim: effects of jetlag.

al: pass

harvey: pass

jon: higher level concept: this based on design decision. creating this effect (e.g. menu) has a variety of accessibility problems. if diff tech used, could get rid of problems. say this group of things is a problem?

<Geeze> tools to aid in habituating indexing.

<wendy> ?

jg: say i have 30 images in a composite image, w/mouseovers. images have text. if had used something else to make site map, access issues may have gone away. could simplify site. if known what author doing (site map) then the group (this feature) could be classified.

dd: summarizes.

jg: rather than saying 100 problems, say there are just these 2.

dd that is compatible.

<Geeze> If ADL-enabled tool is in use it will train the author in indexing/outlining/clarifying the generated materials.

dd description of real problem, but also the bigger problem. carry raw data (specific problem) as well as higher-level.

susan: do you mean things like disjointed rollovers? there is info embedded in jscript?

jg: i can give a demo at lunch. at design time, questions have to be asked about what the author is trying to do. after they've created it (getting gifs in right places), then saying "you could have done it this way" - they won't want to hear it. could have saved time.

sharon: this sounds like user interface design vs testing. perhaps tools (category analysis - do upfront). don't go to user testing until get basic testing out of way.

<Geeze> "user interface PLUS testing"?

sharon: in design phase, do some testing rather than testing phase (for usability). part of engineering process.

dave: scope, user impact. scope: w/shift to output (say bobby), i have the image that adl is the app. if emphasis on output, have scoping problem w/prioritization & test integration. 2. from what we're hearing from users: i want to produce accessible content, but how? i don't care if it is a checkpoint, i accept your authority that it is wrong. i want to be told how to fix it. straight from report to suggestion. if really curious, explain why. (effect on users)

cmn: 1. tim & i - we started the mtg talking diff langs. 2. dd suggestion of separating eval lang that we plug pieces into - supports dave's point. the tool can point you to where you have configured, e.g. specific schema example or animated process. 3. checking for changes in nat. lang and many others are implemented. one reason to keep the model open is we need to account for that. tools are good and bad at different things. 4. take this model, obvious schema to use would be AERT. this now w/in AU. note dependency. 5. tim & i - implementing rdf in WART. (it's a form of yes/no questions).

ag: server that generates report tool from user.

wc web accessibility report tool (WART)

cmn: SSB uses XML schema to carry eval info. cool to see how they plug together. 6. canonicalization of non-xml. deal w/javascript, css, etc. essentially declare an algorithm. bad html another. tidy e.g. makes that conversion. 7. re: jon's point: you can infer that tool X did or didn't do this. at implementation level, e.g. dreamweaver has "clean up word generated HTML." it expects what word generates. another e.g.: know of good images w/rdf. can expect rdf.

<Geeze> hence the "how" part of the indexing.

cmn 8. requirement: to add human readable comment. or assertion about a doc.

<Geeze> hence "proselevel"

wendy: pass

len: sketching on board what info to attach and where. [sorry, william, no way to transmit to you.] e.g. defn of priorities. defn of compliance. defn of repair.

LK: talking about a process that will combine all the info to make a report or part of an editor. if we go w/that model, then can talk about the variety of files, then suggestions about what some processors might do. re: concern about interactive rather than once finished: the raw issues list could be updated every time you completed or added another object. person always looks there, checks for alerts. this framework should cover people's concerns.

jg: can this be used to say, "this is a good way to do this"? can i use this to search the web to find good examples?

dd: yes, positive assertion. right, don't want to call it an "error" - it's a statement about a way of doing something.

jg: if building functionality. collection of 20 or 30 elements i can look for that.

hb: more than pointing to techniques doc.

<Geeze> real Web examples *become* the "techniques document".

al: chief thing that i hear william saying: we need to promote documentation of web base and quickly!

alg: now for myself: this is a tech assessment of what you have against w3c recommendations. this output is the result of applying some method to some fragment. you have data of output. this data is the result of applying criteria to something. this is what rdf does. what rdf doesn't do is what dave asked for: defn of primitive tests. the pattern that defines a checkpoint is outside the capability of rdf. tomorrow i will talk about pf interface w/ query group. nothing in w3c recommendations is the normal form for writing these. go ahead in prolog or whatever we need to write the low-level traps. all we can do in rdf today is refer to a test by name. we'll create a namespace for names of tests.

<Geeze> but "the pattern that defines a checkpoint" is not outside the capability of the author, who can make an assertion in good faith.

alg: the point is: the database that we build should accept self-evaluation assertions from the author.

DP can come from anywhere.

dd: give an example rdf can't do but query can.

al: doesn't have logical variable. has to refer to concrete instance.

DP: schematron does what rdf doesn't.

cmn: can't make an inference.

<Geeze> author can make an inference.

cmn: you can say "man is mortal" and "socrates is man" but can't infer "socrates is mortal"

al: my vision has to do w/ subprogram call interface and notion of formal variable in calling prog. this is like "x" in forall(x) in logic languages. can't hypothesize entities, only concrete. tim's briefing of architecture of semantic web. but that info is member private.

dd: question of scope. rep heuristics? or rep adl? that's dave's question. on that boundary diff in technology base. if app of rdf, then recognise larger scope. no w3c precedent to say how to do

len: can say this img doesn't have alt-text. can take those 2 statements and infer a violation.

al: believe yes.

len: use RDF for everything, but have own custom engine to bring things together. a custom logic engine.

al: wouldn't say "custom" but tool that performs logic must understand both rdf and what we add. simply understanding rdf doesn't provide everything.
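The point Al and Len are circling can be sketched concretely: RDF stores concrete triples, and the entailment has to come from a logic layer added on top. The vocabulary and the single hard-coded rule here are purely illustrative:

```python
# Sketch: concrete triples plus a minimal forward-chaining rule engine.
# Predicate names and the WCAG-1.1 rule are hypothetical illustrations.

triples = {
    ("img42", "type", "IMG"),
    ("img42", "lacks", "alt"),
}

def apply_rules(triples):
    """One rule: an IMG that lacks alt text violates checkpoint 1.1.
    The store alone never contains this conclusion; the rule derives it."""
    derived = set(triples)
    for s, p, o in triples:
        if p == "type" and o == "IMG" and (s, "lacks", "alt") in triples:
            derived.add((s, "violates", "WCAG-1.1"))
    return derived

print(("img42", "violates", "WCAG-1.1") in apply_rules(triples))  # True
```

Querying the raw store for the "violates" triple finds nothing; only the rule-applying tool (Len's "custom logic engine") produces it, which is why understanding RDF alone is not enough.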

dd: tools like WAVE have intelligence in the tool.

<Geeze> a multiplicity of such tools wouldn't do us any harm!

dd: we need something stronger than rdf so WAVE can generate ADL. consensus of scope? is scope the format or the logic process?

dave (has also drawn some circles & lines on the board) app makes ADL output. both machine and human processable. doesn't deal with formatting or to scope the priority engine. which should integrate standards, other prefs, and user prefs. priority engine takes as input: standards, user prefs, and other prefs. the app takes priorities from that engine as well as tests. plus a document. the app outputs report (adl) and report (human). the adl is input into repair tool.

tim: we want to codify inputs into testing tools.

<Geeze> sounds a bit like Sean's app?

dave: that will do it.

tim: that's impossible. not a bad idea, but issue of trust that not inserting into system. i have 10's of 1,000's of tests. we might be interested in what my inputs are; the industry is trusting me to make that for them. as ref. for output, it is enough for you to ref me. it's like a certificate authority. i say this is accessible, then i'm responsible for that claim.

lk: you're lucky. whenever i say that something is ! accessible, people say "really a problem?" "how do you know?" i need traceability.

dp: tim, are you saying you deliver in standard format the results of your tests?

tim: yes. here is how testing and these are the tolerances.

<Al> Wm:  What do you mean "Sean's Ap?"  He proposed a format.  Is there a processor that goes with it?

tim: if you want to do the research on this, to codify that, ...

dp: thinking much higher. not that level.

<Geeze> "traceability" is a form of centralization that mattereth not. If the movie "gets two thumbs up" I'll buy a ticket.

tim: map to AERT 1.1.1.A, "alt-text bad if have filename" and no further. that level.

cmn: "the machine said so" good traceability.

<Geeze> Sean has a program on line that does a bit of what's being talked about here, I'll find a URI later.

cmn: describing how online tool works. http://www.w3.org/1999/11/11-WWWProposal/atagdemo.html

LK: get most argument from subjective which can't formalize in schema.

cmn can formalize anything.

al: depending on schema lang.

tim: scope: dependency on tests. idea of tests and subtests.

<Geeze> al: one of sean's thingies is at http://xhtml.waptechinfo.com/xhtmltordf/

tim: a way in rdf to say levels of tests: meets checkpoint 1.1, but how.

cmn: can sum them up.

al: talking about single tree of tests. in cmn's comment there are 3 trees.

dd: do we have a list of URIs whose granularity is good enough?

WC not yet. wrote up ideas last week

dd: need to assess goodness of doing something (methods)

cmn: granularity is test suite. for ATAG-TECHS we have draft test suite. AERT is sort of test suite. b/c of model, could say best test suite ref is statement of checkpoint in wcag. method is: check by hand. other cases: automated or semi-auto tests to gather.

dp: i want my own root as well as the w3c.

<Geeze> Right on, Dave!

[12:28] > lunch

<Geeze> I have this strong image of Charles sitting in this lifeboat labelled "RDF" as he tries to pull everyone into it from the WWW Sea.

<Geeze> One of the main characteristics of humans is their role as little islands of negative entropy constantly creating order from chaos and that's just what we are about here.

<Geeze> It's not just about accessibility. What ADL (EDL?) will do is be the "electronic curbcut" that makes all this accessibility the route for everybody to have a truly "Semantic Web"

<Geeze> All the considerations about what machines can do vs. what's required of the author underline the other buzzword that so nicely describes what's needed: a "Web of Trust".

[13:00] <Geeze> Al's summation of what I had in my mind was masterful - especially the "Quickly!" part.

[13:00] <Geeze> Now is the time. This is the way.

[back from lunch]

LK Do we want to have XML applicable to any language, even if it isn't an XML language?

HB Some of those are closed - you can't tamper without breaking

WC You can't even look at some of it

DP Restrict acceptance for output formats. We don't want to present XML to the user - we don't want to output XML to humans

AG isn't the question what we can refer to?

LK Yes

LK: Wendy and people have been thinking about problems in XML that would result in problems for the

WC We want to be able to look at the accessibility of languages, as well as the content written in the languages

DD We would have a system that would work for anything that can be referred to by a URI. Anything outside that isn't on the Web

LK Anything can be referred to by a URI

AG It can be referred to. So it can be talked about.

DD If there is a javascript and one instruction is a problem, how do you refer to it? count the line? Is there a reference model for anything? How useful is that if it is not understood by others.

AG As long as it can be understood by ADL it is OK.

CMN This is the issue of canonicalization of non-XML

DD Meaning?

CMN Meaning an agreed way of pointing into non-XML. e.g. bad HTML converted by tidy. That's an algorithm that is defined (and implemented). So what we mean is "the result of running this through tidy".


LK So it applies to each language, but it is necessary to figure out a canonical scheme to reference it

WC Think so. we need something that can make canonical references to javascript

LK Are these in the scope of this project?

WC I think it would be interesting to look at this for javascript. That would give us an implementation. Then we have shown proof of concept.

LK We have ECMAscript, broken HTML, XML (that is trivial) Should we try to get something on flash?

MW Not currently...

DD If you want to assess ATAG compliance, what you are looking at is a program not a language. Either you consider Frontpage as an object with an ID, or maybe it has modules so you say "this piece of frontpage is not compliant". Generally you have two entities: the subject of the evaluation (maybe a page, or a program, or a coke bottle) and the other piece is the object - the reference module you are using for evaluation - AERT, ATAG checkpoints, SSB assessment tests, ... we need to cover all those cases

LK Would we define properties that someone can say "this table lamp is UL approved"?

CMN The properties we came up with, cover that

WC For the case of a function of an authoring tool?

CMN We defined a property for an authoring tool. We have a partial database of tools, and you can make assertions about a thing or a collection of those things. To use a part of a tool you need a scheme for breaking up and referencing part of the tool. We didn't do that

DP Something that has mostly ASCII - can you assess that?

CMN RDF is XML. XML uses Unicode. So you get to deal with nearly all characters...

DP So requirement is to deal with plain text information.

WC There are things like that.

DP How to address character encodings? A repair could be to switch to Unicode

CMN an alternative is to define an XML translation for specific plain text document types (e.g. RFC) which have known characteristics

AG You don't want that if you want to repair them. You want to be able to index into the broken text and be able to repair it.

CMN The requirement to make it work for repair would be to have two-way transformation algorithm. Mark up, address it, repair it if required, transform back to original format.
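CMN's two-way transformation could be sketched as follows. This is a minimal illustration, assuming plain text with no markup characters; the function names and the `<line n="…">` wrapper element are invented for illustration, not part of any defined canonicalization.

```python
import re

def to_marked_up(text):
    # Wrap each line of a plain-text document in an addressable element,
    # so a pointer like 'line 2' has something concrete to refer to.
    lines = text.split("\n")
    body = "\n".join(
        f'<line n="{i}">{line}</line>' for i, line in enumerate(lines, 1)
    )
    return f"<doc>\n{body}\n</doc>"

def from_marked_up(doc):
    # Reverse the transformation, recovering the original plain text.
    return "\n".join(re.findall(r'<line n="\d+">(.*?)</line>', doc))

original = "first line\nsecond line"
marked = to_marked_up(original)
# The round trip must hold for repair to be possible: mark up,
# address and repair via the markup, then transform back.
assert from_marked_up(marked) == original
```

The point of the sketch is only that the transformation must be invertible; a real scheme would also have to handle characters that are special in XML.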

LK Scope. Any language (in theory)?

DD Anything that is on the web. anything with a URI

WC Authoring tools may not be on the web

DD We work on the detail for accessibility in that framework

DP Do we mean programming language or natural language?

AG Natural language

DP is that a sub-type of text file?

LK There is "any language", in theory. If we build it so it can handle any language, which ones do we actually work on?

WC An authoring tool isn't "on the web". We can specifically say "any language, web tools, ..." Basically anything covered by WCAG ATAG UAAG

AG There are those three things

DD That's fine. The charter of WAI is to cover anything that is used to browse / author on the web.

WC Or used to put something on the web

DD Where is a counter example

WC I think we all agree.

DD If it has a unique ID it is on the web

AG I disagree that it has to have a URI already.

DD I agree. If you consider the CSS test suite there are several tests grouped under one URI. The padding test has 5 tests with the same URI. To assess a user agent against it you need to create a URL for it. If I want to say that Mozilla fails CSS1 box property - test 5, I don't have a URI for that but I will create one. I create a URI, then I can use that.

LK Are we in agreement with what we want to include?

SM We had talked about dynamically generated things. They are things you want to evaluate. Maybe you want to restrict it to how it evaluates to the guidelines, but there may be other examples

HB There is a large class of things that have URIs that don't have a description available. It is only somewhere else that we get a description of them. It disturbs me that anything with a URI is subject to our evaluation
DD What's the problem? I can evaluate them against a reference system - for example is it a valid PNG? We have a generic framework. Both the subject and the tests can be varied. We want to work on the accessibility semantics, because that is our role, but leave open the possibility to use other systems / test

TS There is a question of what is a URI. How does a file, a hard drive, and a URI translate? There is no value for me in having a URI. If I have a java file and give a line number I can start there. How is a URI a necessary framework? It makes more sense to say this is how I interact with a pointer. So people can go write an engine to point into CSS

WC That answers my question about an authoring tool. A URI is a good way to label web documents, but for an authoring tool we would have to make them up - or have some other descriptor - some way to identify it, ideally machine-readable. I think an identifier is what is required. With javascript we need a way to identify fragments.

LK Can we put aside a question of URI and see if we are agreed on the class of things we want this to apply to first?

DP Crudest one I would like to use is a document.

WC Web Content

DD Product is another thing. We should put tools into scope. Are we agreed that we want to do tools, web content, things that a server reads (e.g. PHP intermediate processing)?

WC Schemas and DTDs. If you look at PHP and that, are we looking into databases as well?


DP suggest we exclude that class of beast.

TS Goes back to the functionality question. It would be super useful to be able to look into those things

SM Maybe look at the guidelines/checkpoints that relate to dynamically generated content. For example, look at an image fetched by script and see if it also fetches a text equivalent. There is a lot of stuff in ASP, JSP, CFM, ... - maybe be specific and it would be more manageable.

WC It is basically an authoring tool

AG or the technology no longer supports partitioning the world into those things.

TS is there a framework in place where we can say "here is our generic means for doing it" and people can adopt the model and their engine deals with a pointer mechanism The default that makes sense is a URI. I could then make something that points into PHP myself...

DD Requirement for scope should be kept in mind as we design a system so we don't close the door, but focus on what you need as a toolmaker. discussion is to be aware that we might want to use the description for a larger scope, but this working group should focus on web content first

TS I agree. That would be a start

AG To index into PHP you need something else. If we divide into cream and skim milk, you would like a report that says the problem is here in the PHP. We have standard indexing into XML. We don't have a standard way of fabricating that for programming languages.

WC Right. ECMAscript could be a proof of concept.

CMN 2 things. There are well-defined serialization for a lot of programming languages already And we should spend our time on building an initial tool and seeing where we are at - we could spend a long time not getting anywhere because we can't solve all problems at once

DP Good start targets are XML, poor HTML, and ECMAscript

LK We make sure we don't prevent ourselves from doing this, but our initial focus is those three

HB Poor HTML. If we tidy it, I don't think it is reversible.

<Geeze> I just put toothpaste back into the tube and unscrambled an egg so I should be able to untidy that poor HTML?

CMN Poor HTML and tidied XHTML result are in same domain - one has been repaired somewhat.

JG Is it a requirement for invalid HTML?

DP The stuff you find on the web today.

CMN Invalid HTML is a subset of poor HTML

LK We are figuring out how. Can we close off whether we will? So we want to do the work for XML, lousy HTML, and ECMAscript.

<Geeze> The fact we're figuring out how presumes we've agreed on whether?

TS I propose CSS.

Resolved: Ultimate scope: tools, web content, programming languages used to generate it

DD Do we want to call it EDL?

LK 3 minutes

DD Don't care, but idea of more general would be good.

<Geeze> We're going to "evaluate" Prolog and Lisp? Whoopee!

Resolved: we will call it EDL

<Al> Bill, what this says is that the output of the KGV evaluating XHTML 1.3 validity is one of the things EDL may describe. Ultimate goals include schemas, DTDs, databases

Resolved: we want to start by publishing something

DP concerned of the scope of work. Does it cover input side?

<Geeze> How about "accessibility features of RDF" parallel to SVG/XML/CSS/SMIL papers of the same title?

CMN meaning?

dp: input side is test definitions. output side is EDL content, and delivery of that to people

DD We are saying that the programmatic system that leads to an EDL is out of scope - it is up to the tool to decide what kind of knowledge base to use. eg are they using AERT?

DP That diminishes the value

<Geeze> then we won't have to pay a VAT?

CMN I thought that we had just agreed that we would work on 3 specific input types - HTML-rubbish, XML, and Javascript, against some kind of scheme (My underlying assumption was AERT)

AG does that assumption underlie the consensus?

DD Yes, WC Yes

<Geeze> *YES*!

LK We make this stuff, and say "and furthermore, we recommend this file be used for..." Would that make you happier?

DP Yes

CMN agrees

WC doesn't that get into inferences and how to process them?

JG Are we going to talk about accessibility, or other uses too?

WC Once we can show it we can get other people to use it for other stuff.

<Geeze> The ADL subset of EDL

LK what would be one or two initial things to do with it?

DP Feed it to a repair tool

WC Feed it to insight...

LK Right now Wave displays a test. If it includes a test something else makes...

AG Can you consolidate a report with the metachecker

WC Some tools may not do any repair

LK Get some summary human readable report. As far as feeding it into repair...?

DP Suggest one is a variant on the other. One is presented for people. The other may have different content and be targeted at a machine interface.

DP the raw output is ugly for machines, transform it to a pretty thing for a user to read

LK We want one tool to use it to help the user make a repair. Is it useful to make use of this, or would it make sense to plug this stuff right into Dreamweaver?

SM We have a JavaScript API on Dreamweaver - thinking about how to use this to look at the code, and then use it.

DD Sounds like a tool would read an EDL, step through the problems, and help the user solve them.

WC Some of the examples of specialized tools - determining if the alt text is useful for an image - you use it as a transfer format when you run that plugin.

DD The intelligence to read EDL is less than to write it.

LK If all EDL did is say "you have missing alt text" it would be just as easy to do it another way

DD But a human report that says "the alt text is no good" - that is helpful for the tool to pick up on and help the user deal with

CMN Machine holds the repair functions, user can do fixing, and EDL says which things need to be worked on

TS Piloted fixing - this is what we do. This isn't a tough thing to do.

HB Do you recheck everything? Or assume original observations were independent

TS At the moment we check all, but I would rather have it working via EDL

DP Like a spell checker ignore function.

TS In industry the QA people can export the info back to the developer and get them to fix the problem

DD Success criteria? Define a format. Have tools output the format. Tools read the format and provide human output. wizard that reads the format and provides guided repair

CMN Add "read EDL and auto-repair" - e.g. validating HTML...

DD ADL part should include testing method in the specific case of accessibility evaluation.

LK I think Phill Jenkins (IBM) had more in mind. Here is some usability testing we did, here is the steps we did, ... test cases for a particular application.

CMN Give them a URI and I will give you some RDF that says what happened (met or failed)

WC I envisaged this with WCAG 2.0 having technology-specific checkpoints. So you conform to a requirement specific to your language (e.g. HTML) with a number of requirements.

DD There will be a set of URIs that we can use to point to as tests?

WC Yes.

DD Do we have anything that could help identify the method? Do we have a schema that says "auto checking, manual checking"? Do we need it?

DP Up to the implementor

WC We have been working on that - looking at tests you can do.

DD That is AERT?

WC It doesn't live anywhere. That would be best in WCAG.

DP I can't see the requirements in initial scope working for filling in forms, etc.

DD What was meant?

LK Phill was saying we do user testing - I want to document that if I click Edit -> Find -> enter some text, and I get something inaccessible.

DP If you think what is involved with that, the DOM that XML relies on is inadequate for that task.

LK Right. This is describing the behaviour of a human testing a system.

WC maybe that belongs in ultimate scope not initial scope.

AG this makes manual checks incrementally more concrete.

<Geeze> So by evaluating output vs. tool used we are actually evaluating the author?

<Geeze> This person who logged in as "love26" never even used this feature?

CMN shows http://www.w3.org/WAI/AU/WD-ATAG10-EVAL-20000913/

WC There are tools that use eye-tracking and predict how people track through web pages.


LK Issue - include output?

<Geeze> using a rubber clock?

TS I suggested it. The impetus was that the report should be transferable, but I don't know if that is applicable anymore. I don't think it is as important anymore. It is purely derivative.

DD SO no longer a requirement?

Resolved: Not a requirement.

Combining results of different tools

DD Built in. If you can say which method was used, then you can combine them

DP I think this is where my scoping question comes in. If I don't have information about the tests and the tester I can't derive that

DD You will.

DP We are starting to put requirements on the application

TS I think it is the resolution with respect to the source document that says where the tests are

DP That is a definition of a test

CMN I don't think you will, I think that you may and the system will still work. So long as the system supports having that information I think it is OK

DP The requirement should be that the results include the source of the test and the authority of the test itself

HB e.g. the human that made the judgement

TS The attribution should be the auto-test or the human who judged it

DD draws diagram

{subject of eval} --[the evaluation]--> {thing}

{description of test} --[description]--> {thing} <--[ ]--{test}

{SSB}--[test judge]--> {test} <--[method]-- {some test algorithm}

<Geeze> EDL includes a spot for who/when so the info as to who made the judgement inheres?

DP This says "A tool that SSB wrote says 'X'"

TS This is part of RDF - I make a set of assertions. "SSB says this alt is OK" and "a smarter person says it isn't"

<Geeze> "A tool that SSB wrote and was used by "Y" at (time) says "X". etc.
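The assertion model TS and WL describe could be sketched as follows, using plain (subject, property, value, who) tuples instead of RDF syntax. All URIs, property names, and tool names below are invented for illustration, not part of any EDL schema.

```python
# Each assertion keeps its attribution, so conflicting judgements
# coexist instead of silently overwriting each other.
assertions = [
    ("http://example.org/page#img1", "altTextOk", True,  "SSB-tool"),
    ("http://example.org/page#img1", "altTextOk", False, "human-reviewer"),
]

def judgements(subject, prop):
    # Collect every assertion about one subject/property pair,
    # preserving who said what.
    return [(who, value) for s, p, value, who in assertions
            if s == subject and p == prop]

conflict = judgements("http://example.org/page#img1", "altTextOk")
# Both opinions survive; deciding which to trust is a policy question
# for the consuming application, not for the assertion format itself.
assert len(conflict) == 2
```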

DD There is a human per test?

CMN for a set of evaluations, you can attribute them all to a person and state that the person making assertions about manual tests did them

HB What happens if there is a conflict?

<Geeze> per EDL tool use instance.

TS Is there a way to resolve who said what?

DD That is the real meat of EDL. The same framework can be used to evaluate a SMIL browser.

DP I think what you have there is more detailed. What you are talking about is a higher level. If you wrap up a group of tests I would be happy. An individual test consists of a URI, what, and how.

<Geeze> authoring = browsing = evaluating =

DD The schema will provide slots for saying what, how, ... and we have to come up with defaults - the person making the overall statement is the default for who did it, and so on.

<Geeze> whoever logged onto EDL assumes responsibility

DD So we need to come up with IDs for the tools that are used to do tests, and a scheme for them

LK There are rules for assigning priority to tests. There are compliance assertions that depend on them. Within the scope of what we are doing do we want to define them?

TS I think you can point to that with the description of the test

DP the priorities are only relative within a group of tests.

<Geeze> No, because the "regulations" might change but the "evaluation" won't. For a particular user, test X has priority A, but for another user it has priority B. A user can feed in their priority scheme. But that is out of scope.

LK Do we want to make that part of the scope? If not we have a log file but no conclusion.

TS Doesn't it make sense for that to be the description of the test? argument about the shape of the scheme and which bit goes where...

<Geeze> <lang es>no me importa</lang>

LK there is a case for being able to give a level of confidence to a test result.

<Geeze> 3.7 stars?

LK yeah. For alt text... it might be woeful, just not very good, or really rather fine...

DD priority of a test is not defined in EDL but it is linked.

DP My organization has combined a bunch of different tests. Chaals has a different set. for my organization... they may demand a particular set of requirements

<Geeze> And some will use the Legion of Decency Index to find out which movies to go to rather than which to skip.

CMN In the confidence rating?

DP In marking scheme

CMN This doesn't define a marking scheme. just a lot of yes/no factoids

DD We are doing the same test, but for WAI it is critical, for dave it is nice to have. So priority setting is outside.

LK Need a piece that says how to combine these outside priorities...

<Geeze> That's called "CDL"

LK Do we want to point to a statement?

AG The question is what is the way you assess 65%? you have sum(accessible nodes) / sum(present nodes) for example

<Geeze> In GPA it's below mediocre, in an election it's a landslide.

AG the filter that weights the tests is an organizational policy
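AG's two points could be sketched together: the raw ratio sum(accessible nodes) / sum(present nodes) is one possible metric, and the weights that turn factoids into a score come from an organizational policy outside the format. The weight values below are invented for illustration.

```python
def raw_score(results):
    # results: one boolean per tested node (accessible or not).
    # This is AG's sum(accessible nodes) / sum(present nodes).
    return sum(results) / len(results)

def weighted_score(results, weights):
    # Apply per-test weights supplied by an organizational policy;
    # the policy lives outside EDL, which only records the factoids.
    return sum(w for ok, w in zip(results, weights) if ok) / sum(weights)

results = [True, True, False, True]
print(raw_score(results))                    # 0.75
print(weighted_score(results, [1, 1, 5, 1])) # 0.375 - the failed test was critical
```

The same yes/no factoids yield very different scores under different policies, which is exactly why the group keeps the marking scheme out of the format itself.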

LK how you state the policy is outside the scope?

AG The way you state the policy depends on what we are doing, but doesn't sit inside it.

CMN e.g. you could write priority scheme in PICS

LK We are not going to say anything about the format of priority schemes...

AG they must be pointable from the rest

CMN you only have to point the priorities into the test.

AG wrong. You need to be able to point to where the problem was

CMN but not from in the EDL tests. From the pp, that does more

DP I just need to be able to say test 163 has priority 154. The application dealing with these can decide how to deal with the priorities

CMN There is not a requirement on this that tests are prioritized. That is another level up

<Geeze> Maybe Al can run this by Eric and I'll take it up with Kynn?

DD HTML validation. there are a number of tests. All auto.

DP There is that use case. There is also a website where I have a bunch of relative priorities that are for my organization. I will completely forget some things and insist on others.

<Geeze> The "Friday" principle: just the facts "ma'am" - what you make of them is up to you.

CMN That is an early stage development case. We should demo it using WAI priority scheme, but should keep it out of the basic tests

DP I agree it is out of the initial scope, but required for it to be commercially useful

LK You need to be able to deal with multiple URIs - e.g. two pages with inconsistent navigation...

DD You use a bag, which is a URI.

LK Why make it a URI?

<Geeze> [SBP] it's all about whether EDL contains stuff that isn't properly part of it.

DD When we are in the detail of RDF encoding we might find it is practical to have a list.

LK This is different.

<Sean> [WL] So they're discussing about making the scope for EDL even wider???

cmn: SBP nope.

<Sean> Whew.

<Geeze> [SBP:] Sort of. Some of them want such things as priority levels be inside EDL rather than using pointers.

cmn: nah, we seem to have decided we don't want that.

<Geeze> Whew

LK There are tests that intrinsically require two URIs

<Sean> Good, we should point to stuff from EDL, not include.

LK e.g. do text links match the image map? There are two URIs. The test has to point to the URIs of each piece.

<Geeze> DUH!

AG The definition of the test includes a template which may define multiple references into the thing under test. For the specific test there is a binding to particular pointers (multiple)

HB Conclusions may differ pointer to pointer

AG No. That is if there are multiple things to test. This is "are X and Y equivalent" - you need to name X and Y to make the test clear
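AG's template/binding idea could be sketched like this: the test definition declares how many references it needs, and a specific application of the test binds each slot to a concrete pointer. The test id, slot names, and URIs below are all invented for illustration.

```python
# The test definition: a template declaring the references it needs.
test_template = {
    "id": "text-links-match-image-map",
    "slots": ("image_map", "text_links"),  # this test needs two pointers
}

# A specific application of the test: every slot bound to a pointer
# into the thing under test.
binding = {
    "image_map":  "http://example.org/page.html#map1",
    "text_links": "http://example.org/page.html#navlinks",
}

# A one-pointer test would simply declare one slot; multiple pointers
# are only required when the template demands them.
assert set(binding) == set(test_template["slots"])
```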

<Sean> [WL:] Is there any consensus on what the EDL structure is going to be?

DD I don't want to force cases to use multiple pointers when it isn't always necessary

<Sean> If you use XLink having multiple links is very easy

<Geeze> [SBP:] I think it's whatever we (you 'n' me) make it.

LK so use a one-URI test where applicable, and create a complex node for test that need to refer to more than one thing. That's getting into the nitty gritty a lot.

<Sean> [WL:] I hope so. We seem to be moving away from the point rather a lot.

DP we've got the item we test, the test itself, part of the test i have results. al, you said something about the assertion.

dp can't remember bell that rang... subject/object does not capture it.

al: subject/object/result that's on the test app.

dp: ya.

DD draws diagram

{subject of eval} --[the evaluation]--> {thing}

{description of test} --[description]--> {thing} <--[ ]--{test}

{SSB}--[test judge]--> {test} <--[method]-- {some test algorithm}

hb: top level meta data about suite using.

cmn test id.

<SBP> The things that you have pasted in look very much like triples...

<wendy> yes, i suppose it is examples of the bubbles on the board.

dd: from the "desc of test" i want to point to a WAI document (e.g. AERT)

cmn priorities are diff than test. might be in same doc. id of test (refers to description of test)

dd: hb question is where find version of test.

cmn test id. it identifies a particular test by version. URI dated version.

dd: id identified version but version is in the test. from id find what version running.

cmn ya

lk: if 2 people run the test using diff versions, can't tell which is the more recent version, unless a human checks the doc.

tim: if the same tool reads the same test twice, replace the assertions.

cmn must explicitly state dated tests versions.

dd can have same statement twice. policy question. will tool bark.

dP: actually the app that generates real code could change. the test description could change while the app stays the same. then the test does not reflect the description.

cmn pick which to refer.

lk: could have a doc with so many tests. it changes. new version. 2 people apply the same test, not obvious they're getting diff results.

dp: messes w/quality.

lk: confuses humans. ideally put tests into a format where each one is wrapped in something, so we know if referring to same test.

tim: e.g. of test changing?

dp: corporate color changes from green to purple. on corp web site says "corp color is X". a test looks for a specific RGB value. series of tests passes, until the rule changes, then fails. at some point either abstract source test data or do something to tie them together. otherwise blow quality.

lk: therefore w3c needs atomic tests.

wc yes, idea for 2.0

dp: if remain frozen.

lk: if we have a separate URI for each test, a machine has a way to know that tests 1-7 changed and the rest did not.

cmn don't produce change list in machine readable format.

wc need schema for tests so machine can read.
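LK's point about per-test URIs could be sketched as a diff between two versions of a test document: once each atomic test has its own URI, a machine can tell which tests changed. The URIs and test wordings below are invented for illustration.

```python
# Two versions of a hypothetical test document, keyed by per-test URI.
v1 = {
    "http://example.org/tests#1": "images have alt text",
    "http://example.org/tests#2": "tables have headers",
}
v2 = {
    "http://example.org/tests#1": "images have a text equivalent",  # reworded
    "http://example.org/tests#2": "tables have headers",            # unchanged
}

# With stable per-test URIs the diff is trivial to compute by machine.
changed = [uri for uri in v1 if v1[uri] != v2.get(uri)]
# A result recorded against v1 of test #1 is no longer comparable to
# one recorded against v2; results for unchanged tests stay comparable.
assert changed == ["http://example.org/tests#1"]
```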

dP scope!!!!

lk we've generated work for other groups.

* Sean agrees with Wendy. Need to produce Schemas.

<Geeze> we've generated work for the entire W3C

ag: if generating engine that ...

<Sean> [GL:] Too much work, the scope is much too wide


ag: cmn saying we're generating more work for someone else.

cmn: not enough for them to be committed to yet. we are going to build something, and until we have built that we don't have something for them to commit to.

dp: each test, when in use, shall have a stable URI (with help from cmn)

dp: the content has to be static, and its URI as well.

<Al> Sean: the graphs on the board have more than three nodes in them. Maybe we could reverse engineer these graphs into a graph grammar of triples...(reply from before you fell off) or you pull it into a local copy.?

<Sean> Yes. That can be done. I have models that use an odd number of nodes in them

lk: if requirements document wrap in a span or div, if sufficient struct take 2 docs, do a diff...

<Sean> mainly from my XHTML to RDF thing. SiRPAC uses genIDs to fix it.

lk: can say which requirements changed which didn't.

dP: not quality.

<Sean> Al: Is that what you mean?

DP: diff set of tests.

tim: only see using this to transport data between eval and repair. if want static...

dp: yes, content needs to remain fixed.

lk as a procedure: if diff people judging have them use the same version. otherwise diff opinions could be caused by diff reqs.

dd: i am concerned about the solidity of the model wrt outside changes, and factorization. if we generate docs that are 2 megs to express facts, that's a problem. maybe when we look at rdf detail we can find a way to express it. at some point have a ref that something is missing.

<Sean> They mean that the EDL system will generate a lot of garbage

dd: how do we cope with adding EDL to an existing file... versioning should be brought inside, so we can make management easier.

<Sean> But if you point to stable definitions (i.e. Schemas) as URIs, you save a lot of the descriptive waste

dd ya

next steps

dd: we need to assign someone to encode the model in rdf w/real vocab. i need to look at it from a conformance point of view.

<Geeze> wonder who?

<Sean> :-)

dd, lk, cmn, wc

dd: assign someone to revise the EDL document.

(sean's summary)

dp: request that scope be included.

ag: expand pls.

dp: scope of work w/very clear boundaries need to be explicitly stated.

wc got it.

<wendy> sean? you gonna revise your summary?

<Sean> Yes, fine!

<Geeze> We've made ADL into EDL and this is a "good thing" IMO. What it promises is that the group meeting tomorrow will have to realize that they will be lobbying heavily with other W3C entities to acknowledge/advise/help in all this.

$Date: 2000/12/11 22:29:02 $