Minutes 18 December 2000 ERT Telecon

Summary of action items and resolutions

Participants

Regrets

Misc.

Action Daniel: Talk with Sean to find out how we could help with the phone issue.

Agenda

Posted 15 December 2000 by Len Kasday

User Scenarios

LK User Scenarios. First scenario: the user is a person writing an evaluation of a site; a third-party evaluator. /* Len reads from the User Scenarios page */ Item 1.

WL Similar to what people like Kelly Ford do. This would regularize the process.

AG One pod in a scenario with 3 activities.

LK Item 2: combine output.

HB What are the implications of running the tools in the same presentation environment, with the same user preferences set in the user agent? Is that part of the evaluation? Running with Lynx, one may have different reactions and propose different repairs.

LK Are you saying that when you run a tool, there would be setup parameters like "assume this capability in the browser." This relates to a WCAG discussion.

HB My suggestion is that we need to capture the user's environment from which they are making the observations. Even how they are using the tools.

BM The test tool should always put the relevant info into the description.

WL Yes, and that the environment is relevant.

BM For later analysis? As long as that is recorded, then it is up to the combiner tool to bring it together.

AG A genuine quality factor for the data. How much of that environment is capturable.

BM Need more examples of what is relevant.

AG Good to capture and good to automate.

BM Yes.

AG But that is a separate issue from combining input from different tools.

LK HB, could you give examples of a couple of different environments?

HB No specific cases. I'm concerned that Gregory running a tool might have different insights that we would not get if we're not using the same ATs that he is.

AG Why people put different text in alt-text: someone using Lynx versus someone using Netscape. Lynx inlines the alt text while Netscape puts it in a box; in Netscape, if you click on the text you get the image. Therefore in Netscape you put a description of the image, while Lynx causes people to write functional text. Therefore, it helps to automate as much as possible. You might want info about the tools that were used and the settings.
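
For illustration, an evaluation record could carry the environment info HB is asking for. A minimal sketch, using invented element names (nothing below is settled EDL syntax):

    <evaluation>
      <environment>
        <!-- user agent, assistive technology, and tool in use
             when the observation was made (hypothetical vocabulary) -->
        <userAgent name="Lynx" version="2.8"/>
        <assistiveTech name="JAWS" version="3.5"/>
        <tool name="WAVE"/>
      </environment>
      <result checkpoint="WCAG 1.1">fail</result>
    </evaluation>

A combiner tool could then group or weight results by environment when merging reports.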

LK Another aspect: when you run a tool, it identifies tests that were not passed. E.g., WAVE does fewer tests than Bobby or A-Prompt, so it would be useful to have an indication of the tool's scope, so you know what it could have potentially checked.

HB That's a profile of the tool.

LK That would be a subcase of what HB is bringing up. That will give info about the environment.
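
Such a profile might simply enumerate the checks a tool is able to perform, so a combiner can distinguish "not checked" from "passed". A hypothetical sketch (the check identifiers are made up):

    <toolProfile name="WAVE">
      <!-- the checks this tool could have run; anything outside
           this list was out of scope, not passed -->
      <canCheck ref="AERT-1.1.1"/>
      <canCheck ref="AERT-7.1.1"/>
    </toolProfile>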

AG This relates to my new scenario: "3 box nature" and "examples". How do you recognize pages where the rule applies? Can we come up with an XML query that will bring back a good example? There is a database that, for a given WCAG or AERT reference, can give you samples. It gives evidence of how A transforms gracefully and how B does not.
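
Assuming the database tags each sample with the checkpoint it illustrates, the query could be as simple as an XPath expression over hypothetical markup:

    <!-- one entry in the hypothetical examples database -->
    <sample checkpoint="WCAG-1.1" quality="good">
      <uri>http://example.org/gallery.html</uri>
      <note>Alt text transforms gracefully under Lynx.</note>
    </sample>

    query: //sample[@checkpoint='WCAG-1.1' and @quality='good']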

LK How does this fit into EDL?

AG It's a record of an evaluation. This is a reference database that a webmaster uses when someone complains. To answer "what do I do about it?"

WC This is something WCAG is working on. Are you saying that it should use EDL to express these?

AG It is one example.

LK All EDL does is point to the checkpoint, not explicitly to the example. Are you saying that EDL should point directly to the example?

AG You're operating from an assumption of scope. I'm saying that what WCAG is doing with the database is an application.

LK I.e., stop thinking about EDL scenarios; these are user scenarios. An EDL plays a part, but is not the whole picture.

AG What is the value added? That's what we're trying to answer.

LK It may turn out that no additional functionality is needed to do this, but a tool that uses EDL could follow a second-order pointer. That's one implementation.
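
In that implementation the EDL record itself keeps pointing only at the checkpoint; the tool dereferences a second pointer from the checkpoint entry to its examples. A hypothetical sketch:

    <!-- EDL record: first-order pointer only -->
    <result checkpointRef="WCAG-1.1">fail</result>

    <!-- checkpoint entry elsewhere: second-order pointer to an example -->
    <checkpoint id="WCAG-1.1">
      <exampleRef uri="http://example.org/samples/alt-text.html"/>
    </checkpoint>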

WL I've been working on "How people with disabilities use the web." It points into the curriculum. It has demonstrations of just about every checkpoint. It is easy to reach into. How many other things like that exist?

WC WCAG is working on a database of techniques.

AG That is a natural area for WCAG to call on ERT. The concept of the database is like the curriculum. We've got XML Schema coming up. The piece that keeps me interested is: how do I go out on the Web to find pages similar to what this rule is talking about? How do I find comparable pages?

HB Some tool was sniffing through lots of pages. It could address some of our issues.

AG That logic could be used differently. The WCAG database is an example of what I'm saying.

WC Clear that WCAG is a user of this. What else do we need to say now?

AG Content developers and web experts are not the same people. The third activity is a hands-on evaluation by someone with a disability. Those are three activities. What are the communication mechanisms between them? The content developer is not likely to look for examples, but the webmaster could provide them.

Action LK: change the intro of user scenarios to broaden scope.

Action LK: Add a scenario related to examples.

Action LK: Add a scenario where a person is doing hands-on, usability testing.

AG Mike Paciello has convinced Fidelity to make their site accessible.

Action AG: send discussion of the overlap between the database WCAG is working on and the EDL work.

WL Has anyone talked with anyone in the W3C geek-world about what we're doing?

AG PF.

LK Daniel will go to the Amaya folks about whether they would input/output EDL.

DD Yes I can ask. Although it's not an evaluation tool.

AG But it's an annotation tool.

WL The inclusion of these features in an authoring tool is useful.

LK Once we do that the whole thing shifts to AU.

AG We're close to where we were a couple of years ago, when we talked about pointing into a broken document. This is saying one of our capabilities is a universal diff, so that people could ingest changes. Then the user could see the proposed "after" and accept it or not.
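
One way to picture the "universal diff": a repair record pairs a pointer into the broken document with proposed replacement content, which the user accepts or rejects. A hypothetical sketch (the path syntax is illustrative):

    <repair>
      <!-- where the broken construct lives -->
      <locus path="/html/body/img[3]/@alt"/>
      <before/>  <!-- attribute currently missing -->
      <proposedAfter>Company logo</proposedAfter>
      <!-- the user sees the proposed "after" and accepts or not -->
    </repair>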

LK Do the annotation capabilities include a diff capability?

WC Not needed; annotations are not changing anything.

Al's OOP proposal/example

AG The basic relationship is a generic-to-specific relationship: a generic evaluation method that has been applied. We are recording the application of some evaluation method. The specific evaluation has a specific subject (what is being evaluated); other reference info says how it was evaluated and what conclusions were reached.
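
A rough sketch of that generic-to-specific shape, with invented element names (this is not Al's actual pseudo code):

    <!-- generic: the method, defined once -->
    <testDefinition id="alt-text-present">
      <description>Every IMG element has an alt attribute.</description>
    </testDefinition>

    <!-- specific: one recorded application of that method -->
    <evaluation appliesTest="alt-text-present">
      <subject uri="http://example.org/page.html"/>
      <conclusion>fail</conclusion>
    </evaluation>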

LK What we see here (the pseudo code), does this come up for each test? Or is there a header?

AG This is info: you query a test, and get info back. If you have 3000 errors of 17 kinds, in the report language you build structures that incorporate references to "this is the page I was on," etc. In terms of info requirements: this was not in compliance.

LK You could say "AG says these things," or "AG says A," "AG says B"; informationally they are the same.

WC Test definition: does it get at how we were pointing into applications, like the authoring tool?

AG It could be a natural English description of how to get the circumstance, or a pointer into a son-of-AERT database containing queries and transforms that describe a technique. The reference is to something, and it has a broad range of types. The only thing defined is the role and its relationship to what we add.

WC Inheritance?

AG I've tried to sketch an info graph at the infoset level. If you make this an XML language, then when you point to a test definition you may be referencing either an ID that the author put in the XML or an auto-generated node ID that some XML-to-infoset algorithm generated; there is a numbering scheme in the infoset. Then you point to the evaluation record with this ID. We see this in SVG: the way you incorporate something is by reference, if someone thought about referencing that item. We prefer IDs over paths, but can use paths as well; that's inherited from the XML environment. It's just an abstract link. The test method has an abstract statement of input and output. The test method tells you the type of result: "go/no-go" or a prose paragraph. The core application of EDL doesn't define types of results; those are imported from test definitions.

WC You did not sketch that out.

AG Where I say "includeByRefinement" there is a generic-to-specific relationship between what is referred to and what is here. In the instance of an evaluation, the result must conform to the type definitions.

WC TestFormalRef vs. TestActualRef?

AG In the generic case, we don't know the identity of what is evaluated. Formal: a placeholder.

WC An example?

AG /* Compare VHDL: pins on a chip vs. places on the board; actual vs. formal references. */ The definition of a test has generic info about what is touched during the test; to complete the test you must identify those things. What was the actual viewer I was using to make this comment?

LK Give accessibility examples from the Web.

AG Actual (specific): point at an image element, ending in .gif, that flunked, referring to the actual string that is the attribute of that specific reference. Formal (generic): a path through the syntax, the text which forms the "alt" attribute of any/some instance of the HTML element type IMG.
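
Putting that in hypothetical markup, with the formal reference as a syntactic pattern in the test definition and the actual reference binding it to one node in the evaluation:

    <!-- formal (generic), in the test definition:
         the alt attribute of any IMG -->
    <formalRef role="subject" pattern="//img/@alt"/>

    <!-- actual (specific), in the evaluation instance -->
    <actualRef role="subject"
               node="/html/body/img[@src='banner.gif']/@alt"/>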

LK EDL would not give go/no-go; that would be inherited from the test definition. What if the test definition is written in prose, in something that is not EDL? How would inheritance work?

AG That's a type. We may develop additional vocabulary. If you tried to reverse engineer AERT, this is a subtlety that escapes the manual; it could be part of the EDL core. That may be one built-in attribute. I see this as putting a couple of modules together; there is a core that integrates them. "Forget about doing this automatically" would be a suitable property to apply to that test description. If it's just natural-language prose, you could annotate it: "forget trying to analyze." Build a filter: how much manual evaluation am I willing to put in? Do one search for "what can I automate" and get results, then "what are the test methods," then filter according to other rules.
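
On that model, automatability is just a property of the test description that the first filtering pass can search on. A hypothetical sketch:

    <testDefinition id="alt-text-present" automatable="yes"/>
    <testDefinition id="alt-text-meaningful" automatable="no">
      <!-- natural-language prose only: "forget trying to analyze" -->
    </testDefinition>

    first pass: //testDefinition[@automatable='yes']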

WC So you are saying this is just how to pull the different pieces of the evaluation record together.

AG Look at what people are doing, not just what the format does. We're still looking for the boundary on scope. The key thing is: what do the different activities have to tell each other? Then you take the different scenarios, mix and match, and normalize. The experts are a separate community: ask librarians about Dublin Core; ask human factors people about why this fails. There are some things we define as glue to pull it all together. My OOP sketch is at the level of "how do we build something that supports the info map, to support communication between the scenarios?"

LK Do we have that captured in the minutes, or should AG post to the list?

Action AG: comment on minutes to make sure ideas captured appropriately.

HB Anytime one tests a site, one must test a particular URI. Many have changing content; therefore an evaluation at one point in time differs from one at a later point.

WL They are 2 different things.

HB But they have the same URI.

WL The analysis is of a particular instance of that.

AG May need to provide a copy of some dynamic sites.
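
That suggests the evaluation record should bind its subject to a date, and perhaps to an archived copy, rather than to a bare URI. A hypothetical sketch:

    <subject uri="http://example.org/news/"
             date="2000-12-18"
             snapshot="http://example.org/archive/news-20001218.html"/>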

WL Consider machine as user.

Action WL: propose user scenario where machine is user.

AG Some more thoughts: as we try to gather our minds around the task, the user scenarios are good for stating objectives. At the same time, it is good to survey which resources we can build on. Sean clearly has his finger on RDF. Daniel will look at Amaya for a prototype: what is the minimum to build for a working model? Charles's Prolog script is another example of something we could play with, to ask whether we have identified the functionality we need or might have missed something. What are the existing examples that come close, which we can benchmark requirements against? HB's issue sends me back to WART: what info do they capture?

Next meeting

Resolution: No meeting next week due to Christmas Day.

Action LK: confirm with AU that next joint meeting is 2 January.

AG To nominate something for the agenda: ask what their desires and constraints are regarding an evaluation description, such that they could use it. Likewise for a repair description.


$Date: 2000/12/19 00:24:24 $ Wendy Chisholm