
EOWG Minutes 25 July 2003 Meeting

on this page: attendees - outreach updates - Palo Alto review - user centered design - Dublin meeting - August meetings - next meeting

Action Items

Agenda

agenda in e-mail list archives: http://lists.w3.org/Archives/Public/w3c-wai-eo/2003JulSep/0026.html

Attendees

Regrets

Outreach Updates

DS: Radio program, 30 minutes twice a month on Sunday evenings, alternating with a program on disability issues.

CL: When will this begin airing?

DS: We are pushing for a September start, but maybe in August.

Report on EOWG meeting & training exchange in Palo Alto last week

Overview of Palo Alto meetings.

JB: Minutes will be available. We had a good turnout for all three days: 20 on Wednesday, 25-27 for Best Practices in Training on Thursday, and 75 for the Friday sessions, with numbers decreasing throughout the day.

SLH: 40 stayed for session after lunch ["open mic"]. 26 stayed for the last session [discussion of morning presentations].

JB: Wednesday: The EO WG meeting. We started with the social factors page of Building the Business Case.
ACTION ITEM: JB to put up revised social factors page.
Lots of talk about terminology, context-resetting reminders, etc., so there is now a fresh round of edits for that document. [Shawn posted a change log.] We extended discussion of this into a second morning session. In the afternoon we discussed User Centered Design, then discussed the Thursday meeting.

JB: Thursday: Best Practices Training Exchange: 9 people gave presentations; some were descriptions of what people did, and a few were real (portions of) training sessions - a good variety, which was what we wanted. There is now a link at the top of the Thursday agenda page to "Details" of who presented. It was quite a bit of fun to see the different styles, and there was good discussion after each presentation. Even people who had set styles felt there were things to be learned from others. We did get videotape of the sessions, but we have no explicit permission to do anything with it, so we will have to think about what to do with this material. Any questions or comments about Thursday?

PG: Question about Wendy's and Shawn's slides: will they be available on line? Wendy's are on line, I think.

SL: I have started putting things on line. Couldn't find Wendy's; I will look.
[shawn's presentation materials at http://www.w3.org/2003/Talks/0718-slh/ wendy's at http://www.w3.org/2003/Talks/0718-WAC-EO/]

JB: several will be put on line, but not necessarily all of them.

SL: we will link to them from the relevant page.

JB: Friday: The morning started with an intro from Michael Takemura of HP, I (Judy) gave an intro on WAI, and Shawn and Wendy gave training presentations. The afternoon was what we called an open-mic session, an evolved version of the "information exchange" concept: people from different organizations talked about various things - an informal session, really. After that we had a very interesting feedback session on the morning training sessions. Then we had a brainstorm on the "ideal" training package. Anyone want to add anything about Friday?

PG: thanks all of you. We are beginning training here on Monday and will be using the video Shawn showed (of Neal Ewers) (with French titles added).

JB: thanks to HP and Wells Fargo and Natasha, Doyle, etc.

JB: At some point we should consider how useful this format is to us. We got a lot of people thanking us for the open training session.

User Centered Design (UCD) of WAI Web site

JB: We have had some good discussions clarifying some issues, but more is needed. Shawn will take us through this.

SLH: We have talked several times about this process, so I won't talk about background now; I will be posting more information soon. I want now to focus on Usability Testing (UT). Generally UT can be a fairly formal process used to meet a few goals: 1. Evaluating the current site to find what's working, what's not working, and areas for improvement. We already have some ideas of what is wrong (and right), but this is another method. 2. Setting a benchmark, so that as we redesign we can see whether changes are improvements.

Another (very practical) reason for doing UT now is because an organization has volunteered in-kind support for this: AIR (American Institutes for Research) will be conducting the first round of usability testing. We have had some idea of using companies in other countries to get other perspectives.

I want to talk today primarily about who we want to be the sample users. An important element in UT is recruiting participants. We want to get the most representative, relevant users - a time-consuming process - and we want to provide guidance to AIR. Participants will come into a facility; AIR plans to use a Concord, Massachusetts facility that has an observation room behind one-way mirrors to observe users and can tape sessions. Participants fill out a pre-test questionnaire and are then given specific tasks to complete. Following task completion, users are given a post-test questionnaire and a debriefing to ask questions. We have a link to WAI Site usability test planning.

We discussed that it would be nice to have different participants do different tasks. I discussed this with AIR and it can be done; however, there are some negative aspects to having only a small number of participants do a specific task. E.g., if only one person does one task and has a problem, you don't know whether it is a systemic or personal problem. If all 8 participants do a task and one has a problem, then it is more obvious where the difficulty lies. Also, having more participants in a task gives better data for benchmarking improvements. We should definitely try to maximize the tasks all 8 participants complete, and have just a few that only a subset attempt.

SP: would all people be using same environment (e.g. Assistive technology (AT), PC, Operating system, browser, etc.)?

SLH: Not sure - since this is a donated service we can certainly request but not necessarily dictate parameters. I am not sure what AT AIR has on site, and I don't think they have budgeted to do offsite tests in personal environments. We have asked that they include at least one participant who is blind and uses a screen reader.

SP: the point is if people are using different systems then the results might not be comparable.

SLH: the goal is to test the usability of the site: we have not talked about testing system configurations. A given will be that any configuration we test on will have no known system problems.

SP: e.g. XP has better MSAA capability so it communicates with AT better than older systems.

SLH: do you think that will impact the usability testing of the WAI site, which doesn't have many known technical accessibility problems?

SP: probably ok then.

JB: asked for clarifications from SP: is your issue that the problems encountered may be related to AT rather than site design?

JB asking SLH: Is this sometimes taken into account in UT?

SLH: Yes, we do consider this. As much as possible we will minimize this potential problem by recruiting people very familiar with their AT and making sure the system is tested ahead of time. This test is not intended for AT novices.

SLH: regarding Usability Test planning: we did a brainstorming session to list who we felt we wanted (usually would have had this from UCD already). What we did was start a list of people who visit our site.

Note that list numbers are not in priority order: numbered list is simply to aid discussion.

SLH: So, what's missing?

CC: Number 9, assistant to IT/ICT manager: she meant assistant to any manager, not necessarily to an IT/ICT manager.

SLH: I will be looking at this list more.

JB: is anyone thinking about other users?

WL: did we include people who use the page frequently, like WAI Working group members?

SLH: yes - #22 WAI WG member

SLH: The next thing to talk about is characteristics of test participants. One thing to remember is that we are focusing on site design, layout, and architecture (e.g. item grouping, terminology), but NOT the content - e.g., can they find the document that talks about subject XYZ, but not whether XYZ is well explained. Shawn talked with AIR about whether we can have different tests for different groups, and AIR suggested only two groups. SAZ and SLH discussed two axes: first, people who know very little about Web accessibility versus those who know at least a mid-range amount; second, technical versus non-technical; plus at least one person who is a blind user of a screen reader.

JB: Thinking since last week, we should make sure we have some diversity - to make sure we have some data, e.g., for someone who is deaf, or someone navigating by voice recognition (or eye-gaze, etc.). We started to have the discussion about the overall number of participants, and I am getting nervous about the small number of participants.

DS: agrees, and would be nice to have people with multiple disabilities.

AG: I spoke to Helen Petrie about user studies with PWD; at a minimum she would use 5 groups with at least three people.

SLH: Some of the notes from our discussion on people are in item 8 of the Usability Test Participants section. We have talked about this issue, and it would increase the amount of coordination required from W3C/WAI. The constraint is that AIR may only be able/willing to do eight people, and we might have to ask for help from other organizations if we increase this. More but smaller groups of coordinated users will mean an increase in resources.

CC: Diversity of disabilities means we need to have two separate pools - we can't do both with one set of testers. It is hard for her to believe we can cover everything in 8 participants, especially if we want to fully engage persons with disabilities.

JB: I am consistently hearing from UT specialists that small test groups are good enough (based on their experience). I trust them on this. But I am increasingly concerned about representation of people with variety of disabilities.

CC: I guess problem is that we haven't defined the categories yet.

SP: If we have a small number of users, then they must be experienced with the Web and with their systems, and perhaps also with usability concepts, to focus comments. If subjects are pulled from the general population we might not get all the inputs we are looking for. At the least they should know about Web accessibility and the technology.

SLH: what about experience with the Web itself?

SP: that's important too.

SLH: will come back to that.

SLH: Back to higher-level categories: UT on an existing site is not a requirement; we already know many of the problems. Given that we are talking about increasing the scope, should we only do larger-scale testing on the early prototype redesign of the site and only limited testing of the existing site?

AP: have to do testing on existing site to get a benchmark against which to compare.

DS: principle is you try to do it right first time. E.g. if you put a large group onto prototype and they find things wrong then you have lost investment in prototype.

SLH: is a strong proponent of early testing.

SCRIBE NOTES: There was some further discussion that is summarized as:

SLH: Doyle was concerned about doing UT too late in process.

DS: SLH said the testing would be on something that was not a full scale site but early prototype before much invested in design.

JB: you were about to comment Sailesh.

SP: when I tested a site, developers said we're redesigning, check back when have new site up.

JB: how does that fit in?

SP: The point is not to get into the testing process late. The other feedback was: hang on till we are done with the new site and take a look at that. Test the Web site and give feedback; it might get addressed.

SLH: We are actually the developers, so we should gather the information at the point it is most useful to us. We already know a lot of what is wrong with the site - as William said, we know a lot of the stuff. Usability testing is my background; I have done a lot of it and have some experience in this. You will always find issues, so find some now. At the same time, we already know a lot about what works with the site.

SAZ: We've had a lot of input about this, and there is much more we have come up with, including having people around the world testing. It multiplies exponentially, and I am wondering if we really have the resources to expand such a study, testing many different criteria. Maybe we need to highlight the top three or four problems to reduce the scope.

JB: the question is whether we need one category, or a few more categories with fewer people in each. I think SAZ is proposing an extreme. Do you think a small group with just a little diversity is sufficient for now?

SAZ: I think that is all we have.

JB asked SLH: is AIR limited to doing 8 people?

SLH: Originally the AIR manager suggested 4, and the person involved said they would try for 8, so asking AIR for more is not likely to succeed. We can consider getting extra help, but is it worth the time to coordinate, and is it important to do all this now? Or do we do 8 people now, with one category with variety to use as the benchmark, then move forward with the redesign, having some early rough prototypes tested with much wider user categories? This would give us something to start with and ensure that subsequent fixes are tested fully.

JB: SLH is proposing that we stay with 8 for the first pass (with diversity). Are people for or against this?

CC: I don't object to the approach, but am concerned we don't know what the categories are. If we clarify that, then ok to proceed, but not otherwise.

SP: within 8 there can be just two subgroups?

SLH: not really looking at subgroups, but instead, a range. There will be no in-group or cross-group comparisons, but only site improvements.

JB: How do people feel about this?

JB: OK, people seem agreeable to proceed after clarification of categories.

SLH: Everybody in the target audience must be experienced with Web accessibility, or should at least be interested in it. We don't want someone who has no relationship with the subject. The draft (and it's important to remember it's a draft) shows possible user requirements.

CC: what I need is "our categories are..." and what are the characteristics.

SLH: we have no categories, but only variables so far.

CC: Having trouble linking what Shawn is suggesting to what my experience with testing is. Will think on it and get back to list.

JB: I think there is too much difference between how Charmane and Shawn understand the process in question.

ACTION ITEM: Shawn, provide more background on usability testing - before next discussion.

SLH: Ok. won't be next week.

JB: Any suggestions for Shawn on how to reorganize?

None

JB: Wrapping up then: some people seem to have significant concerns, so we can take another pass at this in a week or two. I think we are making some progress, but also learning more at the same time. Thanks to Shawn for her efforts.

Agenda for EOWG meetings in Dublin on 3-5 Sept 2003

JB: Dublin, Ireland, September 3-5. Microsoft has offered to host the meeting, at University College Dublin, piggybacking on the AAATE conference. Many groups want to piggyback on AAATE (e.g. the EDEAN network (design-for-all curriculum)), so we have to be careful about conflicts. Here's a proposed outline:

Wednesday afternoon: Open EO discussion meeting - an information exchange with other interested groups.

Thursday: Best Practices in Accessibility Evaluation exchange, evaluating specific checkpoints, how to build review teams, etc.

Friday: regular EOWG face-to-face, with discussions including Evaluation Resource Suite updates, site design, the perennial WAI gallery, finalizing "How people with disabilities use the web."

JB: Comments?

AG: sounds great.

CL: really exciting. Wish/hope I could be there.

SAZ: team evaluations are a big concern in the EU. Can the public attend?

JB: yes, we want people to come. We're thinking of putting some kind of criteria in place, like people who come might agree to contribute one or more evaluations of Web sites as candidates for getting feedback, or agree to do an evaluation that would be a nomination to the gallery where a webmaster has agreed to participate.

SAZ: what do you mean?

JB: open only to people willing to participate.

SAZ: good idea.

CL: thinks the idea is good, but cheeky (in William's words). I think it is not very welcoming (also in William's words).

SLH: What about, instead of requiring that you submit an evaluation ahead of time, setting the expectation that you will agree to participate in reviews?

CL: Still a little uncomfortable; would rather invite them and ask them to participate than demand it. If you think this is the only way to advance the gallery idea, then go for it. The topic is much more urgent in the EU, so people might not be offended.

JB/SLH: may also depend on number of people wanted/expected.

JB: ACTION ITEM: will try to do a refined version of Thursday agenda and send to list, and invite but not require reviews. Comments?

No objections raised.

Checking EOWG meeting schedule for August 2003

JB: August schedules? Let's try for August 1, 8 and 15 for sure, skip August 22, then have a call on August 29 as final planning for Dublin meeting if possible.

PG and CL not available on Aug 29.

SLH possibly not on 29th either because already in Ireland.

SP is not available on August 8.

Next Meeting

01 August 2003

Meeting adjourned at about 10:30 EDT.


Last updated $Date: 2003/08/08 13:14:16 $ by Shawn Henry <shawn @w3.org>