<ericvelleman> http://lists.w3.org/Archives/Public/public-wai-evaltf/2012Sep/
EV: shall we keep the comments coming to the mailing list?
SA: there will be more noise because of the comments. Don't respond to the comments until Eric & Martijn put them into a structured form
EV: have already received some comments. If the review continues at this pace, there will be lots of issues
SA: we need a stable draft in December. We should
wait another week before we send another reminder. Everyone in the group should
make sure they send the draft to their contacts and ask them to send comments to the group
... there will be lots of refinement required
EV: during the review we will collect the
comments and put them into the issue tracker - we have 2 of them today
... we started a thread on a test run of the methodology and Peter put in a
proposal
... we could propose that if we do a test drive with this public working
draft, we might do better to wait until after December 2012. We already know there
will be some issues. We could schedule the test run for after Dec. 2012, when we
should have covered more of the known issues such as the random sample etc.
<shadi> VC: people concerned about confidentiality
<shadi> ...need to think about how to collect the data
<shadi> ...message it carefully to avoid any misunderstandings
EV: we try to get the idea ready before we do the test run - what will be public and how the results will be used
SA: agree with waiting until January
EV: yes, we should wait
SA: to avoid scaring people, we could try to make use of people's regular work. Many of the group do regular evaluations and can give us feedback on what is working and what isn't. Maybe just give us an idea of the type of website, and what works well and what doesn't. We shouldn't have to disclose the organisation's identity
EV: put these ideas into a discussion thread and see what people think - get a good description of what we want to do before the end of the year
SA: it is more of a phase than an issue. A draft by December addressing all known issues will be lots of work.
<MartijnHoutepen> http://www.w3.org/WAI/ER/2011/eval/track/issues/9
<ericvelleman> <http://lists.w3.org/Archives/Public/public-wai-evaltf/2012Sep/0061.html>
EV: lots of discussion on the list on the random sampling issue and Mike did a summary where he added a Word document with an overview of the discussion
<Mike_> I did!
<Sarah_Swierenga> yes, but last week...
I started looking at it but didn't finish
EV: can start to walk through the document and
put ideas together
... can draft a first text
<MartijnHoutepen> me too
VC: talked with an automated tool manufacturer who described a method: go to the home page and randomly pick a link, then from that link randomly pick another link, and so on. That would be less overhead than scanning a 1,000,000-page website trying to get 25 pages.
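[Editor's note: the random-walk sampling method VC describes can be sketched as below. This is a minimal illustration on a hypothetical in-memory site graph (a dict mapping each page to its outgoing links), not the tool vendor's actual implementation; the function name and parameters are invented for this example.]

```python
import random

def random_walk_sample(links, start, n_pages, seed=None, max_steps=10_000):
    """Collect up to n_pages distinct pages by starting at the home
    page and repeatedly following a randomly chosen link, instead of
    crawling the whole site. `links` maps page URL -> list of linked URLs."""
    rng = random.Random(seed)  # seedable for reproducible samples
    sample = []
    page = start
    for _ in range(max_steps):  # cap steps so a tiny site can't loop forever
        if page not in sample:
            sample.append(page)
        if len(sample) == n_pages:
            break
        outgoing = links.get(page, [])
        # On a dead-end page, restart the walk at the home page
        page = rng.choice(outgoing) if outgoing else start
    return sample

# Toy 6-page site graph for illustration
site = {
    "/": ["/about", "/products", "/contact"],
    "/about": ["/", "/team"],
    "/products": ["/", "/products/widget"],
    "/contact": ["/"],
    "/team": ["/about"],
    "/products/widget": ["/products"],
}
print(random_walk_sample(site, "/", 4, seed=1))
```

In a real evaluation the dict lookup would be replaced by fetching a page and extracting its links; the walk still avoids enumerating every page of a very large site.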
SA: There are different methods
... there are different crawling strategies which is technical depending upon
the automated tools. You then have the limitations of automated only, depends
upon the hierarchy and entry points or even if they are generated by
JavaScript.
... other approaches were described by Alistair and Kathy. We need to look at how
each of us picks random pages. Evaluators have a method for
selecting pages - each has their own reasons/factors.
... people may not want to put their internal methodologies out there for
everyone to see. We need to ask questions so that we can collect some of the
wisdom of the group.
EV: we could put the questions in the survey. What do we need to ask to be able to document the methods people use?
SA: the questions need to be the right ones to get the answers we need. With a set of 10-12 carefully crafted questions we can collect the right decision points to improve the sampling procedure.
VC: I was asking about how we select the random 25% of the pages that the WCAG group wants.
I'll dial in yet again
EV: yes, how do we actually select the random list
what typhoon?
SA: what portion of the sample should be randomly selected
didn't the WCAG group say they wanted 25% of the sample to be random?
SS: if we are going to survey the public about random sampling, we may want to get a sense as to whether people think they can accomplish random sampling. There were several suggestions as to how to get the sample; we should find out which ones evaluators think they can actually do. We need to carefully choose the core pages and then look at what other pages should be in the sample
EV: do you use random sampling Sarah?
SS: no, the client wants to keep the page count down. The client knows where the trouble spots are - and they consult with the evaluator
EV: not a public survey - just within the TF
I'm doing a large evaluation at the moment and have instituted a random sampling for a portion of the pages
SS: agree with keeping the surveying within the group right now. Keep in the key pages, core functionality and then use random for selecting some additional pages.
SA: keep the survey internal
... Sarah makes a good point about the number of pages in the sample and the
cost involved. We had a notion of the levels of reporting in the methodology
and looking at the goal. Maybe the sample size and reason for evaluation would
affect the number of pages and types of pages etc
... maybe you just want to verify that the site is as accessible as someone
says - in that case maybe a total random sample would be appropriate
MH: we pick the random sample by hand so we can select.
Sorry Martijn, missed some of that
EV: instead of working on a first version of an outline of the approach, I will come up with a questionnaire or survey and combine some questions.
*shadi, I am, don't think it's me
<MartijnHoutepen> @vivienne you have the essential: do people select pages by hand (to select 'interesting' pages) or on a tool basis
<MartijnHoutepen> sorry
<Mike_> +1
<MartijnHoutepen> yes
<Sarah_Swierenga> +1
<shadi> +1
<Tim> +1
<ericvelleman> <http://www.w3.org/WAI/ER/2011/eval/track/issues/6>
link is fine Eric
EV: issue 6 has a list of the related emails
... to improve the objectivity and reliability of the methodology - goodness
criteria. We put it into the list of necessary things to go in the methodology.
Kerstin's concern is that we are not including these criteria.
... we want to include this more in the methodology - but if you try to make
it too scientific, it may not read well
... if we tried to include all goodness criteria, we might have to go through
every sentence. How do we work on this?
Sorry, Eric I've got no idea
Maybe ask Kerstin for ideas?
<Mike_> I'm still not sure I understand the concept of "goodness."
<MartijnHoutepen> Seems like something for the test run
EV: leave it for this call and ask Kerstin how we do this
SA: Understand the concept, and rather than
discussing it on the next call we should ask Kerstin to write it up
more explicitly so that it is more understandable. We need to know what Kerstin
would want and how it would benefit the methodology - the rationale
... do you want to say that if you follow the methodology, then you would have x%
reliability? If so, we couldn't place a percentage of confidence in the
methodology, if at all, until after testing it.
... how much time/work do we put into this - perhaps something for a research
project?
<Mike_> +1
is this okay for all?
<Sarah_Swierenga> +1
<MartijnHoutepen> +1
<ericvelleman> +1
+1
<Liz> +1
<ericvelleman> <http://www.w3.org/WAI/ER/2011/eval/track/actions/5>
was it Kathy working on the graphics?
Michael Cooper was also working on it
SA: think she is still working on it
EV: will ask Kathy for an update to discuss at
the next meeting
... any other issues?