Re: ISSUE-30 counter-proposal

Philip Taylor wrote:
> Shelley Powers wrote:
>> Ian Hickson wrote:
>>> Here's a counter-proposal for ISSUE-30:
>>>
>>> == Summary ==
>>>
>>> The longdesc="" attribute does not improve accessibility in practice 
>>> and should not be included in the language.
>>>
>>> == Rationale ==
>>>
>>> Several studies have been performed. They have shown that:
>>>
>>> * The longdesc="" attribute is extremely rarely used (on the order 
>>> of 0.1% in one study). [http://blog.whatwg.org/the-longdesc-lottery]
>>> * When used, longdesc="" is extremely rarely used correctly (over 
>>> 99% were incorrect in a study that only caught the most obvious 
>>> errors [http://blog.whatwg.org/the-longdesc-lottery]; the correct 
>>> values were below the threshold of statistical significance on 
>>> another study that examined each longdesc="" by hand 
>>> [http://wiki.whatwg.org/wiki/Longdesc_usage]).
>>> * Most users (more than 90%) don't want the interaction model that 
>>> longdesc="" implies. 
>>> [http://webaim.org/projects/screenreadersurvey2/#images]
>>> * Users that try to use longdesc="" find it doesn't work ("Who uses 
>>> this kind of thing? In my experience [...] it just didn't work. 
>>> There was no description.") 
>>> [http://www.cfit.ie/html5_video/Longdesc_IDC.wmv].
>>>   
>>
>> I'll let the accessibility folks respond to the accessibility 
>> components of your proposal, but we've had discussions in the past 
>> about your "studies", and the flaws associated with them.
>>
>> First of all, you've not provided access to the same data, so your 
>> results cannot be confirmed or denied.
>>
>> Secondly, you have a bias in the results, and bias has been shown to
>> compromise the integrity of studies.
>> [...]
>
> The original source of the 0.1% usage data in 
> <http://blog.whatwg.org/the-longdesc-lottery> is the public web, so 
> the results can be easily checked by running independent studies over 
> other subsets of the web. E.g. with the pages provided by 
> <http://www.dotnetdotcom.org/>, 0.5% (2288 out of 425532) have at 
> least one <img longdesc>. (The raw parsed attribute value data is at 
> <http://philip.html5.org/data/longdesc-raw.txt>, if anyone wants to 
> look in more detail). The exact numbers depend on how you sample the 
> web, but I'm not aware of any wide-scale survey of pages that has 
> produced significantly different results.
>
> http://wiki.whatwg.org/wiki/Longdesc_usage is already entirely 
> independent of Google, and comes from public data - the original 
> listing of pages came from dmoz.org, and Andrew Sidwell examined each 
> of the pages to derive the results. The WebAIM survey and the user 
> testing videos were also developed independently.
>
> (I'm not saying any of this data is perfect and infallible, or that it 
> can't be interpreted in different ways - I'm just saying that claims 
> of non-reproducibility and of personal bias in the collection of the 
> data seem to be inaccurate.)
>
Philip, Ian based his study on an analysis of data within the Google 
index. That data is not available for us to check the accuracy of his 
results. As for the dotnetdotcom.org data: again, we don't have access 
to the methodology determining the web bot's path, we have no idea how 
often it's blocked, and it doesn't take into account intranet content 
-- you can't form a conclusion from anecdotal data. This isn't me being 
mean to Ian; this is me referencing established practices in scientific 
methodology.

The dmoz data also can't be used to draw a general conclusion, because 
the data is self-selecting: it requires human intervention for the data 
to exist. It is not representative of the web at large.

Again, you all don't understand my use of the term bias -- it is a 
perfectly legitimate term when it comes to reviewing the basis on which 
studies are conducted, and the conclusions derived from them. When Ian 
formed a conclusion from his studies, the methodology he used and his 
own personal beliefs and assumptions became legitimate points of 
challenge for that conclusion.

As for the video, anyone can tell you that a sample of one is too small 
to support any meaningful conclusion. Plus, whoever created the video 
"led" the subject to respond in certain ways, using keywords and 
triggers to direct the subject's statements. It was an interesting 
video, and the subject offered sound opinions, but as a basis for a 
conclusion it is flawed. As it was, the subject was careful to make the 
point that his statements were his own opinion and might not reflect 
the experiences of the wider community. Rightfully so, and well done on 
his part.

The WebAIM survey was derived from a much broader sample, and I think 
its findings should be taken into account by the accessibility 
community when they respond. But even this study states, "There is no 
clear consensus in these responses." Can we derive a conclusion from a 
survey alone? Typically not, and no margin of error is given with the 
results (see the rough sketch after the quoted disclaimers below). As 
noted at the top of the study:

A few disclaimers and notices:

Totals may not equal 100% due to rounding.

Total responses (n) for each question may not equal 665 due to 
respondents not answering that particular question.

** The sample was not controlled and may not represent all screen reader 
users.

**Care should be taken in interpreting these results. Responses are 
based upon user experiences with web content that is generally 
inaccessible. We cannot help but wonder if responses may have been 
different if screen reader interactions with web content were typically 
very positive.

Data was analyzed using JMP Statistical Discovery Software version 8.

We hope to conduct a survey of this nature again in the future. If you 
have recommendations or questions you would like asked, please let us 
know. Additional analysis of this data and details on the responses to 
open-ended questions will be available in the future.

Now, those disclaimers were very well done. Notice the items marked **. 
The survey editors specifically warned against using the results to form 
a conclusion.

I have a degree in psychology (industrial emphasis), in addition to a 
degree in computer science, and most of my time within the discipline 
was focused on testing, research, and how to conduct these types of 
studies. I'm not an expert (I have only a BA, not an advanced degree), 
but the points I made are fundamental, not something I'm making up.

Shelley

Received on Sunday, 14 February 2010 17:37:24 UTC