Re: Using an image map for long described image links [Was: Revert Request]

On 30 January 2012 21:40, Benjamin Hawkes-Lewis
<bhawkeslewis@googlemail.com> wrote:
> On Sun, Jan 29, 2012 at 6:15 PM, Matthew Turvey <mcturvey@gmail.com> wrote:
>> On 28 January 2012 04:57, John Foliot <john@foliot.ca> wrote:
>
> [snip]
>
>>>  * Browsers must address the discoverability problem for all users.
>>>  * Browsers must natively support the user-choice of consuming or not
>>> consuming the longer textual description.
>>>  * Browsers must preserve the HTML richness of the longer textual
>>> description content.
>>
>> This simple solution meets all the requirements:
>>
>> <a href=foo><img src=pic alt="*the purpose of the link*"></a>
>
> Okay, I can see some web designers might use that pattern but …
>
>> If users need to be able to determine programmatically that the link
>> is a long description of the image, or authors want to put two links
>> on one image:
>>
>> <a href=foo rel=longdesc><img src=pic alt="*a programmatically
>> determinable long description link*"></a>
>>
>> <img src=pic alt="*text alternative*" usemap=#map>
>> <map name=map>
>> <area shape=rect coords=0,0,100,50 href=foo rel=longdesc alt="*a
>> programmatically determinable long description link*">
>> <area shape=rect coords=0,50,100,100 href=bar alt="*on an image that
>> is already a link*">
>> </map>
>>
>> This universal design approach works for everyone, right now, and
>> doesn’t require changes to accessibility APIs, software upgrades,
>> browser add-ons, user training, author training, or employing the
>> services of an accessibility specialist. Unlike longdesc (and ARIA)
>> this technique currently works in all screen readers, including
>> VoiceOver, Orca and NVDA, as well as all other AT, including screen
>> magnifiers, and all browsers.
>
> Screenreaders aside, how does this design work for:
>
>    1. Sighted keyboard users who cannot trivially determine which
> part of the image leads to what.
>
>    2. Sighted mouse users who will be confused by (say) a photo that
> leads either to a text description or to a larger version (e.g. a big
> raw JPG) depending on where they click.
>
> (Tooltips aren't a good workaround for these defects, as UAs don't
> show them on focus and users don't wait for them or read them.)
>
> The usability of this pattern can only degrade further when you throw
> partial sight (making it hard to use the tooltips), motor impairments
> (making it hard to focus the mouse), or learning disabilities (making
> it hard to understand opaque interfaces) into the mix.
>
> If the design will not afford distinct, visible, on-page controls, the
> discoverability problems raised by distinguishing between a primary
> action (getting a bigger image) and a secondary action (getting
> information about the image) are likely easier to mitigate through
> better UA interfaces than the usability headaches created by authors
> arbitrarily carving up photos, charts, and diagrams into mystery meat
> navigation:
>
>    http://en.wikipedia.org/wiki/Mystery_meat_navigation
>
> --
> Benjamin Hawkes-Lewis

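For reference, here is the image map pattern above written out as a
complete page (an untested sketch; the file names, dimensions,
coordinates and link targets are all placeholders):

<!DOCTYPE html>
<meta charset=utf-8>
<title>Image map long description example</title>
<!-- the top half of the 100x100 image links to the long description,
     the bottom half to the image's ordinary link target -->
<img src=pic alt="*text alternative*" usemap=#map width=100 height=100>
<map name=map>
<area shape=rect coords=0,0,100,50 href=foo rel=longdesc alt="*a
programmatically determinable long description link*">
<area shape=rect coords=0,50,100,100 href=bar alt="*on an image that
is already a link*">
</map>
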
Removing the HTML-A11Y-TF's "no visual encumbrance" and "no default
indicator" constraints would certainly improve perceivability for
sighted users, and broaden the range of authoring options available :)

-Matt

Received on Tuesday, 31 January 2012 10:06:06 UTC