Re: feedback requested: Canvas change for improved hit testing that also facilitates accessibility

Rich Schwerdtfeger
CTO Accessibility Software Group

Ian Hickson <ian@hixie.ch> wrote on 03/30/2011 06:49:05 PM:

> From: Ian Hickson <ian@hixie.ch>
> To: Richard Schwerdtfeger/Austin/IBM@IBMUS
> Cc: public-html@w3.org
> Date: 03/30/2011 06:49 PM
> Subject: Re: feedback requested: Canvas change for improved hit
> testing that also facilitates accessibility
>
> On Wed, 30 Mar 2011, Richard Schwerdtfeger wrote:
> >
> > Here are your real use cases: [...]
>
> To clarify, what I meant was I think it would be good to get actual HTML
> pages that show the kinds of issues we're trying to solve. Only by having
> real content can we determine how well an API works. We can't design an
> API in a vacuum.
>
>
> > - The hit testing and mouse events that are normally directed only to
> > canvas can be directed to the fallback DOM element that receives the
> > keyboard element
>
> I don't really know what that would mean. This is the kind of thing for
> which actual HTML pages showing what you mean would be fantastic.
>
I attached a file. This example has two checkboxes in fallback content
that have a 1:1 mapping to the canvas-rendered version. A magnifier cannot
zoom to them in the UI without knowing where they are or without moving
keyboard focus to them. As I said, keyboard focus alone is not enough.
(See attached file: CanvasEditor.html)
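
For readers without the attachment, the idea looks roughly like this (an
illustrative sketch only, not the actual contents of CanvasEditor.html; the
ids and drawing code are invented):

<canvas id="editor" width="400" height="120">
  <!-- Fallback content: each checkbox maps 1:1 to a checkbox drawn on the canvas -->
  <input type="checkbox" id="showGrid" checked> <label for="showGrid">Show grid</label>
  <input type="checkbox" id="snapToGrid"> <label for="snapToGrid">Snap to grid</label>
</canvas>
<script>
var canvas = document.getElementById('editor');
var ctx = canvas.getContext('2d');
function drawCheckbox(box, x, y) {
  // Render the fallback checkbox's state and label onto the canvas.
  ctx.strokeRect(x, y, 16, 16);
  if (box.checked) {
    ctx.fillRect(x + 3, y + 3, 10, 10);
  }
  var label = document.querySelector('label[for=' + box.id + ']');
  ctx.fillText(label.textContent, x + 24, y + 12);
}
// Keyboard focus can reach the fallback checkboxes, but nothing tells a
// magnifier *where* on the canvas each checkbox has been drawn.
drawCheckbox(document.getElementById('showGrid'), 20, 20);
drawCheckbox(document.getElementById('snapToGrid'), 20, 50);
</script>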

So, that is a simple example. Here is a more complex example from Lucidchart:

http://www.lucidchart.com/

In this scenario the author has actually created a separate canvas element
for each drawing object, overlaid over the main diagram drawing space. No
doubt the author found it much easier to handle hit testing and mouse event
processing on each individual canvas drawing object, but this is terribly
inefficient. Now, we have not made this accessible, but to make it
accessible we would need to associate all of the separate canvas elements
with the main canvas element for the drawing background. We could in fact
use the bounding rectangle for the entire drawing object (say it is a
decision object in a flow chart) and the magnifier could zoom in on that,
but at large magnification levels (say 10X) we might be looking at just a
few of the characters of the label within the drawing object, and we have
no way of determining where the text for the label is on the screen so that
the magnifier can find it and zoom to it.
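
To illustrate the pattern (this is a guess at the structure, not
Lucidchart's actual markup), each shape gets its own absolutely positioned
canvas so that the browser, rather than the author's script, does the hit
testing:

<div id="diagram" style="position: relative;">
  <canvas id="background" width="800" height="600"></canvas>
  <!-- one canvas per drawing object, positioned over the background -->
  <canvas class="shape" width="120" height="80"
          style="position: absolute; left: 200px; top: 150px;"></canvas>
  <canvas class="shape" width="120" height="80"
          style="position: absolute; left: 420px; top: 150px;"></canvas>
</div>
<script>
// Mouse events arrive directly on the per-shape canvas, so no manual hit
// testing is needed -- but the shapes are not tied to any fallback content,
// and an AT has no way to locate the text drawn inside each one.
var shapes = document.querySelectorAll('canvas.shape');
for (var i = 0; i < shapes.length; i++) {
  shapes[i].addEventListener('mousedown', function (event) {
    // begin dragging this shape ...
  }, false);
}
</script>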

I hope this provides adequate use cases to explain the problems, which are:

- determining the location and bounds of a distinct object that exists in
fallback content but is rendered on the canvas.
- providing adequate hit testing functionality so that an author is not
forced to split off drawing objects that should be part of the canvas into
separate elements just so they can be manipulated and moved within the
canvas drawing space.

Whatever is done for hit testing, the author must be able to do the
following (a rough strawman sketch follows this list):
- define a drawing path that traces the bounds of the actual drawing
element as shown on the canvas
- associate that path as the clickable region for an element in fallback
content, so that the author can process both mouse (or touch device) and
keyboard input on the same HTML DOM element representing what is drawn on
the canvas, just like the rest of HTML
- take the closed path and produce a bounding rectangle for the fallback
object in the accessibility API mapping
- update the bounding rectangle/clickable region based on where the object
was last drawn on the screen
- make it possible for the canvas rendering engine to know what was drawn
last, placing later objects at a higher "z order", so that pointing device
input is directed to the appropriate object.
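
As a rough strawman (the setClickableRegion method below is invented purely
for illustration; it is not a proposal for exact IDL), the author-facing
side might look something like this:

<canvas id="flowchart" width="800" height="600">
  <button id="decision1">Approve request?</button>
</canvas>
<script>
var ctx = document.getElementById('flowchart').getContext('2d');
var decision = document.getElementById('decision1');

// 1. Define the drawing path for the shape as it appears on the canvas
//    (here, a diamond-shaped decision object in a flow chart).
ctx.beginPath();
ctx.moveTo(400, 100);
ctx.lineTo(460, 140);
ctx.lineTo(400, 180);
ctx.lineTo(340, 140);
ctx.closePath();
ctx.stroke();

// 2. Hypothetical call: associate the current closed path with the fallback
//    element. The user agent would then (a) route mouse/touch events inside
//    the path to that element, (b) expose the path's bounding rectangle
//    through the accessibility API, and (c) treat later associations as
//    sitting at a higher "z order".
ctx.setClickableRegion(decision);

// 3. When the shape is redrawn elsewhere, re-running steps 1 and 2 would
//    update both the clickable region and the reported bounds.
</script>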

>
> > - The HTML spec defines 1:1 mapping for the fallback content to the UI
> > object. This allows us to tie the bounding rectangle needed for hit
> > testing to the object.
>
> My concern is that most of the time this won't make sense. The use of
> canvas is something that will typically happen when traditional UI
> paradigms don't work, or are not being used. There might not _be_ a
> bounding box, because the "widget", insofar as there is one, might be
> shattered into many pixels distributed across the canvas, with the pixels
> continually drifting around and just coming together when the user somehow
> indicates a desire to interact with a particular aspect of the UI.
>
> When canvas is being used in a way that it can just be mapped straight to
> actual UI widgets it almost certainly is being misused -- if a 1:1 mapping
> is possible, then canvas is probably not necessary and one should just use
> HTML instead. (This is similar to the argument that trying to address text
> editing accessibility in canvas is misguided, since text editing should
> never be done using canvas in the first place.)
>
> What we need are small HTML pages that show examples of UIs for which
> canvas is appropriate and for which we need accessibility hooks that are
> not yet available. It doesn't make sense to talk about this in the
> abstract without examples to look at.
>
> --
> Ian Hickson               U+1047E                )\._.,--....,'``.    fL
> http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
