Re: Clarification for UI Request events (why no "activate"?)

Response below.

On 05/02/2013 11:38 AM, James Craig wrote:
>> For one, it is centered around a mouse metaphor and is not
>> input-agnostic.
> 
> It's only mouse-centric in its onomatopoetic name. Click has been the
> de facto input-agnostic activation event for more than a decade, and
> user inputs from all modalities have the ability to trigger a
> 'click'.

I would say that it is mouse-centric in more than name only:
1. It uses the MouseEvent interface, which requires pixel coordinates,
modifier keys, and a mouse button (with the assumption that a proper
click is always the left button).
2. If you dispatch a mousedown and then a mouseup, the user agent will
automatically dispatch a click.
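To put point 1 concretely, here is a rough sketch (the element id is
made up): even when a "click" is triggered by the keyboard, a touch
screen or an assistive technology, the listener still receives the full
mouse-shaped interface:

  const button = document.getElementById('save-button');
  button?.addEventListener('click', (event: MouseEvent) => {
    // All mouse concepts, regardless of what triggered the activation:
    console.log(event.clientX, event.clientY); // pixel coordinates
    console.log(event.button);                 // 0, i.e. the "left" button
    console.log(event.shiftKey, event.altKey); // modifier keys
  });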

> 
>> Second, because of the proliferation of pointer types and events a
>> web app might choose to discover a user pressing a button by other
>> means besides "click". For example, with touch (or soon, pointer)
>> events it might be touchstart and touchend with the same finger
>> within radius r from the widget without travelling more than
>> distance "d" and was lifted well after duration x but not before
>> duration y. Why would they do this? Because they might need enough
>> nuance to distinguish between a tap, a swipe and a dwell. Each of
>> which would do something else in the UI.
> 
> Taps, swipes, and dwells, in addition to being explicitly outside the
> scope of the group charter, are not independent of the physical user
> interface.

That is exactly my point. I would add click to that list of user
interactions that are not independent of a physical UI.

> The primary goal of IndieUI is to let the OS or AT layer
> determine what input is meaningful and pass that on to an app. For
> example, what you mentioned as "swipe" may in some cases, be sent as
> a "scrollrequest" or a "moverequest", but "swipe" in an of itself is
> tied to a specific interface.
> 

Correct. And I am arguing that click falls in the same category.

A click is distinct from a button press, just as a swipe is distinct
from a scroll request.

> If there is a common UI pattern for what your app example needs for
> "dwell" then we should consider an additional meaningful discrete
> event for that. "Dwells" may be detectable with what was previously
> proposed with the "ATFocus" and "ATBlur" events, though they were
> removed due to the AT-specific nature. We have been discussing
> "point-of-regard focus" and "point-of-regard focus" events but
> haven't come up with a good name for them yet.
> 

By dwell I meant "long-click". It was just an example.

>> They might also choose not to add an event listener to the widget
>> directly, but to its container, so that they can determine with
>> some fuzz which of the child widgets the user meant to activate.
> 
> Client side input fuzz is a deal-breaker. If a web app is using UI
> fuzz (such as a "long click" or "tap and hold" for ## seconds) to
> determine a meaningful difference in input, it will never be possible
> to separate the intent from the input method.

Why not? I don't think the user agent would be the only one dispatching
UI requests.

The reality today is that web applications must interpret distinct
touch events and compile them into higher-level gestures
(swipes/pinches/pans/taps/etc.). A good migration path for this spec
would be to map these higher-level gestures so that they dispatch UI
request events, effectively separating input from user intent.
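A rough sketch of what I mean, with made-up tap thresholds and a
hypothetical "activaterequest" event name (not something in the spec):

  // Input interpretation lives in one place; the rest of the app
  // listens for intent, not for touches.
  const TAP_MAX_RADIUS = 10;    // px a tap may travel
  const TAP_MAX_DURATION = 300; // ms a tap may last

  const widget = document.getElementById('my-widget')!;
  let start: { x: number; y: number; time: number } | null = null;

  widget.addEventListener('touchstart', (e: TouchEvent) => {
    const t = e.changedTouches[0];
    start = { x: t.clientX, y: t.clientY, time: e.timeStamp };
  });

  widget.addEventListener('touchend', (e: TouchEvent) => {
    if (!start) return;
    const t = e.changedTouches[0];
    const moved = Math.hypot(t.clientX - start.x, t.clientY - start.y);
    const held = e.timeStamp - start.time;
    start = null;
    if (moved <= TAP_MAX_RADIUS && held <= TAP_MAX_DURATION) {
      // Interpretation done; only the intent escapes. A UA or AT could
      // dispatch the same event directly and skip the gesture logic.
      widget.dispatchEvent(
        new CustomEvent('activaterequest', { bubbles: true }));
    }
  });

  // Application logic sees only the intent, never the touches:
  widget.addEventListener('activaterequest', () => {
    /* activate the widget */
  });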

Inventive web developers will always find new input methods (shake your
device to undo). As long as the input is separated from the user
intent, and both the UA and AT have access to that functionality, it
should be fine.

Another example - a web app could decide that a certain control is
activated by a double-click. The user input would be different, but the
intent would be the same: activate.
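In the hypothetical sketch above, that decision would stay local to the
input-interpretation layer:

  // Reusing `widget` and the made-up event name from the sketch above:
  // a different input, the same intent.
  widget.addEventListener('dblclick', () => {
    widget.dispatchEvent(
      new CustomEvent('activaterequest', { bubbles: true }));
  });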

> Does a long hold
> mean "mark for deletion" or "mark for move"? Does it mean "activate
> secondary action" or "show menu"? Does it mean something else
> entirely?
> 

I think you misunderstood my example; sorry if it was murky. My point
was that web apps could today employ a whole range of input devices and
events, with a lot of logic to interpret the input. The charter of this
group is to separate this device-specific interaction from the user's
actual intent so that assistive technologies, for example, could
directly request that intent on behalf of the user.

A click event is not a user intent; it is a left mouse button being
depressed and then released within a short period, with the cursor at
certain coordinates on the screen. In my mind, a recommendation in the
spec to use a click event to signify a "default action" contradicts the
charter of this group, which is to separate input from intent.

From my own experience, both in Gecko's accessibility doAction
implementation and in the Firefox OS user interface, when scripted
mouse events are dispatched with the assumption that they "activate" an
object, all sorts of things can go wrong.

> Check the product tracker to see if your request may already be
> covered by open issues or actions. These two may be relevant, though
> neither is in the spec yet.
> 
> ISSUE-3 SecondaryAction:
> https://www.w3.org/WAI/IndieUI/track/issues/3

I think it is interesting that a SecondaryAction is being proposed. One
could make the argument that a secondary action is really just a right
mouse click, since that has been the de facto input-agnostic method for
more than a decade.

I think that a PrimaryAction or DefaultAction request would complement
it greatly :)

> ACTION-25 MarkRequest: https://www.w3.org/WAI/IndieUI/track/actions/25
> 
> Other issues and actions for IndieUI Events 1.0: 
> https://www.w3.org/WAI/IndieUI/track/products/2
> 
>> I think it would fit under IndieUI's goals to separate the nuanced
>> and fragile input interpretation from the actual activation of a
>> widget.
> 
> Do my responses above alleviate the concerns you have, or did I leave
> something unaddressed?
> 

I hope my response clarified my earlier mail. I am raising this issue
because I am working on making Firefox OS accessible with a new built-in
screen reader. IndieUI is exactly what we need to make this happen.
Simply adding 'click' listeners can introduce unintended consequences in
apps that are heavily tuned to touch interaction. That is part of the
inspiration for my original example. So I would like to see IndieUI go
the extra step and offer a complete divorce from the mouse event
anachronism :)

Cheers,
  Eitan.

Received on Thursday, 2 May 2013 20:50:40 UTC