Re: Clarification for UI Request events (why no "activate"?)

Responses inline.

On May 2, 2013, at 9:32 AM, Eitan Isaacson <eitan@mozilla.com> wrote:

> Hello,
> 
> *** Sorry if this was posted already, I sent it earlier from the wrong
> address and it bounced ***
> 
> In section 3 there is an editorial note that reads "There is
> purposefully no request event for activating the default action. Authors
> are encouraged to use a standard |click| event for default actions."
> 
> I failed to find the origin of that note in public forums, so excuse me
> if I am re-igniting an old debate. I would very much like to see an
> "activate" UI request. Using the "click" event is not always a good
> solution.
> 
> For one, it is centered around a mouse metaphor and is not input-agnostic.

It's only mouse-centric in its onomatopoetic name. 'Click' has been the de facto input-agnostic activation event for more than a decade, and input from every modality can trigger a 'click'.
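
For example, a single 'click' listener covers activation from any modality. (A minimal sketch; the selector and save() handler are hypothetical.)

    // One 'click' listener handles mouse click, keyboard activation
    // (Enter/Space on the focused button), touch tap, and AT-initiated
    // activation alike.
    function save(): void { /* hypothetical default action */ }

    const button = document.querySelector<HTMLButtonElement>('#save')!;
    button.addEventListener('click', () => {
      save();
    });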

> Second, because of the proliferation of pointer types and events a web
> app might choose to discover a user pressing a button by other means
> besides "click". For example, with touch (or soon, pointer) events it
> might be touchstart and touchend with the same finger within radius r
> from the widget without travelling more than distance "d" and was lifted
> well after duration x but not before duration y. Why would they do this?
> Because they might need enough nuance to distinguish between a tap, a
> swipe and a dwell. Each of which would do something else in the UI.

Taps, swipes, and dwells, in addition to being explicitly outside the scope of the group charter, are not independent of the physical user interface. The primary goal of IndieUI is to let the OS or AT layer determine what input is meaningful and pass that on to the app. For example, what you describe as a "swipe" may, in some cases, be sent as a "scrollrequest" or a "moverequest", but a "swipe" in and of itself is tied to a specific interface.
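
To sketch what that looks like from the app's side (the element, handler, and preventDefault usage are illustrative assumptions; only the "scrollrequest" name comes from the draft):

    // The OS/AT layer has already decided the user's intent is "scroll",
    // whatever the physical input was (swipe, arrow key, scroll wheel...).
    function scrollItems(): void { /* hypothetical app scroll logic */ }

    const list = document.getElementById('itemList')!;
    list.addEventListener('scrollrequest', (event: Event) => {
      scrollItems();
      event.preventDefault(); // signal that the app handled the request
    });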

If there is a common UI pattern for what your app example needs from "dwell", then we should consider an additional meaningful discrete event for that. "Dwells" may be detectable with what was previously proposed as the "ATFocus" and "ATBlur" events, though those were removed due to their AT-specific nature. We have been discussing "point-of-regard focus" and "point-of-regard blur" events but haven't come up with good names for them yet.

> They
> might also choose not to add an event listener to the widget directly,
> but to its container, so that they can determine with some fuzz which of
> the child widgets the user meant to activate.

Client-side input fuzz is a deal-breaker. If a web app uses UI fuzz (such as a "long click" or "tap and hold" for ## seconds) to determine a meaningful difference in input, it will never be possible to separate the intent from the input method. Does a long hold mean "mark for deletion" or "mark for move"? Does it mean "activate secondary action" or "show menu"? Does it mean something else entirely?
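
To make the problem concrete, here's a sketch of that kind of fuzz (threshold and handler names are all hypothetical); nothing in the timing can recover which of those intents the user actually had:

    // Hypothetical threshold; the timing alone cannot recover intent.
    function markForDeletion(): void { /* hypothetical */ }
    function activate(): void { /* hypothetical */ }

    const widget = document.getElementById('item')!;
    let touchStartTime = 0;

    widget.addEventListener('touchstart', () => {
      touchStartTime = Date.now();
    });
    widget.addEventListener('touchend', () => {
      const heldMs = Date.now() - touchStartTime;
      if (heldMs > 800) {   // "long hold" detected -- but which meaning?
        markForDeletion();  // ...or markForMove()? or showMenu()?
      } else {
        activate();
      }
    });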

Check the product tracker to see if your request may already be covered by open issues or actions. These two may be relevant, though neither is in the spec yet.

ISSUE-3 SecondaryAction: https://www.w3.org/WAI/IndieUI/track/issues/3
ACTION-25 MarkRequest: https://www.w3.org/WAI/IndieUI/track/actions/25

Other issues and actions for IndieUI Events 1.0:
https://www.w3.org/WAI/IndieUI/track/products/2

> I think it would fit under Indie-UI's goals to separate the nuanced and
> fragile input interpretation from the actual activation of a widget.

Do my responses above alleviate the concerns you have, or did I leave something unaddressed?

> This is good for automated testing where the developers could have unit
> tests for the gesture detection, and readable automated UI tests with UI
> request events. This is obviously good for accessibility too!

I certainly agree.

Thanks,
James
