Re: Clarification for UI Request events (why no "activate"?)

+Doug Schepers

On May 2, 2013, at 1:50 PM, Eitan Isaacson <eitan@mozilla.com> wrote:

> Response below.
> 
> On 05/02/2013 11:38 AM, James Craig wrote:
>>> For one, it is centered around a mouse metaphor and is not
>>> input-agnostic.
>> 
>> It's only mouse-centric in its onomatopoetic name. Click has been the
>> de facto input-agnostic activation event for more than a decade, and
>> user inputs from all modalities have the ability to trigger a
>> 'click'.
> 
> I would say that it is more than name only:
> 1. It uses the MouseEvent interface, that requires pixel coordinates,

0,0 pixel coordinates suffice in most cases. More precise pixel coordinates can be extrapolated from the rendered position and bounds of the element to be clicked. For example, when WebKit receives an AXPress action on an element in OS X, it sends the x and y coordinates of the 'click' event as half the width and half the height of the rendered bounds, and this does not rely on mouse behavior. FWIW, WebKit also simulates mousedown and mouseup when an assistive technology triggers AXPress.
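
For illustration only, here's a rough sketch of that idea in script (this is not WebKit's code; the function name and element variable are placeholders): a synthetic activation dispatched at the center of an element's rendered bounds, with no pointing device involved.

    // Sketch: derive coordinates from layout, not from a mouse.
    function simulateActivation(element) {
      var rect = element.getBoundingClientRect();
      var centerX = rect.left + rect.width / 2;
      var centerY = rect.top + rect.height / 2;
      ['mousedown', 'mouseup', 'click'].forEach(function (type) {
        element.dispatchEvent(new MouseEvent(type, {
          bubbles: true,
          cancelable: true,
          clientX: centerX,   // center of the rendered bounds
          clientY: centerY,
          button: 0           // primary button
        }));
      });
    }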

> modifier keys,

See the "mark request" link listed in the previous email.

> which mouse buttons (with the assumption that a proper
> click is always the left button).

"primary button" I believe, which may or may not be the left one, depending on mouse preferences. The secondary action is usually triggered in the AX API with an AXShowMenu action.

> 2. If you dispatch a mousedown and then a mouseup, the user agent will
> automatically dispatch a click.
> 
>> 
>>> Second, because of the proliferation of pointer types and events a
>>> web app might choose to discover a user pressing a button by other
>>> means besides "click". For example, with touch (or soon, pointer)
>>> events it might be touchstart and touchend with the same finger
>>> within radius r from the widget without travelling more than
>>> distance "d" and was lifted well after duration x but not before
>>> duration y. Why would they do this? Because they might need enough
>>> nuance to distinguish between a tap, a swipe and a dwell. Each of
>>> which would do something else in the UI.
>> 
>> Taps, swipes, and dwells, in addition to being explicitly outside the
>> scope of the group charter, are not independent of the physical user
>> interface.
> 
> That is exactly my point. I would add click to that list of user
> interactions that are not independent of a physical UI.

See this 2009 thread discussing DOMActivate versus click:
http://lists.w3.org/Archives/Public/public-forms/2009Jul/0008.html

Maciej wrote:
> Unfortunately, Web compatibility requires sending a "click" event for
> non-mouse-driven activations. In particular, it is a common practice to
> give an <a> element or an <input type="button"> element an onclick
> attribute and the page author expects it to trigger even for keyboard
> activation. This practice precedes the existence of the DOMActivate
> event and remains common. Authors almost never use a DOMActivate handler
> instead.

I think you can still use DOMActivate if truly necessary, but most web apps use click handlers instead. Doug will be able to speak to this point better.
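
As a sketch of what Maciej describes (the element and handler names are made up), authors overwhelmingly write the first line and almost never the second, which is why 'click' has to keep firing for non-mouse activations:

    button.addEventListener('click', handleActivation);       // near-universal
    button.addEventListener('DOMActivate', handleActivation); // rarely seen
    // Note: a UA may fire both for a single activation, so registering
    // both as-is could run handleActivation twice.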

>> The primary goal of IndieUI is to let the OS or AT layer
>> determine what input is meaningful and pass that on to an app. For
>> example, what you mentioned as "swipe" may in some cases, be sent as
>> a "scrollrequest" or a "moverequest", but "swipe" in an of itself is
>> tied to a specific interface.
>> 
> 
> Correct. And I am arguing that click falls in the same category.
> 
> A click is distinct from a button press,

Semantically, I agree. Functionally, they are nearly identical.

> as a swipe is distinct from a
> scroll request.

The spec lists events such as "scrollrequest", "moverequest", "valueChangeRequest", and others that may sufficiently alleviate the need for most distinctions between a swipe and a scroll.
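
As a rough sketch of how that might look to an author (the element and helper function are hypothetical, and the exact event interface is whatever the draft ends up defining), the app responds to the intent rather than to a particular gesture:

    list.addEventListener('scrollrequest', function (event) {
      // React to the user's intent to scroll, whether it originated as a
      // swipe, arrow keys, a scroll wheel, or an assistive technology.
      scrollListContent();        // hypothetical app-specific function
      event.preventDefault();     // signal that the request was handled
    });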

>> If there is a common UI pattern for what your app example needs for
>> "dwell" then we should consider an additional meaningful discrete
>> event for that. "Dwells" may detectable with what was previously
>> proposed with the "ATFocus" and "ATBlur" events, though they were
>> removed due to the AT-specific nature. We have been discussing
>> "point-of-regard focus" and "point-of-regard focus" events but
>> haven't come up with a good name for them yet.
> 
> By dwell I meant "long-click". It was just an example.

Long click doesn't mean anything semantically. It just means "mousedown… delay… mouseup."
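
A quick sketch of that pattern (the widget variable and the 500 ms threshold are arbitrary placeholders) shows how much author-chosen fuzz is involved:

    var pressTimer = null;
    widget.addEventListener('mousedown', function () {
      pressTimer = setTimeout(function () {
        pressTimer = null;
        // author-defined meaning kicks in here: delete? move? show a menu?
      }, 500);                    // arbitrary threshold chosen by the author
    });
    widget.addEventListener('mouseup', function () {
      if (pressTimer !== null) {
        clearTimeout(pressTimer); // released early: treat as an ordinary click
        pressTimer = null;
      }
    });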

>>> They might also choose not to add an event listener to the widget
>>> directly, but to its container, so that they can determine with
>>> some fuzz which of the child widgets the user meant to activate.
>> 
>> Client side input fuzz is a deal-breaker. If a web app is using UI
>> fuzz (such as a "long click" or "tap and hold" for ## seconds) to
>> determine a meaningful difference in input, it will never be possible
>> to separate the intent from the input method.
> 
> Why not? I don't think the user agent would be the only one dispatching
> UI requests.
> 
> The reality today is that web applications must interpret distinct touch
> events and compile them into higher-level gestures
> (swipes/pinches/pans/taps/etc). A good migration path for this spec
> would be to have these higher level gestures mapped in a way that they
> then dispatch UI request events, and effectively separating input from
> user intent.
> 
> Inventive web developers will always find new input methods (shake your
> device to undo), as long as the input is separated from the user intent,
> and both the UA and AT have access to that functionality, it should be fine.
> 
> Another example - a web app could decide that a certain control is
> activated by a double-click. The user input would be different, but the
> intent would be the same: activate.
> 
>> Does a long hold this
>> mean "mark for deletion" or "mark for move"? Does it mean "activate
>> secondary action" or "show menu"? Does it mean something else
>> entirely?
>> 
> 
> I think you misunderstood my example, sorry if it was murky. My point
> was that web apps could employ today a whole range of input devices and
> events with a lot of logic to interpret the input. The charter of this
> group is to separate this device-specific interaction from the user's
> actual intent so that assistive technologies, for example, could
> directly request that intent on behalf of the user.
> 
> A click event, is not a user intent, it is a left mouse button being
> depressed and then released in a short period with the cursor in a
> certain coordinate on the screen. In my mind, a recommendation to use a
> click event in the spec to signify a "default action" contradicts the
> charter of this group which is to separate input from intent.
> 
>> From my own experience, both in Gecko's accessibility doAction
> implementation, and the Firefox OS user interface, when scripted mouse
> events are dispatched with the assumption that they "activate" an object
> all sorts of things can go wrong.
> 
>> Check the product tracker to see if your request may already be
>> covered by open issues or actions. These two may be relevant, though
>> neither is in the spec yet.
>> 
>> ISSUE-3 SecondaryAction:
>> https://www.w3.org/WAI/IndieUI/track/issues/3 ACTION-25
> 
> I think it is interesting that a SecondaryAction is being proposed. One
> could make the argument that a secondary action, is really just a right
> mouse click, since it is the de-facto input-agnostic method for more
> than a decade.

Point taken, but it's debatable that a simulated right click is input-agnostic. As far as I know, unlike click, there is no way to simulate a secondary mouse click from a keyboard or from input methods other than specialized assistive technology.

> I think that a PrimaryAction or DefaultAction request would compliment
> it greatly :)

That's DOMActivate, is it not?

>> MarkRequest: https://www.w3.org/WAI/IndieUI/track/actions/25
>> 
>> Other issues and actions for IndieUI Events 1.0: 
>> https://www.w3.org/WAI/IndieUI/track/products/2
>> 
>>> I think it would fit under Indie-UI's goals to separate the nuanced
>>> and fragile input interpretation from the actual activation of a
>>> widget.
>> 
>> Do my responses above alleviate the concerns you have, or did I leave
>> something unaddressed?
> 
> I hope my response clarified my earlier mail. I am raising this issue
> because I am working on making Firefox OS accessible with a new built-in
> screen reader. IndieUI is exactly what we need to make this happen.
> Simply adding 'click' listeners can introduce unintended consequences in
> apps that are heavily tuned to touch interaction. That is partly my
> inspiration for my original example. So I would like to see IndieUI go
> the extra step and offer a complete divorce from the mouse event
> anachronism :)


Unfortunately, I think divorcing IndieUI from 'click' entirely would just result in a lot of web apps not working at all with IndieUI, which defeats the main purpose. If web apps don't yet work with DOMActivate, which has been around for years, why should they have to use anything other than click? It's what authors know, so let's keep it simple.

I remain unconvinced. Could you attend the next IndieUI conference call (Wed, May 15 at 2PM Pacific) to further discuss the idea? 

James

Received on Thursday, 2 May 2013 22:37:06 UTC