W3C

- DRAFT -

Independent User Interface Task Force Teleconference

28 Oct 2014

See also: IRC log

Attendees

Present
Susann_Keohane, Marc_Johlic, John_Foliot, Katie_Haritos-Shea, Mary_Jo_Mueller, Rich_Schwerdtfeger, Kurosawa_Takeshi, Cynthia_Shelly, Janina_Sajka, Michael_Cooper, Joanie_Diggs, JasonJGW, Ben_Peters, James_Craig, Kim_Patch
Regrets
Chair
SV_MEETING_CHAIR
Scribe
Rich, cyns, jcraig, MichaelC

Contents


<trackbot> Date: 28 October 2014

<MichaelC_> Minutes from yesterday

<richardschwerdtfeger> scribe: Rich

<richardschwerdtfeger> Katie: Do people think Shapes are enough?

<richardschwerdtfeger> Katie: I think we need the user context. The user was missing from the discussion

<richardschwerdtfeger> Rich: We need to allow the author to provide information to enable the shape to be controlled in a device independent way

<Ryladog> Rich: I thought that was the thing missing - we have this object in the DOM and we want to send it device independent info - they had the insert text thing - but to make the decision on how to do it

<Ryladog> you have to know more information. Event driven is too slow - it has to propagate

<Ryladog> is DOM 3 Events enough? You fire an event and then it bubbles. Does that give us enough?

<Ryladog> Katie: What is the DOM 4 paradigm?

<Ryladog> Rich: I am not quite sure if the DOM paradigm is the right way now. And this was a question they asked us

<Ryladog> Rich: if we unlock that we have to be a little careful - I think he was looking to prototype things

<Ryladog> RS: I think we need to expose some things in the API to the DOM

<Ryladog> RS: James was trying to use polyfills - we need to know what the increment is - we have to provide that information

<Ryladog> RS: I think we need to push on getting information out - if we can leverage native host language semantics and ARIA, developers can make use of that with device independence

<Ryladog> RS: This would be good for tabindex. One thing we missed talking about: it is easy to reflect the role attribute but not the mountain of others

<Ryladog> Katie: CMN is working on AccessKey

<richardschwerdtfeger> John: The concept behind access keys that charles is working on is he wants to make them discoverable. He wants to also deal with conflicts.

<richardschwerdtfeger> John: today access keys are definitive. What he wants to do is: here is what you want, and the browser can map it the way it wants to.

<richardschwerdtfeger> John: He wanted to do what Opera did which was to list the access key assignments and allow the user to remap them

<richardschwerdtfeger> http://www.w3.org/TR/xhtml-access/

<richardschwerdtfeger> john: the key is a tight binding

<Ryladog> Rich: In order to send the events, what do we need to send to the browser? You basically need a range of nodes and an offset. That is basically what we do in the AAPIs...

<Ryladog> ...in a selection model you have to have a node and an offset at the start - and at the end. Whatever they do in the event model they are going to have that running....

<Ryladog> ...does he want to pass all of that selection state in the event?

<Ryladog> CS: I think so

<Ryladog> Rich: Why don't we make a list of questions?

<cyns_> https://dvcs.w3.org/hg/dom3events/raw-file/tip/html/DOM3-Events.html#event-type-select

<kurosawa> http://w3c.github.io/selection-api/

<joanie> http://www.w3.org/TR/selection-api/

<Ryladog> http://w3c.github.io/selection-api/

<JF> http://www.w3.org/TR/selection-api/

<cyns_> http://msdn.microsoft.com/en-us/library/windows/desktop/ee671665(v=vs.85).aspx

<cyns_> http://msdn.microsoft.com/en-us/library/windows/desktop/ff384841(v=vs.85).aspx#SelectingText

<cyns_> IE does support document.getselection http://msdn.microsoft.com/en-us/library/ie/ms535869(v=vs.85).aspx

<cyns_> sorry, wrong link http://msdn.microsoft.com/en-us/library/ie/ff975169(v=vs.85).aspx

<richardschwerdtfeger> Questions and Discussion Points:

<richardschwerdtfeger> 1. Selection involves a starting node and an offset within that node for, say, text. Is your intent to only store that information in the event, or can we get that from the Document object selection API?

<richardschwerdtfeger> If it is only in the event data, other technologies (like assistive technologies) don't have access to what is selected today. Selection, today, is retrieved from the Document object vs. the event instance. We would need to remap from the event to the accessibility API.

<richardschwerdtfeger> What is the relationship between the event data for selection and what is stored in the DOM? … document.getSelection.

<richardschwerdtfeger> Note: we believe there are advantages to having both. The event would allow us to speak the text automatically without having to round trip back to the document. The document access is important for browsing the content and dealing with things like embedded objects (e.g. attachments) in a web mail application.
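
A minimal sketch of the round trip this question describes, using only the standard document.getSelection() API; whether any of this data should also ride on the event itself is exactly the open question being raised.

  // Today: a listener learns that the selection changed, then must go back
  // to the Document object to find out what is actually selected.
  document.addEventListener("selectionchange", () => {
    const sel = document.getSelection();
    if (!sel || sel.rangeCount === 0) return;
    const range = sel.getRangeAt(0);
    // Start/end node and offset come from the document, not from the event.
    console.log(range.startContainer, range.startOffset,
                range.endContainer, range.endOffset, sel.toString());
  });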

<richardschwerdtfeger> 2. We need specific information in the events. If you are including data in the event for selection, we would like to see the following, to reduce round tripping to the DOM or accessibility API:

<richardschwerdtfeger> - Start node and offset

<richardschwerdtfeger> - End node and offset

<richardschwerdtfeger> - The text string of the selected area

<richardschwerdtfeger> Mainstream example: I am creating an audio UI that includes speech output. This would be fast.

<richardschwerdtfeger> Aging or Dyslexic users: Having basic feedback that augments the selection enables better comprehension in users.
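
Purely illustrative sketch of the kind of event payload being asked for; the "beforeselectionchange" event name and its detail fields are hypothetical, not part of any current specification.

  // Hypothetical payload: if the event carried this data, an audio UI or AT
  // could react without a round trip to the DOM or the accessibility API.
  interface SelectionChangeDetail {
    startNode: Node;
    startOffset: number;
    endNode: Node;
    endOffset: number;
    selectedText: string; // the text string of the selected area
  }

  document.addEventListener("beforeselectionchange", (e: Event) => {
    const detail = (e as CustomEvent<SelectionChangeDetail>).detail;
    // Example mainstream use: speak the new selection immediately.
    speechSynthesis.speak(new SpeechSynthesisUtterance(detail.selectedText));
  });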

<richardschwerdtfeger> 3. We believe to do this correctly (beyond just selection) that we need to have state data that can be exposed by the author to the browser. For example, a slider adjustment from a device independent source would need to know the range and increment to be able to know what commands were needed to advance the slider position. In WAI-ARIA we expose much of the information that is needed based on the role of the object. This was derived from accessibility

<richardschwerdtfeger> APIs on multiple operating system platforms that in turn were derived from common state and property data found in GUIs over the years. We believe it is essential that you include this information in your shape (API Design Pattern) planning.

<richardschwerdtfeger> We ran into issues with implementing polyfills in IndieUI around sliders as we needed this additional information.

<richardschwerdtfeger> We believe this is a way to mainstream ARIA work as authors gain mainstream benefits.

<richardschwerdtfeger> What is your plan to address this?

<andy> looks great rich

Intention events discussion continued

<MichaelC> scribe: cyns

RS: summary... The author can initiate an intent-based event based on the device dependent event
... also the UA can generate a user intent event, which can come from the AAPI or from UA-specific code
... the new selection API would not change. This is based off the current selection model but gives the author more control

BP: This allows the author to modify the selection before it shows up in the document selection or the viewport

JW: what about the actions done on the selection?

BP: it would be covered, but not by selection. For example, drop would be an intention. paste, copy, etc are intentions
... so you create a selection, and then you can perform intentions on it
... it clarifies the difference between the action the user takes (keydown) and the intention (scroll)

joanie: I was thinking about notifications... Will all the info about the selection, like what's in the selection object, be available?

BP: yes, it's there in the selection object. What's the use case?

joanie: screen reader reading the selection.

BP: good idea to have a way for AT to know that an intention happened and completed

MC: there are events that return an object that is the result of the event.

BP: do you need it in the event?

joanie: yes, we have to do a lot of round tripping.
... when text selection changes, I am notified that it changed on an accessible object associated with a node in the DOM. Round trip for the start/end offsets. Round trip for the text at the offsets. Etc.
... now that you have started explaining more, my use case is more about the selection API than intention events

BP: I work on that too.

<jcraig> Not related to "user intent" events, but Joanie was talking about the inverse: web application notifying the platform APIs what just happened. e.g. explaining some of the business logic of the web app.

BP: this seems like it's on the other side of the browser. It's not script; it's how does the browser give the AT/API the info.

<andy> apologies, I have to go

<jcraig> e.g. Twitter responds to j and k as "previous" and "next" intents. Screenreader gets a series of focus, node deleted, node added, and selection changed events.

joanie: twitter, when you press j, I get a ton of accessibility events, but none of them say "move to next item". If there was an after event that could be communicated via the accessibility API, then I would know that Orca should speak the next tweet.

<jcraig> Would be nice if the web app could declare, "MOVE TO NEXT is about to happen"; event spew; "MOVE TO NEXT just happened"

BP: need a way to announce what just happened in a concise way

CS: we could work that use case into WAPA

BP: please give me a bug

JW: getting into the selection API, might generalize... when a script modifies a document, the way we normally enable AT to find out when that modification has ended is to use ARIA live regions. In this case, script is modifying the selection, and I wonder if there is a locking mechanism to prevent the selection from being queried while it's being modified and to notify AT when it's done

<benjamp> https://github.com/w3c/editing-explainer/issues

<benjamp> please file a bug tracking the twitter issue and the need for custom Intention events on github above

JS: lots of AAPI events when hitting j in twitter

CS: AAPIs are noisy and low level

JS: something similar to an ARIA live region for the selection object

joanie: the answer to the 'j situation' is that right now you're talking about browser defined events, but you're thinking about custom events, which 'next item' would be?

BP: yes I'm thinking about it, but it's long term

joanie: is author initiating or consuming intention event?

BP: both
... author can catch any event and then call the declareIntention API and say what the intention of that event is

KHS: isn't that custom?

BP: you can only send the defined set of events. custom would be making your own events

JC: so the server can declare the user's intention?

<Zakim> MichaelC, you wanted to ask about the comparison (similarities and differences) between this approach and IndieUI

BP: yes, the app can. If it's wrong that's an app bug
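
A rough sketch of the author pattern Ben describes; the declareIntention() call and the "moveToNext" intention name are placeholders for an API that is still being designed, not something that exists today.

  // Hypothetical: the author catches a device-dependent event and declares
  // what it means, Twitter-style ("j" means "move to next item").
  declare function declareIntention(name: string, target: Element): void; // placeholder

  document.addEventListener("keydown", (e: KeyboardEvent) => {
    if (e.key === "j") {
      const next = document.querySelector(".item.selected + .item");
      if (next) {
        declareIntention("moveToNext", next); // AT could announce this intent
        // ...the app then moves its own focus/selection to the next item...
      }
    }
  });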

MC: how are these similar to and different from IndieUI events? we need to figure that out, but not right now

RS: talking about use cases already in IRC
... speech API in browsers. would be good to connect to intent events

BP: how does that differ from the browser itself?

JC: I think what Rich is describing is that the user has a greasemonkey script or browser extension that acts as AT and can listen to these selection events and then call the speech API to have it spoken. I think those events exist

RS: yes, but you have to roundtrip.

MC: is that a feature request on the selection API?

RS: Android puts the selection in the event and it's nice
... what about annotation?

<benjamp> file a bug in Selection API to include getSelection in the event: https://github.com/w3c/selection-api

CS: both selection and annotation have ranges that may span elements

BP: I want to talk about the events that are in IndieUI now, and how they fit into the specs being built in WebApps
... IndieUI has undo, scrolling, etc., which we have too
... are you interested in putting the things that webapps/sites need for mainstream use into WebApps with good accessibility, and only keeping the things we're not covering in IndieUI

JC: yes, presuming that those mainstream APIs provide enough introspection for accessibility and AT needs

JS: yes

MC: if there are going to be general purpose events that cover our requirements, we prefer those. From the IndieUI WG side, we need to think about how that impacts our deliverables.
... if you think some will be taken up in WebApps, maybe we should look at whether we can do that with all of them. IndieUI might continue to exist as a non-deliverable group to push use cases.

BP: events are scroll, undo/redo, selection

JC: mark request is for selection in rich text but also selectable items like table rows and lists.

BP: need to disambiguate text selection and item selection across the web. The Selection API might own this

CS: what about media events?

JS: these are general purpose events on os's but not on the web

MC: one idea would be to adopt the model of intention events, develop the ones that no one else is doing on that model, and then roll it into another spec

<Zakim> jcraig, you wanted to mention concurrent editing related to selection API

JC: media events you can trigger via methods other than keyboard

BP: we have traction on some events in WebApps. IndieUI has traction on other events. you should keep those for now because otherwise they won't have any traction

joanie: media events feel similar to move next in twitter

BP: there is no native control in move next, but there is in media

JS: there's html api for them

joanie: move next/previous isn't really custom. It exists in native OSs
... might be good to expand the set of intention events

JC: concurrent editing poses challenges for the selection API. Need multiple selections at the same time, associated with other users

BP: selection is the user at this machine
... other user's selection should be a different object.

JC: I think we want the same events/API
... example... there are a lot of concurrent editing coding sites. The purpose is to do a JavaScript challenge while I watch you edit it. Need an API to make that concept accessible in general.
... whether or not you call it selection, it would be similar

BP: maybe it inherits from selection, but it's a different thing
... might need its own spec. might be related to annotations... or not...

JC: annotations are permanent and selection is temporary

MC: feels like a version 2 problem

JC: note that we are aware of the use case but are not dealing with it just now

<jcraig> scribe: jcraig

<scribe> scribe: jcraig

JW: possibility of remote selection is to consider it a transient readonly annotation
... re: scope of WG events activities: One approach would be to move some events into other groups. Another approach is to move that work here.
... point being that it may be appropriate to modify the TFs to include other groups that are working on similar problems
... We want to avoid overlap and inconsistencies

BP: that is the plan for the explainer, to cover all the use cases

<Zakim> cyns, you wanted to say that wapa talks about virtualized list management and we could add twitter case

<jcraig_> CS: big problem for accessibility; for mainstream users; you can hack it

<jcraig_> BP: goal is to make this easy enough that it's desirable for authors to do it the right way, not the hacky way

<jcraig_> RS: ???

<jcraig_> CS: groups of intentions that are related to ???

<jcraig_> BP: explainer is the umbrella work; does not include all individual pieces

<jcraig_> MC: considering including IndieUI in the Editing TF

MC: Would like to sort out where all the discussions are happening

BP: We need to rename the TF to something like "editing and intents"

MC: Combining TF is an agenda item for near future.

<cyns> CS: User Intention Events is a way of doing things. There are several groups of things we want to do this way, such as editing, scrolling, UI widget state changes (ARIA), selection, media, etc.

<Ryladog> CSS providing sufficient contrast for placeholder text

<MichaelC> scribe: MichaelC

Kim Patch requirements input

kp: Have worked on UAAG and Mobile TF

and have a speech recognition hands-free app

for people who need speech for everything

so command and control a priority

have been thinking about how touch and speech work together

orthodoxy has been that input methods need to work together

but my view is they can be separated

e.g., using speech and mouse

use the pointer, but then saying "click paste"

pastes where the pointer is

jc: also could combine with eye tracker

kp: yes

using conventional and unconventional methods together

another example is gestures like sign language

whaddaya think? too soon?

jc: initial implementation is usually to mimic another device, like a keyboard

the interpreter does that

triggers a script / macro etc.

kp: what about when using two input methods together?

jc: sounds like ideal is to have an entire command / control interface controllable by a specific modality

which might be less critical for mainstream user though important to PWD

kp: we don't know what people will adopt

jc: a11y features often adopted as mainstream, but not by majority

kp: take mobile phones

have separate touch interface, but have keyboard

but can't say "ctrl-p" to print

an ideal browser would have that function so can plug in external keyboard

js: supporting keyboard on phone out of scope for IndieUI

but actuating intentional events via keyboard is in our scope

jc: take undo or copy/paste

have different key combinations on different platforms

or consider the click event that is effectively used for keyboard and mouse activation

or say on map site where wheel zooms instead of pans

that's confusing

with IndieUI could map desired physical events to the ultimate action

and user could re-map

so taking speech command, the interpreter could figure out and fire the desired intended event

that's the goal - though we're not there yet
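
A hedged sketch of that goal: the page listens for the intent, not the physical input. The "undorequest" event type follows the IndieUI: Events 1.0 working draft; names and interfaces were still in flux, so treat this as illustrative only.

  // The app handles the intent; it does not care whether it came from
  // Ctrl+Z, Cmd+Z, a speech command, or a gesture.
  const editor = document.getElementById("editor");
  editor?.addEventListener("undorequest", (e: Event) => {
    undoLastEdit();      // app-specific; placeholder for illustration
    e.preventDefault();  // tell the UA the request was handled
  });

  function undoLastEdit(): void {
    // ...application-specific undo logic...
  }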

kp: what about supporting multiple modalities at once?

jc: web app would care about order events came in

in the mouse plus speech example, it doesn't have to know about that

kp: add gesture to turn on / off mic

jc: that's outside this context

kp: gesture for click?

jc: app still doesn´t care

just gets the events

kp: anything special the mobile tf should examine?

often we look at keyboard issues and assume IndieUI is addressing

khs: are mobile interfaces out of scope of IndieUI?

jc: yes

kp: keyboard accessibility has been the base fallback input event

khs: hope that will migrate to intentions

jc: native platforms will handle

there may be many ways to actuate a given intention

the problem is when author uses non-standard control

ARIA allows us to get values

but hard for user to manipulate

and input values

kp: so I would be able to use speech, touch, gesture

and access both native and non-native controls

jc: yes

kp: is the chosen word "intention"?

js: we're sorting that out

it's our working term

khs: so be careful of putting it in techniques

mc: use in draft, just recognize will need to update when the terms stabilize

jc: had sent comments to UAAG with a suggestion

kp: UAAG has a clarification on "keyboard accessibility" that it applies to independent controls

but the IndieUI terms might work

khs: mobile techniques not there yet?

kp: starting to look at that stuff

jc: you're looking at "intent"?

maybe "input independent" is better

or "device or modality independent"

"intent" has a slightly different meaning

kp: what else should we be thinking about?

waypoints we should be looking for?

mc: take a look at heartbeats

jc: suggest you observe our interaction with WebApps today

to get some non-WAI thinking

kp: what are your key work points coming up

jc: we're sorting out overlap with other groups

looking to meet all the use cases - without worrying about which spec addresses which part of the use cases

kp: speech users struggle with single-key shortcuts

do you know the issue?

<silence/>

it's not obvious to people

but it's easy to accidentally trigger such an action

by happening to say a word that is a command

jc: so AT sends the letters, not the words?

kp: yes, it doesn't know what's what

jc: Then that's a bug in the AT. AT has way to differentiate, so it shouldn't be triggering accidental or raw input.

kp: it can be a pain in the arse

<tangents into details/>

js: we've been discussing that mappings could be packaged as a product

re-mappability is part of the design

jc, kp: <getting technical/>

jd: does this group have a speech recognition representative?

js: no, though there are other modalities not represented

jc: and some that are; would be nice to have that POV

jd: in the example of "save as", it's faster to issue the command than to navigate the menu

kp: I can control a new app via the menu

if there are only commands, I have to learn them

so harder at the start, still need the menu

but easier once learned

keep in mind there are billions of possible speech commands

early attempts tried to include them all

run out of space

in the hardware brain and the bio-brain

jd: as an AT developer I am interested in core actions

have observed of emergence of a broader class of user intentions

e.g., "next tweet"

even just "play" on a video without having to navigate to it, etc.

there exists a broad class of actions shared among applications

jc: much of this is in the menu, so controllable

jd: <combines too many adjectival clauses for scribe to capture>

if we could identify those, could meet needs of more users

kp: am hoping for a standard to help app folks have consistent approaches

make it easier to remember all the commands

jgw: mixing event types means there could be conflict over what sort of events might be sent

and the app developer will have to sort that out

but might not know specifics of the user environment

jc: yes, that is a concern

kp: yes that's important

I've been assuming I can use a mix

jc: could have one intent trigger another, which triggers another

kp: want to get to final intent

but don't want to lose the daisy-chain

kp: will review specs when they're published

various: here's a bunch of stuff you can read

kp: please send me refs

WebApps coordination continued

The IndieUI WG rejoined the WebApps WG to continue the discussion of intention events and editing.

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2014/10/29 15:57:50 $
