See also: IRC log
<patrick_h_lauke> btw i joined the call but just sorting out something noisy so on mute ;)
<Kathy> https://www.w3.org/WAI/GL/mobile-a11y-tf/wiki/WCAG_2.1_Success_Criteria_Submission_Requirements
Kathy: discussing edits
... the ones in the column "reviewed by task force" have
been reviewed and I think are finalized. M1-M10 will be
submitted as a full group
... that puts us mid-November when we get all of them submitted.
Our deadline is December 1 – that's a hard deadline
... Andrew said that if we're still working on tweaking some of
the wording we can go ahead and update after we've
submitted
... we are in pretty good shape
... since Patrick is here M6 first
<patrick_h_lauke> https://github.com/w3c/Mobile-A11y-Extension/blob/gh-pages/SCs/m6.md
Patrick: updated 10 minutes ago
Patrick: short name "input
agnostic" – all functionality can be operated without requiring
any particular type of input mechanism, suggested AA
... superset SC that tries to cover more than just the keyboard
and mouse scenario.
... potentially the working group may combine it with something else but
this is the culmination of the various input related SCs - the
gold standard
... will check glossary definitions
... will fall under inputs with assistive technology although
it does cover other types
Kathy: also in the description this is really a superset of the keyboard one. Note at end of description: the task force feels that this is a superset of the keyboard SC, and the working group may want to consider replacing the keyboard SC with this one instead. Thinking of this as 2.1 or Silver
Patrick: reading
description
... thinking about range – all input devices, all standardized
input devices
Kathy: may be supported
... the other one we talked about last week we tied to the
accessibility support – can we tie in something like that
Alan: can we go back to "without requiring any particular type of input" – negative wording of some type instead of "all"?
Patrick: Yes that makes sense – last sentence… without requiring any particular type of input mechanism – that makes sense
David: seems like we're trying to address IndieUI but don't have the tools today. We have the standard for touch and pointer but we don't actually have anything implemented – trying to build a framework for something that doesn't exist yet
Patrick: suggested techniques further down – we can already do this by relying on high-level input agnostic events such as focus blur click…
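Patrick's suggestion can be sketched in a few lines. This is an illustrative example, not code from the minutes; the stand-in element is hypothetical so the sketch runs outside a browser, where a real page would pass a DOM node instead.

```javascript
// Input-agnostic activation: wire behaviour to the high-level "click"
// event, which browsers synthesize for mouse, touch, keyboard
// (Enter/Space on native buttons), and screen-reader double-tap alike.
function wireActivation(element, onActivate) {
  element.addEventListener('click', onActivate);
}

// Tiny stand-in for a DOM element so the sketch runs outside a browser;
// a real page would use document.querySelector('button') instead.
function makeFakeElement() {
  const listeners = {};
  return {
    addEventListener(type, fn) { (listeners[type] ||= []).push(fn); },
    dispatchEvent(type) { (listeners[type] || []).forEach((fn) => fn()); },
  };
}

const log = [];
const button = makeFakeElement();
wireActivation(button, () => log.push('activated'));
button.dispatchEvent('click'); // mouse, tap, or Enter all end up here
```

Because only the high-level event is observed, there is no branch on input type that can get out of sync with what the user actually holds.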
David: last time I checked with TalkBack, you're swiping through and you're listening for an onblur – the swiping itself, the virtual cursor, is not triggering that blur activity – am I right on that?
Chris: no it's not even going to cause that because the talkback focus – you can only listen for that as part of a native API, it's not in the web browser
Patrick: focus and blur are fired
on, say, a tap with touchscreen and TalkBack – it depends what
exactly you are implementing
... if your focus is currently on an element like a button and
you moved to another element and activate that blur is fired on
the element where the focus previously was
Chris: developers typically take advantage of that if input focus is cycling through elements – the only way for talkback user to put focus on something would be actually to select
Patrick: further clarification: focus and blur are not fired when you would expect them in certain input mechanisms. Don't assume that a user can first focus and then separately click
David: the language right now is "all functionality can be operated without requiring any particular type of input mechanism" – like saying I want you to be able to write a letter without using any particular type of writing mechanism
Patrick: pen, dictating it…
David: 10,000 foot view of the keyboard thing
Chris: asking developers to be responsible for this level of agnosticism
David: we want to find universal accessibility, no matter what you are touching it with it's going to be activated, but I don't see how we can fix this – is going to take incredible bandwidth. This is a fundamental shift towards something that's not yet available
Patrick: I really don't believe
that it's not available. Maybe I'm missing something, but the
same argument that we just had could be made for the keyboard –
that if it's not mouse-workable it can't be made keyboard-workable
... key handling, mouse handling, touch handling, pointer
events – not science fiction, building applications and not
assuming one type of input. I'm not seeing how it's not
workable right now. I'm doing exactly this already
David: I don't think you can say
without requiring any particular type of input mechanism on our
current success criteria. It's very strong normative statement
that's superwide. I'm going to sound like a Luddite here but I
don't know how you can go that wide.
... we are in a three-year window now, that seems to be the
definition of the new charter. Maybe in three years we will be
able to say just use the high level
Kim: I'd add speech input to the description – mic as the device. I think we've been talking about input agnostic for a long time now and should address it
David: large width
Kathy: what are the scenarios that would not be possible today – maybe we can address how we can look at that
David: pulling up a list of all
of our success criteria – I think the other ones address what
this is trying to do except this one is going wider than all
the other ones. It's going beyond what our current technology
is I would say and it's getting at something that we don't yet
have. It's one thing to aspire to something but you can't
require – the role of WCAG in my understanding is...
... we're not inventors. We vet things. We standardize things
that are emerging, that are working well. We don't generally
create momentum of something that doesn't exist
Patrick: I contend that that's
not the case – I'm not proposing something that doesn't
exist
... I think we need to submit it and see with the working group
says
Chris: let's take a look at the
technique. Focus and blur, I would argue, in Android are not
input agnostic. They would not be usable the way people would
expect them to be usable using talkback on an android
... that part of this is very broken
David: is there something here Patrick that's not in your other proposals that exists today – we have five new success criteria over the past four or five weeks, they are all filling in – is this in place of all of those?
Patrick: this covers things that are directly mentioned in the other ones potentially making it future proof
David: we don't have much of a
future for 2.1
... it's basically filling a few gaps for the last couple of
years of WCAG
Patrick: let's remove it, let's park it for future versions and concentrate on pushing the others through
Kathy: previously we had this additional input relying on assistive technology. I just want to make sure we're not going to miss something if we take this out. We've got keyboard with AT and then this one was additional inputs – are we going to lose something?
Patrick: potentially one aspect
we are going to lose is users switching between technologies in
a session
... maybe a failure under the proposed pointer one
Comparing existing techniques
David: We have the proposed pointer, we have keyboard already. In the next three years are we going to see something new come out that's going to blow this out of the water
Kim: speech plus mouse device is common and has been for a long time
Kathy: not limit the user, but
draw the correlation that the types of input that are
available for use within the website or application can be used
together and are interchangeable. That covers what Kim is doing
and that's the point that I heard from Patrick. So if we change
this to say you can interchange the different input methods, that
might address Patrick's concern and your concern, David,...
... and Kim's example
... you can start with speech then move over to pointer then
keyboard – not locked into one technology to do a task
<Alan_Smith> How about this: All functionality can be operated without limiting interaction to a particular type of input mechanism
<patrick_h_lauke> (noting that i just updated https://github.com/w3c/Mobile-A11y-Extension/blob/gh-pages/SCs/m6.md)
David: trying to think what problem we are trying to solve
Kim: more examples of why it's important to be able to use two input methods at once
Alan: input – is human computer
interaction limited by input or should we be considering some
other term
... I don't see a definition for input – just want to make sure
that's clarified
Patrick: I think we can address
that as well – I've updated M6 as we have been talking.
Stripped down to core nugget that isn't covered by any of the
other SCs – this concept of concurrency. Concurrent input
mechanism might be a better short name for the new
stripped-down version
... all functionality can be operated with all input mechanisms
available to the user, and description about user being able to
switch between inputs – not assume
... that would cover the core – negating my concern of a website
that says "there is no touchscreen so I won't listen for
touchscreen input", which would block a user who then tries to
use one
... for the techniques, added a clarifier saying that however,
note the peculiarities of specific input mechanisms – blur
behaves differently for keyboard and mouse – that probably needs
more work
... concurrent input mechanisms?
David: when we say available to the user we just don't know what's available to the user
Kathy: do you have better wording
David: trying to be clear on what
problem we are trying to solve – sounds like we are trying to
solve a whole bunch of problems – person getting into their car
and switching to a different device. Also trying to solve the
problem of the universal input device that we all want but we
don't have yet. So we think that the way to meet this, until we
get that universal device, is a higher abstraction...
... level than before. I have huge concerns because it's so
wide. If we were to get this through we really don't need 2.1
anymore, so this is competing against existing criteria
Patrick: not necessarily – as
refocused, this only says you need to be able to use them at the
same time – other success criteria cover other things. This
is just addressing that user might be mixing and matching input
methods in one session. The way in which you can satisfy this
SC can be relying on high-level agnostic input – doesn't mean
they have to. It may be they register different...
... handlers but make sure they register them all at the same
time.
... scenario phone and then attach a Bluetooth keyboard on
their phone – if the site doesn't follow the advice that at any
point the user may use a different type of input then that user
couldn't use that keyboard even though they've just attached
it. I think now that we've just refocused it I don't think it
makes the others redundant. All these things, but also make
sure they work at...
... the same time as well
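Patrick's point about registering all handlers concurrently can be sketched as follows; `wireAllInputs` and the fake element are hypothetical names for illustration, standing in for real DOM wiring.

```javascript
// Register mouse, touch, and keyboard handlers together, instead of
// picking one input type up front, so a user who attaches a Bluetooth
// keyboard mid-session (or switches from touch to mouse) is never
// locked out.
function wireAllInputs(element, onActivate) {
  element.addEventListener('mousedown', onActivate);
  element.addEventListener('touchstart', onActivate);
  element.addEventListener('keydown', (evt) => {
    // Only Enter and Space activate, mirroring native button behaviour.
    if (evt.key === 'Enter' || evt.key === ' ') onActivate(evt);
  });
}

// Minimal fake element so the sketch runs outside a browser.
const listeners = {};
const element = {
  addEventListener(type, fn) { (listeners[type] ||= []).push(fn); },
  dispatchEvent(type, evt) { (listeners[type] || []).forEach((fn) => fn(evt)); },
};

const activations = [];
wireAllInputs(element, () => activations.push(1));
element.dispatchEvent('touchstart', {});            // finger tap
element.dispatchEvent('keydown', { key: 'Enter' }); // keyboard attached mid-session
element.dispatchEvent('keydown', { key: 'a' });     // non-activating key, ignored
```

Note that nothing here requires a universal input abstraction: the handlers are ordinary low-level ones, satisfying the SC simply because they are all live at the same time.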
David: so we're really saying: all functionality that's required by other success criteria, you need to be able to use interchangeably. That's okay if that's what we are trying to say
Kathy: yes, that's what we are saying
David: I've got a touchpad from 1996 with a stylus on it – that's available to me, I could plug it into my PS/2 port and suddenly they're required to support this device
Kathy: maybe tie it to accessibility support or supported input methods that have been defined
Patrick: if nothing actually happens if you plug the arbitrary input device into the computer that's not an input mechanism – supported by the operating system or something along those lines?
David: but then we are making a requirement to support every operating system. We need to make things tight enough. "All functionality with all input mechanisms" is wide.
Kathy: maybe we can put a note in this in the description stating to the working group what we are trying to accomplish, and then go from there. I don't think this is something the task force is going to solve.
David: I just would like to see a real user who's having a problem on the Internet right now – a widespread dumb practice by developers that could be fixed by our requirement
Patrick: Yahoo did a very naïve "if touchscreen is present, just react to touchscreen" and all of a sudden on mixed input devices such as the Surface 3 things didn't work for mouse users – that's exactly that scenario and a recent implementation – they did fix it after I alerted them to it
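The bug Patrick describes reduces to a single load-time branch; the function names below are hypothetical reconstructions for illustration, not Yahoo's actual code.

```javascript
// Anti-pattern: deciding which handlers to register from one feature
// check. On a hybrid device like the Surface 3 (touchscreen present,
// user on the mouse), no mouse handler is ever registered.
function chooseHandlersNaive(hasTouchscreen) {
  return hasTouchscreen ? ['touchstart'] : ['mousedown'];
}

// Input-agnostic alternative: register everything regardless of what is
// detected at load time, since input devices can change mid-session.
function chooseHandlersAgnostic() {
  return ['mousedown', 'touchstart', 'keydown'];
}
```

The fix is not better detection but abandoning the either/or branch entirely.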
Kathy: another example educational interactive elements – a lot of times they are locking you into – if you start using keyboard you have to finish it with keyboard. If you started with a mouse same – they've changed the interface based on what you start out using
<patrick_h_lauke> video of the flickr "touch OR mouse" problem that yahoo fixed after i alerted them to it https://www.youtube.com/watch?v=f_2GKsI9TQU
David: I can see a huge benefit in being able to swap devices in the middle of a task
Kathy: that's what this is addressing
David: we can pass it onto the working group with a note that says something like that. To me the language right now doesn't speak to this – big huge red flags all over the place even in the current language. I'd like to see a type thing that basically says you can swap out what you're using. And I don't know how to say that at this point
Kim: swap out is not wide enough for my scenarios (using speech and mouse device in concert)
Kathy: limiting to certain user groups – if you aren't able to use those methods you're in trouble because you can't do it?
David: I've never in 15 years of doing accommodations with people seen a function that should have been done with the mouse but couldn't
<patrick_h_lauke> proposed new SC text: "All functionality can be operated even when a user switches between input mechanisms"
Kathy: I've seen that
specifically in educational materials. We've done usability
studies with it as well and it ends up being problematic
... low vision users use keyboard, also want to use the mouse in
one area, and then are blocked from using keyboard and that's
frustrating
David: it sounds like several people in the group want to see this go through to the next level with a working group. I won't stand in the way of that. It just doesn't seem like the same level of maturity of some of the others.
Patrick: being mindful of the time, possibly as a resolution – all functionality supported when a user switches between input mechanisms
<patrick_h_lauke> changes committed https://github.com/w3c/Mobile-A11y-Extension/blob/gh-pages/SCs/m6.md
Patrick: making that change
Kathy: Chris – draft of M14?
Chris: yes – it's essentially done
RESOLUTION: Patrick to make final edits and ready for working group
Kathy: next week we will jump into the orientation SC that Jon has drafted. Other two input SCs from Patrick?
Patrick: will work on them for next week
<patrick_h_lauke> did final edits i think https://github.com/w3c/Mobile-A11y-Extension/blob/gh-pages/SCs/m6.md
<Alan_Smith> Still wondering if this wording would give the developers control: All functionality can be operated without limiting interaction to a particular type of input mechanism
<patrick_h_lauke> changed the Principle/Guideline. don't think it now needs any glossary additions/changes
<Alan_Smith> ok, on the glossary additions.
Present: shadi, patrick_h_lauke, Alan_Smith, Kim, David, Kathy, Alan, Jatin, marcjohlic, chriscm
Regrets: jeanne, Jonathan
Scribe: Kim
Date: 20 Oct 2016
Minutes: http://www.w3.org/2016/10/20-mobile-a11y-minutes.html