W3C

- DRAFT -

User Agent Accessibility Guidelines Working Group Teleconference

14 Feb 2013

See also: IRC log

Attendees

Present
+1.609.734.aaaa, Jim_Allan, EricHansen, Jan, Jeanne, Greg_Lowney, Kim_Patch, kford, +1.425.381.aabb
Regrets
simon
Chair
JimAllan, KellyFord
Scribe
allanj, jeanne

Contents

    Topics
        1. Response to HTML5 Accessibility Taskforce Bugs
        2. Jeanne: Creating tests for SC verification in Candidate Recommendation
        3. ReChartering
        4. glossary gardening
    Summary of Action Items

<trackbot> Date: 14 February 2013

<allanj> http://lists.w3.org/Archives/Public/w3c-wai-ua/2013JanMar/0026.html

<allanj> http://lists.w3.org/Archives/Public/w3c-wai-ua/2013JanMar/0015.htm

<allanj> http://lists.w3.org/Archives/Public/w3c-wai-ua/2013JanMar/0039.html

<allanj> http://lists.w3.org/Archives/Public/w3c-wai-ua/2013JanMar/0045.html

<allanj> http://mothereffingtoolconfuser.com/

<Jan> http://mothereffingtoolconfuser.com/

Response to HTML5 Accessibility Taskforce Bugs

<allanj> http://lists.w3.org/Archives/Public/w3c-wai-ua/2013JanMar/0039.html

<allanj> scribe: allanj

Jim gives an overview of the bugs and response

jan: no concerns

kim: no concerns

jeanne: no concerns

<scribe> ACTION: jim to send bug response report to html5 A11y taskforce [recorded in http://www.w3.org/2013/02/14-ua-minutes.html#action01]

<trackbot> Created ACTION-799 - Send bug response report to html5 A11y taskforce [on Jim Allan - due 2013-02-21].

Jeanne: Creating tests for SC verification in Candidate Recommendation

what the group will have to do after last call to get ready for candidate recommendation.

during CR, the spec is 'almost' complete. W3C says evaluate, send comments, begin implementation.

need to find 2 real world implementations, write a report and move to next level

need to think ahead to CR

1. decide how we structure the CR Exit Criteria. under what conditions is the spec successful.

the criteria must be approved by W3C. the criteria and spec must be credible.

2. (big) need to write 'tests' for each SC across many products, which we will use during CR.

<jeanne> http://www.w3.org/WAI/AU/2012/ATAG20tests/ATAG2-10April2012PublicWD-Tests-rev20121119

a) define test procedures

b) things the tester needs to know before they begin testing

c) specific test for each SC. each test has an Assertion - for 1 item. if multiple items in an SC, each needs an Assertion

<Greg> To clarify, an SC might have multiple tests even if it doesn't have sub-items.

<Greg> (e.g. ATAG A.2.1.1.)

UAAG can use WCAG test items for the purpose of testing UAAG SC, to ensure the UA does what it is supposed to do based on WCAG code.

we have a wiki page http://www.w3.org/WAI/UA/work/wiki/Extensions_needed_for_Implementation_Examples

if you find extensions or implementations (mobile or otherwise), please put them here.

<jeanne> WCAG CR exit criteria -> http://www.w3.org/TR/2008/CR-WCAG20-20080430/#status

<scribe> ACTION: jim to create wiki page for SC testing criteria. [recorded in http://www.w3.org/2013/02/14-ua-minutes.html#action02]

<trackbot> Created ACTION-800 - Create wiki page for SC testing criteria. [on Jim Allan - due 2013-02-21].

ReChartering

http://www.w3.org/WAI/UA/2010/uawg_charter

need to recharter by mid-March

need to create a new timeline.

kf: we are more aspirational in our charter. need to be more realistic

js: need to spell out all the steps necessary to meet milestones

jr: time to write tests. UA and AU are very different. They are not machine checkable. It takes longer.

glossary gardening

proposed: *operating system (OS)*: Software that supports a computer's basic functions, such as scheduling tasks, executing applications, and managing hardware and peripherals.

- Note: Many operating systems mediate communication between executing applications and assistive technology via a platform accessibility service.

UA talks about OS as part of platform.

Update: *platform*: The software environment(s) within which the user agent operates. Platforms provide a consistent operational environment on top of lower-level software platforms or hardware. *Non-web-based* platforms include desktop *operating systems* (e.g. Linux, MacOS, Windows, etc.), mobile operating systems (e.g. Android, Blackberry, iOS, Windows Phone, etc.), and cross-OS environments (e.g. Java). *Web-based* platforms are other user agents. User agents may also employ server-based processing, such as web content transformations, text-to-speech production, etc.

- Note 1: A user agent may include functionality hosted on multiple platforms (e.g. a browser running on the desktop may include server-based pre-processing and web-based documentation).

- Note 2: Accessibility guidelines for developers exist for many platforms.

[REMOVE] operating environment --> platform

updated definition of user agent from Jan, incorporating comments from Greg.

-----------------

[UPDATE] *user agent*: Software that retrieves, renders and facilitates end-user interaction with Web content. If the software only performs these functions for time-based media, then the software is typically referred to as a *media player*, otherwise, the more general designation *browser* is used. UAAG 2.0 identifies several user agent architectures:

- *Stand-alone*: These user agents run on non-Web platforms (operating systems and cross-OS platforms, such as Java) and perform content retrieval, rendering and end-user interaction facilitation themselves. (e.g., Firefox, IE, Chrome, Opera).

- *Embedded*: These user agents "plug in" to stand-alone user agents in order to render and facilitate end-user interaction for content types (e.g., multimedia) that the stand-alone user agent is not able to handle. Embedded user agents establish direct connections with the platform (e.g. communication via platform accessibility services).

- *Web-based*: These user agents operate by (a) transforming the web content into a technology that the stand-alone (or embedded) user agent can render and (b) injecting the user agent's own user interface functionality into the content to be rendered.

- *Webview Component*: These user agents are used to package web content into non-web-based applications, especially on mobile platforms. *If the finished application is used to retrieve, render, and facilitate end-user interaction with Web content of the end-user's choosing, then the application should be considered a stand-alone user agent. If the finished application only renders a constrained set of content specified by the developer, then the application should not be considered a user agent.*

Note 1: Any of these user agent architectures may also employ server-based processing, such as web content transformations, text-to-speech production, etc.

Note 2: User agents may also include *authoring tool* features. (see "Relationship to the Authoring Tool Accessibility Guidelines (ATAG) 2.0" http://www.w3.org/WAI/UA/2013/ED-UAAG20-20130108/#intro-atag)

[ADD] User agent extensions (add-ins): Software installed into a user agent that has an extension feature, in order to modify the behavior of the user agent. Two common capabilities for user agent extensions are the ability to *modify the content* before the user agent renders it (e.g., to add highlights if certain types of alternative content are present) and to *modify the user agent's own user interface* (e.g., add a headings view).

jr: an app is data driven; some data structure comes into the app, gets rendered, and is put on the screen. there is a big difference between doing the simple data rendering, and the user telling the UA where to go to get information.
... makes our work easier.

js: dividing line is not clear. understand the concern.

user is selecting the data by choosing the app to get specific data

or user following a link,

jr: difference between user choosing versus author choosing.

an app is a data view widget.

eh: distinction not very sharp.

<jeanne> scribe: jeanne

JA: it will be hard to write tests for apps. Apps are more dependent on the hardware platform. Should we fail all Android apps because Android doesn't have a zoom?

JR: These are also covered by WCAG

<allanj> [add] second definition of user agent:

<allanj> 2. The software that is the subject of the conformance claim. This is the meaning of “user agent” as referred to in success criteria.

<scribe> scribe: allanj

eh: UA vs app, and what are you evaluating to make a conformance claim. if you evaluate something against the UAAG 2.0 guidelines, then it is a UA

jr: under user extensions. the UA has the extension feature. something plugs into the extension feature.

eh: wanted to keep the boundary between UA and the extension. is an extension part of the UA or not.

jr: if the user agent has a feature that allows extensions, then things that use that feature are extensions. There are base user agents; extensions are separate from the browser.

ISSUE: further discussion needed on mobile apps being UAs or not. Are they covered totally by WCAG

<trackbot> Created ISSUE-97 - Further discussion needed on mobile apps being UAs or not. Are they covered totally by WCAG; please complete additional details at <http://www.w3.org/WAI/UA/tracker/issues/97/edit>.

eh: time on glossary is very important.
... thanks to Jan for the efforts.

<jeanne> ACTION: Jeanne to add Jan's proposal on glossary cleanup with edits from Eric (editor's discretion) [recorded in http://www.w3.org/2013/02/14-ua-minutes.html#action03]

<trackbot> Created ACTION-801 - Add Jan's proposal on glossary cleanup with edits from Eric (editor's discretion) [on Jeanne F Spellman - due 2013-02-21].

close action-799

<trackbot> Closed ACTION-799 Send bug response report to html5 A11y taskforce.

close Action-800

<trackbot> Closed ACTION-800 Create wiki page for SC testing criteria..

Summary of Action Items

[NEW] ACTION: Jeanne to add Jan's proposal on glossary cleanup with edits from Eric (editor's discretion) [recorded in http://www.w3.org/2013/02/14-ua-minutes.html#action03]
[NEW] ACTION: jim to create wiki page for SC testing criteria. [recorded in http://www.w3.org/2013/02/14-ua-minutes.html#action02]
[NEW] ACTION: jim to send bug response report to html5 A11y taskforce [recorded in http://www.w3.org/2013/02/14-ua-minutes.html#action01]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.137 (CVS log)
$Date: 2013/02/14 20:11:26 $
