See also: IRC log
<trackbot> Date: 19 December 2014
<scribe> meeting: W3C SVG Accessibility Task Force
https://apps.na.collabserv.com/meetings/join?id=4789-1811
passcode scubamonkey
http://lists.w3.org/Archives/Public/public-svg-a11y/2014Dec/0007.html
<shepazu> https://talky.io/svg-a11y
<shepazu> http://describler.com/
<shepazu> http://describler.com/samples/aria-bar-chart.svg
doug: this is a demo of a 3D SVG
Bar Chart. It is marked up with ARIA
... tabbing to the chart and reading the description of the
chart
... I am using the Web Speech API
... I am overriding the keyboard events to speak the bar
chart.
... it acts like a screen reader
... I hit the d key on the axis to get more details
... I have implemented 2 levels of detail
... the d key responds with details
... if I hit “S” and “p” for play it sonifies the chart
... what I did was I created a trend line from the different
bars. and I created a cursor that followed the trend line. the
position sounds higher or lower based on the trend
... in a scatter plot you don’t want to hear all the points.
the trend line is a very good feature
... once we have the trend we can describe vocally the
trend
... the sonifier is a nice feature but it is a bit extraneous
as you can describe the trend verbally
... I am using a structured svg file. I have made the order of
the file such that it is in the order it should be read
... describler is the app
... none of this is intended as production code
... we need to describe how these data visualizations can be
done
... end users want to use their own screen readers vs. self
voicing apps
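The self-voicing behavior Doug describes above (keyboard commands answered with spoken output, and a trend line fitted to the bars for sonification) can be sketched roughly as follows. This is a minimal illustration, not Describler's actual code; the function names (`fitTrend`, `valueToFrequency`, `speak`, `onKey`) and key bindings are assumptions based on the demo description.

```javascript
// Least-squares trend line through the bar values (x = bar index),
// as in "I created a trend line from the different bars".
function fitTrend(values) {
  const n = values.length;
  const meanX = (n - 1) / 2;
  const meanY = values.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - meanX) * (values[i] - meanY);
    den += (i - meanX) * (i - meanX);
  }
  const slope = den === 0 ? 0 : num / den;
  return { slope, intercept: meanY - slope * meanX };
}

// Map a data value to a pitch so "the position sounds higher or
// lower based on the trend". The 220–880 Hz range is illustrative.
function valueToFrequency(value, minV, maxV, minHz = 220, maxHz = 880) {
  const t = maxV === minV ? 0.5 : (value - minV) / (maxV - minV);
  return minHz + t * (maxHz - minHz);
}

// Speak text with the Web Speech API when running in a browser.
function speak(text) {
  if (typeof speechSynthesis !== 'undefined') {
    speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
}

// Overridden keyboard events: "d" answers with details, "p" would
// sweep a cursor along the trend line, feeding valueToFrequency(...)
// into a Web Audio OscillatorNode (browser wiring omitted here).
function onKey(event, values) {
  const { slope } = fitTrend(values);
  if (event.key === 'd') {
    speak(slope >= 0 ? 'Values trend upward.' : 'Values trend downward.');
  }
}
```

The pure functions (`fitTrend`, `valueToFrequency`) carry the logic; the speech and audio calls are kept behind feature checks so the same sketch degrades quietly outside a browser.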
Amelia: it is a great starting point
doug: should just speak in chrome
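Since the demo's speech output depends on browser support (hence "should just speak in chrome"), a page would typically feature-detect the Web Speech API before wiring up self-voicing commands. A minimal sketch, with `speechSupport` as an illustrative name:

```javascript
// Returns true when the given global object exposes the Web Speech
// API's speechSynthesis entry point (e.g. window in Chrome).
function speechSupport(global) {
  return typeof global === 'object' && global !== null &&
    'speechSynthesis' in global;
}

// In a browser: if (speechSupport(window)) { /* enable self-voicing */ }
```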
<fesch> http://lists.w3.org/Archives/Public/public-svg-a11y/2014Dec/0009.html
fred: I would like to use this as
a starting point for our discussion
... I believe the goal is to define a minimal set of
concepts
... I would like to be able to cover 85% of graphics
... so that we convey the information to a blind user
jason: I think it is important to
define what is native to an assistive technology but I want
this to be extendable.
... I don’t mind taking an iterative approach
fred: that sounds great
doug: I think we are more or less
in agreement
... my document is a taxonomy and chart types
... we need to define interactions that we want
... starting with a minimal set of this - we are in
agreement.
... I just want to be clear that we need to look at a wide
variety of charts to do this.
... for example D3 has many things that are beyond 2 axis and
connectors
... we need to take a serious look at this broad range
... I would rather look at this early and not wish we had done
something essential early on
fred: I am not sure that we are going to find a lot of new components …
<fesch> rich: mentions that accessibility has ignored graphics to date...
rich: we are going to have to define experiences and interaction models that are interoperable. mainstream ATs have not dealt with this
Amelia: we are going to have to
focus on data concepts and extensibility
... beyond 2 dimensional to a time dimension to create an
expandable frame of reference
Amelia: from there that can be the basis for describing what the interaction can be for the assistive technologies.
amelia: different charts will have different forms of interaction
<Zakim> shepazu, you wanted to mention aria roles, maps
doug: I agree. Accessibility is
not only about blind people
... we need to think about the visual as well.
... this is not solely about blind people
... we do need to focus on the data
... the taxonomy bit that I spoke about in my document is a set
of proposed roles and attributes
... maps will be a different mode of interaction
... this should be different from just data visualization
Leonie: the audio helps with the visualization
fred: I don’t know if it will
impact the taxonomy so much. … we are not saying how we are
going to present from the taxonomy but we do need to create the
core concepts.
... I don’t agree that we need to treat maps separately
... they are all going to have similar cores with synonyms
rich: we need to understand that this goes beyond blindness. people with attention deficit and learning impairments benefit from different levels of verbosity
doug: Benetech has MathTrax (a NASA-funded project) we should have in the scope of the group but not necessarily all visualizations. we should think about equations too.
fred: we should talk about the
taxonomy next time
... we start on taxonomy January 9
... we could talk about what to include or not include over
the holidays
amelia: beyond blindness we need to indicate which visual aspects impact data conveyance. some things may be decorative
doug: actually having the visual aspects (the blue bar) so that sighted colleagues can have a frame of reference when working with a blind user
Present: Rich_Schwerdtfeger, Fred_Esch, +1.609.759.aaaa, Doug_Schepers, LJWatson, +1.781.565.aabb
Minutes: http://www.w3.org/2014/12/19-aria-minutes.html