
Community & Business Groups

Proposed objectives and next steps for this group

My name is Ed Summers. I am a blind software engineer and an accessibility specialist. I lead the accessibility team at SAS Institute. SAS is the market leader in Business Analytics software and the largest independent Business Intelligence software vendor. SAS also provides software to K-12 and higher education organizations at no cost or low cost.

This community group was formed as a result of a meeting on November 21, 2011. The meeting was hosted at SAS’ corporate headquarters. The participants included Dr. Bruce Walker from the Georgia Tech Sonification Lab, Dr. John Gardner from Viewplus Technologies, Doug Schepers from W3C, and several representatives from SAS Research and Development.

During this meeting, we reviewed recent advances in the accessibility of data visualizations and related multi-modal user interactions. We discussed next steps towards greater access to graphical representations of structured data on the web. We agreed that new web standards would greatly increase access to this information for users of all abilities and particularly users with visual disabilities. We agreed to form this community group to facilitate that goal.

Proposed objectives

Technology exists to serve humanity. There are pressing social needs that require immediate attention. I propose that we initially focus on these pressing social needs and iteratively expand our scope in a manageable fashion.

First and foremost, students with visual disabilities do not have equal access to graphical representations of mathematical functions and structured data in mainstream textbooks, e-learning systems, and standardized tests. This is an artificial and unnecessary barrier for millions of students in K-12 and higher education.

Second, personal navigation is inherently difficult for people with visual disabilities. Mainstream printed and electronic maps are not accessible, so it is very difficult for them to build a mental map of spaces, recognize landmarks, etc.

These two problems combine to create formidable obstacles for people with visual disabilities. For example, imagine a recent high school graduate with a severe visual disability as she arrives at a major university for her freshman year. Her first challenge is to learn to navigate a university campus and a new city. She must find new buildings and new classrooms at the beginning of each semester. She must acquire instructional materials in an accessible format in a timely manner. She will face calculus, statistics, economics, and other classes with limited or no access to the images and graphics in her textbooks.

Next steps

I propose that we systematically define a set of initial requirements and use cases that capture the challenges faced by real students with disabilities. This pragmatic approach will limit the scope of the problem and allow us to make progress on proposals to solve it. We can then iteratively expand the scope in a manageable fashion.

Please share your thoughts on this proposal. I look forward to a productive discussion that will produce tangible proposals in a timely manner.

13 Responses to Proposed objectives and next steps for this group

  • Hi Ed, very nice to meet you and thank you for sharing the history of the group’s formation. I think your ideas for our initial scope of work sound like a good start.

    How do you propose we get started? Do you think it would be a good idea to find out what requirements and standards are currently in place for making complex graphics in textbooks accessible and then expand upon those requirements? I know there are some sort of requirements for alt text, but I doubt that they regulate meaningful descriptions of detailed images.


    • Hi Carla, I believe alt text is insufficient because it does not allow blind students and professionals to acquire knowledge first-hand. For example, all students must learn how to determine the slope of a line and determine the y-intercept of that line. I think we need standards that can support user agents that fully leverage sonification and the capabilities of mainstream mobile devices, gaming consoles, etc. I think use cases like the slope and y-intercept examples should drive additions to SVG, etc. In other words, let’s enhance the SVG format so an authoring tool can, by default, add enough semantic information to an SVG image of a line chart so a blind student can determine slope and y-intercept via non-visual modes such as touch or hearing.
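      As a rough sketch, markup along these lines could carry that semantic information. To be clear, the data-* attribute names below are purely illustrative inventions, not part of SVG or any existing standard; only role, aria-label, title, and desc are existing mechanisms:

      ```xml
      <!-- Hypothetical sketch: data-series, data-equation, data-slope, and
           data-y-intercept are invented for illustration, not a real standard. -->
      <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"
           role="img" aria-label="Line chart of y = 2x + 1">
        <title>Line chart of y = 2x + 1</title>
        <desc>A straight line with slope 2 and y-intercept 1.</desc>
        <line x1="0" y1="99" x2="40" y2="19"
              data-series="f"
              data-equation="y = 2x + 1"
              data-slope="2"
              data-y-intercept="1"/>
      </svg>
      ```

      A user agent that understood such attributes could then answer “what is the slope?” through speech or sonification instead of forcing the student to rely on a visual reading of the line.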


  • Hi, great to meet all.

    The DIAGRAM Center was awarded a grant from the Department of Education in the USA. We started with use cases and derived a set of requirements, and from those we developed a content model. I think the use cases and requirements are in a working group wiki, but if you folks wanted to see them, I am sure we could expose that information.

    The major point is not to duplicate effort.


  • Ed,

    Thanks for getting this group started. As you know, we are keen to work on solutions that put all students on an equal footing in education and subsequent careers. I agree on not duplicating efforts, so we will want to collate links to all sorts of resources. You already know about our automatically-generated descriptions of graphs, and our sonification and auditory graphing tools and research. We’re happy to contribute actively, in whatever manner helps move things along!


    Georgia Tech Sonification Lab Website:


    • Robert Muetzelfeldt

      Bruce: Can you give a direct link to your work on “automatically-generated descriptions of graphs”?



  • Robert Muetzelfeldt

    It might be useful to realise that solutions developed to help visually-impaired people ‘read’ the shape of mathematical functions or spatial patterns can also benefit sighted people.

    My field is ecological and environmental science, and just about every seminar has qualitative, verbal interpretations of graphs and/or of spatial displays of empirical or simulation results (e.g. global patterns resulting from climate change). Being able to automatically generate such interpretations for visually-impaired people is clearly highly desirable; but it is also highly desirable for the whole research community, since it requires deep understanding of just what constitutes the significant features of some set of results, and enables much more succinct communication of complex results. Moreover, these automatically-generated interpretations will be susceptible to being marked-up in XML, thus allowing the salient features of visually-presented results to be added to a Semantic Web of ecological and environmental research.

    In other words, there is a powerful synergy between two quite different use-cases for developing tools for automatic interpretation of visual information. This synergy can help to make a stronger case for research and development than either use-case alone.


  • Hi Ed,

    Just seen your video on Bloomberg and have to congratulate you. In the UK we have been developing products for blind and partially sighted people on the iPhone and iPad for everyday grocery items. We fully accept we have not developed the ideal system, as we have limited funds, but the fact is we have all the main UK supermarket brands involved, so maybe we could partner with our brothers in the US to get this helping more and more people. It seems the main institutes are promoting braille far more than technology, so it’s a real shame, but it’s probably because we don’t have the experienced trainers in the UK. We have numerous videos on YouTube and even one of myself, so we would be very interested in your comments. Best Regards, Neill Mennell


  • Dear Ed-
    I stumbled across a news item on SAS and you just by going on my Apple website. My husband’s vision is deteriorating to where he can no longer see to work on or repair computers. I would LOVE LOVE for you to talk to him! He is interested in computer technology for the Blind and DeafBlind. We are working with resources in assisting him with learning Braille and so on. He already has a little bit of background as mentioned previously. Please, please contact us! 🙂 We would LOVE to hear from you! Thank you!


  • Congratulations, Ed, on your amazing and valuable work. My niece, Lindsay, is blind (and incredible). She recently graduated from Brown and is working at the MA Eye and Ear Infirmary and Schepens Eye Research Institute. They just had an article published in JoVE about a gaming program they developed for the blind. I thought you might be interested. The article, “Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind” can be found at: It is also being featured in Popular Mechanics. Hope you enjoy it. Thank you.


  • Hi Ed,

    My son and I are working to change policies to allow text-to-speech access on all high-stakes tests.

    He is dyslexic and is extremely successful because of the technologies that allow him access on his iPad, like Bookshare. Too many students with print disabilities are not able to equally or effectively access these test materials. I have started a website highlighting this cause and will be traveling to DC later this month with my son to advocate for change.

    Our site is

    Your help by way of ideas to better get the message out would be appreciated.

    Any ideas would be helpful,


  • Hello Ed! My father has retinitis pigmentosa, I would like to get him trained on using an iPad. Any suggestions on where and when there are training sessions?


  • Hi Ed, It has been suggested that I contact you regarding our efforts at NYU’s School of Engineering to create user-informed and accessible code-learning modules. We would be thrilled to have you consult with us on the project. If you would be interested, please be in touch! I would like to chat with you whenever it is most convenient.


    Claire Kearney-Volpe

