Augmented Reality and Accessibility



by Scott Hollier

Introduction

Research in Virtual Reality (VR) and Augmented Reality (AR) is generally considered to have a similar focus based on interface design. While there is certainly crossover between the two areas, the relatively recent emergence of AR in the public realm has seen more AR-specific accessibility-related research projects. These projects tend to focus on how big data about a user's environment can be collated in real time, how that data can be converted into an accessible AR experience, and how AR can be used as an instructional tool for people with disabilities completing particular real-world tasks.

However, while AR-specific research in accessibility is growing, interest in the AR implications for people with disabilities remains relatively low. To illustrate the point, one research project examined 8889 studies in the field and discovered that less than one percent considered software tools for building multi-device inclusive environments. The research concluded that any solution needs to consider the social conditions of users, such as illiteracy and people living in underserved communities, and that there is a vital need for the identification of new research questions in the context of multi-device inclusive environments (Bittencourt et al., 2016). As such, the combination of the relatively few projects undertaken to date and the relatively recent arrival of AR in the mainstream has resulted in existing projects being quite exploratory and varied in their approaches.

Interface design

A number of projects have focused on how to create an accessible AR interface for people with disabilities to assist in their real-world interaction. There are two aspects to AR interface design: the first tends to focus on the best way to convert real-time spatial data into an accessible representation; the second on the delivery of that information, such as via haptic or audio feedback.

A significant part of this focus has been in relation to the delivery of Scalable Vector Graphics (SVG) to people who are blind or visually impaired. One study indicated that conversion to SVG is the best option for delivering big data environmental aspects in real time, based on either pre-identified information or data gathered from the environment itself (Calle-Jimenez & Luján-Mora, 2016). Several other projects have used SVG in this context.

A research project of particular interest in this regard related to providing support to people who are blind or visually impaired through the creation of a prototype designed to enrich graphical information for better accessibility. The prototype converted descriptive and navigational information into standard Scalable Vector Graphics and also provided mechanisms to analyse and aggregate the data (Weninger, Ortner, Hahn, Drümmer, & Miesenberger, 2015). Similar projects include the conversion of navigation data from multiple data sources such as building floor plans (Griffin et al., 2017; Joseph et al., 2013).
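To make the SVG approach concrete, the sketch below shows how descriptive and navigational information might be embedded in SVG so that assistive technologies can announce it. The MapFeature data model and toAccessibleSvg helper are illustrative assumptions written in TypeScript, not the mechanism used in the cited prototypes; the accessibility technique itself (SVG title and desc elements referenced via aria-labelledby) is standard practice.

```typescript
// Sketch: embedding descriptive and navigational information in SVG.
// The MapFeature shape and toAccessibleSvg helper are illustrative
// assumptions, not the data model used in the cited prototypes.

interface MapFeature {
  id: string;
  name: string;        // short accessible name, e.g. "Main entrance"
  description: string; // longer navigational description
  pathData: string;    // SVG path outline of the feature
}

function toAccessibleSvg(features: MapFeature[]): string {
  // Each feature carries a <title> (accessible name) and a <desc>
  // (longer description) that screen readers can announce.
  const shapes = features.map((f) =>
    `  <g id="${f.id}" role="img" aria-labelledby="${f.id}-t ${f.id}-d">
    <title id="${f.id}-t">${f.name}</title>
    <desc id="${f.id}-d">${f.description}</desc>
    <path d="${f.pathData}" />
  </g>`).join("\n");
  return `<svg xmlns="http://www.w3.org/2000/svg" role="group" aria-label="Floor plan">
${shapes}
</svg>`;
}

// Example: a single doorway extracted from a building floor plan.
console.log(toAccessibleSvg([{
  id: "door-1",
  name: "Main entrance",
  description: "Double doors, opens outward, ramp access on the left",
  pathData: "M10 10 H 40 V 20 H 10 Z",
}]));
```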

Another significant area of research was how best to deliver the data in real time so that it is useful to people with disabilities. One study specifically focused on the use of an audio display to convey geographical information. To identify the best method, three parameter mapping sonification methods were empirically evaluated to interactively explore discrete and continuous digital elevation models by auditory means (Schito & Fabrikant, 2018). The evidence suggested that participants could successfully interpret sonified displays containing continuous spatial data. Specifically, the auditory variable pitch led to significantly better response accuracy compared to the sound variable duration. The research indicated that while it was difficult to explain the data through a training mechanism, the more immersive the experienced soundscape, the better participants could interpret the sonified terrain (Schito & Fabrikant, 2018).
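As a rough illustration of parameter mapping sonification, the sketch below maps an elevation profile onto oscillator pitch using the standard Web Audio API. The linear elevation-to-pitch mapping, the 220-880 Hz range, and the step duration are assumptions made for the example, not the parameters evaluated in the study.

```typescript
// Sketch: parameter mapping sonification of elevation data using the
// Web Audio API. The linear elevation-to-pitch mapping and the value
// ranges below are illustrative assumptions, not the study's settings.

function sonifyElevations(elevations: number[], stepMs = 200): void {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.connect(gain).connect(ctx.destination);
  gain.gain.value = 0.1; // keep the output quiet

  const minEl = Math.min(...elevations);
  const maxEl = Math.max(...elevations);
  const [minHz, maxHz] = [220, 880]; // assumed audible pitch range

  elevations.forEach((el, i) => {
    // Map elevation linearly onto the pitch range, one step per sample.
    const t = (el - minEl) / (maxEl - minEl || 1);
    const hz = minHz + t * (maxHz - minHz);
    osc.frequency.setValueAtTime(hz, ctx.currentTime + (i * stepMs) / 1000);
  });

  osc.start();
  osc.stop(ctx.currentTime + (elevations.length * stepMs) / 1000);
}

// Example: play a short west-to-east elevation profile.
sonifyElevations([120, 135, 180, 240, 210, 150]);
```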

There were several other preliminary interface-related AR projects that explored the best AR interface mechanism and how it related to existing methods. One such project focused on the effectiveness of semantic spatialised audio (Katz et al., 2012). A second was a research project for people with an intellectual disability in which AR was found to be the most effective way of communicating maps when compared with Google Maps and a paper map (McMahon, Smith, Cihak, Wright, & Gibbons, 2015). A third research project focused on the use of haptic sensory substitution for blind and deaf scenarios (Parisi, Paterson, & Archer, 2017), while a fourth looked at how best to combine GPS, 3G, Wi-Fi, Bluetooth, and sensors to provide a more complete data picture of the surrounding environment (Rodriguez-Sanchez & Martinez-Romo, 2017).

One research project that took a different approach was the creation of a configurable interface for a person with paralysis. The concept was based on the idea that the AR interface provides the user with the ability to remove the parts of the interface which are not relevant due to a disability, e.g. deselecting the use of arms and legs from the interface setup. This would then leave the AR interface to map all functionality onto the remaining selections, such as eye movement and head movement (Magee, 2012).
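The configuration idea can be sketched as a simple capability-remapping step: each command has an ordered list of candidate input channels, and deselected channels are skipped. All names below are hypothetical illustrations of the concept, not the design from Magee (2012).

```typescript
// Sketch of the configurable-interface idea: the user deselects input
// channels they cannot use, and the system remaps commands onto the
// remaining channels. All names here are hypothetical illustrations.

type InputChannel = "arms" | "legs" | "head" | "eyes" | "voice";

interface InterfaceProfile {
  available: Set<InputChannel>; // channels the user has left enabled
}

// Candidate bindings for each command, in order of preference.
const bindings: Record<string, InputChannel[]> = {
  select: ["arms", "eyes", "voice"],
  navigate: ["legs", "head", "eyes"],
  cancel: ["arms", "head", "voice"],
};

function resolveBindings(profile: InterfaceProfile): Record<string, InputChannel> {
  const resolved: Record<string, InputChannel> = {};
  for (const [command, candidates] of Object.entries(bindings)) {
    // Pick the first candidate channel the user has left enabled.
    const channel = candidates.find((c) => profile.available.has(c));
    if (channel) resolved[command] = channel;
  }
  return resolved;
}

// Example: a user who deselects arms and legs keeps full functionality
// through eye and head movement.
console.log(resolveBindings({ available: new Set(["head", "eyes"]) }));
// => { select: "eyes", navigate: "head", cancel: "head" }
```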

Broader research has also been undertaken to determine how to maximise interface design based specifically on the touchscreen, given that this is the most likely AR interface that people with vision disabilities will already have in their possession. The research determined three methods that can be used to deliver spatial information. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input, informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games (Grussenmeyer, 2017).
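By way of illustration, a touchscreen direction cue along the lines of the first two systems could combine speech output with vibration-only output. The sketch below uses the standard Web Speech and Vibration APIs; the direction set and the vibration patterns are assumptions made for the example, not those from the dissertation.

```typescript
// Sketch: announcing a map direction on a touchscreen device via
// text-to-speech or, when audio is unwanted, vibration only. The
// direction set and vibration patterns are illustrative assumptions.

type Direction = "left" | "right" | "straight";

// Distinct vibration patterns (milliseconds on/off) per direction.
const patterns: Record<Direction, number[]> = {
  left: [100, 50, 100],               // two short pulses
  right: [300],                       // one long pulse
  straight: [100, 50, 100, 50, 100],  // three short pulses
};

function announceDirection(direction: Direction, silent: boolean): void {
  if (silent) {
    // Vibration API: no sound or visual output required.
    navigator.vibrate(patterns[direction]);
  } else {
    // Web Speech API: spoken output for eyes-free use.
    speechSynthesis.speak(new SpeechSynthesisUtterance(`Go ${direction}`));
  }
}

// Example: signal a left turn without sound.
announceDirection("left", true);
```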

Instructional tool for real-world tasks

A significant research area in the literature is focused on how AR can be used to improve the understanding of real-world concepts through the use of additional media during particular scenarios. The purpose of this approach is to reinforce educational learning mechanisms associated with real-world tasks. For example, one project focused on the use of AR to convey educational materials in specific real-world locations by providing an explanatory video to people with an intellectual disability when the task needed to be completed (Benda, Ulman, & Smejkalová, 2015). While this approach had limited results, a similar approach for people with autism proved more effective, whereby videos and other visual imagery were presented in AR during the process of brushing teeth. The data suggest that the use of AR was successful in improving this learning outcome (Cihak et al., 2016).

Another project endeavoured to combine the benefits of VR and AR for training purposes. In essence, a VR environment is designed based on a real-world model in which people with disability can train to undertake a particular task; AR is then used while the same task is repeated in the real world to create a similar interaction and familiarity of interface. This can effectively bridge the gap between the virtual experience and the real-world experience (Faller et al., 2017).

Communication tool

Another aspect of the research investigated how AR could be used to provide support to people with disability in everyday life. This included a study whereby people with Down syndrome were able to access information via AR on how to find landmarks and receive general navigation assistance, similar to other projects. In addition, however, the AR was used to provide a support mechanism whereby the person could always access help options via the AR interface should they become disorientated or concerned with their surroundings (Covaci, Kramer, Augusto, Rus, & Braun, 2015). The research suggests that the always-available help feature in AR improves well-being and works well as an additional communication tool when needed.

Another project where AR is used for communication focused on enabling sign language to be seen by other AR users and understood via a real-time translation. The concept allows a person using sign language to communicate with a person who does not understand it: the visual movements of the sign language user are interpreted and translated for the AR user either visually or via audio (Deb, Suraksha, & Bhattacharya, 2018). This use of AR means that people who use sign language are able to communicate in their first language with a wider audience.
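The overall translation pipeline implied here can be sketched in two stages: recognising signs from captured movement, then rendering the result as speech or captions for the non-signing user. Everything below is a hypothetical stub written for illustration; the cited work defines its own recognition and rendering methods.

```typescript
// Sketch of a sign-to-speech/caption pipeline. All stages are
// hypothetical stubs for illustration, not the cited work's method.

interface SignFrame {
  timestamp: number;
  landmarks: number[][]; // captured hand/body keypoints per frame
}

// Stage 1 (stub): a real system would run a trained recogniser
// over a window of movement frames to produce text.
async function recogniseSign(frames: SignFrame[]): Promise<string> {
  return frames.length > 0 ? "hello" : ""; // placeholder result
}

// Stage 2: render the recognised text for the non-signing AR user,
// either as a spoken phrase or as an on-screen caption.
function renderTranslation(text: string, preferAudio: boolean): void {
  if (preferAudio) {
    speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  } else {
    console.log(`[caption] ${text}`); // stand-in for an AR caption overlay
  }
}

async function translateSigning(frames: SignFrame[], preferAudio: boolean) {
  renderTranslation(await recogniseSign(frames), preferAudio);
}
```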

Given the differing mechanisms for interpreting real-world data and creating an accessible interface, one research study concluded that standards organisations should work towards a uniform set of guidelines in the AR space to ensure consistency for work in this area (Castillejo, Almeida, López-de-Ipiña, & Chen, 2014). This suggests that efforts such as the W3C AG Silver project are likely to be beneficial in supporting ongoing research.

References

Benda, P., Ulman, M., & Smejkalová, M. (2015). Augmented Reality As a Working Aid for Intellectually Disabled Persons For Work in Horticulture. AGRIS On-line Papers in Economics and Informatics, 7(4), 31-37.

Bittencourt, I., Baranauskas, M., Pereira, R., Dermeval, D., Isotani, S., & Jaques, P. (2016). A systematic review on multi-device inclusive environments. Universal Access in the Information Society, 15(4), 737-772. doi:10.1007/s10209-015-0422-3

Calle-Jimenez, T., & Luján-Mora, S. (2016). Web Accessibility Barriers in Geographic Maps. International Journal of Computer Theory and Engineering, 8(1), 80-87. doi:10.7763/IJCTE.2016.V8.1024

Castillejo, E., Almeida, A., López-de-Ipiña, D., & Chen, L. (2014). Modeling Users, Context and Devices for Ambient Assisted Living Environments (Vol. 14, pp. 5354-5391). Basel: MDPI AG.

Cihak, D. F., Moore, E. J., Wright, R. E., McMahon, D. D., Gibbons, M. M., & Smith, C. (2016). Evaluating Augmented Reality to Complete a Chain Task for Elementary Students With Autism. Journal of Special Education Technology, 31(2), 99-108. doi:10.1177/0162643416651724

Covaci, A., Kramer, D., Augusto, J. C., Rus, S., & Braun, A. (2015). Assessing Real World Imagery in Virtual Environments for People with Cognitive Disabilities.

Deb, S., Suraksha, P., & Bhattacharya, P. (2018). Augmented Sign Language Modeling (ASLM) with interaction design on smartphone - an assistive learning and communication tool for inclusive classroom. Procedia Computer Science, 125, 492-500. doi:10.1016/j.procs.2017.12.064

Faller, J., Allison, B. Z., Brunner, C., Scherer, R., Schmalstieg, D., Pfurtscheller, G., & Neuper, C. (2017). A feasibility study on SSVEP-based interaction with motivating and immersive virtual and augmented reality.

Griffin, A. L., White, T., Fish, C., Tomio, B., Huang, H., Sluter, C. R., . . . Picanço, P. (2017). Designing across map use contexts: a research agenda. International Journal of Cartography, 3, 90-114. doi:10.1080/23729333.2017.1315988

Grussenmeyer, W. (2017). Making Spatial Information Accessible on Touchscreens for Users Who Are Blind and Visually Impaired. In E. Folmer, F. Jiang, S. Dascalus, D. Feil-Seifer, T. Lamberg, & J. Snow (Eds.): ProQuest Dissertations Publishing.

Joseph, S. L., Zhang, X., Dryanovski, I., Xiao, J., Yi, C., & Tian, Y. (2013). Semantic Indoor Navigation with a Blind-User Oriented Augmented Reality.

Katz, B. F. G., Dramas, F., Parseihian, G., Gutierrez, O., Kammoun, S., Brilhault, A., . . . Jouffrais, C. (2012). NAVIG: Guidance system for the visually impaired using virtual augmented reality. Technology and Disability, 24(2), 163-178. doi:10.3233/tad-2012-0344

Magee, J. (2012). Adaptable interfaces for people with motion disabilities. In M. Betke (Ed.): ProQuest Dissertations Publishing.

McMahon, D. D., Smith, C. C., Cihak, D. F., Wright, R., & Gibbons, M. M. (2015). Effects of Digital Navigation Aids on Adults With Intellectual Disabilities. Journal of Special Education Technology, 30(3), 157-165. doi:10.1177/0162643415618927

Parisi, D., Paterson, M., & Archer, J. E. (2017). On haptic media and the possibilities of a more inclusive interactivity. New Media & Society, 19(10), 1541-1562. doi:10.1177/1461444817717513

Rodriguez-Sanchez, M. C., & Martinez-Romo, J. (2017). GAWA – Manager for accessibility Wayfinding apps. International Journal of Information Management, 37(6), 505-519. doi:10.1016/j.ijinfomgt.2017.05.011

Schito, J., & Fabrikant, S. I. (2018). Exploring maps by sounds: using parameter mapping sonification to make digital elevation models audible. International Journal of Geographical Information Science, 1-33. doi:10.1080/13658816.2017.1420192

Weninger, M., Ortner, G., Hahn, T., Drümmer, O., & Miesenberger, K. (2015). ASVG - Accessible Scalable Vector Graphics: intention trees to make charts more accessible and usable. Journal of Assistive Technologies, 9(4), 239-246.