Important note: This Wiki page is edited by participants of the RDWG. It does not necessarily represent consensus and it may have incorrect information or information that is not supported by other Working Group participants, WAI, or W3C. It may also have some very useful information.


Augmented Reality

From Research and Development Working Group Wiki

Catalogue Entry – Augmented Reality DRAFT

Title:

Augmented Reality

Editor(s):

Categorization and Tags:

Augmented Reality, Location based annotation, positioning

Description:

Augmented reality is a view of the live real-world environment augmented or supplemented by computer-generated content to enhance the user's current perception of the physical environment. In contrast, virtual reality offers a fully computer-generated representation of a real-world scene. A simple example of augmented reality is the live score overlay shown on TV during soccer matches. A more sophisticated and interactive project is Google Glass [GOO13], which displays smartphone-related information as an overlay and can also show additional context-sensitive information about nearby objects or places. Although augmented reality usually concerns the visual sense, it is not limited to it; augmented reality employing the sense of hearing has already been tested as well. Generally speaking, augmented reality provides overlays on perceptions of the real world, offering additional information about the scene or parts of it, such as objects. This technique can be used to assist people with disabilities in their everyday life by overcoming barriers caused by a lack of adequate information.
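
As an illustration of the overlay principle described above, the following short Python sketch (using the OpenCV library) draws a computer-generated caption on top of a live camera image. It is a minimal sketch only: the camera index, box coordinates, and caption text are arbitrary assumptions for this example, and a real application would derive them from object recognition or location data.

  import cv2  # OpenCV: camera capture and 2D drawing primitives

  def annotate(frame):
      # Overlay a computer-generated box and caption on the live image.
      # Coordinates and text are placeholders for this sketch.
      cv2.rectangle(frame, (50, 50), (300, 120), (0, 255, 0), 2)
      cv2.putText(frame, "Entrance: step-free access", (55, 105),
                  cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
      return frame

  capture = cv2.VideoCapture(0)  # default camera; index 0 is an assumption
  while True:
      ok, frame = capture.read()
      if not ok:
          break
      cv2.imshow("AR overlay sketch", annotate(frame))
      if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
          break
  capture.release()
  cv2.destroyAllWindows()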

Background and State-of-the-art:

Augmented reality can provide useful additional information for many different target groups. People with hearing impairments can benefit from information presented in a way they are familiar and comfortable with. Since many people who have been deaf since birth have difficulties with written language, owing to the completely different grammar of sign language, information available as written text can be communicated in a way that is more natural for the user, such as sign language or pictograms. People with cognitive disabilities, or people who are not familiar with a specific language, would benefit from this kind of solution too. In addition, highlighting important objects would especially benefit people with cognitive disabilities, but also users with visual impairments. Augmented reality can also present accessibility-related information to people with physical disabilities and limited mobility, showing accessible buildings, entrances, and routes, which can spare them many unnecessary detours and much wasted time. An example of this kind of project is Mapability [ASM13]. Even blind people can benefit from augmented reality: several projects are concerned with augmenting the audible layer. vOICe aims at converting visual information into sounds interpretable by blind people [MEI13]. Apps like Google Goggles [GOO13a] or Word Lens [QUE13] give hope that products translating signs and written text into spoken word are not far off. Augmented reality also shows potential to support mental rehabilitation and learning tasks by enriching perceived content and providing guidance and support in an easily understandable way. In particular, the motivation of children could be boosted by more intuitive and interesting lessons.
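
To make the sign-reading idea concrete, a minimal sketch could combine off-the-shelf optical character recognition with text-to-speech. The Python sketch below uses the pytesseract and pyttsx3 packages and assumes a locally installed Tesseract OCR engine and an example image file named sign.jpg; it only illustrates the concept and does not describe how Google Goggles or Word Lens work internally.

  from PIL import Image  # image loading
  import pytesseract     # wrapper around the Tesseract OCR engine
  import pyttsx3         # offline text-to-speech

  # Recognize printed text in a photograph of a sign.
  # "sign.jpg" is a placeholder file name for this sketch.
  text = pytesseract.image_to_string(Image.open("sign.jpg"))

  if text.strip():
      # Read the recognized text aloud for a blind or low-vision user.
      engine = pyttsx3.init()
      engine.say(text)
      engine.runAndWait()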

Challenges:

Since objects need to be tagged and the user needs to be located in many scenarios and applications where augmented reality is employed, precise localization of persons and objects is of fundamental importance. However, reliable and affordable indoor positioning and high-precision outdoor positioning are still not available in many cases. Especially for people with visual and cognitive disabilities, a precise and unambiguous description of objects and accurate positioning of the user are essential. Currently, there are no widely applied standards for describing objects in augmented reality applications; most applications use their own object definitions. However, there are promising approaches for providing location-based annotations. For instance, OpenStreetMap [OSM13] offers specifications for location-based annotations of streets, buildings, and objects or places related to car, bicycle, or pedestrian navigation. This set of specifications could serve as a basis for further accessibility-related annotations, which is already partly addressed in projects like Look-and-Listen-Map [OSM13a] and Wheelmap [SOZ13], which focus on blind and visually impaired users and on wheelchair users, respectively. A unified and harmonized set of specifications for location-based items would help to simplify and fuel the development, and also the acceptance, of augmented reality that helps people with disabilities. The Web of Things could provide a basis, as it promotes the use of standard APIs to communicate with devices and objects of any kind.
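
To make the annotation approach concrete: OpenStreetMap already records accessibility information with tags such as wheelchair=yes, the data Wheelmap builds on, and these tags can be queried through the public Overpass API. The Python sketch below retrieves wheelchair-accessible cafés near a given point; the coordinates and search radius are arbitrary example values.

  import requests  # plain HTTP client

  # Overpass QL query: cafes tagged wheelchair=yes within 500 m of an
  # example point in central Vienna (coordinates are placeholders).
  query = """
  [out:json];
  node["amenity"="cafe"]["wheelchair"="yes"](around:500,48.2082,16.3738);
  out body;
  """

  response = requests.post("https://overpass-api.de/api/interpreter",
                           data={"data": query})
  response.raise_for_status()

  for node in response.json()["elements"]:
      name = node.get("tags", {}).get("name", "(unnamed)")
      print(name, node["lat"], node["lon"])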

Research Goals:

TODO

  • Prioritized list of issues to be addressed
  • Indicative timeline (short/medium/long term)
  • Methodological considerations (e.g. studies, guidelines, standards, prototypes, experiments, implementation, dissemination, market penetration, education)

Issues to be addressed

  • How accessible is augmented reality now?
  • Since places, events, products, and things in general are being tagged and connected with additional information on the web, could this information also contain accessibility-related data? For example: is the bar across the street accessible by wheelchair? Does the movie in this cinema offer alternatives to the acoustic channel for people with hearing impairments?
  • Could there be a global schema to describe an object's accessibility-related properties?
  • Could there be a global schema to describe a person's capabilities? (A tentative sketch of these two schema questions follows this list.)
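
No such global schemas exist yet. Purely as a thought experiment, the following Python sketch shows what minimal data structures for the last two questions might look like; every field name and the matching rule are invented for illustration and carry no standardization weight.

  from dataclasses import dataclass

  @dataclass
  class ObjectAccessibility:
      # Hypothetical accessibility-related properties of a place or object.
      name: str
      wheelchair_accessible: bool = False
      step_free_entrance: bool = False
      audio_description: bool = False      # e.g. a cinema screening
      sign_language_support: bool = False

  @dataclass
  class PersonCapabilities:
      # Hypothetical capability profile, matched against an object's
      # accessibility properties.
      uses_wheelchair: bool = False
      hearing: str = "full"  # "full" | "limited" | "none"
      vision: str = "full"   # "full" | "limited" | "none"

  def is_suitable(place: ObjectAccessibility, person: PersonCapabilities) -> bool:
      # Naive matching rule: every hard requirement must be met.
      if person.uses_wheelchair and not place.wheelchair_accessible:
          return False
      if person.hearing == "none" and not place.sign_language_support:
          return False
      return True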


References:

  1. [ASM13] Associazione Mapability, Mapability, http://www.mapability.org/mapability-augmented-reality-find-accessible-places-on-your-smartphone/, 2013
  2. [GOO13] Google Inc., Google Glass, http://www.google.com/glass/start/, 2013
  3. [GOO13a] Google Inc., Google Goggles, http://www.google.com/mobile/goggles, 2013
  4. [MEI13] Peter B.L. Meijer, vOICe, http://www.seeingwithsound.com/, 2013
  5. [OSM13] OpenStreetMap Foundation, OpenStreetMap, http://www.openstreetmap.org/, 2013
  6. [OSM13a] OpenStreetMap Foundation, Look-and-Listen-Map, 2013
  7. [QUE13] Quest Visual, Word Lens, http://questvisual.com, 2013
  8. [SOZ13] SOZIALHELDEN e.V., Wheelmap, http://wheelmap.org/, 2013