Web Games Workshop Position Paper

From Accessible Platform Architectures Working Group

This is a draft position paper for the W3C Workshop on Web Games - https://www.w3.org/2018/12/games-workshop/ - which Matthew hopes to attend. It is based on previous research, as cited.

The workshop has now concluded; the final paper, presentation and partial session videos are available via the workshop's site, and the workshop report is also available.

The submission deadline is the 10th of May. Here are some example papers from a previous workshop: https://www.w3.org/2016/06/vr-workshop/papers.html


W3C Workshop on Web Games Position Paper: Adaptive Accessibility

This document summarises some ways in which we believe that game accessibility could be furthered on a technical level, complementing projects such as the Game Accessibility Guidelines. The key ideas of specificity of adaptations, and multimodality, come from a proposal for an Active Game Accessibility (AGA) framework, as detailed below.

Accessibility & games

The demands games place on users can be high: extremely complex input devices; control schemes that require a high degree of precision, timing and simultaneous action; the need to distinguish subtle differences in busy visual and audio information; having to juggle multiple complex goals and objectives; and so on.

But this isn't what makes games' relationship with accessibility unique. What makes it unique is that significant challenge is required: remove all of these barriers and what you have left is no longer a game, and the required challenge varies entirely from game to game.

So there is no fixed bar of 'reasonably accessible', as what's reasonable is specific to each game. But this does not mean accessibility is not possible. There is a great deal that can be done to open games to a wider audience. Part of this landscape is support for assistive technologies (ATs).

Integrating with existing assistive technologies

We need to enable users' existing assistive technologies (ATs), such as screen-readers, screen magnifiers or specific input/output devices, to communicate with games/engines in much the same way as they do with browsers or other applications presently.

There's already a standard for denoting accessibility information in content and interfaces on the web—W3C's Accessible Rich Internet Applications (ARIA)—and there are standard OS-level interfaces for assistive technologies (ATs) to communicate with applications (on Windows there is Microsoft Active Accessibility (MSAA) and Microsoft UI Automation (UIA), for example).

To provide access to games, a standard API—or bridge to, or extension of, existing APIs such as ARIA—could be developed. It would function like MSAA does on Windows (and its equivalents on other platforms), providing a common interface for games and assistive technologies (ATs) to communicate.
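
As an illustration of what such a bridge might carry, here is a minimal sketch of a game engine exposing its custom UI widgets as an accessibility tree of ARIA-like nodes, which the layer could then hand to the platform accessibility API. All names here (the widget kinds, ROLE_MAP, exposeNode) are invented for illustration, not a real engine or AT API.

```javascript
// Map a hypothetical engine's internal widget kinds to standard ARIA roles.
const ROLE_MAP = {
  menuScreen: 'menu',
  menuEntry: 'menuitem',
  healthBar: 'progressbar',
  dialogueBox: 'dialog',
};

// Convert one engine widget (a plain object) into an accessibility-tree node.
function exposeNode(widget) {
  const node = {
    role: ROLE_MAP[widget.kind] ?? 'group', // fall back to a generic role
    name: widget.label ?? '',               // accessible name for ATs to speak
    children: (widget.children ?? []).map(exposeNode),
  };
  if (widget.kind === 'healthBar') {
    node.valueNow = widget.value;           // current value, as in aria-valuenow
    node.valueMax = widget.max;
  }
  return node;
}

const tree = exposeNode({
  kind: 'menuScreen',
  label: 'Main menu',
  children: [
    { kind: 'menuEntry', label: 'New game' },
    { kind: 'healthBar', label: 'Health', value: 80, max: 100 },
  ],
});
console.log(tree.role, tree.children.map((c) => c.role).join(','));
// → menu menuitem,progressbar
```

The point is that the engine need only describe its custom widgets once, in standard semantic terms; the bridge and the ATs handle the rest, just as browsers do for ARIA today.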

Specificity of adaptations and personalisation

Games generally do not use operating system (OS) standard UI widgets and UX conventions, in order to present a particular game world that has its own look, feel and behaviour. Therefore, a major challenge is making the accessibility layer interoperable with as many games, game engines (middleware frameworks used by most games) and as many ATs, or adaptations, as possible. The adaptations themselves could be developed with a varying degree of specificity, as in the following examples:

  • General adaptations are completely separate from game/engine, for example: disabling haptic feedback, zoom, merging stereo channels into mono, parsing exposed UI text to assistive technologies (ATs) such as screen-readers, or built-in text-to-speech (TTS) features of the OS.
  • Engine-specific adaptations are built in to a specific engine, so they are available to any game that runs on that engine. For example: the heavy lifting for common considerations such as button remapping and subtitle presentation, and for others that are currently less common—though vital, and necessary for compliance with the 21st Century Communications and Video Accessibility Act (CVAA) (Summary of CVAA)—such as scalable UI and cross-platform support for text-to-speech: on platforms where screen-readers are used, exposing the UI in a way that can be parsed; on platforms that rely on OS-level passive text-to-speech, manually pushing text strings and metadata; and on platforms that have nothing, handling text-to-speech (TTS) at engine level.
  • Game-specific adaptations are implemented at game level. These may be universal design choices that affect all users, for example: using symbols as well as colour, tutorials that teach through play, clear language, adequate default text size, simple controls, and avoidance of common epilepsy triggers. Or they may be options allowing players to tailor the experience to their own needs and preferences, such as captioning, game speed, steering or aim assist, configurable button layout, and ensuring the UI supports either internal or external text-to-speech for players who want or need it.
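
The engine-level text-to-speech fallback chain described above could be sketched as follows. The capability flags and strategy names are assumptions made for illustration, not a shipping engine API.

```javascript
// Pick a speech strategy based on what the current platform offers,
// following the three cases above: screen-reader, OS-level TTS, or nothing.
function chooseSpeechStrategy(platform) {
  if (platform.hasScreenReader) {
    // A screen-reader is present: expose UI text so the AT can parse and speak it.
    return 'expose-ui-to-screen-reader';
  }
  if (platform.hasOsTts) {
    // OS-level passive TTS: the engine pushes text strings and metadata to it.
    return 'push-strings-to-os-tts';
  }
  // Nothing available: the engine handles TTS itself.
  return 'engine-level-tts';
}

console.log(chooseSpeechStrategy({ hasScreenReader: true, hasOsTts: true }));
// → expose-ui-to-screen-reader
console.log(chooseSpeechStrategy({ hasScreenReader: false, hasOsTts: false }));
// → engine-level-tts
```

Because this decision lives in the engine, every game built on it inherits the correct behaviour for each platform without per-game effort.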

What are assistive technologies, adaptations and personalisation?

Assistive technologies could be traditional products such as screen-readers, screen-magnifiers, speech-to-text or physical hardware. But they could also be more minor adaptations, as implied above.

  • Adaptations range from existing "large-scale" assistive technologies (ATs) such as screen-readers (e.g. JAWS or NVDA) to new ones coded for specific engines, games, or platforms (e.g. game transcription). Adaptations could even be OS settings such as zoom.
  • The game could effectively be adapted simply by attaching certain hardware (e.g. adaptive controller, larger screen); using certain software (e.g. mapping voice input to keys); playing in a certain environment and so on. Other adaptations could be brought in by the game or the OS when needed.
  • Settings for adaptations the user may have already indicated a preference for in the OS (contrast, text size, caption presentation, ...) should be reflected by the game—this is personalisation. The accessibility layer could inform the game/engine of these settings, so those the game supports can be enabled for the user. By making personalisations use human capabilities as their basic units, they could be more portable across devices than machine- or OS-specific settings.
  • Further to this, the game could change parameters such as required accuracy levels and time limits in response to the sort of devices the user is able to use to interact with it.
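
The personalisation flow in the list above might look like this in outline: the accessibility layer reports OS-level preferences, and the game enables only those it actually supports. The setting names here are invented for illustration.

```javascript
// Apply only the OS preferences this particular game knows how to honour.
function applyPersonalisation(osSettings, supportedOptions) {
  const applied = {};
  for (const [setting, value] of Object.entries(osSettings)) {
    if (supportedOptions.includes(setting)) {
      applied[setting] = value; // the game can honour this preference
    }
    // Unsupported settings are silently skipped; the game simply
    // doesn't offer them, so there is nothing to enable.
  }
  return applied;
}

// Hypothetical preferences reported by the accessibility layer.
const osSettings = { highContrast: true, textScale: 1.5, captionStyle: 'outlined' };
// This game supports contrast and text scaling, but not caption styling.
const result = applyPersonalisation(osSettings, ['highContrast', 'textScale']);
console.log(result); // → { highContrast: true, textScale: 1.5 }
```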

It should be noted that there are privacy implications involved in supporting specific hardware/software that is designed for people who may have certain disabilities, or is configured in a particular way. Presence or absence of certain hardware or software does not necessarily mean that the user has a specific disability (for example: someone using a screen-reader may have good sight, but find reading difficult—a common assumption is that someone using a screen-reader has a vision impairment). However, when the user allows personalisations to occur based on their system hardware, software or settings, some information is inherently revealed.

Multimodality, parameters and game mechanics

Another key part of the Active Game Accessibility (AGA) proposal involves classifying the mechanics of games and, thus, high-level accessibility requirements and hooks that might be needed. There have been accessible versions of some sorts of games, such as card games and abstract board games like Chess, for some time. There are other mechanics, such as exploration, action, etc. that may involve more immersive content, or precise timing requirements. There are some limits to how far this kind of classification can be taken; new game mechanics and new methods of input and output are not uncommon, and the impact of design intent and the desired emotional experience is also a factor, but often games are composed of a mix of existing known styles that have certain functional requirements and accessibility implications.

Multimodality may be used to render some of these more accessibly, e.g. there could be an adaptation that converts the task of aiming from a visual one into an audible or haptic one, in order for it to be accessible to a wider range of gamers. Multimodality can also assist with situational/contextual impairments (or the player's rendering preferences).
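
One such adaptation—re-rendering the visual task of aiming as sound—could be sketched like this. The particular mapping (stereo pan from horizontal offset, pitch rising as the target nears) is an invented illustration, not a prescribed design.

```javascript
// Convert an aiming task into an audio cue: where the target is (pan)
// and how close the aim is to it (pitch).
function aimToAudioCue(targetAngleDeg, distance) {
  // Pan from -1 (full left) to 1 (full right), from the horizontal offset;
  // anything beyond ±90° is clamped to the nearest extreme.
  const pan = Math.max(-1, Math.min(1, targetAngleDeg / 90));
  // Pitch rises as the target gets closer, so "higher" means "nearly on it".
  const pitchHz = 220 + 660 / Math.max(1, distance);
  return { pan, pitchHz };
}

const cue = aimToAudioCue(45, 2); // target 45° to the right, 2 units away
console.log(cue.pan, Math.round(cue.pitchHz)); // → 0.5 550
```

In a real game this cue would drive a spatialised audio source (or a haptic equivalent); the same parameters could feed either modality.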

We could also bring in appropriate sets of adaptations in different phases of play, as the game mechanics change. Further, people could be teamed-up based on their abilities.

What is already possible?

It is worth considering what can already be achieved with thoughtful content design, when following established best practice such as the Game Accessibility Guidelines and the W3C's Framework for Accessible Specification of Technologies (FAST) and Media Accessibility User Requirements (MAUR), and Accessible Player Experiences.

The Framework for Accessible Specification of Technologies (FAST) advises creators of technical specifications on how to ensure their technology meets the needs of users with disabilities. Whilst it focuses on the kinds of UI and content prevalent on the web, it may inform game UI accessibility and may in turn be informed by the game experience.

The Media Accessibility User Requirements (MAUR) document presents the accessibility requirements users with disabilities have with respect to audio and video on the web, which relates to games also, and contains recommendations such as providing separate background music, sound effects and audio descriptions, so that they can be filtered out and adjusted independently.

The educational game Railway Hero (Building an Accessible Math Learning Game: The Railway Hero Case Study) was developed with these best practices in mind, and references some of the research cited below, to provide features such as:

  • Providing separate sound streams (as mentioned above).
  • Using keyboard control instead of drag-and-drop.
  • Self-voicing UI; audio description and captions for videos.
  • Allowing captions and audio description at the same time, for reinforcement.
  • Customisable text size and contrast within the game.

Despite not interfacing with existing assistive technologies (ATs) such as screen-readers (rather, the game is self-voicing), features such as these can provide an accessible experience to many people, and demonstrate that thoughtful content design is key. However, much more becomes possible if we are able to interface with gamers' existing assistive technologies (ATs), desired personalisations and favourite hardware, as discussed above.

Getting from here to there

The ideas presented above range from relatively low-hanging fruit to much larger efforts. How might we begin to implement them?

Concrete beginnings

  • Would a good starting point be to concentrate on 21st Century Communications and Video Accessibility Act (CVAA) compliance and UI accessibility? If so, would existing Accessible Rich Internet Applications (ARIA) roles and APIs be appropriate? It seems that fairly standard (for the web) issues of custom controls, pop-up dialogs and live regions would apply. Given that ARIA is technology-agnostic, we may find that it can be made to work with non-HTML environments such as games via some sort of fairly simple engine-to-ARIA bridge—possibly via implementing the Accessibility Object Model.
  • Would any extra roles/APIs be needed for the extra information that games need to convey during gameplay?
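
To make the live-region question concrete, here is a sketch of how a game might queue announcements with an ARIA-style politeness level, so urgent events ("assertive") pre-empt routine ones ("polite"). The event texts and the Announcer class are invented for illustration; a real implementation would feed these to an AT via the accessibility layer rather than a queue of its own.

```javascript
// Queue game events for announcement, letting assertive ones jump ahead,
// analogous to aria-live="polite" versus aria-live="assertive".
class Announcer {
  constructor() { this.queue = []; }
  announce(text, politeness = 'polite') {
    if (politeness === 'assertive') {
      this.queue.unshift({ text, politeness }); // jump the queue
    } else {
      this.queue.push({ text, politeness });
    }
  }
  next() { return this.queue.shift()?.text; } // undefined when empty
}

const announcer = new Announcer();
announcer.announce('Checkpoint reached');
announcer.announce('Low health!', 'assertive');
console.log(announcer.next()); // → Low health!
console.log(announcer.next()); // → Checkpoint reached
```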

Keeping things immersive, fair and fun

  • Immersion/fun: if the assistive technologies/adaptations are too obvious, this could detract from the gaming experience. For example, in some circumstances it may be better to use 3D audio effects to convey where something is in the “game universe” than to have a standard screen-reader announcement. To what extent does this apply to other areas such as text-to-speech? In some circumstances it may be desirable to have game console/event messages announced using an in-universe voice (e.g. GLaDOS rather than Eloquence). (This relates to the specificity of adaptations, as above.) In other circumstances it would not be desirable, impeding efficient navigation and so detracting from rather than supporting the experience.
  • Further, appropriate use of an accessible rendering for a given gamer will vary, depending on the person, game and situation: a quick-reaction scenario would not benefit from a precise and detailed textual or semantic description, but a game of Chess could. Games are, as above, intended to be a fun experience and selecting an appropriate rendering to provide enough information to allow someone to enjoy the experience is key.
  • How much information should be conveyed to disabled users? When working on AudioQuake (a project that made the first-person shooter Quake accessible to gamers with vision impairments), we originally conveyed more information than a sighted player might get, as we thought this could be helpful. Examples included warnings about enemies who might be off-screen. This confused people (as they couldn't reach those enemies), so in the end we took it out. We also only told people about obstacles directly in front of them—they don't need to know about walls to their left or right, only the one in front—but we did alert gamers to openings when they appeared on either side, so they could explore. This also applies to text-to-speech: when UI navigation is a way in to the content rather than the content itself, verbosity can be a real issue; the level of information communicated for documents and applications is not desirable for some users in some circumstances. Could the level of verbosity be configurable?
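
Configurable verbosity, following the AudioQuake experience above, might amount to something as simple as this: each message carries a minimum verbosity level, and the player chooses how much they hear. The level names and messages are illustrative.

```javascript
// Verbosity levels, from terse to talkative.
const LEVELS = { minimal: 0, normal: 1, verbose: 2 };

// Keep only the messages at or below the player's chosen verbosity.
function filterMessages(messages, userLevel) {
  return messages
    .filter((m) => LEVELS[m.level] <= LEVELS[userLevel])
    .map((m) => m.text);
}

const messages = [
  { text: 'Opening on your left', level: 'minimal' },       // always worth hearing
  { text: 'Wall ahead', level: 'normal' },
  { text: 'Ambient: dripping water nearby', level: 'verbose' },
];
console.log(filterMessages(messages, 'normal'));
// → [ 'Opening on your left', 'Wall ahead' ]
```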

Further scope

Questions for discussion

There seems to be a progression of questions, starting with low-hanging fruit and moving to more blue-sky exploration.

  • Can existing or anticipated accessibility standards and APIs support the existing 21st Century Communications and Video Accessibility Act (CVAA) requirements?
  • How might existing assistive technologies (ATs) be integrated with engines so that this support can be realised?
  • What extra information do we need to enable certain games to be accessible that existing assistive technologies (ATs) are currently unable to accept? What types of games are there?
  • Does it make sense to add the ability to convey those types of information through the accessibility layer, or is it better to produce/highlight best practices for games to implement them in a more “in-keeping with the game world” style? Some—perhaps most—issues are better solved at the design/content/rendering level, with the help of the Game Accessibility Guidelines.
  • Could research into abstract games, particularly around presenting different “skins” to different players, be of help to us?
  • What adaptations for specific engines might be produced that could start assisting people quickly?
  • How can we make authoring and modding tools accessible?

Suggested outputs

Some thoughts on possible output from the workshop:

  • It seems that some of the work would need to be done as a technical effort like an MSAA layer, perhaps requiring the specification of additions to APIs or standards such as ARIA, and the coding of adaptations for specific engines/games. Other things, however, in order to remain immersive, would need to be handled in game design on a case-by-case basis—and that's where the Game Accessibility Guidelines come in.
  • As with WCAG, perhaps there could be "Techniques" documents produced for fulfilling the guidelines in different types of games (be that genre, platform or I/O-device specific)? (The Game Accessibility Guidelines already does this to a degree via the best practice examples, but perhaps we could eventually produce some reference implementations/more detailed technical guidance on some of these techniques?)

Acknowledgements

Authoring, review, amendments and feedback: Matthew Tylee Atkinson (The Paciello Group); Ian Hamilton; W3C Accessible Platform Architectures Working Group and its Research Questions Task Force (Main APA/RQTF discussion thread); Joe Humbert and Kit Wessendorf (The Paciello Group).

The key ideas of an MSAA/UIA-like layer, specificity of adaptations and the benefits of multimodality, were developed as part of the Active Game Accessibility (AGA) proposal, led by Dominique Archambault and Klaus Miesenberger and their research groups.

References

  1. Game Accessibility Guidelines
  2. More Than Just a Game: Accessibility in Computer Games
  3. Towards accessible interactions with pervasive interfaces, based on human capabilities
  4. W3C's Framework for Accessible Specification of Technologies (FAST)
  5. W3C's Media Accessibility User Requirements (MAUR)
  6. Accessibility Object Model
  7. Proof-of-concept 3D level creation tool for blind gamers

Contributing literature

This is a list of papers arising from the research groups that contributed to the development of the Active Game Accessibility (AGA) proposals.

  • Ossmann, R., Miesenberger, K.: Guidelines for the Development of Accessible Computer Games, in: Miesenberger, K., Karshmer, A., Klaus, J., Zagler, W. (Eds.): Computers Helping People with Special Needs, 10th International Conference, ICCHP 2006, Linz, Austria, July 11-13, 2006, Proceedings.
  • Ossmann, R., Archambault, D., Miesenberger, K.: Computer Game Accessibility: From Specific Games to Accessible Games, in: CGames 2006, University of Wolverhampton, 2006.
  • Ossmann, Roland, Miesenberger, Klaus and Archambault, Dominique, User profiles in computer games designed for all, in: Proceedings of CGames'07 (11th International Conference on Computer Games: AI, Animation, Mobile, Educational & Serious Games), pages 144-148, La Rochelle, France, 2007.
  • Ossmann, R., Miesenberger, K., Archambault, D.: Tic Tac Toe - Mainstream Games Accessibility: Guidelines and Examples, in: Eizmendi, G. et al (ed.): Challenges for Assistive Technologies, 9th AAATE 07, pages 791-795, Amsterdam 2007
  • Ossmann, R., Miesenberger, K., Stoeger, B.: Usability Evaluation of Accessible Games: The Adaptation of the Heuristic Evaluation for Games Designed for All, in: Accessible Design in the Digital World (ADDW), new media, new technologies, new users, University of York, 2007.
  • Archambault, D., Ossmann, R., Gaudy, T. and Miesenberger, K.: Computer Games and Visually Impaired People, in: Upgrade, volume VIII, number 2, pages 11 pages, ISSN 1684-5285, 2007.
  • Archambault, D., Gaudy, T., Miesenberger, K., Natkin, S., Ossmann, R.: Towards generalised accessibility of computer games, in: Pang, M.: Edutainment 2008, Springer, 2008.
  • Miesenberger, K., Ossmann, R., Archambault, D., Searle, G., Holzinger, A.: More than Just a Game: Accessibility in Computer Games, in: Holzinger (ed.): USAB 2008 - HCI & Usability for Education and Work, Proceedings, Springer, Heidelberg.
  • Ossmann, R.; Nussbaum, G.; Parker S.; Archambault, D.; Miesenberger, K.: Bring the Users to the Games by the usage of the Assistive Technology Rapid Integration & Construction Set, in: 1st Workshop on Game Accessibility: Xtreme Interaction Design (GAXID'11), online 2011: http://ga.fdg2011.org/papers.html.
  • “Pyvox 2: an audio game accessible to visually impaired people playable without visual nor verbal instructions” Thomas Gaudy, Stéphane Natkin and Dominique Archambault — In: Z. Pan , A. Cheok, W. Mueller and A. El Rhalibi (Eds.). Transactions on Edutainment II, Springer LNCS 5660, 2009. pp 176-186.
  • “Playing audiogames without instructions for uses: To do without instruction leaflet or without language itself?” Thomas Gaudy, Stéphane Natkin and Dominique Archambault — In: Proceedings of CGAMES'06 Conference (9th International Conference on Computer Games: AI, Animation, Mobile, Educational & Serious Games) in Dublin, Ireland, Nov 2006.
  • "Tampokme: a Multi-Users Audio Game Accessible to Visually and Motor Impaired People” Thomas Gaudy, Stéphane Natkin, Cécile Le Prado, Thierry Dilger and Dominique Archambault — In: Proceedings of CGames'07 (11th International Conference on Computer Games: AI, Animation, Mobile, Educational & Serious Games), La Rochelle, France, 2007. — pp.73-78.