This extended abstract is a contribution to the Easy-to-Read on the Web Symposium. The contents of this paper were not developed by the W3C Web Accessibility Initiative (WAI) and do not necessarily represent the consensus view of its membership.

Reading Adaptations for People with Cognitive Disabilities: Opportunities

1. Problem Description

Some people with cognitive disabilities have difficulty with aspects of reading other than seeing and decoding text. The aim of this note is to bring to the Symposium a number of opportunities for research that may lead to ways of adapting textual content to make it easier for these people to read.

2. Approach

We've collected suggestions from the literature, from colleagues, and from our own work. Where possible, we've tried to suggest at least something about technical approaches, as well as describing the problems that people face.

3. Opportunities for Research

Spoken presentation

For many people, having text read to them, while also seeing it, aids comprehension (Blattner and Glinert 1996). The research challenge here may not be deep, but it is important: How can we shape the infrastructure for content presentation on the Web so that it is trivial, or even automatic, that readers who want it can get spoken presentation, for any text?

Although screen readers address some of these concerns, they often rely on keyboard shortcuts or device-specific input to navigate content, and may require users to understand the underlying content structure to be used effectively. Possible areas of improvement include using mouse, touch, voice, or gesture-based interaction to receive and control spoken presentation on many types of devices, without requiring user configuration. This could also include speaking text embedded in photographs, charts, and video.
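As a minimal sketch of one small piece of this infrastructure, the Python fragment below (using the standard library's html.parser; the function names are our own, invented for illustration) collects the speakable text of a page, including the alt descriptions of images, which a speech engine could then present:

```python
from html.parser import HTMLParser

class SpeakableText(HTMLParser):
    """Collect text for spoken presentation, including image alt text.

    An illustrative sketch: a real system would feed the collected text
    to a platform speech engine and synchronize highlighting with playback.
    """
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        # Images carry no visible text; speak their alt description instead.
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.chunks.append(alt)

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

def speakable_text(html: str) -> str:
    parser = SpeakableText()
    parser.feed(html)
    return " ".join(parser.chunks)
```

An actual deployment would hand this text to a text-to-speech engine and let the reader control playback by touch, voice, or gesture, as discussed above.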

Access to definitions

Self-advocate Nancy Ward (Lewis and Ward 2011) puts this high on her list of requests: make it easy to get a definition of any unfamiliar word while you read. While various makeshift solutions exist, this facility is not built into the infrastructure everyone uses. A deeper aspect of the problem is that what one really wants is a definition that gives the sense in which a word or phrase is used on a specific occasion, not a grab bag of possible definitions. This would seem to require deep linguistic analysis, but perhaps techniques like Latent Semantic Analysis (Landauer and Dumais 1997) could be used to select a likely sense, and even to replace unknown phrases with known ones automatically, based on an individual's preferences. Can we automatically rewrite content, on demand, to replace words not in a reader's vocabulary, or provide alternative forms of content?
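To give a flavor of the kind of sense selection such techniques might support, here is a toy sketch in Python. The hand-built sense vectors are invented for illustration; a real system would derive meaning vectors for each sense from a trained model such as LSA, and the context vector from the words surrounding the occurrence:

```python
import math

# Toy sense vectors standing in for model-derived meaning vectors.
SENSES = {
    "bank (riverside)": {"river": 0.9, "water": 0.8, "money": 0.05},
    "bank (finance)":   {"money": 0.9, "account": 0.8, "river": 0.05},
}

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def context_vector(words):
    # Crude context representation: count the surrounding content words.
    vec = {}
    for w in words:
        vec[w] = vec.get(w, 0.0) + 1.0
    return vec

def likely_sense(context_words):
    """Pick the sense whose vector best matches the context."""
    ctx = context_vector(context_words)
    return max(SENSES, key=lambda s: cosine(ctx, SENSES[s]))
```

Once a likely sense is chosen, the same similarity machinery could suggest a replacement word already in the reader's vocabulary.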

Topic extraction

If one isn't a good reader, it's useful to know whether a text contains something important or interesting, so that one can take the trouble to read it carefully, or can get help in reading it. Can we automatically provide readers who want it with a readable summary that makes this judgment easier? How about automatically extracting a list of keywords? Experiments by Kirill Kireyev, the author, and collaborators using Latent Semantic Analysis gave disappointing results, but other analysis techniques are possible, including leveraging blogs, comments, and user-generated summaries (Gamon et al. 2008; Hu et al. 2008). Perhaps automatic outlining can make content easier to understand, navigate, and read? An interface that outlined content and allowed selective expanding and collapsing could help users who have difficulty reading long passages at once.
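The simplest form of keyword extraction is frequency counting over content words, sketched below. The stopword list is illustrative only; more capable extractors would use TF-IDF weighting or the comment- and blog-based signals cited above:

```python
from collections import Counter
import re

# A small stopword list for illustration; a production extractor would
# use a fuller list and corpus-based weighting such as TF-IDF.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "in", "it",
             "that", "for", "on", "with", "can", "we"}

def keywords(text: str, n: int = 3):
    """Return the n most frequent content words as candidate keywords."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]
```

Even a crude keyword list like this could help a reader decide whether a text is worth the effort of careful reading.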


We believe it was T.V. Raman who said, “The best interaction is the one you don't have to have,” pointing out that systems can often use context to determine what we want done, rather than requiring us to specify it. In somewhat the same vein, sometimes the best text is the text you don't have to read. Can we find ways to use information about an individual reader, and the reader's situation, to eliminate material that is irrelevant to them? For example, many informational pages have long lists of addresses, of which only the closest few are likely relevant, or business listings that may not be open at the reader's time of day. Descriptions of eligibility for services (important information for people with disabilities) often include contingencies that can be determined to be irrelevant in a particular case.
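As a sketch of this kind of context-based filtering, the Python fragment below (the listing data and field names are invented for illustration) keeps only the entries that are both nearby and open at the reader's current time:

```python
from datetime import time

# Hypothetical listing records; a real adaptation would draw location
# and opening hours from structured markup or a directory service.
LISTINGS = [
    {"name": "Downtown office", "miles": 1.2,  "open": time(9),  "close": time(17)},
    {"name": "Airport office",  "miles": 14.0, "open": time(9),  "close": time(17)},
    {"name": "Night clinic",    "miles": 2.5,  "open": time(18), "close": time(23)},
]

def relevant_listings(now, max_miles=5.0):
    """Keep only listings that are nearby and currently open."""
    return [l["name"] for l in LISTINGS
            if l["miles"] <= max_miles and l["open"] <= now < l["close"]]
```

The same pattern, filtering on facts known about the reader, could prune irrelevant eligibility contingencies as well.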

Presenting choice architecture

Often text is intended to support making a choice. However, it is often not easy to see what the possible choices are, or what considerations should guide the chooser. Further, as Sunstein and Thaler (2008) argue, choice architecture, the way choices are presented (for example, which choice is the default), has a big influence on what people do. Bob Williams (personal communication, September 14, 2012) has asked: could we extract the logic of choices from texts, and present it in (for example) diagrammatic form? Moving the other way, some people with cognitive disabilities find it difficult to understand abstract arguments. Could we automatically construct a narrative that provides a concrete illustration of a path through the choice logic a text represents?
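One way to represent extracted choice logic is as a small decision tree, from which both a diagram and a concrete narrative can be generated. In this sketch the eligibility questions are invented for illustration; extracting such a tree from real text is the open research problem:

```python
# A hypothetical choice tree extracted from an eligibility text:
# each internal node is a question, each leaf an outcome.
CHOICE_TREE = {
    "question": "Are you over 65?",
    "yes": {"question": "Do you live in the county?",
            "yes": "You qualify for the senior transit pass.",
            "no": "Apply through your own county instead."},
    "no": "This program is for people over 65.",
}

def narrate_path(tree, answers):
    """Turn one path through the choice logic into a concrete story."""
    steps = []
    node = tree
    for answer in answers:
        steps.append(f"{node['question']} {answer.capitalize()}.")
        node = node[answer]
        if isinstance(node, str):  # reached an outcome leaf
            steps.append(node)
            break
    return " ".join(steps)
```

Walking every path of the same tree would enumerate all possible outcomes, which is exactly the overview a diagrammatic presentation would give.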

Supporting workflows

It is clear that the Web is not just about consuming content, but increasingly about producing, interacting with, and sharing it too (Lenhart et al. 2010). With interactive content, an important aspect of being easy to read is knowing what to read next, especially when the layout of content is not linear or sequential. Can we help users by making implicit workflows explicit, showing them what content to read and respond to next? Can we provide guided navigation systems that take users step-by-step through filling out online forms, purchasing products, responding to emails, and contributing to social media streams? One strategy for accomplishing this may be to provide mobile and tablet-oriented web interfaces as alternatives to desktop websites (Hoehl and Lewis 2011). Adapting a workflow to a particular user's ever-changing needs poses a challenge, not only as users learn and become more adept, but also as users become less adept with a system, perhaps through aging-related disabilities or increasing cognitive impairment.
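At its simplest, a guided workflow is an explicit list of steps with a single "what to do next" prompt. This hypothetical sketch shows the idea; the form-filling steps are invented for illustration:

```python
# A hypothetical guided workflow for an online form, expressed as an
# explicit sequence so the interface can always say what to do next.
class GuidedWorkflow:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def current_prompt(self):
        if self.index >= len(self.steps):
            return "All done."
        return self.steps[self.index]

    def complete_step(self):
        # Advance only when the current step is finished, so the user
        # always sees a single clear next action.
        if self.index < len(self.steps):
            self.index += 1
        return self.current_prompt()

flow = GuidedWorkflow([
    "Enter your name.",
    "Enter your address.",
    "Review and submit the form.",
])
```

The adaptation challenge described above would amount to reordering, splitting, or merging these steps as the user's abilities change.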


4. References

  1. Blattner, M. M. and Glinert, E. P. (1996) Multimodal Integration. IEEE MultiMedia 3, 4 (Dec. 1996), 14-24. DOI: 10.1109/93.556457
  2. Gamon, M., Basu, S., Belenko, D., Fisher, D., Hurst, M., and Konig, A. C. (2008) BLEWS: Using Blogs to Provide Context for News Articles. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM '08).
  3. Hoehl, J. and Lewis, C. (2011) Mobile web on the desktop: simpler web browsing. In The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 263-264. DOI: 10.1145/2049536.2049598
  4. Hu, M., Sun, A., and Lim, E.P. (2008) Comments-oriented document summarization: understanding documents with readers' feedback. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR '08). ACM, New York, NY, USA, 291-298. DOI: 10.1145/1390334.1390385
  5. Landauer, T. and Dumais, S. (1997) A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240.
  6. Lenhart, A., Purcell, K., Smith, A., and Zickuhr, K. (2010) Social Media & Mobile Internet Use Among Teens and Young Adults. Pew Internet & American Life Project.
  7. Lewis, C. and Ward, N. (2011) Opportunities in Cloud Computing for People with Cognitive Disabilities: Designer and User Perspective. In: Stephanidis, C. (ed.) Universal Access in Human-Computer Interaction: Users Diversity. Berlin/Heidelberg: Springer Verlag, pp. 326-331. DOI: 10.1007/978-3-642-21663-3_35
  8. Sunstein, C., and Thaler, R. (2008) Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press.