Making Media Accessible to All


Personalisation, Accessible Media, User Profile

1. Problem Addressed

Selecting an accessible media service is often a binary choice: either on or off, with one option supplied to all regardless of the degree of need. Audience requirements vary widely, from total loss of a sense to an occasional need for assistance. Furthermore, accessible media services continue to address only sight and sound assistance, which does not help participation by people with reduced motor function or with understanding or learning difficulties; more than one condition is often present, leading to co-morbidity.

Developers need to understand and incorporate the wide range of requirements of people with differing abilities. A ‘one-size-fits-all’ approach can be the easiest option to implement, compared with developing different versions of the same website, application or audiovisual media for people with a range of abilities, but such solutions are often not scalable when applied to platforms with a range of accessibility options.

The role of the ITU Audio Visual Accessibility Group is to investigate and suggest options and solutions that can be applied to any form of media no matter how produced, distributed, or reproduced.

2. Relevant background

Researchers have explored ways to adapt the same content to meet the needs of different users based on a user profile. The SUPPLE project at the University of Washington, IBM Web Adaptation Technology and the AVANTI browser are notable examples, mostly addressing users with a range of visual and motor impairments.

ISO 9241-129, published in 2010, standardizes the management of user profiles for software individualization. The European Union Virtual User Modelling and Simulation (EU-VUMS) cluster made an ambitious attempt to publish an exhaustive set of anthropometric, visual, cognitive, auditory, motor and user-interface parameters for adapting the human–machine interfaces of automobiles, consumer electronics, audiovisual media and so on.

ISO/IEC 24756 defines the concept of a Common Access Profile for accessible and assistive devices.

The Technical Report of the ITU Focus Group on Smart Cable Television also provides relevant background.

ITU Study Group work

ITU-T Study Group 9 carries out studies on the use of telecommunication systems in the distribution of television and sound programs, supporting advanced capabilities such as ultra-high definition and 3D-TV. This work covers the use of cable and, in conjunction with other groups, hybrid networks – primarily designed for the distribution of television and sound programs to the home – as integrated broadband networks to provide interactive voice, video and data services, including Internet access.

ITU-T Study Group 16 is responsible for studies relating to ubiquitous multimedia applications, multimedia capabilities for services and applications for existing and future networks, including the coordination of related studies across the various ITU-T SGs. It is the lead study group on multimedia video coding, systems, and applications; multimedia applications; telecommunication/ICT accessibility for persons with disabilities; human factors; intelligent transport system (ITS) communications; digital and e-health; Internet Protocol television (IPTV) and digital signage; and e-services.

ITU-R Study Group 6 looks at programme production and content exchange between distributors and broadcasters, and the delivery of broadcasts to users. Progress on some of the accessibility-related techniques and technologies being applied to programme production is described in ITU-R Reports (including ITU-R BT.2420 and ITU-R BT.2447). These now extend from the original sight- and sound-based accessible technologies to include haptic and cognitive studies and trials. All three Study Groups have formed a joint Rapporteur Group (IRG-AVA) to combine expertise and ideas with representatives from interest groups, administrations, and industry.

3. Challenges

To make media accessible to all, the entire chain from script to reproduction device must understand and contribute to the Quality of Experience of the intended audience. To cater for a diversity of needs, simple on/off systems are a very coarse option and do not take advantage of the capabilities and features that new and developing technologies offer.

Although many of these options are not directly targeted at accessibility, technologies such as object-based media offer ideal opportunities to personalise media to a user’s needs. The challenge is to define the “language” that describes the available options through a personal common user profile that can be applied to any device. Standardising the form of such user profiles is an important objective.

Figure 1 shows four horizontal bars labelled Hearing, Seeing, Participating and Understanding. A vertical line indicates the position a user can select along each bar, and under each bar are options the user can select to enhance the content.

Fig 1. Ranges of human needs for sharing the media experience
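The four ranges in Fig 1 lend themselves to a simple machine-readable representation. The sketch below is purely illustrative; the field names and option strings are assumptions, not part of any published profile format:

```python
from dataclasses import dataclass, field

@dataclass
class ProfileDimension:
    """One horizontal bar from Fig 1 (hypothetical structure)."""
    level: float = 0.0                 # 0.0 = no assistance, 1.0 = full loss
    options: list = field(default_factory=list)  # enhancements chosen by the user

@dataclass
class UserProfile:
    """The four dimensions of need shown in Fig 1."""
    hearing: ProfileDimension = field(default_factory=ProfileDimension)
    seeing: ProfileDimension = field(default_factory=ProfileDimension)
    participating: ProfileDimension = field(default_factory=ProfileDimension)
    understanding: ProfileDimension = field(default_factory=ProfileDimension)

# Example: moderate hearing loss with captions enabled,
# slight visual impairment handled by larger text.
profile = UserProfile(
    hearing=ProfileDimension(level=0.6, options=["captions"]),
    seeing=ProfileDimension(level=0.2, options=["large_text"]),
)
```

Because each dimension carries a graded level rather than an on/off flag, the same structure can express anything from occasional assistance to total loss of a sense.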

As computer-based devices evolve, people who need assisted access to information will use different sets of applications and software platforms across multiple devices. Ideally, an accessibility service should be available on any device and application, irrespective of the underlying hardware. Responsive design of applications and web pages is an example of automatic adaptation of layout to screen size and deployment platform.

This requires information about a user’s needs so that content can be personalized to their range of abilities. A user profile can be defined as a representation of a user model, while a user model can be defined as a machine-readable description of the user. Once a single user profile has been created, the corresponding model can be applied across devices and applications.
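As a sketch of what such a machine-readable description might look like, the JSON below encodes a hypothetical profile. The keys and values are invented for illustration and are not drawn from any standardized format:

```python
import json

# Hypothetical serialized user profile (all keys are assumptions).
profile_json = """
{
  "version": "0.1",
  "seeing":  {"font_scale": 1.5, "contrast": "high"},
  "hearing": {"captions": true, "caption_size": "large"}
}
"""

# Any device or application can parse the same description.
profile = json.loads(profile_json)
```

Because the description is plain structured data, the same profile file could travel with the user between a TV, a phone and a computer.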

Examples of interface adaptation across multiple devices and platforms using a common user profile format are shown in Figure 2.

Figure 2 shows how graphics and text can be adapted and enhanced to assist visual impairment, with examples for a computer, smart TV, smartphone and simple mobile phone display.

Figure 2. Example of interface personalization using a common user profile format

Profiles can store details such as color contrast, font size and inter-element spacing of icons, which can be applied across IP and traditional smart TVs, desktop and laptop computers, smartphones and low-end mobile devices.
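A minimal sketch of applying one stored detail, font size, across device classes. The base sizes and device names below are invented for illustration, not values from any standard:

```python
# Illustrative base font sizes (pt) per device class -- assumptions only.
BASE_FONT_PT = {
    "smart_tv": 28,        # viewed from a distance
    "desktop": 12,
    "smartphone": 14,
    "feature_phone": 10,
}

def font_size_for(device: str, profile_font_scale: float) -> int:
    """Scale a device's base font size by the user's stored profile setting."""
    return round(BASE_FONT_PT[device] * profile_font_scale)

# One profile value (e.g. 1.5x) yields a device-appropriate size everywhere.
sizes = {device: font_size_for(device, 1.5) for device in BASE_FONT_PT}
```

The point of a common format is that the user states their need once (here, a 1.5× scale) and each platform interprets it in its own terms.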

Developing content that can exploit a Common User Profile requires authors to explicitly state font size and color contrast for general content as well as for closed captions or subtitles.

Storing and sharing information about users always brings security risks, including unintended use not authorized by the end user. Security and privacy must therefore be fully integrated into every aspect of a Common User Profile that gathers personal information. Sharing the actual profile data is not necessary: the personalization algorithms can run on user profiles stored on local devices, allowing the user to choose if and how information is shared between their own devices connected to their own secure data services. Standardization ensures personalization without the risk of wider sharing of an individual user’s data.
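The privacy model described above can be sketched as follows: the device requests generic content, and personalization happens locally against the stored profile, which is never transmitted. All function and field names here are illustrative assumptions:

```python
def fetch_content(url: str) -> dict:
    """Stand-in for a network request. Note: it carries no profile data."""
    return {"text": "News at ten", "font_pt": 12, "contrast": "normal"}

def personalize_locally(content: dict, profile: dict) -> dict:
    """Runs on the user's own device; the profile never leaves it."""
    adapted = dict(content)
    adapted["font_pt"] = round(content["font_pt"] * profile.get("font_scale", 1.0))
    if profile.get("high_contrast"):
        adapted["contrast"] = "high"
    return adapted

profile = {"font_scale": 1.5, "high_contrast": True}   # stays on-device
page = personalize_locally(fetch_content("https://example.org/news"), profile)
# page == {'text': 'News at ten', 'font_pt': 18, 'contrast': 'high'}
```

The design choice is that only generic content crosses the network; the adaptation step, and the sensitive data driving it, remain under the user's control.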

4. Outcomes

The target outcome is to provide a common data set describing how accessible media options are created, exchanged, distributed, and consumed.

5. Future perspectives

The internet and internet-connected devices have become a vital part of media. The standardisation of user profiles represents a major step toward greater media personalisation.

The next step would be to automate personalisation rather than rely on direct and possibly repeated human input. Future work should include the development of AI techniques for intelligent, adaptive service profiles that automatically align user needs with the accessibility services available. These needs may change with the content type, the age of the user and the environment the user is in at the time (home, public transport, office…). The experience and needs arising from a live sport programme may be very different from those of a pre-recorded drama or a live entertainment programme.
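As a toy sketch of such automatic alignment, the rules below adjust a profile according to context. The contexts, fields and adjustments are invented examples, not a proposed standard or AI technique:

```python
def adapt_for_context(profile: dict, environment: str, content_type: str) -> dict:
    """Return a copy of the profile adjusted for the current situation."""
    adapted = dict(profile)
    if environment == "public_transport":
        # Noisy surroundings: prefer captions even for users who
        # normally rely on audio.
        adapted["captions"] = True
    if content_type == "live_sport":
        # Fast commentary: request a reduced caption rate.
        adapted["caption_rate"] = "reduced"
    return adapted

at_home = adapt_for_context({"captions": False}, "home", "drama")
commute = adapt_for_context({"captions": False}, "public_transport", "live_sport")
# commute == {'captions': True, 'caption_rate': 'reduced'}
```

A learned model would replace these hand-written rules, but the interface, context in and adjusted profile out, would be the same.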

Today’s accessibility services are limited to subtitles/captions, audio description, signing and audio subtitles/captions. The future will see the development of services based on interaction through different modalities, such as haptic signalling, brain–computer interaction (BCI) or gestures, addressing the interaction needs of users with cognitive impairments or reduced motor function. Services will make ever greater use of machine learning and AI.

Our target is to “leave no-one behind!”.


This work has been partially funded by the European Commission project Media Verse: A universe of media assets and co-creation opportunities at your fingertips, under grant number 957252. Pilar Orero is part of the TransMediaCatalonia research group (2017SGR113).


  1. Recommendation ITU-T H.430.1: Requirements for immersive live experience (ILE) service.
  2. Recommendation ITU-T H.702: Accessibility profiles for IPTV systems.
  3. Technical Paper ITU-T FSTP.WebVRI: Guideline on web-based remote sign language interpretation or video remote interpretation (VRI) system.
  4. Report ITU-R BT.2420: Collection of usage scenarios of advanced immersive sensory media systems.
  5. Report ITU-R BT.2447: Artificial intelligence systems for programme production and exchange.
  6. Technical Report of the ITU-T Focus Group on Smart Cable Television.
  7. ITU-T Work Programme: Common user profile format for audio-visual content distribution.