What is the overall goal of this proposal?
Status: closed. This was resolved in version 1.0 on 12/4/2013.
It is a draft proposal that has been in development since January 2013, and was made available for comment on April 15, 2013. There have been a number of questions about the scope of this proposal, since the term accessibility is quite open. Here's a quick view.
- It focuses on creative work (content: books, video, audio, images, web pages, and software applications), describing its accessibility attributes
- It does not focus on physical places and their accessibility characteristics, nor on events
- While it will aid in finding content appropriate to a specific user's preferences, we do not have the matching algorithms of CC/PP or Access for All's PNP in mind. Additional metadata can be added to the content to take advantage of those. It's for this reason that we have tried to stay as close to ISO 24751 and existing best practice as possible, coordinating closely with both ISO 24751/AfA and Dublin Core Metadata
The goal is easily described as making accessible content discoverable. While WCAG, PDF/UA and other efforts strive to create all content as accessible content, there is still an immense amount of content that was not designed with accessibility in mind, whether due to cost, ignorance, or legacy. Finding accessible content is often akin to finding a needle in a haystack of inaccessible material.
To accomplish this, it is important to focus on making the metadata easily specified, with a minimal number of properties. This is both so that it's easy for web content creators to characterize their content and so that users have a minimal number of terms for search. There is much more detail that could be represented in these properties, but doing so would create such a large number of properties and values as to render them less than useful for people looking for their type of accessible content.
Charles suggested a better standard: make simple things simple and hard things possible. This may be an extension of our original goal.
accessHazard - Ok as is, or should it be negated in sense or allow a "none"?
Status: closed on 10/7/2013 with the addition of three negative assertions as well as the existing positive assertions.
The decision was to create three negative assertions to match the positive assertions. All three of the negative properties should be set if none of the hazards known to date are present. If no properties are set, the state of accessHazard is unknown, rather than no hazard.
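The resolution above yields a tri-state reading per hazard. The sketch below is illustrative only (the property and value names, e.g. "noFlashing", follow the dialog here and are not the normative vocabulary): a positive assertion means the hazard is present, the matching negative assertion means it was checked and is absent, and no assertion at all means unknown.

```python
# Illustrative, non-normative sketch of interpreting accessHazard values
# as a tri-state per hazard. Value names here mirror the dialog on this
# page ("flashing" / "noFlashing", etc.), not any final vocabulary.

HAZARDS = ("flashing", "motionSimulation", "sound")

def hazard_state(asserted_values, hazard):
    """Return 'present', 'absent', or 'unknown' for one hazard."""
    if hazard in asserted_values:
        return "present"
    # Negative assertion: "no" + capitalized hazard name, e.g. "noFlashing"
    if "no" + hazard[0].upper() + hazard[1:] in asserted_values:
        return "absent"
    # No assertion either way: the state is unknown, not "no hazard"
    return "unknown"

# A resource whose author checked for flashing only:
values = {"noFlashing"}
print(hazard_state(values, "flashing"))          # absent
print(hazard_state(values, "motionSimulation"))  # unknown
```

Note that omitting all assertions deliberately reads as "unknown" rather than "safe", matching the decision that absence of metadata must not imply absence of hazard.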
Also, here are notes from the 10/7 meeting
- Currently, accessHazard defines positively. The resolution is that we have negative assertions that match the positive assertions, not a "none" value.
- The reasoning is that the math works out better: if only 10% of hazardous content is correctly marked as having a hazard, there is still a 90% chance of hitting an unmarked hazard.
- The definition in our spec should refer to seizures, nausea, or other physical reactions (Myers should look at AfA and make sure that I captured the meaning of AfA... Matt and I may have been overly aggressive in editing). I checked the wording after the call and believe that it is fine.
Below is the original dialog.
Note that, since the dialog started, this has been constrained to just the hazards defined in WCAG Guideline 2.3. The proposal has been updated.
There are two opinions running on this at the moment. We need to decide.
- accessHazard is fine as it stands. Only content that has an accessHazard should have these properties set.
- the values should be set in the negative. I think we want to replace accessHazard with a negatively expressed version ("doesNotHaveAccessHazard") but in the meantime I think adopting the existing accessHazard would be a good idea.
Madeleine: I believe we need both accessHazard=flashing and accessHazard=noFlashing, etc. This is because there are three cases we'd like to distinguish:
- checked and it's fine
- checked and it is NOT fine
- didn't check
"Didn't check" can be signified by no metadata -- this will be most of the content on the Web. In cases where someone has checked, let's record both positive and negative states.
There was a larger debate about other poorly designed inaccessible content. This property was intended just for items that can induce seizures.
I've also reproduced a reply I wrote in email on style and a drive to adoption.
I'd just like to point out that we have two competing aspects here... Precision and adoption. And then an understanding of who the audience is for these tags.
Precision: Yes, specifying the exact three hazards that we know is the most correct way to specify the information.
Adoption: If we tell every video producer (e.g., Khan Academy) that they should be adding
  <meta accesshazard="noflashing">
  <meta accesshazard="nomotionsimulation">
  <meta accesshazard="nosound">
I'd have trouble convincing them to do that. I'd settle for making it easy for them, if they know there are no hazards (which, I am sure, is true for 99.99+% of their content). But that's more likely, at least.
Audience: The people who will be adding these tags are accessibility advocates. If a new accessibility hazard that prompts seizures is discovered, those who have already done the work to tag their content would be the most likely group to update it.
Our greatest challenge with this specification is not going to be getting accessibility advocates to do the right thing. It's going to be getting regular webmasters and the like to add accessibility tagging at all. And we can help achieve this by making it as simple as possible, while not compromising information content.
This tension between precision and adoption runs through the specification. If we ask people to do too much, they'll throw up their hands and not do anything at all, or do it wrong. We've tried to err towards the adoptable.
What is the goal of mediaFeature? (conforming or informational) Do we have this right?
Status: closed in 1.0 on 12/4/2013 with the updated description of the different types of accessibilityFeature.
Charles raised the question of whether these attributes are a declaration of conformance (as in alternativeText means that "all of the photographs and other media have alternate text") or just whether the author of the content (or of an adapted version of the content) used alternate text on the significant parts of the content to the best of their abilities. The intent of these is the latter. Since this metadata is being added by people who care about accessibility, we have to trust that they will apply their best efforts before they'd add the attribute.
A framework to understand the accessModes and mediaFeatures has been written on the main wiki page (now in OldContent), and should serve to answer most of these issues. See especially the second numbered list in it, which outlines the two main classes of mediaFeatures:
- Display or Transformative (or just Transform): restyling or adjusting layout while staying within the same access mode. This is row 3 of the table. These adaptations are enhancements or alterations in presentation that do not require intellectual interpretation of the content, and they remain within the same accessMode.
- Augmentation or Content: adding captions, descriptions, or alt text to augment one accessMode with another accessMode. This is the bulk of the table, from row 4, column 4, down and to the right. This class of mediaFeature does require a human or other intelligence to interpret the intellectual content and make it available in another accessMode.
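The two classes above can be sketched as a simple lookup. This is purely illustrative; the feature names below are examples drawn from the discussion, not the normative mediaFeature vocabulary.

```python
# Illustrative, non-normative classification of example mediaFeature values
# into the two classes described above: presentation-only transforms versus
# content augmentations that cross into another accessMode.

TRANSFORM_FEATURES = {"largePrint", "highContrast", "displayTransformability"}
AUGMENTATION_FEATURES = {"captions", "alternativeText", "longDescription",
                         "audioDescription"}

def feature_class(feature):
    """Return 'transform', 'augmentation', or 'unknown' for a feature name."""
    if feature in TRANSFORM_FEATURES:
        return "transform"       # restyling; same accessMode
    if feature in AUGMENTATION_FEATURES:
        return "augmentation"    # intellectual re-expression; new accessMode
    return "unknown"

print(feature_class("captions"))    # augmentation
print(feature_class("largePrint"))  # transform
```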
Charles McN, on the mailing list, wrote the following (in italic, with the response upright).
It isn't clear what the requirements and use cases are for mediaFeature.
Are these just cataloguing information, similar to noting that a physical book has 5 pages of color photographs, or a guarantee that e.g., each image has alternative text, or a longer description is provided as necessary?
We thought that implementing AfA or ISO 24751 verbatim for the purposes of web search, and for use by webmasters, would be too complicated: it needed to be a small set of properties that a user could deal with. We decided to combine various ideas into mediaFeature. mediaFeature simply states that the author/publisher, or someone who modified the book, tried to take advantage of the stated accessibility features. For example, we make no promise that every image was described appropriately, since that is hard for a single person to judge, and different consumers of the information may have different opinions about the quality and coverage of the image descriptions.
It seems that they are the former - interesting information that might lead to expectations of the resource described. They don't seem well-enough specified to know if I can rely on them. And the list seems somewhat arbitrary.
It isn't clear, for example, how to classify a resource that assumes the ability to touch several points of a screen at once, which is now a common feature on devices but poses an obvious potential problem to some users such as those who don't have very many fingers.
In this use case, the resource that needs to be described is an application, not the content. If the accessibility API of this application supports multi-touch with the user’s AT, then the proposed ATCompatible and accessAPI properties should be relevant. Furthermore, we also have a property called controlFlexibility, which also should provide information about whether the application can be fully controlled either via audio, mouse, keyboard, touch or video/gestures.
There are specific properties for interactive applications (softwareApplication, which includes web applications). These are ATCompatible, controlFlexibility, and accessAPI. The mediaFeatures are for the full breadth of CreativeWork, but just focus on the content, not the interaction.
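The split described above, between content-level mediaFeatures on any CreativeWork and the interaction-level properties that only apply to software applications, can be sketched as follows. The grouping is a hypothetical illustration of the proposal's intent, not a normative schema.

```python
# Illustrative, non-normative sketch: mediaFeature (and related content
# properties) apply to any CreativeWork, while ATCompatible,
# controlFlexibility, and accessAPI describe interactive applications only.

CONTENT_PROPERTIES = {"mediaFeature", "accessMode", "accessHazard"}
APPLICATION_PROPERTIES = {"ATCompatible", "controlFlexibility", "accessAPI"}

def applicable_properties(resource_type):
    """Return the set of property names relevant to a resource type."""
    props = set(CONTENT_PROPERTIES)
    if resource_type == "softwareApplication":  # includes web applications
        props |= APPLICATION_PROPERTIES
    return props

print(sorted(applicable_properties("book")))
print(sorted(applicable_properties("softwareApplication")))
```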
Finally, Charles McN proposed that we use a hypothetical display/transformative property of "audioHighContrast" as a thought experiment. My take is that such a property, if it exists (I would love a reference), would be in the fifth column, third row, as +audioHighContrast. It's a transform that would have been applied to the source, much like largePrint is a transform on visual when it is an image.
This has been handled with enhancedAudio.
A significant amount of content used in the development has been placed in an old content area: WebSchemas/Accessibility/Issues Tracker/OldContent#Access_Mode_and_Media_Feature_Framework:_a_tabular_view