Silver Design Sprint Report

W3C Community Group Report 23 April 2018

Latest version
https://www.w3.org/community/silver/draft-final-report-of-silver/
Editor:
Jeanne Spellman

Abstract

The Report of the CSUN Design Sprint is intended for people interested in the development of the W3C accessibility guidelines standard, particularly the preliminary work being done to design the successor to WCAG 2. The Silver Design Sprint of 19–20 March 2018, held at San Diego State University, addressed problem statements identified during a year of research completed by the Silver Task Force, the Silver Community Group, and their research partners. This summary of the work done by the Design Sprint participants is organized into the sections below.

This is probably a TL;DR ("too long; didn't read") document for most people. Please feel free to go straight to the Suggestions section for the essential information.

It is important to note that these are only recommendations.  The Silver project participants will be testing and evaluating these recommendations as they develop prototypes.  They may be changed or removed from the final prototype.

Comments are welcome by email to: public-silver@w3.org

Status of this document

This report was published by the Silver Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

The Silver Project

The Silver Task Force is part of the Accessibility Guidelines Working Group of the W3C. The Silver Task Force and the W3C Silver Community Group are performing the preliminary work for the successor to the Web Content Accessibility Guidelines (WCAG). The guidelines will have a new name that reflects the anticipated broader scope beyond web content. The name Silver comes from the acronym for Accessibility Guidelines, AG.  Ag is the chemical symbol for the element silver.

Silver is currently in the Ideation and Experimentation (prototyping and user testing) phase of the original Design Plan for Silver. The Silver Task Force is partnering with researchers who are studying different aspects of the current WCAG. Research related to the structure of Silver was completed by March 2018. A summary of the structure-related research is publicly available. Many of the individual papers are also public and are linked from the summary. Research related to Silver content will be on-going.

Participation in the Silver Task Force is open to the public. Interested people can join the W3C Silver Community Group to be placed on the mailing list and join conference calls.

Silver Design Sprint

The Silver Design Sprint, held prior to the CSUN 2018 AT Conference, built on a year of research into structural improvements for the existing W3C Accessibility Guidelines. A summary of the research completed to date, with links to the individual reports, is publicly available.

The Silver group brainstormed categories of roles within digital accessibility that they wanted represented, then individually invited participants with expertise in each role, creating a balanced and varied group of perspectives. About 30% of the invited experts also have a disability (for privacy reasons, exact statistics were not tracked). Including experts who themselves have a disability is essential to the success of the Silver project. The role categories were:

  • Accessibility influencers
  • Information architects
  • UX professionals
  • Developers (e.g., Web content, Document, Authoring tool, User Agent, Assistive Technology)
  • Legal specialists
  • Policy specialists
  • W3C Process experts

Twenty-seven experts participated in the Design Sprint, which was hosted by San Diego State University and supported by donations from the University of Illinois Urbana-Champaign Libraries and Google. The Design Sprint was moderated by Camron Shimy of Google, an expert in leading software design sprints using agile techniques.

[Photo: Design Sprint participants]

The Design Sprint process began with a presentation of the research results organized by problem statement.  Participants were asked to create questions starting “How Might We…” as they heard the research results.  The participants worked in groups of 4-5 to decide on problem statements they wished to work on, brainstorm many solutions and start narrowing down solutions.  The groups then developed a prototype (usually paper) and user tested it with others outside the group, and refined the prototype based on feedback.

Problem Statements

This research was used to develop 11 problem statements that needed to be solved for Silver. The detailed problem statements include the specific problem, the result of the problem, the situation and priority, and the opportunity presented by the problem. The problem statements were organized into three main areas: Usability, Conformance, and Maintenance.

Usability

  • Too Difficult to Read and Translate.
  • Difficult to Get Started for beginners.
  • Ambiguity in Interpreting the Success Criteria. Different accessibility experts will interpret the guidelines differently.
  • Persuading Others to follow WCAG is difficult, mostly because of the perception that accessibility is something added at the end of the development process and is costly.

Conformance Model

  • Constraints on What is Strictly Testable pose an obstacle to including guidance that meets the needs of people with disabilities but is not conducive to a pass/fail test.
  • Human Testable (related to Ambiguity) concerns how differences in the knowledge and priorities of testers lead to different results.
  • Accessibility Supported is a conformance requirement of WCAG 2 that is poorly understood and incompletely implemented.
  • Evolving Technology of the rapidly changing web must constantly be evaluated against the capabilities of assistive technology, and evolving assistive technology must be evaluated against the backward compatibility of existing web sites.

Maintenance

  • Flexibility to provide more easily discovered, more helpful information that can be updated as technology advances.
  • Scaling the accessibility guidance so it can be updated to include new and changing technology.
  • Governance of the accessibility standards has not kept pace with changing processes of standard development.

For a more complete discussion of the Problem Statements, see the Silver Problem Statements paper.

Design Sprint Results by Problem Statement

It is important to note that most of the solutions proposed or prototyped combine many aspects of the Problem Statements. The prototypes developed by the small groups are organized in this report by the main problem statement they addressed. There was uneven coverage of the problem statements, so some statements still need prototype work. The Design Sprint generated hundreds of post-it notes with brainstorming ideas, which are captured in the section “How Might We” ideas.

Too difficult to read

  1. Every group proposed a solution that included writing Silver in simple language that was easy to understand and translate.
  2. “Plain language” already has definitions related to secondary education level published by other standards groups like ISO, whereas “simple language” may not be (as) constrained by other definitions, and may better describe the intent.
  3. It is difficult to write success criteria that have enough data and information to make it through the review process.
  4. We need to move from conformance to usability; the guidelines as written drive the wrong behavior, because people address accessibility at the end of the development process.
  5. Develop a framework for writing success criteria that provides a consistent structure and helps people provide sufficient data and information.
  6. Create checkpoints at each phase of the development process to make sure that accessibility is being systematically addressed throughout.
  7. Create entry points by organizational role (e.g., developer, designer, procurement, AT developer) to provide a quick way to filter the Silver guidance to the essential information for that role, with links to additional information. The roles for the Silver prototype were based on the Stakeholder Job Stories.

Difficult to get started

  1. How do you lay out content for understandability? How would you lay out the actual information in the spec? The group recommended using the Mozilla Developer Network (MDN) as an inspiration for a useful format.
    1. Each spec would have its own web page, and each part of those pages would address the impact on the user:
      • how the user interacts with it
      • code samples
      • best practices and then
      • the actual statement of the spec.
    2. Each spec would have its own page, but would not necessarily be numbered – this way you could add to each page as its own separate entity and therefore would not have to be tied to everything else, waiting for updates that need to be done to keep pace with technology as it develops.
  2. How do you actually find what you need (discovery)? The group decided to develop a way for someone to read through the entire spec if they wanted to do so — this would be important for people involved in standards work, policy, etc. — but for others we thought we needed to develop a faceted search feature that could be split into three different sections (a minimal filtering sketch follows this list):
    1. Allow the user to choose their role (e.g. designer, developer, project manager, etc.). Based on the role, information would be targeted to what that role needs to know.
    2. After choosing the role, you could then narrow the information presented by choosing the problem you’re trying to solve (e.g. keyboard navigation, labels, alt text, accessible media, etc.).
    3. After choosing the problem to solve, you could again narrow the information presented by choosing the delivery platform (mobile, web app, etc.).
    4. Develop a Q&A wizard that would ask questions to help people find what they need (e.g. do you have a team, what platform are you working on, what problem are you trying to solve, etc.).  This would have to be developed over time, but we thought that a Q&A wizard to help people get started would be really helpful.
    5. Develop a design-pattern library – “these are the things you have to have in order for things to work with users.”  We keep forgetting the user.  People have to remember what the user is trying to do.  Silver should include personas and relate back to those personas in each part of the spec.
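
To make the faceted discovery idea above concrete, here is a minimal sketch in TypeScript. Everything in it — the Role and GuidanceEntry types, the field names, and the findGuidance function — is an invented illustration of narrowing tagged guidance by role, then problem, then platform, not a design the group settled on:

    // Illustrative faceted search over tagged guidance entries.
    // All names here are hypothetical, not from any spec.
    type Role = "designer" | "developer" | "project-manager" | "policy";
    type Platform = "web" | "mobile" | "web-app";

    interface GuidanceEntry {
      title: string;
      roles: Role[];          // who needs this guidance
      problems: string[];     // e.g. "keyboard-navigation", "alt-text"
      platforms: Platform[];  // delivery platforms it applies to
    }

    // Narrow step by step: role first, then (optionally) problem and platform.
    function findGuidance(
      entries: GuidanceEntry[],
      role: Role,
      problem?: string,
      platform?: Platform
    ): GuidanceEntry[] {
      return entries
        .filter(e => e.roles.includes(role))
        .filter(e => problem === undefined || e.problems.includes(problem))
        .filter(e => platform === undefined || e.platforms.includes(platform));
    }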

Constraints on What is Strictly Testable

  1. The group looked at how testers would assess a web application using measurements along a gradient of accessibility, as opposed to a strict pass/fail result. The tool for putting together a report of the assessment puts an emphasis on:
    1. personas: users’ needs come first
    2. task-based assessment, rather than component-based assessment. A properly marked-up button doesn’t help anything if the user can’t complete the task at hand.
    3. Note: this also makes a good midpoint of grading between component/tag assessment and full-page / complete-process compliance in the WCAG conformance model. (A minimal scoring sketch follows this list.)
  2. Prototype for task-based assessment
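
As a rough illustration of grading along a gradient rather than pass/fail, the sketch below scores an assessment by how fully users could complete each task. The TaskResult shape, the 0-to-1 completion scale, and the weighting scheme are assumptions for illustration; the actual measures would come out of the prototyping work:

    // Illustrative task-based scoring: each task gets a completion
    // rating from 0 (blocked) to 1 (completed without barriers).
    interface TaskResult {
      task: string;       // e.g. "check out with a screen reader"
      weight: number;     // relative importance of the task
      completion: number; // 0..1, judged by the tester
    }

    // A weighted average yields a gradient score instead of pass/fail.
    function accessibilityScore(results: TaskResult[]): number {
      const totalWeight = results.reduce((sum, r) => sum + r.weight, 0);
      const weighted = results.reduce((sum, r) => sum + r.weight * r.completion, 0);
      return totalWeight === 0 ? 0 : weighted / totalWeight;
    }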

Human Testable

The group consensus was that if no two experts reach exactly the same conclusion in a manual accessibility audit, that fact isn’t inherently a bad thing, and it should be embraced using a peer-review methodology.

Accessibility Supported

There are different approaches to accessibility supported:

  • Helping the engineering community understand which assistive technologies support the code they are writing. WCAG 2.0 planned a database of support for features across AT and languages, but it was never built; the resources needed to create and maintain such a database were too daunting.
  • Helping assistive technology developers understand what is needed to support Silver. For example, the PC Talk screen reader in Japan does not support ARIA (and only recently gained support for headings) because its developers do not speak English and the spec is very difficult to translate. Create test files for AT developers to use; test results can be public. There was discussion of whether AT providers can be held responsible for their support of Silver.
  • Helping the browser developers understand what is needed to support people with disabilities who do not use assistive technology and people with disabilities who are poorly served by standard assistive technologies.  There are features that content authors struggle to provide (for example, font size, spacing, color, enlarged printing, overflow controls that display text no matter what size, single column, and other features that improve the visual readability of rendered text) that are best provided by browsers.
  • Determining when an assistive technology provides real accessibility support.  For example, screen magnification does not provide support for reading, but is held out as accessibility support.

The group consensus was to remove author responsibility for accessibility supported and instead create guidance that would help assistive technology and browser developers design and prioritize the implementation of features in their products that would support Silver. Several people argued strongly for explicit requirements for assistive technologies and browsers.

The group decided to focus on Accessibility Supported for assistive technology developers for their first prototype, and included some of the ideas from the Difficult to Read discussion. The second version focused on user agent (browser) developers, and the third version focused on procurement officers attempting to purchase accessible technology and determine whether their mobile app met the accessibility guidance.
Details of the accessibility supported prototype, with links to photographs of the paper mockups, are available.

Flexibility

A “database all the things” approach may help with all, or at least many, of the problems. It would be easier to search and locate content for new audiences and novices as well as experts and other stakeholders. By parsing content into smaller, simpler units, it would be easier to onboard and likely easier to translate. By being based on a tagged structure, it would be easier to identify related criteria as well as those with potential conflicts. It would also be easier to scale, govern, and keep current under more of an open source model.
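
As one way to picture the “database all the things” idea, the sketch below models each unit of guidance as a small tagged record, so that related (or conflicting) criteria fall out of simple tag queries. The GuidanceUnit fields and tag vocabulary are assumptions, not a settled schema:

    // Illustrative tagged-record model for guidance content.
    interface GuidanceUnit {
      id: string;
      text: string;          // one short, plain-language statement
      tags: string[];        // e.g. "contrast", "low-vision", "css"
      relatedIds: string[];
      conflictIds: string[]; // criteria that may pull in the other direction
    }

    // Related units fall out of simple tag intersection.
    function related(unit: GuidanceUnit, all: GuidanceUnit[]): GuidanceUnit[] {
      return all.filter(
        other =>
          other.id !== unit.id &&
          other.tags.some(tag => unit.tags.includes(tag))
      );
    }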

Suggestions from Silver Task Force and Community Group

These suggestions come from the members of the Silver Task Force and Community Group after discussing the results of the Design Sprint. The context of the suggestions is specific to the Design Sprint and will intersect with other needs of the project. Therefore, not all suggestions will necessarily be implemented as stated, but provide an important base of input to future planning.

Usability

  1. Take existing WCAG 2.1 guidance and rewrite it in plain language, using editors with simple language or plain language experience. The existing success criteria may need to be updated, but most of the WCAG 2.1 guidance is still valid. It needs more clarity, ease of reading, and ease of translation.
  2. Organize the data in small snippets that can be coded and categorized so they can be assembled dynamically to meet the needs of the person looking for information.
  3. Create a comprehensive view for W3C Technical Report purposes, and for those who need to view the total document.
  4. Create a solution that addresses the needs of people to find information by role, by problem, by disability, and by platform. How can people discover what they need to know?
  5. Design a homepage oriented toward helping beginners that is separate from the W3C Technical Report. Include shortcuts for expert users who know what they want (e.g., a code sample for an accessible tab panel).

Conformance

  1. Design a conformance structure and style guides that shift emphasis from “testability” to “measurability”, so that guidance can be included that is not conducive to a pass/fail test. Pass/fail tests can be included, but they are not the only way to measure conformance.
  2. Develop scorecard or rubric measures for testing task accomplishment, instead of technical page conformance.
  3. Develop a point and ranking system that will allow more nuanced measurement of the content or product: e.g. a bronze, silver, gold, platinum rating where the bronze rating represents minimal conformance (roughly equivalent to meeting WCAG 2 AA), and increasing ranks include inclusive design principles, task-based assessment, and usability testing (see the sketch after this list).
  4. Include a definition and concept for “substantially meets” so people are not excessively penalized for bugs that may not have a large impact on the experience of people with disabilities.
  5. Remove “accessibility supported” as an author responsibility and provide guidance to authoring tool, browser, and assistive technology developers on the expected behaviors of their products.
  6. Develop a more flexible method of claiming conformance that is better suited to accommodate dynamic or more regularly updated content.
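
A minimal sketch of how the ranked ratings in suggestion 3 might be computed. The 0.7 threshold, the input flags, and the rules mapping practices to ranks are invented placeholders; the report does not define them:

    // Illustrative mapping from a measured score plus practices to a rank.
    type Rating = "none" | "bronze" | "silver" | "gold" | "platinum";

    interface Assessment {
      score: number;             // 0..1, e.g. from task-based testing
      inclusiveDesign: boolean;  // inclusive design principles applied
      taskBasedTesting: boolean; // task-based assessment performed
      usabilityTesting: boolean; // usability testing performed
    }

    function rate(a: Assessment): Rating {
      if (a.score < 0.7) return "none"; // below minimal conformance (threshold assumed)
      if (a.inclusiveDesign && a.taskBasedTesting && a.usabilityTesting) return "platinum";
      if (a.inclusiveDesign && a.taskBasedTesting) return "gold";
      if (a.inclusiveDesign) return "silver";
      return "bronze"; // roughly equivalent to meeting WCAG 2 AA
    }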

Maintenance

  1. Develop a core of rarely-changing requirements (normative) with modules of platform-oriented advice, examples, tests, and support materials that can be updated as technology changes (see the sketch after this list).
  2. Develop a method for accessibility experts to contribute new content, such as design patterns, code, and tests, where the experts vote material up and down without waiting for working group approval.
  3. Change the working group process to include Community Group participation.
  4. Improve access to specification development tools (e.g. Github) so that people with disabilities can more easily participate in spec development, whether through new or modified tooling. There are existing efforts that can be incorporated and improved on.
  5. Develop specification content a small amount of guidance at a time, and fully develop the content before including it in the spec. Keep a public schedule of when issues will be worked on, so the public can contribute in a timely manner.
  6. Keep a changelog of all changes to the spec so it is easy for reviewers to find the changes.
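
A minimal sketch of the core-plus-modules split in suggestion 1, with a stable normative core and platform modules that version and update independently. All names and fields are illustrative assumptions, since the report does not define a data model:

    // Illustrative split between a stable normative core and updatable modules.
    interface NormativeRequirement {
      id: string;        // e.g. "contrast"
      statement: string; // rarely-changing normative text
    }

    interface PlatformModule {
      requirementId: string; // the core requirement this module supports
      platform: string;      // e.g. "web", "mobile"
      techniques: string[];  // advice and examples, updated as technology changes
      tests: string[];
      version: string;       // modules are versioned independently of the core
    }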

Next Steps

All the problem statements (and other ideas) need prototypes for the group to evaluate.  The following problem statements need more detailed ideas and prototypes:

  • Ambiguity
  • Persuading Others
  • Evolving Technologies
  • Scaling
  • Governance

The recommendations all need more design and development to become working prototypes. The Silver project participants invite you to share ideas and prototypes with us.

Interested in helping contribute? Join the Silver Community Group and reach out to the Silver Task Force at public-silver@w3.org


Appendix:  Resources and Notes transcribed from Design Sprint

Each table presented its prototype, which was captured on video (MOV files):

  • Table 1 Part 1 and Part 2 (How to create success criteria with a form for public input)
  • Table 2 (Accessibility Supported and Difficult to Read)
  • Table 3 (How to make information easier to discover)
  • Table 4 (How guidance about video accessibility could be displayed)
  • Table 5 (How to measure the usability of content for people with disabilities)

The notes from each table were captured and transcribed. Each table has a folder with photographs of the paper notes and diagrams, and a transcript of the notes that were legible enough to type up.

The general folder where all the documents, photos, and videos are stored is also publicly available. The raw results from the Design Sprint are public for archival purposes.

“How Might We” (HMW) ideas

  • Redefine success
  • Create tools to support inclusive design
  • Give regulators something that will drive the right behaviors and outcomes
  • Get CEOs to champion inclusive design
  • Redefine the entire product life-cycle to produce more usable results
  • Give developers a sense of “done”
  • Balance between regulators, devs and all users (testable vs usable)
  • Decouple testable from conformance
  • Get rid of “accessible” to be inclusive
  • Define testability to be about user goals and needs
  • Flexibility to Testability
  • Write down your own testability statements & claim accessibility
  • Consume
  • Personalization
  • Look beyond language
  • Simulate non-readers (3 VOTES)
  • Further use plain language (1 VOTE)
  • Speak to people in their own language (2 VOTES)
  • Different approaches for different audiences
  • Write a spec about guidance and scenarios for phases (4 VOTES)
  • Motivate different audiences
  • Write guidelines for “planning”
  • Write guidelines for “design”
  • Write guidelines for “coding”
  • Write guidelines for “build and deploy”
  • Write guidelines for “testing”
  • Write guidelines for executives
  • Write guidelines/guidance for designers
  • Write guidance for developers
  • Write guidance for people who rely on AT?
  • Write guidelines for browsers to incorporate personalization for non-readers
  • Write guidance for guidelines writers?
  • Create videos to help designers do more accessible design?
  • Use humor to make a11y more accessible, while being inclusive to other languages/customs?
  • High level goal and technique combos?
  • Use guidelines to drive inclusive design – beyond accessible
  • Redefine testability to be about usability
  • Conform usability
  • Select regions for their WCAG, not just states of translated language version
  • Guide for project manager (ex)
  • Select option for their role to read WCAG (head space)
  • Simplify for non-readers
  • Work with (bcds) who has WCAG knowledge to incorporate WCAG to existing regional system
  • Use (diztioning) and summarization
  • What is strictly testable
  • What is accessibility supported
  • How might we support users who do not use AT?
  • How do we define when AT does provide accessibility support?
  • How do we determine accessibility support for personalization?
  • What role should WAI have with accessibility support?
  • How do we go backward from PwD needs?
  • Human Testable: How do we test?
  • How do we remove the fear from irritation?
  • How might we identify reservability? (couldn’t read)
  • How might we provide techniques and keep up with Tech?
  • How can we provide a reading sequence for varying levels of technology understanding?
  • How can we identify the problems of different user roles?
  • How do we find experience editors?
  • How do we reconcile technology with different technical capability?
  • Carefully identify the necessary user stories and create a learning sequence for each role.
  • For people who use specialized AT and mainstream tech. There must be a process for PwD to identify non-support.  Detail expectations for AT should be developed.  Drop Accessibility Support from Content.
  • IMG-2043 is out of focus.  How do we keep the ?? so that ?? disabilities are ?? Accessibility support must include what a PwD can use content for the understood purpose.  I must generate support.  Into platforms / UA /Hardware
  • How should personal needs for users with disabilities be coordinated between content and user agents?
  • How should we drop accessibility supported?
  • How might users who do not use AT be given accessibility support
  • How might accessibility support for personalization be built into user agents?
  • How might basic Accessibility support features be built?
  • Complex interactions of content / browsers / AT must be resolved.
  • Wayne’s paper on Accessibility Support (IMG_2048 and 2049)
  • Every requirement should be tied to (and easy to understand by) the role of the People accountable for it and the Needs of people who depend on it.  All people should be able to ENTER thru their NEEDS and responsibilities.
  • HMW prioritize without giving disabled people less
  • HMW (re: difficult to read) have standards that reflect the end result of a11y — i.e. are inclusive in design.
  • HMW have layers of guidance by role
  • HMW Split out technology from Guidance? Core principles.
  • HMW Recruit editors to translate existing guidance to plain language
  • HMW give expectations and instructions
  • HMW create test files for assistive tech
  • Strictly Testable
    • User centered spec
    • Fuzzy variable spectrum of PwD failure
    • Personalization that isn’t useless to developers
    • If it can’t be tested, it is useless to developers
  • HMW get buy in from other standards?
  • HMW create a forum for exchanging information about new technology?
  • HMW get early feedback from PWD for emerging technology?
  • HMW get all PWD paid for money to do testing?
  • HMW make AT ready to accept new technologies?
  • HMW make ourselves aware of new technologies?
  • HMW integrate a11y from the beginning?
  • HMW increase empathy?
  • HMW make accessible technology aspirational/”sexy”?
  • HMW get people to understand the flags/ramifications of accessibility?
  • HMW define success criteria more as a UX instead of a tech solution?
  • HMW decide a process for creating SC for a new technology?
  • HMW frame the guidelines around the process for writing new SC?
  • HMW determine when new tech does not fit existing SC?
  • HMW build the next generation of assistive technology?
  • HMW be inspired by other code/linking(?) tools and provide more flexibility?
  • HMW make the conformance structure amenable to innovation?
  • HMW strike a balance between comprehensiveness and concision?
  • HMW cross pollinate between different parts of specification and allow things to have more than one home?
  • HMW move forward and acknowledge standard might be wrong without invalidating it as a standard?
  • HMW make public participation easier and more democratic?
  • HMW be inspired by MDN?
  • HMW make intent of each SC clear?
  • HMW be inspired by web specification style?
    • Normative
    • Non-normative
    • Informative
    • Examples
  • HMW “socialize” the guidelines?
  • HMW make the contribution process more visible?
  • HMW use GitHub in this process?
  • HMW take inspiration from other democratic processes/standards orgs(?)?
  • HMW determine when a decision gets made?
  • HMW make the judicial process more democratic?
  • HMW structure to scale to new additions like COGA (+ ~ 70 SC)?
  • HMW structure guidelines to be consumable by other tools?
  • HMW make testing more deterministic? (Pass/Fail)
  • HMW show a11y is more impactful to the business than they think? Cheaper?
  • HMW present guidelines as a resource for good design rather than rules to obey?
  • HMW show how these guidelines improve the experience of real, non-theoretical people?
  • HMW get these guidelines into a state easily consumable by tooling?
  • HMW present the guidelines in a more digestible way?
  • HMW get tech people clear on a11y principles to fill gap before specific guidelines can be written?
  • HMW let people decide what is important to them?
  • HMW provide potential strategies for integrating a11y into project process?
  • HMW allow community members to contribute if they do not feel like “experts”?
  • Parse and chunk content in a way that supports scanning / reading in multiple ways?
  • Find a global source / standard for “plain language”?
  • Find a global source for statistics on each human problem?
  • Assign a date to individual success criteria?
  • Keep techniques open and not fixed?
  • Reframe criteria as opportunities in support of problems?
  • Place more emphasis on criteria currently designated as AAA?
  • Identify and measure an individual preference?
  • Keep multiple expert opinions in scope?
  • Design onboarding to specs for multiple roles and disciplines?
  • Tag content in a way that best supports humans and machines for search?
  • Add accessibility to the patent process of new technology (content / assistive)?
  • Create a system that enables content tech to make itself known to those that may need it?
  • Make testing and validation contextual to a need and not a criteria?
  • Use individual judgments as a voice of participation and contribution?
  • Create a unit of measure that is not a binary pass / fail?
  • Use the new ACT Rules doc format for human and manual testing?
  • Eliminate the need for AT by greater feature / support of user agents?
  • Have video players handle captions (not intimidating)?
  • 3D flythrough version of WCAG with person as aliens in cool adapted smart houses?
  • Getting an editorial voice in plain language? Do that later.
  • A/B testing descriptions?
  • Sprint too short for complex but necessary step (merge another problem space).
  • Singular problem.
  • Dry explanation of cause.
  • Need acceptable implementations.
  • Knowledge base (group tag)
  • Build a corpus of test results to mine and review
  • Describe broad base and when experts disagree
  • Describe levels and testability
  • Have human testing determine conflicts with other tests
  • Include problems that don’t fit a success criteria
  • Standardise understanding of effects of disabilities
  • Human testing (group tag)
  • How (group tag)
  • Report it using simple language
  • Get the same results irrespective of who is testing
  • Test setup environment and requirements
  • Ensure tests cover the whole WCAG
  • Have testing tools that are accessible
  • Write the success criteria
  • Support testing where the tester cannot do it alone
  • Use the new ACT Rules Format for human testing
  • Ensure experts and PwD as experts are certified?
  • Create an alternative to certification to qualify humans for testing?
  • Ensure PwD are part of testing?
  • Resolve differences between peers?
  • Create a peer review model that includes PwD?
  • Difficult to get started: “on-ramp”, “roles”, findability, clearer
  • HMW meet everyone where they are. Something I can do and understand in my context
  • HMW solve  the simple problems like Alt text, tables, etc.
  • HMW make sure to connect to the goals from the start so people know why
  • HMW teach the fundamentals in the right way/right time
  • HMW teach the basic things with support/training material
  • HMW progress from fundamentals to more complex issues
  • HMW fundamental module that covers basics/tools/ etc.
  • HMW make using Silver a good experience
  • Flexibility: Adapt to new tech and new barriers to PwD
  • Flexible Process
  • Governance: Easier updating, learn from others, faster response to new demands
  • HMW have a more flexible view of conformance. “Substantially meets”
  • HMW get Silver to connect meaningfully
  • HMW Define the requirements for WCAG so they are both clear and flexible
  • Publish conformance not in VPAT
  • HMW avoid being so picky about the small things
  • HMW focus WCAG on the lived user experience instead of the tech requirements
  • HMW focus on tasks not pages
  • How we set pass/fail levels
  • HMW test meaningfully
  • HMW get the lawyers out of the center of WCAG compliance
  • Ambiguity in Interpreting the Success Criteria – agreement about exp, level of abstraction, difficulty of learning success criteria
  • HMW be open to multiple possibilities
  • HMW embrace the ambiguity in performance
  • HMW let people decide what’s important to them
  • HMW Avoid the cul-de-sac of consistency.  (See CUE in usability)
  • HMW help testers understand users not tests.
  • HMW see a11y as UX for a wider range of people
  • HMW not focus so much on screen readers
  • HMW find connection to prefs for all and not just AT.
  • HMW create usability requests(?) not conformance.
  • HMW separate content conformance from user agent and AT provider (?)
  • HMW place needs of users first
  • HMW differentiate between removing barriers and optimizing experience
  • HMW prioritize barriers then enhanced experience.
  • HMW hold AT to the same standards as the “content”
  • HMW connect regs to guidelines and principles so they are testable in an understood context.
  •  Think about domain
  • HMW Be flexible about conformance depending on the context of use
  • Accountable
  • Text all the players: content/ UA/ AT