W3C

- DRAFT -

Silver Task Force & Community Group

12 May 2020

Attendees

Present
jeanne, Chuck, ChrisLoiselle, JF, sajkaj, JakeAbma, Lauriat, Joshue108, bruce_bailey, CharlesHall, Makoto, Detlev, KimD, OmarBonilla, Rachael
Regrets
Chair
Shawn, jeanne
Scribe
ChrisLoiselle

Contents

Topics
1. Follow-up on action items from virtual face-to-face
2. Check-in with subgroups
3. Proposal from Jake on conformance
Summary of Action Items
Summary of Resolutions

<scribe> Scribe:ChrisLoiselle

Follow-up on action items from virtual face-to-face

<Joshue108> /me That is a marvellous flowchart..

Jeanne: Depreciation scoring was looked into. Please provide feedback asynchronously on the email thread summarizing your proposals.
... I will put information into a survey

<Zakim> bruce_bailey, you wanted to propose we draft survey Qs in wiki?

BruceB: Can we normalize the questions around the proposals?

Jeanne: Proposal questions will be around whether we are going to deprecate the scoring points.

<Lauriat> +1 to Bruce for consistency

BruceB: The wiki may be a great place to host these questions.

JF: Depreciation, not deprecation. Depreciation over time.

Jeanne: Can we put this on this week's survey? Please have the proposals in by Thursday for wording in the survey.

<JF> URL for the wiki page?

Check-in with subgroups

<CharlesHall> was a document created to contribute user needs to?

<Lauriat> https://www.w3.org/WAI/GL/task-forces/silver/wiki/Main_Page#Silver_Guideline_Content_.28Active.29

<CharlesHall> based on the FTF conversations and the ask for contributors

Shawn (to Makoto): Any update on Alternative Text?

Makoto: I'd like to make sure we are on the correct track. I'm using Markdown to write the content. I'll share the file later.

<Chuck> chris: We are meeting this Thursday to discuss the current status of Visual Contrast of Text. Next Tuesday we will have more info on next steps.

Jeanne: The SC for Clear Language has work that needs to be done; volunteers are welcome to contribute.

<srayos> @jan @ jeanne I would love to help with Clear Language. Please contact me.

Shawn and Jeanne: Headings - no particular name is associated with this group. Headings are usually worked on in a group as our demo / test examples.

Jake: I would like to work on that success criterion to address items I'd like to work on.

BruceB: Rather than just traditional audio description, are we trying to pursue the virtual reality path in the write-up of audio description?

<CharlesHall> i think the Xaur expands the methods

JF and Janina: We'd like to contribute to the virtual reality path as well.

<Joshue108> https://www.w3.org/TR/xaur/

JF: Traditional audio description is time-synced and pre-recorded.

BruceB: Should we also include the real-time / XR ways of providing audio description, etc.?

Janina: Maybe a sub call on this topic of XR and audio description?

BruceB: We should treat the XAUR approach as a main talking point.

<jeanne> https://www.w3.org/WAI/GL/task-forces/silver/wiki/Survey_on_depreciating_scores

Josh: JF, I think your perspective would be great on the XAUR

<JF> The MAUR (Media Accessibility User Requirements): https://www.w3.org/TR/media-accessibility-reqs/

XR Accessibility User Requirements = XAUR

XAUR covers user needs and requirements for people with disabilities when using virtual reality or immersive environments, augmented or mixed reality, and other related technologies (XR).

<Lauriat> WCAG to Silver Outline Map, to also potentially spark some thoughts on what to work on: https://docs.google.com/document/d/1aCRXrtmnSSTso-6S_IO9GQ3AKTB4FYt9k92eT_1PWX4/edit

Proposal from Jake on conformance

Jeanne: to Jake...

Jake: We talked about the conformance model and a way of getting there. I modified Jeanne's diagram / flow chart: Conformance Chart - WCAG 3 Silver Architecture - Jake https://docs.google.com/drawings/d/1BvgdjJGvv9mgscTKX4JJrbBCsl4SClKnILfNtd3EHoo/edit

<bruce_bailey> Flowchart looks great! Thank you @Jake for this work.

Testing should be based on two different pillars: a technical / absolute way of testing, and testing against benchmarks.

On the right-hand side of the chart there is a totally different challenge. COGA lives within this area. This area is relative: a baseline, then a benchmark based on methods, then a total score. There is the ability to combine both technical and usable into a total score, but they can be kept separate as well.

Technical is based on qualitative testing.
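
A minimal sketch of how the two pillars described above might roll up into a total score (purely illustrative; the interface names, 0..1 scales, and even weighting are assumptions, not anything the group has agreed):

  // Hypothetical sketch in TypeScript: one score per pillar, plus an
  // optional combined total that can also be reported separately.
  interface PillarScores {
    technical: number; // 0..1, from absolute tests against technical rules
    usable: number;    // 0..1, from relative, benchmark-based testing
  }

  // The weighting is an open question; an even split is only a placeholder.
  function totalScore(scores: PillarScores, technicalWeight = 0.5): number {
    const usableWeight = 1 - technicalWeight;
    return scores.technical * technicalWeight + scores.usable * usableWeight;
  }

  // Example: report both pillars on their own and as one combined number.
  const page: PillarScores = { technical: 0.9, usable: 0.7 };
  console.log(page.technical, page.usable, totalScore(page)); // 0.9 0.7 0.8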

JF: Where are the 7-9 functional user needs?

<CharlesHall> i would add that the usable column is BOTH quantitative and qualitative

Jake: That would be on authors; however, our baseline is based on our knowledge of those users, i.e. COGA, Low Vision, etc. Functional needs then test against functional outcomes.

JF: How do you know what the functional need is if we don't know what the activity is?

<bruce_bailey> I am not clear why qualitative is strongly associated with Technical and quantitative with Usable. That seems reversed to me.

JF: Different websites will have different functional outcomes. We as a working group won't know what every website is for?

<jeanne> Bruce, Jake explained it to me yesterday; it is based on the number of testers needed to provide a baseline -- for technical, you don't need multiple testers. To create a usable benchmark, you need a lot of testers to provide a baseline.
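
As a hedged illustration of that difference (the function name and the simple averaging are assumptions, not an agreed method): a technical check can come from a single pass, while a usable benchmark could be derived from many testers' ratings, for example:

  // Hypothetical: derive a usable baseline as the mean of several
  // testers' ratings on a 0..1 scale.
  function usableBaseline(ratings: number[]): number {
    if (ratings.length === 0) throw new Error("need at least one tester");
    return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  }

  // e.g. five testers rating "heading is descriptive"
  console.log(usableBaseline([0.8, 0.6, 0.9, 0.7, 0.8])); // 0.76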

Jake: From the functional needs on the technical side, we know what the functional outcomes may be. A benchmark then needs to be created, i.e. top tasks, certain processes.

We have benchmarks already in some places, ACT rules for headings for example. "Heading has an accessible name" sits on the technical side of the chart; "heading is descriptive" is a more usable test vs. a technical one.
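
As a sketch of that split (illustrative only; these are not actual ACT rule implementations, and the function names and simplified name computation are assumptions): the accessible-name check can be automated, while descriptiveness is a human judgment recorded against a benchmark:

  // Technical side: automatable check that a heading has a non-empty
  // accessible name (greatly simplified: aria-label or text content).
  function headingHasAccessibleName(heading: HTMLElement): boolean {
    const name = heading.getAttribute("aria-label") ?? heading.textContent ?? "";
    return name.trim().length > 0;
  }

  // Usable side: whether the heading is descriptive cannot be decided by
  // code alone, so it is captured as a human judgment.
  type Judgment = "descriptive" | "not descriptive";
  function recordHeadingJudgment(heading: HTMLElement, judgment: Judgment) {
    return { heading: heading.textContent?.trim(), judgment };
  }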

JF: How about hierarchical structure of headings? Where does that live?

Jake: ACT rules can be created based on benchmarks and include the functional needs of users. A baseline based on functional needs is necessary. We can only create from functional needs but won't know all functional outcomes; that would be up to the author.

<CharlesHall> functional outcomes that are task specific and outside of any benchmark provided (by wcag3 or ACT or EM) could simply be another block in this flow chart between scope and baseline

Detlev: Benchmarking is similar to what WCAG has done already, i.e. operationalizing WCAG. Take 2.4.6 Headings and Labels: we have a data point, but does it define what it does? Technical vs. usable. My understanding is that benchmarking is defined around a task. On that level, if a user can't log in, there are a number of steps the user would need to go through to get to their outcome of logging in.

Jake: A top task, for example, would be logging into a website.
... For the usable part, logging in is the path you'd go through vs. a full-page analysis. Benchmarks are larger chunks of tests.

<Zakim> Lauriat, you wanted to question the existence of top tasks in relation to scoring. How do differently shaped apps fit into this?

Shawn: For top tasks, I thought benchmarking was geared toward granularity. How do differently shaped apps fit within this scoring?

Jake: I think Jeanne talked to precedent: start with the most generic tasks, then move toward granular / specific tasks. Subtasks may be a section of a task.

<Detlev> The benchmarking of a task like login (including a password reset) would capture an aggregate of atomic parts (like descriptiveness of labels, accessible name available, keyboard operational, etc.). So the question regarding functional outcome would be, across user a11y needs and the requirements 1-9 derived from these, is the user (1-9) able to reach the functional outcome (i.e. complete the task)?

Jake: Not just testing the happy flow; it has to be the entire flow / task.
... The end result of 4 tasks will show a score against each of those tasks. How many top tasks can we come up with, say filling in a form, reading a blog post, etc.?
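
A rough sketch of this task-based scoring (the task names, atomic checks, and the simple pass-ratio formula below are all hypothetical, drawn from the discussion rather than any agreed model):

  // Hypothetical: a top task aggregates the atomic checks along its flow.
  interface AtomicCheck {
    name: string;      // e.g. "label is descriptive", "keyboard operable"
    passed: boolean;
  }

  interface TopTask {
    name: string;      // e.g. "log in (including password reset)"
    checks: AtomicCheck[];
  }

  // Score one task as the fraction of its atomic checks that pass.
  function taskScore(task: TopTask): number {
    if (task.checks.length === 0) return 0;
    return task.checks.filter(c => c.passed).length / task.checks.length;
  }

  // Report a score against each top task, e.g. the four tasks mentioned above.
  function scoreAllTasks(tasks: TopTask[]): Record<string, number> {
    const result: Record<string, number> = {};
    for (const t of tasks) {
      result[t.name] = taskScore(t);
    }
    return result;
  }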

JF: The concern is we need to determine all the failure paths and that comes back to scoping. How is that a known vs. an unknown?

Jake: We'd need to state somewhere that completing a process is a top task, but we won't talk to a specific process. We should talk to what should be accomplished in a process, i.e. going back and forth in a multi-step, abstract flow of completing a process.

The top 20 mistakes, for example for filling out forms, would be what we'd base benchmarks on.

We can send surveys to user groups to help us build out the benchmark top tasks.

<JF> +1 to Detlev

<CharlesHall> i think we need a phrase like “representative tasks” for these benchmarks, because there will always be a gap

Detlev: Tasks being part of testing is a good way of working through this. On the other hand, different situations arise during testing: 50 criteria across all pages, pass/fail. Once you take tasks into consideration, variability comes into play. Variation, such as in filling in forms, and complexity arise.

<Lauriat> +1 to Detlev

E.g. skip links: what are the sections marked up as landmarks? Not being prescriptive there, then extending to tasks, which may or may not be similar, would extend the testing framework. I.e. what is successful completion?

<CharlesHall> the part that is consistent across all tasks is that there is a start, a middle, and an end, and that the end is the functional outcome.

<JF> @CharlesHall - so is the middle... if I cannot get to the 'end' then it's a failure as well

Jake: UX (User Experience) has done this for many years. There are industry standards against each market, and other companies all have benchmarks. I think it can be done. How to test a web page, on web success criteria. We also talk to why it is a problem, which is the usability issue on top of the accessibility failure. Best practices come into play.

Usable paths could talk to best practices. Technical review would follow a WCAG-EM methodology.

<Detlev> agree that a lot of task-based stuff is already implicit in current testing

JF: Where do the numbers start plugging in on the Conformance Chart - WCAG 3 Silver Architecture - Jake (https://docs.google.com/drawings/d/1BvgdjJGvv9mgscTKX4JJrbBCsl4SClKnILfNtd3EHoo/edit)?

The complex testing structure and final score may need to be reviewed further.

<Zakim> bruce_bailey, you wanted to ask for 5 min to discuss wiki page for survey

BruceB: The conformance scoring needs to be reviewed. Am I correct about what we are pointing out to the parent group?

Jeanne: We can include the adjectival scoring in the survey, but may need to work that out a bit more. The depreciation score has been talked about in various ways. I think sending that to the parent group for comments is worthwhile.

Makoto: ALT Text: Latest Draft of Guidelines Explainers and Methods (markdown files: work in progress) https://www.dropbox.com/s/cej6axgmga7d2pj/ALTtext_GuidelinesExplainers-Methods_May12_2020.zip?dl=0

<Lauriat> Thanks, Makoto!

<Lauriat> Thanks, Chris!

<Makoto> Thanks, Chris!

<Detlev> Jake if you are still on the call, shall we discuss whether to sketch up a task tree?

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2020/05/12 14:31:17 $
