<scribe> scribe: Wilco_
JYM: Community effort would be better than having everyone do it on their own. Not sure how to proceed on that.
... Think we can start small, and have it grow. Make sure the setup we're building works, and that we agree on the approach.
... Either we reuse publicly available work, or we'd spend effort doing the testing.
... The first question for me is how we maintain an open source list of pages that we can use for checking.
... If we look at them, maybe there's a legal issue in picking a web page from a large company. Anonymising the content may be too difficult.
... If we create test pages it will not be real-world anymore.
... If we want to compare, we'd need an archived version of the page.
Carlos: One question: the pages are publicly available. If we store them privately and use those to test, but don't show the pages, is that a problem?
Wilco: Could we use the internet archive? That way we have pages already stored.
Helen: How would you choose? Would you maybe randomise?
... How do we avoid doing someone's testing for free?
JYM: Good question. If we make sure we check only one page per site, we can grab pages out of the WebAIM Million.
... If we take only a few pages per company it might not be sufficient.
Cliff: Is that a test on one or two rules at a time, or all the rules?
JYM: I think we'd have a full manual test of the page and run whatever we want against that.
Wilco: I don't think we can test manually ourselves. It's costly, but also if we test this ourselves we'd be comparing to our own work.
JYM: I agree, to start with it would be easier if we have pages with publicly available test results.
Cliff: Is it alright if we find a page we want to use and change the content?
JYM: Don't know
... A page may need to have its color changed, which changes the outcome of the page
Wilco: That's a bunch of work. It might be easier for us to start by just asking for public data
JYM: We might want to ask for reports in EARL
Wilco: WCAG-EM report tool outputs EARL
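[Aside from the editor: to make the EARL option concrete, here is a minimal sketch of a single EARL assertion serialised as JSON-LD, roughly the shape a report tool would emit. The subject URL, test identifier, and context URL are illustrative placeholders, not the WCAG-EM Report Tool's exact output.]

```python
import json

# Hypothetical minimal EARL assertion in JSON-LD.
# Property names follow the EARL vocabulary (earl:Assertion, earl:subject,
# earl:test, earl:result, earl:outcome); the URLs are placeholders.
assertion = {
    "@context": "https://www.w3.org/ns/earl#",
    "@type": "Assertion",
    "subject": {
        "@type": "TestSubject",
        "source": "https://example.org/page.html",  # placeholder page
    },
    "test": "WCAG21:non-text-content",  # placeholder test ID
    "result": {"@type": "TestResult", "outcome": "earl:failed"},
    "mode": "earl:manual",
}

print(json.dumps(assertion, indent=2))
```

Issue-level reports in a shape like this would be much easier to match against archived pages than aggregate counts or spreadsheets.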
Daniel: There are different ways to get the results; Excel is common.
... For legal, we could approach this from the perspective of taking a page at a specific date
JYM: I think this could work; we'd want to freeze the page. That also means we can start by taking a page as it was a year ago.
... We wouldn't be saying a page is currently good or bad; it's what it was at a certain date.
... Maybe we ask organisations for test data from a few years ago.
Wilco: We'd have to make sure to match the data with the page
Daniel: I'm not sure how the Internet Archive works. Does it copy everything exactly?
Wilco: I don't think it stores the exact page
JYM: We'd need to get all the assets to render the page
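[Aside from the editor: a sketch of how a page could be pinned to a date via the Internet Archive's public "availability" API, which returns the snapshot closest to a requested timestamp. Endpoint and response shape are as documented for the Wayback Machine API; note that archived snapshots rewrite asset URLs, so they may not reproduce the original page exactly, matching Wilco's caveat above.]

```python
from urllib.parse import urlencode

# Sketch: query the Wayback Machine availability API for the snapshot
# closest to a YYYYMMDD timestamp. No network call is made here; the
# canned response below mirrors the API's documented JSON shape.

def availability_url(page_url, timestamp):
    """Build the availability-API query URL for a page and timestamp."""
    query = urlencode({"url": page_url, "timestamp": timestamp})
    return f"https://archive.org/wayback/available?{query}"

def closest_snapshot(response):
    """Return (snapshot_url, snapshot_timestamp) or None."""
    snapshot = response.get("archived_snapshots", {}).get("closest")
    if snapshot and snapshot.get("available"):
        return snapshot["url"], snapshot["timestamp"]
    return None

# Canned example response (example.org URL is a placeholder):
canned = {
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20230101000000/https://example.org/",
            "timestamp": "20230101000000",
            "status": "200",
        }
    }
}
```

Matching a test report to the snapshot closest to the report's date would address the "freeze the page" requirement discussed above.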
Helen: Do we have anyone with a legal background who can advise us?
Daniel: I could try to find advice
Wilco: We'll need open test data, and publicly available pages.
Helen: If you look in VPATs, some have historical data.
... Some websites do publish their findings
... It's a question of knowing where to look.
Carlos: Some countries' monitoring bodies in Europe have published data
JYM: We'll need issue-level data, not aggregates.
Wilco: Not everyone reports every issue, and WCAG isn't explicit about when something is one or multiple issues
Helen: If it's a repeating pattern you may say it's one bug
Wilco: The Dutch government is collecting a lot of data, and require reports for higher status levels, I think A and B
https://www.toegankelijkheidsverklaring.nl/register
Helen: I've often done audits on development sites rather than live sites
Wilco: The granularity could be a real problem. The Deque study works because it used very granular data that was collected in a very consistent way. If we don't have that, drawing conclusions can be difficult unless we have a very large amount of data.
JYM: If we did the test ourselves, we could ask for the development version of the page, so they can fix it before it is published
... We'd have to do the audit ourselves, which would be time consuming.
Wilco: What about asking the Accessibility Internet Rally for their results?
JYM: That's probably a good place
Daniel: Carlos and I can talk to Sharron. We can probably put something in writing first, see what she thinks.
JYM: I probably still have the slide deck from Kristian
... I'll send that
Wilco: We can start asking people to share data with us.
... I think we need an up-to-date report.
Helen: Or they have a report with the pages attached
Wilco: We could send to WAI-IG to ask for reports
... I could also reach out to Dutch audit companies, ask if they'd share reports that they know will be published
JYM: Should we try to meet in 2 weeks again?
... I'll send out an invite tomorrow then
Wilco: I'd suggest one of the chairs sends out to WAI-IG
Daniel: I'll write a draft for that
Carlos: I'll e-mail Sharron today