See also: IRC log
editor draft: http://www.w3.org/WAI/ER/conformance/ED-methodology-20131129
Disposition of Comments: http://www.w3.org/WAI/ER/conformance/comments-20130226
<shadi> https://www.w3.org/2002/09/wbs/1/WCAG-EM-20131129/
shadi: full editor draft - survey for public comments
... thanks to Vivian, Moe, and Kathy for their valuable copy edit comments.
... most new work in the editor draft is in the steps themselves, mostly in Step 4: Audit the Selected Sample
<shadi> http://www.w3.org/WAI/ER/conformance/ED-methodology-20131129#step4
shadi: Step 3 - the factors that influence sample size have been cleaned up and reorganized; content pretty much unchanged
... WCAG working group comments - we need approval from both the WCAG WG and the ERT WG
... needs comments from our group for fixes before publication - suggestions to reduce confusion and misrepresentation
<shadi> [[
<shadi> priority: [mild/medium/strong suggestion]
<shadi> location: (such as: "under Introduction heading, third paragraph")
<shadi> current wording:
<shadi> suggested revision:
<shadi> rationale:
<shadi> ]]
shadi: also add comments for fixing after publication, but make sure to clearly indicate those comments; use the 'priority' format
<ericvelleman> survey: https://www.w3.org/2002/09/wbs/1/WCAG-EM-20131129/
shadi: timeline - desire to publish before the end of the year - reading by the fireplace, lol
... timeline - feedback by Dec 13; resolve issues by Dec 17 for WCAG and Dec 18 for ERT, and then hopefully publish Dec 20
eric: agenda items addressed in Shadi's comments
shadi and eric: initial reactions?
<shadi> https://www.w3.org/2002/09/wbs/1/WCAG-EM-20131129/results
<Detlev> I would still like to have a little discussion of Step 5.d: Provide a Performance Score (Optional)...
mike: mainly editorial comments, not substantial issues
kathy: wondering where the info is on incorporating assistive technology into the testing
... using AT and the approach for organizations
<shadi> http://www.w3.org/WAI/ER/conformance/ED-methodology-20131129#step1c
<shadi> Step 1.c: Define an Accessibility Support Baseline
kathy: lots of discussion among federal agencies, states, and businesses about integrating AT into the testing protocol
shadi: is this topic appropriate for this document? include your thoughts in the survey
alistair: we need to indicate how to create a baseline, e.g., for accessibility support
... maybe this needs to be discussed with the WCAG WG
... there will be a lot of questions about this, but these can hold until after the public comments
<shadi> +1 to Eric's suggestion
Eric could add a comment in the public editor draft
shadi: if someone declares that JavaScript is needed on the site, is that an accessibility issue? How much do these issues relate to our mission?
kathy: offers to send her webinar link to the list
<Kathy> http://www.howto.gov/training/classes/use-assistive-technology-to-comply-with-section-508
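[For illustration: a minimal, hypothetical sketch of how the accessibility support baseline from Step 1.c might be recorded as data, so that evaluation reports can state which browser/AT combinations were covered. The structure and field names below are assumptions for this example, not part of WCAG-EM:]

# Illustrative sketch only: one hypothetical way to record the
# accessibility support baseline (Step 1.c). Names and structure
# are assumptions, not defined by WCAG-EM.
baseline = {
    "technologies_relied_upon": ["HTML", "CSS", "JavaScript", "WAI-ARIA"],
    "browser_at_combinations": [
        {"browser": "Firefox", "assistive_technology": "NVDA"},
        {"browser": "Internet Explorer", "assistive_technology": "JAWS"},
        {"browser": "Safari", "assistive_technology": "VoiceOver"},
    ],
}

# List the combinations an auditor would test against.
for combo in baseline["browser_at_combinations"]:
    print(combo["browser"], "+", combo["assistive_technology"])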
<shadi> http://www.w3.org/WAI/ER/conformance/ED-methodology-20131129#step5d
detlev: Step 5.d - performance score - wants to discuss this section - a per-instance score may be difficult to implement, and may not reflect the priority of that failure
<Detlev> http://www.dingoaccess.com/accessibility/accessibility-barrier-scores-2/
<Detlev> http://www.bitvtest.eu/bitv_test/intro/overview.html
detlev: there are different ways of creating scores, but as currently written this appears to be the only recommended approach
<Detlev> Per site and per page (pass/fail) seem fine to me!
shadi: didn't we try to put in a combined score, rather than multiple ones?
<Richard> Per site is the only one that works for me
mike: remembers discussing this, but the group wasn't sure which one to use; no closure from the group
... could we post the options in the resources section?
eric: remembers wanting to keep this section more general, but the group didn't come to a conclusion
... could do performance scores for the complete website, web page, web page state, or per instance - depends on the goal of the evaluation
detlev: criticality of the failure is very important
... wants a score that goes beyond pass/fail, but speaks to criticality
... wants to identify other approaches - later would be ok (maybe a note in the public editor draft)
... we don't want to 'outlaw' other approaches
richard: Levels A, AA, and AAA already provide a rough prioritization, but it's hard to specify critical issues until you get into the testing process
detlev: the methodology shouldn't define criticality, but the test score should reflect a way of assigning priority
<Detlev> +1 to Richard
richard: recommends not going into too much detail in this section
mike: suggests putting in a placeholder indicating that we want comments on this section
eric: everything is a draft, but we can add an editor note
... likes the idea of keeping this section more flexible, as suggested today
<Detlev> fine
shadi: per-instance scores become more subjective and involve weighting and other considerations; recommends dropping the per-instance score and keeping the per-website and per-page pass/fail scores
<Detlev> let's have a quick survey on this suggestion in this telco, Shadi!
<Detlev> +1
<Mike_Elledge> +1
<Liz> 0
<Kathy> +1
<agarrison> +1
<MartijnHoutepen> +1
<ericvelleman> +1
<MoeKraft> +1
+1
<MaryJo> +1
eric: we'll make this change in the doc before publishing the public editor draft
<shadi> http://www.csun.edu/cod/conference/2014/sessions/
shadi: meeting at the CSUN conference? this might coincide with the next public draft (the final public draft)