Wilco: So we left off last week with a discussion on the manifest
<Wilco> https://github.com/w3c/wcag-act/issues/194
Wilco: I think the bit where we left off is whether we want to include test cases
... I was cautious about test cases
... Shadi was pro test cases
Wilco: We also talked about this one last week, but didn't come to a conclusion
... I think we have generally agreed that we need at least four things from an organisation when they submit a rule
... what we left unresolved is whether we want a structured format for the test cases
Shadi: What we concluded last week is that we need to find out which metadata we need for the test cases. Wilco suggested just the pass/fail information in the test case name. What I understand from Alistair's email is that we need much more data, e.g. on the test environment
... It seems that we need to find a balance between not having to write up too much metadata versus being able to process things more automatically
Wilco: I think we should try to keep our test cases as simple as possible, and not overcomplicate things before we know we need them
<Kasper> We're having some network issues
Alistair: Level Access want to be open and provide our test cases to the world
... We need an interface so others can hook up to our test cases. The only reason not to do it is if we want a central repository instead, and we've seen how that approach works with the techniques
... if we put our HTML out but don't specify which browser we ran it in, other people could test the same content and get completely different results
... I have tried shoehorning our cases into your format, since you have already started using it. I want to add some non-breaking changes to it
Stein Erik: Reading this email, it seems that it will impact some of the other agenda items, so we might want to take it up now
Wilco: So you want to say which browser your test case will run successfully on?
Alistair: I just propose adding to the top of the test case some information about the environment it was run in
Wilco: Seems fine to me. Should this be required?
Alistair: If this is going to be something others use, especially big organisations doing large-scale monitoring, then this is something we need
Wilco: I'm happy with the changes you propose, but I think it requires a more thorough look
Alistair: The ideal solution would be to just have an outcome box that displays an array
... these are the outcomes and these are the things we are testing
... then you would always have an outcome field that you would have to fill in
... I just want to get it out there. I proposed this in December, and it's now April, so I want to get something out as non-breaking changes
... I will put it out on the wiki
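[Scribe note: no field names or schema were agreed in the meeting; the following is a minimal sketch of how Alistair's two proposals (environment information at the top of a test case, and an outcome array) might look. Every name in it is hypothetical.]

  // Hypothetical sketch only: combines the two proposed non-breaking
  // additions to the test case format, environment information and an
  // array of expected outcomes. Field names are not an agreed schema.
  interface TestCaseMetadata {
    // Environment the test case was authored and verified in
    // (Alistair's proposal to put this at the top of the test case).
    environment: {
      browser: string;   // e.g. "Firefox 59"
      platform: string;  // e.g. "Windows 10"
    };
    // The proposed outcome array: what is being tested and the
    // expected outcome for each target, so an outcome field is
    // always filled in.
    outcomes: Array<{
      target: string;                               // e.g. a CSS selector
      outcome: "passed" | "failed" | "inapplicable";
    }>;
  }

  // Example instance for an imaginary test case.
  const example: TestCaseMetadata = {
    environment: { browser: "Firefox 59", platform: "Windows 10" },
    outcomes: [{ target: "#logo", outcome: "failed" }],
  };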
Wilco: I think that was the only thing left to discuss on these formats
<Wilco> https://github.com/w3c/wcag-act/issues/195
Wilco: So the suggestion is to add test case metadata, which would be JSON. The issue is updated. I will put out a proposal for us to look at
<Wilco> https://github.com/w3c/wcag-act/issues/194
Wilco: The idea behind the manifest is that implementors write one file, declaring for their implementation which rules and versions they support, and for each rule whether it's implemented as manual, semi-automated or automated
... and the question is whether we need more information from the implementors
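[Scribe note: the manifest format is still under discussion; a minimal sketch of the shape Wilco describes, with all field names hypothetical, might be:]

  // Hypothetical sketch of the per-implementation manifest described
  // above: one file listing the supported rules and versions, with a
  // test mode for each rule. Names are assumptions, not an agreed schema.
  type TestMode = "manual" | "semi-automated" | "automated";

  interface ImplementationManifest {
    implementation: string;  // e.g. "Example Checker Engine"
    version: string;         // version of the implementation
    rules: Array<{
      ruleId: string;        // identifier of the ACT rule
      ruleVersion: string;   // version of the rule implemented
      mode: TestMode;        // how the rule is implemented
    }>;
  }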
Shadi: I would add the test mode for each test case
... We can discuss whether the test mode is per rule or per test case
Wilco: I'm hesitant because then we have to add unique identifiers for each test case, whereas now we can just add a little snippet of code
Shadi: I think the URL could serve as an identifier
Wilco: Only partially, because a page can have more than one pass or fail case on it
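[Scribe note: the identifier question was left open; one hypothetical way to disambiguate several cases on one page would be a fragment per case, e.g.:]

  // Hypothetical: a bare page URL is only a partial identifier when one
  // page holds several pass and fail cases; a fragment per case is one
  // conceivable workaround. URLs here are made up for illustration.
  const testCaseIds = [
    "https://vendor.example/rules/example-rule/cases.html#passed-1",
    "https://vendor.example/rules/example-rule/cases.html#failed-1",
  ];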
Stein Erik: Isn't the whole idea of test cases to show how you parse them, to demonstrate that you have implemented the test correctly?
Alistair: I agree, for every test we need test cases
Wilco: Do we need organisations to provide information showing that they handled all test cases correctly?
... then we would all have to create some kind of automation to generate this manifest
Alistair: Having a manifest like this is the only way to know that all test cases are being looked at. Otherwise, there is a risk that you forget to look at something. I would recommend having a list that ensures that we have looked at everything
Stein Erik: To Siteimprove this is the central reason to participate in this work, and if our CEO asks why we are spending time on this, this would be one of the arguments
<shadi> +1 to SteinErik -- need to find the right balance
scribe: I think demonstrating the handling of test cases is one of the central points of this work
Wilco: I don't know if these records should live with the W3C, as part of the Rules repository
Shadi: The implementation is created and maintained by the vendors, and then linked to from the repository
Alistair: You could just update it automatically
... We would be willing to provide this to show off what our tool can do
Wilco: Do we want this website to start pulling in data from different sources?
... I was thinking that organisations write up this file, and then create a pull request to get the file into the repository
... If we just want to load this data on the fly, we would need another way to do it. If this is the way we go, it's probably easier to just do it test case by test case
Alistair: It sounds like a pain to have a file in your GitHub that we have to update all the time, but you could just pull it in using an API call
Shadi: I'm just thinking about having a plain URL that is easy to check for security, and in JSON
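[Scribe note: a minimal sketch of consuming a vendor-hosted manifest from a plain JSON URL, as Shadi suggests. The URL and the manifest shape are hypothetical:]

  // Hypothetical sketch: load a vendor-maintained manifest from a plain
  // URL instead of via pull requests into the central repository.
  interface Manifest {
    implementation: string;
    rules: Array<{ ruleId: string; mode: string }>;
  }

  async function loadManifest(url: string): Promise<Manifest> {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Manifest fetch failed: HTTP ${response.status}`);
    }
    return (await response.json()) as Manifest;
  }

  // Usage: the aggregating site pulls each vendor's manifest on the fly.
  loadManifest("https://vendor.example/act-manifest.json")
    .then((manifest) => console.log(manifest.rules.length));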
Alistair: This is the kind of thing that you would include in your VPAT
Stein Erik: I agree with Wilco that if this has to be maintained through pull requests, having test cases in there is too much
scribe: this is all voluntary, so they can just choose not to share it
Shadi: What is the reason for doing it through pull request?
Wilco: Just for simplicity, to avoid caching etc. And what if one of the sources is down?
... We need volunteers to build this...
Alistair: So you want a centralised set of test cases? That is a nightmare to maintain...
... I think you will end up with a big blockage if you try to have a centralised set
Wilco: The idea is that we all have our own test cases, but that each rule has test cases that demonstrate a basic implementation
Alistair: The techniques are a demonstration of what you get when you try to centralise it
Shadi: That's why we have taken it out of this group and moved it to decentralised work in a community group, from where it can come back for approval
Alistair: You are not putting a lot of faith in voluntary work
Shadi: We hope that tool vendors will be interested in bringing forward their test cases to brag about what they can do
Alistair: The last few weeks we have corrected 10 test cases based on feedback. There's no way we can push that through an old-fashioned waterfall model
Wilco: You don't have to bring everything through
Wilco updated the issue in GitHub
<Wilco> Conclusion: We want organisations to say which test cases they passed. We want organisations to manage their own manifest at a separate URL.
Romain: I have another question for the manifest: Do we want to have a manifest per tool or per checker engine?
Wilco: I'm interested in checker engines, not tools
Romain: Some tools could tweak the results from the checker engine; we should think about that. On the one hand you will have very similar manifests, but you might miss some things if you only do it per engine
Shadi: My take is to leave it open. Some checkers will not modify the results of the engine, then they can just take the manifest from the engine. If you are modifying the results, you might want to create your own manifest
Wilco: The idea with the manifest was to prove that we have different implementations. I don't care if they run in different places, or even if they have been tweaked
Alistair: I would like to see reports on which tool is being used, not which engine
Stein Erik: There are tools that combine results of multiple engines
Romain: But as Wilco says, the ultimate goal is to show that there are enough implementations. It would be rigging the game to report more implementations when, under the hood, they are the same
Alistair: I suggest you include details about both engine and tools in the environment data
Stein Erik: The engine is the primary thing, with the tool as environment data
Alistair: I would definitely add platform and browser too
Stein Erik: The discussion could also involve screen sizes etc.
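[Scribe note: a hypothetical sketch of the environment data discussed at the end, recording the engine as the primary subject with the tool, platform, browser, and screen size alongside it. None of these names were agreed:]

  // Hypothetical sketch only: environment record for a manifest entry,
  // with the engine primary and the wrapping tool, platform, browser,
  // and viewport captured as environment data. All names are assumptions.
  interface TestEnvironment {
    engine: { name: string; version: string };    // the checker engine
    tool?: { name: string; version: string };     // tool wrapping the engine
    platform: string;                             // e.g. "Windows 10"
    browser: string;                              // e.g. "Chrome 65"
    viewport?: { width: number; height: number }; // screen size, if relevant
  }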