See also: IRC log
<azaroth> trackbot, start meeting
<trackbot> Meeting: Web Annotation Working Group Teleconference
<trackbot> Date: 02 September 2016
<azaroth> Chair: Tim_Cole, Rob_Sanderson
<TimCole> Meeting: Web Annotation Working Group Teleconference
<ShaneM> working on it
<ivan> scribenick: bjdmeest
TimCole: Let's get started
... first, we'll talk about the exit criteria of CR
... then, about extending the WG to get through CR, PR..
... then, we'll talk about testing
... other topics?
<TimCole> PROPOSED RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/08/26-annotation-minutes.html
<azaroth> +1
<ivan> +1
<TimCole> +1
<Jacob> +1
+1
<takeshi> +1
RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/08/26-annotation-minutes.html
azaroth: we had a request
... we should publish the exit criteria
... that's required
... we have done that
... there are new versions of the 3 specs (each with an appendix about the exit criteria)
... implementations of the model; demonstrations that the vocabulary is internally consistent and can be used to go from JSON-LD to JSON
... for the protocol, 2 implementations of all the interactions
... retrieving an annotation, deleting, etc...
... they will be republished on 6th of September
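As background for the model exit criteria above, a minimal annotation of the kind an implementation would submit might look like the following sketch (Python, standard library only; the id, body, and target values are illustrative, while the @context URL is the one the Web Annotation model defines):

```python
import json

# A minimal Web Annotation (illustrative values; the @context URL is the
# one defined by the Web Annotation Data Model).
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno1",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "A comment on the target",
    },
    "target": "http://example.org/page1",
}

# Serialized form, as one would paste into the test runner.
print(json.dumps(annotation, indent=2))
```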
ivan: we also wanted to link to the test cases themselves, but they are not clearly available yet
... everything is done, the publications are checked, they will be published on Tuesday
... that's that for CR
TimCole: we are trying to do an extension request to extend the WG to get through CR and PR
ivan: I gave Ralph(?) an overview
... we hope to be able to cover all the exit criteria by the end of October
... that's one month extra
... that, plus the problem of Christmas in the middle
... my pessimistic deadline would be to publish the Recommendation by the end of January, so I asked to extend until the end of February
... hopefully, we will get it
... in any case, the more we can show as readiness, the better
... we should get initial implementation reports on our pages
... they don't need to be complete
... but at the moment, the reports are placeholders
... if we have (partially) tested implementations (e.g., Rob's, Benjamin's)
... showing them is critical
... ideally by next week, realistically by the week after
TimCole: test reports will show up, preferably next week
ivan: they will look at those test reports, as they are in the CR documents
ShaneM: about results: I can now merge to the repo
<TimCole> https://github.com/w3c/test-results/pulls
ShaneM: I will push results for our implementation, right now
<TimCole> https://github.com/w3c/test-results/tree/gh-pages/annotation-model
TimCole: there's a W3C test results repo on GitHub
... there's a small typo: for ==> fork
... There's an open pull request
TimCole: Model testing:
... we have about 100 assertions covering body, target, ..
... I need to add a separate folder for specificResource
... those are in the test-dev repository
... you can now use those tests
... you go to the w3c test site
... you input annotations
... you get reports
... those reports, you can add using a pull request to the test-results repo
ivan: what ends up in the test-results/implementation reports is a set of JSON files?
<TimCole> https://github.com/spec-ops/wptreport
ShaneM: that and a report
TimCole: the current report doesn't mention the implementation, though you do know who did the pull request
<ShaneM> CH53.json
ShaneM: as a convention, the result file is named after the implementation and its version
... I just asked for that on the current pull request
ivan: all implementers we currently have, should get some kind of name?
ShaneM: whatever name makes sense is fine
... I'll modify the instructions so that is clear
TimCole: the downloadable portion of the generator requires two characters and two numbers for the .json file name
ShaneM: apparently yes
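The naming convention just discussed can be sanity-checked with a small sketch (the pattern of two letters followed by two digits is an assumption inferred from the CH53.json example above, not something the tooling is known to enforce):

```python
import re

# Pattern assumed from the CH53.json example: two letters for the
# implementation, two digits for the version, ".json" extension.
FILENAME_RE = re.compile(r"^[A-Za-z]{2}\d{2}\.json$")

def is_valid_result_name(name: str) -> bool:
    """Return True if a result file name matches the assumed convention."""
    return bool(FILENAME_RE.match(name))

print(is_valid_result_name("CH53.json"))     # expected: True
print(is_valid_result_name("results.json"))  # expected: False
```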
<Zakim> azaroth, you wanted to discuss names
azaroth: is it possible to have additional information about the things with names?
... e.g. a link for every implementation? a registry?
ShaneM: we can put that in the readme
TimCole: the pull requester could add extra files, no? Then we could tell them what extra information we want
ivan: does the report make an automatic count, i.e., how many implementations per test, for the CR, or do we have to create that afterwards?
ShaneM: it creates a separate report
... if we want to make changes we can, but I don't want to change the
environment too much
... there are other players in the field
TimCole: we have about 45 assertions that we expect every annotation to pass, the MUSTs
... and then we have about 100, which are designed to catch optionals
... so, if someone only implements an optional body and a simple target, it seems as if they fail a lot of tests (the optional target tests)
... can we catch that some way, explain to people that they don't 'fail' as much as it seems?
ShaneM: this is a meta-conversation about what to do about optional features
<azaroth> +1 to that reduction
TimCole: I reduced the tests a bit, e.g. for text direction, it doesn't depend on which type of body, so that helps a bit
ivan: how do we do the testing and reporting on the vocabulary?
ShaneM: by hand
<TimCole> for example, we may decide not to consider each kind of selector a separate feature requiring testing; this would reduce the number of tests.
ShaneM: we take a template that looks like the current report, and fill in the rows
ivan: we need to decide which validation tools we use
... for RDF vs. JSON
azaroth: there are tools: the Python RDFLib, and the JSON-LD tool from Digital Bazaar
ivan: what would be the other independent toolset?
... what's the situation with json-ld tools?
azaroth: it has implementations in most languages
... Ruby is pretty good, also for RDF
ivan: maybe we can ask Greg? from a JSON-LD POV, he would be a logical choice
azaroth: what about JavaScript-based tools?
ivan: RubenVerborgh has a lot of JavaScript tools
... if he could run those few tests via his toolkit
... then we have 3 mature toolsets
... azaroth, can you ask Greg?
azaroth: yes
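The kind of consistency check being discussed, going from JSON-LD to plain JSON, can be sketched with the standard library alone; a real run would use one of the toolkits named above (RDFLib, the Digital Bazaar tools, or a Ruby/JavaScript processor), and the keys checked here are assumptions based on the annotation model:

```python
import json

def looks_like_annotation(doc_text: str) -> bool:
    """Rough structural check: parse as JSON and verify the keys the
    annotation model requires are present. A real check would use a
    full JSON-LD processor rather than plain key inspection."""
    doc = json.loads(doc_text)
    return (
        doc.get("@context") == "http://www.w3.org/ns/anno.jsonld"
        and doc.get("type") == "Annotation"
        and "target" in doc
    )

sample = (
    '{"@context": "http://www.w3.org/ns/anno.jsonld",'
    ' "type": "Annotation", "target": "http://example.org/page1"}'
)
print(looks_like_annotation(sample))  # expected: True
```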
ShaneM: I don't care how you provide them; we just need to input them into the HTML file
<TimCole> http://w3c-test.org/tools/runner/index.html
ShaneM: we need implementations for testing the annotation model
TimCole: two parts of the question
... could you generate annotations conforming to the annotation model?
... if so, could you input that JSON-LD in the test runner, generate the JSON file test results, and do the pull request?
nickstenn: I'm not sure our client will spit out the correct JSON-LD in the near future
... but our server could render them as JSON-LD
... I'm very happy to test those using the test runner
tilgovi: if it's important to have client-side JavaScript that generates conforming JSON...
TimCole: you have to do one annotation at a time
tilgovi: I'll have a look at that
<ShaneM> Updated result reporting instructions at https://github.com/w3c/test-results/tree/gh-pages/annotation-model and https://github.com/w3c/test-results/tree/gh-pages/annotation-protocol
TimCole: it's important to have test results published
bigbluehat: about protocol testing: it's about exercising a server, and exercising a client
... there's a pull request pending
... there is one test: you give it the URL to your annotation server, and a URL to one annotation in that server
ShaneM: I've only ever run that against the basic Python server
... HTTPS is a SHOULD, and the Python server doesn't implement that
... about client-side protocol testing
... there are basically no requirements
... I found one about sending a Prefer header for a certain use case, but that doesn't really have anything to do with the client
azaroth: because HTTP doesn't require a specific format, and we don't extend HTTP, there are no testable assertions for the client
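For reference, a retrieval request of the kind the server tests exercise can be sketched with the standard library; the request is only constructed here, not sent, the URL is a placeholder, and the Accept value (JSON-LD with the anno.jsonld profile) should be checked against the protocol spec rather than taken as authoritative:

```python
import urllib.request

# Construct (but do not send) a GET for a single annotation, with the
# content-negotiation header the Web Annotation Protocol describes.
# URL is a placeholder; the Accept value is assumed from the protocol's
# JSON-LD profile and should be verified against the spec.
req = urllib.request.Request(
    "http://example.org/annotations/anno1",
    headers={
        "Accept": 'application/ld+json; '
                  'profile="http://www.w3.org/ns/anno.jsonld"',
    },
    method="GET",
)

print(req.get_method())
print(req.get_header("Accept"))
```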
ShaneM: I would like someone either to test against a server themselves, or to give me links to a server and I'll run the tests
ivan: so we need to reach out to the various implementers, such as Europeana
azaroth: they have one, after a slight update
... it would take some time to have it up and running somewhere accessible
<ShaneM> http://testdev.spec-ops.io:8000/tools/runner/index.html?path=/annotation-protocol
ShaneM: you can do it yourself, they're in test-dev right now
<ivan> adjourned
<TimCole> Adjourn
TimCole: hopefully, by next week, we have some reports, and more specifics about the vocabulary testing
<ivan> trackbot, end telcon