See also: IRC log
<TimCole> PROPOSED RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/08/19-annotation-minutes.html
<ivan> +1
<TimCole> +1
<ivan> scribenick: Jacob
<azaroth> +1
<ShaneM> +1
RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/08/19-annotation-minutes.html
<azaroth> Rob's proposed text: https://rawgit.com/w3c/web-annotation/c5f2fdeeb7faad37af534e5b057ce03d92aada44/model/wd2/index.html#candidate-recommendation-exit-criteria
ivan: will make an editorial change to put the exit criteria in all three documents
TimCole: what needs to go into that summary? do we need some ancillary documentation to clarify the text that goes in?
ivan: the essential part would be the only part in the documents; can link to other documents for greater detail
TimCole: once a test is up and running, may be worthwhile to get the summary of assertions, e.g., what are we testing?
... what things are we claiming are features [of the model]?
<ShaneM> I have put a PR into the test results tree: https://github.com/w3c/test-results/pull/32
TimCole: fairly easy to summarize at a high level but, as has been pointed out, we reuse properties on different objects
... is the feature the property or the combination of property on a particular object?
<Zakim> ShaneM, you wanted to ask "feature of what?"
azaroth: had previously decided to treat the combination of properties on objects to be the feature, e.g., the name on an agent
ShaneM: feature of model? feature of vocabulary? feature of what?
... need more context
azaroth: need exit criteria for both; if they are not the same then not sure what the criteria for vocab would be
ShaneM: focus on model for now
... need to test the properties in their contexts, e.g., target at top level means something different than target at a deeper level
TimCole: would treat agent as creator of annotation as a feature, agent as creator of a specific resource as a feature, etc.
<ShaneM> can't we just say "2 independent implementations of each feature" ?
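ShaneM's "2 independent implementations of each feature" criterion, combined with the group's decision that a feature is a property on a particular object (e.g., name on an Agent), could be tallied mechanically from test results. A minimal sketch, assuming hypothetical implementation names and feature pairs (none of this is real result data):

```python
from collections import defaultdict

# Hypothetical per-implementation results: each maps a
# (object type, property) feature pair to pass/fail.
# Implementation names and pairs are illustrative only.
results = {
    "impl-a": {("Annotation", "target"): True, ("Agent", "name"): True},
    "impl-b": {("Annotation", "target"): True, ("Agent", "name"): True},
    "impl-c": {("Annotation", "target"): True, ("Agent", "name"): False},
}

def features_meeting_exit_criteria(results, required=2):
    """Return feature pairs passed by at least `required` independent implementations."""
    tally = defaultdict(int)
    for impl, features in results.items():
        for feature, passed in features.items():
            if passed:
                tally[feature] += 1
    return {f for f, n in tally.items() if n >= required}

print(sorted(features_meeting_exit_criteria(results)))
# → [('Agent', 'name'), ('Annotation', 'target')]
```

Treating the feature as the (object, property) pair rather than the bare property keeps the tally aligned with ShaneM's point that the same property can mean different things in different contexts.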
<ShaneM> Long discussion of draft text for CR exit criteria. Rob had proposed text in github:
<ivan> proposal for the protocol
<ivan> proposal for the model
ivan: what to do about the vocabulary?
azaroth: could reuse the same proposal as the one for the model; hard to test the vocab separately from serializations
ivan: need to test that the model, when expressed in ttl, is valid rdf
<Zakim> ShaneM, you wanted to note that we delivered a serialization of the vocab. testing its implementation is testing to make sure that multiple processors can parse it.
ShaneM: our implementation is the json-ld context
azaroth: context + ontology, those two together
<ShaneM> need to pull the context document into various JSON-LD processors
ivan: did we systematically check for validation of the context, check for production of valid rdf, etc.?
<azaroth> https://github.com/w3c/web-annotation/blob/gh-pages/model/wd2/check_egs.py
TimCole: hasn't been done systematically yet
... do we need to include annotations submitted by implementers?
<ShaneM> transitivity supports the validity
ivan: no, if the context doc is okay then their annotations will be ok
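Ivan's point, that implementers' annotations will be fine if the context document itself is fine, rests on JSON-LD expansion: every term used in an annotation must resolve through the context to an IRI or keyword. A stdlib-only sketch of that check follows; the context fragment here is an illustrative stand-in, not the published Web Annotation context, and a real check would run a full JSON-LD processor (e.g., pyld or jsonld.js) instead:

```python
import json

# Illustrative fragment in the shape of a JSON-LD context;
# the real context lives at http://www.w3.org/ns/anno.jsonld.
context = {
    "id": "@id",
    "type": "@type",
    "body": {"@id": "http://www.w3.org/ns/oa#hasBody", "@type": "@id"},
    "target": {"@id": "http://www.w3.org/ns/oa#hasTarget", "@type": "@id"},
}

annotation = json.loads("""{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/anno1",
  "type": "Annotation",
  "body": "http://example.org/post1",
  "target": "http://example.com/page1"
}""")

def unmapped_terms(doc, ctx):
    """Top-level terms in the document that the context does not define."""
    return [k for k in doc if k != "@context" and k not in ctx]

print(unmapped_terms(annotation, context))  # [] -- every term is mapped
```

If this list is empty for every shape of annotation the model allows, the "transitivity" ShaneM mentions holds: validating the context once covers the documents that use it.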
<TimCole> PROPOSAL: Go forward with Rob's drafts and our discussion about Vocabulary Exit Criteria (the last will be reviewed via email).
<ivan> +1
+1
<azaroth> +1
<TimCole> +1
<ShaneM> +1
<ShaneM> Should it be in a branch?
<TimCole> Ivan will make sure these get published once ready in gitHub
<azaroth> It's in exit-criteria at the moment
RESOLUTION: Go forward with Rob's drafts and our discussion about Vocabulary Exit Criteria (the last will be reviewed via email).
<ShaneM> http://w3c-test.org/tools/runner/index.html?path=/annotation-model
TimCole: discussed emailing implementers last week, have yet to move on that
... (@implementers on call) first set of tests are up, more going up later today, and a report [to follow]
... want implementers to start using tests
... who should we prod to use these?
<ShaneM> Anyone can run tests here: http://w3c-test.org/tools/runner/index.html?path=/annotation-model
dwhly: building a universal client architecture
... will take the summary here and ask tech team what they need to move forward
... want to test interoperability
TimCole: Nick's feedback on whether or not the testing process makes sense will be helpful, even if a bit of a distraction
<ShaneM> For example, is this page usable? http://w3c-test.org/annotation-model/annotations/annotationAgentOptionals-manual.html
<tbdinesh> I can use that too so we can also start with tests
PCiccarese: still updating the client aspects of domeo and annotea, so not doing useful implementations atm
azaroth: no distinction of where the implementation is (client-side or server-side)
PCiccarese: so if old server could produce a new annotation, that would count as an implementation?
<Zakim> ShaneM, you wanted to point out that the protocol tests just speak the protocol.
<tbdinesh> but to read it back might be harder (paolo)
<azaroth> Protocol implementations won't be a problem, I think.
<ShaneM> azaroth: yay!
<azaroth> Benjamin and I each have one, plus I know of two others
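The protocol implementations discussed here (client- or server-side, per azaroth's point that the distinction doesn't matter) boil down to a simple exchange: a client POSTs an annotation to a container and the server replies 201 Created with a Location for the new annotation. A minimal self-contained sketch, assuming a made-up container path and id scheme:

```python
import http.server
import json
import threading
import urllib.request

# Toy annotation container: accepts a POSTed annotation and replies
# 201 Created with a Location header. Paths and ids are made up.
class AnnotationContainer(http.server.BaseHTTPRequestHandler):
    annotations = {}

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        anno = json.loads(body)
        anno_id = f"/annotations/{len(self.annotations) + 1}"
        self.annotations[anno_id] = anno
        self.send_response(201)
        self.send_header("Location", anno_id)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), AnnotationContainer)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: POST an annotation with the JSON-LD media type.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/annotations/",
    data=json.dumps({
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "target": "http://example.com/page1",
    }).encode(),
    headers={"Content-Type": "application/ld+json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers["Location"])
server.shutdown()
```

Exercising a real server with the protocol test suite replaces the toy handler above; the client half is just the request-building code.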
TimCole: if tests are in good shape next week (which seems to be the case), will contact several other implementers to engage with the tests
<Zakim> ShaneM, you wanted to note that I think that the protocol tests include a server implementation.
TimCole: also adding to the web repo readme some info about how to use the schemas locally
... Rob helping with the python
... Jacob already gave some text describing the ajv / node.js process
... want to have some experience with implementers by the end of next week so that our extension request has some basis
ShaneM: protocol test q: submitting the server wpt first, is it okay if just the server tests go up initially?
... e.g., tests exercising the server are nearly ready to submit, but the ones testing a client are farther from readiness; is it okay to push on without the other?
ivan: need to get the extension request to w3c sooner rather than later, need to demonstrate that we have things that can be relied on
... would be great if by 2 weeks from now the testing/implementation repo is not empty
... even incomplete results are good
... so whatever we have
ShaneM: so we could begin generating reports later today
... concerned that we haven't actually looked at the tests
... need to make sure the results match our expectations
TimCole: model tests, excepting the annotation collections, should go up over the weekend
... do need to look at the validation (pass/fail), how to generate the report to make clear the differences between the examples from the documentation, e.g., ex. 1 (no substantial features) vs. ex. 42 (an annotation collection) vs. ex. 44 (many substantial features)
<ivan> trackbot, end telcon