W3C

Web Annotation Working Group Teleconference

10 Jun 2016

Agenda

See also: IRC log

Attendees

Present
Rob Sanderson (azaroth), Sarven Capadisli, Ivan Herman, Tim Cole, Dan Whaley, Shane McCarron (ShaneM), TB Dinesh, Benjamin Young, Ben De Meester, Takeshi Kanai, Doug Schepers (shepazu), Paolo Ciccarese
Regrets
Jacob
Chair
Tim
Scribe
Rob (azaroth)

Contents

  1. Contents
    1. CR Transition
    2. Last Week's Minutes
    3. Testing
    4. Extra Admin
  2. Summary of Action Items
  3. Summary of Resolutions



CR Transition

TimCole: 9 people with +1 by email for the CFC to go forward with CR
... Do we take a vote now?

ivan: Let's do that, as there are people who didn't vote on the list who are here
... before we do that, let's agree on the publication date

TimCole: July 5th?

ivan: Even if we issue call for transition today, it takes a week before the transition call, which would be around the 20th
... we can try for the 23rd, something might come up on the transition call
... Week of the 27th is a moratorium week, which pushes out to the 5th of July
... would like to be that week. 23rd is living dangerously

TimCole: Don't want to live dangerously this morning
... Will you put it in as a proposal

<ivan> Proposed RESOLUTION: The WG asks the Director to authorize the publication of the Protocol, Model, and Vocab documents as Candidate Recs, with a publication date on the 5th of July, 2016

<ivan> +1

+1

<csarven> +1

<PaoloCiccarese> +1

<takeshi> +1

<TimCole> +1

<tbdinesh> +1

<bjdmeest> +1

<bigbluehat> +1

<ShaneM> +1

RESOLUTION: The WG asks the Director to authorize the publication of the Protocol, Model, and Vocab documents as Candidate Recs, with a publication date on the 5th of July, 2016

<ivan> Remaining editorial issues: https://github.com/w3c/web-annotation/milestones/V1%20CR

ivan: All of the issues are minor, but must be done
... the most complicated is the need for a URI with a mockup of the implementation report
... Otherwise the rest Rob [and editors] can do

ShaneM: I'll see if Gregg can do the mockup of the implementation report

ivan: Great, as soon as the issues are closed I'll start the process for the call

Last Week's Minutes

<TimCole> PROPOSED RESOLUTION: Minutes of the last week's call are approved: https://www.w3.org/2016/06/03-annotation-minutes.html

<ivan> +1

ivan: Shane, do you want to be on the call for the testing issues

<PaoloCiccarese> +1

ShaneM: I'll be there :|

ivan: That's it :)

RESOLUTION: Minutes of the last week's call are approved: https://www.w3.org/2016/06/03-annotation-minutes.html

TimCole: Neither Rob nor I are available next Friday 17th
... proposal is to cancel the call unless there's someone who wants to lead it?

ivan: I can't do it either.

Shane: I'm also out

shepazu: I'm happy to have Friday off :)

TimCole: We'll pick up the calls on the 24th

Testing

ShaneM: Overview of where I'm at.... lots of pieces to the puzzle. Been focusing on the model testing infrastructure
... Largely complete
... Thing I'm working on is an OR clause for a set of assertions. So long as one feature passes, then the overall test passes
... The other piece of the puzzle is bugs in the WPT. Got those fixed and have been checked in.
... Good because it has primed the pump of working with the maintainers of the framework. So future integration should go more smoothly
... Benjamin and Tim should talk about their stuff

bigbluehat: I had switched out to doing protocol testing

<bigbluehat> https://github.com/BigBlueHat/web-annotation-protocol-tester

bigbluehat: Tim and friends seem to be doing a good job with the schemas
... ^^ this link is a protocol client as javascript mocha tests
... Mostly a toy but hopefully useful

<bigbluehat> https://github.com/Spec-Ops/web-platform-tests/pull/3

bigbluehat: built on wptserve, ^^, the Python-based HTTP server
... so code in that PR that implements the core of the annotation protocol
... such as the prefer headers and responses. Thus an implementation inside WPT to be integrated as part of the testing process
... javascript code then exercises the server
... easier than loading REST-client and running tests by hand

<TimCole> https://github.com/w3c/web-annotation-tests

TimCole: A general question ... in the model testing we have the web-annotation-tests repo on GitHub ^^
... is that where we're supposed to be working, or should we be in the spec-ops area

ShaneM: Infrastructure in spec-ops, annotation specific tests in web-annotation-tests
... there's a webhook that pulls those in to the deployment
... don't want to mess with the tests at the same time as the infrastructure as they're independent

TimCole: So an implementation that wants to test, like Europeana, where do they go?

ShaneM: Couple steps before we get there, but W3C has a test server
... URI escapes me at the moment
... that's the canonical place to run tests from. Can also bring up the framework themselves if they want

<bigbluehat> http://www.w3c-test.org/

ShaneM: framework doesn't record what they do, you record it and provide in the implementation report

bigbluehat: the protocol pieces so far are in the spec-ops repo as PRs
... web annotation tester repo is under my GH account right now
... didn't want it to seem more official :)
... could be set up to run in a browser, but once it's more complete, along side the server, could be live where ever.
... Will let the mailing list know when it's useful for more than just me

TimCole: Have you talked with Nick since Berlin

bigbluehat: I haven't since then no

TimCole: Rob created a spreadsheet before Berlin of the keys / features of the model
... have used that as a starting point

<TimCole> https://docs.google.com/spreadsheets/d/13LRf2-OCJlKplQE5MTV3breguuRhUyhQW8IZ_jQMBjw/edit?usp=sharing

TimCole: and working with Shane to get schemas into folders. Revising the spreadsheet, which I sent to the list
... will take a week or two to get it fully populated, but moving along okay
... still using v0.4
... others can edit and improve the schemas
... one gap is a set of negative examples that should NOT validate
... Getting the schemas to run with help from Shane

Shane: Have a core question - remembering that the tests are manual, we want to have the fewest number of tests that give the greatest coverage
... You're keeping that in mind as you group the assertions together?
... Sent a proposal to semi-automate lots of tests with the same input.

TimCole: We can write a script that will use all of the schemas as a single test
... Can run a few then skip a bunch that aren't relevant
... thought we might end up with one test per major folder, so 5-6 tests
... maybe what you're suggesting will address it

ShaneM: single test per major feature area could make sense
... but one test per way that a feature is used
... if there's orthogonality in a feature it should be broken up into two tests
... we have a way to automatically repopulate the manual test input window for the annotation when the next test loads
... you paste in the annotation, and there's a checkbox to copy it to the next one.
... so you don't paste it again, you just click go again
... to reduce the clunkiness
... if there's 6, I don't care, if there's 100, I care about clunkiness

TimCole: We might end up with about 10?
... 5 kinds of bodies: bodyValue, embedded text, external resource, specific resource, choice/set
... some of them then follow on to other tests, like for specific resource or choice
... would have the same (almost) 5 things for targets
... so the major features are about a dozen

ShaneM: convenience feature might not make sense?

TimCole: I think it would

ShaneM: Oh not because it's a small number, but because the input would be different

TimCole: I might have multiple bodies, so that a single annotation implements multiple features
... don't think people should break up their annotations
... not sure I have a good use case in mind
... if what you're saying is not hard to do, it would be nice
... multiple bodies that demonstrate different features seem useful
... any questions at this point?
... In terms of documenting the test process, have a good readme file
... is that the kind of docs we need. Need the report from Gregg. What else do we need?

ShaneM: Definitely need docs. Readme is guidance for test authors, not testers
... need a thing to say how to run the tests and capture the results
... some is just part of WPT
... Have a couple mechanisms to get from tests to implementation reports
... both are fine, just need to pick one

<ivan> https://github.com/w3c/web-annotation/blob/gh-pages/admin/CRTransitionRequest.md

ivan: One of the things I forgot. Have created CR transition request text
... supposed to present about testing and implementation on the call
... don't have to have a detailed presentation
... but a draft description would be good to make the request smooth
... want to send request Monday or so

<ShaneM> ACTION: ShaneM to write up drafty test process document for model, server, and clients [recorded in http://www.w3.org/2016/06/10-annotation-minutes.html#action01]

<trackbot> Created ACTION-33 - Write up drafty test process document for model, server, and clients [on Shane McCarron - due 2016-06-17].

TimCole: a little worried about richness?

ivan: doesn't need to be rich, just have to have it in writing that we have the main testing blocks
... this is what they are, and that's maybe all we need, but I don't know where they are now

TimCole: Shane has volunteered to help
... both Rob and I are travelling tomorrow

<Zakim> ShaneM, you wanted to ask about how we expect to test an implementation of an annotation server

ShaneM: I know how to test an annotation client ... wondering about testing an annotation server
... is the work you've been doing so far Benjamin something we can use to exercise a real server

bigbluehat: That's the hope :)

<bigbluehat> https://github.com/BigBlueHat/web-annotation-protocol-tester/blob/master/test/musts.js

bigbluehat: Actual javascript ^^ it uses Chai and structures tests in MUST and SHOULD and refs lines from the spec
... copied and pasted. Then it tries to write a test for the specific MUST/SHOULD. Focusing on the specific requirements
... good if Rob could test against MangoServer
... and anyone else with an implementation
... the testing and protocol code are close together. They serve as unit tests for the server I'm writing.
... could rewrite in python

<Zakim> ShaneM, you wanted to ask how difficult it would be to put this in a browser

https://github.com/azaroth42/MangoServer

ShaneM: Your tests are in JS, can we wrap it to run in a browser with an HTML file to give it the endpoint and just click go

bigbluehat: Should be fine to do that
... can be incorporated with other testing frameworks. Could import to WPT. Distance is unknown

ShaneM: that makes our story consistent, which is important

TimCole: Any questions?
... Interop question about client A sends annotation to a server and then client B reads it in some fashion
... do we understand how that's going to work?

ShaneM: Don't need to do it, so don't put it in the plan

ivan: Yes, lets not require it in the official documents
... but the director would love to see it
... if we can do it, even as partially a mock up, that would be great

<shepazu> +1

ivan: We know Europeana have a server. Need clients.

TimCole: Server seems easier than getting clients that annotate the same content

ivan: Yes. Europeana have annotations on images. Maybe Rob can pick up one of their annotations
... to display and reuse the annotation. That would be already great. Clearly independent

<ShaneM> note that bigbluehat is implementing a server right now in WPT

ivan: Not sure how much work it would require

TimCole: Have some content here that might be shared with Europeana

Extra Admin

ivan: One thing we need to resolve is to set a date for the end of the CR period; it can't be later than end of September, as the charter runs out
... I propose the end of September but maybe there are other dates in mind?

TimCole: Availability of implementations to test
... schedule in July/August is hard

ivan: Can't set the date earlier
... and can't make it later

TimCole: So 3 month CR

ivan: which is quite reasonable
... sometimes it's longer, but it's reasonable

TimCole: A little optimistic, but that's what we've got to do

ivan: If we can't close CR in terms of proving all the features, then it stays open until we get it. The end date is just that implementers don't have to rush

TimCole: What happens in september if we're not there?

ivan: We ask for an extension, and leave the CR open
... horror stories about groups with CR open for 2 years

shepazu: if it gets to be 6 months and we haven't exited CR, can re-examine the criteria and drop features or postpone them
... would be more important to have a REC than a perfect one

<Zakim> ShaneM, you wanted to ask about dropping features

ivan: Yes, that's fine. If we need another month, that's easy

ShaneM: Curious about process in the W3C for dropping features

ivan: We reissue a CR

ShaneM: That's too bad

shepazu: We're very close to completing some of the group's deliverables, if we request a bit more time that won't be controversial
... 99% odds that they'll keep it open while we try to finish

ShaneM: don't disagree. Let's say there's 20 features, and 1 doesn't demonstrate interop, was hoping to say you could just drop the feature without going back to the beginning

shepazu: Can do if we mark the feature at risk
... if we mark something as at risk, and when we transition we remove the feature

ivan: We have two features at risk -- one is the social web work on activitystreams, the one from us is Composite/List/Independents
... so date is fine, for my planning, when do we think it will be done?
... meaning there's actions on shane, gregg and a few editorial things

bigbluehat: AS2.0 is moving to CR ... still

<ivan> https://github.com/w3c/web-annotation/milestones/V1%20CR

ivan: all of them are minor

<ivan> https://github.com/w3c/web-annotation/issues/251

<Zakim> ShaneM, you wanted to ask about moving them to non TR space

ShaneM: For 251, I wouldn't put them in TR/ but anywhere else
... might want to update them in the future

ivan: Shane when do you think you can get yours done

<shepazu> ShaneM++

ShaneM: Before the end of the day

<Loqi> ShaneM has 4 karma

ivan: So we can go to the Director on Tuesday

TimCole: Discussion around the vocab
... what do we do to validate the vocab document?

ivan: Not really testing of it; it's abstract, serialized at least into JSON-LD as per the model
... not sure what we'd test
... we could test that the json-ld context against a processor produces turtle

<Zakim> ShaneM, you wanted to say that technically the implementation of the vocab is the context

ShaneM: Implementation is the context
... way you demonstrate interop could be feeding it to three JSON-LD processors and make sure that they accept it
... we did that for HTML5 modularization

ivan: Know of two processors

TimCole: Lets put that in

ShaneM: Will put that in to the document

TimCole: Let's adjourn and talk in 2 weeks

bye all!

<ivan> trackbot, end telcon

Summary of Action Items

[NEW] ACTION: ShaneM to write up drafty test process document for model, server, and clients [recorded in http://www.w3.org/2016/06/10-annotation-minutes.html#action01]
 

Summary of Resolutions

  1. The WG asks the Director to authorize the publication of the Protocol, Model, and Vocab documents as Candidate Recs, with a publication date on the 5th of July, 2016
  2. Minutes of the last week's call are approved: https://www.w3.org/2016/06/03-annotation-minutes.html
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/06/10 16:11:21 $