14:38:18 RRSAgent has joined #annotation
14:38:18 logging to http://www.w3.org/2016/09/02-annotation-irc
14:38:20 RRSAgent, make logs public
14:38:20 Zakim has joined #annotation
14:38:22 Zakim, this will be 2666
14:38:22 ok, trackbot
14:38:23 Meeting: Web Annotation Working Group Teleconference
14:38:23 Date: 02 September 2016
14:38:34 Agenda: http://www.w3.org/mid/05b001d204a2$1334dbd0$399e9370$@illinois.edu
14:39:02 Chair: Tim, Rob
14:39:08 Regrets: Dinesh
14:47:46 azaroth has joined #annotation
14:48:27 rrsagent, start meeting
14:48:27 I'm logging. I don't understand 'start meeting', azaroth. Try /msg RRSAgent help
14:48:35 trackbot, start meeting
14:48:37 RRSAgent, make logs public
14:48:39 Zakim, this will be 2666
14:48:39 ok, trackbot
14:48:40 Meeting: Web Annotation Working Group Teleconference
14:48:40 Date: 02 September 2016
14:49:09 azaroth has changed the topic to: Agenda: https://lists.w3.org/Archives/Public/public-annotation/2016Sep/0018.html
14:49:19 Chair: Tim_Cole, Rob_Sanderson
14:49:27 Regrets: TB_Dinesh
14:49:34 Present+ Rob_Sanderson
14:56:34 TimCole has joined #annotation
14:59:18 Jacob has joined #annotation
14:59:18 Meeting: Web Annotation Working Group Teleconference
15:00:43 Present+ Dan_Whaley
15:00:54 Present+ Tim_Cole
15:01:13 bjdmeest has joined #annotation
15:01:24 Sorry, ivan, I don't understand 'trackbot does all the rest for you'. Please refer to for help.
15:01:38 working on it
15:01:42 Sorry, dwhly, I don't understand 'trackbot, get coffee'. Please refer to for help.
15:01:46 Present+ Ben_De_Meester
15:01:55 Present+ Jacob_Jett
15:02:18 Present+
15:02:20 present+ ShaneM
15:03:03 scribenick: bjdmeest
15:04:06 TimCole: Let's get started
15:04:23 ... first, we'll talk about the exit criteria of CR
15:04:40 ... then, about extending the WG to get through CR and PR
15:04:48 ... then, we'll talk about testing
15:04:53 ... other topics?
15:04:54 takeshi has joined #annotation
15:05:20 PROPOSED RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/08/26-annotation-minutes.html
15:05:25 Topic: Minutes
15:05:28 +1
15:05:29 +1
15:05:33 +1
15:05:34 +1
15:05:36 +1
15:05:38 +1
15:05:46 RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/08/26-annotation-minutes.html
15:06:05 Present+ Takeshi_Kanai
15:06:07 Topic: CRs update
15:06:12 Present+ Nick_Stenning
15:06:27 tilgovi has joined #annotation
15:07:17 Present+ Randall_Leeds
15:07:29 azaroth: we had a request
15:07:39 ... we should publish the exit criteria
15:07:44 ... that's required
15:07:53 ... we have done that
15:08:13 ... there are new versions of the 3 specs (each with an appendix about the exit criteria)
15:08:50 ... for the model, implementations; for the vocabulary, evidence that it is internally consistent and can be used to go from json-ld to json
15:09:08 ... for the protocol, 2 implementations of all the interactions
15:09:16 ... retrieving an annotation, deleting, etc.
15:09:31 ... they will be republished on the 6th of September
15:09:54 ivan: we also wanted to link to the test cases themselves, but they are not clearly available yet
15:10:13 ... everything is done, the publications are checked, they will be published on Tuesday
15:10:25 ... that's that for CR
15:10:39 Topic: Extension request
15:11:13 TimCole: we are trying to do an extension request to extend the WG to get through CR and PR
15:11:29 ivan: I gave Ralph(?) an overview
15:11:46 ... we hope to be able to cover all the exit criteria by the end of October
15:11:50 ... that's one month extra
15:12:21 ... that, plus the problem of Christmas in the middle
15:13:04 ... my pessimistic deadline would be to publish the Recommendation by the end of January, so I asked to extend until the end of February
15:13:12 ... hopefully, we will get it
15:13:26 ... in any case, the more we can show as readiness, the better
15:14:26 ...
... we should get initial implementation reports on our pages
15:14:39 ... they don't need to be complete
15:14:50 ... but at the moment, the reports are placeholders
15:15:10 ... if we have (partially) tested implementations (e.g., Rob's, Benjamin's)
15:15:18 ... showing them is critical
15:15:33 ... ideally by next week, realistically by the week after
15:15:50 TimCole: test reports will show, preferably next week
15:16:11 ivan: they will look at those test reports, as they are in the CR documents
15:17:08 ShaneM: about results: I can now merge to the repo
15:17:14 https://github.com/w3c/test-results/pulls
15:17:27 ... I will push results for our implementation, right now
15:17:35 https://github.com/w3c/test-results/tree/gh-pages/annotation-model
15:17:50 TimCole: there's a W3C test results repo on github
15:18:22 ... there's a small typo: 'for' ==> 'fork'
15:19:00 ... There's an open pull request
15:19:00 Topic: Testing
15:19:22 TimCole: Model testing:
15:19:36 ... we have about 100 assertions covering body, target, ...
15:19:48 ... I need to add a separate folder for specificResource
15:20:09 ... those are in the test-dev repository
15:20:47 ... you can now use those tests
15:21:06 ... you go to the w3c test site
15:21:11 ... you input annotations
15:21:15 ... you get reports
15:21:19 q+
15:21:32 ... those reports, you can add using a pull request to the test-results repo
15:21:33 ack ivan
15:22:02 ivan: what ends up in the test-results/implementation reports is a set of json files?
15:22:09 https://github.com/spec-ops/wptreport
15:22:10 ShaneM: that and a report
15:23:15 TimCole: the current report doesn't mention the implementation, though you do know who did the pull request
15:23:35 CH53.json
15:23:43 ShaneM: as a convention, tests name the file as the name of the implementation and the version
15:23:59 ... I just asked that of the current pull request
15:24:12 ivan: all implementers we currently have should get some kind of name?
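To illustrate the model testing being discussed (you input an annotation, the runner checks its assertions), here is a minimal sketch. The `@context` IRI and property names follow the Web Annotation data model; the `example.org` IRIs and the `looks_like_annotation` helper are illustrative only and not part of the official test suite.

```python
import json

# A minimal Web Annotation of the kind the test runner accepts.
# The @context IRI is the one the specs define; the id/target IRIs
# are placeholders for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno1",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "Comment text",
        "format": "text/plain",
    },
    "target": "http://example.org/page1",
}

def looks_like_annotation(obj):
    """Rough structural check over MUST-level keys (a sketch, not
    the actual ~45 MUST assertions in the test suite)."""
    return (
        obj.get("@context") == "http://www.w3.org/ns/anno.jsonld"
        and obj.get("type") == "Annotation"
        and "id" in obj
        and "target" in obj
    )

# Round-trip through JSON, as a submitted annotation would be.
serialized = json.dumps(annotation)
print(looks_like_annotation(json.loads(serialized)))  # True
```

An annotation missing its `target` or `@context` would fail this check, which is the shape of "failing a MUST" in the real runner.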
15:24:23 Shane: whatever name makes sense is fine
15:24:27 q+ re names
15:24:33 ... I'll modify the instructions so that is clear
15:25:11 TimCole: the downloadable portion of the generator requires two characters and two numbers for the file.json
15:26:01 ShaneM: apparently yes
15:26:03 ack azaroth
15:26:03 azaroth, you wanted to discuss names
15:26:36 azaroth: is it possible to have additional information about the things with names?
15:26:55 ... e.g. a link for every implementation? a registry?
15:27:03 ShaneM: we can put that in the readme
15:27:08 uskudarli has joined #annotation
15:27:21 q+
15:27:37 TimCole: The pull requester could add extra files, no? Then we could tell them what we want extra
15:27:46 ack ivan
15:28:28 ivan: does the report make an automatic count, i.e., how many implementations per test, for the CR, or do we have to create that afterwards?
15:28:40 ShaneM: it creates a separate report
15:28:56 ... if we want to make changes we can, but I don't want to change the environment too much
15:28:58 q?
15:29:05 ... there are other players in the field
15:29:33 TimCole: we have about 45 assertions that we expect every annotation to pass, the MUSTs
15:30:15 ... and then we have about 100, which are designed to catch optionals
15:31:21 ... so, if someone only implements an optional body and a simple target, it seems as if they fail a lot of tests (the optional target tests)
15:31:59 ... can we catch that some way, explain to people that they don't 'fail' as much as it seems?
15:32:33 ShaneM: this is a meta-conversation about what to do about optional features
15:32:56 +1 to that reduction
15:33:23 TimCole: I reduced the tests a bit, e.g. for textDirection it doesn't depend on which type of body, so that helps a bit
15:34:11 q+
15:34:17 ack ivan
15:34:36 ivan: how do we do the testing and reporting on the vocabulary?
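Two points above can be sketched in code: the result-file naming convention ("two characters and two numbers", e.g. CH53.json) and ivan's question about counting implementations per test. The regex and the tally below are an interpretation of those remarks, not logic quoted from the actual tooling, and the result data is invented for illustration.

```python
import re

# Filename convention as described by TimCole: two characters plus
# two numbers, e.g. CH53.json (implementation name + version).
PATTERN = re.compile(r"^[A-Za-z]{2}\d{2}\.json$")

def valid_result_filename(name):
    return bool(PATTERN.match(name))

print(valid_result_filename("CH53.json"))     # True
print(valid_result_filename("results.json"))  # False

# Ivan's question: how many implementations pass each test?
# A minimal tally over hypothetical {implementation: {test: status}} data;
# CR exit needs at least two passing implementations per feature.
results = {
    "CH53": {"musts": "PASS", "optionals": "PASS"},
    "RS01": {"musts": "PASS", "optionals": "FAIL"},
}
per_test = {}
for impl, tests in results.items():
    for test, status in tests.items():
        if status == "PASS":
            per_test[test] = per_test.get(test, 0) + 1
print(per_test)  # {'musts': 2, 'optionals': 1}
```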
15:34:40 ShaneM: by hand
15:35:05 for example, we may decide not to consider each kind of selector a separate feature requiring testing; this would reduce the number of tests.
15:35:19 ... we take a template that looks like the current report, and fill in the rows
15:35:38 ivan: we need to decide which validation tools we use
15:35:45 ... for RDF vs JSON
15:36:26 q?
15:36:34 azaroth: there are tools, the Python RDFLib, and the JSON-LD tool from Digital Bazaar
15:36:48 ivan: what would be the other independent toolset?
15:36:56 ... what's the situation with json-ld tools?
15:37:03 azaroth: it has implementations in most languages
15:37:13 ... ruby is pretty good, also for RDF
15:37:33 ack ShaneM
15:37:43 ivan: maybe we can ask Greg? from a json-ld POV, he would be a logical choice
15:38:24 azaroth: what about javascript-based?
15:39:04 ivan: RubenVerborgh has a lot of JavaScript tools
15:39:25 ... if he could run those few tests, via his toolkit
15:40:04 ... then we have 3 mature toolsets
15:40:10 ... azaroth, can you ask Greg?
15:40:17 azaroth: yes
15:40:54 ShaneM: I don't care how you give them to me; we just need to input them into the html file
15:42:04 q?
15:42:37 http://w3c-test.org/tools/runner/index.html
15:42:56 ... we need implementations for testing the annotation model
15:44:36 TimCole: two parts to the question
15:44:58 ... could you generate annotations conforming to the annotation model?
15:45:33 ... if so, could you input that json-ld in the test runner, generate the json test results file, and do the pull request?
15:45:52 nickstenn: I'm not sure our client will spit out the correct JSON-LD in the near future
15:46:00 ... but our server could render them as JSON-LD
15:46:14 ... I'm very happy to test those using the test runner
15:47:10 tilgovi: if it's important to have client-side javascript that generates conforming json
15:47:32 TimCole: you have to do one annotation at a time
15:48:37 tilgovi: ...
... I'll have a look at that
15:48:44 Updated result reporting instructions at https://github.com/w3c/test-results/tree/gh-pages/annotation-model and https://github.com/w3c/test-results/tree/gh-pages/annotation-protocol
15:49:03 TimCole: it's important to have test results published
15:50:08 bigbluehat: about protocol testing: it's about exercising a server, and exercising a client
15:50:14 ... there's a pull request pending
15:50:39 ... there is one test: you give it the url of your annotation server, and the url of one annotation on that server
15:51:16 ShaneM: I've only ever run that against the basic python server
15:51:41 ... https is a SHOULD, and the python server doesn't implement that
15:52:01 ... about client-side protocol testing
15:52:31 ... there are basically no requirements
15:52:34 q+
15:52:56 ack azaroth
15:53:11 ... I found one about sending a Prefer header for a certain use case, but that doesn't really have anything to do with the client
15:54:12 azaroth: because HTTP doesn't require a specific format, and we don't extend HTTP, there are no testable assertions for the client
15:54:46 ShaneM: I would like someone to either test against a server, or give me links to a server and I'll run the tests
15:55:03 ivan: so we need to reach out to the various implementers, such as Europeana
15:55:16 azaroth: they have one, after a slight update
15:55:35 ...
... it would take some time to have it up and running somewhere accessible
15:55:58 http://testdev.spec-ops.io:8000/tools/runner/index.html?path=/annotation-protocol
15:56:00 ShaneM: you can do it yourself; they're in test-dev right now
15:58:04 adjourned
15:58:05 Adjourn
15:58:05 TimCole: hopefully, by next week, we'll have some reports, and more specifics about the vocabulary testing
15:58:40 trackbot, end telcon
15:58:40 Zakim, list attendees
15:58:40 As of this point the attendees have been Rob_Sanderson, Dan_Whaley, Tim_Cole, Ben_De_Meester, Jacob_Jett, ivan, ShaneM, Takeshi_Kanai, Nick_Stenning, Randall_Leeds
15:58:48 RRSAgent, please draft minutes
15:58:48 I have made the request to generate http://www.w3.org/2016/09/02-annotation-minutes.html trackbot
15:58:49 RRSAgent, bye
15:58:49 I see no action items
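The protocol testing discussed above (retrieving an annotation from a server, plus the Prefer header ShaneM mentioned) can be sketched as follows. The Accept profile and Prefer values follow the Web Annotation Protocol; the helper functions are hypothetical, built as plain header dicts so nothing here contacts a real server.

```python
# Media type for annotations, with the JSON-LD profile the
# protocol specifies for content negotiation.
ANNO_PROFILE = 'application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"'

def get_annotation_headers():
    """Request headers for retrieving a single annotation as JSON-LD."""
    return {"Accept": ANNO_PROFILE}

def get_container_headers():
    """Request headers for retrieving an annotation container, asking
    the server for a minimal representation via Prefer."""
    return {
        "Accept": ANNO_PROFILE,
        "Prefer": ('return=representation;'
                   'include="http://www.w3.org/ns/ldp#PreferMinimalContainer"'),
    }

# A protocol test would issue these requests against the server URL
# you supply, then check status codes and response headers.
print(get_annotation_headers()["Accept"])
print(get_container_headers()["Prefer"])
```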