14:40:56 RRSAgent has joined #annotation 14:40:56 logging to http://www.w3.org/2016/03/25-annotation-irc 14:40:58 RRSAgent, make logs public 14:41:00 Zakim, this will be 2666 14:41:00 I do not see a conference matching that name scheduled within the next hour, trackbot 14:41:01 Meeting: Web Annotation Working Group Teleconference 14:41:01 Date: 25 March 2016 14:41:10 Chair: Tim Cole 14:41:30 Agenda: https://lists.w3.org/Archives/Public/public-annotation/2016Mar/0104.html 14:42:09 rrsagent, draft minutes 14:42:09 I have made the request to generate http://www.w3.org/2016/03/25-annotation-minutes.html ivan 14:50:22 ivan has joined #annotation 14:54:39 TimCole has joined #annotation 14:57:48 Jacob has joined #annotation 14:58:14 Agenda: https://lists.w3.org/Archives/Public/public-annotation/2016Mar/0104.html 14:59:18 Kyrce has joined #annotation 15:00:30 Present+ Jacob_Jett 15:00:36 Present+ shepazu 15:00:41 present+ ShaneM 15:00:43 ivan has joined #annotation 15:00:43 Present+ Kyrce_Swenson 15:00:47 Present+ Tim Cole 15:00:50 Present+ Benjamin_Young 15:00:58 Present+ Ivan 15:01:34 present- shepazu 15:01:37 tilgovi has joined #annotation 15:02:08 bjdmeest has joined #annotation 15:02:16 Present+ Ben_De_Meester 15:03:15 ivan has changed the topic to: Agenda: https://lists.w3.org/Archives/Public/public-annotation/2016Mar/0104.html 15:04:05 Regrets: Nick, Dan_Whaley 15:04:09 tbdinesh has joined #Annotation 15:04:13 Present+ Randall_Leeds 15:04:33 scribenick: bjdmeest 15:04:58 Topic: Acceptance of Minutes: https://www.w3.org/2016/03/18-annotation-minutes.html 15:05:11 PROPOSED RESOULTION: Minutes of the previous call are approved: https://www.w3.org/2016/03/18-annotation-minutes.html 15:05:23 TimCole: any concerns? 15:05:27 s/RESOULTION/RESOLUTION/ 15:05:36 takeshi has joined #annotation 15:05:43 RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/03/18-annotation-minutes.html 15:05:45 PaoloCiccarese has joined #annotation 15:06:05 Topic: Results of the CFC 15:06:27 ... CFC was sent out last week 15:06:40 ... 13 plus ones since a couple of minutes ago 15:06:44 ... no 0 or -1s 15:06:49 ... so CFC is accepted 15:06:51 q+ 15:06:59 ... any concerns on the call? 15:07:04 Present+ TB_Dinesh 15:07:13 Present+ Paolo_Ciccarese 15:07:21 https://github.com/w3c/web-annotation/issues/186 15:07:33 ... the CFC was under the assumption that the minor editorial issues would be addressed before publishing 15:07:47 ... everything done except for one 15:07:49 Present+ Takeshi_Kanai 15:08:01 ivan: that's fine now, remaining is for the future 15:08:23 ack ivan 15:08:42 ... if we decide to make a FPWD, there are two (minor) consequences 15:09:00 ... first: history of the splitting up is lost 15:09:17 ... second: patent policy would require to start from scratch for that document 15:09:30 ... so at least 6 months needed between FPWD and REC 15:09:32 ... that's not ideal 15:09:37 ... I discussed 15:09:40 ... result: 15:10:00 ... Vocab doc is published as FPWD, and previous version is the Model doc 15:10:12 ... so consequences are resolved 15:10:39 ... practical consequence is a small editorial change 15:11:22 Proposed Resolution: publish all 3 as Working Draft, with previous draft for Vocabulary being the earlier Data Model draft 15:11:43 Resolution: publish all 3 as Working Draft, with previous draft for Vocabulary being the earlier Data Model draft 15:12:09 ivan: question is: are the documents as they are today final and ready to be published? 15:12:15 bigbluehat: yes 15:12:35 ivan: also: have they been checked by link checker and html checker etc.? 15:12:46 paolociccarese: We can do it again 15:13:20 ivan: I will have to check it to be safe, but if you guys could do that by Monday, I can do the rest of the admin on Monday. 15:13:47 ... PaoloCiccarese, also change the previous version of the Vocab doc to the Model doc, as discussed 15:13:53 ... I will pick it up from there 15:14:24 ShaneM: Ivan, should SoTD be updated to say this is a split? 15:14:38 ivan: Paolo can do it as well, but yes! 15:15:06 ... in the status of the Vocab document, there should be an extra sentence that this is a split from the Model doc 15:15:24 Topic: Welcome ShaneM 15:15:59 ShaneM: I've been with W3C since '97 15:16:09 ... I;m with spec ops, doing standards related work 15:16:19 ... Shepazu contacted me about testing 15:16:26 ... I have a ton of questions 15:16:44 ... I've been doing standards work since '85, and testing since '87 15:16:45 q+ 15:17:17 ack ivan 15:17:26 ivan: Shane is modest. He was one of the main editors for the RDFa spec 15:17:32 ... which might be useful 15:17:41 ... he also co-maintains respec 15:18:10 TimCole: other anouncements? 15:18:30 tantek has joined #annotation 15:18:42 It is the section entitled "Status of this document" 15:18:54 Topic: Testing 15:19:11 https://www.w3.org/2016/03/18-annotation-minutes.html#test 15:19:12 TimCole: There are some notes in last week's minutes 15:19:47 ... we have to look into our documents, find the features that are described, and provide for a strategy to test these features 15:19:58 ... and make sure they are unambiguous and implementable 15:20:01 ... I welcome help 15:20:19 ... Identifying the features is important, implementing them also 15:20:39 ... particularly the selector approach might not be implemented in every system 15:20:48 ... we have to have at least two implementations for each feature 15:21:06 ... how do we go on with this? 15:21:11 q+ 15:21:18 ack ivan 15:21:47 ivan: the real question (for the model) is: what is really what we want to test? 15:22:04 ... it is a vocabulary for a specific usage, we have to identify what we want to test 15:22:21 ... the current direction is that we define specific scenarios 15:22:40 ... an implementation should show that these scenarios can be mapped on the correct annotation structures 15:23:09 ... and maybe also the other way around: annotation structures should be understood by implementations and in some way tell us what they would do with these annotations 15:23:42 ... questions are: does that provide a reasonable way of testing the spec, and can this be translated into proper tools? 15:24:07 ... we have to be very careful that we are not testing implementations, but we are testing the spec 15:24:25 q+ to disagree a little with ivan about what must be tested... 15:24:58 ack sha 15:24:58 ShaneM, you wanted to disagree a little with ivan about what must be tested... 15:25:01 TimCole: about the scenarios: a suggestion was to have sample resources to be tested, to illustrate the features 15:25:26 ShaneM: we're not testing implementations was said... 15:25:36 ... but each feature must be implemented 15:25:46 ... so we test the implementations implementing the features 15:26:10 aaronpk has joined #annotation 15:26:16 ivan: if my goal would be to test the implementation, I would use automatic tests 15:26:54 ... in this case, we test the specifications, we can ask implementers to send a report, without knowing how they tested their implementation 15:27:39 ShaneM: In the Web annotation spec, there are requirements on a lot of different actors 15:27:44 ... data structure requirements 15:27:48 ... client requirements 15:27:54 ... interpretation requirements 15:27:58 ... server requirements 15:28:15 ... I assume you want to test all of them? 15:28:19 TimCole: I think so 15:29:10 ... question was: what does an implementation have to do with an existing annotation to make sure it interprets it correctly? 15:29:29 ShaneM: you don't have behaviour requirements 15:29:40 ... or UI requirements 15:29:52 ... so that makes your testing burden lower 15:30:13 ,,, here, you just have to ensure that consumers of the data format receive the data format intact 15:30:27 ... you can ignore SHOULDs for testing purposes 15:30:41 ... I would focus on the MUSTs and MUST NOTs 15:31:27 +1 for Tim 15:31:41 ... Talking about MUSTs 15:31:59 ... there's some data structural requirements, which come for free via JSON-LD 15:32:16 ... so testing conforming output is probably kind of manual 15:32:34 ... e.g. these N selection annotations needs to be tested 15:32:51 ... you don't want to test if the region makes sense or the CSS selector is correct etc. 15:33:35 ivan: Let's say that we describe a scenario: here's an SVG file, we want to put an annotation on this circle on the upperhand corner 15:33:52 ... the resulting annotation structure should correspond with an annotation put on that corner 15:34:09 ... in the output, we assume an SVGSelector to the correct corner 15:34:26 ... so we need to check for correct JSON-LD, and correct to our spec (i.e., that it's an SVGSelector) 15:34:43 ... but we don't have to check that the SVGSelector actually selects the correct target? 15:34:58 ShaneM: you go into that dept, but I'm not sure it's required 15:35:11 ... because there are a lot of ways of specifying that region 15:35:14 s/dept/depth 15:35:36 ... suppose you have a server, he's going to analyze that annotation, but it's hard to analyze every detail 15:35:44 ... you would need an SVGrenderer 15:35:56 ... you could do that manually, but that very consuming 15:36:10 ivan: I was considering human testing, but that's very time consuming 15:36:15 q+ 15:36:29 ShaneM: I always look at this as: what can we automate? 15:36:37 ivan: the only advantage is that the model is relatively simple 15:36:48 ... we don't have a huge number things to test 15:36:52 ack ti 15:37:05 ShaneM: there are some combinatorial things that I have noted 15:37:51 TimeCole: Manual can be very expensive, we thought about: I have a specific scenario: this is the image, this is the exact circle to annotate, and that should limit the number of ways to do it 15:38:10 ... e.g., textQuoteSelectors doesn't have that many ways 15:38:26 ShaneM: depends on what kind of constraints you want to put on the client 15:38:42 ... talking about textRangeSelection 15:38:49 ... you allow for a number of ways to express 15:38:55 ... not all clients will implement all ways 15:39:10 ... And I assume the client decides which expression will be the right way 15:39:22 ... depending on the context 15:39:54 ... do you want to require for test X that the client gives a CSS selector, and test Y gives a textRangeSelector 15:39:58 ivan: that doesn't sound right 15:40:19 ShaneM: another way would be: here's a sample document, select these 5 words 15:40:39 ... the server-side should check: which expression does it use, and is that correct? 15:40:55 ... that way, you simplify the test matrix, without testing every possible combination 15:41:07 ... you can say: the textselector works 15:41:16 ivan: it can happen that one of the selectors is never used 15:41:24 mete_pinar has joined #annotation 15:41:29 mete_pinar has left #annotation 15:41:37 ... because another combination of selectors always works for the given scenarios 15:41:41 mete_pinar has joined #annotation 15:42:03 ... does that mean that that selector shouldn't be in the spec 15:42:17 ShaneM: you could say it becomes a feature at risk 15:42:25 ... or, it could be part of the testing cycle 15:42:57 ... e.g., are there ways to require the feature 15:43:30 TimCole: one extra thing before the end of the call 15:43:47 ... who on the WG could help to build the list of features? 15:44:08 ... someone (pref. not the editors) should identify the features in the docs 15:44:16 ... certainly the MUSTs, maybe the SHOULDs 15:44:33 ivan: we have to be careful to minimize people's time 15:44:47 q+ to say that you should also identify if requirements are on a repository vs. a generator vs. a consumer 15:44:53 ... we have to know in what format to make these scenarios 15:45:05 ack Sh 15:45:05 ShaneM, you wanted to say that you should also identify if requirements are on a repository vs. a generator vs. a consumer 15:45:49 ShaneM: it's useful, I've been doing it, I think one of the critical pieces are checking whether MUSTs and SHOULDs are really MUSTs and SHOULDs 15:46:00 ... and also, what parts of the system these features are one 15:46:09 ... we need to compartimentalize by 15:46:16 ... repository vs. a generator vs. a consumer 15:46:43 ... in terms of interoperability, we should make sure any generator can talk to any repository 15:46:58 ivan: you have implementations not bound to a specific server 15:47:05 ... there are 2 directions of testing: 15:47:14 ... scenario-based 15:47:39 ... and from annotation-structure to interpretation of the implementation 15:48:23 TimCole: I don't know whether the compartiments are already named very well 15:48:50 PaoloCiccarese: I don't have specific testing-ideas 15:48:59 ... I didn't have to test for different servers or clients 15:49:48 ... we changed the specs over time, in my case, we used SPARQL validators for the model 15:49:57 ... and adapted over time 15:49:59 q+ 15:50:41 TimCole: The Annotation Community Group identified 55(?) MUSTs and SHOULDs, and validated semantically using a set of SPARQL queries 15:50:47 ... two for every must and should 15:51:06 ... one to check whether a feature applied, a second to validate 15:51:18 ... but the spec has changed since then 15:51:30 ... and not only semantic anymore 15:51:48 ... there must be three components, but there not defined yet 15:51:51 +1 15:51:56 q+ 15:52:04 ack ivan 15:52:04 ShaneM: I agree, and I would structure the tests that way 15:52:33 ivan: I would be very helpful if Paolo could give a small description of how his implementation was tested 15:52:42 ... it would help me, but maybe also Shane 15:53:06 ... maybe I could ask Takeshi to do something similar 15:53:56 takeshi: I have some, but I was thinking about modifying the testing files to test for internationalitation 15:54:09 Every little bit helps 15:54:44 ivan: having a good feeling of what is currently being done would be very helpful 15:54:57 ShaneM: every little piece will come together in the whole 15:55:09 ... tests should be as discrete as possible 15:55:21 ... big (i.e., integration of system) tests exist 15:55:55 ... but the small tests show where something goes down 15:56:10 ack paolo 15:56:11 ... e.g., the same scenario needs to be tested for 11 different variables 15:56:39 PaoloCiccarese: critical point would be the republishing of annotations 15:57:04 ... I'm not sure we have the solution to that 15:57:09 ... it will be interesting to test 15:57:15 ... it will be cross-system testing 15:57:34 ... test system a, then system b, then send from a to b, and have certain expectations 15:57:43 ... it's one of the most delicate points 15:57:59 ... duplications of annotations will make things go out of control 15:58:04 ... but it's the web, so it will happen 15:58:25 ... about my current testing: I mostly do code testing, very tailored, lots of RDF 15:58:41 ... so testing is many roundtrips to the triple store 15:59:18 uskudarli has joined #annotation 15:59:28 TimCole: There's a need to talk about what we think are the compartiments (generator, consumer, repository) 15:59:45 ... then, we need to talk about scenarios and features 16:00:10 ... to make some progress before F2F, for next week we might also talk about this testing topic 16:00:11 ivan has joined #annotation 16:00:27 ivan: email discussion is fine at the moment 16:00:55 ShaneM: I'll put issues or send questions to the mailing list 16:01:44 ivan: will you join the WG? 16:01:51 ShaneM: I'll ask, I know the drill 16:02:05 TimCole: [adjourn] 16:02:13 rrsagent, draft minutes 16:02:13 I have made the request to generate http://www.w3.org/2016/03/25-annotation-minutes.html ivan 16:02:17 ... next week: F2F, conformance 16:02:21 ... bye! 16:02:38 trackbot, end telcon 16:02:38 Zakim, list attendees 16:02:38 As of this point the attendees have been Ivan, Frederick_Hirsch, Rob_Sandersion, Rob_Sanderson, Tim_Cole, Benjamin_Young, Jacob_Jett, shepazu, davis_salisbury, Paolo_Ciccarese, 16:02:42 ... Ben_De_Meester, Chris_Birk, TB_Dinesh, Takeshi_Kanai, Randall_Leeds, Dan_Whaley, Susan, Uskudarli, !, Nick_Stenning, Suzan_Uskudarli, 0, Kyrce_Swenson, ShaneM, Cole 16:02:46 RRSAgent, please draft minutes 16:02:46 I have made the request to generate http://www.w3.org/2016/03/25-annotation-minutes.html trackbot 16:02:47 RRSAgent, bye 16:02:47 I see no action items 16:02:47 TimCole has left #annotation