14:30:38 RRSAgent has joined #annotation
14:30:38 logging to http://www.w3.org/2016/03/18-annotation-irc
14:30:48 trackbot, start meeting
14:30:50 RRSAgent, make logs public
14:30:52 Zakim, this will be 2666
14:30:52 I do not see a conference matching that name scheduled within the next hour, trackbot
14:30:53 Meeting: Web Annotation Working Group Teleconference
14:30:53 Date: 18 March 2016
14:31:03 Chairs: Rob_Sanderson, Tim_Cole
14:32:00 Regrets+ Davis_Salisbury, Paolo_Ciccarese, Benjamin_Young, Frederick_Hirsch
14:32:20 Present+ Rob_Sanderson
14:32:50 azaroth has changed the topic to: Agenda: https://lists.w3.org/Archives/Public/public-annotation/2016Mar/0060.html
14:44:52 Regrets+ Randall_Leeds
14:56:32 TimCole has joined #annotation
15:00:58 Present+ Dan_Whaley
15:01:46 nickstenn__ has joined #annotation
15:01:52 Present+ ivan
15:02:04 Present+ Nick_Stenning
15:02:46 bjdmeest has joined #annotation
15:03:05 Present+ Tim_Cole
15:03:19 Present+ Ben_De_Meester
15:06:26 takeshi has joined #annotation
15:06:36 present+ shepazu
15:06:54 Present+ Takeshi_Kanai
15:06:56 scribenick: bjdmeest
15:07:11 TOPIC: 1. Scribe selection, Agenda review, Announcements?
15:07:37 azaroth: also talk (quickly) about the CFC
15:08:02 ... if there are no large outstanding issues, let's issue a CFC within one week
15:08:07 ... any announcements?
15:08:16 timcole: meeting next week?
15:08:24 I will be present
15:08:29 I will be
15:08:41 I can be present
15:09:06 I will be present
15:09:28 TimCole: is there any specific topic to work on?
15:09:33 azaroth: testing would be the big one
15:09:52 TimCole: I'm happy to host a meeting about that next week
15:10:13 ... how about I send an e-mail about that call?
15:10:17 azaroth: perfect
15:10:29 TOPIC: Minutes approval
15:10:52 PROPOSED RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/03/11-annotation-minutes.html
15:11:13 RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/03/11-annotation-minutes.html
15:11:20 TOPIC: Issue CFC for publication of Model, Vocab and Protocol
15:11:58 azaroth: we would like to have a CFC to publish the docs
15:12:07 ... especially the vocab needs more text, and diagrams to be done
15:12:30 ... but to get feedback about the technicalities, more visibility would be good
15:12:43 ... we can do a one-week CFC via email
15:13:00 ... any concerns about that?
15:13:14 q+
15:13:20 ack takeshi
15:13:22 ack TimCole
15:13:23 ... any issues that need to be addressed beforehand, except for ivan's comments?
15:13:39 timcole: we don't have a vocab doc on W3C yet, I think
15:13:52 ivan: that was an index mistake, I changed that
15:14:16 http://w3c.github.io/web-annotation/model/wd2/
15:14:26 http://w3c.github.io/web-annotation/vocab/wd/
15:14:38 http://w3c.github.io/web-annotation/protocol/wd/
15:14:54 ivan: (these are the URLs used for the CFC)
15:15:09 ... these dates are close to this date
15:15:32 ... these three documents would become the next versions of the /TR WDs
15:15:51 timcole: the drafts should be updated quickly
15:16:05 ivan: it's very timely that we have these things published
15:16:13 azaroth: we don't have a shortname for the vocab yet
15:16:20 ivan: we need to have that
15:16:54 ... ideally, we propose the shortname in the CFC as well
15:17:10 ... the final resolution e-mail should also have that shortname
15:17:19 azaroth: is annotation-vocab ok?
15:17:20 +1
15:17:26 +1
15:17:29 ... seems consistent with the other shortnames
15:17:34 +1
15:17:34 +1
15:18:43 ivan: on timing: the restriction I have is that I am around the week of the 28th of March; for the two weeks after that, I am away
15:18:58 ... I propose we try to get this published on Thursday the 31st
15:18:59 +1 to 2016-03-31
15:19:22 ... the editor should prepare by going through all the checkers (e.g., link checkers)
15:19:34 ... so there won't be any last-minute editorial problems
15:19:45 azaroth: I did it before and I'll do it again
15:20:17 ... the model and protocol docs point to the vocab spec, but there is no publication yet
15:20:26 ivan: you can have a local bibliography in ReSpec
15:20:56 ... the local bibliography should include all three documents
15:21:01 ... I'll send you an example
15:21:29 azaroth: by the time we get through CR, we won't need a local bib?
15:21:44 ivan: these versions should always be dated URIs, I think
15:22:01 ... it always puts the date of the publication there, but that would be wrong
15:22:08 ... we have to keep updating that until the REC version
15:22:12 azaroth: ok
15:22:23 ... any other thoughts?
15:22:47 [crickets]
15:23:32 azaroth: last week, we talked about what we could ask the clients and servers to conform to
15:23:55 ... i.e., core model vs support for different types of selectors
15:24:06 shepazu: so the profiles thing?
15:24:20 azaroth: yes, also syntactic vs semantic vs UI testing
15:24:32 ... also about the tools to test with
15:24:43 ... and about the W3C rules about what needs to be tested
15:24:58 shepazu: usually testing depends on conformance
15:25:17 q+
15:25:20 ivan: the question is: for the three things (vocab, model, protocol), what do we want to test?
15:25:26 ... profiles etc. are separate
15:25:44 ... how was the LDP protocol tested?
15:26:02 azaroth: LDP had a test kit that you could download and test against your implementation
15:26:13 ... it would do the various methods
15:26:18 ... and generate a report
15:26:35 ivan: so a separate program (client-side) that tested the server?
15:26:50 azaroth: yes, rather than a module incorporated into the server
15:27:06 ... I can ask our developer who used it about his experience
15:27:11 ack shepazu
15:27:15 ivan: for the protocol, that seems reasonable to me
15:27:21 q+
15:27:45 shepazu: Chris and I talked about testing the model
15:27:54 ... a validator would not be sufficient
15:28:27 ... simply saying 'this instance validates against the model' is not enough; that's testing some content
15:28:58 ... we talked about creating a test page, i.e., the target of the annotation
15:29:15 ... the annotation client does the steps to create an annotation (manual or automatic)
15:29:22 ... that annotation is sent to a toy annotation server
15:29:39 q+
15:29:43 ... downloaded again into the client, and validated there, to check whether the structure fits the model
15:30:01 ... that would be 1) reusable, and 2) actually test the client, not only validate
15:30:01 ack TimCole
15:30:14 timcole: that makes a lot of sense
15:30:25 ... but it conflates the testing of the protocol and the model
15:30:49 shepazu: it doesn't matter how the annotation is published to the website; what matters is that it is published
15:31:04 ... e.g., in CSS, they make a baseline
15:31:58 ... if the client could simply give the annotation to other client-side code, that would also work, but it would violate the principle that the client is inaccessible to other clients
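[Editor's note: a minimal sketch, in Python, of the round-trip test shepazu describes: the client posts the annotation it created to a toy annotation server, fetches it back, and checks that the structure fits the model. The server URL, the sample annotation, and the specific checks are illustrative assumptions, not an agreed design.]

    import json
    import urllib.request

    TOY_SERVER = "http://localhost:8080/annotations/"  # hypothetical toy annotation server

    # An annotation the client under test has just created against the test page.
    annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": {"type": "TextualBody", "value": "A comment", "format": "text/plain"},
        "target": "http://example.org/test-page.html",
    }

    # Step 1: send the annotation to the toy server.
    req = urllib.request.Request(
        TOY_SERVER,
        data=json.dumps(annotation).encode("utf-8"),
        headers={"Content-Type": "application/ld+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        location = resp.headers["Location"]  # URI the server assigned to the annotation

    # Step 2: download it again and check whether the structure fits the model.
    with urllib.request.urlopen(location) as resp:
        stored = json.loads(resp.read().decode("utf-8"))
    assert stored.get("type") == "Annotation"
    assert "body" in stored and "target" in stored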
15:32:34 (LDP Implementation Report: https://dvcs.w3.org/hg/ldpwg/raw-file/default/tests/reports/ldp.html )
15:32:57 timcole: do we need something like a report on how a server reacts to the protocol?
15:33:11 ... and for the client: can it create and consume correct annotations and use them?
15:33:28 ... so, can the client also recognize incorrect annotations?
15:33:32 And the test suite: http://w3c.github.io/ldp-testsuite/
15:33:51 ... I'm worried about not having a method specified for checking the generated annotations
15:34:00 q?
15:34:04 ... the process of sending it somewhere could miss errors
15:34:07 ack ivan
15:34:19 ivan: two things:
15:34:40 ... first: not all implementations that we may want to test are such that you can set up a web page that easily
15:34:57 ... if the goal is to annotate data on the web, then I don't see what the web page is
15:35:23 ... I wouldn't restrict ourselves to only web-page-based implementations
15:36:02 ... second: let's say we have that: the annotation system produces the relevant structure, and we have to test whether that structure is right or contains mistakes
15:36:28 q+ to ask Dan about AAK as potential audience
15:36:36 ... what we did for RDFa is that something happens and it produces a clear structure
15:36:47 ... for each of the tasks, we have the pattern that must be generated
15:37:05 ... and an automatic procedure that can compare results
15:37:21 ... that requires that the annotation system can dump the structure into an external file
15:37:58 ... if we can't do automatic testing, we give the annotation system some tasks and compare the output structures with what we expect should happen
15:38:08 ... I don't know whether we can do that automatically
15:38:36 shepazu: manual testing is more time-consuming, and doesn't work well within our testing framework, but it might be inevitable
15:38:47 ... i.e., the W3C testing framework
15:38:49 ack azaroth
15:38:49 azaroth, you wanted to ask Dan about AAK as potential audience
15:38:56 ivan: that's not for all types of working groups
15:39:31 azaroth: Dan: about the AAK as a potential audience: what would be valuable, as testing implementations?
15:39:42 dwhly: it's a little premature for now
15:39:59 ... the coalition is currently a group of publishers, not software writers
15:40:04 q+
15:40:45 ... I think that's most useful for the upcoming F2F: what are the use cases for annotation, and how do those use cases articulate in an interoperable annotation layer?
15:41:10 ack shepazu
15:41:11 ... the technical people can triage the use cases to see what works with what the W3C is doing, and what does not
15:42:01 shepazu: in similar situations, W3C has seen that validators are useful if other people want to use the W3C annotation work
15:42:09 ... seeing that they are doing the output correctly
15:42:18 ... a validator would be a necessary component
15:42:35 q?
15:42:37 q+
15:42:40 ack TimCole
15:42:40 ... if other people want to use the Web Annotation model
15:43:05 q+
15:43:15 q+ to +1 tasks on sample resources
15:43:22 timcole: so a single page won't be enough: shouldn't we identify a set of annotation tests
15:43:57 ... and see whether a client can generate a file which can be validated and checked for errors
15:44:38 ... I would like to see whether we can identify all the test cases that are in the model
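[Editor's note: a rough sketch, in Python, of the RDFa-style automatic comparison ivan describes: each task has an expected pattern, and the structure the implementation dumps to a file is compared against it. The file names and the set of keys ignored during comparison are assumptions for illustration.]

    import json

    # Values that legitimately differ per implementation (assumed; would need group agreement).
    IGNORED_KEYS = {"id", "created", "generator"}

    def matches_pattern(produced, expected):
        """True if `produced` contains everything the expected pattern requires."""
        if isinstance(expected, dict):
            return isinstance(produced, dict) and all(
                key in produced and matches_pattern(produced[key], value)
                for key, value in expected.items()
                if key not in IGNORED_KEYS
            )
        if isinstance(expected, list):
            return isinstance(produced, list) and all(
                any(matches_pattern(item, e) for item in produced) for e in expected
            )
        return produced == expected

    # The implementation dumps its structure into an external file, which is
    # compared with the pattern that must be generated for this task.
    with open("task01-output.json") as out, open("task01-expected.json") as exp:
        print("PASS" if matches_pattern(json.load(out), json.load(exp)) else "FAIL")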
15:44:43 q+
15:45:03 shepazu: each test would have its own page, and that test would also contain the passing criteria
15:45:42 timcole: is it feasible, given that we have different kinds of objects and different kinds of implementations, and some implementations would only pass a part of the annotation tests?
15:45:51 shepazu: that's about different conformance classes
15:46:25 ... if your annotation client only works with, e.g., image objects, we test that client only against the relevant test cases
15:46:44 ... W3C doesn't actually test implementations, it tests the implementability of the specifications
15:47:12 ... i.e., this feature of this spec was implemented interoperably by two or more user agents
15:47:50 ... if a feature does not have two passing implementations, it is marked at risk and possibly removed
15:48:11 ... until we have two passing implementations, or we move the spec forward without that feature
15:48:32 timcole: I expect not to find two implementations that implement all features
15:48:47 ack ivan
15:49:01 shepazu: you don't need that; it could be some combination of different clients
15:49:15 timcole: good, my first question was: can we find all the test cases
15:49:52 ivan: [about the CSV working group]: each test had a scenario and a data file
15:50:05 ... the implementation had to produce something (JSON, metadata, etc.)
15:50:11 ... each of these tests was run separately
15:50:32 ... each implementation had to validate itself and return the results in some accepted format
15:50:54 ... about 350 different use cases
15:51:01 ... to cover the various features
15:51:13 ... if we have a structure like that, we need a certain number of scenarios
15:51:38 q?
15:51:41 ack azaroth
15:51:41 azaroth, you wanted to +1 tasks on sample resources
15:51:59 azaroth: I also like the idea of setting up a series of defined tasks
15:52:17 ... possibly downloaded, tested, and uploaded again, or done online if possible
15:52:37 ... the question is that we would need a group of people to implement the testing framework
15:52:40 q?
15:52:50 ack nickstenn__
15:53:13 nickstenn: what would the tests for the model look like?
15:53:39 ... actually, does this mean processing annotation data like clients would do in the real world?
15:53:51 ... but the model is about the semantics of the model, not about implementations
15:54:03 +1 to not testing UA behavior on consumption of an annotation
15:54:09 q+
15:54:13 ... how could we possibly create a testing framework that would work for all user agents?
15:54:29 ack TimCole
15:54:36 q+
15:54:43 ... I think about giving out a set of annotations and asking implementers: can your client interpret them correctly?
15:55:08 q+ to note difference between creation vs consumption tests
15:55:16 timcole: the thing is that someone somewhere could implement a tool that could generate, e.g., a selector
15:56:06 ... secondly, could our testing framework distinguish between correct and incorrect usage of a feature?
15:56:17 ack ivan
15:56:29 ... can a tool recognize the difference between a correct and an incorrect annotation?
15:57:00 ivan: let's say we set up an HTML page
15:57:17 ... and describe in human terms: this is the annotation the user is supposed to make: select and comment
15:57:21 q?
15:57:38 ... the implementer would have to perform this task, and internally, you would have to build up the annotation via the annotation model
15:57:59 ... and the implementation needs to dump the model into, e.g., a JSON file
15:58:10 ... then the implementation shows that the model can describe that action
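[Editor's note: a sketch of what such a JSON dump might look like for the "select and comment" task, using terms from the model draft (TextualBody, TextQuoteSelector); the id, URLs, and text values here are placeholders, not part of any agreed test.]

    {
      "@context": "http://www.w3.org/ns/anno.jsonld",
      "id": "http://example.org/anno1",
      "type": "Annotation",
      "body": {
        "type": "TextualBody",
        "value": "The user's comment on the selection",
        "format": "text/plain"
      },
      "target": {
        "source": "http://example.org/test-page.html",
        "selector": {
          "type": "TextQuoteSelector",
          "exact": "the selected words"
        }
      }
    }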
15:58:47 ivan: the other direction is that we provide annotation structures and see whether those structures can be understood by implementations
15:59:38 q-
15:59:38 ... it would be interesting to understand how current annotation clients are tested
16:00:01 nickstenn: Hypothesis tests on a granular level
16:01:00 ivan: this is a question for the other implementers as well
16:01:08 ... we need a feeling for what is realistic
16:01:19 azaroth: we only test syntax, not semantics
16:01:30 ... we don't test the interaction
16:01:49 ... if there are any comments about testing: put them on the mailing list
16:01:52 ... and we continue next week
16:01:59 sounds good to me
16:02:09 azaroth: adjourn
16:02:26 Thanks to bjdmeest for scribing! :)
16:02:50 trackbot, end telcon
16:02:50 Zakim, list attendees
16:02:50 As of this point the attendees have been Ivan, Frederick_Hirsch, Rob_Sanderson, Tim_Cole, Benjamin_Young, Jacob_Jett, shepazu, davis_salisbury, Paolo_Ciccarese,
16:02:53 ... Ben_De_Meester, Chris_Birk, TB_Dinesh, Takeshi_Kanai, Randall_Leeds, Dan_Whaley, Nick_Stenning, Suzan_Uskudarli, Kyrce_Swenson
16:02:58 RRSAgent, please draft minutes
16:02:58 I have made the request to generate http://www.w3.org/2016/03/18-annotation-minutes.html trackbot
16:02:59 RRSAgent, bye
16:02:59 I see no action items