See also: IRC log
TimCole: we are going to try to resolve all issues except for 2, which we will do this afternoon
azaroth: we closed some of the 'new' issues
... #223
... that was intentional
... in JSON-LD, if you were to associate a language, it would look like a
resource
... which would be confusing
... for #220: how a client handles request headers is not our concern
... the client does not have access to response headers
... javascript doesn't allow it
... for security reasons (cookies etc.)
... about #219: you can always add annotations to a collection
... default would have very little value
TimCole: bodies and targets may have languages, the annotation itself does not have a language
<azaroth> The issues we closed: https://github.com/w3c/web-annotation/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3Ai18n-review+is%3Aclosed
azaroth: the link is about the issues we closed yesterday
r12a: when the annotation has no language or direction, doesn't an annotation have some text?
TimCole: the model separates the body (the content) from
the annotation itself
... the structure is more a description of the content than the body
itself, the body may be embedded or referenced
... the body can be a resource without properties, we also have some
basic properties, all optional
... the creator can always add additional properties from different
vocabularies
r12a: something like this additional info should be added to #209
<nickstenn> fyi, r12a, an annotation as an object is a *relationship* between a body and a target
<nickstenn> bodies/targets may well have language/direction, but annotations themselves do not
r12a: so that everyone can understand
TimCole: good point
... so, about #223...
azaroth: #223 is specifically about the string body, that
must be just a string
... in rdf and JSON-LD, you can associate a language to a string, but
for JSON-LD, you then get an object
... the point of bodyValue is to have purely a string
... we don't need an object for bodyValue, that is already taken into
account elsewhere (the body object)
TimCole: bodyValue was added for 'the simplest case': no additional properties
nickstenn: for clarity, could we add a parenthetical: if your use case needs additional properties, use 'this structure'
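The distinction under discussion, `bodyValue` as a bare string versus a body object that can carry a language, can be sketched as follows. The JSON shapes follow the model's `bodyValue` and `TextualBody` patterns; the target URL and text values are illustrative only.

```python
import json

# Simplest case: bodyValue is purely a string -- no language, no direction.
simple = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "bodyValue": "Comment text",
    "target": "http://example.org/page1",   # illustrative target
}

# If the use case needs additional properties (e.g., a language),
# use a body object instead of bodyValue.
with_language = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "Commentaire",
        "language": "fr",
    },
    "target": "http://example.org/page1",
}

# bodyValue must stay a plain JSON string, never an object.
assert isinstance(simple["bodyValue"], str)
assert with_language["body"]["language"] == "fr"
print(json.dumps(with_language, indent=2))
```

This is why the group keeps `bodyValue` string-only: the moment a language is attached, JSON-LD turns the value into an object, which is exactly what the body-object form already handles.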
r12a: if I wanted to make annotations, I would take the
easiest approach, and would end up with lots of annotations without
language
... so I couldn't display them properly
azaroth: indeed
ivan: e.g., the CSV WG uses the annotation structure to add
annotations to CSV metadata
... for systems that do annotations in 'isolation', it's not useful, but
for systems that have context, this system might be useful
TimCole: people will abuse this, that's likely to happen
... the consensus was that we should allow this
<azaroth> Spec ref: https://www.w3.org/TR/annotation-model/#string-body
TimCole: partly because people would do it anyway
<azaroth> And commenting is 5th requirement bullet
r12a: this is the reason why I added the issue of adding language to a collection of stuff
TimCole: well, you may be talking about the language of the body, the target, multiple bodies (each having a distinct language)... etc
<Zakim> azaroth, you wanted to discuss rdf and inheritance
r12a: but there, you must provide the language information
... I was talking about a default
... like in HTML
azaroth: the issue about RDF and inheritance of properties is
tricky
... 'for all annotations, for all bodies, for dc:language...' etc.
ivan: the mapping onto RDF is hard
... you want 'all the literals should be of language X', which is not a
concept RDF has
azaroth: r12a, thanks for raising the issues
r12a: about #218: does a person have to assign a language every time they create an annotation?
azaroth: language is not required
TimCole: the language could be figured out by the client
azaroth: how the language gets assigned is an implementation detail, just like a request header
TimCole: do we need changes in the spec for this?
azaroth: there is a note we could extend
<fsasaki> note discussed is in this section https://www.w3.org/TR/annotation-model/#external-web-resources
r12a: when reviewing the model spec, I was very happy to see all the examples, extremely helpful
azaroth: about #211: the agreement was that we would
specify the intended audience from the annotation perspective, it would
be up to schema to add a property to say 'a person that understands
language X is a member of this audience'
... we would recommend the audience property of schema.org
TimCole: it avoids us needing to do audience description, which is not in our scope
<ivan> https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+label%3Ai18n-review+label%3Aeditor_action+
ivan: we also have a list of 'editor' issues
... they are editor actions; once done, they are considered closed
azaroth: BCP47 changes to keep up to date
... can we normatively refer to it?
... BCP47 is not versioned
r12a: BCP47 is a stable hook, created because people kept referring to out-of-date RFCs
... it exists so that specifications stay up to date
ivan: we have precedent: a W3C Recommendation already refers to it, so I think we can close the issue with that
r12a: lots of specs refer to BCP47 normatively
azaroth: for #225, it's fine to continue using dc:language, but add a note: 'we require BCP47'
TimCole: because dc:language doesn't preclude that
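The point here is that `dc:language` itself does not mandate any tag syntax, so the spec note would pin values to BCP47. As a very rough sketch only, the following checks a few tags against a simplified language-script-region shape; real BCP47 (RFC 5646) allows far more (variants, extensions, private-use, grandfathered tags), so this regex is illustrative, not a conformance checker.

```python
import re

# Simplified well-formedness sketch: primary language (2-3 letters),
# optional 4-letter script, optional 2-letter or 3-digit region.
# NOT full BCP47 -- just enough to illustrate the kind of constraint.
TAG = re.compile(r"^[a-zA-Z]{2,3}(-[a-zA-Z]{4})?(-[a-zA-Z]{2}|-\d{3})?$")

for tag in ["en", "en-GB", "he-Latn", "ja-JP"]:
    assert TAG.match(tag), tag

print("all tags well-formed")
```

A real implementation would use a proper BCP47 parser rather than a regex like this.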
azaroth: #216 and #215 are accepted, we would require UTC
TimCole: there was a comment that W3CDTF is more flexible
<ivan> https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+label%3Ai18n-review+-label%3Aeditor_action+
<nickstenn> feedback was on #217: https://github.com/w3c/web-annotation/issues/217#issuecomment-219939781
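The resolution on #216/#215 (require UTC for xsd:dateTime values) can be illustrated like this; the timestamp value is an arbitrary example. W3CDTF's extra flexibility, which TimCole mentions, is that it also permits non-UTC offsets.

```python
from datetime import datetime, timezone

# Produce a `created` timestamp as xsd:dateTime in UTC.
# The trailing "Z" (equivalent to "+00:00") marks UTC.
now = datetime(2016, 5, 18, 9, 30, 0, tzinfo=timezone.utc)
stamp = now.isoformat().replace("+00:00", "Z")
assert stamp == "2016-05-18T09:30:00Z"
print(stamp)
```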
azaroth: about #210: logical order is way better than visual order
<ivan> Pending issues for i18n: https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+label%3Ai18n-review+-label%3Aeditor_action+is%3Aopen
azaroth: about #224 (base direction)
r12a: we aren't talking about language, we are talking about direction
azaroth: the example is not correct, but can we require HTML when bidirectional text is required, instead of importing it?
r12a: so require HTML for all arabic, hebrew, etc?
azaroth: only if it is bidirectional, right? doesn't a single language determine the direction?
r12a: not necessarily, hebrew could be in latin script
<azaroth> Isn't that ar-latn vs ar-somethingelse ?
r12a: the kinds of values you need to handle direction aren't the same as for language
fsasaki: the remark was not on the language, but on the unicode characters themselves
r12a: we have some hebrew and latin, but W3C needs to be
placed on the left-hand side; you know which are the hebrew characters,
but you don't have an idea about the base direction
... there's hebrew and latin text
... an algorithm could put the hebrew text and latin text separately in
the correct order, but cannot put the latin text to the left-hand side
of the hebrew side without a base direction
... you have 2 runs (1 hebrew + 1 latin), and 1 base direction
ivan: is it enough to have something like 'direction: ltr'?
azaroth: how many values are there? rtl and ltr?
r12a: you may have a value 'auto' (determine the direction based on the first strong character)
ivan: I would propose: add this term to the vocabulary, with two terms 'ltr' and 'rtl'
fsasaki: this is only relevant for textual body, right?
azaroth: it could be plain text as a resource
... should we define rtl and ltr as URIs?
ivan: it should be put in the context
<azaroth> PROPOSED RESOLUTION: Add a `direction` property to the vocabulary, to be associated with any content resource (body or target) with two possible values, rtl and ltr (in JSON-LD) and define URIs to identify the concepts
ivan: is it safer to refer back to the HTML doc? in this case, auto could also be used
<azaroth> PROPOSED RESOLUTION: Add a `direction` property to the vocabulary, to be associated with any content resource (body or target) with three possible values, auto, rtl and ltr (in JSON-LD) and define URIs to identify the concepts. Refer back to HTML5 document for the definitions.
r12a: if auto is the default, it may catch a lot of cases
<azaroth> +1
<ivan> +1
<bigbluehat> +1
<fsasaki> html reference is https://www.w3.org/TR/html5/dom.html#the-dir-attribute
<TimCole> +1
+1
<nickstenn> +1
<takeshi> +1
RESOLUTION: Add a `direction` property to the vocabulary, to be associated with any content resource (body or target) with three possible values, auto, rtl and ltr (in JSON-LD) and define URIs to identify the concepts. Refer back to HTML5 document for the definitions.
<tbdinesh> +1
<azaroth> Clarified that this is the same as where Language property is appropriate
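The resolution can be sketched on a textual body like this. The property name and the values `ltr`/`rtl`/`auto` come from the resolution itself; the exact JSON-LD key spellings remain illustrative until the editors update the context. The sample text is the classic bidi example ("W3C internationalization activity" in Hebrew plus a Latin run).

```python
# Sketch of the resolved `direction` property on a textual body.
# "auto" means "determine from the first strong directional character",
# as in HTML's dir attribute, to which the resolution refers.
body = {
    "type": "TextualBody",
    "value": "פעילות הבינאום, W3C",   # Hebrew run plus a Latin run
    "language": "he",
    "direction": "rtl",               # base direction: "W3C" renders on the left
}

assert body["direction"] in ("ltr", "rtl", "auto")
print(body["direction"])
```

Without the `direction` value, a consumer knows which characters are Hebrew but, as r12a notes above, cannot tell where to place the Latin run relative to them.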
azaroth: about #222
r12a: when you say 'normalization', you mean more than
unicode character normalization, but you include it
... what we are saying is that if you get a piece of text in
non-normalized form
<azaroth> spec ref: https://www.w3.org/TR/annotation-model/#text-position-selector
r12a: and you want to establish a range by counting
characters, you shouldn't normalize the target document
... there are reasons why people don't put something in, e.g., NFC
... however, for text/string matching, you need normalization
... there was a time when we said everything should be normalized, but
that time has passed
nickstenn: specifically, for the text position selector, we agreed
that offsets count code point sequences
... that doesn't mean normalizing the text, but understanding what the
normalized version would be
<Zakim> azaroth, you wanted to ask re DOM manipulation
azaroth: the whitespace normalization would be very hard to undo if you're in a browser context, you don't have the raw whitespace
TimCole: It's a hard problem, but I'm not sure there's a change we need to make
r12a: I was talking about unicode normalisation
... you could encode e-acute using 4 codepoints vs 3 codepoints
nickstenn: let's say we have the same content in two
targets, but the unicode normalization is different
... we want a text position selector to be useful in both targets
... whatever a user agent allows a user to do, you would still only be
able to select grapheme clusters
r12a: let's say, we start our selection 34 characters from
the beginning of a paragraph
... depending on the normalization, we have 33 or 34
nickstenn: we need more discussion, there are cases where we need to normalize before selection
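The offset problem r12a describes is easy to demonstrate: the same visible string has different code-point lengths (and therefore different selector offsets) depending on its Unicode normalization form.

```python
import unicodedata

# "é" is one code point in NFC (U+00E9) but two in NFD
# ("e" + combining acute, U+0301). A TextPositionSelector that counts
# code points therefore lands on different characters depending on
# which form the target document happens to use.
nfc = unicodedata.normalize("NFC", "résumé")
nfd = unicodedata.normalize("NFD", "résumé")

assert len(nfc) == 6   # r, é, s, u, m, é
assert len(nfd) == 8   # r, e, ́, s, u, m, e, ́

# The final "é" starts at offset 5 in NFC but offset 6 in NFD.
assert nfc[5] == "\u00e9"
assert nfd[6] == "e" and nfd[7] == "\u0301"
```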
ivan: can we say that, by default, everything works with code
points, and it has to be consistent
... and we introduce a separate flag to say explicitly that we don't
normalize
... I think, in 90% of the cases, normalization is the right choice
TimCole: we have to test implementations
... will they likely be normalized?
nickstenn: there are two layers: javascript doesn't allow
an easy way of counting code points
... and there's the question of counting code points vs counting
normalized code points
ivan: seeing an e-acute
nickstenn: you count the code points of the document
... it is either very interoperable in principle but hard in practice,
or vice versa
fsasaki: talking about trans-format documents
... you cannot enforce normalization from the user's perspective
ivan: we could make an explicit case if necessary
r12a: if I'm referring to a target containing 'résumé' (with
acutes), and that target is copied somewhere
... if I am an implementation trying to find the position of the 's', it
may not be problematic to normalize the text
fsasaki: in the IPA case, you don't want your application to do normalization
r12a: I want to keep the text as I have written it, but don't mind the normalization for annotations
nickstenn: we assume we don't alter the document you are
annotating
... we might copy a part and normalize that
r12a: great
nickstenn: so we need a note: please clients, don't alter the current DOM
ivan: so we can close #222?
nickstenn: I'm adding a comment
azaroth: about normalizing whitespace: we didn't mention
anything about normalization
... #206
r12a: that's about a different question
... not about normalization
ivan: it is, because for a user, the 's' is the third
character of 'résumé' (with e-acute etc.)
... the text quote selector is a user-controlled selector
nickstenn: there are two layers: pay attention to code
points vs encoding
... and pay attention to normalized vs non-normalized code points
... they are separate
<azaroth> Nick's comment: https://github.com/w3c/web-annotation/issues/222#issuecomment-219958840
TimCole: you could comment on that, we can revisit if necessary
r12a: that captures protecting the original document
... about normalizing the text for text quote selector...
ivan: that's #206
azaroth: #221 is about 'normalizing unnecessary whitespace'
... but there is no separate spec
r12a: whitespace trimming is the issue
... the problem is that it's not really defined here
ivan: but there is no such specification?
fsasaki: in XPath 2.0 there is a regex function using unicode
character classes, defining what is whitespace and what isn't
... interoperability of whitespace handling across technologies is 'hard'
<fsasaki> (the xml schema list of whitespace: http://www.xmlschemareference.com/regularExpression.html#MultipleCharacterEscape )
<fsasaki> (white space in xml https://www.w3.org/TR/REC-xml/#sec-common-syn )
<ivan> https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+label%3Ai18n-review+-label%3Aeditor_action+is%3Aopen
<bjdmeest_> scribenick: bjdmeest_
fsasaki: one of the reasons whitespace trimming is hard is that the whitespace handling of javascript is different from HTML's
ivan: we were hoping to get a simple reference to use as a
normative reference
... or to the xpath thing
... I think everyone would be fine with that
nickstenn: as long as I can use it in the DOM
azaroth: about #217
... we talked about using xsd:datetime and UTC
ivan: so we can close with ref to #216
azaroth: about #213
... more than one language is hard for text processing
... question is: do we allow 0, 1, but not n languages?
r12a: question is: why would you provide a language property?
azaroth: e.g., picking which body of a Choice annotation to display, based on language
r12a: we call this the metadata language
... you still probably want one language, but might have a situation
where two different audience groups read the same thing
... so using lang-property for that is fair enough
... we need to know the language of the actual text
<Zakim> azaroth, you wanted to ask about "This is hello in French: bonjour"
r12a: question is: can this language property do both of these tasks?
azaroth: is there a single language tag for "This is hello
in French: bonjour"? I would say english and french
... picking one language, I would take english, but there are multiple
language tokens
r12a: actually, you need to specify language for a part of a text for some cases
azaroth: proposal: reduce to 0..1, multiple languages within one text would require HTML with xml:lang attributes
r12a: if you have, e.g., japanese and french, the language
property could say 'this is japanese and french'
... you need to visualize that properly (e.g., use the correct font)
ivan: we are mixing different things; we should not solve this
issue on our own
... we can fallback to formats that have means to describe these
advanced cases, e.g., xml and html
r12a: the language property can have at most one value (0..1), so you can use that as the default for text processing
TimCole: someone who includes french and spanish either
uses html, or indicates one language
... we could add a note: 'you may use multiple languages, but it's better
to use advanced formats'
... MAY means doing it at your own risk, so that's fine
azaroth: so leave as is, but further explain best practice in the note
r12a: many people don't understand the difference between
text-processing and metadata language properties
... you will get something that is marked up with multiple languages, and
you won't know how to process that
... so I would prefer one language tag max
<ivan> My proposal for solution: Keep the functionality, but add an editorial comment on what the MAY can be used for (and using eg, XML or multiple bodies, for more complex cases).
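The two options under discussion can be put side by side: a single language tag (sufficient for text processing) versus falling back to an HTML body when the text genuinely mixes languages. The values below are illustrative; the mixed example reuses azaroth's "This is hello in French: bonjour" case from earlier in the discussion.

```python
# Option 1: one language tag -- unambiguous for rendering.
single = {
    "type": "TextualBody",
    "value": "Don Quijote",
    "language": "es",
}

# Option 2: mixed-language text -- let HTML carry per-run language info,
# as the proposed editorial note would recommend.
mixed = {
    "type": "TextualBody",
    "value": '<p lang="en">This is hello in French: '
             '<span lang="fr">bonjour</span></p>',
    "format": "text/html",
}

assert isinstance(single["language"], str)
assert 'lang="fr"' in mixed["value"]
```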
TimCole: we wouldn't recommend multiple languages, but we
wouldn't disallow them
... e.g., when the title of a book contains one token in another language,
you usually just mark the title up as one language
ivan: I would turn this into editorial action
<azaroth> +1 to Ivan
ivan: e.g., you have books with two main languages, then you should be able to mark that up, and it would be overkill to use HTML tags for that
azaroth: could we just have two properties?
ivan: it is metadata, it doesn't claim to be more than that
nickstenn: my assumption is that these annotations need to be rendered, to be rendered correctly, we need text processing metadata
ivan: which wouldn't be a problem for spanish vs catalan
... if there would be a problem (e.g., french and japanese), then we
need more advanced markup, and that would be in the note
r12a: there are text processing problems, even with spanish
vs catalan
... you do need to render this stuff, so you need to know the default
language
fsasaki: so for text processing, we use html
TimCole: what will users do? leaving MAY in will lead to
abuse
... leaving MAY out, users will put in only one language
... which risk is worse?
nickstenn: if you mix languages, it's complicated; there
are no simple cases
... we should handle things as they are handled in, e.g., xml and html
... and not create a simple case that breaks this
fsasaki: if you have to copy a whole catalog (e.g., multiple bodies), that's not efficient
ivan: we need text-only annotations with several languages
r12a: I understand the need for the metadata, but why copy the whole catalog?
azaroth: if I want to search for a book, that is both french and italian, you need 2 annotations, once using french, once using italian
TimCole: that won't happen
... maybe we should wait for your i18n meeting?
r12a: we can do an extra meeting with you guys
<fsasaki> ( some background on the i18n metadata topic, discussed in the i18n group : https://www.w3.org/International/wiki/ContentMetadataJavaScriptDiscussion )
r12a: one more thing: the meaning of the language property
is different for the target and the body
... for the target it is metadata; for the body it is more text-processing
related
azaroth: I think #209 is the same as #206, or #221
<ivan> https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+label%3Ai18n-review+-label%3Aeditor_action+is%3Aopen
ivan: so we can close #209
... r12a, these have been closed recently?
<azaroth> link: https://github.com/w3c/web-annotation/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3Ai18n-review+-label%3Aeditor_action+is%3Aopen
[all]: bye r12a, thanks!
TimCole: about #214
<ivan> https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+-label%3Ai18n-review+-label%3Apostpone+-label%3Aeditor_action+is%3Aopen+
ivan: these should be closed
azaroth: I think the #214 items are just editor actions
... and also #227
... about #214
... point 1, I think @context can go anywhere
... point 2, datatype for rights is unclear
TimCole: text-based rights are deemed useless in many communities
azaroth: point 3: cardinalities about rights should be 0..1
nickstenn: you could always have a dual-license as a new URI
azaroth: point 4: for republishing annotations, we only use
an IRI
... point 5 and 6 also
... if no problems, moving on.
nickstenn: about #227
... we should not talk about encoding
... this is about text quote selector
... just going to close it
<nickstenn> For the minutes, from takeshi: the unicode consortium's table of characters that share a codepoint between CJK languages, but must be rendered differently: http://unicode.org/charts/PDF/U3400.pdf
TimCole: our goal is to talk about testing
TimCole: plan is 10-15m introduction about what's in
progress
... and then bigbluehat will demo what we've been up to with Shane
... we'll spend another ~1h after lunch on testing
... any objections? *crickets*
... Here are the tests we've been working on https://github.com/Spec-Ops/web-platform-tests/tree/master/annotation-model
... goal here is to provide a platform for testing the data model and
vocabulary in particular
... after lunch we'll discuss whether the same platform will be
used/usable for the protocol
... extensive documentation in the repository about how this all works
... the basic summary is: we're looking at the [RFC 2119 statements] in
the spec
... we translate those into tests ... a .test file, which runs in the test
harness
... we're trying to record which implementations have correctly
implemented which features
<azaroth> spreadsheet: https://docs.google.com/spreadsheets/d/1QwhHYyEd-106nvwe_q-A9z02wO9R-Oa7l5vnmMlYTQ0/edit
TimCole: azaroth has been working on a spreadsheet that tries to capture all the testable assertions ^
<azaroth> And I filled out all the MUST/SHOULD/MAY for 1 last night and this morning
TimCole: [walking through a (different) document with
extracted testable assertions from the model spec]
... looking at these makes us wonder whether all of these assertions can
be tested using jsonschema
... (which is the way the current tests work)
... having done that,
... have manually created schemas for §3.1 in the model
... e.g. "MUST have a context" https://github.com/Spec-Ops/web-platform-tests/blob/master/annotation-model/common/context.json
... e.g. "context MUST have value <...>" https://github.com/Spec-Ops/web-platform-tests/blob/master/annotation-model/common/contextValue.json
... [showing the format of a jsonschema, including test metadata such as
"assertionType": "(must|should|may)" and a human-readable error message]
<azaroth> Link: http://json-schema.org/latest/json-schema-validation.html
ivan: are there tools for validating against jsonschema documents in a variety of programming languages?
<azaroth> Link: http://jsonschemalint.com/draft4/
TimCole: yes, and there are also web services which can be used such as http://jsonschemalint.com/draft4/
<bigbluehat> http://json-schema.org/latest/json-schema-validation.html is the one we're using
<bigbluehat> v5 basically
TimCole: here's the example schema for checking that we
have an @context property, which may be an array, one element of which
should be our context IRI: https://github.com/Spec-Ops/web-platform-tests/blob/master/annotation-model/common/contextValue.json
... unfortunately, this doesn't test that
... it tests that the *first* item in the array is our context
... which may result in false negatives -- conforming documents will
fail the jsonschema validation
... we could solve this by having one test comprise multiple schemas, and
the test passes if at least one schema validates
<tbdinesh> we tried specifying @context like this https://pad.riseup.net/p/jants.wa.json.schema so there are many ways to specify it, and yet we are not sure if this is ok.
<azaroth> Order matters only for @context and items, I believe
TimCole: problem here is that jsonschema seems to be strict about ordering in unhelpful ways, but JSON-LD is not
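The false-negative problem TimCole describes (a schema that only accepts the context IRI as the first array element) comes down to logic like the following hand-rolled sketch. This is not the actual test harness, just the check that the multiple-schema (anyOf-style) approach would encode: accept the context as a bare string, or anywhere in an array.

```python
ANNO_CONTEXT = "http://www.w3.org/ns/anno.jsonld"

def has_anno_context(doc):
    """True if @context is the annotation context IRI, or an array
    containing it at ANY position. (JSON-LD context arrays are
    order-sensitive for term overriding, but the annotation context
    need not be the first element to be present.)"""
    ctx = doc.get("@context")
    if isinstance(ctx, str):
        return ctx == ANNO_CONTEXT
    if isinstance(ctx, list):
        return ANNO_CONTEXT in ctx
    return False

assert has_anno_context({"@context": ANNO_CONTEXT})
# Not first in the array: still a conforming document.
assert has_anno_context(
    {"@context": ["http://example.org/extra.jsonld", ANNO_CONTEXT]})
assert not has_anno_context({"type": "Annotation"})
```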
shepazu: I don't want the tail to wag the dog
... but some of this may be valid feedback on the design of the spec --
if it's hard to test, perhaps it's not helpful
azaroth: we need to be aware of the possibility that if we uncover issues in testing, while in CR, we'll have to reset the clock on CR
TimCole: be aware, a small group has gone some way down the
path on this approach to testing
... the larger group now needs to weigh in on that effort
ivan: the result of testing a specific implementation needs
to be reported back
... I have seen great implementation reports in other groups. Will we
have those?
bigbluehat: this is a good segue into the manifest format
which Greg has designed: e.g. https://github.com/Spec-Ops/web-platform-tests/blob/master/annotation-model/sample2.test
... these are json-ld documents which describe a test in terms of the
assertions in the common/ directory
... and can also include inline schemas for additional assertions
... there's also the ability to "failAndSkip" if we want to report a
failure but keep running when an assertion fails
ivan: not sure this is the "manifest"
... Greg usually refers to the "manifest" as the document in which you
report your test results, which in turn is translated into an
implementation report
bigbluehat: here's an example of what running the tests
looks like: http://shane.spec-ops.io:8000/tools/runner/index.html
... you run the test suite by providing some example output from your
implementation (a JSON-LD annotation) manually
ivan: What I like about this: although the tests themselves
are of a fine granularity, you can paste complex annotations into the
testing tool
... does this have the ability to dump a report of the test results?
bigbluehat: yes
ivan: the report needs to include a detailed list of which tests were run, not just overall pass/fail data
bigbluehat: part of the motivation for writing these tests in this way is to allow other implementers to use the *data* (the JSON-LD/jsonschema test descriptions) to validate annotations using their own toolchain in future
TimCole: we currently don't fail for out-of-spec (unrecognised) properties
azaroth: that's fine, but it does mean it's easy to generate an annotation with a typo in a key (e.g. "purrpose") which passes all tests
ivan: Are we also testing that this is valid JSON-LD --
i.e. when it's translated into RDF it's compliant with the vocabulary
... that should be a basic property of the test suite -- everything has
to be valid json-ld
<Zakim> azaroth, you wanted to suggest we should not validate data values for language and format
bigbluehat: we can use preexisting tools to validate the RDF by simply providing our vocabulary to Greg
ivan: it is important that we do test that any annotation
can be mapped to valid RDF
... but these checks can be limited to syntactic checking
... we're not going to test that (e.g.) Shape resources are semantically
correct
... another "zero-level" test we should be doing is to ensure that the
JSON-LD context file is correct
... and verify every JSON-LD example in the spec(s)
TimCole: the other thing azaroth and I talked about: we
have examples from the model and we should test they all pass
... we also need examples of invalid annotations
gsergiu: we should fail if there are unknown keys in the annotation [that don't match an added context]
ivan: we need to check with Greg if we can do this in the context of the test harness
azaroth: but regardless, unmapped keys are *always* valid in a JSON-LD file, even if not mapped
TimCole: we need to give the implementer a way to express
which tests they expect to be run/applicable
... or at least a report of which tests were run/applicable
gsergiu: another issue -- different checks might have
different severity: error/warning
... some profiles could be defined to warn if we (for example) see keys
we don't recognise
nickstenn: we're testing one half of an implementation: that an implementation can produce a conforming annotation document, but not that it can consume one
ivan: the purpose of the tests is to test that the spec is implementable, not that implementations are correctly implementing the spec
azaroth: so far we have not had any protocol discussion
... we've briefly mentioned that we could use the LDP tests
... if we were to start with that, we'd need to write our tests in Java
... separate from our other tests
... but you could use the descriptive tests from the model within the
Java tests
... In terms of what we would need to test
... clients that interact with a server; and servers that interact with
the client
... both ends need to be testable
... one way I'd thought of was reference implementation
... such that you could swap out either side of the reference
implementations with your own
... if we used the LDP tests, then we could lean on the data model tests
separately--or incorporate
... so we could combine them and get data model testing "for free"
ivan: what information do we have on what the LDP people
did?
... more interestingly, can we simply piggyback on what they did?
<azaroth> Link: https://github.com/w3c/ldp-testsuite
bigbluehat: we'd change the default from turtle to json-ld, but everything else is more or less the same
ivan: who did the ldp tests?
azaroth: mostly IBM folks
... they've sadly not been terribly responsive--there are old open PRs
on GitHub for instance
bigbluehat: who knows java?
... crickets...
azaroth: I think it would be quicker to re-implement the bits we actually need in something we can build
ivan: if we could use an existing LDP server, that might be helpful
azaroth: Apache Marmotta was the closest, but it won't do any of the new things we've added or the change in the default type
bigbluehat: I'd looked into using hippie.js, but that only tests the server, not the client
azaroth: a vast majority of the tests are around HTTP
headers and method combinations
... when you do a GET, then the response must have these 3 headers on it
ivan: of course the HTTP header requirements are simple
... but the server has to implement the whole thing: storage, etc.
azaroth: implementing is not too hard; it took me 2 days to
do the one I built, and I found spec issues from it afterwards
... and then iterated both to what we have now
bigbluehat: you could do it in javascript; I'd found hippie for testing, you could use express to build the server
azaroth: there are existing implementations that could be
upgraded to support the protocol
... but none are known to exist today
TimCole: we made a couple of different attempts
nickstenn: Hypothesis is interested in doing this
... but it's kind of a bad time for us
... because we're swapping out our data storage
... what we have now is a descendent of the Annotator API
... our current format has some similarities to the data model
... read output could be done in a few months' time
... write would take bit longer
azaroth: so lets talk about the elephant in the room known
as authentication
... so we either support several authentication methods on the test
platform
... or we have a test server that is world-writeable
... and we have to store them for at least some time
... even if we throw them out after an hour
... it would still have to be an actual read/write server-side thing
... if 10 people are all in at the same time...race conditions
bigbluehat: could we run implementations in their own destructible containers?
azaroth: I think we have to do something like that
nickstenn: we could just do token-based authentication per-container
azaroth: yeah, we could give dynamically named containerized endpoints for people to test against
bigbluehat: and then only the person testing knows about that temporary annotation server
ivan: how would the test suite work exactly
azaroth: the way I've been thinking about it
... a client would try to test various scenarios against the server
... try to POST to the server
... server would check the headers, etc. and record a validation report
... the client would then try to GET the annotation back, server would
report, etc.
... the flip side: the client does the reporting of the same process
against a specific server
... and then you'd combine the reports to see the full validation of
client-against-server and server-against-client
ivan: my feeling was a little bit different
... we are not testing the client.
... we cannot rely on the fact that the client will generate the correct
requests
... so I was thinking we'd write up the scenarios, and the client would
attempt them and then validate against the scenario
... did I get back what I expected?
... and then we could use that same list of scenarios on the server to
test that it's getting what it expects from the client
... when I start somewhere with some valid JSON-LD
... then the client and server go through the protocol, and one side
or the other should end up with the matching things
azaroth: we'd actually only need to implement the server, then.
ivan: scenario: I POST an annotation, I GET an annotation,
I POST a change
... the server has to do things
azaroth: but we're testing that the server does those things
ivan: we're testing what the protocol expresses
... what we're testing is a reasonable set of scenarios for annotation
servers for which we have defined a protocol
... we need to be sure that the protocol contains the necessary things
to move that data in that conversation
... the more of the work that is on the server, the easier
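Ivan's data-driven idea could be sketched roughly like this: scenarios are plain data (verb, payload, expected status) and are replayed against a server. The names here (MemoryServer, run_scenario) and the in-memory stand-in server are illustrative assumptions, not the group's actual harness.

```python
# Illustrative sketch: scenarios as data, replayed against a toy
# in-memory "server" that stands in for a real implementation.
import copy
import itertools

class MemoryServer:
    """Minimal stand-in for an annotation server (POST/GET only)."""
    def __init__(self):
        self._store = {}
        self._ids = itertools.count(1)

    def post(self, annotation):
        ann_id = "anno/%d" % next(self._ids)
        stored = copy.deepcopy(annotation)
        stored["id"] = ann_id
        self._store[ann_id] = stored
        return 201, stored

    def get(self, ann_id):
        if ann_id in self._store:
            return 200, self._store[ann_id]
        return 404, None

def run_scenario(server, steps):
    """Each step is (verb, payload, expected_status); returns a report."""
    report, last_id = [], None
    for verb, payload, expected in steps:
        if verb == "POST":
            status, body = server.post(payload)
            last_id = body["id"] if body else None
        elif verb == "GET":
            # GET with no payload re-fetches the last created annotation
            status, body = server.get(payload or last_id)
        report.append((verb, status, status == expected))
    return report

scenario = [
    ("POST", {"type": "Annotation", "body": "comment"}, 201),
    ("GET", None, 200),
    ("GET", "anno/999", 404),
]
print(run_scenario(MemoryServer(), scenario))
```

Because the scenario is just data, the same list could in principle be replayed against any implementer's server.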
TimCole: what are the features of the protocol?
ivan: we'd actually need two servers
... and a baby client
... both servers should respond in a correct way to the requests made by
the client
... we could have the same things from the data model tests
... if we have the two servers and a reasonable set of scenarios, then
we're testing the protocol.
... the number of round-trip scenarios...what, 10?
azaroth: maybe.
ivan: I don't see many scenarios needed
nickstenn: it makes authentication harder again
ivan: why?
nickstenn: we could write the dummy servers, but it would
be nicer
... the client is basically a shell script that runs the tests
... but it needs to have some authentication to talk to real-world
servers
azaroth: could we require a world-writeable or shared-secret auth sort of thing for implementers to use when testing?
nickstenn: yep. something like that could work
ivan: it's important that we have real world implementors
gsergiu: we can offer one
... right now there are two annotation types supported
... depends on how many use cases are being tested
... if there are new types of annotations not in our roadmap, we could
branch and implement there
azaroth: if it's just at the model level, then I don't
think it's in-scope for testing the protocol
... we're going to update it to this, we're going to retrieve it...or do
multiples, or whatever
ivan: right and what we get back should be our annotation
data model
... we can then push that data in to the other data model validation
code
... and into greg's reporting system
tbdinesh: so the point of 2 servers?
ivan: yeah. we need two servers that are implementors
... do you have one tbdinesh?
tbdinesh: depends
TimCole: LDP does bring some overhead
... but in this case we don't need to worry about efficiencies, etc.
just that it works according to the protocol
... so you're right, there's not a lot to be done to make this work
ivan: does the protocol have enough information to do what we expect it to do
TimCole: so the response from two different servers should be identical?
azaroth: no. they could add their own custom properties, and the created/modified values will differ, etc.
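Since responses from two servers can't be compared byte-for-byte, a comparison would have to ignore fields the server sets itself. A minimal sketch, assuming a hypothetical list of server-managed fields (the list itself is illustrative, not normative):

```python
# Compare annotations while ignoring server-generated fields.
# SERVER_MANAGED is an assumed, illustrative list.
SERVER_MANAGED = {"id", "created", "modified", "generated", "generator"}

def normalized(annotation):
    """Strip fields each server is free to set differently."""
    return {k: v for k, v in annotation.items() if k not in SERVER_MANAGED}

a = {"type": "Annotation", "body": "hi", "id": "a/1", "created": "2016-05-20"}
b = {"type": "Annotation", "body": "hi", "id": "x/9", "modified": "2016-05-21"}
print(normalized(a) == normalized(b))  # both reduce to the same core
```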
ivan: could we do the schema approach for pagination?
azaroth: yeah. I think so.
... we'd need new schemas
ivan: so the server has a certain level of freedom with regards to pagination
azaroth: I think it's deterministic for certain requests
ivan: right. the pages of annotations do change per request modifiers
gsergiu: you'd just need to prove the collection size?
<azaroth> paging spec ref: https://www.w3.org/TR/annotation-protocol/#responses-with-annotations
ivan: we have to prove that I get back all the things I put there
nickstenn: one thing you want to test is that the response
format is correct
... the second thing you want to test is that the server state is as you
expect
TimCole: the report should be able to say that you got the number back you put in
gsergiu: you could say, I have this series of annotations
... and this is my expected results
azaroth: so if the test script, says create, create, create, create, create and then retrieve and there are not 5 entries, then it's invalid
TimCole: a container response even when paged shows the total number when coming back?
azaroth: yep, there could be a server with a page size of 2, which would not find the 5 annotations
bigbluehat: yeah. JSON Schema could test that pagination is
there, but not what its value is
... JSON Schema's not the tool for that
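bigbluehat's point in miniature: a schema-style check can assert that paging keys exist with the right types, but not that "total" is the correct count; that needs the stateful scenario test. This hand-rolled structural check is a stand-in sketch for what a JSON Schema would express declaratively (the example URLs are made up):

```python
# Toy structural check in the spirit of a JSON Schema for the
# protocol's paged-collection responses (total, first, last).
def check_paged_collection(coll):
    problems = []
    if not isinstance(coll.get("total"), int):
        problems.append("total must be an integer")
    for key in ("first", "last"):
        if key not in coll:
            problems.append("missing %r" % key)
    return problems

page = {"type": "AnnotationCollection", "total": 5,
        "first": "http://example.org/container/?page=0",
        "last": "http://example.org/container/?page=2"}
print(check_paged_collection(page))  # [] means structurally valid
```

Note that `total: 5` passes this check whether or not five annotations actually exist on the server.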
azaroth: all of the operations have a set of response headers that should be returned
ivan: the toy client has to return those headers, correct?
gsergiu: you could map them to JSON
nickstenn: well...you can, but not easily
... you are right it can be done
... tests being defined as data would still be a good thing
... but that data may just be a text file that contains headers we
expect to see
TimCole: essentially what you're doing is instrumenting the
client with these tests
... it has to report back
nickstenn: summary. however we define the tests, they should be generated from data rather than tied directly to the implementation of a specific test client
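nickstenn's tests-as-data idea could look like the following sketch: the expected headers live in plain data (here an inline string; a text file would work the same way), and the harness just diffs them against what a server actually sent. The header values shown follow the protocol spec's style but should be read as illustrative.

```python
# Expected response headers kept as data, not code.
EXPECTED = """\
Content-Type: application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"
Allow: GET,OPTIONS,HEAD,PUT,DELETE
"""

def parse_headers(text):
    pairs = (line.split(":", 1) for line in text.splitlines() if line.strip())
    return {k.strip().lower(): v.strip() for k, v in pairs}

def missing_headers(expected_text, actual):
    """Return the expected header names whose values don't match."""
    exp = parse_headers(expected_text)
    act = {k.lower(): v for k, v in actual.items()}
    return [k for k, v in exp.items() if act.get(k) != v]

actual = {"Content-Type": 'application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"',
          "Allow": "GET,OPTIONS,HEAD,PUT,DELETE"}
print(missing_headers(EXPECTED, actual))  # [] -- everything matched
```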
<azaroth> Action on azaroth to work through protocol and make spreadsheet of MUST/SHOULD/MAY tests
<trackbot> Error finding 'on'. You can review and register nicknames at <http://www.w3.org/annotation/track/users>.
ivan: we need a similar document for the protocol to what we have for the tests
scribe would like to note that lots of people are volunteering for all kinds of testing things...but with no definitive promises as yet
TimCole: we have gsergiu and others working on a server
nickstenn: we're working on some piece of this with a view to Hypothesis being able to write some part of it
TimCole: azaroth you could write the server and nickstenn could write the client
nickstenn: yeah, we can figure out who does what
ivan: yeah. keeping it separate is even better
bigbluehat: I may have more significant time soon, but can't promise anything beyond spare cycles at this point
TimCole: anyone else have time for helping?
shepazu: why are you asking?
TimCole: just to be sure we have good accurate scenarios
... I think the scenarios should be built by the group
shepazu: you've asked several times for help. I just want
to be clear about the why
... some people do the work. other people vet it...I just want to be
clear about what you think is needed
TimCole: I don't think we need a larger group writing
servers
... but these scenarios and a should/must matrix
azaroth: yep. I'll tackle the should/must
TimCole: having thought about it over lunch, have we had a discussion about vocab testing?
bigbluehat: yep. greg has tools that should do what we need when we need it
TimCole: who's going to handle incorrect annotations to run
through the validator?
... I can ask when I get back to Illinois
azaroth: the trick is not just creating incorrect
annotations, but ones that are strategically incorrect
... in order to test that certain tests fail correctly
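azaroth's "strategically incorrect" idea can be sketched as data too: each broken annotation is paired with the specific failure it is designed to trigger, so a validator that passes it is itself broken. The validator below is a toy stand-in; the real checks would be the group's JSON Schemas.

```python
# Toy validator plus deliberately broken annotations, each tagged with
# the failure it must produce.
def validate(annotation):
    errors = []
    if annotation.get("type") != "Annotation":
        errors.append("wrong-type")
    if "target" not in annotation:
        errors.append("missing-target")
    return errors

BROKEN = [
    ({"type": "Annotation"}, "missing-target"),          # no target at all
    ({"type": "Anotation", "target": "http://example.org/"}, "wrong-type"),
]

for annotation, expected_failure in BROKEN:
    assert expected_failure in validate(annotation)
print("all broken annotations failed as intended")
```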
TimCole: we need to know that these annotation should fail in specific ways
tbdinesh: I think everyone should submit 3 broken annotations
TimCole: I'll ask again in a couple weeks to see who's interested and available
ivan: does JSON Schema support data type testing?
bigbluehat: azaroth: TimCole: yes. lots of options there
hugo: have you considered using RDF Shapes?
TimCole: yes. for vocabulary testing
... it's mostly a question for Greg to decide
<azaroth> SHACL
hugo: the one we worked with is SHACL, and it supports several things
TimCole: two questions: is it ready, and who knows enough
about it to write the shape documents for our ~150 MUSTs and SHOULDs
... if we could automate it, that'd be super
... we have people who can generate JSON Schema
... I just don't think we're ready for it
hugo: if you need internal comparisons of, for instance, date values
TimCole: yeah. we don't do that. just date format
... what I know of RDF Shapes, it could certainly be of benefit
... but we don't currently have someone to do it
azaroth: would you like to do it hugo?
hugo: they're not that hard to write. it could be interesting for us to apply this technology for annotations--we're already using it
TimCole: we will have, in a very few days, the list of constraints for all the tests
hugo: there's still a situation where you have some surface that JSON Schema and vocabulary testing won't be able to test
TimCole: azaroth: Web Annotation has a very specific shape
ivan: you can use our vocabulary where ever you want--even
RDF/XML--in any way you like
... but if you're using our Web Annotation JSON format it has to be in
that specific shape
TimCole: we're not geared up to test the other possible serializations
hugo: should there be a split between a schema and syntactical testing?
azaroth: yep. it's a discussion we've had and we are essentially doing it. we should talk later.
ivan: each test that we run has to be approved by the
working group.
... for the JSON Schema that should be easy enough for everyone to
appraise and agree upon
... for SHACL I don't think we have anyone to validate it
... that it is a valid test for our scenarios
... it's crazy, but that's the way it is
... if you have the SHACL's, we'd love to see them and add them to our
repo
... but they aren't likely to be canonical tests as we'd not have the
people to approve them
TimCole: but to your point about shape testing vs. semantic testing...we are essentially doing that
tbdinesh: you do need to know how things are stored?
ivan: we don't care how it's stored, just as long as it comes back in the right way
TimCole: k. that's our 2 hours on testing.
... the other thing we may talk about is that, even if we're using shapes,
... implementations that can feed back annotations would be good ones to
feed our test environment
... I don't know what we want to do to share their annotations with us
bigbluehat: would love to discuss that later. we could keep annotations, etc. (with permission) for iterating on our tests
azaroth: I'd like to move to the rest of the agenda
... we have serialization, tpac, signaling, and other work started but
not finished (findtext, search, etc)
... also collaboration bits, etc.
ivan: before we go into the "short" list, we should try to
close the remaining 3 issues.
... quickly is fine. but we need to formally close them
... then it'd be just the i18n issues which we could do formally with
them elsewhere
<azaroth> link: https://github.com/w3c/web-annotation/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+-label%3Apostpone+-label%3Aeditor_action+
azaroth: there are 3 outstanding issues
<ivan> https://github.com/w3c/web-annotation/issues?utf8=✓&q=is%3Aissue+is%3Aopen+-label%3Apostpone+-label%3Ai18n-review+-label%3Aeditor_action+
azaroth: actually just one https://github.com/w3c/web-annotation/issues/147
... oh and https://github.com/w3c/web-annotation/issues/204
... in order to close #204
... stick to just HTTPS URLs
<ivan> https://github.com/w3c/web-annotation/issues/204#issuecomment-210202818
azaroth: and the larger "don't annotate me" discussion is for later
<azaroth> https://github.com/w3c/web-annotation/issues/204#issuecomment-210202818
<azaroth> PROPOSED RESOLUTION: Add a recommendation for HTTPS into the protocol spec
+1
<azaroth> +1
<TimCole> +1
<takeshi> +1
<bjdmeest_> +1
<ivan> +1
<tbdinesh> +1
<nickstenn> +0
RESOLUTION: Add a recommendation for HTTPS into the protocol spec
ivan: we have a longer discussion to be had, but it would
be good to focus on just the document related issue
... and ideally focus on moving to CR
shepazu: I would like to do it now
... and then discuss the effects it may have
bigbluehat: propose to close now, and then potentially reopen
ivan: I'd like to be able to say that we did everything we can at this stage to close the issue against the documents
shepazu: I made it clear to PING that we would likely not address it in V1
<azaroth> PROPOSED RESOLUTION: We will defer work on signalling mechanisms regarding opt-out of annotation to a future version of the specifications
<azaroth> +1
ivan: then we're clear to close the issue and move toward CR and address this in the future and with more discussion
+1
<TimCole> +1
<tbdinesh> +1
<takeshi> +1
<ivan> +1
<bjdmeest_> +1
<shepazu> +1
RESOLUTION: We will defer work on signalling mechanisms regarding opt-out of annotation to a future version of the specifications
ivan: so now, provided we can handle the i18n, we can go to CR
<shepazu> https://www.w3.org/annotation/wiki/Publisher_preferences
shepazu: does anyone not know the background? no one? k.
I'll assume you know the background
... we've long had a need for notifications about when someone's
annotated your page
... ideas like partnerships where you might advertise an annotation
service to be used (possibly with incentive)
... when you combine that with opt-out and harassment prevention I think
they can all be encapsulated in a single proposal
... If you check the proposal, I'm trying to suggest that we not invent
anything new for this
... link tag, rel attribute, and meta tags
<shepazu> <meta name="comment-prefs" content="nocomment">
shepazu: insert technical difficulties with projector...
... we don't want to do this with robots.txt because we want to do it on
a per-file or per-resource basis
... so I'm suggesting the meta tag, link tag, and some other HTML stuff
azaroth: can you explain why we can't do that with robots.txt?
shepazu: most people don't have access to robots.txt
azaroth: but then you can't do it for anything but HTML
shepazu: no. there are ways to do this for other things
besides HTML
... so "nocomment" means they don't want you to comment on their blog at
all
... "noselection" they don't want someone reusing selections from their
books--a different use case of I don't want people stealing my content
... "nopublic" that people can't publish their thoughts about someone's
content
... "nodisplay" meaning they don't want other people's content shown over
theirs
... "moderated" the link would point to where you need to publish
something in order for me to have it on my page
... then we get to rel="author"
... using rel="author" so it can be used on anchor tags
... also rel="comment-moderation" where the URL is where you send stuff
to be moderated
... I'm still fuzzy about how this would work with someone's own hosting
... no one wants to moderate all the comments themselves
... so having something like comment whitelists and blacklists might be
preferable
... if, for instance, Hypothesis curates a whitelist or blacklist of
comments, I could use that
... or whomever
<azaroth> Kind of like https://www.w3.org/TR/annotation-protocol/#discovery-of-annotation-containers
shepazu: then we have rel="comment-service" which can point to an annotation publishing service
<azaroth> ;P
shepazu: this is an opportunity for adding to the value of the page
<azaroth> Spec says: Any resource may link to an Annotation Container when Annotations on the resource should be created within the referenced Container.
shepazu: proxied annotated content is another place people
felt violated
... Genius for instance was republishing content
... so. we could use <meta name="robots" content="noarchive"> to
signal that one doesn't want their content archived
... and then I think we could also use rel="webmention" for signaling
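The preference tags shepazu lists above could be honored by a well-behaved client along these lines. The tag name (comment-prefs) and values come from the wiki proposal; the parsing approach and class name are an illustrative sketch, not an agreed mechanism.

```python
# Sketch: a good-actor client reading shepazu's proposed preference
# meta tags with the stdlib HTML parser.
from html.parser import HTMLParser

class PrefReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.prefs = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "comment-prefs":
            self.prefs.update(a.get("content", "").split(","))

page = '<html><head><meta name="comment-prefs" content="nocomment,nodisplay"></head></html>'
reader = PrefReader()
reader.feed(page)
print(reader.prefs)  # {'nocomment', 'nodisplay'}
```

As shepazu notes later, nothing forces a client to consult these; the scheme only works for good actors.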
bigbluehat: so. I think the rel="nodisplay" has the most potential for discussion today
azaroth: the others feel like they're completely out of scope
shepazu: mostly the hope is to fix the misperception that annotation are bad
ivan: so is this signaling really the direction we want to
go
... I don't know how Genius actually works
... an annotation service has something dwhly calls "layers"
... so the person who reads the content can optionally turn these layers
on to see the original content with or without the additional content
... anyone who runs a blog knows that comments are most often bad,
offensive, spam, etc.
... and I don't even look at them
shepazu: I hear what you're saying, but it's not only about
me
... so let's say I personally turn that layer off
... but what if I don't want anyone else to see them?
ivan: so. they go to my blog and switch on the annotation layer.
shepazu: I don't think that's decided yet
... I don't know who's using facebook, but of all the things people post
I only see what it selects
... what the service does on your behalf is not up to you, it's up to
the service
<lenazun> +q
ivan: and I always change that
lenazun: in the last few weeks, we interviewed a lot of people who have been analyzing harassment
<dwhly> +q
lenazun: it does often happen in a private space
... in which you have no control but it still gets exchanged
ivan: yeah...and I am absolutely aware of that
<Zakim> azaroth, you wanted to suggest that anything other than nodisplay is completely unsolvable
ivan: but the fact of just putting these "don't comment on me" doesn't seem to solve any of that
azaroth: to follow on from that, it's completely unsolvable
... we're using the Web. it's a public publishing medium. it's not been
solved in 20+ years. we 20 people here aren't going to magically solve
it....I think we should move on
dwhly: so. one thing we have discussed is concern around
the publicly discoverable annotations list
... we're trying to make annotation a world-wide known and available
form of content
... we may get to the future where it's impossible to shut that off
... and that was the thing people were beginning to be concerned about
... there's a wide range of opinion on this
... it can't just be choose your layer, because there may be a huge
public layer available anyway
TimCole: it is a bit of an impediment for content
providers, so while I think it's out of scope, there is a concern among
people using annotation that it is an issue
... there does need to be continued discussion had on the topic and we
should be involved in it
... but I don't think the answer comes from material we're making here
... it would be good for us to stay in the discussion and explore
options
... we have to ship what we have and can't solve this first
... but we hope we can stay in the conversation
and if there's an interest group, then that would be a better place
<Zakim> tbdinesh, you wanted to ask where this could go in the current spec? rights?
tbdinesh: if this was a current spec, could you do it around the right(s) (vs. the reads)
shepazu: yeah. there's more to be explored here
ivan: so. I understand what you're after here, but I think
we need to be careful to not fool ourselves that there may be a
technical answer to this problem
... the problem is very much a social problem
... it leads teenagers to suicide
... and some of the things that happen online are now criminal activity
... there's an upcoming interest group where this would be better
addressed
<azaroth> +1 to Ivan
<TimCole> +1 to Ivan
ivan: I don't think putting the technical solution first is going to get us to the right end
shepazu: so I agree and had thought about the interest
group
... robots.txt works for instance
ivan: I think it's a mix of both technical and social
shepazu: I differ with you in that I think the social
problem is well understood
... and the basis for the technical solution is pretty simple at this
point
ivan: I disagree. there's so much to cover here
shepazu: the ability for someone to state their preferences
is important
... and that's all this does.
... to be able to state what my preferred defaults are
... gamergaters, etc. are going to have clients that ignore all this
anyway
... this is only for good actors in the system
... and we hope that the good actors are the majority
bigbluehat: I think we should move on. there are other venues for this. we should keep discussing it, but not here. we only have few hours and more to tackle here
nickstenn: it's important that we not stop talking about
this and don't punt on these problems
... robots.txt does not prevent anything. it simply says if you don't
abide by this, then my server will completely prevent you from accessing
anything
... we need to be clear that there's not a hard line between social and
technical
<dwhly> +1 to nickstenn
<Zakim> azaroth, you wanted to disagree with HTML centricity
shepazu: so. quickly. how might this work with the data model
<azaroth> whatevs
shepazu: I know we've already said this is out of scope
... but I think the annotation should carry the preferences that the
target publisher requested
... so "nodisplay" etc are carried forward
... whether the annotation client does anything with them is up to the
client
... carrying on from that
... there might be exceptions where a government has prevented comment,
but someone does it anyway
... also within the annotation there could be a way to say don't
annotate this annotation
... and the last bit, is registries
... there are 2 registries for the meta tag and the rel values
... a spec can be written and submitted to these registries
... that's all it would take
... it would not make any changes to HTML, etc.
... and that's the end.
TimCole: 15 minute break.
shepazu: actually....
... I hear that there are people in this group, who seem to like or be
OK with the "nodisplay" thing
... and that that's the only thing that's in scope for this group
... if we could have note to discuss the "nodisplay"
azaroth: don't think we need a note to discuss anything
ivan: so the only thing we've done so far is closing
technical things we've declined to specify
... and we shouldn't fast-track this similar thing via a note
shepazu: I've seen resolutions used this way in other WGs
... the vast majority of people have no idea what resolutions are for
... so we can use them to make this issue clear to the community that we
care about this issue
ivan: it should go on record that this discussion is to continue and certainly hasn't stopped.
gsergiu: I think we were also thinking. we have many
providers, we start with very weak concepts, but modeling moderation is
something that needs addressing.
... for facebook and others it may be a legal issue
... where you could use the spec together with the law to enforce action
shepazu: all I wanted was resolution to keep discussing...but whatever
TimCole: let's break for 15 minutes
azaroth: 15 minutes on the selectors
ivan: whenever the new version is out, I will have to
update the changes
... it's almost copy-paste
... that's the only action
... going to PR or REC, we can publish it
ivan: one new section
... fragment identifier, section 5
... there is one example, which works for my program
... comment on it; I don't think we need to discuss it here
... the tool is on the repo
TimCole: remaining action?
<azaroth> Link: http://w3c.github.io/web-annotation/selector-note/index-respec.html
ivan: reading and commenting
... my action is updating it when changes on selection
... when no particular comments, I can publish
... it has to (at some point) have a resolution to publish
azaroth: next, about HTML serialization
TimCole: some comments on the github issue
<azaroth> Link: https://github.com/w3c/web-annotation/issues/147
TimCole: the biggest issue is that, for the RDFa, you need to use the full vocabulary URIs
ivan: there is no issue with defining a namespace document,
which would greatly simplify the RDFa encoding
... it's semantically correct
... it's technically doable, without that, the RDFa encoding is very
ugly
TimCole: it's a secondary activity
... an HTML serialization solution might help uptake
... one possibility is the RDFa namespace doc
... another one is a note with some possibilities of how to do HTML
serialization
... e.g., including JSON-LD in script tags
... or using html tags, extending html5
ivan: there is a possibility of using custom elements
TimCole: we currently add JSON-LD in HTML, don't know whether that is enough
shepazu: I don't think we currently have time to do the
mapping to HTML directly
... we could do a note for mapping to RDFa
... maybe use that as a kicking point for the note-element
... so, we map our model to RDFa, and later map the RDFa to the
note-element
... so we can play with structures
ivan: one part is technically defined and done, just not
written down, i.e., embedding JSON-LD
... the mapping to RDFa is documenting something
... a note that documents those two options makes a lot of sense
... we aren't inventing anything new here, just reusing
<bigbluehat> +1 to issuing a note about using what's out there now RDFa and JSON-LD
ivan: mapping to the note-element, we have to define ourselves
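The "already defined" option ivan mentions, embedding JSON-LD in HTML, can be sketched as follows: an annotation sits in a `<script type="application/ld+json">` element and is pulled back out with the stdlib parser. The annotation content here is a placeholder example; only the script-tag embedding pattern is the point.

```python
# Sketch: extracting a Web Annotation embedded in HTML as JSON-LD.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.documents = []

    def handle_starttag(self, tag, attrs):
        self._in_jsonld = (tag == "script" and
                           dict(attrs).get("type") == "application/ld+json")

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.documents.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

page = """<html><body><script type="application/ld+json">
{"@context": "http://www.w3.org/ns/anno.jsonld",
 "type": "Annotation", "bodyValue": "an embedded note"}
</script></body></html>"""

extractor = JSONLDExtractor()
extractor.feed(page)
print(extractor.documents[0]["type"])
```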
TimCole: Sarven could help us, as he already did some stuff to mapping to RDFa
azaroth: I like the RDF namespace, to use the JSON keys
TimCole: much more consistent with the JSON
... both options (using full URIs and using the RDFa namespace) will
exist in the wild
ivan: which is fine
... doing and testing the namespace is something I'd like to do
TimCole: I'll take the lead on the note
ivan: the doc that Gregg produced at the end of the CSV WG,
is very specific, but very related (i.e., embedding JSON-LD)
... there are some minor issues, that you may want to take over
TimCole: so, we can do this quietly, in the background,
without slowing down PR and CR
... So we can add editorial action to the open issue #147
<TimCole> scribenick: TimCole
azaroth: we have a room, but do we need to meet
shepazu: Are there reasons not to meet - likely last opportunity
azaroth: but we may not have anything to discuss by then.
ivan: we might want to talk about what's next after end of charter
bigbluehat: that might be an opportunity to coalesce around what's next
ivan: talk about what's next and come back to this question
shepazu: FindTextAPI would need to be moved forward by a
different group
... Nick had some early discussion on Client-side API, but not enough
there to move forward within the time frame
bigbluehat: WebMention might be sufficient for Notification
shepazu: is search generalizable enough to JSON that other WGs might take care of this?
bigbluehat: there is work, but not clear it will address
our issues...
... extending Web Annot protocol would (for example) add search
parameters onto the container
shepazu: but we're not going to be able to write that in the timeframe left, so is it in scope for some other WG?
azaroth: not really
<bigbluehat> how the IIIF does it http://iiif.io/api/search/1.0/
ivan: so it's clear we won't do it in this WG; is it enough to
create a WG to do this?
... or will LDP (or someone else) subsume in a more general solution?
... does it make sense to consider these as work items for next version
of WG?
... do we need to plan for a future WG, if so, what would be items for
the WG and when might we need to start a WG?
shepazu: pretty likely that we would at least need a
process of incubation. unlikely a follow-on WG would come immediately.
... so we may not have this to discuss at TPAC, and may not need to meet
at TPAC
... individuals will take advantage of TPAC, but not a formal F2F of the
WG.
ivan: Having a CG that does 2 things:
... 1. maintenance (errata, etc.). This has been done for CSV (for
example)
... 2. a CG to look at these other issues and come up with incubations,
implementations, ...
... Client side implementation and maybe a FindText 'implementation'
might be good to play with, html mappings / extensions of the model
... come back in a year or two and see if there's enough to propose a
new WG
shepazu: the scope of the WG didn't get much bigger than the OA CG anyway, so might be able to just re-energize that CG
ivan: if convergence with IDPF happens, then there will need to be an update of OA within IDPF, which may require a TR
takeshi: For EPub OA was modified a little, so that was done within IDPF.
ivan: yes, if migration from OA to Web Annotation were complete by 31 Dec, wouldn't need a WG, otherwise probably would
shepazu: are there going to be enough of us at TPAC
ivan: if we meet at TPAC, I would be double-booked
<azaroth> Action on azaroth to cancel TPAC session
azaroth: Agenda is done...
azaroth: Shane has commented on HTTPS decision
<bigbluehat> from ShaneM: "I think there is a W3C policy that says "ns" URIs are compared as strings and that they should be http: even if ultimately they are just redirected to https: I will check right now"
<ShaneM> Hi.
<ShaneM> sorry - I don't want to derail your meeting
<ShaneM> I could be wrong about this. But it has come up in the publication discsussions at W3C and there was definitely some sort of decision.
azaroth: if we choose HTTPS for our namespace, then this won't apply.
<azaroth> If we choose https then that's the string to use for comparison, not http://
<azaroth> So it would ONLY be https, not also http
bigbluehat: we would not be making http copy available any more.
<ShaneM> azaroth: yes I understand that. And FWIW the W3C will no longer serve anything as http:
<ShaneM> it was just a consistency issue with other namespaces. concern about authoring errors
<ShaneM> if some are http: and some are https: people are going to screw up.
ivan: yes, this is the discussion that the team has been having...
<ShaneM> ivan: cool - well if you advised the group and they made an informed decision then I will shut up.
ivan: however, the reason we decided to use OA for the
namespace was to facilitate backward compatibility
... so moving to https we will break backward compatibility with OA.
... isn't it a logical conclusion that if we make this change, we
should change the name
... we should change it to avoid collisions...
<ShaneM> FYI in the publication channel (#pub) denis just said "afaik we will keep namespaces under http"
TimCole: the old copy would stay
<ShaneM> As to changing it... I would if it is different. Which it most certainly is.
azaroth: -0, the forward compatibility is good, but if we want people to upgrade there's more incentive to do so ???
ivan: remember that https was in part a response to a
request from PING
... so i'd like to propose staying with https and changing the name .
nickstenn: if you load an http resource into a secure page (https) you get an error message
azaroth: ergo we must go to https.
ivan: so I reiterate my proposal
<ShaneM> namespaces are not loaded...
<ShaneM> and regardless the W3C server will always redirect to the https version so it is just a string.
<bigbluehat> json-ld.js would load it if you were using it in the browser, correct?
<nickstenn> ShaneM: this is true, but AIUI the NS for web annotation doubles as the JSON-LD context document
azaroth: @context docs are loaded and having one http and ns https not good
<ShaneM> bigbluehat: yes, but it would redirect automatically so there would be no conflict
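ShaneM's "compared as strings" point, illustrated in a couple of lines (the context URI is the one discussed in this log; the comparison behavior, not the URI, is the point):

```python
# Namespace/context URIs are compared as opaque strings, so the http:
# and https: forms are different identifiers even if one redirects
# to the other.
http_ns = "http://www.w3.org/ns/anno.jsonld"
https_ns = "https://www.w3.org/ns/anno.jsonld"
print(http_ns == https_ns)  # False -- hence the need for one canonical form
```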
<azaroth> PROPOSED RESOLUTION: Due to security considerations, we must change the namespace to be https, and thus we will change the slug to "wa" from "oa"
+1
<ivan> +1
<ShaneM> I am just warning you all that the W3C has made some policy decision about this and I didn't want you to be in conflict. I encourage you to leave this decision to whomever makes organizational decisions at the W3C.
<azaroth> +1
<ShaneM> +1 to changing the slug. -1 to changing the scheme because I think it is in conflict with W3C policy.
ivan: I will double check with Ralph
<ShaneM> But leave it until you get pushback from the Director.
<bigbluehat> +0
<ShaneM> ivan: thank you
<nickstenn> +?
ivan: we will try to meet next week with the Internationalization WG
<bjdmeest> +1
RESOLUTION: Due to security considerations, we must change the namespace to be https, and thus we will change the slug to "wa" from "oa" assuming Ivan verifies not in conflict with W3C Policy
azaroth: will target drafts ready for WG to review on 10 June, formal vote by the WG closes 17 June, potential CR publication 27 June
<ShaneM> nickstenn: only if it were not redirected. as far as I know.
<ivan> ...
<ivan> adjourned
<nickstenn> Drinks with I Annotate attendees at the Pratergarten, Kastanienallee 7-9, 10435 Berlin
<nickstenn> 1830 onwards!
<nickstenn> From DFKI: https://citymapper.com/trip/Tdq4ego