See also: IRC log
resolution: no objection on the global structure of TCDL
CV: issues in the metadata?
CS: in BenToWeb we used the 2002 version of the DC terms (the previous version); it has been updated as of January
... we can now use dc:date and dc:description
CV: dc:description is not in the "formalMetadata" section
... can one have markup in dc:description?
CS: yes, can derive any datatype
SAZ: we need to describe the datatype we use, whichever it is (and XSD should be ok)
resolution: no objections to using dc:date (instead of just date), with datatype xs:date
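The resolution above implies that dc:date values should be lexically valid xs:date. As a minimal sketch of what a consumer could check (the `<metadata>` wrapper element, the sample date, and the simple Dublin Core namespace are assumptions for illustration, not part of the TCDL spec):

```python
import xml.etree.ElementTree as ET
from datetime import date

# Assumed simple Dublin Core namespace; the <metadata> wrapper is hypothetical.
DC = "http://purl.org/dc/elements/1.1/"
doc = f'<metadata xmlns:dc="{DC}"><dc:date>2006-03-29</dc:date></metadata>'

root = ET.fromstring(doc)
raw = root.findtext(f"{{{DC}}}date")

# xs:date uses the ISO 8601 YYYY-MM-DD form, which fromisoformat accepts
# (timezone-qualified xs:date values would need extra handling).
parsed = date.fromisoformat(raw)
print(parsed.isoformat())
```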
CV: any objections to "Extension"?
SAZ: yes, the extension model is not adequate
CS: want to avoid having extension all over the place, just one single point of extension
SAZ: don't see the benefit of choosing this model, could equally well have parsers simply ignore elements and attributes they don't know
CS: this is one way of doing it
SAZ: agree, philosophical discussion. what is the group preference?
CI: agree with SAZ, no point of restricting the extension model
CR: no point of restricting the extension model
SAZ: just to be clear, both methods provide a way of extending the core vocabulary
CS: the current proposal is to extend at the end of each element at a well defined extension point
VE: not sure which method is better. the method used in BenToWeb proved useful but there may be a better approach too
CV: both methods work, prefer to keep it as is
... on the one hand SAZ and CI wanted to restrict the language, and now they are taking a liberal approach
CI: we have a specific focus, so don't need an extension model
... once we have a stable language, we won't need extension
... but if you insist on having an extension model, then prefer to have it as open as possible
SAZ: agree we have a specific focus, so propose to have as small a vocabulary as possible
SAZ: on the other hand, need to be as flexible as possible for the future
... for example with EARL and TCDL
CV: what is the resolution? do we add more extension points?
SAZ: what is the problem with not defining extension points and just ignoring unknown elements/attributes
... note, this is not the BenToWeb spec, just the convention of this TF
CS: it is good to be flexible but comes at the cost of accurate validation
... also, any added elements should be in a separate namespace
SAZ: agree, not good practice to modify someone else's schema
CV: we expect the extensions to be content negotiation (HTTP request/response) and pointers, not sure we need others
SAZ: for example, if you build a parser for the current TF vocabulary, it understands dc:creator and a bunch of other elements
... however, if you feed it metadata from the BenToWeb project
... which also contains additional elements such as dc:contributor
... validation will fail and it will reject the metadata
... even though dc:creator information is also in the BenToWeb metadata files
... if the parser would simply ignore unknown elements such as dc:contributor, then it could filter out the information it needs
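SAZ's scenario can be sketched in a few lines. The wrapper element, the set of "known" elements, and the sample values below are illustrative assumptions; the point is only that unknown elements are skipped rather than causing a rejection:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"  # simple Dublin Core namespace
KNOWN = {f"{{{DC}}}creator", f"{{{DC}}}date"}  # elements this sketch understands

# Hypothetical BenToWeb-style metadata containing an element (dc:contributor)
# that the core vocabulary does not define.
doc = f'''<metadata xmlns:dc="{DC}">
  <dc:creator>Jane Doe</dc:creator>
  <dc:contributor>John Smith</dc:contributor>
</metadata>'''

root = ET.fromstring(doc)
extracted = {}
for child in root:
    if child.tag in KNOWN:  # keep what we understand
        extracted[child.tag] = child.text
    # unknown elements are silently skipped instead of failing validation

print(extracted[f"{{{DC}}}creator"])
```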
CV: two options - 1) leave it as it is, or 2) add more extension points
CS: add more points where expect we may need them
SAZ: 3rd option is to tell parsers to simply ignore elements or attributes they don't know
CS: then you can extend anything and anywhere
CI: yes, this is how many vocabularies are defined, HTML for example
CV: this makes validation messy, not really the concept of XML Schema
SAZ: there are certain constraints, like cardinality
CS: the new elements will be from different namespaces
<Christophe> <xs:any namespace="##other" processContents="lax" />
CS: this will allow the core vocabulary to be validated, and additional elements from other namespaces as extension
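The effect of that xs:any wildcard can be illustrated with a small sketch (the namespaces and element names here are invented for the example): elements from the core namespace are handled strictly, while elements from other namespaces are accepted as extensions rather than rejected.

```python
import xml.etree.ElementTree as ET

CORE = "http://example.org/tcdl"  # hypothetical core vocabulary namespace
EXT = "http://example.org/ext"    # hypothetical extension namespace

doc = f'''<testCase xmlns="{CORE}" xmlns:x="{EXT}">
  <title>Core element, validated strictly</title>
  <x:httpTrace>Extension element, processed laxly</x:httpTrace>
</testCase>'''

root = ET.fromstring(doc)
core, extensions = [], []
for child in root:
    ns = child.tag[1:].partition("}")[0]  # namespace URI of "{ns}local"
    (core if ns == CORE else extensions).append(child.tag)

print(len(core), len(extensions))
```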
resolution: two options - the one described directly above, or the "Extension" method. this is for voting next week