See also: IRC log
<RiccardoAlbertoni> how is remote participation going to work? are we supposed to be connected only by IRC or is an audio connection foreseen?
<phila> Hi RiccardoAlbertoni - you need to be on IRC as usual and the dial in number will work. We're still gathering here so I won't connect to zakim, just yet
<phila> My guess is a lot of people will assume we're starting at 09:00 (half an hour's time). No chairs here yet...
<Caroline__> Hello!!!
<RiccardoAlbertoni> ok, thanks... then I will wait for the actual start before calling in by Skype
<Caroline__> We will call Zakim?
<phila> Yes, but not yet Caroline__
<phila> People are still gathering here in the room.
<Caroline__> ok! Please let me know when I should call
<phila> It's still early morning here...
<Caroline__> Good morning! :)
<hadleybeeman> Hi all! Official declaration that we'll be starting at 9:00 (in 15 mins)
<Caroline__> Ok! :)
<hadleybeeman> :)
<hadleybeeman> Morning, gatemezi. I just said we'll be starting in 15 mins
<gatemezi> Morning Hadley.. Thanks. Any other means to follow you remotely apart from Zakim ?
<phila> hadleybeeman: We plan to cover the big picture topic, what is the scope, what do we have the capacity to do etc.
<phila> ... More importantly I'm hoping we can get stuff written down, work through issues, perhaps writing/editing as we go
<phila> ... we'll spend this morning reviewing the requirements in the BP doc
<phila> ... ideally ending with a long list of issues in the tracker
<phila> ... this PM we'll split into groups and work on the BP doc and the 2 vocabs
<phila> ... may come back with specific questions for the group
<phila> ... likely to spread into tomorrow morning
<phila> ... ideally we want Editor's drafts by end of tomorrow
<phila> ... we need to think about the use cases that we have
<phila> ... there seem to be UCs in our heads that need to be in the UCR doc
<phila> ... and we want to make the most of having everyone here. So we need feedback and suggestions for making the best use of the time
<phila> scribe: philA
<scribe> scribeNick:philA
bernadette: Before we split into groups I'd like to talk about the structure of the BP doc
hadleybeeman: OK, but probably tomorrow afternoon
Tour de Table
raphael: From EURECOM. Ghislain is one of my colleagues
(Only scribing guests)
chunming: From China Host, observing today but work on big data in China
Gary_Driscoll: Interested in all things data
JeniT: From ODI, co-chair of CSVW
<BernadetteLoscio> hello Carol!
jtandy: I'm Jeremy Tandy from the UK Met Office. I'm an observer here but interested in taking down the barriers to others reusing data. Unanticipated reuse is what we're aiming for
Adrian: From University of Minas Gerais, Brazil where we work on data consumption. We're trying to complement and add value to what we call data enrichment
<Ig_Bittencourt> Adrian from University of Minas Gerais
Olivier: I'm from the BBC
annette_g: I work at the Lawrence Berkeley Lab in the supercomputer centre
Kirby: I'm with Boeing in Seattle
reinaldo: I work in W3C Brasil office, observing today
ErikM: I'm observing today but my team is involved in a lot of groups
Ken: I'm with MITRE Corp
hadleybeeman: Explains the overall aim of the WG
... We're not a Linked Data WG. We have a broad aim therefore.
... 2 quite specific vocabs and a general best practices doc
... the use cases provide the grounding of course
<hadleybeeman> http://www.w3.org/TR/dwbp-ucr/
hadleybeeman: Please keep thinking about use cases that we're missing
<hadleybeeman> scribe: hadleybeeman
<phila> Requirements
<ericstephan> phila: To make the best use of the time that we have we will skip the use cases and focus on requirements
<scribe> scribe: ericstephan
phila: It would be really good for people to go through the use cases and make sure that everything is complete. In the interest of time we will go through the requirements together. If we are missing a requirement, now is the time to add new requirements.
Bernadette: Will we also filter out requirements to determine scope?
phila: We need to bring the use cases down to something manageable
... The requirements are grouped in clusters, and for most of them you can follow the links in the document to the use cases they pertain to.
<Caroline> +q
phila: All the requirements have been derived from use cases. Some requirements are absolutely basic baby steps.
... Reviewing over section 4.1.1 requirements in UCR
<Zakim> JeniT, you wanted to ask about the choice of a *suitable* format
JeniT: Is there a requirement for a suitable format? If you are publishing geographic data then you need a geographic format, etc.
JeniT is speaking @Caroline
<hadleybeeman> ?
<hadleybeeman> http://www.w3.org/TR/dwbp-ucr/#requirements-1
phila: Yes JeniT we do need to include a requirement for a suitable format
<KenL> Is the question a single format or unambiguously identifying the format that is being used? Formats will change and we need to understand how to interpret them.
<hadleybeeman> @kenL: Sounds like we need to flesh that out
phila: On the R-FormatLocalize requirement: different parts of the world write in different formats, and the locale can make a big difference when sharing data
ken: Question on locale: is it your locale or the locale of the data?
bernadette: is it a requirement for data format or metadata?
phila: It is a requirement for the information about the data.
<hadleybeeman> ACTION: phil to add a requirement for a suitable format (as per jenit's suggestion) [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action01]
<trackbot> Created ACTION-106 - Add a requirement for a suitable format (as per jenit's suggestion) [on Phil Archer - due 2014-11-06].
phila: The meaning about localize needs to become clearer.
<hadleybeeman> ACTION: phil to clarify RFormatLocalize according to questions in the F2F discussion [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action02]
<trackbot> Created ACTION-107 - Clarify rformatlocalize according to questions in the f2f discussion [on Phil Archer - due 2014-11-06].
phila: ....localize and format
bart: We need to be making issues and actions as we go along.
laufer: There are layers in data and metadata information. Do we need to clarify inheritance when we discuss collections
phila: There is no requirement that covers inheritance, the current requirement for granularity doesn't cover it.
<hadleybeeman> ACTION: phil to amend/expand R-GranularityLevels to cover Laufer's question about inheritance —metadata for the data itself and for the dataset [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action03]
<trackbot> Created ACTION-108 - Amend/expand r-granularitylevels to cover laufer's question about inheritance —metadata for the data itself and for the dataset [on Phil Archer - due 2014-11-06].
Hi Sumit!
<hadleybeeman> hello Sumit!
<SumitPurohit> Hi Eric
<SumitPurohit> Hello Hedley
<Zakim> JeniT, you wanted to ask whether vocabularies cover code lists
phila: Moving to section 4.1.2, data vocabularies
<jtandy> +1 to JeniT's comment about separating the "vocabulary data model" requirement from the "vocabulary code list" requirement
jeniT: Is this about the format? We need to publish the code lists that the data relates to, if they are available
<KenL> Shouldn't any vocabulary be covered and be able to be uniquely identified?
jeniT: It is very much like vocabularies..
bernadette: I think we had that in mind but maybe more focused on ontology specific vocabularies to supply the meaning.
<gatemezi> what about using a vocabulary such as SKOS for publishing code lists?
phila: Currently the UCR doesn't include code lists... do the use cases include code lists? This is an issue
laufer: A code list is a foreign key?
<hadleybeeman> issue: phil to look at whether the UCR doc sufficiently covers code lists
<trackbot> Created ISSUE-48 - Phil to look at whether the ucr doc sufficiently covers code lists. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/48/edit>.
phila: Yes it is, it has to be there.
bart: If you don't have it you don't have a clue what the data means
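(Illustrative sketch of gatemezi's SKOS suggestion above: a small code list published as a SKOS concept scheme, built with rdflib. The namespace, codes and labels are placeholders, not anything agreed in the meeting.)

```python
# A hypothetical code list ("transport modes") expressed as a SKOS concept scheme.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/def/transport-mode/")  # placeholder namespace

g = Graph()
g.bind("skos", SKOS)

scheme = EX["scheme"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))
g.add((scheme, SKOS.prefLabel, Literal("Transport modes", lang="en")))

for code, label in [("bus", "Bus"), ("rail", "Rail")]:
    concept = EX[code]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((concept, SKOS.inScheme, scheme))

print(g.serialize(format="turtle"))
```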
<Zakim> raphael, you wanted to ask what is a "reference vocabularies"?
<SumitPurohit> Now voice is clear...
raphael: I wonder if there is a definition of a reference vocabulary?
bernadette: We don't have a glossary....
raphael: a glossary would be helpful.
phila: There are W3C documents around; we need to point to them or expand upon them
<hadleybeeman> issue: Phil to Either improve on the definition of "reference vocabulary", or point to existing definitions
<trackbot> Created ISSUE-49 - Phil to either improve on the definition of "reference vocabulary", or point to existing definitions. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/49/edit>.
<Zakim> KenL, you wanted to ask if it matters what qualified as a vocabulary if way to identify is useful
<raphael> There are many dimensions that can be used (authority, persistence, popularity, etc.) to decide whether a vocabulary is a reference one or not. Perhaps one could at this stage provide examples of reference vocabularies and others that are not
Ken: What qualifies as a vocabulary? If you can have something that is well documented can't the vocabulary be more fluid?
hadley: Are we talking about the definition of a vocabulary or reference to vocabulary?
<CarlosIglesias> wondering whether "vocabulary" is a too semantic web/linked data biased term
Ken: I don't know if I care...
<CarlosIglesias> we may be talking more generically about "data models"
<gatemezi> Caroline, use this link http://www.w3.org/2013/dwbp/track/issues/49/edit
Ken: URIs to identify vocabularies: if I think it's one and you think it's one, that's OK
Jtandy: One of the things that inhibits people is knowing which vocabulary to use. Helping people know which to use to start with would be a great outcome and take away the excuse
... establishing a procedure for where to look would be really useful
<SumitPurohit> +q
Ig: The reference vocabulary should take into account ontological commitment. I advocate the use of ontologies
hadley: I agree with Jtandy's point, but at the same time recommending vocabularies for an infinite number of use cases might be a very hard problem
<gatemezi> Jtandy : There was an attempt in a previous WG on how to find/look for vocabularies. Maybe this link here http://www.w3.org/TR/ld-bp/#VOCABULARIES can be a good starter to look at
sumit: I have a suggestion, we should explicitly say what we mean by vocabulary so we will be on the same page. We should take it as an action item to define what we mean
<Zakim> hadleybeeman, you wanted to respond re recommending vocabs
laufer: We are talking about suggestions for vocabularies that could be useful. This would be a huge problem; we should restrict ourselves to the metadata about the collection...
jtandy: responding back to hadley, I wasn't advocating a list of vocabularies that would be obsolete very quickly. I am advocating a way of registering vocabularies so people can find the things that may or may not be useful to them
<Zakim> KenL, you wanted to sayi terms and definitions was meant as an example and not a firm recommendation. Also, choosing vocabulary can be matter of policy or current practice, and
kenL: I didn't want to get into definitions of vocabularies, this might change from case to case
yaso: We should recommend best practices on making vocabularies?
<Zakim> BartvanLeeuwen, you wanted to isn't this part of best practices?
bart: I am wondering if this is something that goes into the best practices: how to select vocabularies
<laufer> +1 to bart
<Ig_Bittencourt> http://lov.okfn.org/dataset/lov/
ig: I agree with Ken and Raphael: not to propose a vocabulary but a place for people to find the vocabulary.
... To Yaso: are we interested in how to use a vocabulary or how to create a vocabulary?
<hadleybeeman> scribe: hadleybeeman
<KenL> so one piece of metadata for a vocabulary would be the documented formalism in which the vocabulary is expressed.
phila: Thanks to all — I don't disagree with anything I've heard. I've written about how you choose a vocab on the W3C namespace and European Commission sites.
... It does list some vocabularies. Schema.org, Dublin Core, etc. That could be found and incorporated/improved upon.
... LOV — linked open vocabularies — is a project run by Raphael.
... If you're thinking of coming up with a term for a bus, it tells you all the schemas and vocabularies that have anything to do with the term "bus".
... The Research Data Alliance are trying to build something similar
<ericstephan> @hadleybeeman I can take over again
<gatemezi> http://lov.okfn.org/dataset/lov/search?q=bus
phila: On new vocabularies: It is a different subject, but we have been offered some useful text on that by the Multilingual Web group
<scribe> scribe: ericstephan
<fjh> https://rd-alliance.org
<jtandy> @gatemezi ... wow, there are 73 results already for http://lov.okfn.org/dataset/lov/search?q=bus
phila: To summarize: yes, we are talking about requirements. Ken's point is really well taken about not defining too narrowly; we should provide guidance, and the BP document should provide this
<SumitPurohit> +1 Phil
<gatemezi> @jtandy.. yep! More details on the right column: 49 classes, 24 properties... and the domain of the vocabularies ;
bernadette: We are going to have requirements for vocabularies themselves and best practices for vocabularies themselves. We need more than what we currently have to help guide this.
... If we are going to work on vocabularies we need this tree and more
... pointing to 4.1.2
<hadleybeeman> issue: Bernadette to help us find more use cases on the vocabulary itself (including creating a vocabulary)
<trackbot> Created ISSUE-50 - Bernadette to help us find more use cases on the vocabulary itself (including creating a vocabulary). Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/50/edit>.
bernadette: when I look at 4.1.2 this is not for people using the vocabularies it is for the people creating the vocabularies
phila: 4.1.3 Are there other requirements for metadata?
<Zakim> jtandy, you wanted to ask if we should talk about "discovery metadata"
jtandy: when I see the word metadata, it's so broad in its meaning; do we want to refine it into discovery metadata and usage metadata (which is much richer)?
<annette_g> * +1 for usage metadata
jtandy: for scoping I recommend focusing on discovery metadata...
<Zakim> JeniT, you wanted to ask whether there are particular metadata requirements
phila: I agree and I think we will need to have some usage metadata
bernadette: We are thinking about different kinds of metadata
<KenL> best not to try to silo the metadata
<Eric_Kauz> +1
bernadette: We haven't defined this yet; we should consider what laufer said about the levels. Some metadata relates to the collection and some to the data itself.
jtandy: We've steered away from provenance metadata at this point (csv working group)
bernadette: The collection could consist of different kinds of data, the metadata can be nonspecific to each particular type of data in the data set.
<jtandy> (or at least we've steered away from making a recommendation about inclusion of provenance metadata at this point in order to keep our scope tight)
laufer: We could classify the data by the collection or specific data schema. Someone could define a profile...
... I think it's a good hint that we don't have to focus on metadata for the schema
<Zakim> KenL, you wanted to say distinguishing discovery metadata vs. use metadata is sometimes a slippery distinction because what I would use as a criteria for discovery could be what
laufer: data about the data, not to clarify the items of the schema.
<Zakim> hadleybeeman, you wanted to talk about which metadata — and our scope
ken: If we are talking about different kinds of metadata for discovery and usage: keep it as flat as possible, that's what I would recommend.
hadley: We could take on describing best practices for metadata and not be particularly useful. What is stopping other people from using my data?
<Zakim> jtandy, you wanted to ask how these requirements might actually be tested ... which might help determine if a given requirement should be included in the doc
hadley: What if we think about people not using my metadata because it's not tidy? I am interested to hear what you have to say
jtandy: I agree hadley, you have to ask specific questions: how do you actually test these requirements? How are you going to demonstrate whether they work or not if you are going to put this in a recommendation?
phila: Being able to think about this is useful, we do have to think about how the best practices are based on the requirements. How you validate this.
jtandy: How do you validate this because its really hard to test?
<KenL> q
phila: We've got to be able to narrow down the scope because the current scope is vast.
annette: In the science world usage is a pervasive problem. Unless you can say this column represents this it is meaningless to others.
<hadleybeeman> scribe: hadleybeeman
Bernadette: I'm not sure if we need a requirement of how to associate the metadata to the data collection? The collection will be a set of data, a set of files — the metadata will be in another file. We need a requirement to say how we are going to link these things.
phila: It's in the requirements. R-Citable, asking for a persistent and unique identifier.
Hadley: maybe we need to explain it a bit more?
phila: If people are raising questions, we need to clarify it.
jtandy: the metadata needs to cite the data, not the other way around.
ericstephan: Re Validation: Are we allowed to specify technical approaches for best practices?
phila: yes
ericstephan: We've talked about using JSON, JSON-LD and RDF as examples for metadata. Choosing one or all of them.
<jtandy> (what I meant was that the metadata should cite the data _and_ there should be a way to find the metadata from the data - e.g. like a link header)
<gatemezi> @jtandy : now it's clear enough.. +1
bernadette: for the best practices, we can have more than one implementation for a best practice. The technical approach can be expressed in different implementations
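(Illustrative sketch of the two directions jtandy describes above: a metadata record that cites the data file, and an HTTP Link header on the data file pointing back at the metadata. All URLs are placeholders and the DCAT-style keys are only an example, not text agreed by the group.)

```python
import json

# Metadata record citing (linking to) the data it describes.
metadata = {
    "@id": "http://example.org/dataset/bathing-water",      # placeholder dataset URI
    "@type": "dcat:Dataset",
    "dcat:distribution": {
        "dcat:downloadURL": "http://example.org/files/bathing-water.csv"
    }
}
print(json.dumps(metadata, indent=2))

# Headers a server might send with the CSV itself, so that the metadata
# can be discovered from the data (rel="describedby").
csv_response_headers = {
    "Content-Type": "text/csv",
    "Link": '<http://example.org/dataset/bathing-water>; rel="describedby"',
}
```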
philA: on to Requirements for Licenses.
... I'm sorry to report that an EU project we thought might help didn't get funded. But we did say, and the ODI has made plain, that data should be associated with a license.
... The ODI recommends rights rather than a license.
JeniT: Well, both
Phila: This is more of a commercial angle. What liability do you have as a user, or as a publisher?
<jtandy> +1 ... and if you are publishing data under a free usage license then you should say so - not assume that people will infer that!
Phila: We don't have the legal expertise to develop this (what licenses are, or what rights statements may be) — but this is explicitly out of scope for the group.
... We can just say "stick a license on it."
... If the group has the capacity to go further, then we're open to it.
jenit: I think it should also say "information about rights is available", which is a separate thing. For example, the data may have some third party rights restrictions.
... This should be a separate requirement. Not to specify what that could be, but that it's worth including.
philA: Leigh Dodds wants us to do more.
jeniT: I'm sure there is more to do there.
... Also, why pull out liability terms? There are lots of terms and conditions to put on the use of data.
... Maybe better to say "Requirements for legal compliance". Info about rights, about licenses, and clear terms and conditions (which may include liability)
philA: I think the liability came from Steve Adler
Steve: I'm not sure
BREAK FOR COFFEE, back in 15 mins
<RiccardoAlbertoni> Sorry but I have to leave, Hope you'll continue the good discussion after the coffee..
<Eric_Kauz> PhilA: Provenance
<Eric_Kauz> Phila: Who created this data?
<Eric_Kauz> KenL: Who created it or who owns it.
<Eric_Kauz> Kenl: this gets into policy.
<Eric_Kauz> fjh: which provenance matters, need a bit more guidance.
<Eric_Kauz> Jtandy: where did this data come from, do I trust this data: only one facet of provenance.
<Eric_Kauz> jtandy: provenance means all things to all people. ambiguous.
<Eric_Kauz> Phila: originating organisation with contact details
<Eric_Kauz> ericstephan: should not put this requirement on everybody, originator creator would be sufficient.
<Eric_Kauz> laufer: give the organisation, should be sufficient
<ericstephan_> +1 bernadette
<Eric_Kauz> BernadetteLoscio: is this just simple metadata about who created the data?
<Caroline> +1 BernadetteLoscio
<Eric_Kauz> ericstephan_: we need to be explicit about provenance and what it means.
<Eric_Kauz> phila: can we simplify this to originating organisation
<Eric_Kauz> BernadetteLoscio: If we define organisation, we have to define other metadata
<Zakim> KenL, you wanted to say suggest we accept Phil's original requirement as placeholder because we can spend days trying to resolve this. Defer until later.
<Eric_Kauz> hadleybeeman: talking about different things, origin and creator is a specific use case, needs to be backed up by UC and evidence,
<Zakim> hadleybeeman, you wanted to ask if we are developing use cases in this discussion
<Eric_Kauz> hadleybeeman: otherwise have to define for all other metadata, is there another word
<Eric_Kauz> phila: do we need to change provAvailable
<Eric_Kauz> hadleybeeman: make it an issue that word provenance is unclear and needs to be better defined.
issue: Phil to clarify the use of the word "provenance" any potential confusion it causes
<trackbot> Created ISSUE-51 - Phil to clarify the use of the word "provenance" any potential confusion it causes. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/51/edit>.
<Eric_Kauz> hadleybeeman: proposed as an issue,
<Eric_Kauz> ericstephan_: need to establish a minimum set of provenance
<annette_g> * +1 to Eric
<Eric_Kauz> ericstephan_: provenance vocabulary is highly complex. Need to identify minimum requirements set
<jtandy_> +1 to ericstephan_ ... agreed that the provenance requirement should start by indicating a minimal set of requirements
<Eric_Kauz> BartvanLeeuwen: are we deviating from the process? We are discussing each item over again.
<Eric_Kauz> laufer: we are discussing meaning of it, not that we have to give all information. We have an example of people wanting simple, but there are others that are more complex.
<jtandy_> agree with laufer ... if people can (& want) to provide complex provenance information they should be able to do so
<Eric_Kauz> hadleybeeman: what do we do regarding confusion on terms
<ericstephan_> I agree jtandy, but I think we need to have a minimal set defined for validation
issue: laufer to help us think about how to address our confusion of terms. (glossary?)
<trackbot> Created ISSUE-52 - Lauter to help us think about how to address our confusion of terms. (glossary?). Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/52/edit>.
<Eric_Kauz> Phila: requirements for industry reuse; this goes to the motivation of the working group. If we are building an ecosystem, we need SLAs.
<Eric_Kauz> phila: "data should be suitable for industry reuse" is vague.
<Eric_Kauz> phila: service level agreements are at the heart of it.
<Eric_Kauz> Ig_Bittencourt: there's a difference regarding reuse; it should be "data should be available for reuse". Not currently a good requirement
<Eric_Kauz> jtandy_: what are the criteria for "suitable for reuse" for an industry? The revenue stream requirement should be removed.
<Eric_Kauz> BartvanLeeuwen: there was a breakout session on financial benefits; no one is giving out figures on the monetary advantages of using open data.
<Eric_Kauz> laufer: are we talking about contracts? All of them are requirements.
<Eric_Kauz> steve: if there is no service level agreement, companies will not use it.
<Eric_Kauz> steve: 90 percent of open data sites do not have an SLA, it is out there but can be removed anytime.
<Eric_Kauz> steve: many license agreements have restrictions. They say they have the ability to remove the data anytime; potential revenue is a misnomer
<Zakim> hadleybeeman, you wanted to suggest changing this from a "should" to a "may"
<jtandy_> +1 to steve ... the SLA needs to be included as a separate item to indicate a data publisher's commitment to keeping data available or that it will be refreshed on a particular frequency etc.
<Eric_Kauz> hadleybeeman: there is a question of how these are put in requirements vs. how we are going to discuss it in best practices
<phila> ISSUE: Whether SLA is/can be thought of as part of the licence or whether it needs to be pulled out separately?
<trackbot> Created ISSUE-53 - Whether sla is/can be thought of as part of the licence or whether it needs to be pulled out spearately?. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/53/edit>.
<Eric_Kauz> hadleybeeman: industry is a vague term.
<Eric_Kauz> hadleybeeman: proposes changing the requirement from "an SLA should be available" to "may be available"... we do not want to stop someone from using an SLA
<Zakim> JeniT, you wanted to say that there’s a point of publishing data for access rather than reuse and to talk about guarantees for availability separate from SLAs and to say that
<Eric_Kauz> JeniT: plenty of times people are publishing to provide access to data
<jtandy_> am happy to concede to JeniT's point :-)
<Eric_Kauz> JeniT: distinction regarding API availability; also important for users to have guaranteed availability over a long period of time, not just uptime.
Perhaps we need to be clear about what we mean when we say "service level agreement"
<Eric_Kauz> JeniT: the API will be available for, for example, 5 years; also an SLA should be different from a license.
<jtandy_> in addition to commitment for availability, an 'SLA' might include the refresh rate for the data
<Eric_Kauz> BernadetteLoscio: why should this be different for industry and not for someone else.
<phila> issue-53?
<trackbot> issue-53 -- Whether sla is/can be thought of as part of the licence or whether it needs to be pulled out spearately? -- raised
<trackbot> http://www.w3.org/2013/dwbp/track/issues/53
<BernadetteLoscio> +1
<Eric_Kauz> phila: proposal is that industry reuse and potential revenue be deleted.
<chunming> +1
<yanai> +1
<JeniT> +1 (observer)
<BernadetteLoscio> +1
<laufer> +1
<Ig_Bittencourt> +1
<Eric_Kauz> +1
<Caroline> +1
<yaso> +1
<phila> PROPOSED: Delete R-IndustryReuse and R-PotentialRevenue as requirements
<jtandy_> +1
<ericstephan_> +1
+1
<Caroline> +1
<chunming> i would like the 3rd party reuse
<newton_> +1
<BartvanLeeuwen> +1
<phila> RESOLVED: Delete R-IndustryReuse and R-PotentialRevenue as requirements
<Eric_Kauz> chunming: SLA: static data sets or dynamic data sets? If static, the SLA is related to trust of the data; for dynamic data sets there would be other metrics: freshness, real-time guarantees
<jtandy_> +1 to comment from chunming
<Eric_Kauz> chunming: maybe we can find another terminology to use instead of SLA
<jtandy_> @Caroline ... Eric has minuted his comment fairly well
<Eric_Kauz> phila: static and dynamic data is coming up. Timeliness and quality are being covered.
<Zakim> KenL, you wanted to say SLA should be replaced with Applicable Policies because agreement is two sided and here we are stating conditions of use by owner/provider
<jtandy_> @Caroline ... happy to help :-)
<Eric_Kauz> KenL: what an SLA is isn't defined; in my day job we are talking about conditions of use, describing what you are getting. SLA is the wrong term.
<BartvanLeeuwen> +1 to KenL
<ericstephan_> +1
issue: the term "SLA" is vague, undefined, and may not actually represent an agreement between the publisher and reuser
<trackbot> Created ISSUE-54 - The term "sla" is vague, undefined, and may not actually represent an agreement between the publisher and reuser. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/54/edit>.
<Eric_Kauz> laufer: concept of the contract, including license and sla, needs to be addressed.
<KenL> would suggest avoid "service level" altogether because that still comes with baggage.
<Eric_Kauz> hadleybeeman: word contract has different legal meanings in different countries.
<Eric_Kauz> phila: wants to talk about new use cases and new requirements besides reviewing existing requirements
<phila> New use cases
<Eric_Kauz> phila: need to go over new requirements and see if we are missing any; the science ones do not add new requirements; wants annette and eric to check that work.
<Eric_Kauz> steve: what are the ethics of EU-funded workshops being entered in as requirements?
<Eric_Kauz> phila: they have same voting power as anyone else in group
<Eric_Kauz> phila: need to revisit data granularity; it's vague
<Eric_Kauz> BernadetteLoscio: granularity? terminology issue, scope, aggregations?
issue: the word "granularity" can been many things. scope, city/state/country, data aggregation
<trackbot> Created ISSUE-55 - The word "granularity" can been many things. scope, city/state/country, data aggregation. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/55/edit>.
<Eric_Kauz> ericstephan_: may want to split granularity out to what different domains think of granularity.
<Eric_Kauz> phila: how to select high value, how to respond to demand and lifecycle
<Eric_Kauz> phila: issue more policy than tech.
<Eric_Kauz> laufer: how people define granularity is not within our scope.
<Eric_Kauz> Ig_Bittencourt: second requirement related to selection of publication
Our charter asks us about best practices for "technical factors for consideration when choosing data sets for publication;" http://www.w3.org/2013/05/odbp-charter.html
<Eric_Kauz> steve: we cannot prioritize based on perceived value; this is the antithesis of open data
<laufer> +1 to steve
<Zakim> JeniT, you wanted to question whether data selection should be in scope
<Eric_Kauz> JeniT: organisations need to pick and choose, but this should still be out of scope
<Eric_Kauz> jtandy_: scope should be limited to "once you have decided what to publish, this is what you should do". It should be post-decision on what to publish
<Eric_Kauz> BernadetteLoscio: high value data is very subjective,
<Eric_Kauz> BernadetteLoscio: requirements on tech level related to data source used to publish but not clear to her.
<Eric_Kauz> BernadetteLoscio: not clear if we need requirements for data selection.
<Eric_Kauz> hadleybeeman: we cannot make rules for what to publish and not to publish; I do not know how to write something here that is testable.
<CarlosIglesias> I think this is not about what to publish and what not to publish
<CarlosIglesias> but what to prioritize
<Eric_Kauz> phila: if we are trying to give publishers guidance on how to use this stuff, we need a policy framework, for example for responding to feedback; part of that policy is how to prioritize based on budget
<CarlosIglesias> in an ideal world you may be able to open data by default
<CarlosIglesias> in the real one it takes quite a lot of time and resources
<Eric_Kauz> phila: need some framework on how decisions should be taken.
<CarlosIglesias> that's why we see few data so far
<CarlosIglesias> and that's why publishers need some guidelines on how to prioritize efforts to get the best value and ROI
<Eric_Kauz> ericstephan_: when publishing instrument data, there is raw stuff, ingest data, higher quality data, perhaps an indicator on type of data you are getting.
<CarlosIglesias> it is not about perceived value, but real value
<Eric_Kauz> ericstephan_: complete transparency
<KenL> value is in the eye of the beholder.
<CarlosIglesias> there are techniques that have been already in use for that
<Eric_Kauz> Ig_Bittencourt: legal limits to not publish certain data.
<Eric_Kauz> hadleybeeman: policy is out of scope for us.
<Eric_Kauz> hadleybeeman: focus on how to publish if you want to publish.
<KenL> issue of authorization for use; value is whether you think you will find value and may be argument that you want to be authorized
<Zakim> hadleybeeman, you wanted to disagree :)
<Eric_Kauz> steve: agree we should not make recommendations on what to publish.
<Eric_Kauz> laufer: don't know value before users use the data.
<adler1> +1
<chunming> +1
<CarlosIglesias> you can know asking the users for example
PROPOSED: The topic of selecting data for publication is out of scope for this working group
<jtandy_> +1
<BartvanLeeuwen> +1
<yaso> +1
<ericstephan_> +1
<laufer> +1
<Caroline> +1
<Ig_Bittencourt> +1
<phila> 0
<jtandy_> scribenick: jtandy_
<CarlosIglesias> -1 but can live with that
phila: talks about why he abstained ...
<newton_> +1
phila: the Norwegian government has a traffic light framework for making data publication choices
BernadetteLoscio: perhaps we can include the selection of data in our docs - but not define this as requirements (in the Rec track)
hadleybeeman: concerned about putting things in the doc that don't come from use cases or end up in the Rec
... perhaps we could publish a separate note?
BernadetteLoscio: we need this information for context - but I don't know how to organise this information
... if not in the Rec - then where?
<phila> This sounds like an issue to me, not for now
hadleybeeman: my feeling is that it's better in a note
... not a Rec
<hadleybeeman> fair point, phila
<hadleybeeman> issue: We need context and examples. Do they go into the rec-track documents or into a separate note?
<trackbot> Created ISSUE-56 - We need context and examples. do they go into the rec-track documents or into a separate note?. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/56/edit>.
ericstephan_: we have talked about this in the past - there may be examples that complement the Rec but don't add any new requirements
laufer: we talk about what types of metadata that might be useful - but not why .... we need to add information about the 'ecosystem' (or metamodel) within which these requirements exist
KenL: on subject of metamodel / ecosystem ... there is a reference architecture published by OASIS (from a group I chair) that discusses this
(link coming)
phila: pleased with the outcome of that discussion
... we decided to delete R-SelectHighValue
hadleybeeman: did we do this in IRC?
<KenL> http://docs.oasis-open.org/soa-rm/soa-ra/v1.0/cs01/soa-ra-v1.0-cs01.pdf
phila: I'm happy that the issue is covered - but offer a variation on hadleybeeman's proposal
<phila> PROPOSED: The topic of selecting data for publication is out of scope for this working group, however, the BPs do need to be contextualised as discussed
<Ig_Bittencourt> +1 to adler1
<phila> PROPOSED: The topic of selecting data for publication is out of scope for this working group, however, the BPs do need to be contextualised as discussed. That done, R-SelectHighValue and R-SelectDemand should be removed
<hadleybeeman> +1
adler1: (steve) suggests that "demand" is very similar to "high value" and should also be removed
<Ig_Bittencourt> +1
<BernadetteLoscio> +1
<phila> +1
<laufer> +1
<newton_> +1
+1
<Caroline> +1
<annette_g_> +1
<adler1> +1
<BartvanLeeuwen> +1
<ericstephan_> +1
<CarlosIglesias> 0
<phila> RESOLVED: The topic of selecting data for publication is out of scope for this working group, however, the BPs do need to be contextualised as discussed. That done, R-SelectHighValue and R-SelectDemand should be removed
hadleybeeman: resolved
<annette_g_> oops, mine shouldn't count
<hadleybeeman> It's still useful to hear your thoughts, annette_g_
<annette_g_> * :)
phila: given we've removed high value and demand, let's move R-DataLifecyclePrivacy somewhere else
... notes that the text of R-DataLifecyclePrivacy doesn't match the title ... this is nothing to do with Privacy
... let's rename it to R-DataLifecycleIPR
adler1: what does "individual's intellectual property rights" actually mean?
<Zakim> hadleybeeman, you wanted to suggest moving policy and privacy issues to notes
??: this is about copyright etc.
hadleybeeman: this is about policy which varies from country to country so this shouldn't be included in the Rec
KenL: we just need to be able to specify the conditions of use - not include policy about what to do about that condition of use
... we enable people to express conditions of use - and look further at how we might categorise that
phila: it isn't w3c's place to talk about policy - but we do talk about privacy and security!
... can we just include this kind of information in the non-normative section?
hadleybeeman: worries that we would need to go through _all_ use cases to pick this out
JeniT: just say "don't do illegal stuff"
<JeniT> my meaning is that there is no need to say ‘don’t do illegal stuff'
laufer: sometimes companies don't know if they're legal or not
<hadleybeeman> +1 to jeniT
phila: are we saying that issues on privacy, security etc. should be out of scope?
hadleybeeman: yes - because there's nothing _technical_ that we're trying to say
<phila> PROPOSED: R-DataLifecyclePrivacy, R-SensitivePrivacy and R-SensitiveSecurity should be deleted from the requirements as they are out of scope for a W3C Rec Track doc. Any discussion should be, at most, informative.
<hadleybeeman> +1
<ericstephan> 0
<BartvanLeeuwen> +1
<Ig_Bittencourt> 0
laufer: we should identify these kinds of issues that data publishers should care about
<yanai> +1
<laufer> +1
<annette_g_> +1 (observer)
<AdrianoC> +1
<chunming> 0
ericstephan: this (moving out of scope) is fine ...
phila: we're not just about open data here ... we have to consider closed data too
<newton_> +1
phila: I'm concerned that if all these issues are removed we look like the "open data bp" WG
fjh: emerging best practice for Recs is to include a security and privacy section - and this should be reviewed by the relevant other WG within W3C
<Zakim> JeniT, you wanted to say that there are things that you should still do to address privacy issues
JeniT: you seem to be taking out all the things about privacy
... but there are still things you can do to include information that helps end-users feel more comfortable about using the data
... the ODI's open data certificate includes recommendations to, say, include a data-privacy impact statement
KenL: we can have a range of restrictions; we should catalogue these
<fjh> +1 to mentioning security/privacy assessment in a security/privacy considerations section
<hadleybeeman> Issue: R-DataLifecyclePrivacy, R-SensitivePrivacy and R-SensitiveSecurity and the topics they represent may or may not be in scope for the working group
<trackbot> Created ISSUE-57 - R-datalifecycleprivacy, r-sensitiveprivacy and r-sensitivesecurity and the topics they represent may or not be in scope for the working group. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/57/edit>.
annette_g_: suggests that a dataset should be reviewed prior to publication according to the publisher's own policies - this should be part of the best practice
(back at 13:00)
<BartvanLeeuwen> scribe: BartvanLeeuwen
<jtandy> scribenick: BartvanLeeuwen
Zakim who is on the phone
<phila> zakimn, ipcaller is deirdrelee
<hadleybeeman> 4.1.9: Requirements for data access
hadleybeeman, can we define bulk
Issue: What do we mean by bulk? (4.1.9 R-AccessBulk)
<trackbot> Created ISSUE-58 - What do we mean with bulk 4.1.9 r-accessbulk. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/58/edit>.
<Zakim> jtandy, you wanted to ask about "realtime" ... do you mean milliseconds?
jtandy: Realtime tends to mean streamed not stored
<ericstephan> +1 Jtandy's comment on real time
<yaso> +1
<hadleybeeman> issue: we should agree on a definition for "real time"
<trackbot> Created ISSUE-59 - We should agree on a definition for "real time". Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/59/edit>.
<ericstephan> +1
annette_g: some people want to avoid bulk, so make slices of data available
<hadleybeeman> would it help to phrase this in terms of user needs? People want bulk data because... sometimes they want more or less of the dataset?
annette_g: in research e.g. supercomputing files are huge
Ig_Bittencourt: about access realtime: if we talk about producing, there is no difference between R-AccessRealTime and R-AccessUpToDate
BernadetteLoscio: when we talk about data access we talk about the different ways the data is available
... can we download a file or use an API
<Zakim> JeniT, you wanted to ask about how bulk works for eg sensor data
JeniT: sensor data: if we want bulk data download, does it mean we should archive sensor data?
phila_: subsetting large data sets, yes I understand; is there scope in the current use cases?
<hadleybeeman> ACTION: Eric Stephan and Annette to write a use case about real-time and bulk data [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action04]
<trackbot> 'Eric' is an ambiguous username. Please try a different identifier, such as family name or username (e.g., ek1, estephan).
phila_: R-AccessUpToDate and R-AccessRealTime are different
... e.g. transport data, I need to know that on a half-minute scale
<jtandy> adding to JeniT's comment: where realtime data is available as a stream, it is also beneficial to save the data to an archive & offer bulk access to that archive to download (historical) lumps of data
phila_: trains don't need millisecond accuracy
<hadleybeeman> ACTION: Eric S to work with Annette to write a use case about real-time and bulk data [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action05]
<trackbot> 'Eric' is an ambiguous username. Please try a different identifier, such as family name or username (e.g., ek1, estephan).
<hadleybeeman> ACTION: EricStephan and Annette to write a use case about real-time and bulk data [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action06]
<trackbot> Created ACTION-109 - And annette to write a use case about real-time and bulk data [on Eric Stephan - due 2014-11-06].
phila_: up to date means within the published update timescale, e.g. weekly, then the data should be no more than a week old
ericstephan: realtime does not always mean immediately available on the web
laufer: the archiving is important; if we say we publish every week, we should be able to get old documents
... realtime means the same document is changed
<Zakim> jtandy, you wanted to note that R-AccessRealTime is a special case of R_AccessUpToDate
jtandy: access realtime is a special case of access up to date, where the update frequency is a defined period
<Ig_Bittencourt> +1 to jtandy
jtandy: met office provides near realtime data on lightning strikes, 10sec update after a strike
<Zakim> deirdrelee, you wanted to say up-to-date applies to both bulk & real-time data
deirdrelee: up to date applies to both bulk & realtime
<ericstephan> +1
<EricKauz> +1
<hadleybeeman> +1
<ericstephan> +1deirdre
<jtandy> (I think it's something like 10-seconds ... take that as illustrative)
<deirdrelee> is 'stream' better word than 'real-time'
<deirdrelee> up-to-date applies to both bulk & real-time
BernadetteLoscio: should we have a req for different types of data access
... api, download etc
... how do you make the data available
... we should at least say what the options are
adler1: do we have the refresh rate in a vocabulary? These 2 things are different and should have 2 elements in the vocabulary
hadleybeeman: should the spec have 2 different elements to describe this.
laufer: BernadetteLoscio has raised an issue: if we have an API we have no means of knowing if the data is realtime or up to date
<hadleybeeman> issue: Does 4.1.9 accurately reflect the use cases? And can it be better summarised as data availability?
<trackbot> Created ISSUE-60 - Does 4.1.9 accurately reflect the use cases? and can it be better summarised as data availability?. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/60/edit>.
laufer: if we distribute files we can see if it's updated, but not with realtime data
phila: explains DataUnavailability: a dataset might reference a non-open data set
... it should say how to access that data
ericstephan: I support this requirement, I have use cases which have subject matter experts who do not want to disclose the data
Ig_Bittencourt: should it be machine readable ?
phila: we have the requirement written somewhere else, but it could point to a document at e.g. another website
phila: R-UniqueIdentifier: the wording almost implies Linked Data
<ericstephan> resource can be restful without being linked (sorry not meaning to start holy war)
BernadetteLoscio: the terminology should go in the glossary
ericstephan: resource can be restful without being linked (sorry not meaning to start holy war)
BernadetteLoscio: we should concern ourselves only with the dataset and files, and not the model inside
... the data model defines whether there are unique identifiers or not
<hadleybeeman> "By design a URI identifies one resource. We do not limit the scope of what might be a resource." http://www.w3.org/TR/webarch/#id-resources
jtandy: if you are not using a URI, it's not on the web
<markharrison> hello hadleybeeman and all - sorry for the delay in joining
<Zakim> deirdrelee, you wanted to talk about opengoup UDEF standard
<phila> deirdrelee: UDEF was mentioned last week at an event
<hadleybeeman> PROPOSED: include the term "URI" in 4.1.11 R-UniqueIdentifier
<ericstephan> +1
+1
<Ig_Bittencourt> +1
<yanai> +1
<deirdrelee> UDEF http://www.opengroup.org/udef/
<AdrianoC> +1
<laufer> +1
<newton_> +1
<annette_g> +1 (observer)
<jtandy> +1 (observer) ...
<phila> +1
<yaso> +2 yaso
<yaso> ops
<markharrison> +q to ask whether we mean a web-resolvable URI (e.g. HTTP URI) versus a URN?
<BernadetteLoscio> +1
<yaso> 1+
<hadleybeeman> +1
markharrison: specifically state web-resolvable URIs
<ericstephan> +1 jtandy
<Zakim> markharrison, you wanted to ask whether we mean a web-resolvable URI (e.g. HTTP URI) versus a URN?
jtandy: a URL for every data item drives people nuts, I have petabytes of weather data
BernadetteLoscio: are we talking about the files or the instances in the files
<jtandy> I want to _identify_ data items / resources ... but I don't necessarily want to have to put the machinery in place to make them all resolve on the web ... HTTP 404 response is fine; it just means "not found"
<Zakim> phila, you wanted to propose Each data resource should be associated with a unique identifier as a minimum at the dataset level. Such identifiers are expected to be Web resolveable
ericstephan: i like it except for dataset
BernadettLoscio: if we are going to define what we mean by dataset, that needs to go in a glossary or point to DCAT
<phila> I'm thinking this is an issue...
<hadleybeeman> Proposed: use DCAT definitions of "dataset" etc
<phila> +1
+1
<newton_> +1
<ericstephan> +11
<yaso> +1
<Ig_Bittencourt> +1
<yanai> +1
<laufer> +1
<annette_g> +1 (observer)
<BernadettLoscio> +1
<markharrison> +1
<hadleybeeman> +1
<AdrianoC> +1
<EricKauz> +1
<jtandy> 0 (observer)
<Caroline> +1
<hadleybeeman> resolved: use DCAT definitions of "dataset" etc
<CarlosIglesias> +1
<phila> MIME TYPES....
<phila> RFC7111 gives URIs for each cell in a CSV
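(For reference, a sketch of the RFC 7111 fragment identifiers phila mentions, applied to a placeholder CSV URL.)

```python
# RFC 7111 defines fragment identifiers for text/csv: row, column and cell selectors.
base = "http://example.org/files/bathing-water.csv"   # placeholder URL

print(base + "#row=5")      # row 5 of the file
print(base + "#col=2")      # column 2
print(base + "#cell=5,2")   # the cell at row 5, column 2
```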
phila: for a UCR this is fine with me, we need to rephrase it in the BP
hadleybeeman: now is not the time to discuss this
<hadleybeeman> PROPOSED: Leave R-UniqueIdentifier as a requirement and expect much discussion when we work out what the best practice should be to meet it.
<phila> +1
<ericstephan> +1
<laufer> 0
+1
<hadleybeeman> +1
<annette_g> +1 (observer)
<jtandy> +1 (observer)
<Ig_Bittencourt> +1
<BernadettLoscio> +1
<newton_> +1
<yaso> +1
<EricKauz> +1
<Caroline> +1
<yanai> +1
<markharrison> +1
<hadleybeeman> RESOLVED: Leave R-UniqueIdentifier as a requirement and expect much discussion when we work out what the best practice should be to meet it.
<Zakim> JeniT, you wanted to say that R-MultipleRepresentations duplicates R-FormatMultiple
JeniT: R-MultipleRepresentations duplicates R-FormatMultiple
<hadleybeeman> ACTION: phil to remove R-MultipleRepresentations as a duplicate of R-FormatMultiple [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action07]
<trackbot> Created ACTION-110 - Remove r-multiplerepresentations as a duplicate of r-formatmultiple [on Phil Archer - due 2014-11-06].
jtandy: does this lean towards content negotiation?
... the wording should reflect that
markharrison: we are happy with content negotiation
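(Illustrative sketch of the content negotiation being discussed: the same dataset URI requested in two formats via the Accept header. The URI is a placeholder and the server is assumed to support negotiation.)

```python
import requests

uri = "http://example.org/dataset/bathing-water"  # placeholder URI

# Ask for the same resource as CSV and as JSON.
as_csv = requests.get(uri, headers={"Accept": "text/csv"})
as_json = requests.get(uri, headers={"Accept": "application/json"})

print(as_csv.status_code, as_csv.headers.get("Content-Type"))
print(as_json.status_code, as_json.headers.get("Content-Type"))
```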
phila: R-Citable carries a lot of weight from the research data
... it's about citing not only the paper but also the data
adler1: isn't this a license term, attribution ?
<ericstephan> +1
ericstephan: a URI is not the strongest identifier, the DOI is stronger. E.g. in chemistry the data needs to be available for 5 years after your publication
... so the data doesn't have to be available / citable after 5 years
jtandy: if you can't cite the data it's not on the web
... so if there is no mechanism for a hyperlink, it's not on the web
phila: there is a whole lot of work going on; it's more than just a pointer
adler1: there is a whole lot more than just research; also law
<jtandy> +1 to adler1's comment ...
BernadettLoscio: should this be together with the data usage requirements?
<Ig_Bittencourt> Scribe: Ig
<hadleybeeman> ACTION: phil to merge R-Citable with the data usage requirements [recorded in http://www.w3.org/2014/10/30-dwbp-minutes.html#action08]
<trackbot> Created ACTION-111 - Merge r-citable with the data usage requirements [on Phil Archer - due 2014-11-06].
<Ig_Bittencourt> laufer: maybe this thing is related to prov
<Ig_Bittencourt> jtandy: this requirement is an example of getting data on the web
<Ig_Bittencourt> ... it is an example of the kind of thing this group should focus on
<Ig_Bittencourt> ... this is a good practice
<Zakim> BartvanLeeuwen, you wanted to discuss talks at iswc
<Ig_Bittencourt> jtandy: it is something really tangible
<Ig_Bittencourt> BartvanLeeuwen: it is exactly about how we publish data
<Ig_Bittencourt> ... there is a movement in this direction
<Ig_Bittencourt> ... but maybe more interested in DCAT.
<jtandy> phila: notes that citation requirement is one of the clear deliverables mentioned in the WG Charter
<Ig_Bittencourt> phila: there is a manifesto related to data citation
<hadleybeeman> http://www.force11.org/AmsterdamManifesto
<Ig_Bittencourt> phila: we can probably refer to them
<ericstephan> Am I a non-web resource? ;-)
<Ig_Bittencourt> hadleybeeman: do we care where data comes from...
<Ig_Bittencourt> BernadettLoscio: data should be updated
<Ig_Bittencourt> jtandy: if you update, then you should make sure it is published on the web
<Ig_Bittencourt> ... and that is visible to all.
<Ig_Bittencourt> ... kind a policy decision
<hadleybeeman> deirdre, I think that makes sense, but I also think it may be out of scope
<Ig_Bittencourt> adler1: we can tell people what things are...
<Ig_Bittencourt> ericstephan: just database... they are producing data...
<Ig_Bittencourt> .. not necessarily on the web.
<Ig_Bittencourt> deirdrelee: just talk about the dynamic
<Ig_Bittencourt> ... which is similar to real time data...
<Ig_Bittencourt> hadleybeeman: I think this is out of the scope
<Ig_Bittencourt> adler1: make a recommendation to measure
<Ig_Bittencourt> ericstephan: I like that sense
<Ig_Bittencourt> hadleybeeman: is the requirement to tell how old certain data is
<Ig_Bittencourt> adler1: you can make a sentence if you want to measure it
<Ig_Bittencourt> ... i think if you put on the measurement
<hadleybeeman> and having the date of publication in the metadata allows reusers to measure
<Ig_Bittencourt> BernadettLoscio: I think Data Publication is the whole thing
<Ig_Bittencourt> ... data access has to be discussed
<Ig_Bittencourt> ... I think we can bring this to the Data Access section
<Ig_Bittencourt> ... bring R-SynchronizedData
<hadleybeeman> What is a core register?
<Ig_Bittencourt> ... and R-CoreRegister is really related to the use case
<ericstephan> Does the Apple corporation have a "core" registry?
<hadleybeeman> "core register" comes from a data selection process
<Ig_Bittencourt> jtandy: this is about having access to reference
<Ig_Bittencourt> ... I agree if you publish data, you can use ...
<Ig_Bittencourt> ??
<hadleybeeman> PROPOSED: 4.1.12 should be removed
<hadleybeeman> +1
<phila> +1
<Ig_Bittencourt> +1
<laufer> +1
<newton_> +1
<annette_g> +1 (observer)
<BernadettLoscio> +1
<Caroline> +1
+1
<yaso> +1
<ericstephan> +1
<jtandy> (what I was saying is that if your data references terms from a code list, the code list should also be published)
<hadleybeeman> RESOLVED: 4.1.12 should be removed
<jtandy> +1 (obs)
<deirdrelee> ok
<jtandy> (back in 15 mins)
<deirdrelee> are we talking about last biscuit at coffee break?
<deirdrelee> ah :)
<Ig_Bittencourt> BernadettLoscio: here we are talking about persistence
<Ig_Bittencourt> ... and I don't think persistence and archiving are the same thing
<Ig_Bittencourt> BernadettLoscio: sometimes preservation has a context
<Ig_Bittencourt> .. you need to transfer to something else
<Ig_Bittencourt> BernadettLoscio: but this is for preservation
<Ig_Bittencourt> ... is it the same for archived data?
<Ig_Bittencourt> ... I don't know if we need to talk about this here.
<Ig_Bittencourt> BernadettLoscio: the data is already persistent.
<Ig_Bittencourt> phila: but it is a persistent identifier
<Ig_Bittencourt> ... you could update your software
<Ig_Bittencourt> ... but your identifier is the same
<Ig_Bittencourt> adler1: we only really care about publication
<Ig_Bittencourt> phila: but if you for some reason remove the data
<Ig_Bittencourt> ... and you want the data again
<yaso> scribe: yaso
phila: that issue is already addressed
… it must be possible to follow the BP
… there are cases where data will not be made available irt
<hadleybeeman> issue: R-archiving appears to be out of scope. We must ask Christophe, who put it in
<trackbot> Created ISSUE-61 - R-archiving appears to be out of scope. we must ask christophe, who put it in. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/61/edit>.
BernadettLoscio: we should create another challenge
ericstephan: If data was archived you can make a requirement
HadleyBeeman: what can we put in specs to solve this problem
phila: what we need to cover
<markharrison> +q
HadleyBeeman: I’m trying to understand what are the practices, why it is important for data on the web
… what’s the difference between posting anything on the web and archiving it
adler1: if anything is being archived off the web then it isn't in our scope
<deirdrelee> +1
<hadleybeeman> Is there a difference between archiving and "persistence of data"?
phil: and yes in that case the data should persist
phila: removing something is not to say that is not important
… what we agreed so far, as yet to be fleshed out, is that the document will include non-normative references
… that’s why the room is arguing to remove this thing about archiving data
<Zakim> annette_g, you wanted to ask what about redaction?
deirdrelee: you should be able to request data
<phila> PersistentIdentification says: An identifier for a particular resource should be resolvable on the Web and associated for the foreseeable future with a single resource or with information about why the resource is no longer available.
<Makx> +1 to what phil proposed
hadleybeeman: is that something that you put in a technical spec
laufer: the extension for the proposal that Phil made
… we don’t have to think outside the web
phila: that’s not in our scope
laufer: this could be in the initial stage, I can have a resource and a URI where the guy can send me an email and ask for that resource
<newton> scribe: newton
ig: I was wondering if deep web is in our scope
phil: out of the scope
<phila> issue: What info is given when dereferencing a persistent Identifier after the resource has been removed/archived
<trackbot> Created ISSUE-62 - What info is given when dereferencing a persistent identifier after the resource has been removed/archived. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/62/edit>.
<hadleybeeman> ig: deep web = data that is not over HTTP
<phila> issue: If a resource is archived, is the correct response 410, 303 or something else?
<trackbot> Created ISSUE-63 - If a resource is archived, is the correct response 410, 303 or something else?. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/63/edit>.
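(Illustrative sketch only, relating to ISSUE-62/63: one way a server could answer a request for an archived dataset, returning 410 Gone plus a pointer to information about why the resource is no longer available. Flask, the route and all URLs are assumptions, not a group decision.)

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/dataset/bathing-water")
def archived_dataset():
    # The identifier keeps resolving, but the response says the data was archived
    # and where to read about it (the "tombstone" page is a placeholder idea).
    body = jsonify({
        "status": "archived",
        "seeAlso": "http://example.org/dataset/bathing-water/tombstone"
    })
    return body, 410  # 410 Gone: the resource existed but has been removed
```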
<Ig_Bittencourt> different classifications for Deep Web: http://en.wikipedia.org/wiki/Deep_Web
laufer: is the URI the identification of the resource or of the other things?
phil: Do we want to remove R-Archiving?
<deirdrelee> nah, i'm convinced :)
<phila> PROPOSED: That the R-Archive requirement be removed
<jtandy_> +1 (observer)
<hadleybeeman> +1
<ericstephan> +1 to tossing archival
<phila> +1
<laufer> +0.99
<BartvanLeeuwen> +1
<annette_g> +1 (observer)
+1
<Ig_Bittencourt> 0
<adler1_> +1
<Caroline> +1
<BernadettLoscio> +1
<EricKauz> +1
<Makx> +0
<AdrianoC> +1
Ig: I'm still not convinced about that, because we can be archiving in the deep web
<Makx> i'm just abstaining
<Ig_Bittencourt> +1
<phila> RESOLVED: That the R-Archive requirement be removed
<Makx> either way fine with me
phila: Now we're going to discuss the requirements for Data Quality
BernadettLoscio: I've a question
about Data Quality, It's related to Metrics
... Metrics are domain independent
BartvanLeeuwen: Quality is so subjective. It's so usage dependent
<Makx> my proposal would be to distinguish subjective quality and objective metrics
<scribe> scribe: yaso
<Makx> see link i sent by email to legislation.gov.uk FAQ
<jtandy_> I agree with @Makx ... it's important to address objective quality (based on the result of test that have been done on the data) and subjective quality (based on, say, the fact it was made using a quality management system)
<Makx> will try
ericstephan: one of the speakers yesterday
… we were listening to some of the speakers yesterday… I was thinking about how we can be smart using the web
<Zakim> jtandy_, you wanted to note prior art ... ISO 19156:2013 Geographic information - data quality
<jtandy_> http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=32575
jtandy_: you’re not alone about data quality
<phila> Makx's e-mail on this topic
<Makx> http://lists.w3.org/Archives/Public/public-dwbp-wg/2014Oct/0132.html
<ericstephan> From TPAC: the term Mass participation relating to QA ? – Di-Ann Eisnor
HadleyBeeman: I wonder if we need more use cases on this. It sounds like.. there’s specific domains
<Makx> example was http://www.legislation.gov.uk/help#aboutChangesToLeg but other FAQ items are relevant for quality too
<phila> What Hadley is saying is the kind of thing I've always had in mind on that
… I don’t know how to turn a use case into a deliverable
bernadettLoscio: I agree with Hadley, in some cases we can have a metric, but in others, no
<ericstephan> I wonder if more use cases would make it more complicated. Lots of qa criteria..
… it is more general; if we identify what is more general, we can have simple metrics, maybe that can be possible
scribe: there’s a lot of discussion about dimensions of data quality
… in some cases it’s necessary to have the metrics
<ericstephan> I'd prefer to have a handle to say here's the data quality and let me describe it in my own way
<Makx> +1 to eric
<phila> +1 to Hadley
jtandy_: if you have quality exceptions,
… you don’t test it before the data is produced
<Zakim> jtandy_, you wanted to note that publishers should indicate the completeness - not that it necessarily needs to be complete
<annette_g> +1 to jtandy
… I like to see that change in the document
<phila> issue: Jeremy T's expression of concern over 'data must be complete' - not realistic. Better to say where it isn't complete
<trackbot> Created ISSUE-64 - Jeremy t's expression of concern over 'data must be complete' - not realistic. better to say where it isn't complete. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/64/edit>.
<jtandy_> (the example is sensor data ... if the sensor fails and the dataset has gaps it is "incomplete" but I would still want to publish that data)
ericstephan: I think that our previous UC document has examples
<hadleybeeman> @ericstephan: would a free text field meet the needs you just described?
Laufer: we have a lot of issues here.. can we define metrics about data independent of domains?
<phila> AIUI we don't have to as jtandy_ has told us about the ISO standard which could be helpful
<Makx> i'm all for dropping 'metrics' (for now) and seeing what we can do with text fields to start with
<Zakim> fabien-gandon, you wanted to say that one of the stories in the shape WG given by dublin core is about giving different outputs depending on the quality of the data validated
BernadettLoscio: if we can identify in our UC document what is relevant to data quality
… maybe we need to know what type of information we have to describe in a dataset
<ericstephan> @hadleybeeman - I'd prefer to have a handle on DCAT to say here's the data quality and let me describe it in my own way
… the literature about data quality is really huge, so in the real world what is really necessary for data quality?
adler1: even those steps will need best practices.
… it is more than description
BernadetteLoscio: we have in mind “data quality should be available”
<phila> proposed text... How to carry forward the data quality issue - more use cases? Available options? text only? machine readable dimensions?
<phila> issue: How to carry forward the data quality issue - more use cases? Available options? text only? machine readable dimensions?
<trackbot> Created ISSUE-65 - How to carry forward the data quality issue - more use cases? available options? text only? machine readable dimensions?. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/65/edit>.
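One way to read the "free text field" and "handle on DCAT" suggestions above, as a sketch only: attach a human-readable quality note to a dcat:Dataset and defer machine-readable dimensions until the group decides on them. The property ex:qualityNote and all identifiers here are hypothetical, not agreed vocabulary:

    {
      "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
        "ex": "http://example.org/vocab#"
      },
      "@id": "http://example.org/dataset/river-levels-2014",
      "@type": "dcat:Dataset",
      "dct:title": "River levels 2014",
      "ex:qualityNote": "Gauge 17 failed in March; readings for that period are missing."
    }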
<fabien-gandon> The Dublin Core use case for providing different kinds of feedback as output of a validation was provided by Karen Coyle (DC) in the Shape WG ; not detailed in minutes unfortunately http://www.w3.org/2014/10/30-shapes-minutes.html
phila: earlier today we were talking briefly
… about how do you document your own vocabulary
<fsasaki> https://www.w3.org/community/bpmlod/wiki/Best_practises
<phila> addison: I'm chair of the Internationalisation WG
<ericstephan> @Hadleybeeman no prob I can do it.
<hadleybeeman> scribe: ericstephan
phila: Have you got a list to offer?
felix: http://www.w3.org/community/bpmlod/wiki/Best_practises
... Just wanted to get your thoughts on the best practises we
were working on
<yaso> HadleyBeeman: o>
phila: Thank you, you are working
on a level at this point beyond where we are. In that context
multilingual annotations came up this morning.
... This is really useful and welcome, I'm not sure we are
ready for it at this point.
hadleybeeman: I agree
Felix: I don't know what your timeline is, but it would be helpful to get your feedback
phila: Tomas Carrasco had a write-up on identifiers and I agreed with about 10% of it. This, fsasaki, is really useful information.
<fsasaki> http://bpmlod.github.io/report/patterns/index.html
phila: This is extremely helpful
and please keep doing it.
... we aren't limited to linked data; as a matter of process this is a community group document, and this could be a potential output of the working group, just something to think about
... It's adding capability to this group and, as you can see, this is a pretty multilingual group.
<phila> For the WG - the phrase i18n is code for 'internationalisation' (i - 18 chars - n)
phila: please provide feedback to us...
bernadette: This is work that can go into vocabularies?
phila: Yes internationalization goes across everything...
hadleybeeman: All of the work we are doing is setting up the agenda for the weeks and months ahead
bernadette: We don't have requirements for the data usage vocabulary yet. It's the same for data quality. We don't have requirements for data usage and data feedback....
... For data usage we already have ideas about how to do this. Right now it is "data quality and usage should be available" but we don't know how to do this.
<hadleybeeman> proposed issue: We need more use cases to get the requirements to define the Data Usage vocabulary
bernadette: We don't have the requirements for the data usage data feedback and data quality, we need the requirements to describe the vocabularies.
<phila> proposed (additional) issue - are we talking about one vocabulary or multiple vocabularies, DCAT+ or something else, to cover our vocab work (and what is a best practice)
<phila> scrap that, I don't like it
<hadleybeeman> issue: We need more use cases to get the requirements to define the Data Usage vocabulary
<trackbot> Created ISSUE-66 - We need more use cases to get the requirements to define the data usage vocabulary. Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/66/edit>.
adler1: I think, after our discussion today, that if we were to come up with a DCAT+ vocabulary it would be beyond our technical capability; perhaps it's something that is defined at the field level.
<laufer> +1 to steve
adler1: We don't have the expertise to define data quality; we can develop the best practices for how you define or articulate it.
+1
break time
<hadleybeeman> BREAK FOR 15 MINS
In the spirit of internationalism I take back my toss comment after looking up the slang term in English
<markharrison> It's still Thursday here (for another 43 minutes...)
<markharrison> Cambridge, UK
<markharrison> Here it's 11.18pm
<hadleybeeman> jenit: http://dragoman.org/comuri.html
<deirdrelee> +1
<deirdrelee> around 17th March???
hadleybeeman: I think we have
done everything to gather issues from the requirements, we need
to break out into smaller groups or stay all together.
... From my point of view the two themes we keep coming back
to....Scope thoughts 1) Is it unique to publishing on the
web?
2) Does addressing it encourage people to publish/reuse data on the web? (or remove barriers to it?)
yaso: I was at a conference last week in SFO. I asked about semantics; they don't use vocabularies. They just use data, not vocabularies or ontologies.
... Most people here are semantic web oriented; I would like to see more use cases that aren't semantic web focused
<hadleybeeman> Yaso is mentioning Netflix and Medium
yaso: Like if two companies want to integrate data, they don't define a vocabulary they just define terms to integrate their data.
laufer: I think they have
semantics
... I think it's impossible to not have semantics, we have to
separate out these things, what are the semantics they want to
aggregate?
... People can find information, each one of the approaches
uses semantics but in different formats.
yaso: Eric mentioned web of things as well.
bernadette: I think we agree we aren't publishing specifically RDF data or data formats; let's support different data formats.
... We have use cases that are not RDF, about the vocabularies,
you can publish data without the vocabulary. In my opinion this
makes data publication more difficult.
... If two people publish data in two different domains and use the same vocabulary it's more of an agreement of terms; it's going to make your life more difficult if you don't have a vocabulary
no prob @Caroline
<Zakim> KenL, you wanted to say we need to be able to use whatever semantics the source is willing to provide. Demonstrating use of data will encourage behaviors that make the data most
yaso: we need to study why others are not using vocabularies and we have to assume that others may not use that.
kenL: We need to provide a minimum of what people need to provide and where they need to provide it. Don't set the bar too high.
BartvanLeeuwen: For smaller companies getting code lists can be a nightmare if they don't support best practices. The whole process of getting there is setting up the best practices.
adler1: A small percentage of data on the web is rdf....
phila: I was talking with Ann and Adam at Boeing about supply chain data. It is of interest to W3C; it's the kind of stuff GS1 works on. It probably goes into more detail than what this group does ...
... As people said, this isn't a linked data group; we aren't going to put something in this that says you must use linked data. JSON-LD is a way to go for non-linked-data groups.
... data visualization and supply chain are other efforts I
want to get into...
<hadleybeeman> ericstephan: Re vocabularies: I've been concerned that we've been talking about something broader than linked data.
<hadleybeeman> ... I thought we'd agreed that a vocabulary is just a model.
<hadleybeeman> ... As long as that's the case, — yaso, does that keep us out of the weeds enough?
<Vagner_Br> ?q
<markharrison> Thanks, phila for talking with Ann and Adam at Boeing. For supply chain data (about events), please see http://gs1.org/EPCIS (GS1 EPC Information Services standard). In GS1 Digital / GTIN+ on the Web, we are leaning towards JSON-LD for including a single block of structured data about products and product offers - happy to discuss further with you and Boeing folks
bernadette: The requirements show we should reuse vocabularies
yaso: we have a second round of use cases that need to added.
<yaso> q+
bartvanleeuwen: I like what Eric was saying; in the supply chain, the XML schema for the data had a namespace that didn't resolve. If you are going to do data on the web you should publish your schema, not the vocabulary
... mapping a vocabulary to an XML schema is still a vocabulary.
bernadette: I agree with you, you can have simple and complex things; if you want you can just have a simple vocabulary to capture just the concepts. If we are talking about the same thing it doesn't have to be complex, it just needs to be qualified
vagner_br: I am just trying to understand, yaso. Are you trying to distinguish structured and non-structured data like you find in social networks?
yaso: Yes
... Let's have a hypothetical use case where you make machinery to predict customer behavior on the web; I just think you need to support this type of ecosystem
phila: one of the strategic things we did at W3C was stop the Semantic Web activity and we went to the Data Activity....
... If you want to describe data in a catalog with URIs and models, use JSON-LD ;-)
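As a sketch of what "URIs and models in JSON-LD" can look like for a catalog, using DCAT as one possible model (all URIs below are invented):

    {
      "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/"
      },
      "@id": "http://example.org/catalog",
      "@type": "dcat:Catalog",
      "dcat:dataset": {
        "@id": "http://example.org/dataset/bus-stops",
        "@type": "dcat:Dataset",
        "dct:title": "Bus stops",
        "dcat:distribution": {
          "@type": "dcat:Distribution",
          "dcat:mediaType": "text/csv",
          "dcat:downloadURL": "http://example.org/files/bus-stops.csv"
        }
      }
    }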
<phila> +1 to jtandy
jtandy: listen to the body, just want to be sensitive to the group. If you publish the data on the web, publish your schema on the web in a referenceable way
... Don't care, just publish the schema
... if you publish a vocabulary, talk about data schema and code lists as the other meaning of the vocabulary.
bernadette: I fully agree that we have the schema and the code lists; should I use something to say what context I should use? Person, foaf Person, schema.org Person: what context do I use?
<jtandy> (meaning that a "data schema" is the description of how your data is structured and what the 'classes' mean ... this might be an XML Schema, an OWL ontology, a CSV schema etc.)
bernadette: It's like the namespace in XML: we define it once and use it everywhere....
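A small sketch of that point: the context is declared once and maps local terms to whichever vocabulary the publisher chose (schema.org here, but foaf could be bound the same way); the values are invented:

    {
      "@context": {
        "schema": "http://schema.org/",
        "Person": "schema:Person",
        "name": "schema:name"
      },
      "@type": "Person",
      "name": "Ada Lovelace"
    }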
boy do people talk fast at the end of the day...
;-)
hadleybeeman: I think we have to look at process, I hear what you are saying Yaso but we only have one use case that mentions vocabulary.
yaso: Maybe we need to look at different companies, we need to look at data in government environment.
<phila> Twitter publishes its data in XML...
laufer: I think that we are thinking about the broader use of vocabulary, we have different ways of describing things.
<jtandy> +1 to @laufer's comment that the "data schema" must by machine readable _and_ publicly available
<hadleybeeman> Does anyone else use Netflix's data? Or is just for them to use internally?
<hadleybeeman> (If it's just for them to use internally, then they don't need to make it understandable to other people who they don't speak to)
<yaso> I think it’s just internally, Hadley. But they collect it at the Web.
<yaso> mostly
<hadleybeeman> Ah okay — different use case then. Thanks!
<yaso> :-)
I think the point of unstructured data, data on the web, is using the most raw data, like the HTML code; in this case, the other part of my group is working on how a user application can consume this data and give semantics to it
<BartvanLeeuwen> +1 hadleybeeman
<yaso> I think yes, Caroline
adriano: Are we also dealing with multiple structures or multiple documents? Just a question; it's a minimal requirement we have to define...
markharrison: We plan to use json-ld
<Zakim> phila, you wanted to talk about NetFlix
phila: Netflix see it as a competitive advantage to have a ton of metadata and usage criteria; they have an interest in not sharing the data.
Ig: If we are talking about best practices on the web, you need structure and semantics to be reusable.
<Zakim> KenL, you wanted to say first requirement for data schema (or any vocabulary representation) is to be publicly available with a strong preference that it be machine readable, but
KenL: The schema should be accessible and available. We should interpret it as a schema so that it is accessible and machine readable.
<BartvanLeeuwen> +q
<jtandy> +1 to KenL's use of the phrase that if you publish data then you should also publish the complementary information that enables "unambiguous interpretation of the data" ... what I was referring to as the "data schema"
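jtandy's note above lists XML Schema, OWL ontologies and CSV schemas as forms a "data schema" can take. As one hedged illustration (not a group recommendation), a JSON Schema published next to a JSON dataset gives the kind of machine-readable, publicly available description KenL asks for; the field names are invented:

    {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Bus stop record",
      "type": "object",
      "properties": {
        "stop_id": { "type": "string" },
        "name": { "type": "string" },
        "latitude": { "type": "number" },
        "longitude": { "type": "number" }
      },
      "required": ["stop_id", "name"]
    }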
<hadleybeeman> ericstephan: There is a lot of implied metadata in what we're discussing
<hadleybeeman> ...Tying info together after the fact isn't a best practice
bernadette: data formats used to publish data: CSV, JSON-LD, JSON, XML etc. Can we specify this?
Yes I can show a use case @hadley
bernadette: Should we give some indication about a possible format? as a best practice?
<hadleybeeman> issue: should we include a best practice around which format to use? (CSV, JSON, JSON-LD, XML, etc.)
<trackbot> Created ISSUE-67 - Should we include a best practice around which format to use? (csv, json, json-ld, xml, etc.). Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/67/edit>.
bernadette: the whole idea is to show how to publish data in a better way... if you are going to recommend something, and you have HTML data and it's not the best option to publish, is that a reason not to publish the data?
<Adriano> +q
<yaso> ops
laufer: I agree with Ken that we should have a good description of the data
... I think we are thinking 5 star; we need to remember 1-4 stars as well.
... e.g. PDF
<Zakim> phila, you wanted to riff on the theme of stars
laufer: maybe the most important thing is the description and machine readability. I don't think it's a constraint; you need to establish best practices, you don't need a constraint.
<phila> ODI Certificates
phila: we have a star rating scheme; I want to talk about the ODI certificates. They are open data certificates, they could cover closed data, and they are already machine readable.
... coming back to yaso's point, if you have 3 star data and you want to visualize it, that is perfectly fine.
... Don't be afraid to say that the 5 star goal is a hindrance and not a help.
... sometimes 3 star is perfectly fine. ODI is perfectly fine; we might want to have our own 5 star approach ourselves
... lots of star schemes, we can make up our own approach
<Ig_Bittencourt> Maybe this one: http://www.opendataimpacts.net/engagement/ ?
<Zakim> BartvanLeeuwen, you wanted to propose 5star approach for vocabularies ?
bartvanleeuwen: agree with phila
adriano: I agree with bernadette and laufer
... For example data fusion or integration: there are data enrichment tests we are defining; if we have structured data... most data on the web is not structured.
... Let's establish best practices for new users, not just what is out there.
yaso: from my pov lets go with 4 stars
<hadleybeeman> yaso: I think Facebook is 4 stars
ig: Could you consider a use case about data enrichment
<yaso> +1 to Bart :-)
<newton_> https://www.w3.org/2013/dwbp/wiki/Proposed_structure
<Zakim> phila, you wanted to pick up on Adriano's point about data enrichment
<newton_> BernadetteLoscio was talking about that page
phila: the data enrichment concept is really interesting, lots of people want to make money off of it, would be very interested in that.
bernadette: Its on the table of contents tomorrow
<Zakim> annette_g, you wanted to ask what we mean by structured
<Ig_Bittencourt> deirdrelee, do you have the link?
Annette: I think that people think of structured data as being something different from what I am used to; perhaps it is defined differently.
<deirdrelee> proceedings not published yet http://icegov.org/townhalls/thematic-session-5-open-government-data/
<hadleybeeman> issue: we should define "structured"
<trackbot> Created ISSUE-68 - We should define "structured". Please complete additional details at <http://www.w3.org/2013/dwbp/track/issues/68/edit>.
<Ig_Bittencourt> Thanks deirdrelee
hadley: Tomorrow meet at 9am
<jtandy> @deirdrelee ... yes!!!
we are wrapping up now IRC friends...
<markharrison> sorry I probably can't join tomorrow
<markharrison> Goodnight! (00:43 in Cambridge)
you bet! Take care
<SumitPurohit_> Thanks.....
<SumitPurohit_> bye everyone
<hadleybeeman> Thanks, all of you on the phone!
<hadleybeeman> Talk more tomorrow :)
<deirdrelee> i'm going to, bye all
<newton_> Bye
<deirdrelee> flying back to ireland tomorrow, so might join a bit late
<Vagner_Br> *bye Deirdree
<deirdrelee> have a nice dinner!!