Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document lists use cases, compiled by the Data on the Web Best Practices Working Group, that represent scenarios of how data is commonly published on the Web and how it is used. This document also provides a set of requirements derived from these use cases that will be used to guide the development of the set of Data on the Web Best Practices and the development of two new vocabularies: Quality and Granularity Description Vocabulary and Data Usage Description Vocabulary.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This document was published by the Data on the Web Best Practices Working Group as a Working Draft. If you wish to make comments regarding this document, please send them to public-dwbp-comments@w3.org (subscribe, archives). All comments are welcome.
The working group believes this use cases document to be mature. However, it is keen to hear of further use cases and requirements not already covered.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 1 August 2014 W3C Process Document.
This section is non-normative.
There is a growing interest in publishing and consuming data on the Web. Both government and non-government organizations already make a variety of data available on the Web, some openly, some with access restrictions, covering many domains like education, the economy, security, cultural heritage, eCommerce and scientific data. Developers, journalists and others manipulate this data to create visualizations and to perform data analysis. Experience in this field shows that several important issues need to be addressed in order to meet the requirements of both data publishers and data consumers.
To address these issues, the Data on the Web Best Practices Working Group seeks to provide guidance to all stakeholders that will improve consistency in the way data is published, managed, referenced and used on the Web. The guidance will take two forms: a set of best practices that apply to multiple technologies, and vocabularies that are currently missing but that are needed to support the data ecosystem on the Web.
In order to determine the scope of the best practices and the requirements for the new vocabularies, a set of use cases has been compiled. Each use case provides a narrative describing an experience of publishing and using Data on the Web. The use cases cover different domains and illustrate some of the main challenges faced by data publishers and data consumers. A set of requirements, used to guide the development of the set of best practices as well as the development of the vocabularies, has been derived from the compiled use cases.
This is the second Working Draft and is believed to be a relatively mature reflection of the major issues. The Working Group is therefore particularly keen to receive comments and new use cases to ensure that its future work is relevant, useful and comprehensive. Please send comments to public-dwbp-comments@w3.org (subscribe, archives).
A use case illustrates an experience of publishing and using Data on the Web. The information gathered from the use cases should be helpful for the identification of the best practices that will guide the publishing and usage of Data on the Web. In general, a use case will be described by at least a statement and a discussion of how the use case is currently implemented. Use case descriptions demonstrate some of the main challenges faced by publishers or developers. Information about challenges will be helpful to identify areas where best practices are necessary. According to the challenges, a set of requirements is abstracted in such a way that each requirement motivates the creation of one or more best practices.
(Contributed by Deirdre Lee)
URL:
http://mypp.ie/
Buildingeye.com makes building and planning information easier to find and understand by mapping what's happening in your city. In Ireland, local authorities handle planning applications and usually provide some customized views of the data (PDFs, maps, etc.) on their own Web sites. However, there isn't an easy way to get a nationwide view of the data. BuildingEye, an independent SME, built http://mypp.ie/ to achieve this. However, as each local authority didn't have an Open Data portal, BuildingEye had to ask each local authority directly for its data. It was granted access by some authorities, but not all. The data it did receive was in different formats and of varying quality/detail. BuildingEye harmonized this data for its own system. However, if another SME wanted to use this data, it would have to go through the same process and again ask each local authority for the data.
Elements:
Challenges:
Potential Requirements:
Requires: R-FormatMachineRead, R-FormatStandardized, R-FormatOpen, R-LicenseAvailable, R-AccessBulk, R-DataUnavailabilityReference, R-DataMissingIncomplete, R-DataLifecyclePrivacy and R-SensitiveSecurity.
(Contributed by Carlos Iglesias)
URL:
http://landportal.info/
The IFAD Land Portal platform has been completely rebuilt as an Open Data collaborative platform for the Land Governance community. Among the new features, the Land Portal provides access to more than 100 indicators from more than 25 different sources on land governance issues for more than 200 countries around the world, as well as a repository of land-related content and documentation. Thanks to the new platform, people can now find, share and reuse land governance data more easily.
Elements:
Challenges:
Potential Requirements:
Requires: R-MetadataMachineRead, R-GranularityLevels, R-FormatMachineRead, R-FormatStandardized, R-FormatLocalize, R-VocabReference, R-VocabVersion, R-ProvAvailable, R-AccessBulk, R-AccessRealTime, R-PersistentIdentification, R-QualityCompleteness and R-QualityMetrics.
(Contributed by Bernadette Lóscio)
URL:
http://dados.recife.pe.gov.br/
Recife is a city in the Northeast of Brazil, famous for being one of Brazil's biggest tech hubs. Recife is also one of the first Brazilian cities to release data generated by public sector organizations for public use as open data. The Open Data Portal Recife was created to offer access to a repository of governmental machine-readable data covering several domains, including finances, health, education and tourism. Data is available in CSV and GeoJSON formats, and every dataset has metadata that helps in the understanding and usage of the data. However, the metadata is not provided using standard vocabularies or taxonomies. In general, data is created in a static way: data from relational databases is exported in CSV format and then published in the data catalog. Currently, work is under way to generate data dynamically from relational databases so that data will be available as soon as it is created. The main phases of the development of this initiative were: educating people about open data; identifying the sources of data that potential consumers could find useful; extracting and transforming data from the original sources into open formats; configuring and installing the open data catalog tool; and publishing the data and releasing the portal.
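The metadata gap noted above suggests an obvious direction. As a minimal, purely illustrative sketch (the dataset IRI, title, date and download URL below are invented, not actual portal records), catalog metadata could be expressed with the W3C DCAT vocabulary so that it is machine-readable and interoperable:

  @prefix dcat: <http://www.w3.org/ns/dcat#> .
  @prefix dct:  <http://purl.org/dc/terms/> .
  @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

  # Hypothetical catalog entry; identifiers and values are placeholders.
  <http://dados.recife.pe.gov.br/dataset/health-units> a dcat:Dataset ;
      dct:title "Health units of Recife" ;
      dct:issued "2014-01-15"^^xsd:date ;
      dcat:distribution [
          a dcat:Distribution ;
          dcat:downloadURL <http://dados.recife.pe.gov.br/dataset/health-units.csv> ;
          dcat:mediaType "text/csv"
      ] .

Expressed this way, the same entry can be harvested and compared across catalogs without per-portal interpretation of the metadata fields.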
Elements:
Challenges:
Requires: R-MetadataMachineRead, R-MetadataStandardized, R-MetadataDocum, R-VocabReference, R-VocabDocum, R-VocabOpen, R-SelectHighValue, R-SelectDemand, R-QualityCompleteness, R-SynchronizedData and R-QualityComparable.
(Contributed by Yasodara)
URL:
http://dados.gov.br/
Dados.gov.br is the open data portal of Brazil's Federal Government. The site was built by a community network pulled together by three technicians from the Ministry of Planning, who managed the group through INDA, the "National Infrastructure for Open Data". CKAN was chosen because it is free software and offered an independent solution for publishing the Federal Government's data catalog on the Web.
Elements:
Challenges:
Requires: R-FormatStandardized, R-VocabReference, R-LicenseAvailable and R-QualityOpinions.
(Contributed by Ghislain Atemezing)
ISO GEO manages catalog records of geographic information in XML that conform to ISO-19139, a French adaptation of ISO-19115 (data sample). It currently exports thousands of such records but needs to manage them better. In its platform, ISO GEO stores the information in a more conventional manner and uses this standard to export datasets compliant with the INSPIRE standards or via the OGC's CSW protocol. Sometimes it has to enrich its metadata records using tools like GeoSource, accessed through an SDI. ISO GEO wants to be able to integrate all the different implementations of ISO-19139 found in different tools into a single framework, to better understand the thousands of metadata records it uses in its day-to-day business. Types of information recorded in each file include: contact info (metadata) [data issued], spatial representation, reference system info [code space], spatial resolution, geographic extension of the data, file distribution, data quality and process step (example).
Challenges:
Requires: R-VocabReference, R-MetadataStandardized and R-GranularityLevels.
(Contributed by Christophe Guéret)
URL:
http://www.e-overheid.nl/onderwerpen/stelselinformatiepunt/stelsel-van-basisregistraties
The Netherlands has a set of registers that are under consideration for exposure as Linked (Open) Data in the context of the "PiLOD" project. The registers contain information about buildings, people and businesses that other public bodies may want to refer to for their daily activities. One of them is, for instance, the service of public taxes ("BelastingDienst"), which regularly pulls data from several registers, stores it in a big Oracle instance and curates it. This costly and time-consuming process could be optimized by providing on-demand access to up-to-date descriptions provided by the register owners.
Challenges:
In terms of challenges, linking is for once not much of an issue, as registers already cross-reference unique identifiers (see also http://www.wikixl.nl/wiki/gemma/index.php/Ontsluiting_basisgegevens ). A URI scheme with predictable and persistent URIs is being considered for implementation. Actual challenges include:
Requires: R-VocabReference, R-SensitivePrivacy, R-UniqueIdentifier, R-PersistentIdentification, R-MultipleRepresentations, R-CoreRegister and R-DataUnavailabilityReference.
(Contributed by Eric Stephan)
This use case describes a data management facility being constructed to support scientific offshore wind energy research for the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) Wind and Water Power Program. The Reference Facility for Offshore Renewable Energy (RFORE) project is responsible for collecting wind characterization data from remote sensing and in-situ instruments located on an offshore platform. This raw data is collected by the Data Management Facility (DMF) and processed into a standardized NetCDF format. Both the raw measurements and processed data are archived in the PNNL Institutional Computing (PIC) petascale computing facility. The DMF will record all processing history, quality assurance work, problem reporting, and maintenance activities for both instrumentation and data. All datasets, instrumentation and activities are cataloged, providing a seamless knowledge representation of the scientific study. The DMF catalog relies on linked open vocabularies and domain vocabularies to make the study data searchable. Scientists will be able to use the catalog for faceted browsing, ad-hoc searches and query-by-example. For accessing individual datasets, a REST GET interface to the archive will be provided.
Challenges:
For accessing numerous datasets, scientists will access the archive directly using other protocols such as sftp, rsync and scp, and access techniques such as HPN-SSH.
Requires: R-FormatStandardized, R-VocabReference, R-VocabOpen and R-AccessRealTime.
(Contributed by Christophe Guéret)
URL:
http://dans.knaw.nl/
Digital archives, such as DANS in the Netherlands, have so far been concerned with the preservation of what could be defined as "frozen" datasets. A frozen dataset is a finished, self-contained set of data that does not evolve after it has been constituted. The goal of the preserving institution is to ensure this dataset remains available and readable for as many years as possible. This can, for example, concern an audio recording, a digitized image, e-books or database dumps. Consumers of the data are expected to look for specific content based on its associated identifier, download it from the archive and use it. Now comes the question of the preservation of Linked Open Data. In contrast to "frozen" datasets, linked data can be qualified as "live" data: the resources it contains are part of a larger entity to which third parties contribute, and one of its design principles indicates that other data producers and consumers should be able to point to the data. When LD publishers stop offering their data (e.g. at the end of a project), taking the LD off-line as a dump and putting it in an archive effectively turns it into a frozen dataset, just as with SQL dumps and other kinds of databases. The question then is to what extent this is an issue.
Challenges: The archive has to decide whether dereferencing of resources found in preserved datasets is required, and whether to provide a SPARQL endpoint. If data consumers and publishers are content to download RDF data dumps from the archive prior to use - just like any other digital item so far - the technical challenges could be limited to handling the size of the dumps and taking care of serialization evolution over time (e.g. from N-Triples to TriG, or from RDF/XML to HDT) as the preference for these formats evolves. Turning a live dataset into a frozen dump also raises the question of scope: considering that LD items are only part of a much larger graph that gives them meaning through context, the only valid dump would be a complete snapshot of the entire connected component of the Web of Data graph that the target dataset is part of.
Potential Requirements: Decide on the importance of the de-referencability of resources and the potential implications for domain names and naming of resources. Decide on the scope of the step that will turn a connected sub-graph into an isolated data dump.
Requires: R-VocabReference, R-UniqueIdentifier, R-PersistentIdentification, R-Archiving and R-DataUnavailabilityReference.
(Contributed by Phil Archer)
URL:
http://articles.latimes.com/2014/mar/27/local/la-me-ln-gender-wage-gap-city-government-20140327
On 27 March 2014, the LA Times published a story, "Women earn 83 cents for every $1 men earn in L.A. city government". It was based on an infographic released by LA's City Controller, Ron Galperin. The infographic was in turn based on a dataset published on LA's open data portal, Control Panel LA. That portal uses the Socrata platform, which offers a number of spreadsheet-like tools for examining the data, the ability to download it as CSV, to embed it in a Web page and to see its metadata.
Positive aspects:
Negative aspects:
Challenges:
Other Data Journalism blogs:
Requires: R-MetadataStandardized, R-UniqueIdentifier, R-Citable and R-DataMissingIncomplete.
(Contributed by AGESIC)
URL:
http://datauy.org/
Uruguay's open data portal was launched in December 2012 and at the time of writing holds 85 datasets containing 114 resources. The open data initiative prioritizes the "use of data" rather than the "quantity of data", which is why the catalog also promotes a number of applications using data resources in some way (in common with many other data portals). It is important for the project to keep a 1:3 ratio between applications and datasets. Most of the resources are CSV and ESRI Shapefiles, making this a catalog of 2- and 3-star resources according to the 5 Stars of Linked Open Data scheme. AGESIC does not have sufficient resources at government agencies to implement an open data liberation strategy and go to the next level. So when asked about opening data, the answer is to keep it simple, and CSV is by far the easiest and smartest way to start. Uruguay has an access to public information law but no legislation about open data. The open data initiative is led by AGESIC with the support of an open data working group drawn from multiple government agencies.
Elements:
Challenges: consolidation of tools to manage datasets; improving visualizations and transforming resources to a higher level (4-5 stars); automating the publication process using harvesting or similar tools; alerts or control panels to keep data updated.
Requires: R-VocabReference, R-SynchronizedData, R-DataUnavailabilityReference and R-DataMissingIncomplete.
(Contributed by Mark Harrison (University of Cambridge) & Eric Kauz (GS1))
URL:
http://www.gs1.org/digital
Retailers and Manufacturers / Brand Owners are beginning to understand that there can be benefits to openly publishing structured data about products and product offerings on the Web as Linked Open Data. Some of the initial benefits may be enhanced search listing results (e.g. Google Rich Snippets) that improve the likelihood of consumers choosing such a product or product offer over an alternative product that lacks the enhanced search results. However, the longer term vision is that an ecosystem of new product-related services can be enabled if such data is available. Many of these will be consumer-facing and might be accessed via smartphones and other mobile devices, to help consumers to find the products and product offers that best match their search criteria and personal preferences or needs — and to alert them if a particular product is incompatible with their dietary preferences or other criteria such as ethical / environmental impact considerations — and to suggest an alternative product that may be a more suitable match. A more complete description of this use case is available.
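As a minimal sketch of what such openly published product data could look like, the Turtle below describes a product and an offer using the schema.org vocabulary; the GTIN, prices and IRIs are invented for illustration, and schema.org is only one of several candidate vocabularies for this use case:

  @prefix schema: <http://schema.org/> .

  # Hypothetical product and offer; all identifiers and values are placeholders.
  <http://example.org/product/9506000134352> a schema:Product ;
      schema:name "Example Wholemeal Bread 400g" ;
      schema:gtin13 "9506000134352" ;
      schema:offers [
          a schema:Offer ;
          schema:price "1.89" ;
          schema:priceCurrency "EUR" ;
          schema:seller <http://example.org/retailer>
      ] .

Data of this shape is what enables both the enhanced search listings mentioned above and third-party services that compare offers across retailers.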
Elements:
Challenges:
Potential Requirements:
Requires: R-FormatStandardized, R-FormatMultiple, R-ProvAvailable, R-AccessUptodate, R-LicenseLiability, R-PersistentIdentification, R-Citable, R-SynchronizedData and R-CoreRegister.
(Contributed by Luis Polo)
URL:
http://www.tabulaeapp.com/
Tabul.ae is a framework to publish and visually explore data that can be used to deploy powerful and easy-to-exploit open data platforms, allowing organizations to unleash the potential of their data. The aim is to enable data owners (public organizations) and consumers (citizens and business reusers) to transform the information they manage into added-value knowledge, empowering them to easily create data-centric Web applications. These applications are built upon interactive and powerful graphs, and take the shape of interactive charts, dashboards, infographics and reports. Tabulae provides a high degree of assistance in creating these apps and also automates several data visualization tasks (e.g. recognition of geographical entities to automatically generate a map). In addition, the charts and maps are portable outside the platform and can be smartly integrated with any Web content, enhancing the reusability of the information.
Elements:
Challenges:
Potential Requirements:
Requires: R-FormatStandardized, R-FormatLocalize, R-VocabReference, R-VocabVersion, R-ProvAvailable, R-SynchronizedData and R-QualityCompleteness.
(Contributed by Yasodara)
URL:
https://github.com/dataviz/retrato-da-violencia.org
This is a data visualization made in 2012 by Vitor Batista, Léo Tartari and Thiago Bueno for a W3C Brazil Office challenge, using data about Rio Grande do Sul (a Brazilian state). The data was released in a .zip package; the original format was CSV. The code and documentation of the project are in its GitHub repository.
Elements:
Positive Aspects: the decision to transform the CSV into JSON was based on the need for hierarchical data. The ability to map the CSV structure to XML or JSON was considered a positive, since JSON can cover more complex structures.
Negative Aspects: the data is already outdated (in 2014), there is no provision for new releases, and there's no associated metadata.
Requires: R-QualityCompleteness, R-AccessUptodate, R-MetadataAvailable, R-PersistentIdentification, R-SynchronizedData and R-SensitiveSecurity.
(Contributed by Carlos Laufer)
URL:
http://bio2rdf.org/
Bio2RDF [1] is an open source project that uses Semantic Web technologies to enable distributed querying of integrated life sciences data. Since its inception [2], Bio2RDF has made use of the Resource Description Framework (RDF) and the RDF Schema (RDFS) to unify the representation of data obtained from diverse fields (molecules, enzymes, pathways, diseases, etc.) and heterogeneously formatted biological data (e.g. flat files, tab-delimited files, SQL, dataset-specific formats, XML, etc.). Once converted to RDF, this biological data can be queried using the SPARQL Protocol and RDF Query Language (SPARQL), which can be used to federate queries across multiple SPARQL endpoints.
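As a minimal sketch of what such federation looks like in practice (the endpoint URL and ex: predicates below are illustrative placeholders, not Bio2RDF's actual services or vocabulary), a SPARQL 1.1 query can join patterns evaluated at a local endpoint with patterns delegated to a remote one:

  PREFIX ex: <http://example.org/vocab/>

  SELECT ?gene ?pathway
  WHERE {
    ?gene ex:encodes ?protein .                     # evaluated at the local endpoint
    SERVICE <http://example.org/pathways/sparql> {  # delegated to a remote endpoint
      ?protein ex:participatesIn ?pathway .
    }
  }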
Elements:
Each provenance record links a dataset to its source using wasDerivedFrom, such that one can query the dataset SPARQL endpoint to retrieve all provenance records for datasets created on different dates. Each resource in the dataset is linked to the date-unique dataset IRI that is part of the provenance record using the VoID inDataset predicate. Other important features of the provenance record include the use of the Dublin Core creator term to link a dataset to the script on GitHub that was used to generate it, the VoID predicate sparqlEndpoint to point to the dataset SPARQL endpoint, and the VoID predicate dataDump to point to the data download URL.
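A sketch of such a provenance record is given below in Turtle; all IRIs are placeholders rather than actual Bio2RDF identifiers, but the predicates are the ones named above (with wasDerivedFrom assumed here to come from the W3C PROV-O vocabulary):

  @prefix void: <http://rdfs.org/ns/void#> .
  @prefix dct:  <http://purl.org/dc/terms/> .
  @prefix prov: <http://www.w3.org/ns/prov#> .

  # Placeholder IRIs throughout; the predicates mirror the description above.
  <http://example.org/bio2rdf/dataset/drugbank-2014-07-25> a void:Dataset ;
      prov:wasDerivedFrom <http://example.org/source/drugbank> ;
      dct:creator <http://example.org/scripts/drugbank-converter> ;  # generating script
      void:sparqlEndpoint <http://example.org/drugbank/sparql> ;
      void:dataDump <http://example.org/download/drugbank.nq.gz> .

  # Each resource points back to the date-unique dataset IRI.
  <http://example.org/bio2rdf/resource/DB00001>
      void:inDataset <http://example.org/bio2rdf/dataset/drugbank-2014-07-25> .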
Dataset metrics
References:
Challenges:
Potential Requirements:
Requires: R-Archiving, R-VocabReference, R-FormatMultiple, R-AccessUptodate, R-PersistentIdentification, R-FormatStandardized, R-DataUnavailabilityReference and R-DataLifecyclePrivacy.
(Contributed by Deirdre Lee)
While many datasets on the Web may contain metadata about their creation date and last update, the regularity of the release schedule is not always clear. Similarly, how and by whom the dataset is supported should also be made clear in the metadata. These attributes are necessary to improve the reliability of the data, so that third-party users can trust the timely delivery of the data, with a follow-up point should there be any issues.
Challenges:
Requires: R-AccessUptodate and R-SLAAvailable.
(Contributed by Deirdre Lee based on work by Pieter Colpaert)
One of the often-quoted advantages of publishing Open Data is improving the quality of the data: many eyes looking at a dataset help spot errors and holes more quickly than a public body could itself. For example, in his paper Route planning using Linked Open Data, Colpaert looks at how feedback can be incorporated into transport data to improve its quality. How can this 'improved' data be fed back to the public body, processed and incorporated into the original dataset? Should there be an automated mechanism for this? How can the improvement be described in a machine-readable format? What is best practice for reincorporating such improvements?
Technical Challenges:
Requires: R-QualityOpinions and R-IncorporateFeedback.
(Contributed by Deirdre Lee based on Prof. Vassilis Vescoukis' talk at the OKF Greece workshop)
URL:
http://okfn.gr/2014/03/okf-meetup/
Many of the datasets that are required for natural hazards management, for example critical infrastructure, utility services and road networks, are not available online because they are also deemed to be datasets that could be exploited in attacks on homeland security.
Requires: R-SensitiveSecurity.
(Contributed by Deirdre Lee based on the 2012 ePSI Open Transport Data Manifesto)
The Context: Transportation is an important contemporary issue that has a direct impact on economic strength, environmental sustainability and social equity. Accordingly, transport data — largely produced or gathered by public sector organisations or semi-private entities, quite often locally — represents one of the most valuable sources of public sector information (PSI, also called 'open data'), a key policy area for many, including the European Commission.
The Challenge: Combined with the advancement of Web technologies and the increasing use of smart phones, the demand for high quality machine-readable and openly licensed transport data, allowing for reuse in commercial and non-commercial products and services, is rising rapidly. Unfortunately this demand is not met by current supply: many transport data producers and holders (from the public and private sectors) have not managed to respond adequately to these new challenges set by society and technology.
So what do we need?
Why is this not happening?
Requires: R-AccessBulk, R-FormatOpen, R-VocabOpen, R-QualityMetrics, R-FormatLocalize, R-LicenseLiability, R-DataUnavailabilityReference and R-DataMissingIncomplete.
(Contributed by Deirdre Lee & Phil Archer)
There are many potential/perceived benefits of Open Data; however, in order to publish data, some initial investment/resources are required by public bodies. When justifying these resources and evaluating the impact of the investment, many open data providers express the desire to be able to track how the datasets are being used. However, open data by design often requires no registration, explanation or feedback to enable access to and usage of the data. How can data usage be tracked in order to inform the Open Data ecosystem and improve data provision?
An example of this is the UK's mapping agency, Ordnance Survey. Under an agreement with the UK government, the Ordnance Survey has published a lot of its mapping data as open data, including some pioneering work in Linked Geospatial Data. Doing this has required significant effort and public investment, as has the effort to include semantics in the European Union's INSPIRE data model. In common with just about all organizations, public and private, investment in such an activity requires justification and so, speaking at the Linking Geospatial Data workshop in March 2014, the Ordnance Survey's Peter Parslow said that maintaining the service depends on showing that it is being used.
Server logs only tell you so much, i.e. the number of requests, but they don't show you the quality of the usage, or what the data is being used for. A small number of high quality, high impact uses of the data might very well have more significance than a large number of low quality ones.
Such a desire to know more about what data is being used for is not unique to Ordnance Survey. For example, the equivalent body in Denmark, the Danish Geodata Agency, offers all its data for free but requires you to register and give information about your intended use. Even where data is provided for free, the provider is very likely to want some recognition for their efforts as an encouragement to keep providing it, often in the face of demands for justification from line managers.
At the same time, users of the data need an incentive, other than simple politeness, to recognize the efforts made by data providers. Therefore any vocabulary that describes the use made of a dataset must also help in the discovery of that usage, i.e. in the discovery of the user's own work. Usage in this context means anything from usage within an application to citation in academic research.
Elements
Challenges:
Requires: R-TrackDataUsage.
(Contributed by Deirdre Lee, based on a presentation by Axel Polleres at EDF 2014).
The Open City Data Pipeline aims to provide an extensible platform to support citizens and city administrators by providing city key performance indicators (KPIs), leveraging open data sources. The assumption of open data is that "added value comes from comparable open datasets being combined." Open data needs stronger standards to be useful, in particular for industrial uptake. Industrial usage has different requirements from those of app hobbyists or civil society, and it is important to consider how open data can be used by industry at the time of publication. They have developed a data pipeline to:
Current Data Summary
Base assumption (for our use case): Added value comes from comparable open datasets being combined
Challenges & Lessons Learnt:
Among the lessons learnt: some KPIs must be derived from others (for example, :populationDensity = :population/:area), and equivalent values may be published by different sources under different predicates (e.g. dbpedia:populationTotal versus dbpedia:populationCensus), which makes datasets hard to compare directly.
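The query below sketches how such a derived KPI could be computed across sources that use either population predicate; the ex: vocabulary and the assumption that the sources have been loaded into a single store are illustrative, not a description of the actual pipeline:

  PREFIX dbpedia: <http://dbpedia.org/ontology/>
  PREFIX ex: <http://example.org/citydata/>

  SELECT ?city ?density
  WHERE {
    ?city ex:area ?area .
    OPTIONAL { ?city dbpedia:populationTotal  ?pt }
    OPTIONAL { ?city dbpedia:populationCensus ?pc }
    # Take whichever population predicate a given source happens to provide.
    BIND (COALESCE(?pt, ?pc) AS ?population)
    BIND (?population / ?area AS ?density)
    FILTER (BOUND(?population))
  }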
Challenges:
Requires: R-FormatStandardized, R-IndustryReuse, R-QualityCompleteness and R-QualityComparable.
(Contributed by Deirdre Lee, based on a post by Leigh Dodds)
There are many different licenses available under which data can be published on the Web, e.g. Creative Commons, Open Data Commons, national licenses, etc. It is important that the license is available in a machine-readable format. Leigh Dodds has done some work towards this with the Open Data Rights Statement Vocabulary, including guides for publishers and reusers. Another issue arises when data under different licenses is combined: the license terms under which the combined data is available also have to be merged. This interoperability of licenses is a challenge.
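As a minimal sketch (the dataset IRI and attribution details are invented, and the property names are assumed from the ODRS vocabulary mentioned above), a machine-readable rights statement could look like this:

  @prefix dct:  <http://purl.org/dc/terms/> .
  @prefix odrs: <http://schema.theodi.org/odrs#> .

  # Hypothetical dataset; license and attribution values are placeholders.
  <http://example.org/dataset/transport> dct:rights [
      a odrs:RightsStatement ;
      odrs:dataLicense <http://creativecommons.org/licenses/by/4.0/> ;
      odrs:attributionText "Example Transport Agency" ;
      odrs:attributionURL <http://example.org/transport-agency>
  ] .

Because the license is expressed as data rather than prose, a consumer combining several datasets can at least detect mechanically which license terms apply to each input.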
Challenges:
Requires: R-LicenseAvailable.
NB there is also a requirement for licenses to be interoperable but this is out of scope as defined by the Working Group's charter.
(Contributed by Deirdre Lee based on a number of talks at EDF 2014)
A main focus of publishing data on the Web is to facilitate industry reuse for commercial purposes. In order for a commercial body to reuse data, the terms of reuse must be clear. The legal terms of reuse are included in the license, but there are other factors that are important for commercial reuse, e.g. reliability, support, incident recovery, etc. These could be included in a Service Level Agreement (SLA).
Challenges:
Requires: R-SLAAvailable.
(Contributed by Deirdre Lee)
APIs are commonly used to publish data in formats designed for machine consumption, as opposed to the corresponding HTML pages, whose main aim is to deliver content suitable for human consumption. There remain questions around how APIs can best be designed to publish data, and even whether APIs are the most suitable way of publishing data at all. Could the use of HTTP and URIs be sufficient? If the goal is to facilitate machine-readable data, what is best practice?
Challenges:
Requires: R-AccessBulk and R-AccessRealTime.
(Contributed by Sumit Purohit)
URL:
http://rdesc.org/
RDESC's objective is to develop a capability for describing, linking, searching and discovering scientific resources used in collaborative science. To capture the semantics of context, RDESC adopts sets of existing ontologies where possible, such as FOAF, BIBO and schema.org. RDESC also introduces new concepts in order to provide a semantically integrated view of the data. Such concepts have two distinct functions. The first is to preserve semantics of the source that are more specific than what already exists in the ontology. The second is to provide broad categorization of existing concepts as it becomes clear that concepts are forming general groups. These generalizations enable users to work with concepts they understand, rather than needing to understand the semantics of many different systems. RDESC strives to provide a framework lightweight enough to be used as a component in any software system, such as desktop user environments or dashboards, but also scalable to millions of resources.
Elements
Positive aspects:
Negative aspects:
Challenges:
Potential Requirements:
Requires: R-UniqueIdentifier, R-Citable, R-Archiving, R-TrackDataUsage, R-AccessRealTime, R-SLAAvailable, R-FormatStandardized, R-VocabOpen, R-PersistentIdentification, R-VocabReference, R-SelectHighValue, R-SelectDemand, R-ProvAvailable, R-DataUnavailabilityReference, R-DataMissingIncomplete, R-DataLifecyclePrivacy and R-SensitiveSecurity.
The use cases presented in the previous section illustrate a number of challenges faced by data publishers and data consumers. These challenges show that some guidance is required in specific areas and therefore best practices should be provided. According to the challenges, a set of requirements was defined in such a way that a requirement motivates the creation of one or more best practices. Challenges related to Data Quality and Data Usage motivated the definition of specific requirements for the Quality and Granularity Description Vocabulary and the Data Usage Vocabulary.
The Open Knowledge Foundation defines open data most succinctly as data that "can be freely used, modified, and shared by anyone for any purpose". Data on the Web may be open but Web technologies are equally applicable to data that is not open, or to scenarios where open and closed data are combined. There are a number of areas where data may be on the Web but not open.
Closed data may be generated in an organization that then blocks general access using a firewall or other access control system. Generated data may have links to other "open" data hosted elsewhere and it may be represented using open Web standards but this cannot be considered "open data."
Data can be closed through the policies of the data publisher and data provider. Business-sensitive data that is not made accessible to the rest of the world is an example of closed data. Data controlled by law or government policies is a further example of closed data, e.g. national security data, law enforcement, health care, etc.
There is often a period between the generation of data and its publication as open data, and data in this state should be considered "closed." The data may remain in a closed state for an indefinite period of time while it is validated and analyzed, and insights and discoveries are published. It may also remain closed because the data publisher prefers to maximize the advantage gained by availability of the data before publishing it openly. This is currently common practice in scientific research.
Historically, data has been exposed using various non-HTTP, IETF-protocol-based end points including, but not limited to, FTP, SFTP, SCP and rsync. While these protocols are considered "open," their interoperability with the HTTP-based Web is currently a limiting factor. From an open data perspective, data only available using these non-HTTP protocols should be considered closed data and, by definition, is not on the Web. It follows that data accessible only via private or application-specific proprietary protocol end points is also deemed both closed data and out of scope for data on the Web.
In the following section we summarize the requirements derived from all the use cases, grouped according to theme. Closed data cuts across those themes (it's all data on the Web) but it's worth highlighting R-DataUnavailabilityReference, R-DataMissingIncomplete, R-DataLifecyclePrivacy and R-SensitiveSecurity as being of particular relevance to closed data.
The list below groups the requirements derived from the use cases according to the challenges faced by producers and users of data on the Web.
Data should be available in a machine-readable format that is adequate for its intended or potential use.
Motivation: BuildingEye and TheLandPortal
Data should be available in a standardized format. Through standardization, interoperability is also expected.
Motivation: OpenCityDataPipeline, WindCharacterizationScientificStudy, BuildingEye, TheLandPortal, GS1 Digital, Tabulae and RDESC.
Data should be available in an open format.
Motivation: BuildingEye.
Data should be available in multiple formats.
Motivation: GS1 Digital
Information about locale parameters (date and number formats, language) should be made available
Motivation: TheLandPortal and Tabulae
Existing reference vocabularies should be reused where possible
Motivation: OpenCityDataPipeline, RecifeOpenDataPortal, DadosGovBr, ISOGEOStory, DutchBaseRegisters, DigitalArchivingofLinkedData, TheLandPortal, UruguayOpenDataCatalogue, Tabulae and RDESC.
Vocabularies should be clearly documented
Motivation: RecifeOpenDataPortal
Vocabularies should be shared in an open way
Motivation: RecifeOpenDataPortal, WindCharacterizationScientificStudy and RDESC.
Vocabularies should include versioning information
Motivation: TheLandPortal and Tabulae
Metadata should be available
Motivation: ViolenceMap
Metadata should be machine-readable
Motivation: RecifeOpenDataPortal, Bio2RDF and TheLandPortal
Metadata should be standardized. Through standardization, interoperability is also expected.
Motivation: RecifeOpenDataPortal, ISOGEOStory and LATimesReporting
Metadata vocabulary, or values if vocabulary is not standardized, should be well-documented
Motivation: RecifeOpenDataPortal
Note: Licenses are a form of metadata and so inherit metadata requirements.
Data should be associated with a license. A license is a type of metadata, so all metadata requirements also apply here.
Motivation: MachineReadabilityandInteroperabilityofLicenses, DadosGovBr and BuildingEye.
Liability terms associated with usage of Data on the Web should be clearly outlined
Motivation: GS1 Digital
Note: Provenance data is a form of metadata and so inherits metadata requirements.
Data provenance information should be available. Provenance data is a type of metadata, so all metadata requirements also apply here.
Motivation: TheLandPortal, GS1 Digital, Tabulae and RDESC.
Note: SLAs are a form of metadata and so inherit metadata requirements
Service Level Agreements (SLAs) for industry reuse of the data should be available if requested (via a defined contact point). An SLA is a type of metadata, so all metadata requirements also apply here.
Motivation: DocumentedSupportandRelease, MachineReadabilityofSLAs and RDESC.
Data should be suitable for industry reuse
Motivation: OpenCityDataPipeline
Potential revenue streams from data should be described
Motivation: DutchBaseRegisters
Data available at different levels of granularity should be accessible and modelled in a common way
Motivation: ISOGEOStory and TheLandPortal
Datasets selected for publication should be of high value, which should be indicated in a quantifiable manner/property.
Motivation: RecifeOpenDataPortal and RDESC.
Datasets selected for publication should be in demand by potential users, which should be indicated in a quantifiable manner/property.
Motivation: RecifeOpenDataPortal and RDESC.
Preliminary steps in the data lifecycle should not infringe upon individuals' intellectual property rights.
Motivation: BuildingEye, Bio2RDF, RDESC.
Data should be available for bulk download
Motivation: PublicationofDataviaAPIs, BuildingEye and TheLandPortal
Where data is produced in real-time, it should be available on the Web in real-time
Motivation: PublicationofDataviaAPIs, WindCharacterizationScientificStudy, TheLandPortal and RDESC.
Data should be available in an up-to-date manner and the update cycle made explicit
Motivation: Documented Support and Release and GS1 Digital
Data should not infringe a person's right to privacy
Motivation: DutchBaseRegisters
Data should not infringe an organization's security (local government, national government, business)
Motivation: DatasetsforNaturalHazardsManagement
References to data that is not open, or that is available under restrictions different from those at the point of reference, should provide context by explaining how, or by whom, the referred-to data can be accessed.
Motivation: BuildingEye, DutchBaseRegisters, DigitalArchivingofLinkedData, UruguayOpenDataCatalog, Bio2RDF, OKFNTransportWG, RDESC.
Each data resource should be associated with a unique identifier
Motivation: DutchBaseRegisters, DigitalArchivingofLinkedData, LATimesReporting, UruguayOpenDataCatalogue and RDESC.
A data resource may have multiple representations, e.g. XML, HTML, JSON or RDF
Motivation: DutchBaseRegisters
It should be possible to cite data on the Web
Motivation: LATimesReporting, GS1 Digital and RDESC.
Dynamic generation of Data on the Web from non-Web data resources, with automatic updates when the original data source is updated
Motivation: RecifeOpenDataPortal, UruguayOpenDataCatalogue, GS1 Digital, Tabulae and ViolenceMap.
Core registers should be accessible
Motivation: DutchBaseRegisters and GS1 Digital
An identifier for a particular resource should be resolvable on the Web and associated for the foreseeable future with a single resource or with information about why the resource is no longer available.
Motivation: DigitalArchivingofLinkedData, TheLandPortal, DutchBaseRegisters, GS1Digital, ViolenceMap and RDESC.
It should be possible to archive data
Motivation: DigitalArchivingofLinkedData and RDESC.
Data should be complete
Motivation: OpenCityDataPipeline, RecifeOpenDataPortal, TheLandPortal, Tabulae and ViolenceMap.
Publishers should indicate if data is partially missing or if the dataset is incomplete
Motivation: BuildingEye, LATimesReporting, UruguayOpenDataCatalogue, OKFNTransportWG and RDESC.
Data should be comparable with other datasets
Motivation: OpenCityDataPipeline
Data should be associated with a set of standardized, objective quality metrics
Motivation: TheLandPortal
Subjective quality opinions on the data should be supported
Motivation: FeedbackLoopforCorrections and DadosGovBr
It should be possible to track the usage of data
Motivation: TrackingofDataUsage and RDESC.
It should be possible to incorporate feedback on the data
Motivation: FeedbackLoopforCorrections
The editors wish to thank all those who have contributed use cases or commented on those provided by others.