=Data Cube Vocabulary/Use Cases=

==Abstract==

Many national, regional and local governments, as well as other organizations inside and outside of the public sector, collect numeric data and aggregate this data into statistics. There is a need to publish these statistics in a standardized, machine-readable way on the web, so that they can be freely integrated and reused in consuming applications. This document is a collection of use cases for a standard vocabulary to publish statistics as Linked Data.

==Status of This Document==

* ...

==Introduction==

Many national, regional and local governments, as well as other organizations inside and outside of the public sector, create statistics. There is a need to publish those statistics in a standardized, machine-readable way on the web, so that statistics can be freely linked, integrated and reused in consuming applications. This document is a collection of use cases for a standard vocabulary to publish statistics as Linked Data.

Publishing statistics is challenging for the following reasons:

* Representing observations and measurements requires more complex modelling, as discussed by Martin Fowler [Fowler, 1997]: recording a statistic simply as an attribute of an object (e.g., the fact that a person weighs 185 pounds) fails to represent important concepts such as quantity, measurement, and observation.
* A Quantity comprises the information necessary to interpret a value, e.g., its unit and the arithmetic and comparative operations on it; with this information, humans and machines can appropriately visualize quantities or convert between them.
* A Measurement separates a quantity from the actual event at which it was collected; a measurement assigns a quantity to a specific phenomenon type (e.g., strength). A measurement can also record metadata such as who made the measurement (person) and when it was made (time).
* Observations, finally, abstract from measurements, which record only numeric quantities: an Observation can also assign a category (e.g., blood group A) to the observed phenomenon.
* The figure demonstrates this relationship.
* QB deploys the multidimensional model (made of observations with Measures depending on Dimensions and Dimension Members, and further contextualized by Attributes) and should cater for this complexity in modelling.

==Terminology==

* Statistics is the study of the collection, organization, analysis, and interpretation of data (Statistics. Wikipedia, http://en.wikipedia.org/wiki/Statistics, last visited on Jan 8 2013). Statistics comprise statistical data.
* The basic structure of statistical data is a multidimensional table (also called a data cube) (SDMX User Guide Version 2009.1, http://sdmx.org/wp-content/uploads/2009/02/sdmx-userguide-version2009-1-71.pdf, last visited on Jan 8 2013), i.e., a set of observed values organized along a group of dimensions, together with associated metadata. We refer to aggregated statistical data as "macro-data" and to non-aggregated statistical data as "micro-data".
* Source data is data from datastores such as relational databases or spreadsheets that acts as a source for the Linked Data publishing process.
* A publisher is a person or organization that exposes source data as Linked Data on the Web.
* A consumer is a person or agent that uses Linked Data from the Web.
* A format is machine-readable if it is amenable to automated processing by a machine, as opposed to presentation to a human user.

==Aim of this document==

The aim of this document is to present use cases (rather than general scenarios) that would benefit from a standard vocabulary to represent statistics as Linked Data. These use cases will be used to derive and justify requirements for a specification of such a standard vocabulary, and will later be used to evaluate how well the vocabulary fulfils those requirements. Use cases do not necessarily need to be implemented; their main aim is to serve as a "design decision FAQ", so that requirements for the vocabulary are derived systematically rather than ad hoc, and so that the vocabulary's specification and its use cases stay connected.

==Use cases==
 
This section presents use cases that would be enabled by the existence of a standard vocabulary for the representation of statistics as Linked Data. Since a draft of the specification of the cube vocabulary has been published, and the vocabulary is already in use, we will refer to this standard vocabulary by its current name, the RDF Data Cube vocabulary (QB for short), throughout the document.
 
===SDMX Web Dissemination Use Case===
 
* The ISO standard for exchanging and sharing statistical data and metadata among organizations is Statistical Data and Metadata eXchange (SDMX). Since this standard has proven applicable in many contexts, we adopt the multidimensional model that underlies SDMX and intend the standard vocabulary to be compatible with SDMX.
 
* Therefore, we have adopted the "Web Dissemination Use Case" (SDMX 2.1 User Guide, draft version 0.1, 19/09/2012. http://sdmx.org/wp-content/uploads/2012/11/SDMX_2-1_User_Guide_draft_0-1.pdf. Last visited on Jan 8 2013.), which is the prime use case for SDMX since it is an increasingly popular use of SDMX and enables organisations to build a self-updating dissemination system.
 
* The Web Dissemination Use Case involves three actors: a structural metadata web service (registry) that collects metadata about statistical data in a registration fashion; a data web service (publisher) that publishes statistical data and the metadata registered in the structural metadata web service; and a data consumption application (consumer) that first discovers data via the registry, then queries the corresponding publisher for the selected data, and finally visualises the data.
 
* Abstracted from the SDMX specificities, this use case contains the following processes, also illustrated in a process flow diagram by SDMX and described in more detail as follows:
 
* XXX Figure
 
  
* A structural metadata source (registry) collects metadata about statistical data.
 
* A data web service (publisher) registers statistical data in a registry, and provides consumers with statistical data from a database and metadata from a metadata repository. For that, the publisher creates database tables (see 1 in the figure), and loads statistical data into a database and metadata into a metadata repository.
 
* A consumer discovers data from a registry (3) and creates a query to the publisher for selected statistical data (4).
 
* The publisher translates the query into queries against its database (5) and its metadata repository (6), and returns the statistical data and metadata.
 
* The consumer visualises the returned statistical data and metadata.
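
Abstracting further, the discovery and query steps (3 and 4) can be sketched in a few lines of code. The following Python sketch, using rdflib, assumes a hypothetical registry response and publisher URI; the QB terms are real, but the endpoints are invented for illustration:

<pre>
# Sketch of the discover-and-query flow; endpoints are invented.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

QB = Namespace("http://purl.org/linked-data/cube#")

# (3) Discovery: metadata a consumer might fetch from a registry.
registry_metadata = """
@prefix qb:  <http://purl.org/linked-data/cube#> .
@prefix dct: <http://purl.org/dc/terms/> .
<http://publisher.example.org/dataset/pop> a qb:DataSet ;
    dct:title "Population statistics" .
"""
registry = Graph()
registry.parse(data=registry_metadata, format="turtle")

# (4) Query: ask the publisher for the observations of a selected dataset.
for dataset in registry.subjects(RDF.type, QB.DataSet):
    data = Graph()
    data.parse(str(dataset))  # assumes the publisher serves RDF at this URI
    observations = list(data.subjects(QB.dataSet, dataset))
    print(dataset, len(observations), "observations")  # hand over to a chart
</pre>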
 
 
 
 
 
===Publishing statistical data===
 
 
====Publishing SDMX as Linked Data (UC 3)====
 
XXX: The QB spec should perhaps also prefer the term "multidimensional model" over the less clear term "cube model". Since QB adopts the multidimensional model underlying SDMX, it should be possible to re-publish SDMX data using QB.
 
 
The scenario for this use case is Eurostat [http://epp.eurostat.ec.europa.eu/], which publishes large amounts of European statistics from a data warehouse as SDMX and in other formats on the web. Eurostat also provides an interface to browse and explore the datasets. However, linking such multidimensional data to related data sets and concepts would require downloading the datasets of interest and integrating them manually.
 
 
The goal of this use case is to improve integration with other datasets: Eurostat data should be published on the web in a machine-readable format that can be linked with other datasets and freely consumed by applications. This use case is fulfilled if QB can be used for publishing the data from Eurostat as Linked Data for integration.
 
 
A publisher wants to make Eurostat data available as Linked Data. The statistical data shall be published as-is; it is not necessary to represent information for validation. Data is read from TSV files only. There are two concrete examples of this use case: the Eurostat Linked Data Wrapper (http://estatwrap.ontologycentral.com/) and Linked Statistics Eurostat Data (http://eurostat.linked-statistics.org/). They have slightly different foci (e.g., with respect to completeness, performance, and agility).
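
To make the mapping concrete, here is a minimal Python sketch (using rdflib) that turns one row of a Eurostat-style TSV file into a QB observation. The ex: namespace and dataset code are illustrative assumptions; the two wrappers above each use their own URI schemes:

<pre>
# Sketch: turn one row of a Eurostat-style TSV into a QB observation.
# The ex: namespace is invented; real publishers use their own URIs.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
SDMX_M = Namespace("http://purl.org/linked-data/sdmx/2009/measure#")
SDMX_D = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")
EX = Namespace("http://example.org/eurostat/")

tsv_line = "BE\t2010\t10839905"          # geo, time, value
geo, time, value = tsv_line.split("\t")

g = Graph()
g.bind("qb", QB)
ds = EX["dataset/tps00001"]              # dataset code assumed for illustration
g.add((ds, RDF.type, QB.DataSet))

obs = EX[f"obs/{geo}/{time}"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, ds))
g.add((obs, SDMX_D.refArea, EX[f"geo/{geo}"]))
g.add((obs, SDMX_D.refPeriod, Literal(time, datatype=XSD.gYear)))
g.add((obs, SDMX_M.obsValue, Literal(int(value))))

print(g.serialize(format="turtle"))
</pre>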
 
 
Challenges of this use case are:
 
* There are large amounts of SDMX data; the Eurostat dataset comprises 350 GB of data. This may influence decisions about which toolsets and architectures to use. One important task is to decide whether to structure the data into separate datasets.
 
* Again, the question arises whether slices are useful.
 
 
Unanticipated Uses ''(optional)'': -
 
 
Existing Work ''(optional)'': -
 
 
====Publishing sensor data as statistics (UC 4)====
 
Typically, multidimensional data is aggregated. However, there are cases where non-aggregated data needs to be published, e.g., observational, sensor network and forecast data sets. Such raw data may already be available in RDF, but using a different vocabulary.
 
 
The goal of this use case is to demonstrate that publishing aggregate values and publishing raw data should not differ much in QB.
 
 
For example, the Environment Agency uses it to publish (at least weekly) information on the quality of bathing waters around England and Wales [http://www.epimorphics.com/web/wiki/bathing-water-quality-structure-published-linked-data]. In another scenario, DERI tracks measurements about printing for a sustainability report. In the DERI scenario, raw data (number of printouts per person) is collected, then aggregated on a unit level, and then modelled using QB.
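
A minimal sketch of a single raw reading modelled as a qb:Observation follows, in Python with rdflib. The bathing-water site URIs and the ex:waterTemperature measure are invented for illustration; the Environment Agency's actual schema differs:

<pre>
# Sketch: a single raw sensor reading as a qb:Observation.
# Site URIs and the ex:waterTemperature measure are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
SDMX_D = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")
SDMX_A = Namespace("http://purl.org/linked-data/sdmx/2009/attribute#")
EX = Namespace("http://example.org/bathing-water/")

g = Graph()
g.bind("qb", QB)
obs = EX["obs/site42/2012-07-02T10:00"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/readings"]))
g.add((obs, SDMX_D.refArea, EX["site/42"]))
g.add((obs, SDMX_D.refPeriod,
       Literal("2012-07-02T10:00:00", datatype=XSD.dateTime)))
g.add((obs, EX.waterTemperature, Literal(18.4, datatype=XSD.decimal)))
g.add((obs, SDMX_A.unitMeasure, EX["unit/degreeCelsius"]))
print(g.serialize(format="turtle"))
</pre>

Structurally, nothing distinguishes this raw reading from an aggregated value; the difference would only show in the (omitted) data structure definition and attributes.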
 
 
Problems and Limitations:
 
* This use case shall also demonstrate how to link statistics with other statistics or with non-statistical data (metadata).
 
 
Unanticipated Uses ''(optional)'': -
 
 
Existing Work ''(optional)'': The Semantic Sensor Network ontology (SSN) [http://purl.oclc.org/NET/ssnx/ssn] already provides a way to publish sensor information. SSN data provides statistical Linked Data and grounds its data in the domain, e.g., sensors that collect observations (e.g., sensors measuring the average of temperature over location and time). A number of organizations, particularly in the climate and meteorological area, already have some commitment to the OGC "Observations and Measurements" (O&M) logical data model, also published as ISO 19156. XXX: Are there any statements about compatibility and interoperability between O&M and Data Cube that can be made to give guidance to such organizations?
 
 
====Registering statistical data in dataset catalogs (UC 5)====
 
After statistics have been published as Linked Data, the question remains how to communicate the publication and let users find the statistics. There are catalogs to register datasets, e.g., CKAN, DataCite [http://www.datacite.org/], da|ra [http://www.gesis.org/dara/en/home/?lang=en], and Pangaea [http://pangaea.de/]. Those catalogs require specific configurations to register statistical data.
 
 
The goal of this use case is to demonstrate how to expose and distribute statistics after they have been modelled using QB, for instance by allowing automatic registration of statistical data in such catalogs so that datasets can be found and evaluated. To solve this issue, it should be possible to transform QB data into formats that data catalogs can use.
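
As one illustration of such a transformation, the following Python sketch (using rdflib) derives a DCAT catalog entry from a QB dataset's metadata. The DCAT and Dublin Core terms are real; the input data and URIs are invented:

<pre>
# Sketch: derive a DCAT catalog entry from a QB dataset's metadata.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

QB = Namespace("http://purl.org/linked-data/cube#")
DCAT = Namespace("http://www.w3.org/ns/dcat#")
DCT = Namespace("http://purl.org/dc/terms/")

qb_data = """
@prefix qb:  <http://purl.org/linked-data/cube#> .
@prefix dct: <http://purl.org/dc/terms/> .
<http://example.org/dataset/pop> a qb:DataSet ;
    dct:title "Population" ; dct:publisher <http://example.org/agency> .
"""
g = Graph()
g.parse(data=qb_data, format="turtle")

catalog = Graph()
catalog.bind("dcat", DCAT)
for ds in g.subjects(RDF.type, QB.DataSet):
    catalog.add((ds, RDF.type, DCAT.Dataset))    # re-type for the catalog
    for p in (DCT.title, DCT.publisher):          # copy common metadata
        for o in g.objects(ds, p):
            catalog.add((ds, p, o))
print(catalog.serialize(format="turtle"))
</pre>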
 
 
* Use Case Scenario: XXX: Find a specific use case, or ask how other publishers of QB data have dealt with this issue. Maybe a relation to DCAT?
 
 
* Problems and Limitations: -
 
 
* Unanticipated Uses ''(optional)'': Where data catalogs contain statistics, they expose them not as Linked Data but, for instance, as CSV or HTML (Pangaea [http://doi.pangaea.de/10.1594/PANGAEA.728676]). Publishing such data using QB could also be a use case.
 
 
* Existing Work ''(optional)'': -
 
 
 
====Making transparent transformations on or different versions of statistical data (UC 6)====
 
Statistical data is often used and further transformed for analysis and reporting. There is a risk that data has been incorrectly transformed, so that the result is no longer interpretable. Therefore, if statistical data has been derived from other statistical data, this should be made transparent.
 
 
 
The goal of this use case is to describe provenance and versioning around statistical data, so that the history of statistics published on the web becomes clear. This also relates to the issue of representing relationships between datasets published using QB. To fulfil this use case, QB should recommend specific approaches to transforming and deriving datasets that can be tracked and stored with the statistical data.
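
One candidate approach, sketched here in Python with rdflib under the assumption that PROV-O terms are acceptable for this purpose, records which cube a derived cube came from; the dataset URIs and the slicing activity are invented:

<pre>
# Sketch: make a derivation between two cubes explicit with PROV-O terms.
# Whether QB should standardize this is exactly the open question here.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

QB = Namespace("http://purl.org/linked-data/cube#")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("prov", PROV)
g.add((EX.slicedCube, RDF.type, QB.DataSet))
g.add((EX.fullCube, RDF.type, QB.DataSet))
# The derived cube records its source and the kind of operation applied.
g.add((EX.slicedCube, PROV.wasDerivedFrom, EX.fullCube))
g.add((EX.slicedCube, PROV.wasGeneratedBy, EX.sliceByYear2010))
g.add((EX.sliceByYear2010, RDF.type, PROV.Activity))
print(g.serialize(format="turtle"))
</pre>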
 
 
* Use Case Scenario: XXX: Add concrete example.
 
 
Challenges of this use case are:
 
* Operations on statistical data result in new statistical data, depending on the operation. For instance, in terms of Data Cube, operations such as slice, dice, roll-up, drill-down will result in new Data Cubes. This may require representing general relationships between cubes (as discussed here: [http://groups.google.com/group/publishing-statistical-data/browse_thread/thread/75762788de10de95]).
 
 
* Unanticipated Uses ''(optional)'':
 
 
* Existing Work ''(optional)'': Possible relation to Best Practices part on Versioning [http://www.w3.org/2011/gld/wiki/Best_Practices_Discussion_Summary#Versioning], where it is specified how to publish data which has multiple versions.
 
 
===Consuming published statistical data===
 
 
====Simple chart visualizations of (integrated) published statistical datasets (UC 7)====
 
Data that is published on the Web is typically visualized by transforming it manually into CSV or Excel and then creating a visualization on top of these formats using Excel, Tableau, RapidMiner, Rattle, Weka etc.
 
 
This use case shall demonstrate how statistical data published on the web can be directly visualized, without using commercial or highly-complex tools. This use case is fulfilled if data that is published in QB can be directly visualized inside a webpage.
 
 
An example scenario is environmental research done within the SMART research project (http://www.iwrm-smart.org/). Here, statistics about environmental aspects (e.g., measurements about the climate in the Lower Jordan Valley) shall be visualized for scientists and decision makers. It should also be possible to integrate statistics and display them together. The data is available as XML files on the web. On a separate website, specific parts of the data shall be queried and visualized in simple charts, e.g., line diagrams. XXX: Figure shows the intended display of an environmental measure over time for three regions in the Lower Jordan Valley, displayed inside a web page:
 
 
XXX: Figure shows the same measures in a pivot table. Here, the aggregate COUNT of measures per cell is given.
 
 
The use case uses Google App Engine, Qcrumb.com, and Spark. An example of a line diagram is given at [http://129.13.109.100/~dropedia/index.php/Level_above_msl_at_AB3149,_AB3148,_AB3143] (some loading time needed). Current work aims to integrate the existing datasets with additional data sources, and then to run queries that take data from both datasets and display them together.
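
The query step behind such a chart can be sketched as follows in Python with rdflib; the observation data, station URI and period values are invented for illustration:

<pre>
# Sketch: the query step behind a line chart - one measure over time
# for one region. Data and URIs are invented.
from rdflib import Graph

data = """
@prefix qb:   <http://purl.org/linked-data/cube#> .
@prefix sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#> .
@prefix sdmx-measure:   <http://purl.org/linked-data/sdmx/2009/measure#> .
@prefix ex:   <http://example.org/> .
ex:o1 a qb:Observation ; qb:dataSet ex:levels ;
    sdmx-dimension:refArea ex:AB3149 ;
    sdmx-dimension:refPeriod "2011-01" ; sdmx-measure:obsValue 312.5 .
ex:o2 a qb:Observation ; qb:dataSet ex:levels ;
    sdmx-dimension:refArea ex:AB3149 ;
    sdmx-dimension:refPeriod "2011-02" ; sdmx-measure:obsValue 310.9 .
"""
g = Graph()
g.parse(data=data, format="turtle")

q = """
PREFIX sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#>
PREFIX sdmx-measure:   <http://purl.org/linked-data/sdmx/2009/measure#>
SELECT ?period ?value WHERE {
  ?obs sdmx-dimension:refArea <http://example.org/AB3149> ;
       sdmx-dimension:refPeriod ?period ;
       sdmx-measure:obsValue ?value .
} ORDER BY ?period
"""
for period, value in g.query(q):
    print(period, value)   # feed these pairs into any charting library
</pre>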
 
 
Challenges of this use case are:
 
 
* The difficulties lie in structuring the data appropriately so that the specific information can be queried.

* Also, data shall be published with potential integration in mind; therefore, e.g., units of measurement need to be represented.

* Integration becomes much more difficult if publishers use different measures or dimensions.
 
 
* Problems and Limitations: 
 
* Unanticipated Uses ''(optional)'': -
 
* Existing Work ''(optional)'': -
 
 
====Uploading published statistical data in Google Public Data Explorer (UC 8)====
 
Google Public Data Explorer (GPDE - http://code.google.com/apis/publicdata/) provides an easy way to visualize and explore statistical data. Data needs to be in the Dataset Publishing Language (DSPL - https://developers.google.com/public-data/overview) to be uploaded to the data explorer. A DSPL dataset is a bundle that contains an XML file (the schema) and a set of CSV files (the actual data). Google provides a tutorial to create a DSPL dataset from your data, e.g., from CSV. This requires a good understanding of XML, as well as a good understanding of the data that shall be visualized and explored.
 
 
In this use case, it shall be demonstrated how to take any published QB dataset and transform it automatically into DSPL for visualization and exploration. A dataset that is published conforming to QB will provide the level of detail that is needed for such a transformation.
 
 
In an example scenario, a publisher P has published data using QB. There are two different ways to fulfil this use case: 1) a consumer C downloads this data into a triple store, where SPARQL queries on the data can be used to transform it into DSPL, which is then uploaded and visualized using GPDE; or 2) one or more XSLT transformations on the RDF/XML transform the data into DSPL.
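
A sketch of option 1 follows in Python with rdflib: a SPARQL query over QB observations produces one of the CSV files of a DSPL bundle (the accompanying XML schema is omitted). The data, URIs and column names are invented:

<pre>
# Sketch of option 1: SPARQL over QB data producing one CSV file of a
# DSPL bundle. Data, URIs and column names are invented.
import csv, io
from rdflib import Graph

data = """
@prefix qb: <http://purl.org/linked-data/cube#> .
@prefix sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#> .
@prefix sdmx-measure:   <http://purl.org/linked-data/sdmx/2009/measure#> .
@prefix ex: <http://example.org/> .
ex:o1 a qb:Observation ; sdmx-dimension:refArea ex:BE ;
    sdmx-dimension:refPeriod "2010" ; sdmx-measure:obsValue 10839905 .
"""
g = Graph()
g.parse(data=data, format="turtle")

q = """
PREFIX sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#>
PREFIX sdmx-measure:   <http://purl.org/linked-data/sdmx/2009/measure#>
SELECT ?area ?period ?value WHERE {
  ?obs sdmx-dimension:refArea ?area ;
       sdmx-dimension:refPeriod ?period ;
       sdmx-measure:obsValue ?value .
}
"""
out = io.StringIO()
w = csv.writer(out)
w.writerow(["country", "year", "population"])  # column names from the schema
for area, period, value in g.query(q):
    w.writerow([str(area).rsplit("/", 1)[-1], str(period), str(value)])
print(out.getvalue())
</pre>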
 
 
Challenges of this use case are:
 
* The technical challenges for the consumer here lie in knowing where to download which data and how to get it transformed into DSPL without knowing the data.
 
 
* Unanticipated Uses ''(optional)'': DSPL is representative of using statistical data published on the web in available tools for analysis. Similar tools that may be automatically covered are Weka (ARFF data format), Tableau, etc.
 
 
* Existing Work ''(optional)'': -
 
 
====Allow Online Analytical Processing on published datasets of statistical data (UC 9)====
 
Online Analytical Processing [http://en.wikipedia.org/wiki/Online_analytical_processing] is an analysis method for multidimensional data. It is an explorative analysis method that allows users to interactively view the data from different angles (rotate, select) or at different granularities (drill-down, roll-up), and to filter it for specific information (slice, dice).
 
 
The multidimensional model used in QB to model statistics should be usable by OLAP systems. More specifically, data that conforms to QB can be used to define a Data Cube within an OLAP engine and can then be queried by OLAP clients.
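
As an illustration of what such querying could look like without a dedicated engine, the following Python sketch (using rdflib) expresses a roll-up - the sum over all periods per area - as a SPARQL 1.1 aggregate. The data is invented, and the choice of SUM as the aggregation function is an assumption; knowing which function applies is precisely one of the challenges listed below:

<pre>
# Sketch: a roll-up (sum over all periods per area) as a SPARQL 1.1
# aggregate over QB observations; data and URIs are invented.
from rdflib import Graph

data = """
@prefix qb: <http://purl.org/linked-data/cube#> .
@prefix sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#> .
@prefix sdmx-measure:   <http://purl.org/linked-data/sdmx/2009/measure#> .
@prefix ex: <http://example.org/> .
ex:o1 a qb:Observation ; sdmx-dimension:refArea ex:BE ;
    sdmx-dimension:refPeriod "2010" ; sdmx-measure:obsValue 100 .
ex:o2 a qb:Observation ; sdmx-dimension:refArea ex:BE ;
    sdmx-dimension:refPeriod "2011" ; sdmx-measure:obsValue 120 .
"""
g = Graph()
g.parse(data=data, format="turtle")

q = """
PREFIX sdmx-dimension: <http://purl.org/linked-data/sdmx/2009/dimension#>
PREFIX sdmx-measure:   <http://purl.org/linked-data/sdmx/2009/measure#>
SELECT ?area (SUM(?v) AS ?total) WHERE {
  ?obs sdmx-dimension:refArea ?area ; sdmx-measure:obsValue ?v .
} GROUP BY ?area
"""
for area, total in g.query(q):
    print(area, total)   # BE 220 - the rolled-up cell
</pre>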
 
 
An example scenario of this use case is the Financial Information Observation System (FIOS) [http://fios.ontologycentral.com/], where XBRL data has been re-published using QB and made analysable for stakeholders in a web-based OLAP client. XXX: Figure shows an example of using FIOS. Here, for three different companies, the cost of goods sold as disclosed in XBRL documents is analysed. As cell values, either the number of disclosures or, if only one is available, the actual value in USD is given:
 
 
Challenges of this use case are:
 
* A problem lies in the strict separation between queries for the structure of data, and queries for actual aggregated values.
 
* Another problem lies in defining Data Cubes without greater insight into the data beforehand.

* Depending on the expressivity of the OLAP queries (e.g., aggregation functions, hierarchies, ordering), performance plays an important role.

* QB allows flexibility in describing statistics, e.g., in order to reduce redundancy of information in single observations. These alternatives make general consumption of QB data more complex. Also, it is not clear what "conforming" to QB means; e.g., is a qb:DataStructureDefinition required?
 
 
* Unanticipated Uses ''(optional)'': -
 
 
* Existing Work ''(optional)'': -
 
  
 
====Transforming published statistics into XBRL (UC 10)====

XBRL is a standard data format for disclosing financial information. Typically, financial data is not managed within the organization using XBRL; instead, internal formats such as Excel or relational databases are used. If different data sources are to be summarized in XBRL data formats for publication, an internally-used standard format such as QB could help integrate and transform the data into the appropriate format.

In this use case, it should be possible to automatically transform data conforming to QB into the XBRL data format. This use case is fulfilled if QB contains the information necessary to derive XBRL data.

In an example scenario, DERI has had a use case to publish sustainable IT information as XBRL to the Global Reporting Initiative (GRI - https://www.globalreporting.org/). Here, raw data (number of printouts per person) is collected, then aggregated on a unit level and modelled using QB. QB data shall then be used directly to fill in XBRL documents that can be published to the GRI.
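
A heavily hedged sketch of the final step follows in Python: a single aggregated QB value is emitted as one XBRL fact. The ex:PrintoutCount concept, context details and value are all invented; a real GRI filing would use the proper taxonomy, contexts and units:

<pre>
# Sketch: emit one XBRL fact from an aggregated QB value. The ex: concept
# and context are invented; a real filing also needs xbrli:unit/unitRef
# for numeric facts and the proper taxonomy.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
EX = "http://example.org/taxonomy"
ET.register_namespace("xbrli", XBRLI)
ET.register_namespace("ex", EX)

root = ET.Element(f"{{{XBRLI}}}xbrl")
ctx = ET.SubElement(root, f"{{{XBRLI}}}context", id="unit2012")
entity = ET.SubElement(ctx, f"{{{XBRLI}}}entity")
ET.SubElement(entity, f"{{{XBRLI}}}identifier",
              scheme="http://example.org/units").text = "DERI"
period = ET.SubElement(ctx, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2012-12-31"

printouts = 14230   # value taken from a QB observation (aggregated per unit)
fact = ET.SubElement(root, f"{{{EX}}}PrintoutCount",
                     contextRef="unit2012", decimals="0")
fact.text = str(printouts)
print(ET.tostring(root, encoding="unicode"))
</pre>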

Challenges of this use case are:

* So far, QB data has been transformed into semantic XBRL, a vocabulary closer to XBRL. There is a chance that certain information required in a GRI XBRL document cannot be encoded using a vocabulary as general as QB. In this case, QB could be used in combination with semantic XBRL. XXX: Add link to semantic XBRL.

* Unanticipated Uses ''(optional)'': -

* Existing Work ''(optional)'': -

===Template use case===

* Background and Current Practice:
* Goal:
* Use Case Scenario:
* Problems and Limitations:
* Unanticipated Uses ''(optional)'':
* Existing Work ''(optional)'':
* Name: The Wiki page URL should be of the form "Use_Case_Name", where Name is a short name by which we can refer to the use case in discussions. The Wiki page URL can act as a URI identifier for the use case.
* Person: The person responsible for maintaining the correctness/completeness of this use case. Most obviously, this would be the creator.
* Dimension: The primary dimension which this use case illustrates, and secondary dimensions which the use case also illustrates.
* Background and Current Practice: Where this use case takes place in a specific domain, and so requires some prior information to understand, this section is used to describe that domain. As far as possible, please put explanation of the domain in here, to keep the scenario as short as possible. If this scenario is best illustrated by showing how applying technology could replace current existing practice, then this section can be used to describe the current practice. This section can also be used to document statistical data within the use case.
* Goal: Two short statements stating (1) what is achieved in the scenario without reference to the RDF Data Cube vocabulary, and (2) how we use the RDF Data Cube vocabulary to achieve this goal.
* Use Case Scenario: The use case scenario itself, described as a story in which actors interact with systems, each other, etc. It should show who is using QB and for what purpose. Please mark the key steps which show requirements on QB in italics.
* Problems and Limitations: The key to why a use case is important often lies in what problem would occur if it was not achieved, or what problem makes it hard to achieve. This section lists reasons why this scenario is or may be difficult to achieve, including pre-requisites which may not be met, technological obstacles, etc. Important: Please explicitly list here the technical challenges (with regard to statistical data) made apparent by this use case. This will aid in creating a roadmap to overcome those challenges.
* Unanticipated Uses ''(optional)'': The scenario above describes a particular case of using technology. However, by allowing this scenario to take place, the technology allows for other use cases. This section captures unanticipated uses of the same system apparent in the use case scenario.
* Existing Work ''(optional)'': This section is used to refer to existing technologies or approaches which achieve the use case.

==Requirements==

The use cases presented in the previous section give rise to the following requirements for a standard representation of statistics. Requirements are cross-linked with the use cases that motivate them, and are categorized as deriving from publishing or consuming use cases.

===Publishing use cases===

====Machine-readable and application-independent representation of statistics====

* It should be possible to add abstraction, multiple levels of description, and summaries of statistics.
* (UC 1-4)

====Representing statistics from various sources====

* It should be possible to translate statistics from various source data into QB.
* QB should be very general and should be usable for other data sets such as survey data, spreadsheets and OLAP data cubes.
* What kind of statistics are described: simple CSV tables (UC 1), Excel (UC 2) and more complex SDMX (UC 2) data about government statistics or other public-domain-relevant data.

====Communicating and exposing statistics on the web====

* It should become clear how to make statistical data available on the web, including how to expose it and how to distribute it.
* (UC 5)

====Coverage of typical statistics metadata====

* It should be possible to add meta-information to statistics as found in typical statistics or statistics catalogs.
* (UC 1-5)

====Expressing hierarchies====

* It should be possible to express hierarchies on Dimensions of statistics; a minimal sketch follows this list.
* Some of this requirement is met by the work on ISO Extension to SKOS [1].
* (UC 3, 9)
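
A minimal sketch, in Python with rdflib and under the assumption that plain SKOS broader links suffice (the ISO Extension to SKOS adds richer semantics), of a geographic hierarchy on a dimension's code list; all URIs are invented:

<pre>
# Sketch: a geographic dimension hierarchy expressed with plain SKOS.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/geo/")

g = Graph()
g.bind("skos", SKOS)
g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
for area in (EX.EU, EX.BE, EX.Brussels):
    g.add((area, RDF.type, SKOS.Concept))
    g.add((area, SKOS.inScheme, EX.scheme))
g.add((EX.BE, SKOS.broader, EX.EU))          # Belgium rolls up to the EU
g.add((EX.Brussels, SKOS.broader, EX.BE))    # Brussels rolls up to Belgium
print(g.serialize(format="turtle"))
</pre>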

====Expressing aggregation relationships in Data Cube====

* This requires some way to represent aggregation functions.
* This requires information about:
** levels
** hierarchies
** relationships between members of a dimension
** aggregation functions of a measure
* Some of this requirement is met by the work on ISO Extension to SKOS [2].
* (UC 0, 1, 2, 3, 9)
* Possibly, it would be good to be able to define several aggregation functions for the same measure.


====Scale - how to publish large amounts of statistical data====

* Publishers that are constrained by the size of the statistics they publish shall have ways to reduce the size or remove redundant information.
* Scalability issues can arise both from people's effort and from the performance of applications.
* (UC 1, 2, 3, 4)

====Compliance levels or criteria for well-formedness====

* The formal RDF Data Cube vocabulary expresses few formal semantic constraints. Furthermore, in RDF the omission of otherwise-expected properties on resources does not lead to any formal inconsistencies.

However, to build reliable software that processes Data Cubes, data consumers need to know what assumptions they can make about a dataset purporting to be a Data Cube.

* What ''well-formedness'' criteria should Data Cube publishers conform to?
* Specific areas which may need explicit clarification in the well-formedness criteria include (but may not be limited to):
** use of the abbreviated data layout based on attachment levels
** use of qb:Slice (completeness; is an explicit qb:SliceKey required?)
** avoiding mixing two approaches to handling multiple measures
** optional triples (e.g. type triples)
* (UC 1-11)

====Declaring relations between Cubes====

* In some situations statistical data sets are used to derive further datasets. Should Data Cube be able to explicitly convey these relationships?
* A simple specific use case is that the Welsh Assembly Government publishes a variety of population datasets broken down in different ways. For many uses, population broken down by some category (e.g. ethnicity) is expressed as a percentage. Separate datasets give the actual counts per category and aggregate counts. In such cases it is common to talk about the denominator (often DENOM), which is the aggregate count against which the percentages can be interpreted.
* Should Data Cube support explicit declaration of such relationships, either between separate qb:DataSets or between measures within a single qb:DataSet (e.g. ex:populationCount and ex:populationPercent)? A sketch of one possible representation follows this list.
* If so, should that be scoped to simple, common relationships like DENOM, or allow expression of arbitrary mathematical relations?
* Note that there has been some work towards this within the SDMX community, as indicated here: http://groups.google.com/group/publishing-statistical-data/msg/b3fd023d8c33561d
* (UC 6)
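
A sketch of one possible representation follows, in Python with rdflib. The ex:denominator property is invented; it stands for precisely the kind of term this requirement asks QB to standardize:

<pre>
# Sketch: one possible way to declare a DENOM-style relationship between
# a percentage measure and its count measure. ex:denominator is invented.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.populationPercent, RDF.type, QB.MeasureProperty))
g.add((EX.populationCount, RDF.type, QB.MeasureProperty))
# Invented property: the percentage is to be read against this denominator.
g.add((EX.populationPercent, EX.denominator, EX.populationCount))
print(g.serialize(format="turtle"))
</pre>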

===Consuming use cases===

====Finding statistical data====

* Finding statistical data should be possible, perhaps through an authoritative service.
* (UC 5)

====Retrieval of fine-grained statistics====

* Query formulation and execution mechanisms.
* It should be possible to use SPARQL to query for fine-grained statistics.
* (UC 7-10)

====Understanding - end-user consumption of statistical data====

* Presentation, visualization.
* (UC 7-10)

====Comparing and trusting statistics====

* Finding what is common to the statistics of two or more datasets.
* This requirement also deals with information quality - assessing statistical datasets.
* Trust - making trust judgements on statistical data.
* (UC 5, 6, 9)

====Integration of statistics====

* Interoperability - combining statistics produced by multiple different systems.
* It should be possible to combine two statistics that contain related data and were possibly published independently.
* It should be possible to implement value conversions.
* Required by (UC 1, 3, 4, 7, 9, 10)

====Scale - how to consume large amounts of statistical data====

* Consumers that want to access large amounts of statistical data need guidance.
* (UC 7, 9)

====Internal format for other formats====

* It should be possible to transform QB data into data formats, such as XBRL, that are required by certain institutions.
* (UC 10)

====Dealing with imperfect statistics====

* Imperfections - reasoning about statistical data that is not complete or correct.
* (UC 7-10)

==References==

* [Fowler, 1997] Martin Fowler. ''Analysis Patterns: Reusable Object Models''. Addison-Wesley Professional, 1997. 384 pp. ISBN-10: 0201895420, ISBN-13: 9780201895421.