
Smart Descriptions & Smarter Vocabularies (SDSVoc)

30 November - 1 December 2016, CWI, Amsterdam Science Park

Vocabulary Management · Application Profiles · Negotiation by Profile · Extending DCAT

Agenda

Communication

Wi-Fi SSID: Amsterdam Science Park

Please use the hashtag #SDSvoc when tweeting about the event.

A scribe will be taking notes throughout each session using W3C's IRC system. You can follow the conversation, and add to or correct the minutes, by joining us in room #sdsvoc.

Raw Minutes: Day 1, Day 2.

Wednesday 30th November

- Coffee, Registration

Get your badge, get a coffee, get ready for the next two days.

- Workshop Opening

Jacco van Ossenbruggen, CWI

Phil Archer, W3C

Keith Jeffery, VRE4EIC project: Data descriptions in research infrastructures and virtual research environments, including:

Why CERIF?

Keith Jeffery and Anne Asserson [paper (PDF)] [slides] [notes]

The VRE4EIC project uses CERIF as its metadata catalog. It is reasonable to question why.

In the context of e-Infrastructures, e-Research Infrastructures and Virtual Research Environments, metadata is used to virtualise complexity and heterogeneity. This paper outlines the problems with other metadata standards and proposes a solution based on interoperability using CERIF.

- Dataset Description Models

Session chair: Keith Jeffery, Scribe: Phil [notes]

Timing: 4 x 15 minutes plus 1 x 5 minutes, allowing 25 minutes' discussion

Different communities describe their datasets in different ways. In this session, we look at the approaches taken in four such communities.

The HCLS Community Profile: Describing Datasets, Versions, and Distributions

Michel Dumontier, Alasdair Gray and M. Scott Marshall [paper (PDF)] [slides]

Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high-quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting HCLS community profile covers elements of description, identification, attribution, versioning, provenance, and content summarization. The HCLS community profile reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine-readable descriptions of versioned datasets.

The goal of this presentation is to give an overview of the HCLS Community Profile and explain how it extends and builds upon other approaches.
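As a flavour of what such a description looks like in practice, the sketch below builds a minimal dataset record with rdflib. The property selection is illustrative only and should not be read as the normative profile; the dataset URI is hypothetical.

```python
# Illustrative sketch only: a minimal dataset description of the kind the
# profile standardises, built with rdflib. Property choices are assumptions
# for illustration, not the normative HCLS profile.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
PAV = Namespace("http://purl.org/pav/")

g = Graph()
ds = URIRef("http://example.org/dataset/chembl")  # hypothetical URI
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("ChEMBL", lang="en")))
g.add((ds, DCTERMS.description, Literal("Bioactive small molecules.", lang="en")))
g.add((ds, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by-sa/3.0/")))
g.add((ds, PAV.version, Literal("22")))  # versioning, which the profile covers

print(g.serialize(format="turtle"))
```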

Using DCAT-AP for research data

Andrea Perego, Anders Friis-Christensen, Lorenzino Vaccari and Chrisa Tsinaraki [paper] [slides]

This paper outlines a set of cross-domain requirements for the documentation of scientific data, identified during the development of the corporate data catalogue of the European Commission's Joint Research Centre (JRC). In particular, we illustrate how we have extended the DCAT application profile for European data portals (DCAT-AP) to accommodate requirements for scientific datasets, and we discuss a number of issues still to be addressed.

The Metadata Ecosystem of DataID

Markus Freudenberg, Martin Brümmer and Sebastian Hellmann [paper (PDF)] [slides]

The rapid increase of data produced in a data-centric economy emphasises the need for rich metadata descriptions of datasets, covering many domains and scenarios. While there are multiple metadata formats describing datasets for specific purposes, exchanging metadata between them is often a difficult endeavour. More general approaches for domain-independent descriptions often lack the precision needed in many domain-specific use cases.

This paper introduces the multilayer ontology of DataID, providing semantically rich metadata for complex datasets. In particular, we focus on the extensibility of its core model and the interoperability with foreign ontologies and other metadata formats.

Towards a Common Description Vocabulary for Industrial Datasets

Christian Mader, Steffen Lohmann and Sören Auer [paper (PDF)] [slides]

In this position paper, we motivate the need for establishing a common vocabulary for industrial datasets based on Linked Data technologies. We report on our progress in developing such a vocabulary within the Industrial Data Space project and provide insights into the general setting, the goals and our proposed methodology.

Loupe - An RDF Dataset Description Model for Expressing Vocabulary Usage Patterns

Nandana Mihindukulasooriya, Raúl García-Castro and Asunción Gómez-Pérez [paper (PDF)] [slides]

This position paper discusses the need for extending dataset descriptions, such as DCAT, in the case of RDF data to include comprehensive vocabulary usage and triple pattern information (for instance, as a DCAT profile for vocabulary usage and triple patterns in RDF data). As the basis of the discussion, the paper presents four use cases whose requirements cannot be easily fulfilled by current RDF dataset descriptions. To this end, we propose an extended RDF dataset description vocabulary, the Loupe model, which aims to capture an extensive set of vocabulary usage statistics and triple pattern information to satisfy such use cases.

- Coffee

- What's Wrong With DCAT?

Session chair: Deirdre Lee, Scribe: Øystein [notes]

Timing: 20 minutes for the first talk then 5 mins for each of the others, followed by panel discussion with all speakers in the session plus (DCAT editor) Fadi Maali

Several people take aim at W3C's Data Catalog Vocabulary.

DCAT Application Profile for Data Portals in Europe

Brecht Wyns, Makx Dekkers, Nikolaos Loutas, Vassilios Peristeras and Athanasios Karalopoulos [paper (PDF)] [slides]

The DCAT Application Profile for data portals in Europe (DCAT-AP) is a specification based on W3C’s Data Catalogue vocabulary (DCAT) for describing public sector datasets in Europe. The DCAT-AP was developed by a working group of experts following an open collaborative process. Since its initial development in 2013, this process has continued to further develop the specification, leading to the publication of DCAT-AP v1.1 in October 2015.

During the initial development of DCAT-AP in 2013 as well as in the process of revising the specification in 2015, a need was identified to provide more guidance for the application of the profile in practical situations, e.g. to identify existing practices or to formulate advice for implementers who need to map their local metadata to DCAT-AP-compliant metadata. Therefore, a DCAT-AP working group in the scope of the ISA² Programme of the European Commission developed implementation guidelines based on contributions from the implementer community. The ISA² Programme and the working group gathered and documented implementation challenges and proposed possible solutions and work-arounds.

The activity that led to the guidelines raised several issues that could not be resolved. A main issue, which incited a lot of discussion without reaching a conclusion, was how data can be accessed through services rather than as a fixed file. Another issue, for which a workaround was provided through the guidelines but for which no general consensus could be reached without impacting the DCAT specification itself, relates to the expression of relationships between datasets. These and other technical issues are the subject of current activities to further develop guidelines for DCAT-AP and to revise the DCAT-AP specification.

DCAT For a Long Term Global Future

Andreas Kuckartz [paper] [slides]

Talking points

  • avoid profiles
  • DCAT-AP and national sub-profiles can partially be replaced by an improved DCAT
  • use DCAT and metadata to promote DCAT
  • long term view is important for preservation

Using DCAT for Development Data Hub

Beata Lisowska [paper (PDF)] [slides]

The Development Data Hub is an example of one of many visualisation tools available on the web that aim to make data more accessible, easy to disaggregate and comparable in an intuitive way. As more such data tools become available, and as the World Wide Web Consortium (W3C) argues that data published on the Web should always be coupled with metadata, this paper tests how easy it is to use one of the most widely used metadata standards (the Data Catalog Vocabulary, DCAT) for such a purpose.

DCAT is a well-documented, flexible and practical metadata standard that is grounded in the solid foundations of Dublin Core. DCAT is an elegant standard to use for datasets published by a single source; however, it becomes rather messy when applied to the Development Data Hub. As it turns out, retrieving the right datasets that correspond to the visualisation is not a problem. This can easily be addressed by assigning an individual identifier to each data source. In that case, it is possible to treat the Data Hub's database as one DCAT catalogue. That would require one DCAT instance with 172 corresponding datasets.

The problem arises when the complexity of the data sources is considered, which is the case for the majority of data sources relevant to the development sector. For example, adult literacy rate, one of the World Development Indicators, can be disaggregated by country, by year, and by how or by whom the data was collected. In that case, each data source in the database should be treated as a DCAT instance. It seems straightforward until we look at the number of data sources the Development Data Hub uses. As the application of any metadata standard involves a certain degree of manual curation, in this example 172 DCAT instances would have to be created, one per data source, each describing "n" datasets, where "n" corresponds to the number of fields in a given data source.
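A hedged sketch of the first option above, one catalogue record for the whole Data Hub with one dcat:Dataset per data source, might look as follows (all URIs and identifiers are invented for illustration):

```python
# Sketch of the 'one catalogue' option discussed above: a single
# dcat:Catalog for the whole Data Hub, one dcat:Dataset per data source.
# All URIs and identifiers here are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
catalog = URIRef("http://example.org/datahub")
g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCTERMS.title, Literal("Development Data Hub")))

# In practice this loop would run over all 172 data sources.
for source_id in ["wdi-adult-literacy", "oda-flows"]:
    ds = URIRef("http://example.org/datahub/" + source_id)
    g.add((ds, RDF.type, DCAT.Dataset))
    g.add((ds, DCTERMS.identifier, Literal(source_id)))
    g.add((catalog, DCAT.dataset, ds))
```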

Data producers that provide the most interesting data for development are usually based in developing countries with limited technical and staff capacity. How can we make sure that these data providers are not deterred from complementing published data with metadata because of how complex this task can be?

Metadata for business and open data

Jeroen Baltussen [paper (PDF)] [slides]

Netherlands Enterprise Agency (RVO.nl) encourages entrepreneurs in sustainable, agrarian, innovative and international business.

Metadata for Business

RVO.nl wants to use its data internally (and externally) in a more integrated way. To achieve this, the data, which is mostly locked in applications, has to be opened and published. One of the first steps is to describe the data to make it suitable for use. With the right meta-vocabularies this step can be taken. Most meta-vocabularies, however, are focused on quality aspects and less on other aspects of the dataset, such as its potential for re-use, the legal framework and the purpose for which the data was created. The challenge is to choose the right metadata models and also make it possible to connect these models with the outside world (other agencies and e-government initiatives in the Netherlands and Europe). In this workshop we want to discuss how to use meta-models like DCAT, INSPIRE and PROV for this goal, and what the gaps are in the meta-models for an organisation connected with the outside world.

Open data

In addition, RVO.nl and the government as a whole have strong goals to publish open data (both alphanumeric and spatial) externally, with the aim of stimulating the data-driven economy. The data has to be fit for purpose, fit for purchase (the right quality) and fit for use (the right form). All these aspects have to be described in the metadata. Again, we want to discuss which metadata models are fit for use and how they connect with other e-government programmes and standards.

Panel

All speakers in this session plus:

  • Dom Fripp, JISC, UK Research Data Discovery Service and the Research Data Shared Service
  • Fadi Maali, SAP, DCAT editor

- Lunch

A cold buffet lunch is provided, courtesy of the VRE4EIC project.

- Time and Space

Session chair: Bart van Leeuwen, Scribe: Jacco [notes]

Timing: 15 minutes x 2 plus 5 mins + panel

How can data be described so that its relevance to a time and place can be discovered? How should the data be shared?

Spatial data on the web

Linda van den Brink, Ine de Visser [paper (PDF)] [slides]

Geonovum sees the web as an important dissemination channel and wants geospatial data to be accessible to web developers with no specific geospatial expertise. To explore the possibilities of making spatial data a useful, integral and common part of the web, Geonovum organised a ‘testbed’, an experimental project in which several market parties cooperated to make spatial data findable through search engines, and usable for web programmers. The paper describes the background and findings of this testbed.

GeoDCAT-AP: Use cases and open issues

Andrea Perego, Anders Friis-Christensen, Michael Lutz [paper] [slides]

This paper illustrates some issues and use cases identified during the design and implementation of GeoDCAT-AP, a metadata profile aiming to provide a representation of geospatial metadata compliant with the DCAT application profile for European data portals (DCAT-AP). In particular, the paper focuses on those issues that may have a possible relevance also outside the geospatial domain, covering topics concerning metadata profile-based negotiation, publishing metadata on the Web, representing API-based data access in metadata, and approaches to modelling data quality.

Applying DCAT vocabulary on RDF datasets

Ghislain Auguste Atemezing [paper (PDF)]

In this paper, we show the implementation of DCAT on geospatial data published in RDF for IGN France. We consider an endpoint as a catalogue of RDF datasets, where each named graph can be versioned and thus described with DCAT. We highlight the choices made to manage the versions of data that is frequently updated in a public endpoint, without breaking consumption by users of the data.
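One way to picture the arrangement the abstract describes, a named graph per dataset version behind a single endpoint, is sketched below with rdflib. This is our own assumption-laden illustration, not the paper's actual implementation, and all URIs are hypothetical.

```python
# Sketch: one named graph per dataset version, plus a stable catalogue
# graph that records the available versions. Not the paper's code.
from rdflib import Dataset, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

store = Dataset()

# Each version of the data lives in its own named graph ...
v1 = store.graph(URIRef("http://data.example.org/roads/version/2016-01"))
v2 = store.graph(URIRef("http://data.example.org/roads/version/2016-06"))

# ... while a catalogue graph keeps a stable entry point for consumers,
# so updates never break the URIs they dereference.
meta = store.graph(URIRef("http://data.example.org/catalog"))
roads = URIRef("http://data.example.org/roads")
meta.add((roads, RDF.type, DCAT.Dataset))
meta.add((roads, DCTERMS.hasVersion, v1.identifier))
meta.add((roads, DCTERMS.hasVersion, v2.identifier))
```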

Panel

All speakers in this session plus:

  • Daniele Bailo, INGV
  • Otakar Čerba, University of West Bohemia

- Coffee

- Searching for data

Session chair: Kevin Ashley, Scribe: Deirdre [notes]

Timing: 15 minutes x 2 plus 4 x 5 mins + panel

How can we help users find what they're looking for?

The Public Sector DNA on the web: semantically marking up government portals.

Raf Buyle, Laurens De Vocht, Dieter De Paepe, Mathias Van Compernolle, Geraldine Nolf, Ziggy Vanlishout, Bjorn De Vidts, Erik Mannens and Peter Mechant [paper (PDF)] [slides]

Base registries contain core public sector data. They are fundamental building blocks in supporting interaction between government and the private sector. To enable the private sector to discover, adopt and use information from base registries (e.g. addresses of organizations and public services), the government needs a distribution model. Therefore, the Flemish government is working on a technical strategy to add markup to government portals so as to embed their 'DNA', semantic annotations, in third-party private sector platforms, to dissolve the existing governmental silos and to provide better public services. In this context, this paper reviews a potential strategy to 'open up' base registries that combines the best of both worlds: bridging between schema.org and the European ISA Core vocabularies.

CivicOS: Governance & Campaigning Data Standard

Dmytro Potiekhin and Eugenia Kuznetsova [paper (PDF)]

More and more websites embed structured data describing, for instance, products, people, organizations, places, events, and procedures into their HTML pages using markup formats such as RDFa, Microdata and Microformats. As of November 2015, about 30% of HTML pages use structured data formats such as RDFa, Microdata, embedded JSON-LD and Microformats. This is 541 million HTML pages out of the 1.77 billion pages contained in the crawl done by the WebDataCommons project.

Although initially used to improve the searchability of web pages, markup standards can also enable easy and reliable data integration solutions, which will increase the transparency of governance at the local and national levels and the availability of user-oriented services. Development of the existing commercial data description standards will also help the efficient reuse of governance-related data.

Although there are lots of great initiatives to describe governance-related data, democracy-related data is still poorly structured. With the development of safe authorization and other tools by projects such as DemocracyOS, Code for America and others, civic participation will become more reliable. However, the key bottleneck in the development of information technologies for democracy will be the processing of governance data and procedures, due to the lack of:

  1. the comprehensive democracy data markup standard describing nonviolent action and helping activists of different campaigns and countries coordinate their learning and actions across issue areas and continents;
  2. smooth interoperability of the existing and new democracy and transparency schemas and, therefore, services based on them.

This project proposes the development of an XML-based civic governance data description standard and a democracy data exchange platform and civic action tool, to be voluntarily and internationally used by campaigners, experts, data and service providers in order to make nonviolent actions and governance services more efficient, interoperable and accessible. The proposed standard can be developed as a set of governance-related extensions of schema.org, a collaborative community activity with a mission to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond. Even before being approved as part of schema.org, such extensions can be used for the development and integration of new services.

Several Ukrainian and Serbian NGOs are partnering in CivicOS.net, an effort to develop an open, interoperable civic campaigning and governance data description schema to be voluntarily used in online projects and mobile applications, such as national parliaments and local councils, voter education and parallel vote tabulation projects, nonviolent actions and amending existing legal systems.

How do we search for data? Towards user-driven dataset descriptions

Emilia Kacprzak, Laura Koesten, Luis-Daniel Ibáñez and Elena Simperl [paper] [slides]

We propose to the workshop the problem of understanding how people search for data in current portals and search engines. Our ultimate goal is to design and implement a dataset search engine, where describing datasets in the most appropriate way is critical. We argue that to converge on a description vocabulary optimal for dataset discovery and search engines, we need to understand the needs of actual data users and identify the key differences from other kinds of search and description, thus making the process more user-driven, instead of being completely driven by the datasets themselves.

CERN Analysis Preservation

Sünje Dallmeier-Tiessen, Artemis Lavasa, Tibor Šimko, Javier Delgado Fernández, Pamfilos Fokianos, Robin Dasler, Anxhela Dani, Annemarie Mattmann, Ioannis Tsanaktsidis, Anna Trzcinska and Diego Rodriguez Rodriguez [paper (PDF)] [slides]

The CERN Analysis Preservation Framework is a central platform for the four LHC collaborations at CERN. It was developed to address the need for long-term preservation of all the digital assets and associated knowledge in the data analysis process, in order to enable future reproducibility of research results. As the service continues to develop, new challenges arise; in this paper we present initial considerations for implementing a common data markup schema for all the information in the service, which is essential for enhanced search and discovery, and for supporting links with various other internal and external platforms.

DATS: dataset descriptions for data discovery in DataMed

Alejandra Gonzalez-Beltran, Philippe Rocca-Serra, Susanna-Assunta Sansone and the bioCADDIE Team [paper] [slides]

This paper introduces DATS, the DAta Tag Suite, a data description model designed and developed to describe datasets ingested into DataMed, a prototype for data discovery developed as part of the NIH Big Data to Knowledge (BD2K) bioCADDIE project. We want to share our experience with the Smart Descriptions & Smarter Vocabularies (SDSVoc) community in order to contribute to the discussion about the development and management of vocabularies for the description of datasets, as well as to learn from others at SDSVoc to feed back into the iterative development of DATS and DataMed.

In our presentation, we will explain the approach we followed in creating DATS, which combined the consideration of competency queries with an analysis of existing models and vocabularies for describing datasets, iterating to deliver the DATS model. We will also present our work on mapping DATS to schema.org, and proposals for required extensions.

Linked Data needs a Data Location Service

Richard Nagelmaeker [paper] [slides]

It's hard to find SPARQL endpoints on the internet, and it's even harder to find out what relevant data such endpoints might hold. There is also a discrepancy between the (URI) domain of a SPARQL endpoint and the domain of some of the subject URIs of the datasets behind such an endpoint.

Panel

All speakers in this session plus:

  • Dan Brickley, Google/schema.org

- Wine, canapés and continued discussion


Kindly sponsored by Informatie Vlaanderen

The mission of the Informatie Vlaanderen Agency is to develop a coherent information policy across the Flemish public sector and to support and help realize its transition to an information-driven administration.


Thursday 1st December

- Coffee

Get ready for day 2.

- Negotiation by profile

Session chair: Eric Prud'hommeaux, Scribe: Tessel [notes]

Timing: 15 minutes x 2 + panel

Potential methods for requesting data not just in a given format but according to a specific vocabulary or data model.

Your JSON is not my JSON – A case for more fine-grained content negotiation

Ruben Verborgh [paper] [slides]

Information resources can be expressed in different representations along many dimensions such as format, language, and time. Through content negotiation, http clients and servers can agree on which representation is most appropriate for a given piece of data. For instance, interactive clients typically indicate they prefer html, whereas automated clients would ask for JSON or RDF. However, labels such as “JSON” and “RDF” are insufficient to negotiate between the rich variety of possibilities offered by today’s languages and data models. This position paper argues that, despite widespread misuse, content negotiation remains the way forward. However, we need to extend it with more granular negotiation options in order to serve different current and future Web clients sustainably.
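To make the limitation concrete (our own illustration, not taken from the paper): a client asking for "JSON-LD" can be served a schema.org description or a DCAT one with equal validity, so the media type alone cannot express which model is wanted. The endpoint below is hypothetical.

```python
# Coarse-grained negotiation: "application/ld+json" is satisfied equally
# well by a schema.org description and a DCAT one, so this header alone
# cannot say which vocabulary the client actually needs.
import requests  # the endpoint below is hypothetical

response = requests.get(
    "https://example.org/dataset/42",
    headers={"Accept": "application/ld+json"},
)
# Whether the body follows schema.org, DCAT, or something else entirely
# is invisible at this level of negotiation.
print(response.headers.get("Content-Type"))
```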

An http Header for Metadata Schema Negotiation

Lars G. Svensson [paper] [slides]

This paper proposes two new http headers, "Accept-Schema" and "Schema", for negotiating entity representations that use different application profiles.
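A request/response exchange using the proposal might look as follows. The header names come from the paper; the profile URI and the server's behaviour are our assumptions.

```python
# Sketch of the proposed exchange; the endpoint is hypothetical.
import requests

response = requests.get(
    "https://example.org/dataset/42",
    headers={
        "Accept": "application/ld+json",
        # Ask for a representation conforming to a specific profile:
        "Accept-Schema": "http://data.europa.eu/r5r/",  # e.g. DCAT-AP
    },
)
# Per the proposal, the server would state the profile it actually used
# in a corresponding Schema response header.
print(response.headers.get("Schema"))
```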

Panel

All speakers in this session plus:

  • Antoine Isaac, Europeana/VU
  • Herbert Van de Sompel, Los Alamos National Laboratory/DANS

- Lightning Talks

Session chair: Carlo Meghini, Scribe: Andrea [notes]

Timing: 2 x 5 minutes plus Q&A, then 2 x 10 mins plus discussion

A series of lightning talks on topics related to the everyday running of portals and repositories.

Configuring the EntryScape platform to effectively support Application Profiles

Matthias Palmér and Hannes Ebner [paper] [slides]

Building web applications for every metadata standard is a daunting task. Writing applications for every application profile of a standard, for instance national or topical variations, is an even worse prospect. The EntryScape platform is built on the assumption that application profiles can be expressed as configurations for user interfaces oriented towards managing metadata. In this short paper we describe how this approach has been successfully realized. We also go into detail regarding the DCAT-AP recommendation and what we learned when implementing it in the EntryScape user interface. We present some generic conclusions regarding things to take into consideration when writing standards or application profiles, to minimize the risk that developers rely on guesswork or ignore parts that are unclear.

Duplicate Evaluation

Simon Dutkowski and Andreas Schramm [paper (PDF)] [slides]

Today, the world's open data ecosystem is organized in a hierarchical structure, where datasets and their metadata are usually published on leaf nodes. These nodes are in most cases local portals for specific regions, or specialized portals for a specific class or type of data, like geo or statistics portals. This arrangement potentially results in duplicate datasets in portals higher up the hierarchy. Publishers are often unsure where to publish, and in the end they simply publish their data twice: once on the local portal and additionally, for example, on the national geo portal. A national open data portal probably harvests both the local and the geo portal and, if no measures are taken, will end up containing the same data twice with slightly different metadata descriptions.

Another source of duplicates arises when datasets are harvested from several different portals that themselves harvest from one portal. The European Data Portal (EDP), which is relatively high in the harvesting hierarchy (if not at the top), is now facing the fact that there are a great many possible duplicates, even when harvested from single sources, as in most cases no other portal has a proven mechanism to avoid duplicates.
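The paper's own detection approach is not reproduced here; as a deliberately simplified stand-in, fingerprinting each harvested record on a few normalised metadata fields already surfaces the "same data, slightly different description" case as a collision.

```python
# A deliberately simplified stand-in, not the authors' method: fingerprint
# harvested records on normalised metadata fields to flag likely duplicates.
import hashlib

def fingerprint(record: dict) -> str:
    title = record.get("title", "").strip().lower()
    urls = sorted(u.strip().lower() for u in record.get("download_urls", []))
    key = "|".join([title] + urls)
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

harvested = [  # toy records standing in for harvested metadata
    {"id": "local:123", "title": "Road network", "download_urls": ["http://a/x.csv"]},
    {"id": "geo:999", "title": "Road Network ", "download_urls": ["http://a/x.csv"]},
]

seen = {}
for record in harvested:
    fp = fingerprint(record)
    if fp in seen:
        print("possible duplicate:", record["id"], "~", seen[fp])
    else:
        seen[fp] = record["id"]
```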

Challenges of mapping current CKAN metadata to DCAT

Sebastian Neumaier, Jürgen Umbrich and Axel Polleres [paper (PDF)] [slides]

This report describes our experiences mapping the metadata of CKAN-powered Open Data portals to the DCAT model. CKAN is the most prominent portal software framework used for publishing Open Data, deployed by several governmental portals including data.gov.uk and data.gov. We studied the actual usage of DCAT in 133 existing Open Data portals and report key findings.
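For readers unfamiliar with such mappings, the sketch below shows their general shape; the field and property choices are simplified assumptions of ours, not the paper's findings.

```python
# Sketch of a CKAN-package-to-DCAT translation. The fields used here
# (name, title, notes, resources) are standard CKAN package fields, but
# the property mapping is a simplified assumption.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

def ckan_package_to_dcat(pkg: dict, base: str) -> Graph:
    g = Graph()
    ds = URIRef(base + pkg["name"])
    g.add((ds, RDF.type, DCAT.Dataset))
    g.add((ds, DCTERMS.title, Literal(pkg["title"])))
    g.add((ds, DCTERMS.description, Literal(pkg.get("notes", ""))))
    # CKAN resources roughly correspond to dcat:Distributions.
    for res in pkg.get("resources", []):
        dist = URIRef(res["url"])
        g.add((dist, RDF.type, DCAT.Distribution))
        g.add((dist, DCAT.accessURL, dist))
        g.add((ds, DCAT.distribution, dist))
    return g
```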

Interoperability between metadata standards: a reference implementation for metadata catalogues

Geraldine Nolf [paper (PDF)] [slides]

To enable the public and private sectors to discover, adopt and reuse government information, administrations publish their data on data portals. The data is accompanied by structural metadata providing information about the datasets. Governments publish information from different domains, including geospatial data, open data, statistical data and archival information, which results in a wide variety of metadata standards. As these metadata standards are often not interoperable, it is a complex task for government administrations to publish their data in line with the regulations of the different data domains. This position paper reviews a potential strategy to simplify the management and reduce the costs of metadata portals.

Panel

All speakers in this session.

- Coffee

- Tooling

Session chair: Ronald Siebes, Scribe: DanBri [notes]

Timing: 15-20 minutes each plus Q&A

Effective management of vocabularies and profiles, and testing that data conforms to them, requires easy-to-use tooling.

Distributed Vocabulary Development with Version Control Systems

Lavdim Halilaj, Steffen Lohmann, Christian Mader and Sören Auer [paper] [slides]

Vocabularies are increasingly being developed on platforms for hosting version-controlled repositories, such as GitHub. However, these platforms lack important features that have proven useful in vocabulary development. We present VoCol, an integrated environment that supports the development of vocabularies using Version Control Systems. VoCol is based on a fundamental model of vocabulary development, consisting of the three core activities: modeling, population, and testing. It uses a loose coupling of validation, querying, analytics, visualization, and documentation generation components on top of a standard Git repository. All components, including the version-controlled repository, can be configured and replaced with little effort to cater for various use cases.
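The "testing" activity can be pictured as a validation step wired into the repository workflow. The following is our own minimal sketch, not VoCol's actual component: a check that parses every Turtle file and fails on syntax errors, suitable for running from a Git pre-commit hook.

```python
#!/usr/bin/env python3
# Minimal sketch of the 'testing' activity in a Git-based vocabulary
# workflow (an illustration, not VoCol's validation component): parse
# every Turtle file in the repository and fail on syntax errors.
import sys
from pathlib import Path

from rdflib import Graph

errors = 0
for ttl in Path(".").rglob("*.ttl"):
    try:
        Graph().parse(str(ttl), format="turtle")
    except Exception as exc:  # rdflib raises parser-specific exceptions
        print(f"{ttl}: {exc}")
        errors += 1

sys.exit(1 if errors else 0)  # a non-zero exit blocks the commit
```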

Validata: A tool for testing profile conformance

Alasdair Gray [paper] [slides]

Validata is an online web application for validating a dataset description expressed in RDF against a community profile expressed as a Shape Expression (ShEx). Additionally, it provides an API for programmatic access to the validator. Validata can be used with multiple community-agreed standards, e.g. DCAT, the HCLS community profile, or the Open PHACTS guidelines, and there are currently deployments to support each of these. Validata can easily be repurposed for different deployments by providing it with a new ShEx schema. The Validata code is available from GitHub.

Towards Executable Application Profiles for European vocabularies

Willem van Gemert and Eugeniu Costetchi [paper] [slides]

This paper describes current work at the Publications Office of the European Union in the area of automatic validation of controlled vocabularies using a SHACL implementation of application profiles (APs) such as SKOS-AP-EU. The same implementation serves as a source for generating human-readable AP documentation.
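The Publications Office implementation itself is not shown here; as a hedged illustration of the pattern, the sketch below validates data against a toy SHACL shape using the pyshacl library. The shape merely stands in for an application profile such as SKOS-AP-EU.

```python
# Illustration of SHACL-based application-profile validation with the
# pyshacl library; the shape is a toy stand-in, not SKOS-AP-EU itself.
from pyshacl import validate
from rdflib import Graph

shapes = Graph().parse(data="""
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/shapes#> .

ex:ConceptShape a sh:NodeShape ;
    sh:targetClass skos:Concept ;
    sh:property [ sh:path skos:prefLabel ; sh:minCount 1 ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
<http://example.org/c1> a skos:Concept .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: the concept has no skos:prefLabel
print(report)    # human-readable validation report
```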

- Lunch

A cold buffet lunch is provided, courtesy of the VRE4EIC project.

- Show Me The Way

Session chair: Jacco van Ossenbruggen, Scribe: Phil [notes]

This session will attempt to draw together threads from the whole workshop and explore what further efforts, including W3C standardization, may be required.

Discovering Open Data Standards

Deirdre Lee [paper] [slides]

Publishing data using open data standards has clear benefits for users. However, the challenge of implementing open data standards lies with the data publisher. The first, and often most difficult, question is: which open data standard should I use? This can be broken down further into (i) which open data standards are available and (ii) which open data standards are suitable for my needs?

At the recent International Open Data Conference 2016, Bill Anderson and James McKinney hosted a session on Open Data Standards. A number of those involved in ongoing initiatives attended the session, and many of the issues and ideas presented in this position paper were discussed there. The aims of the Smart Descriptions & Smarter Vocabularies (SDSVoc) workshop are well aligned with the challenges of discovering and profiling open data standards. I therefore propose to bring this discussion to SDSVoc, exploring possible solutions, including an open data standards catalogue and a metadata standard for describing open data standards.

Panel

  • Deirdre Lee, Derilinx, W3C Data on the Web Best Practices WG co-chair
  • Keith Jeffery, euroCRIS/VRE4EIC project
  • Makx Dekkers, independent consultant, DCAT-AP editor [paper (PDF)]
  • Andrea Perego, European Commission/JRC
  • Alasdair Gray, Heriot-Watt University
  • Artemis Lavasa, CERN
  • Rebecca Williams, GovEx, USA

- Bar Camp

Session chair: Hans Overbeek

This is where you can pitch your idea for a discussion. It can be on any related topic. Perhaps you want to develop ideas you've heard during the workshop, or you want to add a new discussion into the mix. You'll have 1 minute to pitch your idea after which people will vote with their feet and join the session that interests them most. You'll be asked to report on your session just before the close of the workshop. Similar sessions may naturally merge into one. Remember to appoint a scribe!

Please let Phil Archer know your idea for a bar camp at any time before or during the workshop.

Coffee will be available for you to take to your choice of bar camp.

- Wrap up

2-minute reports back from each bar camp

Final words:

  • Keith Jeffery, VRE4EIC [slides]
  • Phil Archer, W3C