Share-PSI 2.0

The network for innovation in European public sector information

Agenda: Maximising interoperability — core vocabularies, location-aware data and more

Please use the Hashtag #sharepsi

Wednesday 25 November 2015

- Coffee, Registration and Networking

Get your badge, get a coffee, work out how you're going to spend the next two days.

- Welcome & Opening Plenary

Auditorium 1 (0.035) [scribe: Emma, Reinars] [notes]

  • Welcome from Fraunhofer FOKUS Director: Prof. Dr. Manfred Hauswirth
  • Workshop Introduction: Phil Archer, W3C
  • The European Data Portal - Opening up Europe's Public Data, Wendy Carrara, Capgemini [paper] [slides]

    The launch of the European Data Portal is one of the key steps the European Commission is taking to support access to public data. The strategic objective of the European Data Portal project is to address the accessibility and the value of Open (Government) Data. Open Data refers to information collected, produced or paid for by public bodies (also referred to as PSI) and made freely available for re-use for any purpose. 16 November 2015 is the release date for the Beta version of the European Data Portal. It harvests the metadata of PSI available on public data portals across European countries. Portals can be national, regional, local or domain-specific. They cover the EU Member States, the EEA, countries involved in the EU's neighbourhood policy and Switzerland: 39 countries in total.

  • European Interoperability: The ISA Core Vocabularies, Athanasios Karalopoulos, European Commission [slides]
  • The Impact of Open Data in Public Sector, Heli Koski, The Research Institute of the Finnish Economy [paper] [slides]

    Various countries have implemented open (government) data strategies aiming to provide wide access to government data in machine-readable format such that it can be freely used, reused and redistributed by anyone. Reported ex ante evaluations have estimated that the potential benefits of opening up public data resources are substantial. Yet very little is known about the underlying economic and organisational mechanisms and implications of open data use at the organisational level or at the level of the economy as a whole. To the best of my knowledge, there is no reported comprehensive country-level ex post impact assessment of opening up government data. Currently, Finland is among the leading countries in opening up government data, and it also has the chance to be among the most advanced in the impact assessment of open data. The impacts of opening up government data can be divided into economic impacts and other social impacts. The prerequisite for assessing them is the careful development of a monitoring and evaluation model for opening up government data, as well as the systematic gathering of data for the impact assessment. Furthermore, the usability and usefulness of different public data resources for consumers, firms and public sector organisations can be assessed via the users' own evaluations. In addition, it is important to assess appropriate means to disseminate and promote efficient utilisation of information on best practices of open data re-use in different organisations.

Come To My Session! Don't know which parallel session to go to after the break? Each facilitator will have 60 seconds to describe his/her session.

- Coffee

- Parallel Sessions A

European Data Portal Track
Auditorium 1 (0.035)

The EDP: A Technical View

Facilitators: Wendy Carrara (Capgemini), Yury Glikman, Benjamin Dittwald, Simon Dutkowski (Fraunhofer FOKUS), Marc Kleemann, Udo Einspanier (con terra) [slides] [notes]

The session will begin with a quick demo and tour of the new European Data Portal's features, including how to start an open data journey, how to ensure your metadata can be harvested by the portal, and a full suite of eLearning modules addressing open data. The developers will demonstrate the portal components, present the technical details and the lessons learned from its realisation.

The portal is the first official Open Data portal implementing the new DCAT Application Profile (DCAT-AP) specification. Harvesting metadata from the heterogeneous open data portals of the 28 EU Member States and 11 other European countries, and mapping it to DCAT-AP, is one of the key tasks in the European Data Portal project. The project team will present the tool developed for this purpose and provide insights into the harvesting process and its requirements for the source data portals.
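
The kind of mapping involved can be sketched roughly as follows. This is a hedged illustration only, not the project's actual harvester: the CKAN-style field names and the example record are assumptions made for the sketch.

```python
# Hypothetical sketch: rendering a CKAN-style metadata record as a
# minimal DCAT-AP dataset description in Turtle. The field mapping and
# example record are illustrative, not the EDP's actual harvester code.

CKAN_TO_DCAT = {
    "title": "dct:title",
    "notes": "dct:description",
    "license_id": "dct:license",
}

def ckan_to_dcatap(record, dataset_uri):
    """Render a CKAN-style dict as a minimal DCAT-AP Turtle snippet."""
    lines = [
        "@prefix dcat: <http://www.w3.org/ns/dcat#> .",
        "@prefix dct:  <http://purl.org/dc/terms/> .",
        "",
        f"<{dataset_uri}> a dcat:Dataset ;",
    ]
    for ckan_key, dcat_prop in CKAN_TO_DCAT.items():
        value = record.get(ckan_key)
        if value:
            escaped = value.replace('"', '\\"')
            lines.append(f'    {dcat_prop} "{escaped}" ;')
    # Turn the trailing " ;" of the final line into the closing " ."
    lines[-1] = lines[-1][:-2] + " ."
    return "\n".join(lines)

record = {"title": "Air quality measurements", "notes": "Hourly NO2 readings."}
print(ckan_to_dcatap(record, "http://example.org/dataset/air-quality"))
```

A real harvester must additionally handle multilingual literals, controlled vocabularies and resources (distributions, publishers) rather than plain string values.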

GeoDCAT-AP is an extension of DCAT-AP for geospatial metadata encompassing the important INSPIRE and ISO 19115 standards. It is important to note that GeoDCAT-AP is not meant to replace these well-established standards, but provides an additional RDF-syntax binding and therefore a common way to transform the vast amount of existing geospatial metadata in Europe into a DCAT-AP compliant form. This enables integration of this metadata into the Linked Open Data world and general data portals including the EDP. For harvesting geo portals, an adapter was developed that implements the GeoDCAT-AP bindings. The adapter uses common protocols like CSW or OpenSearch Geo to harvest metadata from geo portals across Europe. Currently the beta version of the European Data Portal harvests 35 geo portals from 30 different European states. During implementation of the GeoDCAT-AP bindings, several issues were identified when mapping geospatial metadata to DCAT-AP. Most of these issues are caused by the fact that INSPIRE/ISO metadata is usually self-contained, while RDF-based data uses URIs to reference other, sometimes external, resources. But there are also other problems, e.g. it is not clear how to specify that a geospatial dataset can be accessed by a standardized geo service interface like WMS or WFS. Solving these problems would have to start with fixing deficiencies in the current guidelines for creating INSPIRE and ISO metadata.

Location Track
Room 0.019

The Importance of Location

Facilitators: Athina Trakas (OGC) & Arnulf Christl (Megaspatial) Scribe: Stuart Lester [notes]

Location is an integral part of public information. We typically know it in the form of maps but location data has a lot more potential than "just" maps. Location information puts data into a spatial context (what is where!). Interestingly, location information in many cases is the only link between two completely independent datasets. Linking two datasets through location information allows the discovery of otherwise completely opaque relations. The problem is that location information is fuzzy, often badly maintained and not an integral part of most alphanumeric data models.

The session aims to highlight the importance and potential of location data. It also describes the technical parameters in a nutshell and how the OGC and W3C cooperate to improve core vocabularies.

Share-PSI Track
Auditorium 2 (Room 0.036)

Impact and Interoperability in Finland

Facilitators: Mikael Vakkari, Anne Kauhanen-Simanainen, Margit Suurhasko, Ministry of Finance. Scribe: Ira Alanko [notes]

This session includes the paper: From open data to the innovative utilisation of information - The final report of the Finnish Open Data Programme 2013–2015 [paper]

The session will begin with a report on the impact of the Finnish Open Data Programme 2013–2015, launched in the spring of 2013 in order to accelerate and coordinate the opening of public sector data resources. This will be followed by a look at four specific actions.

The public sector has at its disposal extensive data resources which could generate significant financial and social benefits if used more efficiently. Some major data resources have been made available, such as terrain and weather data, traffic and vehicle data, statistics, financial data, and cultural resources.

The Open Data Programme was based on extensive cooperation between ministries, government agencies and institutions, local government, research institutes and developer communities. Programme outputs include an open data and interoperability portal, Avoindata.fi, and an open data development environment, JulkICTLab. So as to harmonise the terms of use, the public administration recommendation JHS 189 ’Licence for use of open data’ was prepared. The Finnish Open Data Programme has contributed as organiser and partner to several open data events and functions. International cooperation includes participation in the EU’s Share PSI project.

The preliminary assessment prepared during the programme suggests that research into the impacts of open data is just beginning. Systematic follow-up and improved methodologies will be needed in the future.

This publication proposes further pathways for moving from the opening of data resources to data utilisation and data competence enhancement. All open data policies should be part of a more comprehensive data policy, the principles of digitalisation, and the data infrastructure.

Four specific actions are relevant to the Open Data Programme:

- Parallel Session A Reports

Auditorium 1 (0.035)

Brief (3 minute) summaries from each session, focusing on three questions:

  1. What X is the thing that should be done to publish or reuse PSI?
  2. Why does X facilitate the publication or reuse of PSI?
  3. How can one achieve X and how can you measure or test it?

And the best practices discussed.

- Lunch

- Parallel Sessions B

Auditorium 1 (0.035)

Come To My Session! Don't know which parallel session to go to after the break? Each facilitator will have 60 seconds in the main hall to describe his/her session.

European Data Portal Track
Auditorium 1 (Room 0.035)

The Role of the Portal

Facilitators: Wendy Carrara (Capgemini), Bernhard Krabina, Georg Hittmair. [slides] Scribe: Daniel [notes]

A portal is more than a catalogue of datasets. It can be a multi-functional platform, a place where potential re-users can request datasets, a place where datasets that exist but are not yet open can be listed and so on. This session combines the perspectives of the EDP with ideas from Austria on internal data monitoring, and from the PSI Alliance on what is important for potential business users of PSI.

In this session, Wendy Carrara will discuss the EDP's model of Portal Maturity, which measures the usability of a portal with regard to the availability of functionalities, the overall re-usability of data and the spread of data. In addition to these findings, recommendations are formulated based on common trends recorded across different Member States. This session will focus on portal usability, launch a discussion of the proposed recommendations, and identify further recommendations and best practices.

Bernhard Krabina from the Austrian Zentrum für Verwaltungsforschung will reflect on the obligation on European Union member states to make their publicly funded data available at zero or, at most, marginal cost, which leads to a need for a systematic approach to internal data monitoring. It is easy to find the first batch of suitable datasets to publish, but setting up an internal data catalogue becomes ever more important.

In the workshop we will discuss

  • organisational procedures for setting up an internal data catalogue: who is in charge, and what (meta)data is collected? Are the metrics proposed in the Open Government Implementation Model suitable for internal use, are they being used, and do they need improvement?
  • tools for collection: how is information about potential datasets collected, and what tools are used? As a potential best practice, the OGD Cockpit will be presented and discussed.
  • what kind of information has to be added according to the PSI Directive, e.g. marginal costs, legal information, etc.?
  • how transparent must an internal data catalogue be?

Georg Hittmair (PSI Alliance) notes that while open data platforms contain information about data that were released voluntarily by public sector bodies, not all portals collect the requests from PSI re-users according to the PSI Directive. Those requests show the real needs of the private sector and therefore could lead to improvements in the reuse of public sector information. [paper]

Location Track
Room 0.019

Location in the Real World

Facilitators: Athina Trakas (OGC) & Arnulf Christl (Megaspatial) Scribe: Stuart [notes]

The second session introduces three real-world location information datastores which follow completely different approaches. One of them is driven by a community (crowd-sourced), one is commercial and one is maintained by public authorities.

Smart Cities
Auditorium 2 (Room 0.036)

What is the role of standards for Smart Cities and how should they be created?

Facilitators: Hanna Niemi-Hugaerts, Muriel Foulonneau, Slim Turki. Scribe: Andras [notes]

This session includes:

  • Cities removing friction from open data driven business, Hanna Niemi-Hugaerts, Pekka Koponen, Forum Virium Helsinki [paper]
  • Data for Smart cities - data selection, data quality, and service reuse, Muriel Foulonneau, Slim Turki, LIST, [paper]

The session will discuss the ability of local communities to deploy e-government services or facilitate the take-up of data in existing apps for citizens and companies. Can we define which apps have been useful and successful elsewhere, and analyse the capacity of a city to become “smart” based on the availability of data resources? While data formats can be documented through DCAT, for instance, the data characteristics required by reusable apps also need to be documented. In this session we propose discussing the availability of apps that can be applied beyond the boundaries of the current environments in which they are used, and the data characteristics, including formats, granularity and licences, that are necessary.

But are standards helping us to take the next leap ahead or are they slowing down development? Agile, developer-friendly standards seem to be popping up like mushrooms after the rain: Open311, Open Contracting, Open Trails etc. Simultaneously, standardisation bodies like ISO, W3C and OGC are tackling the issue with a more official, and therefore slower, approach. Enterprise standards like XBRL, SIRI and GTFS are more generic but also more complex to implement and use. Which way should cities follow? And how should cities co-operate in finding the best solutions and picking the best common specifications for their open APIs and data models? Active co-operation would lead to clear benefits by creating a larger market for developers and enabling "roaming" of digital city services. Cities should also find an effective way to steer the development of future versions of the standards to meet the ever-changing needs of cities and react to constant change in the technical ecosystem. To ignite discussion, some recent initiatives that join the forces of cities to push standardisation will be presented briefly, e.g. CitySDK, the Finnish Six Cities Strategy, and the Open and Agile Smart Cities network (OASC).

- Parallel Session B Reports

Room TBC?

Brief (3 minute) summaries from each session, focusing on three questions:

  1. What X is the thing that should be done to publish or reuse PSI?
  2. Why does X facilitate the publication or reuse of PSI?
  3. How can one achieve X and how can you measure or test it?

And the best practices discussed.

- Coffee

- Plenary Talks

Auditorium 1 (0.035)

Facilitator: Noël Van Herreweghe

  • An Intelligent Fire Risk Monitor Based on Linked Open Data, Nicky van Oorschot, netage.nl [paper] [slides] [notes]

    Since the beginning of 2015 netage.nl has been working on a Linked Open Data use case within the Fire Department in the Netherlands. In this use case we are developing a self-service analytics platform where fire departments combine openly available datasets to calculate dwelling fire risks in various neighbourhoods, in line with forensic fire-related research. During a research period the application has been proven and verified. The ambition is to spread the application internationally among different countries. In addition to the explanation of the use case, the presentation will specifically address the location and geographic issues and challenges we encountered. We believe that more companies encounter the same issues and challenges, which could be dealt with more easily by aligning location and geographic classifications and ontologies. We would love to share our ideas during the presentation.

  • The Connecting Europe Facility Programme, Daniele Rizzi, European Commission [slides] [notes]

- Reaching Consensus

Auditorium 1 (0.035)

Facilitators: Nancy Routzouni & Peter Winstanley, respectively of the Greek and Scottish Governments [scribe: PhilA]

This highly interactive session will begin with Nancy Routzouni introducing the accepted and candidate Share-PSI Best Practices. She'll describe how they're being collected from the Share-PSI workshops before Peter Winstanley explains how the BPs can help authorities implementing the revised PSI Directive.

Then stand by for quick explanations of several of the BPs from their authors. 60" max presentation, then 30" to write down your thoughts and on to the next one. Please consider:

  1. Will you/have you already implemented this recommendation?
  2. Do you have a story to tell about this?
  3. Do you think it's seriously wrong?

The following list is subject to change but is indicative:

End of Day 1

The Share-PSI social event at the Classic Remise.

A view of the event area at the Classic Remise, showing a large hall with many diners

If you didn't book for the social event, TransforMap is running a fringe event: Federating Civic Data.

Thursday 26 November

- Coffee, Registration and Networking

Get a coffee, work out how you're going to spend the next day.

- Plenary Talks

Auditorium 1 (0.035)

Introduced by: Yury Glikman [scribe: Pekka] [notes]

  • Mobile positioning and statistical derivatives – The way forward? Aleš Veršič, Ministry of Public Administration, Slovenia [paper by Igor Kuzma] [slides] [video shown]

  • Core Public Service Vocabulary - The Italian Application Profile, Gabriele Ciasullo, Giorgia Lodi, Antonio Rotundo, AgID [paper] [slides]

    This paper introduces an on-going national initiative, carried out by the Agency for Digital Italy (AgID) in accordance with the relevant legislation, for the definition of the Italian catalogue of public services. The catalogue has three main objectives:

    1. it can be used to facilitate the discovery of public services by citizens and enterprises;
    2. it provides public administrations with a comprehensive platform through which to share best practices on services and to build a community that discusses and potentially re-uses those best practices; and
    3. it can be used by AgID itself in order to monitor the degree of standardization and digitalization of the services of the public sector, thus reporting to the political level for strategic decision fine-tuning purposes.

    The catalogue is defined in terms of metadata that contribute to the specification of the so-called Core Public Service Italian Application Profile. The metadata can be specified by public administrations (be they local or central) to represent their available, or under-development, online and physical public services. The core public service Italian application profile is defined through the use of core vocabularies, as released by the European Commission in collaboration with W3C. In particular, the paper presents the current preliminary data model of the Italian profile that is mainly based on the core public service vocabulary and its application profile, although other core vocabularies are considered (e.g., Core Location, Organization Ontology and Registered Organization Vocabulary).

  • Linked Data for SMEs, Lena Farid and Andreas Schramm, LinDA/Fraunhofer FOKUS [slides]

    Linked Data is currently an active research field, driven by visions and promises. New ideas, concepts, and tools keep emerging in quick succession. In contrast, relatively little activity is seen on aspects like ease of use and accessibility of tools for non-experts, which would promote Linked Data at SME level. This is a sign of the still-developing maturity of the field, and it is the motivation and starting point of the LinDA (“Linked Data Analytics”) project.

    In this project, various Linked Data tools have been designed and implemented, ranging from the renovation of public sector information and enterprise data to data analytics, together with an integrated workbench that couples these components. This workbench also provides user guidance through simple standard workflows. The whole workbench has been designed with ease of use in mind, ranking simplicity higher than feature completeness, so as to make it fit for SMEs.

    In this session, we will first give an overview of the LinDA workbench, and the typical workflows it is designed for. The components comprise data renovation and consumption tools. Second, we will examine two of its components, viz. the data renovation tools and the vocabulary repository, and their interaction in more detail. Here, renovation refers to conversion from various source formats (structured and semi-structured data) towards Linked Data (RDF) under some modest semantic enrichment. For the latter purpose, the vocabulary repository has been incorporated, and we explain how these components are employed under the supervision of the user. Furthermore, the LinDA workbench will show how external data (SPARQL) endpoints can be easily integrated in order to facilitate the exploration and interlinking of various private and open data sources.

  • Brief Introduction on the ARE3NA project, Andrea Perego, Joint Research Centre, The European Commission [slides]

Come To My Session! Don't know which parallel session to go to after the break? Each facilitator will have 60 seconds to describe his/her session.

- Coffee

- Parallel Sessions C

European Data Portal Track
Auditorium 1 (Room 0.035)

Implementing interoperability
for Data Portals in Europe

Facilitators: Nikolaos Loutas, Dietmar Gattwinkel Scribe: Johann [notes]

This session combines:

  • Implementing the DCAT Application Profile for Data Portals in Europe, Nikolaos Loutas, PwC, Makx Dekkers, AMI Consult [paper] [slides]
  • First Steps towards interoperability of key open data asset descriptors, Dietmar Gattwinkel, Konrad Abicht, René Pietzsch, Michael Martin [paper] [slides]
  • European Data Portal: lessons learned on the use of DCAT-AP, Simon Dutkowski, Fraunhofer FOKUS [slides]

This session aims to bring together implementers of the DCAT-AP and national profiles based on it, in order to gather implementation experience and feedback. Since the first publication of the DCAT-AP in 2013, many member states have implemented national application profiles based on the European profile. Earlier in 2015, a revision of the DCAT-AP was developed based on contributions from various Member States, the European Commission and independent experts. During this work, it became clear that it would be useful for implementers to have an overview of current practices and to share common challenges in mapping national practices to the exchange of dataset descriptions on the European level. Possible topics of discussion include:

  • the implementation of the DCAT-AP as the native data model of existing data portals;
  • implementation of national/regional variants of the DCAT-AP;
  • harmonisation of national/regional codelists with codelists recommended by the DCAT-AP;
  • implementation of importers/exporters for DCAT-AP conformant metadata;
  • implementation of DCAT-AP validation frameworks and services.
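
On the last point, DCAT-AP makes dct:title and dct:description mandatory for every dcat:Dataset, so a validation service must at least flag datasets missing them. A minimal sketch of that check (triples modelled as plain tuples for brevity; the example URIs are hypothetical, and a real validator would also check cardinalities, ranges and controlled vocabularies):

```python
# Illustrative sketch of the kind of check a DCAT-AP validation service
# performs: DCAT-AP declares dct:title and dct:description mandatory for
# every dcat:Dataset. Triples are modelled as (subject, predicate, object)
# tuples; a real validator would parse actual RDF.

MANDATORY = {"dct:title", "dct:description"}

def missing_mandatory(triples):
    """Return {dataset: missing mandatory properties} for each dcat:Dataset."""
    datasets = {s for s, p, o in triples
                if p == "rdf:type" and o == "dcat:Dataset"}
    report = {}
    for ds in datasets:
        present = {p for s, p, o in triples if s == ds}
        missing = MANDATORY - present
        if missing:
            report[ds] = missing
    return report

triples = [
    ("ex:ds1", "rdf:type", "dcat:Dataset"),
    ("ex:ds1", "dct:title", "Air quality"),
    ("ex:ds2", "rdf:type", "dcat:Dataset"),
    ("ex:ds2", "dct:title", "Traffic counts"),
    ("ex:ds2", "dct:description", "Hourly counts."),
]
print(missing_mandatory(triples))  # ex:ds1 lacks dct:description
```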

DCAT-AP is an attempt to solve the problem of cross-border and cross-domain interoperability. Governmental authorities and virtual communities such as datahub.io have published a large number of open datasets. Whilst this is a fundamentally positive development, one can observe many differences between them. Established vocabularies are often augmented with portal-specific metadata standards and published in different (local) languages.

Integrating and aggregating across portals entails a lot of effort. In this session we present examples of how the problems of interoperability and multilingualism could be addressed for key open data asset descriptors.

The European Data Portal adopted DCAT-AP as the basic concept for handling metadata. If all metadata were exchanged using DCAT-AP this would be an easy venture, but the EDP has to respect other de facto standards currently in use. Although the DCAT-AP specification tries to be clear and unambiguous, there are aspects that make it hard to use in conjunction with much more restrictive formats like CKAN JSON. The new release of DCAT-AP already addresses some of these interoperability issues successfully, but there are still challenges to be addressed.

Location Track
Room 0.019

INSPIRE in RDF

Facilitator: Andrea Perego, JRC Scribe: Athina [notes]

The INSPIRE Directive is putting in place an infrastructure and standards to help share geospatial data across Europe. The scope of the directive is broad, focussing on environmental policy but touching on diverse themes of interest to e-government such as addresses, public health, place-names, governmental services, utilities and transport networks. The default exchange format recommended for most INSPIRE data is the XML-based Geography Markup Language (GML). While GML is widely known within the geospatial information domain, many e-government applications and tools start to adopt ‘Linked Data’ to publish their data using RDF.

ARE3NA (A Reusable INSPIRE Reference Platform, Action 1.17 of the ISA Programme) has proposed a methodology for the creation of RDF vocabularies representing the INSPIRE data models and the transformation of INSPIRE data into RDF. These vocabularies and methodology need further testing and refinement through practical pilots that should also illustrate the benefit of reusing INSPIRE interoperability standards. The goal of the workshop will be to define the pilots that will further test the methodology and vocabularies. This will include outlining concrete applications that would benefit from linking between INSPIRE and other data, including statistics.

Share-PSI Track
Auditorium 2 (Room 0.036)

Tools for Everyone

Facilitators: Hannes Kiivet, Estonian Information Systems Authority, Sebastian Sklarß, ]init[, Lutz Rabe, KoSIT Scribe: Peter K [notes]

This session looks at two tools that help to manage public sector metadata:

  • Estonian Metadata Reference Architecture [description] [slides]
  • German XML for public administration “XÖV” tool chain in action [slides]

Estonia is a leader in providing its citizens with e-services. The central portal, eesti.ee, provides access to more than 800 basic public services. That wealth of services has clearly indicated the need for a more structured and methodical approach to national-level service portfolio management. In a wider context, a number of metadata collection needs have arisen, from incident reporting to data element discovery. In this particular context, several non-technical pilots have taken place, iterating various service management practices. The business aim of this pilot is to supply these with a sound, validated technical reference architecture.

The XÖV Zubehör is a German tool set for standard production. It covers the standard generation process from UML modelling through schema validation and generation to XSD/PDF production. The way and degree to which tools like the XGenerator, the InteropBrowser, the Genericoder or the XRepository support the process will be discussed. The XÖV Zubehör has been used for years in Germany to create and publish XML transport formats and interface descriptions for various registers (Person, Civil, Firearms) in a homogeneous way.

- Parallel Session C Reports

Auditorium 1 (0.035)

Brief (3 minute) summaries from each session, focusing on three questions:

  1. What X is the thing that should be done to publish or reuse PSI?
  2. Why does X facilitate the publication or reuse of PSI?
  3. How can one achieve X and how can you measure or test it?

And the best practices discussed.

- Lunch

- Parallel Sessions D

Auditorium 1 (0.035)

Come To My Session! Don't know which parallel session to go to after the break? Each facilitator will have 60 seconds in the main hall to describe his/her session.

Share-PSI 2.0 Track
Auditorium 2 (Room 0.036)

Publishing and Using Linked Open Data in Public Sector

Facilitators: Jan Kučera, Džiugas Tornau Scribe: Jan [notes]

This session focusses on Linked Data technologies and includes two papers:

  • Self-describing Fiscal Data, Jindřich Mynarz, Jakub Klímek and Jan Kučera, University of Economics, Prague, Czech Republic [paper] [slides]
  • Publishing Linked Data with reusable declarative templates, Martynas Jusevičius and Džiugas Tornau, Graphity/UABLD [paper] [slides]

Fiscal data released by public sector bodies looks like an impenetrable fog of accounting terms to most lay users without expertise in public finance. Relying on column labels to convey what data is about is a recipe for misinterpretation. Fiscal data tends to be poorly documented and to lack schemas that would guide its users. We propose publishing self-describing fiscal data to help resolve these issues. Machine-readable descriptions of data increase the degree to which data processing can be automated. Data descriptions can guide human re-users and improve their understanding of the described datasets. Self-describing data enables some processing without reaching for out-of-band information by reading documentation or contacting dataset maintainers. We describe two complementary approaches for self-describing data from the domain of public finance: a data model based on the RDF Data Cube Vocabulary, and the Fiscal Data Package, which is based on JSON descriptors.
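
The JSON-descriptor approach can be sketched in the spirit of the Fiscal Data Package mentioned above. This is a hypothetical illustration, not the actual Fiscal Data Package schema: the field names, types and the `unit` key are assumptions made for the sketch. The point is that the descriptor names and types each column, so re-users need not guess from labels.

```python
# Hypothetical sketch of a self-describing dataset: a JSON descriptor,
# in the spirit of the Fiscal Data Package, that documents each column
# of a fiscal CSV file. Schema details are illustrative assumptions.
import json

descriptor = {
    "name": "city-budget-2015",
    "resources": [{
        "path": "budget.csv",
        "schema": {"fields": [
            {"name": "year", "type": "integer"},
            {"name": "amount", "type": "number", "unit": "EUR"},
            {"name": "classification", "type": "string",
             "description": "Functional classification of expenditure"},
        ]},
    }],
}

# A consumer can now discover column semantics mechanically instead of
# guessing from the CSV header row.
print(json.dumps(descriptor, indent=2))
```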

If Linked Data itself presents barriers, the second part of the session might clear up some issues, as it will offer a declarative approach to defining “blueprints” for Linked Data: a vocabulary for Linked Data templates that can be shared and interpreted by different software systems, increasing interoperability by doing so. The vocabulary has been developed as part of Graphity, a declarative platform for data-driven Web application development. Whereas RDF and Linked Data solve interoperability at the data layer, Graphity extends the declarative approach into software development. It delivers cost efficiency and software interoperability for data publishers and software developers alike. We will start the session with a short case presentation. Open data from the Copenhagen municipality (geo data in CSV format) will be imported, converted to RDF, published as Linked Data, and analysed. The only tool used will be the Graphity Platform, to illustrate the viability and flexibility of Linked Data templates in real-world data management tasks.
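
The CSV-to-RDF step described above can be illustrated generically (this is not the Graphity Platform itself; the column names, namespace and data are invented for the sketch): each row becomes a resource and each column a property.

```python
# Generic sketch of converting tabular CSV data to RDF Turtle: one
# resource per row, one property per column. Namespace and example
# data are hypothetical; real conversions would use typed literals
# (e.g. geo coordinates as xsd:decimal) and proper URI minting.
import csv
import io

def csv_to_turtle(text, base="http://example.org/poi/"):
    """Convert CSV text to a naive Turtle serialisation."""
    rows = csv.DictReader(io.StringIO(text))
    out = ["@prefix ex: <http://example.org/ns#> .", ""]
    for i, row in enumerate(rows):
        out.append(f"<{base}{i}>")
        props = [f'    ex:{key} "{value}" ;' for key, value in row.items()]
        props[-1] = props[-1][:-2] + " ."  # close the final property with " ."
        out.extend(props)
    return "\n".join(out)

csv_text = "name,lat,lon\nCity Hall,55.676,12.568\n"
print(csv_to_turtle(csv_text))
```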

Share-PSI 2.0 Track
Auditorium 1 (Room 0.035)

Core Vocabularies and Grammar for Public Sector Information Management & Interoperability

Facilitators: Chris Harding (TOG), Yannis Charalabidis (UAEGEAN). Scribe: Chris [notes]

This session combines two papers:

  • Core Vocabularies and Grammar for Public Sector Information Interoperability, Chris Harding, The Open Group [paper] [slides]
  • Controlled Vocabularies and Metadata Sets for Public Sector Information Management, Yannis Charalabidis, [paper] [slides]

The sharing of public sector information requires common methods to support discovery and understanding of the discovered resources. This requires more than the use of core vocabularies, especially when sharing between public and private sectors, or across borders.

The Open Group is developing a common grammar, known as the Open Data Element Framework (O-DEF), which has a core vocabulary and a basic grammar for describing atomic units of data. This session will share the main lessons learned from this development and their application to public sector information, illustrated by a simple example of information use in smart cities.

In Greece, an ontology-based extended metadata set has been created that embraces public sector services, documents, XML Schemas, codelists, public bodies and information systems. The session will discuss experiences of application within the Greek Public Sector, as part of the National Interoperability Framework specification. Such a metadata framework is an attempt to formalise the automated exchange of information between various portals and registries and further assist the service transformation and simplification efforts.

Location Track
Room 0.019

Ontological Arguments

Facilitators: Herbert Schentz [slides], András Micsik [slides]. Scribe: Stuart [notes]

This session is about using and adapting existing ontologies, particularly relating to location, and includes the paper: Semantics for the "Long Term Ecological Researchers", Herbert Schentz, Johannes Peterseil & Michael Mirtl, Austrian Environment Agency [paper]

The session will begin with a description of the application of semantics for LTER-Europe, a community of researchers dealing with long-term ecosystem research. The needs of this community can be seen as representative of the environmental domain within research and public administration. Within the ALTER-Net project, a test of semantic data integration was carried out, in which dislocated and highly heterogeneous data were mapped to a common ontology, allowing seamless and homogeneous data integration. The test showed that, whilst this is feasible, many issues have to be overcome. One lesson learned from the test was that a single comprehensive, complete conceptual model for such a large domain cannot be established in a reasonable timeframe. The work has to be split into several steps, must make use of work already done, and needs a simple starting point.

Similarly, the Obuda Linked Open University Data (OLOUD) ontology, designed at the Obuda University of Budapest, collects useful information for students and lecturers in the form of Linked Data. We found many relevant schemas covering various pieces of the modelling space, including the ORG, Teach and OWL Time ontologies, schema.org, etc. None of them fitted the needs perfectly, and massaging them into a single usable ontology was a challenge. A specific goal was the inclusion of location information in the model, with support for in-house navigation. This extension to the OLOUD ontology was designed from scratch, as none of the previous modelling efforts were still maintained and open. The location model connects to the usual ontologies, such as GeoNames, and is capable of describing classrooms, hallways, stairs and points of interest in a university building.
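A location model of this kind can be pictured as a handful of RDF-style triples. The sketch below uses plain Python tuples rather than an RDF library, and all class and property names (the `ex:` and `loc:` terms) are invented for illustration; the actual OLOUD vocabulary will differ.

```python
# Illustrative triples in the spirit of the OLOUD location model.
# The ex: and loc: terms are invented for this sketch; the
# GeoNames identifier is a placeholder, not a real lookup.
triples = [
    ("ex:room-1-24", "rdf:type", "loc:Classroom"),
    ("ex:room-1-24", "loc:onFloor", "ex:floor-1"),
    ("ex:room-1-24", "loc:connectsTo", "ex:hallway-1a"),
    ("ex:hallway-1a", "loc:connectsTo", "ex:stairs-north"),
    # Linking out to a common geo vocabulary, as the abstract
    # describes connecting to ontologies such as GeoNames.
    ("ex:building-main", "owl:sameAs", "geonames:0000000"),
]

def connected(node, triples):
    """Return everything directly reachable via loc:connectsTo."""
    return [o for s, p, o in triples
            if s == node and p == "loc:connectsTo"]

# A simple in-house navigation step: from the classroom's hallway
# we can reach the north stairs.
print(connected("ex:hallway-1a", triples))
```

Chaining `connectsTo` links in this way is what makes in-house navigation possible on top of an otherwise descriptive ontology.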

- Coffee

- Parallel Session D Reports

Auditorium (0.035)

Brief (3 minute) summaries from each session, focusing on three questions:

  1. What X is the thing that should be done to publish or reuse PSI?
  2. Why does X facilitate the publication or reuse of PSI?
  3. How can one achieve X and how can you measure or test it?

And the best practices discussed.

- Short Talks with Pictures

Auditorium 1 (0.035) Moderator: Phil Archer [scribe: Hannes]

  • Government as a Developer - Challenges and Perils, André Lapa, AMA [slides] [notes]

AMA, Portugal's Agency for Administrative Modernization, found a way to circumvent the classic “where are the developers pressuring us to open data?” issue by tapping into the Government’s own app development agenda and identifying cases where closed data could be opened to the public. Two current projects illustrate this principle well: the Local Transparency Portal and the Citizen Map, which opened up data related to municipal transparency and the geolocation of all public services by “forcing” the responsible entities to free the data that was powering those apps. Of course, this raised several questions regarding data governance, interoperability between different public agents, and the disparity that arises from developers trying to compete with government resources in the use of open data. It is obviously not an ideal system, but it is one that has produced interesting results, such as raising the general level of data quality on the Portuguese open data portal.

  • An Extensible Architecture for an Ecosystem of Visualisation Web Components for Open Data, Gennaro Cordasco, Delfina Malandrino, Pina Palmieri, Andrea Petta, Donato Pirozzi,Vittorio Scarano, Luigi Serra, Carmine Spagnuolo, Luca Vicidomini [paper] [slides] [notes]

We present an architecture for open, extensible and modular Web Components for the visualisation of Open Data datasets. Datalets are Web Components that can easily be built and included in any standard HTML page, as well as used in other systems. The architecture is developed as part of the infrastructure needed to build a Social Platform for Open Data (SPOD) in the Horizon 2020 funded project ROUTE-TO-PA. We present the motivations, examples and a sketch of the architecture. The software is currently under development, at a very early stage, but is already available under an MIT open source licence.

- Bar Camp

Auditorium 1 (0.035)

Time keeper: Øystein Åsnes

Pitch your idea for a discussion on any topic in 60 seconds or less, then take your group to an available space. Remember to appoint a scribe. Please let Phil Archer know the title of your session as soon as convenient.

  1. Share-PSI Best Practices Round 2, Nancy Routzouni
  2. Volume, Variety, Velocity, Veracity: Irina Bolychevsky, W3C/BigDataEurope
  3. Open Government Data (OGD) Research Areas Taxonomy and the OGD Life Cycle, Harris Alexopoulos
  4. From Open Data Platforms for developers to Open Data platform for citizens, Yury Glikman
  5. The myths and realities of 5-star open data, Benjamin Cave, the ODI
  6. What do you want from CKAN?, Sebastian Moleski
  7. Native DCAT(-AP) Editor Validator Aggregator, Matthias Palmer
  8. Provenance of Civic Data, Jon Richter

- Bar Camp Reports

Auditorium 1 (0.035)

Facilitator: Øystein Åsnes

Followed by brief closing remarks from Yury Glikman (Fraunhofer FOKUS) and Phil Archer (W3C).

End of Workshop
