
Berlin/Papers


Sessions/tracks proposals:

1. Location Track (by Athina Trakas and Ingo Simonis)

Location is an integral part of public information. We typically know it in the form of maps, but location data has a lot more potential than "just" maps. Location information puts data into a spatial context (what is where!). Interestingly, location information is in many cases the only link between two completely independent data sets. Linking two datasets through location information makes it possible to discover otherwise completely opaque relations. The problem is that location information is fuzzy, often badly maintained and not an integral part of most alphanumeric data models.

S1.1 Session 1

The first session aims to highlight the importance and potential of location data. It also describes the technical parameters in a nutshell and how the OGC and W3C cooperate to improve core vocabularies.

S1.2 Session 2

The second session introduces three real-world location information datastores which follow completely different approaches. One of them is driven by a community (crowd-sourced), one is commercial and one is maintained by public authorities.


S2. Implementing interoperability (by Mikael Vakkari). This would include the following items/cases in the session/workshop:
S3. ARE3NA work on INSPIRE RDF

The problem/issue you are going to address

The INSPIRE Directive is putting in place an infrastructure and standards to help share geospatial data across Europe. The scope of the Directive is broad, focusing on environmental policy but touching on diverse themes of interest to e-government such as addresses, public health, place names, governmental services, utilities and transport networks. The default exchange format recommended for most INSPIRE data is the XML-based Geography Markup Language (GML). While GML is widely known within the geospatial information domain, many e-government applications and tools are starting to adopt Linked Data and publish their data as RDF. ARE3NA (A Reusable INSPIRE Reference Platform, Action 1.17 of the ISA Programme) has proposed a methodology for creating RDF vocabularies that represent the INSPIRE data models and for transforming INSPIRE data into RDF. These vocabularies and the methodology need further testing and refinement through practical pilots that should also illustrate the benefit of reusing INSPIRE interoperability standards.

The expected outcome of the workshop

The goal of the workshop will be to define the pilots that will further test the methodology and vocabularies. This will include outlining concrete applications that would benefit from linking INSPIRE data with other data, including statistics.
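As background for such pilots, here is a minimal, purely illustrative sketch of the kind of GML-to-RDF transformation they would exercise, written in Python with rdflib. The INSPIRE address namespace, property names and address values used below are hypothetical placeholders, not the vocabularies proposed by ARE3NA.

  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import RDF, RDFS

  # Hypothetical namespaces for illustration only; the ARE3NA methodology
  # would derive the actual vocabulary URIs from the INSPIRE data models.
  AD = Namespace("http://example.org/inspire/address#")
  EX = Namespace("http://example.org/id/address/")

  g = Graph()
  g.bind("ad", AD)

  # One address feature, as it might be extracted from an INSPIRE GML document.
  addr = EX["DE-10115-example-1"]
  g.add((addr, RDF.type, AD.Address))
  g.add((addr, AD.thoroughfare, Literal("Invalidenstraße", lang="de")))
  g.add((addr, AD.locatorDesignator, Literal("44")))
  g.add((addr, AD.postCode, Literal("10115")))
  g.add((addr, RDFS.label, Literal("Invalidenstraße 44, 10115 Berlin")))

  print(g.serialize(format="turtle"))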

The intended audience

• Experts in linked data, including those working on relevant projects for geospatial data
• e-Government application developers and users

Moderation methods

We intend to share background documents in the Share-PSI project wiki, including our draft guidelines. We intend to invite some people and would encourage them to submit there, too.

The format of the workshop is foreseen to involve:

• An introductory presentation, including the context of INSPIRE and the work in ARE3NA
• A few brief introductory presentations for initial pilot proposals
• An invitation for the audience to make elevator pitch statements for other possible pilots or additions to initial proposals
• Break-out groups
• Report back

Desired facilities

Large enough space to allow discussion in 3-4 groups or two rooms/spaces. Projector, 4x flip-board, markers, paper.

S4. W3C Data on the Web Best Practices (by Phil Archer)

In parallel with Share-PSI, a formal standard-setting working group has been operating at W3C. The Data on the Web Best Practices WG is chartered to create three deliverables: a Best Practice document and two vocabularies that can be used as a framework for providing information about the quality and usage of a dataset. The intention is to encourage the further development of data sharing on the Web and, as such, it has a lot in common with Share-PSI. However, there are important differences. First of all, the W3C is solely concerned with technical matters and not the broader policy issues that surround public sector information. Secondly, the working group is not focussed on a particular part of the world and so the existence of the European PSI Directive, whilst interesting and noteworthy, is not a guiding force.

With members from Brazil, the USA and elsewhere, the W3C working group is approaching the end of its charter. In this session, we will look at the best practices and the two vocabularies, and discuss whether and how these can and should be used when sharing public sector information.

S5. -moved to EDP track
S6. Internal data monitoring (by Bernhard Krabina)

As the PSI directive revises and increases the obligations on European Union member states to make their publicly funded data available at zero or, at most, marginal cost, a systematic approach to internal data monitoring becomes more important for public sector organisations. In OGD initiatives it was initially quite easy to find suitable datasets to publish on OGD portals, but as finding further datasets becomes an issue, setting up an internal data catalogue also becomes more important from an OGD perspective.

In the workshop we will discuss:
- the organisational procedure for setting up an internal data catalogue: who is in charge, what (meta)data is collected;
- whether the metrics proposed in the Open Government Implementation Model [1] are suitable for internal use, whether they are being used, and whether they need improvement;
- tools for collection: how information about potential datasets is collected and what tools are used. As a best practice, the OGD Cockpit [2] will be presented and discussed;
- what kind of information has to be added according to the PSI directive (e.g. marginal costs, legal information, etc.);
- how transparent an internal data catalogue must be.

Expected outcome of the workshop: Insights into the questions above with concrete results from participating public sector organisations: what the current practices are and what needs to be improved. We will discuss what an internal data catalogue is, what the metrics for internal data monitoring are, and what tools are suitable for the monitoring process. The results will be included in the upcoming version 3 of the Open Government Implementation Model.

Intended audience: practitioners from public sector organisations as well as researchers in the field of Open Government and PSI.

Moderation methods you plan to use: a short presentation on "Internal Data Monitoring - Processes and Tools" followed by a short paper-based collection of practices from the participants and a discussion with practitioners and researchers.

Facilities you require: Beamer, WiFi, Pinboard/Flipchart including pins and markers

[1] Open Government Implementation Model: http://kdz.eu/de/node/2651
[2] OGD Cockpit: http://www.ogdcockpit.eu


S7. PSI Request Repository (by Georg Hittmair)

While open data platforms contain information about data that were released voluntarily by public sector bodies, nobody collects the requests made by PSI re-users under the PSI directive. Those requests show the real needs of the private sector and could therefore lead to improvements in the re-use of public sector information.

S8. Linked Data for SMEs (by Lena Farid and Andreas Schramm)

Linked Data is currently an active research field, driven by visions and promises. New ideas, concepts and tools keep emerging in quick succession. In contrast, relatively little activity is seen on aspects like ease of use and accessibility of tools for non-experts, which would promote Linked Data at SME level. This is a sign of the still-developing maturity of the field, and it is the motivation and starting point of the LinDA ("Linked Data Analytics") project.

In this project, various Linked Data tools have been designed and implemented, ranging from the renovation of public sector information and enterprise data to data analytics, together with an integrated workbench that couples these components. The workbench also provides user guidance through simple standard workflows. It has been designed with ease of use in mind, ranking simplicity higher than feature completeness, so as to make it fit for SMEs.

In this session, we will first give an overview of the LinDA workbench, and the typical workflows it is designed for. The components comprise data renovation and consumption tools. Second, we will examine two of its components, viz. the data renovation tools and the vocabulary repository, and their interaction in more detail. Here, renovation refers to conversion from various source formats (structured and semi-structured data) towards Linked Data (RDF) under some modest semantic enrichment. For the latter purpose, the vocabulary repository has been incorporated, and we explain how these components are employed under the supervision of the user. Furthermore, the LinDA workbench will show how external data (SPARQL) endpoints can be easily integrated in order to facilitate the exploration and interlinking of various private and open data sources.
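To make the idea of renovation concrete, the following is a minimal sketch of converting a row of structured source data into RDF with a little semantic enrichment, using Python and rdflib. It is illustrative only and does not show LinDA's actual implementation; the vocabulary URI is a placeholder standing in for a term picked from the vocabulary repository.

  import csv
  import io

  from rdflib import Graph, Literal, Namespace, URIRef
  from rdflib.namespace import RDF, RDFS, XSD

  EX = Namespace("http://example.org/vocab#")  # placeholder vocabulary

  source = io.StringIO("id,name,employees\n42,ACME Ltd,130\n")

  g = Graph()
  g.bind("ex", EX)

  for row in csv.DictReader(source):
      org = URIRef("http://example.org/org/" + row["id"])
      g.add((org, RDF.type, EX.Organisation))  # enrichment: a typed resource
      g.add((org, RDFS.label, Literal(row["name"])))
      g.add((org, EX.employees, Literal(row["employees"], datatype=XSD.integer)))

  print(g.serialize(format="turtle"))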

S9. Linked Open University Data (by Barnabas Szasz, Rita Fleiner and Andras Micsik) - or a session...

At the Obuda University of Budapest, we designed the Obuda Linked Open University Data (OLOUD) ontology with the aim of collecting useful information for students and lecturers in the form of Linked Data. We found many relevant schemas filling various pieces of the modelling space, including the org, teach and W3C time ontologies, schema.org, etc. None of them fitted our needs perfectly, and massaging them into a single usable ontology was a challenge. The first aspect of the session is the interoperability and reusability of schemas and ontologies, and the design of the OLOUD ontology. The second goal was the inclusion of location information in the model, with support for in-house navigation. This extension to the OLOUD ontology was designed from scratch, as none of the previous modelling efforts were still maintained and open. The location model connects to the usual ontologies such as GeoNames, and is capable of describing classrooms, hallways, stairs and points of interest in a university building. I think it is important to discuss how well the Linked Universities or Linked Science efforts can guide us in the creation of Linked Data for universities or for education in general, and what others are planning in this area.

S10. First Steps towards interoperability of key open data asset descriptors (by Dietmar Gattwinkel, Konrad Abicht, René Pietzsch, Michael Martin) (Session proposal)

In the context of the European Digital Agenda, governmental authorities as well as virtual communities (e.g. datahub.io) have published a large number of open datasets. This is a fundamentally positive development; however, one can observe that different vocabularies are in use both for the metadata and for the data itself. Furthermore, the established vocabularies are often augmented with portal-specific metadata standards and published in different (local) languages. If Open Data are to be integrated and aggregated across portals, this entails a lot of effort. In this paper we present examples of how the problems of interoperability (section 1) and multilingualism (section 2) could be addressed for key open data asset descriptors. We focus on the analysis of today’s problems and ways to solve them.

S11. Implementing the DCAT Application Profile for Data Portals in Europe (by Nikolaos Loutas, Makx Dekkers) (Session proposal)

This session aims at bringing together implementers of the DCAT-AP and national profiles based on it, in order to gather implementation experience and feedback. Since the first publication of the DCAT-AP in 2013, many member states have implemented national application profiles based on the European profile. Earlier in 2015, a revision of the DCAT-AP was developed based on contributions from various Member States, the European Commission and independent experts. During this work, it became clear that it would be useful for implementers to have an overview of current practices and to share common challenges in mapping national practices to the exchange of dataset descriptions on the European level. Possible topics of discussion may include:

  • The implementation of the DCAT-AP as the native data model of existing data portals;
  • Implementation of national/regional variants of the DCAT-AP;
  • Harmonisation of national/regional codelists with codelists recommended by the DCAT-AP;
  • Implementation of importers/exporters for DCAT-AP conformant metadata;
  • Implementation of DCAT-AP validation frameworks and services;
S12. Self-describing fiscal data (by Jindřich Mynarz, Jakub Klímek, Jan Kučera) (Session proposal) - only in session B or later

Fiscal data released by public sector bodies looks like an impenetrable fog of accounting terms to most lay users without expertise in public finance. Relying on column labels to convey what the data is about is a recipe for misinterpretation. Fiscal data tends to be poorly documented and to lack schemas that would guide its users. We propose publishing self-describing fiscal data to help resolve these issues. Machine-readable descriptions of data increase the degree to which data processing can be automated. Data descriptions can guide human re-users and improve their understanding of the described datasets. Self-describing data enables some processing without reaching for out-of-band information such as documentation or dataset maintainers. We describe two complementary approaches to self-describing data in the domain of public finance: a data model based on the RDF Data Cube Vocabulary, and the Fiscal Data Package, which is based on JSON descriptors.
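As a rough illustration of the first approach, the sketch below attaches a minimal RDF Data Cube structure definition to a single budget observation, so that the meaning of the figures travels with the data. It is written in Python with rdflib; the fiscal property URIs and the figures are invented placeholders, not the data model proposed in the paper.

  from rdflib import BNode, Graph, Literal, Namespace
  from rdflib.namespace import RDF, RDFS, XSD

  QB = Namespace("http://purl.org/linked-data/cube#")
  EX = Namespace("http://example.org/fiscal#")  # placeholder fiscal vocabulary

  g = Graph()
  g.bind("qb", QB)
  g.bind("ex", EX)

  # Structure definition: declares what each observation is made of.
  dsd = EX.budgetStructure
  g.add((dsd, RDF.type, QB.DataStructureDefinition))
  for prop, kind in [(EX.fiscalYear, QB.DimensionProperty),
                     (EX.budgetLine, QB.DimensionProperty),
                     (EX.amount, QB.MeasureProperty)]:
      component = BNode()
      g.add((dsd, QB.component, component))
      g.add((component, QB.componentProperty, prop))
      g.add((prop, RDF.type, kind))
  g.add((EX.budgetLine, RDFS.comment,
         Literal("Classification of expenditure, e.g. a COFOG code")))

  # One self-describing observation in a dataset that points to the structure.
  g.add((EX.budget2015, QB.structure, dsd))
  obs = EX.obs1
  g.add((obs, RDF.type, QB.Observation))
  g.add((obs, QB.dataSet, EX.budget2015))
  g.add((obs, EX.fiscalYear, Literal("2015", datatype=XSD.gYear)))
  g.add((obs, EX.budgetLine, Literal("Pre-primary and primary education")))
  g.add((obs, EX.amount, Literal("1250000.00", datatype=XSD.decimal)))

  print(g.serialize(format="turtle"))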

S13. German XML for public administration “XÖV” tool chain in action (by Sebastian Sklarss)

This session aims to give an inside view of the XÖV Zubehör, the German tool set for standard production. It covers a run through the standard generation process, from UML modelling through schema validation and generation to XSD/PDF production. The way and degree to which tools like the XGenerator, the InteropBrowser, the Genericoder or the XRepository support the process will be discussed. The XÖV Zubehör has been used for years in Germany to create and publish XML transport formats and interface descriptions for various registers (person, civil, firearms) in a homogeneous way. The interactive session may scale from 15 to 60 minutes, and one ]init[ staff member will be available with a laptop, internet connection and the tools to demonstrate and explain in English.

S14. Cities removing friction from open data driven business (by Hanna Niemi-Hugaerts)

Abstract

S15. Publishing Linked Data with reusable declarative templates (by Martynas Jusevičius and Džiugas Tornau)

This session will address a declarative approach to defining “blueprints” for Linked Data: a vocabulary for Linked Data templates that can be shared and interpreted by different software systems, thereby increasing interoperability. The vocabulary has been developed as part of Graphity, a declarative platform for data-driven Web application development. Whereas RDF and Linked Data solve interoperability at the data layer, Graphity extends the declarative approach into software development. It delivers cost-efficiency and software interoperability for data publishers and software developers alike. We will start the session with a short case presentation. Open data from the Copenhagen municipality (geo data in CSV format) will be imported, converted to RDF, published as Linked Data, and analysed. The only tool used will be the Graphity Platform, to illustrate the viability and flexibility of Linked Data templates in real-world data management tasks. Full abstract

S16. Data for Smart Cities - data selection, data quality, and service reuse (by Muriel Foulonneau, Slim Turki)

The session aims to discuss the ability of local communities to easily deploy e-government services or to facilitate the take-up of data in existing apps for citizens and companies. The smart city context by its nature makes for heterogeneous but replicable experiences. Can we define which apps have been useful and successful elsewhere and analyse the capacity of a city to become “smart” based on the availability of data resources? While data formats can be documented through DCAT, for instance, and standards have been defined for data, the data characteristics required by reusable apps also need to be documented. In this session we propose discussing the availability of apps that can be applied beyond the boundaries of the environments in which they are currently used, and the data characteristics, including formats, granularity and licences, that are necessary. Full abstract

S17. Data and API Standards for Smart Cities - Necessary Next Step or Never-ending Battle? (Forum Virium Helsinki, Hanna Niemi-Hugaerts, Pekka Koponen)

Are standards helping us take the next leap ahead or are they slowing down development? Agile, developer-friendly standards seem to be popping up like mushrooms after the rain: Open311, Open Contracting, Open Trails etc. Simultaneously, standardisation bodies like ISO, W3C and OGC are tackling the issue with a more official, and therefore slower, approach. Enterprise standards like XBRL, SIRI and GTFS are being created that are more generic but also more complex to implement and use. Which way should cities follow? And how should cities co-operate in finding the best solutions and picking the best common specifications for their open APIs and data models? Active co-operation would lead to clear benefits by creating a larger market for developers and enabling "roaming" of digital city services. Cities should also find an effective way to steer the development of future versions of the standards to meet the ever-changing needs of cities and to react to constant change in the technical ecosystem. To ignite discussion, some of the recent initiatives for joining the forces of cities to push standardisation will be presented briefly, e.g. CitySDK (http://www.citysdk.eu/), the Finnish Six Cities Strategy (http://6aika.fi/in-english/) and the Open and Agile Smart Cities network (OASC) (http://connectedsmartcities.eu/open-and-agile-smart-cities/).

S18. Government as a Developer - Challenges and Perils (by André Lapa)

Facilitator: AMA - Agency for Administrative Modernization (PT)

AMA found a way to circumvent the classic “where are the developers pressuring us to open data?” issue by tapping into the Government’s own app development agenda and identifying cases where closed data could be opened to the public. Two current projects illustrate this principle well: the Local Transparency Portal and the Citizen Map, which opened up data related to municipal transparency and the geolocation of all public services by “forcing” the responsible entities to free the data that was powering those apps. Of course, this raised several questions regarding data governance, interoperability between different public agents, and the disparity that arises when developers try to compete with government resources in the use of open data. Obviously it is not an ideal system, but it is one that produced interesting results, such as raising the general level of data quality on the Portuguese open data portal. We propose to briefly present our experience in this process, from identifying the projects to coordinating with the different stakeholders (government, public bodies, private companies), and the lessons we have learned along the way. Then we would open up the discussion to the floor, drawing on the case’s main points, hoping to receive valuable feedback on possible alternative approaches and suggestions towards building a sustained governance model for these types of initiatives. The intended audience is everyone who engages with Public Sector Information, focusing on colleagues from other governments and public administrations that may have faced similar challenges.


Papers/abstracts for plenary presentations

P1. The Impact of open data in public sector (by Ms. Heli Koski) (Paper)

Various countries have implemented open (government) data strategies aimed at providing wide access to government data in machine-readable formats such that it can be freely used, reused and redistributed by anyone. Reported ex ante evaluations have estimated that the potential benefits of opening up public data resources are substantial. Very little is known about the underlying economic and organizational mechanisms and the implications of open data use at the organizational level or at the level of the economy as a whole. To the best of my knowledge, there is no reported comprehensive country-level ex post impact assessment of opening up government data. Currently, Finland is among the leading countries in opening up government data, and it also has a chance to be among the most advanced in the impact assessment of open data. The impacts of opening up government data can be divided into economic impacts and other social impacts. The prerequisite for this is the careful development of a monitoring and evaluation model for opening up government data as well as the systematic gathering of data for the impact assessment. Furthermore, the usability and usefulness of different public data resources for consumers, firms and public sector organizations can be assessed via the users’ own evaluations. In addition, it is important to assess appropriate means to disseminate and promote efficient utilization of information on the best practices of open data re-use in different organizations.

P1 Review: The paper reads well; however, a number of issues about it can be summarised as follows.

The first issue regards the content of the paper, which seems out of scope with respect to the objectives of the Berlin workshop. The paper extensively presents state-of-the-art works on the possible impact of open (government) data from economic and societal points of view. From our point of view, however, nothing new emerges compared to what has already been presented in the first Share-PSI workshops and in many other contexts. There is an interesting part which includes indicators for measuring the impact of opening data on different types of users. If there is space for accepting it, this part could be kept very short and could be considered the real contribution. The paper in general is excessively long (27 pages, in contrast to the 5 pages required by the Share-PSI workshop), with many parts that seem repeated throughout the document.

The paper is also dated "29 January 2015". This, together with the previous observations, leads us to think that it was already available and was simply sent to Share-PSI without checking the requirements of the specific Berlin workshop.

In conclusion, we strongly suggest substantially reducing the length of the paper, trying to (i) concentrate on the real contribution only, just briefly mentioning other works in the literature; and (ii) connect the content of the paper to interoperability aspects (perhaps thinking more about indicators for maximising interoperability) in order to align it better with the scope of the workshop.


P2. From open data to the innovative utilisation of information - The final report of the Finnish Open Data Programme 2013–2015 (by Anne Kauhanen-Simanainen, Margit Suurhasko and Mikael Vakkari) (Paper)

The public sector has at its disposal extensive data resources which could generate significant financial and social benefits if used more efficiently. Some major data resources have been made available, such as terrain and weather data, traffic and vehicle data, statistics, financial data, and cultural resources. The Finnish Open Data Programme 2013–2015 was launched in the spring of 2013 in order to accelerate and coordinate the opening of the public sector data resources.

The Open Data Programme was based on extensive cooperation between ministries, government agencies and institutions, local government, research institutes and developer communities. Programme outputs include an open data and interoperability portal, Avoindata.fi, and an open data development environment, JulkICTLab. So as to harmonise the terms of use, the public administration recommendation JHS 189 ’Licence for use of open data’ was prepared. The Finnish Open Data Programme has contributed as organiser and partner to several open data events and functions. International cooperation includes participation in the EU’s Share PSI project.

The preliminary assessment prepared during the programme suggests that research into the impacts of open data is just beginning. Systematic follow-up and improved methodologies will be needed in the future.

This publication proposes further pathways for moving from the opening of data resources to data utilisation and data competence enhancement. All open data policies should be part of a more comprehensive data policy, the principles of digitalisation, and the data infrastructure.


P3. DigMap - Digital Map Excerpt Software (Paper)

DigMap is a digital map excerpt (DME) tool and represents part of an ICT GIS infrastructure that can be widely used in different areas. Its most recognized usage is going to be printing out digital cadastral map excerpts composed of several layers (most commonly digital orthophoto, land use, parcels and buildings), used to locate, inventory, and appraise an owner’s property. Maps and map data are also important for other governmental agencies, the public, and the land information community (such as realtors, title companies, and surveyors). DigMap in PDF format significantly simplifies the viewing of geospatial data and feature attributes, while DigMap embedded files can enhance the capability to manage, analyse, summarize, display, and disseminate geographically referenced information. DigMap can be used in many different areas where a digital map is needed; an exhaustive list of application areas can be found in the INSPIRE Directive, which addresses 34 spatial data themes needed for environmental applications. Due to DigMap standardization it can be used as a widely accepted technology and common format for digital geospatial data dissemination over the Internet.

As support for many public services connected with the delivery of geospatial data, DigMap is going to be available online. According to the eGovernment benchmark method, DigMap follows the five-stage maturity model and supports the fourth stage, transaction, as well as the fifth and highest stage, targetisation. The transactional maturity level - also called full electronic case handling - means that the user applies for and receives the service online, without any additional paperwork, which is increasingly becoming mainstream. Targetisation provides an indication of the extent to which front- and back-offices are integrated, data is reused and services are delivered proactively. DigMap allows an online “one stop shop” approach to many public electronic services even when complex geospatial data is involved. Since the data are enveloped in today’s most common interoperable PDF format, it enables easy citizen participation using a wide range of devices (smartphones, tablets, PCs).

DigMap’s non-functional requirements include standardization, interoperability, authenticity and billing. Standardization: DigMap will enable sharing spatial data in standardized .pdf format, providing end users with the possibility of a direct view of spatial data presented as a map (a picture in the PDF file). Interoperability: DigMap will be based on the widely accepted OGC SLD, WMS, WCS, WFS and WPS standards and will be fully INSPIRE compliant. Authenticity: to be able to use an issued DigMap for legal purposes it must be signed with a digital signature enabling authentication and non-repudiation. Billing: DigMap will have a billing ability based on the price of a point, an area or the number of objects delivered to the end user.

“This project is funded by the European Union Seventh Framework Programme FP7/2007-2013 under grant agreement no 632838”

P3 Review 1: The paper describes a software product that is convenient for public sector information sharing. From that perspective, I think we should allow them a slot to present, though I would say it is a weak accept.

P3 Review 2: In general, I think this paper is too weak to be considered for a plenary talk. It only touches slightly on the theme of the workshop, is focused on the solution itself, and includes much superfluous information. This work was presented in Krems as well, and there is a good practice extracted from it [1].

This paper presents an open source GIS tool based on standards to manage and represent geographic information (OGC SLD, WMS, WCS, WFS) and compliant with the INSPIRE requirements.

Although this paper would fit in the "working with location data” track, I cannot see much information on how this tool can enhance the desired interoperability (apart from the previous references to OGC standards).

One of the non-functional requirements of the tool is standardization ("[…] DigMap will enable sharing spatial data in standardized .pdf format,"). That is fair for representing maps for humans, but we are looking for another level of standardization.

"Due to DigMap standardization it can be used as a wide accepted technology and common format for digital geospatial data dissemination over Internet.” -> This phrase is repeated in the document but I do not understand clearly how this tool may be used as common format to be shared. It will be helpful if this is clarified.

"Facilitate cross border use and data/service integration.” is one of the impacts, and this is key for the workshop but there is no much information about it. In order this paper to be accepted I suggest including more information about interoperability itself rather than explaining the benefits of the tool and how this can help Share PSI to draw up a best practice —different than the collected in Krems—. With the changes or the orientation proposed, this may fit into the “location data” sessions, but not in the plenary.


P4. Controlled Vocabularies and Metadata Sets for Public Sector Information Management (by Yannis Charalabidis) (Paper) -> available after 2 PM on the first day.

Public Sector Information management frameworks, usually in the form of ontologies and taxonomies containing controlled vocabularies and relevant metadata sets, appear as a key enabler that assists the classification and sharing of resources related to the provision of open data and efficient digital services towards citizens and enterprises. Different authorities typically use different terms to describe their resources and publish them in various data and service registries that may enhance the access to and delivery of governmental knowledge, but these registries also need to communicate seamlessly at national and pan-European level; hence the need for unifying and inclusive digital public service metadata standards emerges. This paper presents the creation of an ontology-based extended metadata set that embraces public sector services, documents, XML Schemas, codelists, public bodies and information systems. Furthermore, the paper presents experiences of its application within the Greek public sector, as part of the National Interoperability Framework specification. Such a metadata framework is an attempt to formalize the automated exchange of information between various portals and registries and to further assist service transformation and simplification efforts, while it can also be taken into consideration when applying Web 2.0 techniques in governance.

P4 Review: Overall, the paper is relevant to the topic of the workshop and presents the specifications created for a public sector information management initiative in Greece.

To a certain extent, I find that the related work is not always relevant. For example, when creating specifications for describing the metadata of public services or public administration organisations, there is far more relevant literature available than generic models for describing Web resources. The process followed for the development of the specifications is not explained. It would be interesting for the audience of the workshop if information about it were given. Was it an expert-driven process? Was it open? Were existing standards reused? If yes, what was the experience of reusing them; if not, why not? The paper also remains quite theoretical. It would be interesting for the attendees of the workshop to hear about how these models have been implemented in practice, what the lessons learnt are, etc. Finally, the links to the codelists referred to in the paper should be included in the report, and ideally also examples of use of the specification.


P5. Core Vocabularies and Grammar for Public Sector Information Interoperability (The Open Group - Chris Harding) (Paper)

The European Directive on the re-use of public sector information focuses on economic aspects, with the aim of stimulating re-use by commercial enterprises. Enterprises cannot discover information that could be valuable to their customers, and integrate it into their products and services, unless the information is clearly described in a way that they can understand. Core vocabularies provide a way of doing this, but they are not by themselves enough. A basic grammar is needed also. The Open Group is developing the Open Data Element Framework (O-DEF), which has a core vocabulary and a basic grammar for describing atomic units of data. This paper describes the main lessons learned from this development, and their application to public sector information. It is illustrated by a simple example of information use in smart cities.

P5 Review 1: This is an interesting paper as it aims to address common challenges in understanding descriptions across sectors, in this case for businesses that want to re-use public sector information. The paper uses specific terminology without clearly defining its meaning; e.g. it is not clearly defined what ‘core vocabulary’ and ‘grammar’ are. It would be necessary to explain very early on what those terms mean; otherwise the reader does not understand what problem the paper tries to solve: how do “core vocabularies provide a way of [clearly describing information]”, why are they not enough, and why is a ‘basic grammar’ needed?

The main problem that I have with the paper is that it actually does not provide a solution to the problem it addresses. It starts out saying that there is a need for basic grammar in addition to core vocabularies and concludes that you can use core vocabularies but need a basic grammar, but between those two parts there is no real information about how and why. It is also not helpful that there are other statements that are not explained, e.g. why “The grammar must be consistent with use of relational databases”, and what does that mean in practice?

Maybe the answer lies in O-DEF which is mentioned in the second paragraph, but I cannot find very much information about O-DEF on the Open Group’s website – and the article does not include a link to relevant information – other than a mention of a meeting in the Edinburgh 2015 conference that contains a comment that it will be based on UDEF.

Minor points:

The example of the vocabulary term ‘reading value’ seems to be based on a poor modelling approach, as it lacks, on one hand, information about the type of reading and, on the other hand, the type of value. The question about the meaning of ‘reading value’ is framed in the context of human communication. In machine-to-machine communication there should not be a need for interpretation because, of course, any vocabulary term needs to be properly defined.

I do not understand the statement that RDF “is more oriented to describing the real world than to describing data”. In what way is ‘data’ not part of the ‘real world’? In fact, RDF can be used as the model to describe absolutely anything you want. As far as I am concerned “piece of data giving the name of a person” is exactly what the RDF property foaf:name is supposed to do.
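To illustrate this point with a minimal sketch (not taken from the paper under review), describing "a piece of data giving the name of a person" in RDF amounts to a couple of triples:

  from rdflib import Graph, Literal, URIRef
  from rdflib.namespace import FOAF, RDF

  g = Graph()
  person = URIRef("http://example.org/person/alice")  # hypothetical identifier
  g.add((person, RDF.type, FOAF.Person))
  g.add((person, FOAF.name, Literal("Alice Example")))  # the name-of-a-person datum

  print(g.serialize(format="turtle"))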

Overall, I think that the paper does not really contain a compelling argument or sufficient detail that would make it relevant for the workshop. Maybe the paper could be a contribution to the discussion on interoperability if it contained an explanation of where existing approaches (RDF, DC, FOAF, DCAT etc., in combination with Application Profiles like the EU DCAT-AP) fail to provide solutions to the problem, and at least an outline of how O-DEF intends to solve the challenges.

In summary, I tend to recommend rejection of the paper in its current form.

P5 Review No 2: The paper "Core Vocabularies and Grammar for Public Sector Information Interoperability" addresses the problem of interoperability between a data publisher and consumer (application). As an example, the paper describes the problem of understanding information available, e.g., a "reading", a measurement of a value of a certain type (e.g., temperature) at a certain date, time, and location. The paper proposes to define a core vocabulary and a basic grammar for describing atomic units of data. It mentions the Open Data Element Framework (O-DEF) as a possible solution. It describes requirements for such a solution, e.g., consistency with use of relational databases, usability with data represented in languages such as JSON and XML.

The paper discusses the important problem of helping application developers to select the right data model. I like that the paper refers to relevant topics in the previous Share-PSI workshops as a motivation of this work.

My main concern about having this paper presented in the plenary is that it motivates the problem but does not describe a possible solution and does not give concrete examples of how to start solving it.

RDF is mentioned as having difficulties describing ideas such as "piece of data giving the name of a person". Indeed this sentence could be awkward to describe literally in RDF (though it is always possible using reification approaches). If I understand correctly, the author proposes a core vocabulary and grammar to describe the purpose of a class or property, e.g., a property "name" or a class "reading" and a property "value". Usually, such things would have human-readable descriptions of what they mean, e.g., "foaf:name" has "name of an agent". Such human-readable descriptions do not follow a core vocabulary and grammar, and it is therefore difficult for machines to understand them in order to recommend to application developers which terms to use.

The process of selecting the right classes and properties in an application is still a mostly manual task, which, however, can be supported by ontology repositories and semantic search engines (e.g., see http://vocab.cc/). Ontology languages such as OWL or RDFS allow limitations to be put on the possible meaning of properties and classes, e.g., by describing them as "datatype properties", "object properties", "inverse properties", "subclassOf", "disjointOf" etc. Indeed, it is an interesting question whether such logical axioms are sufficient to help machines understand the meaning of a class or property; this understanding could then be used, e.g., to automatically map/match/link classes and properties and to recommend their usage to developers. I am not sure whether such axioms could be improved upon (ontology matching and instance matching approaches may tell, as may research on expressivity and on design patterns), but I am also not convinced that a core vocabulary and grammar would help here. The paper does not give an example or other arguments.
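As a small, purely illustrative sketch of such axioms (not taken from the paper), the following restricts the meaning of a hypothetical "value" property with RDFS/OWL statements:

  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import OWL, RDF, RDFS, XSD

  EX = Namespace("http://example.org/reading#")  # hypothetical ontology

  g = Graph()
  g.bind("ex", EX)

  # Axioms that constrain the possible meaning of ex:value.
  g.add((EX.Reading, RDF.type, OWL.Class))
  g.add((EX.value, RDF.type, OWL.DatatypeProperty))
  g.add((EX.value, RDFS.domain, EX.Reading))  # only readings carry a value
  g.add((EX.value, RDFS.range, XSD.decimal))  # and the value is a decimal
  g.add((EX.value, RDFS.comment,
         Literal("The measured value of a reading, e.g. a temperature.")))

  print(g.serialize(format="turtle"))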

Other more minor concerns:

No further information about O-DEF can be found on the Web. The relationship to UDEF is unclear.

"Data descriptions require grammatical constructs such as object class and property, rather noun, verb etc." Is that not the definition of meta modelling? Work on meta modelling and design patterns seem to be highly related to this paper. In general, I think, a close look to related work would be very helpful, here:

  • Meta modelling (what are the basic constructs of an ontology?)
  • Modelling design patterns (what are basic building blocks, see Fowler for object-oriented data models or work by Pascal Hitzler for ontologies)
  • Linked Data query processing (light-weight/core vocabularies, linking of ontology parts, defining applications on one ontology and getting all data described with other ontologies "for free")

In summary, I think this paper would need more information about the possible solution (and ideally, some lessons learned from applying it) to be presented to the plenary. Instead, the paper is perfect for sparking a discussion session bringing together ontology engineers, data publishers, data consumers, and application developers to discuss open problems and possible solutions for interoperability.


P6. Semantics for the "Long Term Ecological Researchers" (by Herbert Schentz, Johannes Peterseil & Michael Mirtl) (Paper)

The presentation will describe the application of semantics for LTER-Europe, a community of researchers dealing with long-term ecosystem research. The needs of this community can be seen as representative of the environmental domain within research and public administration. Within the ALTER-Net project a test of semantic data integration has been carried out, in which dispersed, very heterogeneous data were mapped to a common ontology, thus allowing seamless and homogeneous data integration. This test showed that it is feasible, but that a lot of issues have to be overcome. One lesson learned from this test was that one comprehensive, complete conceptual model for this big domain cannot be established in a reasonable timeframe. The work has to be split up into several steps, must make use of work already done, and a simple start is needed. We developed a thesaurus as a simple start on semantics and interlinked it with other existing vocabularies which are important for the community (GEMET, EUROVOC, AGROVOC). EnvThes aims to cover the concepts for environmental monitoring and experimentation. So far this thesaurus is used for controlled keywords within the metadata system DEIMS; in the future the concepts should be mapped to an ontology and data should be annotated with the concepts. Building an ontology derived from ISO 19156 (Observations and Measurements) and annotating the underlying data would be the next steps.
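As an illustration of this kind of interlinking, the sketch below expresses one thesaurus concept in SKOS and maps it to concepts in GEMET and AGROVOC, using Python and rdflib. The concept and the target URIs are invented examples, not the actual EnvThes data.

  from rdflib import Graph, Literal, Namespace, URIRef
  from rdflib.namespace import RDF, SKOS

  ENVTHES = Namespace("http://example.org/envthes/")  # placeholder namespace

  g = Graph()
  g.bind("skos", SKOS)

  concept = ENVTHES["soilMoisture"]  # hypothetical concept
  g.add((concept, RDF.type, SKOS.Concept))
  g.add((concept, SKOS.prefLabel, Literal("soil moisture", lang="en")))
  # Mapping links to other vocabularies used by the community (target URIs illustrative).
  g.add((concept, SKOS.exactMatch,
         URIRef("http://www.eionet.europa.eu/gemet/concept/0000")))
  g.add((concept, SKOS.closeMatch,
         URIRef("http://aims.fao.org/aos/agrovoc/c_00000")))

  print(g.serialize(format="turtle"))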

P6 Review

Scope: The topic of the paper is within the scope of the Berlin workshop. It discusses initiatives conducted to improve semantic interoperability for LTER-Europe, a community of researchers dealing with long-term ecological research. Two approaches are described: an initial approach using a common ontology (SERONTO) and a more lightweight approach using a thesaurus (EnvThes). The paper provides some lessons learned and open issues that are still to be addressed.

Suggestions: During the workshop, the authors are encouraged to:
- Provide generic best practices (see also http://www.w3.org/2013/share-psi/bp/) on semantic interoperability, based on the lessons learned in the LTER field.
- Link to and compare with other initiatives on semantic interoperability such as the INSPIRE data specifications and linked data work, the ISA Core Vocabularies, schema.org, etc.
- Demonstrate the added value of using semantic technologies (i.e. new ways of data integration that were previously not possible) and areas of prioritisation.

Some minor typos:
- Section 2.3: "To provide." (isolated sentence)
- many data provider >> many data providers

Recommendation: accept

P7. An extensible architecture for an ecosystem of visualization web-components for Open Data (by Gennaro Cordasco, Delfina Malandrino, Pina Palmieri, Andrea Petta, Donato Pirozzi, Vittorio Scarano, Luigi Serra, Carmine Spagnuolo, Luca Vicidomini) (Paper)

We present here an architecture for open, extensible and modular web components for the visualization of Open Data datasets. Datalets are web components that can be easily built and included in any standard HTML page, as well as used in other systems. The architecture is developed as part of the infrastructure needed to build a Social Platform for Open Data (SPOD) in the Horizon 2020 funded project ROUTE-TO-PA (www.routetopa.eu). We present the motivations, examples and a sketch of the architecture. The software is currently under development, at a very early stage, but is already available, under an MIT open source licence, at deep.routetopa.eu.


P8. Intelligent fire risk monitor based on Linked Open Data (by Nicky van Oorschot, Bart van Leeuwen) (Paper)

Since the beginning of this year netage.nl has been working on a Linked Open Data use case within the Fire Department in the Netherlands. In this use case we are developing a self-service analytics platform where fire departments combine openly available datasets to calculate dwelling fire risks in various neighbourhoods, corresponding to forensic fire-related research. During a research period the application has been proven and verified. The ambition is to spread the application internationally among different countries. We would like to give a presentation in the plenary part of the Share-PSI workshop in Berlin this coming November. In addition to the explanation of the use case, part of this presentation will be a specific talk on the location and geographic issues and challenges we encountered. We believe that more companies encounter the same issues and challenges, which could be dealt with more easily by aligning location and geographic classifications and ontologies. We would love to share our ideas during a presentation. In the attachment you will find a 5-page paper which explains our use case briefly. We hope you are just as enthusiastic as we are.

P8 Review: The linked open data case presented in this paper is interesting, inspiring and highly relevant in the context of smart city services, especially where the quality of the data is critical. The main research question in the paper is “Does linked open data provide a qualitative and dynamic way to create a dynamic fire risk profile monitor for cities and neighbourhoods?” For the Share-PSI audience it would be interesting to learn details of how they improved and tested the quality of their data by linking, aggregating, analysing and overlaying a wide set of open datasets. Having professional firefighters evaluate the proof of concept and compare the data to real-life experiences gives credibility to this process. I think the base story of firefighters using linked data to enable safer and quicker decisions in emergency situations was already presented in the Samos workshop, but this paper focuses more on statistical data and analysis. Time has also passed, so it would be a good moment to get an update on how their work has progressed. It would be interesting to know more about the missing datasets they identified and, from their point of view, what the next steps in opening up data by the public sector should be. That would give guidance for prioritising efforts in opening up data (or preferably APIs) in other countries/cities. This case also promotes the benefits of opening data through APIs and in linked data format. Interoperability on an EU or global scale is not addressed at all in the paper, even though there must be a lot of demand for utilising the results of this work in other countries. However, they refer to international scaling in their email brief, so that will be in the scope of the presentation. This paper is relevant to the workshop themes, especially location data and smart cities. International interoperability is missing from the paper; interoperability between the local datasets (linked mainly by location) is of course built into the concept.


P9. Core Public Service Vocabulary - The Italian Application Profile (by Gabriele Ciasullo, Giorgia Lodi, Antonio Rotundo) (Paper)

This paper introduces an ongoing national initiative, carried out by the Agency for Digital Italy (AgID) in accordance with the relevant legislation, for the definition of the Italian catalogue of public services. The catalogue has three main objectives: (i) it can be used to facilitate the discovery of public services by citizens and enterprises; (ii) it provides public administrations with a comprehensive platform through which to share best practices on services and to build a community that discusses and potentially re-uses those best practices; and (iii) it can be used by AgID itself to monitor the degree of standardisation and digitalisation of public sector services, thus reporting to the political level for strategic decision fine-tuning purposes. The catalogue is defined in terms of metadata that contribute to the specification of the so-called Core Public Service Italian Application Profile. The metadata can be specified by public administrations (be they local or central) to represent their available, or under-development, online and physical public services. The Core Public Service Italian Application Profile is defined through the use of core vocabularies, as released by the European Commission in collaboration with W3C. In particular, the paper presents the current preliminary data model of the Italian profile, which is mainly based on the Core Public Service Vocabulary and its application profile, although other core vocabularies are considered (e.g., the Core Location Vocabulary, the Organization Ontology and the Registered Organization Vocabulary).
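For readers unfamiliar with the vocabulary, here is a minimal sketch of a CPSV-style public service description in Python with rdflib. It is illustrative only, assumes the commonly used CPSV namespace, and is not the Italian application profile itself.

  from rdflib import Graph, Literal, Namespace, URIRef
  from rdflib.namespace import DCTERMS, RDF

  CPSV = Namespace("http://purl.org/vocab/cpsv#")  # assumed CPSV namespace

  g = Graph()
  g.bind("cpsv", CPSV)
  g.bind("dct", DCTERMS)

  service = URIRef("http://example.org/service/birth-registration")  # hypothetical
  g.add((service, RDF.type, CPSV.PublicService))
  g.add((service, DCTERMS.title, Literal("Registration of a birth", lang="en")))
  g.add((service, DCTERMS.description,
         Literal("Registers a newborn child with the municipality.", lang="en")))
  # Competent authorities, channels, costs etc. would be attached with further
  # properties from the application profile and related core vocabularies.

  print(g.serialize(format="turtle"))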

P9 Review 1: The paper presents a good use case for modeling a domain using a controlled vocabulary. It follows best practices of vocabulary reuse in doing so. The paper discusses the architecture of a machine-readable semantic catalog, which is a necessary step in the digitization of public services. However, while it might help solve the problem of automated discoverability of public services, it does not address the much harder problem of automated interaction with said services. We think the paper should, at least as future work or in a workshop session, incorporate read-write Linked Data services into the data architecture. That would be the next logical step in the automation and digitization of public services: once they are discoverable in a machine-readable fashion, the goal should be to make the whole workflow digital and machine-readable in order to enable software agents to invoke and complete it on users' behalf.

P9 Review 2: This paper covers the main topic of the workshop: interoperability using and adapting core vocabularies. In this case, the document is centered on the definition of public services. The proposal covers most of the issues in the call for papers. One positive thing is that the proposed solution is carried out by a nation-wide public body (the Agency for Digital Italy), so the solution is really powerful and addresses the challenge of interoperability at national level. Technically, this approach develops a Core Public Service Application Profile for Italy, reusing various core standard vocabularies (i.e., CPSV, the Organization Ontology, the Core Location Vocabulary) as well as some existing national and supranational classification schemes (i.e., COFOG, NACE, the themes in DCAT-AP, the UK local service catalogue). The document is well structured and shows a tangible example of the benefits of adopting common vocabularies. The OWL file and documentation should be released publicly (before the report of the event is published) so that they can be explored by people interested in the technical part. This serves as a good practice to be replicated in the rest of the Member States, so this paper should be accepted and presented at the event, even in the plenary session. Furthermore, a best practice should be collected as an outcome of the workshop, so the author may be requested to do so.

P10. Towards real-time calculation of Financial Indexes using Linked Open Data (by Andreas Harth, Alexander Büscher; Karlsruhe Institute of Technology)

Governments evaluate the stability of their financial system by regularly computing key performance indicators for financial stress-test scenarios, such as the unemployment rate growing by 3% within three months. This work-in-progress paper discusses challenges in calculating financial stress values directly from Open Data in order to increase transparency, traceability and reproducibility. We select a financial stress index and give an overview of Open Data sources suitable for computing the index. For the implementation, we propose to use standard vocabularies (the RDF Data Cube Vocabulary) and an RDF rule engine (Linked Data-Fu).
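As a rough, hypothetical illustration of the ingredients named above, the sketch below encodes a single unemployment-rate observation with the RDF Data Cube Vocabulary and combines it with a second indicator into a toy stress value. The dataset URIs, the SDMX dimension and measure properties chosen, and the weights are assumptions for illustration; they are not taken from the paper.

# Toy example: one RDF Data Cube observation plus a purely illustrative index calculation.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB       = Namespace("http://purl.org/linked-data/cube#")
SDMX_DIM = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")
SDMX_MEA = Namespace("http://purl.org/linked-data/sdmx/2009/measure#")
EX       = Namespace("http://example.org/")  # placeholder namespace

g = Graph()
obs = EX["obs/unemployment/2015-10"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/unemployment-rate"]))
g.add((obs, SDMX_DIM.refPeriod, Literal("2015-10")))
g.add((obs, SDMX_MEA.obsValue, Literal(10.8, datatype=XSD.decimal)))

# Read the value back from the graph ...
unemployment = float(next(g.objects(obs, SDMX_MEA.obsValue)))

# ... and combine it with a second (here hard-coded) indicator using made-up weights.
bond_spread = 1.9
stress_index = 0.6 * unemployment + 0.4 * bond_spread
print(f"Illustrative stress value: {stress_index:.2f}")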


P11 The European Data Portal - Opening up Europe’s public data (by Wendy Carrara) (Paper)

The launch of the European Data Portal is one of the key steps the European Commission is taking to support access to public data. The strategic objective of the European Data Portal project is to address the accessibility and the value of Open (Government) Data. Open Data refers to the information collected, produced or paid for by public bodies (also referred to as PSI) and made freely available for re-use for any purpose. On 16 November 2015 the beta version of the European Data Portal was released. The European Data Portal harvests the metadata of PSI available on public data portals across European countries. Portals can be national, regional, local or domain-specific. They cover the EU Member States, the EEA, countries involved in the EU's neighbourhood policy and Switzerland, 39 countries in total.
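As background on what harvesting from such portals involves technically, many national open data portals (and the data platform of the European Data Portal itself, as noted under EDP1 below) are based on CKAN, whose standard package_search action returns dataset metadata as JSON. The sketch below, against a placeholder portal URL, shows roughly how that metadata can be retrieved; it is not the harvester actually used by the European Data Portal.

# Hedged sketch: pulling dataset metadata from a CKAN-based portal via the standard API.
import requests

PORTAL = "https://data.example.org"  # placeholder for a CKAN-based national/regional portal

resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"rows": 5},          # fetch only the first five datasets
    timeout=30,
)
resp.raise_for_status()
payload = resp.json()

for dataset in payload["result"]["results"]:
    # Each CKAN package carries the metadata that a harvester would later map to DCAT-AP.
    print(dataset["name"], "-", dataset.get("title", "(no title)"))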

P11 Review: My notes on the paper are as follows: 1. The case's action (European Open Data service) shows good alignment with the theme of the workshop, as it could tackle three of the aims: maximizing interoperability of data sets within and across domains, technical interoperability, and persistent data publishing. 2. The paper mentions measuring the level of open data maturity. The paper should clarify how this measurement is done and how the user can see and use its results. 3. In addition to maturity measurement, there are also other support actions to help countries publish more data. An elaboration of these actions would help readers understand how they support the publishing process (in accordance with technical and within/across-domain interoperability). 4. As the submitter of the paper is part of the Share-PSI consortium, it would be good to see a brief overview of the "Gold Book for data publishers" and how (if at all) it differs from what we are trying to achieve with this project's Best Practices.

Overall, after being updated with details of the operation of the portal (see points 2 and 3), the paper is a good addition to our workshop. In the session presentation I would like to see a brief comparison of the Gold Book and the Best Practices (see point 4).


P12 Estonian metadata reference architecture - a "proof of concept" prototype of a Public Service Metadata Editor that is based on linked open data and is able to output machine-readable, CPSV-conformant Estonian public service descriptions. Piloted in EIRA (European Interoperability Reference Architecture) and the CarTool project.

The short paper serves as both the description and the report (by Hannes Kiivet).

P13 Mobile positioning and statistical derivatives – The way forward? (by Igor Kuzma) Paper


Presentations proposed by the European Data Portal project:

EDP1. Portal State of Play (5-10min) (can also include a demo of the CKAN part + visualization + maps; however, this might be part of point 3, in which case it would last at least 15min) – Capgemini

The European Data Portal will have gone live on 16 November during the European Data Forum. It contains the metadata of public sector information made available on national portals across Europe, including geospatial portals. More than a portal, it offers additional content items that address how to start an Open Data journey and how to be harvested by the Portal, as well as a full suite of eLearning modules on Open Data. The benefits of re-use are also addressed, including the findings of the economic assessment of Open Data for the EU28+. The purpose of this presentation is to offer a quick demo of the current content as well as the key features of the portal and its future developments.

EDP2. Open data practices in MS (10min) (can be extended to 15min if we split it from the points below) – Capgemini

In the context of the European Data Portal, support will be offered to European countries to help them accelerate on their Open Data journey. A first step in offering common and tailored support is to establish a full landscape of how countries perform with regard to Open Data. The focus of the landscaping is to understand the level of Open Data Maturity from the perspective of the public sector representatives. Open Data Maturity is measured based on two key indicators: Open Data Readiness and Portal Maturity. Open Data Readiness looks at the presence of Open Data policies, the use made of the Open Data available, and at the political, social and economic impact of Open Data. Portal Maturity measures the usability of a portal with regard to the availability of functionalities, the overall re-usability of data, as well as the spread of data. In addition to these findings, recommendations are formulated based on common trends recorded across the different countries. The purpose of this presentation would be to share the findings of the landscaping with a key focus on portal usability and to launch a discussion around the recommendations proposed and identify further recommendations and best practices.

EDP3. Technical architecture of the Portal (20min) – Fraunhofer (linked to point 1)

At the European Data Forum 2015 in Luxembourg, the European Commission will officially launch the new pan-European Open Data Portal. The portal is the first official Open Data portal implementing the new DCAT Application Profile specification, harvesting metadata from heterogeneous open data portals of the 28 EU Member States and 11 other European countries, and providing metadata in all official languages of the European Union. In the presentation, the developers of the portal will demonstrate the portal components and present the technical details of its realization and the lessons learned.

EDP4. Lessons learned on the use of DCAT-AP (10-15min) – Fraunhofer

The European Data Portal adopted DCAT-AP as the basis for handling metadata. In a world where almost all metadata is exchanged using DCAT-AP this would be an easy venture, but at present the European Data Portal has to respect the de facto standards currently in use. Although the DCAT-AP specification tries to be as clear and unambiguous as possible, there are many things that make it hard to use in conjunction with much more restrictive formats such as CKAN JSON. The new release of DCAT-AP already addresses some of these interoperability issues, and we can confirm that these fixes work, but there are still open issues we have to solve.
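To make the kind of mapping involved more concrete, the sketch below translates a few fields of a simplified CKAN-style package into DCAT-AP classes using rdflib. The field names are common CKAN ones, the URIs are placeholders, and a real mapping has to handle far more properties, controlled vocabularies and multilingual values.

# Illustrative (simplified) mapping from a CKAN-style package dict to DCAT-AP RDF.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

ckan_package = {                       # simplified stand-in for CKAN JSON
    "name": "air-quality-2015",
    "title": "Air quality measurements 2015",
    "notes": "Hourly measurements from urban monitoring stations.",
    "resources": [
        {"url": "https://data.example.org/air-quality-2015.csv", "format": "CSV"},
    ],
}

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

dataset = URIRef(f"https://data.example.org/dataset/{ckan_package['name']}")  # placeholder URI scheme
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal(ckan_package["title"])))
g.add((dataset, DCTERMS.description, Literal(ckan_package["notes"])))

for i, res in enumerate(ckan_package["resources"]):
    dist = URIRef(f"{dataset}/distribution/{i}")
    g.add((dataset, DCAT.distribution, dist))
    g.add((dist, RDF.type, DCAT.Distribution))
    g.add((dist, DCAT.accessURL, URIRef(res["url"])))
    g.add((dist, DCTERMS["format"], Literal(res["format"])))

print(g.serialize(format="turtle"))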

EDP5. Lessons learned with GeoDCAT-AP (10-15min) – con terra - ONLY on 25th Nov

GeoDCAT-AP is an upcoming specification for geospatial metadata. It is based on DCAT-AP, which is currently under revision by a working group led by the European Commission. DCAT-AP is an RDF-based data format that aims to establish interoperability in European data portals. While DCAT-AP is independent of a particular application domain and just defines the basic properties and classes common to all metadata, GeoDCAT-AP specifies how the prevalent geospatial metadata standards, namely INSPIRE and ISO 19115, can be mapped to the DCAT-AP format. It is important to note that GeoDCAT-AP is not meant to replace these well-established standards, but provides an additional RDF-syntax binding and therefore a common way to transform the vast amount of existing geospatial metadata in Europe into a DCAT-AP-compliant form. This enables integration of this metadata into the linked open data world and general data portals.

One such portal is the new European Data Portal, which is currently under development by Capgemini, Intrasoft, Fraunhofer FOKUS, Sogeti and, for the geo data management, con terra. Its goal is to harvest other data portals in Europe, thus providing a central access point. For harvesting geo portals, an adapter was developed that implements the GeoDCAT-AP bindings. The adapter uses common protocols like CSW or OpenSearch Geo to harvest metadata from geo portals across Europe. Currently the beta version of the European Data Portal harvests 35 geo portals from 30 different European states.

During implementation of the GeoDCAT-AP bindings, several issues were identified when mapping geospatial metadata to DCAT-AP. Most of these issues are caused by the fact that INSPIRE/ISO metadata is usually self-contained, while RDF-based data uses URIs to reference other, sometimes external, resources. But there are also other problems, e.g. it is not clear how to specify that a geospatial dataset can be accessed by a standardized geo service interface like WMS or WFS. Solving these problems would have to start with fixing deficiencies in the current guidelines for creating INSPIRE and ISO metadata.
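For readers unfamiliar with CSW, a harvesting adapter's GetRecords request for ISO 19139 records can look roughly like the sketch below. The endpoint URL is a placeholder and the parameters follow the standard CSW 2.0.2 key-value encoding; this is not the European Data Portal's actual adapter.

# Hedged sketch of a CSW 2.0.2 GetRecords request for ISO 19139 metadata (placeholder endpoint).
import xml.etree.ElementTree as ET
import requests

CSW_ENDPOINT = "https://geocatalogue.example.org/csw"  # placeholder catalogue service URL

params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "gmd:MD_Metadata",
    "resultType": "results",
    "elementSetName": "full",
    "outputSchema": "http://www.isotc211.org/2005/gmd",  # ask for full ISO 19139 records
    "maxRecords": "10",
}

resp = requests.get(CSW_ENDPOINT, params=params, timeout=60)
resp.raise_for_status()

# Report how many records the catalogue matched in total.
root = ET.fromstring(resp.content)
results = root.find("{http://www.opengis.net/cat/csw/2.0.2}SearchResults")
if results is not None:
    print("Records matched:", results.get("numberOfRecordsMatched"))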

EDP6. Recommendations for Member State implementations (10-15min); this would include use of DCAT-AP as well as emphasis on the need for APIs, harvesting, identification of more URLs, etc. – Fraunhofer/Capgemini

The European Data Portal serves as a single access point for open data across the whole European Union. Therefore, its main offering is based on data and metadata that it does not provide itself but collects from the relevant portals of each Member State. In order to be able to "harvest" all these individual sources, they should fulfill some requirements. Besides a few hard requirements, we identified a list of recommendations that these portals should keep in mind when they plan their roadmap for the future.

EDP7. From Open Data platforms for developers to Open Data platforms for citizens (by Fraunhofer)

Modern Open Data platforms are not the most visited web sites. They are boring. The content and presentation of data in these platforms implicitly restricts their target users and makes them unsuitable for the general public. The proposed interactive session will demonstrate "Policy Compass", a new Open Data platform providing advanced functionalities for identifying, modelling and discussing the impacts of policies on the basis of available data. This platform is the first online service relying on the dataset search functionalities of the European Data Portal. This hands-on demonstration of the platform will be the starting point for an interactive discussion on the next generation of Open Data platforms: What do people expect from them? How can they be made attractive to the general public? Are they needed at all?


Other topics can be presented around the analytical reports (digital transformation and Open Data (n1), eSkills and Open Data (n2)), the economic analysis, etc., but these are less linked to interoperability.