From Share-PSI EC Project

Project meeting Tuesday 24

Yury welcomed everyone. Emphasised need to be clear on numbers for tomorrow night.

Phil: Summarised work done on the project so far, especially wrt the best practices. Also summarised the state of the project as presented to the reviewers last week.

Review 2 feedback

Chair: Makx

Makx: Don't panic

... I've been a reviewer and been reviewed.

I think the reviewers' comments are mostly reasonable, although maybe not all.

Strategic error to be emotional and threaten to close the project. So I propose to look at what they said and see how we can/should respond. I think we should respond with an e-mail saying what we're going to do. The ball is then in their court to see whether they're happy with what we're going to do. We get the discussion out of the emotional territory.

I understand that CC wants to know before Monday how we're going to respond or whether we're going to step out. I assume we're not going to decide to stop the project. I would say we're going to address the comments, not that we're going to address the shortcomings. We're not going to be able to respond to the whole letter by Monday. I'm happy (Makx) to help draft the initial response.

My point would be that we should be as cooperative as possible but not more. We can stress that we're doing more than we promised.

We promised that we'd do technical work. But our group wants to do non-tech, so we should bring to the fore that we have listened to them in Lisbon and done more. We're not the bad guys, we're the good guys.

Noel: it seems so strange to me that there's such a difference between what they're expecting and what's in the DoW. Why can't we just point them to the DoW and do that?

Makx: The reason we're doing things a little differently, is that last time there was nothing that we could point to from W3C DWBP (now there is).

The flow chart in the DoW starts with the workshops; from them come proposed BPs, but our workshops are not technical. We get policy and business issues. We said it would be a pity not to capture that. What the partners get out of the project is learning what other people are doing and we want to capture that. From that point we agreed to widen the scope to cover not just the technical. I note that there's no classification step in our DoW workflow.

Nancy: Usually, reviewers cannot give comments on the previous period. I had the feeling that some comments touched the previous period. I think if a comment is about the first period, we shouldn't accept it.

Noel: Did we promise that we would expand the scope?

Phil: explains the expansion of scope issue to cover non-tech.

Yuri: David had 2 messages. They don't care what we do, they care about the output, and the main output is the BPs. And on the 1st BP they gave us comments and, in their POV, we didn't address them properly. Philippe R also commented at the review about resources: most have been spent on the workshops.

Makx: The main question is: what is a BP? They and we have different definitions. I think one of the issues in the discussion is that we have not been clear about what a BP is. If you had a big contract, you'd go round all the MSs and compare all the guides. But that's not what we've done. We have workshops and derive BPs from them. And we discuss which ones we agree with etc. I don't think we've made it clear to them that that's what we're doing.

Nancy: The reviewers were clear that they think our docs are recommendations, not BPs. And that was OK. I think Carola's mail is different. She's asking for BPs.

Makx: I don't think there's a lot of difference between them. Their later points suggest they're after actionable advice - well that's OK, we agree with that.

Noel: it's the content of a BP that we defined and they didn't agree with. If you look at the first 4 or 5 we did that. It's the comments from Paul that are more serious. Come up with a structure...

Makx: We did

PeterW: Reviews some of the discussion (from his notes, elided). Talks about self standing BPs.

Noel: Yes, it's the content of the BPs. In which countries has it been implemented? If it has been implemented, did it go well? What were the problems?

PeterW: This is a Thematic Network, not a BP development project. I think they're incorrect to say that the instrumental side - the process side - is irrelevant. If we had just written papers and sent them in and someone had aggregated them, that would achieve a set of BPs but it wouldn't be a thematic network.

Noel: I agree, but I'm reporting what was reported to me. If you look at the comments that he did, down below, you can see what Paul wants and what David wants.

PeterW: They are negating what they call the instrumental aspects, that's my concern. The participative element is fundamental.

Makx: I think that's a message that we need to convey to the reviewers. The collaborative element is fundamental to the Share-PSI network. It's the knowledge of each other, being in touch outside and beyond the project.

Makx: I think if we're going to have to rewrite 7.1.1, then something like what Peter said should be in the intro.

Valentina: We tried to tell them that we have collected a lot of knowledge and learnt a lot from each other but they weren't interested in that. I think that they will insist on this explicit text.

Makx: I think we need to stress that that's our background in our response to the review.

Nancy: Just to add... I don't think that they don't care, I think they're happy with it, but they see a problem with the BPs so that's where they focus.

Makx: Goes through the comments. We think we did point 1, they don't.

... Point 2 sounds like they want an investigation of guides, but that's not what we do. Now that we have our BPs we can do the comparison. Collect and classify isn't in the DoW.

Jan: After Lisbon, there's a wiki page collecting guidelines. Maybe we can point to that.

Makx: We can point to the stories and the guides.

Daniel: In 7.1.1 we did point to those guides and the structure of the BPs.

Nancy: They're not happy with 7.1 but if we improve 7.1.1 they'll accept it (currently rejected).

Daniel: The classification is on the website but not in the deliverable.

Makx: Explain for each BP... I read BP1.1; there's a lot of text but it doesn't say anything. The workflow is workshops -> BPs. There's a reasonable comment on indicator 6, which is about the number of agreed standards across the EU. Maybe we can come up with a list?

PeterW: David was saying we should come up with a list of standards, or of gaps where standards are missing.

PhilA: Talked about DWBP, SDW (with OGC) and future OPOE WG.

Makx: Write that down. Document what we have done.

PeterW: If we identify gaps in the standards, then we should make a note of those.

Makx: Present the list of elements in D7.1.1.

Makx: I think the picture in the DoW should be in the report. I think a summary of what was done should be enough, not the detail.

PeterW: One thing that Paul said in both reviews was that he was hearing stories/anecdotes in the review meetings that weren't in the reports. That would be good to include in the PR

Phila: It was

Makx: A methodology section - include the diagram

Daniel: On the relationship between BPs and elements - those elements aren't listed in the deliverable but are on the website. So we can easily include those.

Makx: Then we get to the points where Paul talks about what he wants us to do.

... take necessary steps to... We're doing that. Good, we're doing it already.

Noel: Yes, *if* the content of the BPs is as they would like to see it.

Makx: We're not going to do exactly what they want; we'll take their comments into account but use our definition of BPs. And our definition is that you have a problem, a link to the PSID and a way of doing it. They say later that they want actionable advice. I think we need to tell them what we think a BP is.

Muriel: What they want I think is comprehensive documentation of a BP and its implementation

Noel: As well as what went wrong. I want to be able to go to the website and see what did and didn't work. He wants inspiration.

Muriel: So that refers back to the workshops? maybe we need links back to the original papers. In some cases the papers were complete stories.

Noel: We plan to link to the handbooks, but they weren't satisfied with that.

Muriel: If we want to redefine BPs as recommendations, maybe we link to the original story - here's how they implemented it. Maybe it's a cheap win.

Makx: So it includes links to implementations.

Muriel: We had the links originally but they don't appear in the final versions.

Makx: we're building a node linking to the stories and the guides.

Noel: we said we'd do that

Makx: Did we write it down?

Makx: No. 2 - define the criteria for selecting a BP.

Muriel: That's Phil's questionnaire

Makx: The implementation is more important than whether we think it's a good idea. We have the criteria, we just need to write them down.

Daniel: in D7.1.1 there is evidence for the support

Makx: yes but we need to be clearer on why we have them.

Valentina: Maybe we need that for each BP separately?

Muriel: We need the BPs to be implemented to be included

Makx: We have criteria. They may not be right but we have them.

Makx: 3 - well documented how exactly knowledge is gathered from a workshop and turned into a BP. Isn't that the process?

Johann: I think they want proofs of the outputs, like the scribes from each session.

Makx: So we say thanks, we're doing it. But we should be better at documenting it.

Jan: We should highlight that we had a task force that collected the stories and created the BPs.

Makx: That's the process; Johann says we should document the BPs.

Muriel: Emphasises the need for link from the BP to the paper(s) it came from.

Makx: And a pointer to the wiki where the knowledge was gathered.

Johann: There's that diagram in the DoW that we can possibly use although Valentina's diagram from Luxembourg is better.

Makx: I think copying the image from the DoW is a good placeholder.

Georg: Are these BPs addressing real problems? What are the problems they are trying to solve?

Makx: Noel is saying what are the problems when I implement?

Valentina: They would like these questions.

Makx: Come to the site, and see a list of questions that lead to answers.

Georg: We should list the problems as well as the BPs.

Makx: The link back to the story grounds the discussion.

Nancy: D7.1.1 includes a description of how they were done. I don't think we can say more

Makx: I think we should say less. A bulleted summary maybe.

Makx: On the issue of how to structure D7.1.1 I think we know how to do that.

Makx: Point 4 - adapt the process to ensure that BPs are practical. Then there are 4 suggestions.

Makx: Makes the point about maturity of the infrastructure. I'd say that our BPs are very general, it's the localised guides that give the detail. I'm not sure that it's true that we're not offering concrete advice.

PhilA: They vary in concreteness (e.g. spatial standards one)

Daniel: Both reviewers stressed the need for more detail on the choices available. Member state 1 did X, member state 2 did Y.

Noel: What was the challenge, how was it met. I'll give an example of what he gave me that he wants. We have to decide whether/how we're going to do it.

Makx: The one about clear PSID challenges - those are our challenges, I think we have that covered. We asked questions in the workshops and got the answers that we got. We are deriving the real questions from the stories. There may be more questions but we don't know them because we didn't have them in our workshops.

Noel: raises issue of level of maturity. Local legislation may prevent implementation of a BP. That's fine, but I want inspiration from BPs.

Makx: We can't give them inspiration from BPs.

Nancy: They also said, that a BP came from a specific partner. They want to validate this from the rest of the team. If Noel has a similar story, we can glue it together. Now we just have one story -> BP. They want to add experience to a specific story. I think that's what they expect.

Noel: He mentioned licensing as a problem in NL. Who has implemented that? What are the solutions?

Makx: We can say that we have BPs and now we're going to test them against national guidelines. But we can't necessarily answer a free set of questions.

Noel: The 13 I defined - licensing, marginal cost etc. are the gaps in the PSID. Have we discussed them? Have we come up with a solution?

Makx: I can answer that one. There's nothing in our collection about cost.

Noel: Not true.

Nancy: I think in some cases it would be nice to show different options in different countries with links to the PSID.

Makx: It might be that there's a flaw in our process. Maybe we need to look at the 13 PSID aspects and see if we have something for each one.

Noel: I said to Paul that there's no way that our workshops can fill the gaps in the PSID. They can't look at us for all those solutions.

Makx: But we can see whether we have something for each of the elements.

Makx: They say that the format should allow for certain things. We can do that, even if we don't fill each section. As soon as a localised guide is available we can fill it in.

Muriel: Sorry - the implementation of the BP in local guides. People trying things...

Makx: Our job is to look at guidelines.

Makx: Contact person who has implemented, as well as contact point for the BP

Makx: Be more legalistic. They ask for a format that allows more people to be added. OK, we can have that in the format.

Johann: By linking back to countries already doing X, you get links to people.

Nancy: If we have 5 countries discussing marginal cost, then we have the contact points.

Makx: So our answer is that yes, the format can accept more people to be listed.

Makx: A per-country overview of dos and don'ts. That sounds like a localised guide thing.

Muriel: I think that's a way for them to talk about the localised guides, but we haven't promised to do it.

Makx: I'd say that we can link to it if that info is available.

Jan: For some localised guides, there would be an implicit list of dos and don'ts.

Noel: Why is it hard to copy and paste into the BPs?

Makx: Because Jan's dos and don'ts will be in Czech, for example.

Jan: The dos and don'ts are targeted at a specific audience. Writing in English won't go down well in the Czech Republic.

Makx: Final point - lessons learnt by whom?

PeterW: This is again just a format issue. The format should provide space for...

Makx: Not in the reviewers' comments: we have a collection of BPs. How can we make sure that people implementing the BPs can feed back? Maybe we need to host a channel for updating. So we can say that we can only do so much in the project but leave it open to future development. We have limited time and resources and we'll provide the hooks for future work.

Makx: And that's all I think. We have responses to all their issues I believe.

Noel: shows the template he mailed to Paul a while ago. Then he showed a longer doc that Paul has also seen and likes.

Makx: That's the structure we have used.

Noel: are we convinced that's the right structure. They seem unhappy with the content.

Nancy: Maybe we took out too much in an effort to be brief.

PeterW: Can we deep link to paragraphs?

Makx: I think we need to be careful about promising too much.

PeterW: My point is that if you had something like an RDF model, you could slice and dice as you want.

Makx: Your project...

... We do what we can. Maybe things can be done later

PhilA: (Thanking Makx and taking over) Now we have to decide who is going to do what. We will discuss the BPs, the survey, and our methodology.

PhilA: Almost all draft BPs contain links to the papers where they originated. Up to Krems, there has been no technical BP that did not already come out of W3C's work. We were asked to do more on geospatial data, which we will do. Concerning the non-technical ones, we need to put in back-links; this will be done by me (PhilA) and others. What can we do to address the reviewers' comment that BPs should be more structured and finer-grained?

Nancy: Structure was only one problem; there was also the content. We had to squeeze many pages of input into two pages. We could provide the existing short versions and link to the original long "stories" where each BP originated.

Makx: My suggestion would be to dissect the stories so that every BP covers only one aspect of the PSI Directive. A solution would be to have different perspectives on all the material we came up with.

PhilA: We need element descriptions which discuss the overall problem. We need volunteers from the Share-PSI network to do that. Timeline: by Christmas.

  • Georg: Charging
  • Martin: Formats
  • Jan: Selection & Criteria
  • Muriel: Quality
  • JosefA: Ecosystems
  • Nikos: Platforms
  • Daniel: Techniques
  • PeterK: Re-use
  • Nancy: Policies and Legislation
  • PeterW: Organisation
  • PhilA: Discoverability & Documentation

PhilA: From a website perspective, this will help search engines discover the content.

Makx: Maybe we should change the titles of the best practices to questions, and maybe change them to "How To"s.

PeterW: We should also assign numbers.

Mikkael: We tried to approach the problem from "why to do that and why is it beneficial", but that approach resulted in too much paper and work.

PhilA: If a best practice carries your name, you are in charge of improving it.

Nancy: The section "Applicability by other Member States" is helpful; we will stick to that. If we want to add information on how other countries are implementing the BP, we will write it into a new, additional section named "Examples of this Recommendation in practice". We should point out whether it is a recommendation or a policy.

Noel: Instead of a mere link, it would be helpful to have a few sentences. People should send their relevant examples, a few sentences and a contact person.

Jan: The problem is that the moment we include experience in a best practice, it will be less stable and become obsolete more quickly.

PhilA: PeterW, you provided the BP, how can you prove that other people are actually doing this?

Martin and Jens have been doing this.

PhilA: A good source of evidence about the applicability of the BPs is the people who participated in the survey on the relevance of the BPs. By the way, all BPs have URIs; we will use these URIs instead of numbers to refer to them. Ideally a BP will point to the localised guide, section XY or page AZ.
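As a sketch of the URI idea above, here is one way stable, deep-linkable BP URIs could be derived from titles. The base URL and the slug convention are illustrative assumptions, not the project's actual scheme:

```python
import re
from urllib.parse import urljoin

def bp_uri(base: str, title: str) -> str:
    """Derive a stable URI for a best practice from its title.

    The slug rule (lowercase, non-alphanumerics collapsed to hyphens)
    is a hypothetical convention for illustration only.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return urljoin(base, f"bp/{slug}")

# Hypothetical base URL and BP title:
uri = bp_uri("https://example.org/share-psi/", "Enable feedback channels")
```

Referencing BPs by URI rather than number means links keep working even if the ordering or numbering of the collection changes.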

PhilA: How to respond to the reviewers comments?

MakxD: Will draft an answer, understand the points, wait for the final report, expected by end of January. Documents for the final meeting should be ready one month before the required delivery.

PhilA: Deliverable D7.1.1 has to be rewritten. Who participates? PeterW, DanielP, Valentina

NancyR: Presents the outline of the session. Sponsors of BPs are required to be there and provide feedback on these best practices.

PeterW: We need to tell people that they need to do something, that this is going to be an interactive session. We need to put the perspective on policy implementers who have to implement PSI, not only open data. What is PSI, what is open data, and what needs to be done?

PeterW will keep his introduction on PSI short, about 5 minutes in total.

EmmaB will consider updating the BP contribution with MartinA.

PhilA: Each presenter of the agreed BPs and BP candidates will have 2 minutes. Scribes for tomorrow (recorded in a Word document with track changes).

Further Discussion

Makx: We're following a slightly different process than the DoW as the W3C WG is doing tech and we're doing non-tech but the two are linked.

Welcome & Opening Plenary

Welcome from Prof. Dr. Ina Schieferdecker Head of Fraunhofer FOKUS

We are over 100 people talking about open data, linked data, Share-PSI, and interoperability.

Here in the institute we run a large number of projects in this area, including the European Data Portal: 250,000 datasets from European countries, which went live this week.

Workshop Introduction: Phil Archer, W3C


We don't want long presentations, we want conversations. We encourage you to participate and communicate, and invite you to take part in the discussion. The European Directive on PSI requires the public sector to do this. It doesn't come naturally to people in the public sector.

There are many technical challenges, societal challenges and policy challenges, which we are focusing on in working in the open.

The aim is to extract from those discussions some best practices, recommendations that others can use and that can be shared across the EU.

We have 40 partners in the network.

The European Data Portal - Opening Up Europe's Public Data

I'll keep it short so you can take a look at the portal.

Beta version went live last week.

Why does open data matter?

substantial amount of money

by 2020 worth up to 325 billion euro

Substantial gain for all of us. Public Admin savings, fight air pollution, increase education, demonstrate that we are democracies, we’re transparent.

There’s openness in what we do and what we share

270,000 datasets (metadata), 34 countries

national data portals and geodata portals

will extend to regional

encouraging countries to harvest those portals as well across different countries

data is structured according to DCAT

start encouraging countries to structure their data as well

gold book for data portals

e-learning tutorials

In European data communities: want people to access via learning

we should cross-promote

want to make the data accessible as that is where we get value

CKAN platform
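Since the portal runs on CKAN, its datasets can be queried through CKAN's Action API (`/api/3/action/package_search`). A minimal sketch; the base URL is a placeholder and the response below is a canned example in the shape CKAN returns, not real portal data:

```python
import json
from urllib.parse import urlencode

def package_search_url(base_url: str, query: str, rows: int = 10) -> str:
    """Build a CKAN package_search request URL."""
    params = urlencode({"q": query, "rows": rows})
    return f"{base_url.rstrip('/')}/api/3/action/package_search?{params}"

def dataset_titles(response_body: str) -> list:
    """Extract dataset titles from a package_search JSON response."""
    payload = json.loads(response_body)
    if not payload.get("success"):
        raise ValueError("CKAN reported failure")
    return [pkg["title"] for pkg in payload["result"]["results"]]

# Canned example response (illustrative data only):
sample = json.dumps({
    "success": True,
    "result": {"count": 2, "results": [
        {"title": "Air quality measurements"},
        {"title": "Public transport stops"},
    ]},
})

url = package_search_url("https://example-portal.eu", "air quality", rows=5)
titles = dataset_titles(sample)
```

In practice the URL would be fetched over HTTP; the point of the sketch is that a harvesting or cross-promotion tool can search any CKAN-based portal through the same uniform API.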

who is contributing?

  • Fraunhofer FOKUS
  • CapGemini
  • ODI on learning
  • Univ of Southampton
  • Intrasoft
  • Time.lex
  • Sogeti
  • Con Terra

community around that - for European Commission

Languages: French, English and German at the moment

24 languages by the end of the project

we have technical limitations as well, e.g. how many batches we can feed into CKAN

It’s more than a portal:

not just to aggregate but to accelerate - a convincing story to tell, to help release more open data

strong emphasis on training aspects

using data

providing data - about data publishing

Goldbook for data providers:

open data in a nutshell

how to build an open data strategy

technical preparation and implementation

putting in place an open data lifecycle

ensuring and monitoring success

ELearning Modules include:

Step one: Introduction to Open Data

Step two: Open Data Implementation

Step three: Technical deep dive

Step four: Where next with open data?


  • Introduction to government open data
  • introduction to linked data
  • more material coming online soon
  • Featured highlights:
  • Content promotion
  • Economic analysis
  • open data landscaping
  • CEF open data call

Four complementary work streams are focused on reuse:

Leveraging community engagement to make the most out of Open Data

Communicating and raising awareness about the Portal

Studying the economic impact of the reuse of public data resources

Preparing for the future and working on the sustainability

Community engagement - reports to drive understanding and reuse

Topic 1: Open data and digital transformation

Topic 2: Open data and e-skills

Topic 3: Open data and privacy

Economic benefits of open data - metrics to measure economic impact

market size

jobs created

cost savings

efficiency gains

Future actions will consist in making the most of domain specific activities: weaving our activities into existing communities rather than creating communities of our own

beta version - happy to receive comments

social media

help us improve: write recommendations directly

if a comment on a dataset, we will send back to the person who published it


Phil: What evidence do you have that gold book is a ‘standard’?

Have looked at what works on eg Open Knowledge, Share PSI, Socrata, what countries have done already.

Not looking at what is right or wrong. We have used what is out there.

To walk someone through that story to make sure that their open data initiative is sustainable.

Tom Heath ODI:

Great to hear you talking about stories.

Do you have a particular wish list for the sorts of stories you would like to see?

Don’t have the luxury of being selective

  • public admin
  • NGO
  • private sector
  • preferably European
  • diversity of country coverage
  • data publishing

Wish list: if we end up having a lot, might look at domain level where we don’t have a lot of stories

Benjamin Cave:

Challenge at European level - wanting to localise case studies, and also wanting to fight against

Do we need more local level or European best practices?

Wendy doesn’t think the two things are opposed.

European best practice - how would we define that?

At the European level, locality doesn’t mean it’s not relevant.

Share best practices. Even if that locality shares something, someone will get something from it.

Share PSI - Cyprus and Denmark are the only two member countries we haven't managed to contact

European Interoperability: The ISA Core Vocabularies, Athanasios Karalopoulos

  • Slides -
  • ISA – Interoperability Solutions for European Public Administrations
  • Huge amounts of information are produced but there is no information strategy.
  • Data is stored in isolated islands without easy ways of integration.
  • Every information system is built from scratch with local goals and optimizations without thinking about global optimum that could be achieved by integrating data.
  • Analogy with a similar situation in railroads – used to be that each country had a unique rail system and the trains could run only on local tracks. A solution was development and deployment of international railroad standards so that trains could run across borders.
  • Open Semantic Data standards aim to achieve the same results in government information systems. So that data can be linked across systems.
  • Two main types of Data Standards:
    • Data Models - meta level
    • Reference Data - data level
  • ISA Core Vocabularies
    • Common concepts used in most public sector information systems
    • Currently four vocabularies finished: Core Person, Core Business, Core Location, and Core Public Service
  • Examples of Use
    • OSLO - Open Standards for Linked Administrations in Flanders
      • Extends the core vocabularies
    • Integrated portfolio management of public services - Estonia
    • The Italian Application Profile of the Core Public Service Vocabulary
  • Pilot implementations
    • to learn what problems are experienced
    • lessons learned, extensions required
  • Future work – started work on two new vocabularies – Core Criterion / Core Evidence and Core Public Organization
  • Community of Practice
    • Created a community for sharing experience and
  • Core Data Models Mapping Directory
    • a common template to describe public sector datasets and data catalogs
    • DCAT-AP validator – provide a service for validation of metadata
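The core idea above, that a shared conceptual model can be implemented with different technologies, can be sketched as follows: a minimal Core-Person-like record validated against a small set of required conceptual properties, then serialised to one concrete format (JSON). The property names are simplified approximations for illustration, not the normative vocabulary terms:

```python
import json

# Illustrative required properties of a Core-Person-like conceptual model
# (simplified; not the normative ISA vocabulary).
CORE_PERSON_REQUIRED = {"identifier", "givenName", "familyName"}

def validate(record: dict, required: set) -> list:
    """Return the required conceptual properties missing from record, sorted."""
    return sorted(required - record.keys())

# A hypothetical record:
person = {
    "identifier": "BE-12345",
    "givenName": "An",
    "familyName": "Peeters",
    "dateOfBirth": "1980-05-01",
}

missing = validate(person, CORE_PERSON_REQUIRED)
as_json = json.dumps(person, sort_keys=True)  # one possible concrete serialisation
```

The same record could equally be serialised as RDF or XML; validation happens against the conceptual model, which is the decoupling the pilots demonstrated.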


Q: most standards have been written by people in this room. What is your role?

A: The EIRA project is just a facilitator; others create.

Q: you are creating standards. Different people have a different definition of what is a standard. What do you see as your main contribution?

A: Our primary effort is on a conceptual level. Decouple implementation from the conceptual level. It has been proved by pilots that you can create a model that complies with the conceptual model but is implemented with different technologies, e.g. RDF, XML, UML, etc.

Q: Standards are good, but they are used only when economic advantages outweigh the costs. Can you provide proof that they provide an economic advantage?

A: No, not yet. But some participants have mentioned savings. After analyzing the answers, it is our intention to talk with numbers. We will start to provide numbers, but it is hard.

The Impact Of Open Data. Towards Ex Post Assessment, Heli Koski, The Research Institute of the Finnish Economy

  • You have heard lots of claims about the expected impacts of Open Data, this talk is about what we actually know about the impacts of Open Data
  • Citation from The Economist (21 Nov 2015)
    • open data has not lived up to expectations
    • gives examples of use of open data (see slides)
    • asks why has not more been achieved and gives four answers
      • released data is often useless
      • the data portals are difficult to navigate
      • too few people are capable of using the data
      • privacy issues
  • socio-economic research of open data use is scarce
    • the 2013 Open Data Barometer reported that none of the 77 surveyed countries had conducted a comprehensive assessment of open data benefits
  • what we actually know
    • impacts on firms and industries
      • Studies suggest that opening data may facilitate innovation and growth not among large companies but among SMEs
      • size and growth in certain markets linked to openness of data
    • impacts on citizens
      • time savings (rush-hours avoided, hyperlocal weather forecasts)
      • participatory democracy (but little information on how much monitoring is done by general public)
    • impacts on public sector
      • improved efficiency
      • increased tax revenues
    • Economy-level impacts
      • only a few innovations have had significant macroeconomic impacts – electricity, … (see slides)
      • but they have taken a long time to achieve the massive impact
      • thus not realistic to expect such impacts from open data soon, if at all.
  • Towards ex-post evaluation
    • research will require a long-term systematic effort
    • assessment model
    • three key questions
      • what are potential indicators
      • what useful data is already being collected
      • what else needs to be collected
    • Key economic impacts
      • see slide for a picture
    • Indicators of the identified impacts
      • If we know the expected impacts we need to locate the indicators for the expected impacts
        • e.g. if we expect more products and services, then we need to look at how many new products and services are based on open data
        • for some impacts the indicators are easier to find than for others; see slides
      • Summary of what useful data currently collected; see slides
    • Missing data that is not currently collected and how to collect it
      • Data collection needs: Citizens
        • types of open data used and for what they are used
        • add a question to the annual survey
      • Data collection needs: Firms
        • extent and purpose for which firms use open data
        • add a question to the annual survey
      • Data collection needs: Public Sector & Economy-wide
        • impacts: costs & productivity
        • no good idea to which existing survey this can be added
        • at the economy level – measure the quantity of open data
    • What to do after we have decided on impacts and indicators
      • after collection, the cost-benefit analysis can be conducted
      • qualitative assessment
      • not one time; we need to do the surveys systematically over the long term to do statistical analysis
  • Conclusions
    • we don't yet even know how much can be achieved with open data
    • only after we have that information, can we ask how much is currently achieved


Q: In the pharmaceutical industry we have patent protection and removal of patent protection. We have information about the consequences of the removal of patent protection (the patent cliff) on economic indicators. Could that sort of thinking be applied to get estimates of the consequences of opening data?

A: Interesting idea. We have not thought about such an approach. But we need to be careful. We need to collect data at the European level using surveys so that we can use Ex Post analysis.

Q: Why focus on a macroscale model rather than on microscale consequences from existing use cases of open data?

A: Not proposing to focus on the macro level, but to focus on different sectors and on micro-level indicators. But you have a point. We need to look at what would have happened had the intervention not happened. Economics has statistical methods we can use to answer such questions.

Q: An example from open research. You get high-level confirmations that open research is useful, but the responses are not quantified, and it is hard to get quantified responses from surveys.

A: We don't even know what kinds of data are used. If we knew what they use, we could use other databases to link the survey data with objective data from those databases.

Q: A consideration. The first step is to explore how much open data could be used. Organisations already have this cost analysis internally. INSPIRE has monitoring, where people report what kinds of data they use. This needs to become part of the organisations' surveys.

A: We are using existing surveys, so the cost is only that of answering additional survey questions, not of introducing new surveys.

Q: Most of the data is used by existing companies, not startups. The existing companies know what data they want.

A: It is important to start to collect the data so that we can do the Ex Post Analysis.

Q: Cities are opening data and are also interested in economic analysis. Is there similar analysis for cities?

A: Not aware of what kind of data are gathered in cities. Need to know that first.

European Data Portal Track:

A technical view

During the session "The EDP: A Technical View" there were demos on the overall portal, the data catalog section including the licensing and SPARQL assistants, the Metadata Quality Assurance component, the geo data visualization as well as the harvesting of metadata.

A high-level presentation of the architecture was provided.

During the session the following questions were asked:

  • Question: Why did you pick CKAN and Solr?
    • Answer: Solr comes with CKAN, and CKAN is used by many data publishers across Europe.
  • Question: Was the choice of CKAN out of convenience?
    • Answer: No. The EDP uses DCAT-AP, and the project had to modify the underlying metadata model of CKAN
  • Question: Is there a specification on the mapping between CKAN and DCAT-AP?
    • Answer: Yes, it will be accessible on the portal soon
  • Question: Is DCAT-AP 1.1 being used on EDP?
    • Answer: Yes
  • Question: Did you consider using DKAN?
    • Answer: Yes, but the consortium decided to go with CKAN due to the larger developer community
  • Question: There is going to be a statistical version of DCAT. How will you manage these different DCAT versions?
    • Answer: We have experience from the geodata harvester, which provides GeoDCAT; the EDP's CKAN handles this.
  1. What X is the thing that should be done to publish or reuse PSI?
    • X is the development and usage of a European Data portal by national data publishers and by data re-users
  2. Why does X facilitate the publication or reuse of PSI?
    • the EDP provides easier access to open data across national borders
    • the EDP supports data publishers by providing feedback on data quality
    • the EDP supports location based search via Gazetteer functionality
    • the EDP provides easy summaries of data license texts next to each data set
    • the EDP provides eLearning modules around open data
  3. How can one achieve X and how can you measure or test it?
    • build a platform and measure the usage of the platform
    • measure the quality of the data on the platform
    • build beta version and gather feedback and improve on successive versions
    • provide feedback to the maintaining consortium of the EDP

Best practices discussed:

  • Establish Open Government Portal for data sharing,
  • Enable feedback channels for improving the quality of existing government data,
  • Standards for Geospatial Data

The Role of the Portal

Presenters: Wendy Carrara, Bernhard Krabina, Georg Hittmair

Scribe: Daniel Pop

Wendy chairs the session and introduces the speakers

Presentation 1: Results of Open Data maturity assessment (Wendy)

- Landscaping exercise - assess how mature the different EU Open Data portals are

- various statistics were inferred based on datasets harvested for EDP

- see slides

Presentation 2: Proposal for a European PSI request repository (Georg)

- Based on Article 4 of PSI directive (request for data), PA should provide datasets that are not classified (personal data, security)

- a lot of work ahead of us in this direction

- basic idea: publish the requests for data & include them in data portals

- An example in Austria: a data request for transport commissioners' data to the Ministry of Transport and how the ministry responded; the data was not made available. In the EDP, only the UK publishes transport commissioners data

- see slides

Presentation 3: Internal Data Monitoring (Bernhard)

- What internal processes need to be put in place to facilitate data publishing inside PA bodies; Possible steps:

  • Build a Data Catalogue
  • Evaluate datasets against Data Monitoring Metrics (8 metrics) to determine whether a dataset can be published
  • Engage in Quality improvement

- Best practice: Setup an internal, in addition to the external/public, data portal

- OGD Cockpit - tool for internal data collection and monitoring

- Open Government Model

- see slides

Q&A / Discussions

- Harris: Does OGD Cockpit answer challenges presented by Georg?

- Bernhard: No, because a paper-based process is usually put in place, but the approach may be adapted.

- ODI: Distinguish between FoI requests and dataset requests. FoI requests need an evaluation by the PA plus an answer with a grounded rationale behind any rejection of publishing the information.

- Georg: Austrian legislation states that citizens need to state the reason for an information request, although this constraint is not present in the PSI Directive.

- Herbert: Data on request approach is not the way to go, because there will be too many excuses for not providing the data.

- Daniele Rizzi (EC): There is no right to refuse or to ask for purposes. Requiring a justification for an information request is in contradiction with EU regulations. One could file a complaint with the EC; certainly, this would take some time to be answered.

- Nancy: Greek legislation has been harmonized with PSI directive enforcing Open by Default approach. This is done in 2 steps: 1) Every PA in Greece must provide a list with which datasets they have, which can be open, which not (due to personal information, state security etc) 2) In a second step, they will need to open these datasets

- Dutch publication office: In the Netherlands, the approach is similar to Greece's. We need to talk about data in general: where is the data, and who has a specific dataset? We should talk about data portals, not Open Data portals. These data portals would be searchable repositories of metadata, such as the EDP.

- Oystein (Difi Norway): In Norway, we have a catalogue with datasets, each one having attached Data access rights (open/closed).

- Daniele: Metadata for geographical datasets must be made available even if the data itself is not disclosable. It's important to know what exists.

- Georg: We identified 700 data registers in Austrian legislation.

- Sebastian (OKFN): What do you want from CKAN platform? What technologies do we need to provide to get what you want?

- ???: Evolution of the CKAN platform should consider:

  • Multi-lingual aspect of CKAN is important; meta-data in multiple languages.
  • Include the available extensions (eg those developed for UK open data portal) in the core of CKAN
  • DCAT Core Vocabularies included in CKAN

- Andrea Perego: How can I find what I'm looking for? How to improve searchability of data portals? Include the context of the datasets, such as where a dataset has been used.

- Wendy: For EDP we employed offline MT in order to enable multi-lingual search. Quality of metadata is an issue.

- Nikos: Who is using a dataset, and how, is valuable information that needs to be shared; it is beneficial for both providers and consumers of data. Contextual information encourages the providers to provide the data.

- Makx: Federated portals (e.g. the EDP) not only collect data; they are building new information systems, collecting data, enriching it, connecting it between different sources. Thus, portals can and should provide services to improve harvested data, supporting data providers in their efforts.

In summary:

- Open by default vs Data on request

- Shift the discussion towards data portals instead of Open Data portals, which will allow cataloguing all the data (open or closed) available.

- Provide contextual information on datasets; it's beneficial for both providers and consumers.

Implementing interoperability for Data Portals in Europe

Nikos Loutas, PwC, Makx Dekkers, AMI Consult

Nikos: DCAT-AP was revised between March and September 2016. Reviewers were invited. It is now stable as DCAT-AP 1.1


  • Incorporating requests for changes coming from implementers. The focus was on improving DCAT-AP 1.0; no groundbreaking changes
  • Major changes in classification and media types. A checksum now takes care of integrity
  • Relationships between Datasets, how to model timeseries
  • How to reference datasets
  • Mapping national code lists to DCAT controlled vocabularies
  • Clarifications on lineage and provenance
  • Declaration of openness
  • Account for no-file distributions, like data exposed via an API, e.g. SPARQL endpoints
  • New class Checksum

The working group is now actively seeking/working on implementation guidelines. Outcomes of this session will be input into our work on guidelines.

Florian Kirstein (Fraunhofer FOKUS) presents the role of DCAT-AP on the European Data Portal:

Challenges were dealing with the flat JSON data structure of CKAN, storing DCAT-AP data structures in CKAN and Virtuoso, and presenting that in the interface. Where possible, every DCAT-AP field has been mapped to an existing, semantically equivalent CKAN field. 25 extra fields have been created for CKAN. Cardinalities have been implemented as JSON structures such as lists and dictionaries. JSON-LD was investigated first, but in the end the project decided to go with plain JSON.
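The flattening described above can be sketched roughly as follows. This is an illustrative assumption, not the EDP's actual mapping: the DCAT-AP property names are real, but the choice of which go to CKAN core fields and which go to `extras` is hypothetical here.

```python
# Hypothetical sketch: flattening a simplified DCAT-AP dataset description
# into CKAN's flat JSON model. The split between core fields and 'extras'
# is illustrative, not the EDP's actual mapping.

def dcat_to_ckan(dcat: dict) -> dict:
    """Map DCAT-AP properties to semantically equivalent CKAN core fields
    where one exists; put the remainder into CKAN 'extras'."""
    ckan = {
        "title": dcat.get("dct:title"),
        "notes": dcat.get("dct:description"),  # CKAN calls the description 'notes'
        "tags": [{"name": kw} for kw in dcat.get("dcat:keyword", [])],
    }
    # Properties without a CKAN equivalent become extras (key/value strings)
    extras = []
    for prop in ("dct:accrualPeriodicity", "dct:provenance", "adms:versionNotes"):
        if prop in dcat:
            extras.append({"key": prop, "value": str(dcat[prop])})
    ckan["extras"] = extras
    return ckan

example = {
    "dct:title": "Air quality measurements",
    "dct:description": "Hourly NO2 readings",
    "dcat:keyword": ["environment", "air"],
    "dct:accrualPeriodicity": "hourly",
}
print(dcat_to_ckan(example)["extras"])
```

The `extras` mechanism is the standard CKAN way to carry fields its core schema lacks, which is why a flat model can still round-trip richer DCAT-AP descriptions.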

Next steps are to store DCAT-AP sources directly in Virtuoso, consider automatic resolution of nested resources, improve the validation of input data and a better presentation of the data in the frontend.

Three issues were discovered during this process:

1. Difficulty in expressing where the data comes from. The Provenance vocabulary might be a solution, storing provenance and increasing accountability.

2. License and usage information. The German metadata standard now has two fields: License and Terms of Use. One task was to define what an open license actually means.

3. From a user point of view, faceted search is neat, but people like to search by keyword. The German administration uses a lot of abbreviations, abbreviations the regular user is not aware of. A proposed solution would be a centralised term service to look up tags and keywords.

Discussion with the audience on the topic of keywords.

Martín Alvarez: The mentioned taxonomy: Can't it be EUROVOC?

Response from Agnieszka Zajac (EU Publications Office): Eurovoc is already used in the European Data Portal. Terms are translated into 24 languages.

Josef Azzopardi (Malta): Provenance cannot be at the DCAT level. It has to come from the national administration, probably from the law. The issue here is a controlled vocabulary for legal entities.

Giorgia Lodi (AgID Italy): Create a subset of controlled vocabularies. Issues are licenses and types of licenses.

Nikos: It's up to the member state to define controlled vocabularies. The problem is, many implemented an existing standard "kind of", but what does that mean?

Matthias Palmér (Sweden): Experience on the Swedish portal identified many places where we had to make specific decisions because the standard was not specific enough. It's worrying to leave too much flexibility in the standard.

Makx: There is a tension between what to allow to be specified locally, what to define at a central level, and what should later be standardised back into DCAT-AP. DCAT-AP was developed primarily to facilitate cross-portal searching.

Martín Alvarez: Example of Spain: Created an application profile which was aligned with the profile of DCAT-AP 1.0

Peter Winstanley: The use of CKAN is problematic. People think that if they implement CKAN they will automatically be compatible with EU standards. We will end up confusing people if we keep using "CKAN" to mean something that gives interoperability for free. Releasing the code of the European Data Portal would help people to use the same data structures.

Makx: Internally CKAN is speaking CKAN, but externally the European Data Portal speaks DCAT.

  • Nikos presents the clustered issues that people from the audience wrote on moderation cards

Nikos: one interesting question is also how to deal with duplicated data sets.

Open Knowledge also extended the original CKAN data model to support DCAT-AP. It's available at

Voice from the audience: In fact CKAN does not force you to use any particular data model. It is meant to be extended.

Makx: The guidelines we are developing for DCAT-AP should also include a section on tools that are useful to bridge

NN: Is there a webservice to query?

Agnieszka: Eurovoc is now electronically available in different formats from the EU Publications Office. We can contribute to make this available according to needs.

Johann: Keep an eye on the Swiss implementation. They use a database called TERMDAT. If a keyword is put into the CKAN tags field, it goes to a committee to be considered for inclusion in the database.

Matthias: What is the roadmap for changes to the European Data Portal? We need sandboxes for testing. The concern is frequent changes to the tools which support our portals.

Makx: For ISA vocabularies, including DCAT-AP, there is a scheduled cycle. Main changes will only occur every two years. The review cycle includes public consultation, so people get an early idea of what is coming. The current revision, DCAT-AP 1.1, contains only minor changes.

Matthias: Joinup is "hostile" in terms of collaboration. Please consider using something else.

Comment from representative of Swedish Data Portal: Is it possible to make changes in the CKAN core so everybody can profit from that?

Sebastian Moleski (Open Knowledge): CKAN is a global product; of course we can and will account for global use cases. There is a barcamp on Thursday afternoon for CKAN users to bring up changes that they want.

Hans Overbeek (Dutch publication office): What is missing is a way to express whether data is official data: links to the law where it is stated that this particular piece of data has to be released. Another aspect is to link back to the institution which has the mandate for the publication.

Nikos: There have been a couple of requests to have a field pointing to the legally endowed authority in charge.

Location Track

Athina - At

Arnulf - Ar

"If your data is somewhere in a space it is probably linked to something else in the same space"

What X is the thing that should be done to publish or reuse PSI?

Why does X facilitate the publication or reuse of PSI?

How can one achieve X and how can you measure or test it?

At starts

Michael Gordon from Ordnance Survey steps in for someone; he will talk later.

OGC - standards consortium, member-driven, 520+ members from industry, academia, government

e.g. INSPIRE based on OGC / ISO standards

WMS is an OGC standard (used in EDP presented this morning)


start from different angle - geospatial thinkers are all about coordinates!!!

Some people use it in their work - it is a technical thing using coordinates. Many don't know they are using spatial info!

You could use geo information if you know HOW to use it. We need to understand how to be more flexible and not say coordinates are the centre of the world. How can spatial data be made available for wider use?

- points - e.g. emission data: exhaust, air quality, noise, water, different environmental measures. Industry has to be registered; the German portal provides this as a list - but how does this relate to people, and what is the effect on other locations (not just the HQ of a business)? It includes a map and location search (by postcode) - a postcode is geographic: it has an extent, a geographic area

an example shows a toxic pollutant where a search by postcode gives no result, but on the map there is nearby water pollution - it is spatially connected

point for location is based on address

how to go from one point to an area?

- boundary data

topology is the relationship between areas, e.g. a river (upper and lower part) used to track pollutant movement; this cannot be put in context by a table

- geonames


search for Bonn - all the Bonns in the world (and Bonnes in France). Returns rough coordinates - fine for a big city

how does your data relate to space?

is it an understanding problem?

is it a licensing problem? e.g. cost to use location data

link lists with addresses - good organising criteria - so can map this data

the problem is that address data changes - e.g. a road is no longer there

it is a weak link; it can break

makes comparison over time more difficult due to change

Jules - at the country level, a country can change its name - more indicators are needed to locate a place.

Ar - fuzziness of data. e.g. a surveyor thinks at mm level!! precision is dependent upon what you are doing and what problem you are looking at

technical parameters

WKT - Well Known Text - the simplest form in which you can encode geo info so others can understand it.

but it needs a Coordinate Reference System to be understood - this adds complexity, such as lat/long in degrees and minutes (decimal is more useful, like GPS). There are 2700+ coordinate systems. Not knowing the coordinate system makes the data useless

e.g. point(x y)

linestring (x y, x y, x y, x y)
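The WKT forms above can be produced with a few lines of code. A minimal sketch, with made-up coordinates; the key point from the talk is repeated in the comments: WKT alone is useless without the coordinate reference system travelling alongside it.

```python
# Minimal sketch of producing WKT (Well Known Text) geometries from
# coordinate tuples. WKT alone is not enough: the coordinate reference
# system (CRS) must accompany it (e.g. as an EPSG code in the metadata),
# and axis order (x/y vs lat/long) is a classic source of confusion.

def wkt_point(x, y) -> str:
    return f"POINT ({x} {y})"

def wkt_linestring(coords) -> str:
    return "LINESTRING (" + ", ".join(f"{x} {y}" for x, y in coords) + ")"

# EPSG:4326 is WGS84 in decimal degrees (the CRS GPS users know)
crs = "EPSG:4326"
print(crs, wkt_point(7.0982, 50.7374))              # roughly Bonn
print(crs, wkt_linestring([(0, 0), (1, 1), (2, 1)]))
```

A polygon is the same idea with the coordinate list closing back on its first point, optionally with extra rings for holes.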

x and y in maths is different - in coordinates it is the opposite way round (relates to old navigation systems)

x and y is different on a screen - 0 is top left

causes a lot of problems!!!

metadata is IMPORTANT - describe the coordinate system

polygon describes an area (lots of lines that join back up) - can cope with holes in polygons and islands

WKB - WKT in binary form - no longer human readable, but software can read it

Google Maps uses KML - coordinates plus cartographic information, e.g. colour the point red or text associated with point at certain font size

OGC WMS - presents the information visually (the standard is publicly available) - you get a map image returned, e.g. 5000 points aggregated and made into an image dynamically, generated at the server and returned to the web browser

Jules - WMS is it a web service that gives you layers?

Ar - just a flat image, layers are flattened down into 1 image. Map image is just "dumb" not the underlying data

Can use WMS in lots of different software (ArcView, QGIS)

points go into a database - map server software generates the map image using the WMS standard; the output image can be SVG, JPEG, PNG, etc.

WMS very flexible to manipulate the image

e.g. a postcode can link an address to an area (the polygon of the postcode area); you can then colour the map according to what you are trying to do.
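A WMS request of the kind described is just an HTTP URL. A sketch under stated assumptions: the endpoint and layer name below are placeholders, while the parameter names (SERVICE, REQUEST, LAYERS, CRS, BBOX, WIDTH, HEIGHT, FORMAT) follow the WMS 1.3.0 GetMap operation.

```python
# Hedged sketch: constructing an OGC WMS 1.3.0 GetMap request URL.
# The endpoint and layer name are placeholders; the server would render
# the layer and return a flat image (no underlying data).

from urllib.parse import urlencode

def getmap_url(endpoint, layers, bbox, size=(800, 600),
               crs="EPSG:4326", fmt="image/png"):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layers,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("https://example.org/wms", "pollution_points",
                 (50.6, 7.0, 50.8, 7.2))
print(url)
```

Any WMS client (QGIS, Mapbender, a browser) issues essentially this request; that shared request shape is what makes the layers portable across software.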

Phil Archer - how is spatial community dealing with map accessibility? e.g. access to blind people

Ar - very difficult - currently no OGC workgroup. Maps do not make sense to the blind when presented as an image, BUT there are other ways to return spatial data, e.g. using an address NEAR TO another place, which provides context

bringing together tabular data and map data - a table joining standard; a unique ID is needed, e.g. postcode or URI. Currently this requires data transformation to bring tabular data together with map data (boundary data)

At - will share presentation

Table Joining Service - discovery, access and join operations

used Eurostat health stats and EuroBoundary maps as an example to show how high burnout syndrome is in the Netherlands

Demonstrates that maps are just areas with no content; putting them together with tabular data allows you to show statistics.

Output can be in various formats - e.g. RDF, GeoJSON, XML, GeoPackage

table joining is like linked data (current work explored between OGC and W3C)
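The core of such a table join can be sketched in a few lines. Everything here is invented for illustration: the region codes, the statistic values, and the WKT placeholders; the point is only the mechanism of joining statistics to boundary geometries via a shared unique ID.

```python
# Illustrative sketch of the idea behind a Table Joining Service:
# join statistical data to boundary geometries via a shared unique ID.
# Codes, values, and WKT strings are made up for illustration.

stats = {"NL": {"burnout_rate": 17.0}, "DE": {"burnout_rate": 9.0}}
boundaries = {               # WKT placeholders, not real borders
    "NL": "POLYGON ((...))",
    "DE": "POLYGON ((...))",
}

# Inner join: keep only codes present in both tables
joined = [
    {"code": code, "geometry": boundaries[code], **values}
    for code, values in stats.items()
    if code in boundaries
]
for row in joined:
    print(row["code"], row["burnout_rate"])
```

With the geometry attached, each statistic can be drawn as a coloured area (a choropleth map) instead of a bare table row.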

OGC and W3C cooperation

Phil and At met at the INSPIRE conference; lots of talk about GeoSPARQL and other linked data standards. March 2014 in London - a workshop about linked data and geography - the outcome was a joint working group: Spatial Data on the Web

Phil Archer - at end of workshop, taken 9 months to get into operation. Denise McKenzie involved.

The geo world has too many standards - so the work is about joining them up.

Which one should you use, and when is it appropriate?

The Web starts from a URI

Geo starts from a point/line/polygon

How do they come together?

THEN it was identified that time is also important - an OWL ontology for time was started

one for sensors

coverages (gridded datasets - an Excel spreadsheet on a map!)

difficult to do, but pushing forward

linking the 2 worlds together

OGC has a GeoSPARQL standard

SPARQL is good for querying data, like SQL (search, use wildcards, etc.)

GeoSPARQL - a query language that also includes, for example, a radius around a point (e.g. everything within 10 metres of this point) or the distance between two countries - one of the most comprehensive query languages that we have
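A radius query of the kind mentioned looks roughly like this. A sketch only: it builds the query string using the standard GeoSPARQL prefixes and the `geof:distance` filter function; actually executing it requires a GeoSPARQL-enabled triple store, and the point coordinates are made up.

```python
# Sketch: building a GeoSPARQL "everything within N metres of a point"
# query. Prefixes and geof:distance come from the OGC GeoSPARQL standard;
# running the query needs a GeoSPARQL-capable triple store.

def within_radius_query(point_wkt, metres):
    return f"""
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX uom:  <http://www.opengis.net/def/uom/OGC/1.0/>

SELECT ?feature WHERE {{
  ?feature geo:hasGeometry/geo:asWKT ?wkt .
  FILTER (geof:distance(?wkt, "{point_wkt}"^^geo:wktLiteral, uom:metre) < {metres})
}}"""

print(within_radius_query("POINT (7.0982 50.7374)", 10))
```

The same pattern extends to the other spatial relations (within, intersects, touches), which is what makes GeoSPARQL so comprehensive compared with plain SPARQL.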

Pekka (Finland) - on utilising standards: do standards include time, e.g. regions change all the time?

Ar - there is always a time component, but it can be difficult where the geography changes - support is just starting to appear, e.g. CityGML for 3D data

At - activities going on in the OGC - time series, triggered by the meteorological community as they have a major need for it

Joseph - to add more complexity: what about locations that move?

Ar - aeroplanes, other vehicles - locations that move - direction, speed, etc. need to be known

group work

split into 4 groups

  • how does your data link to geospatial?
  • what new insights?
  • what do we need to improve soft linking through geonames?
  • how do you prefer to access geodata?

software engineers need to be guided by people who understand the data

Phil Archer - geonames, cited specifically, is a community project, good, etc. BUT the EU is now funding a rival project...! As it is community driven, it is deemed to lack "authority"

Ar - the boss of GGIM said "I have money to spend on improving geo reference data" - but OSM and geonames are too small to have large amounts of money thrown at them

Fabian - community centre in Hamburg, platform for urban gardening project

Andras - Hungary - open university data

Adrien - TransforMap - e.g. community mapping, comparing with other data - how to bring them together (esp. grassroots groups)

Ralf - software developer global ecological network - attempt to visualise data

Besjana - NGO working with open data using maps on platforms (visualisation)

Joseph from Malta - public administration responsible for PSI and INSPIRE directives - location registers are important and can be used for innovation and growth

currently doesn't link - the address is just text - the map helps with quality checking the data (e.g. spotting duplicates)

Hamburg starts with the map and then collects the data - the other way round

Besjana - asking people to put information on the map, e.g. photo data capture

Adrien - visualising data from different maps - can get duplicate data - but sometimes insight needs to reflect quantity

geonames can be different in different languages although talking about the same place

buildings in Hungary are represented by points linked to geonames

how to access geodata?

on a map to visualise

machine readable?

it depends how data is used.

granularity is important (e.g. at hospital-bed level)

point level is the easiest to use

outlining an area is also easy to capture on a map - but how to link it?

capture a footpath

a point is the minimum info

presentation back from groups

Adam from ODI:

linked data by coordinates, addresses, services

  • use of identified space (by coordinates) needs to be more granular
  • use geonames better - point-to-polygon mapping is needed; a point needs to be seen either as part of a network or of a surrounding area (i.e. topology)

Group 3 Use case:

  • in the UK, polling stations exist but not in machine-readable format
  • the fire department in the Netherlands didn't have the optimum location to find and access a building, e.g. for the safety of children
  • notification delivery based on location

integration is difficult through different APIs, or the data is not available in the first place

other things to pick up...

Query by Adam at ODI:

Is BIM widely followed across Europe? 3D models

Linked data:

e.g. steps to bring Linked Data together with geo data

  • What X is the thing that should be done to publish or reuse PSI?
  • Why does X facilitate the publication or reuse of PSI?
  • How can one achieve X and how can you measure or test it?

Granularity is important for insight and sharing data - e.g. coordinate systems, with coordinates leading into topology

Location Track Session 2

Summary of morning session for anyone who didn't attend

Examples are around community gathered data.

Ar -

Accessing data via is most common / comprehensive site for finding places

Name is stored in multiple languages / alphabets

You get the areas that a place falls within, any administrative boundaries; lat/long returned in ETRS (the European CRS)

bringing a name together with coordinates - the most important starting point to link data that uses a name but does not have coordinates


like a Wikipedia for maps (sort of) - the community collects the data (crowd-sourced); you can download the map as geographic data

Jules - Land Portal

a 5-year-old project from a UN organisation (FAO) within the ILC (International Land Coalition) - the goal was to unify and share land-related data, mostly agriculture at the start

out of UN now - but continues.

harvest and collect data from NGOs, UN, private, public

statistical data in the landbook e.g. population, life expectancy

bibliographical data (research papers, books, peer reviewed, NGO, etc.)

most organisations give you the name of the country with the data - not yet geodata

attempt to get to high level of linked data

load the data into linked data database

Look at the library on the website

linked data is completely open for reuse - can be queried and get json, rdf, etc. back

relies on metadata - try to fit the data into known/common ontologies

standard FAO agriculture data description

data is currently not geodata

Ar - what is your geographic reference?

Jules - the data was originally modelled on the FAO publication list of countries; when matching data, 90% is done at country level - some regional, but we hit problems around how people view a region. Used an existing vocabulary that was already linked data - all done via word matching

Phil Archer - EU publications office produces a URI with each country and name in different languages

The plan is to connect to richer vocabularies / ontologies - to match more names / regions, then bind that more strongly to geographical taxonomies / ontologies - this allows you to look at the data in a different way, to compare with nearby information; as an example, you can leverage information based on proximity and not just names

Adrien - you take data from different places - how do you deal with licensing?

Jules - because it is not a new project, there are existing agreements in place. They don't always provision the full dataset; they fill in metadata and link to it. Some data is provisioned directly on the portal - especially community engagement, where people enter it themselves. More work is needed to document the sources and agreements. We often found that the gaps are not in the technology but in the tool you use as an end user and how people interact with it, depending on who that person is - currently this is better suited to users at a higher technical level than to a casual user

Ar - what I take from this is a location ontology based on persistent names of countries - so the data can be linked by that; it is like a URI

Phil Archer - geonames is the de facto tool everyone uses - although there are other options

Ar quick demo of metadata using EDP

e.g. a "WMS Arnulf" search - a lot of metadata in the 2 datasets returned, very comprehensive; it has info about the CRS, which is very important

the info that Arnulf entered was done by filling in all the possible fields AND keeping it up to date

demonstrates WMS with different layers (Mapbender) and the importance of the CRS, showing how it affects the map; the calculation of coordinates is determined by the OGC standard - the software implements the calculation - demonstrates how standards can be used

demo of commercial application - Google

e.g. tharpaling

Google recognises it is a place and returns information about it in Jakarta and a map. When searching Google Maps for "tharpaling germany", it shows you the temple retreat in Germany; a normal Google search brings back the geonames result as the top result (not a map).

So Google Search and Google maps have come together (unlike at the start)

It has combined geoinformation and location information with normal information so you no longer see the difference. E.g. a search for Berlin shows a map with a dotted red line for the city boundary, BUT you cannot download that outline; it is within the Google system, and Google pays licensing for some data so cannot give it away. As a user you can view it but not print it off and reuse it

OpenStreetMap allows you to download data for free

Phil Archer - in the working group, a WxS (e.g. WFS or WMS) service is seen as a good way to get information. There is talk about standard pages that make them more discoverable in the likes of Google (Ed Parsons from Google is on the working group)

Michael Gordon from OS:

OS is the national mapping agency of Great Britain

Product Manager for APIs

Still owned by government, but there is an interest in using open data to push forward innovation - OS OpenData is available for download in standard geospatial data formats: roads, rivers, names of places

data is used in other places or to verify data in other sources

Releasing services around both open and premium (paid-for) data. Example of working with Openreach (who install fibre optic cable): they were getting lots of fines for digging up the wrong place. Use of a service displaying a map showing the right place (e.g. showing that a cable is under a pavement and not a road) gave huge savings

Addressing - a service called OS Places - approx. 40 million addresses, with a description of each address and what that place is, plus coordinates of where it is

APIs can be integrated into other tools

postcode search example

DVLA example - used to register your car. They need to know who has bought, sold or registered a car (and where they live), especially if a number plate is used to report a car as stolen. In the background, the DVLA connects to other government systems incl. Ordnance Survey. Creates greater efficiencies and fosters innovation. Makes it easier to consume geodata; uses standards

Geovation Hub - helping to spread the knowledge of how to work with location data, not just Ordnance Survey data. The future is allowing more machine-readable data and linking up the data (like the table joining service) - making geodata easier to use

Q. Premium data what is that?

A. Some data is licensed to recover costs. Higher-granularity data. Depends on what you are using the data for.

Ar - OS is one of the reasons we have OpenStreetMap - when OS data wasn't available for free, people used GPS to capture the data themselves. It started 9 years ago and captured lots - which led to OS releasing some data for free.

Why is OS selling it? The cost recovery is passed on. A political issue. The Netherlands and Denmark have published the data for free.

When you select a street name from a box you can't enter a wrong one - using the data via APIs leads to greater data quality when entering data into systems.

Data is more important than the software.

An API - an Application Programming Interface - is offered to access data, e.g. by LinkedIn. Better to have a SPARQL endpoint!! A URI is the best thing you can get; use SPARQL endpoints

How to fund / finance creating map data:

  • paying taxes allows data to be released
  • pay for documenting the change, e.g. when building a highway: cost to build vs cost to survey. The contractor could supply the data to mapping agencies

Germany had this requirement but removed it - now white spots are appearing in the maps

Q. Can you publish data in a way that is free for non-commercial use but paid for commercial use?

A. difficult to distinguish activity as simply as whether it is commercial or non-commercial

Mention of what3words

Ar - it's a cool thing. Easier to memorise 3 words than a coordinate. Allows people to remember more

Combine the reusability of what3words - the gazetteer can be geocoded. The business model is about scaling - freemium
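The what3words idea can be illustrated with a toy version (their real algorithm and word list are proprietary; this is not it): divide the world into grid cells and derive a deterministic three-word label for each cell. A real system needs a word list of tens of thousands of words for the labels to be unique; the eight-word list below is just for the sketch.

```python
# Toy illustration of the what3words idea, not their actual algorithm:
# each grid cell gets a deterministic three-word name from a word list.
WORDS = ["apple", "brick", "cloud", "daisy", "ember", "frost", "grape", "house"]

def cell_to_words(lat: float, lon: float, cell_deg: float = 0.001) -> str:
    """Map a lat/lon to a 3-word label for its grid cell."""
    row = int((lat + 90) / cell_deg)
    col = int((lon + 180) / cell_deg)
    index = row * 360_000 + col          # number the cells row by row
    n = len(WORDS)
    w1, w2, w3 = index % n, (index // n) % n, (index // n // n) % n
    return f"{WORDS[w1]}.{WORDS[w2]}.{WORDS[w3]}"

# Same cell -> same words; easier to remember than raw coordinates.
print(cell_to_words(49.6117, 6.1319))   # a point in Luxembourg City
```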

potential for education about licensing - data is complicated in terms of licensing

Really important to understand how it is licensed and that it is all compatible

The INSPIRE conference 3 years ago in Florence had 2 workshops on licensing. Athena to add a link to the list

If license is not clear - do not use

If any further follow up contact Phil, Athena or Arnulf

What is the role of standards for Smart Cities and how should they be created?

[Muriel] Intro on current best practices on Formats, Quality, Documentation, Organisation. How to become a smart city? Luxembourg creates a number of services/apps. International environment. Can we reuse apps?

[Slim] Classic app for smart cities: where should someone or a business settle in Luxembourg? Based on census data: demography, houses, etc. 116 datasets for 106 cities. The list of doctors and dentists was only available as a scanned image. The list of schools is available as PDF. Train stations: one page for each station.

[Muriel] For smart living we need a lot of services and apps. How to avoid reinventing the wheel with apps and services?

[Pekka, Forum Virium Helsinki] Lack of interoperability: formats and APIs vary between cities. Therefore we work on CitySDK: 8 cities, 3 domains: Participation, Mobility, Tourism. Example: Smart Participation API; report what needs to be fixed.

Open311 is used for civic issue tracking; W3C POI is also used.

Developer community: Helsinki Loves Developers

Open & Agile Smart Cities network: 75 cities from 15 countries

[Hanna, Forum Virium Helsinki] How to get businesses involved: 6aika, the Six City Strategy for boosting data-driven business. The aim is to create a single market, to level up B2B requirements. We created an Open Data Business panel, which 120 companies joined. Companies see themselves in many roles: Collector, Aggregator, Enabler. We use to collect examples.


[Tom] catalogs built using harvesting hide many aspects of the data; they cannot be an optimal discovery tool. "Your website is your API." Suggestion: publishers maintain soft descriptions that can be crawled

[Martin] 100+ different catalogs in Spain, including cities, it works well

[Irina] each publisher has to maintain its web site, know the vocabulary, etc., so there's a lot more to that than just putting data on the web

[Yannis] Why go to these very generic standards? In this way we can collect all the standards of the world. Specific standards: ISO 37120 (100 indicators for smart cities), the ITU Smart Sustainable Cities Focus Group (FG-SSC), BSI standards, NIST, etc. The content is more important than the format.

[Hanna] How do I open budget/contracting data? Such down-to-earth problems need to be solved.

[Yannis] Don't forget security and other issues, but focus on data. Or you invite an elephant to the table.

[Peter] Local government business model exists. Links between vocabularies are needed. How to solve this?

[Muriel] Google provides free services for US cities: provide data in this format and you get services.

[Pekka] Can we be faster than the Google model?

[Mikael] One needs to document the data. APIs alone are not the answer. An API is point-to-point integration and does not scale well. Semantic interoperability is needed.

[Matthias] Convincing developers to use linked data is hard. What would be important: referenced data; a way to refer to existing data, with a stable id.

[Muriel] Thinking about the description, should it be on global, national or local level?

[Mikael] There is the Finnish ontology service, but has many problems.

[Andras] W3C Linked Data Platform could add the data description facility to APIs, but probably hard to use.

[Pekka] That's the way to go, but hard to convince developers.

[Yannis] Smart City Standards: Samos will organize a workshop on this next June.

An Intelligent Fire Risk Monitor Based On Linked Open Data, Nicky van Oorschot,

  • The presenter (Nicky van Oorschot) raised the question of whether Linked Data could provide an approach for Emergency Management
  • Nicky has been involved in training with Bart van Leeuwen (15 years of firefighting experience) and has learned that by linking / integrating data from different sources, different questions can be answered
  • In the fire department they study different data, e.g. could demographic data provide insights into fire risk when combined with forensic data
  • The work presented here was an ODINE winner of the 2nd round
  • The application calculates residential fire risks in neighborhoods based on Open Data
  • Evidence - since 2009 the decline in the number of residential fires has stopped. The department claims that this is correlated with general education
  • Currently calculations are based on historical data
  • However, the RESC.Info Insight service works differently – it analyses demographic data (age of the persons, kind of families); based on this, “we can provide the right people the right education”
  • Different sources are used, including the building register (“Four out of seven datasets used in the Proof of Concept are from 2013 or older.”)
  • Nicky: “After the proof of concept was developed, we had a verification session with the Fire Department of Rotterdam; we asked them to give a risk evaluation from their side” (see Section 3: Proof of Concept Evaluation)
  • The RESC.Info Insight service produced a map that is comparable with theirs
  • Nicky: What is the value of the service? “We can provide a list of households that need a fire detector, need education”
  • Nicky: “We not only provide a tool, we also have a moral obligation to inform people”
  • Challenges with the RESC.Info Insight service:
    • Data availability for NL is OK, but how about other countries?
    • One needs knowledge to analyze the data
    • Main challenge - location data
    • Alignment of data, cross-referencing
  • The prototype evolved into a product
  • Nicky: “We need to tackle challenges, we are open for collaboration”

[Question: Tom Heath] Can this be applied to broader problems?

  • Nicky: We had the knowledge in the fire domain, but yes, it can be applied

[Question: Makx] So you have a predictive tool?

  • Nicky: Yes. We have this neighborhood (demographic data) now; in 10 years we will know if this has changed. Yes, we can include other factors, e.g. migrants

[Question: Noel] How did you convince the Fire Department to use this?

  • Nicky: “I hope they will see the value; we still have to convince them. In addition, in the Netherlands we convinced several departments to use Twitter to analyze what is happening.”

The Connecting Europe Facility Programme, Daniele Rizzi, European Commission

  • Daniele Rizzi provided a general overview so that the participants have an idea of which projects can be financed by this programme
  • Digital Service Infrastructure (DSI): from 2014 a component added to the telecommunications part of the programme
  • Procurement was a topic in 2015
  • The current topic until 2017 is the Portal, see slide 7
  • Daniele Rizzi talked about Public Open Data (Legal framework is the PSI directive)
  • Objectives from the CEF Call
    • The idea is cross-border, not necessarily all 28 member states
    • The legal issues are important, so users know what they are allowed to do with the data
    • The EC defined certain priorities
    • Point 4 is very important: this is not a programme to support research/innovation; rather the EC needs mature technologies. It should not be a pilot; sustainability is important - in production, maintained for a couple of years
    • Finally, a lot of activities supporting open data
    • Minimum criteria: cross-border, at least 2 countries
    • CEF programme / eligibility 2
    • Take into account that you need a signature from your authorities, so start the preparation on time
  • INEA will manage the call operationally
  • Data from the EDP can be addressed

Questions

  • Overhead: 7%
  • How many projects? The call says up to 0.5 mil euro; we want to have enough proposals

Reaching Consensus

Nancy & Peter

Thursday Plenary Session

Mobile Positioning And Statistical Derivatives – The Way Forward?

Aleš Veršič, Ministry of Public Administration, Slovenia, paper by Igor Kuzma

Trying to analyse the data from the mobile operators in a statistical way; the entire statistical dataset is based on location

the data is quite confidential because it contains a lot of personal data

we can recognise people’s everyday habits by following their movement

Statistical dissemination (slide 4)

everybody can access the tool

data can be exported to different formats

these maps show data from the municipalities, e.g. population of the municipalities by year

new challenge: the story of big data - like mobile data

they try to combine population stats and data from the mobile operators

Mobile operators (slide 6)

  • 4 mobile network operators involved
  • 3 service providers
  • 3 resellers

Mobile operators are primary data providers

Outcome (see slide 7)

  • 6 months trial
  • mobile operator data to stat office
  • no personal data should be transferred outside

1 operator provided data

Slide 8

case in this map - people from the north east corner of Slovenia

key question - is the data public or property of the company ?

public services delivered by a private company - challenging

Slide 9

visualisation, Ljubljana

location of population using mobile phone location data

night time

2000 people registered

high peak - students in bars??

lot of people in the center

slide 10

same visualisation, Ljubljana, but daytime

people going to:

  • offices
  • companies
  • schools

Slide 11 graph

hourly distribution

peak is from 11 to 15

Slide 12 video

mobile activity during one day

Slide 13

we organised a statistical day in Slovenia

Slide 14 conclusions, check the bullets on the slide

a lot of potential, but a lack of official international common strategies; big data case studies have been produced

key question, how to provide public services by organisations which are not public institutions (telcos?) (scribe didn’t totally get the reasoning)

Q: in your visualisation mobile phone usage goes up during the day and down during the night - so what? What is the interesting stuff you found from the data? It could be any data showing activity of the citizens

A: Interesting findings were:

  • how people are moving in the city, from where to where
  • in emergency cases - how many people are at that location at that time

Core Public Service Vocabulary - The Italian Application Profile

Gabriele Ciasullo, Giorgia Lodi, Antonio Rotundo, AgID

Presented by Giorgia Lodi

a project on cataloging public services is going on in Italy

classification work that we are going to continue at Digital Italy

at the moment, it's a mess. Information on the services that the administrations are providing at the moment

there is no classification; everyone has their own classification

  • different names
  • there is no standard definition of service characteristics

Consequence: difficult to meet semantic and technical interoperability requirements (see slide 2)


  • general, common definition of interoperability profiles for data and metadata
  • definition of the Core Public Service Vocabulary - Italian Application Profile (CPSV-ITAP)

Potential benefits

  • harmonisation in data representation
  • facilitation of the exchange of data on public services
  • basis for the development of Italian public service catalog

authentic source for the information (see slide 3)

Public service definition (see slide 4)

Italian public services catalog

started to construct the catalog

3 main advantages

  1. discovery ("I don't even know that there are services online")
  2. comprehensive platform for sharing best practices
  3. monitor digitalisation

(see slide 5)

Methodology (chart, slide 6)

generic use cases

documents published by the government

some KPIs considered from the documents

EU commission, anything that could be used

-> classes and properties

top down

-> vocabularies

full of free text properties

meaningless, no meaningful searches possible

looking at DCAT etc.

Data model, CPSV-ITAP

(chart, slide 7):

purple boxes are classes they added

properties in italic

rules that are basis of the services

wanted to connect and manage authentication standard

they also used possibility to localise the services
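For illustration, the kind of record such a profile describes might look like this in Turtle. The cpsv:PublicService class and the Dublin Core properties are real vocabulary terms, but the service instance, its URI and the exact property selection are invented here and not taken from CPSV-ITAP itself:

```python
# Hedged sketch of a CPSV-style service description, built as a Turtle string.
# The vocabulary terms exist; the instance data is invented for illustration.
service_turtle = """\
@prefix cpsv: <http://purl.org/vocab/cpsv#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<http://example.org/service/birth-registration>
    a cpsv:PublicService ;
    dct:title "Birth registration"@en ;
    dct:description "Register a newborn with the municipality."@en ;
    dct:language <http://publications.europa.eu/resource/authority/language/ITA> .
"""
print(service_turtle)
```

Because every catalogue entry shares the same classes and properties, a federated catalogue can query all of them uniformly, which is the interoperability benefit the profile aims at.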

Controlled vocabularies, slide 8

DCAT-AP theme

they don't agree with some of the themes but it is important to be compliant


UK ESD toolkit

some national peculiarities

OWL ontology, slide 9

not yet finished

this is a screen shot

labels and comments will be in English

national open data portal

also data representation

data model should be there also


Recommendations, slide 10

  • use of controlled vocabularies, article 5, no free text
  • Use of core vocabularies application profiles for both metadata and transversal data

Future work, slide 11

  • include the profile in the upcoming revision of the Italian technical guidelines on the enhancement of PSI
  • use the profile for the implementation of the Italian catalogue of public services
  • use the profile for the definition of the national data architecture (UK is doing something very similar)
  • not only open data
  • investigate how to take advantage of a common interoperability profile for service interactions within public administrations finding new paradigm

Q: Peter How would you get the public admin to enter data?

A: still under construction

we are going to ask them…

my dream:

to have catalogs to be interoperable

federation without asking them to upload their metadata

system identifiers

big issue we have to deal with

we have something for open data but we need also for this

Q: you want to structure the data and use the same vocabularies, but nowadays you can use free text search also

A: maybe in the future, but now it's a mess

why put free text, e.g. for opening hours, if it could be structured data

when the technologies get more mature, we might use them

we hope that there would be more flexibility in the future
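The opening-hours example makes the point concrete. The same information as free text and as structured data; the field names here are illustrative, not from CPSV-ITAP or any standard:

```python
# Free text vs structured data: the same opening hours, twice.
free_text = "Open Mon-Fri, 9 to 5, closed on public holidays"

structured = [
    {"day": d, "opens": "09:00", "closes": "17:00"}
    for d in ["Mo", "Tu", "We", "Th", "Fr"]
]

# With structure, "which services are open on Wednesday afternoon?" is a
# simple filter instead of a text-parsing problem:
open_wednesday = [s for s in structured if s["day"] == "We" and s["closes"] > "14:00"]
assert open_wednesday
```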

Q: can you show cost savings in reusing existing work?

A: ontologies are very useful. They have the same requirements

EU commission work will be exploited

They have used these references

Linked Data For SMEs

Lena Farid and Andreas Schramm, LinDA/Fraunhofer FOKUS

Presented by Andreas Schramm, Fraunhofer FOKUS

making linked data usable for non-experienced users

project lasted 2 years

coordinator National Technical University of Athens

partners, see slide 2

Linked data in a nutshell, slide 3

A self-descriptive, interoperable, machine-readable format (RDF is the chosen format). It helps:

  • link data on the web
  • gain more insight into one's own data
  • enable decision support through broadening one's own knowledge base

The SME issue with linked data, slide 4

  • Tools are academic, rather than business oriented

Motivation behind LinDA, slide 5

  • bring lay people closer to linked data and help them to unlock the potential of their data
  • help expand the Linked Open Data Cloud

LinDA's Offerings, slide 6

  • design objective: ease of use, simplicity + automation
  • graphical user interface
  • tendency towards minimalistic concept

Tool set + APIs, slide 7

  • Transformation tools (semistructured means tabular data)
  • vocabulary & metadata repository
  • query tools
  • visualisation tool
  • analytics & data mining
  • consumption apps

Overview of the workflow, slide 8

  1. transform & enrich data, tabular data, csv or excel, to make data self descriptive
  2. store
  3. link
  4. visualise and analyse

Transform your data into RDF, slide 9

  • simple, intuitive web-based UI
  • 7 steps to an RDF graph
  • integrates “DBPedia lookup”
  • automatic type guessing

Use Oracle service to access properties and classes, slide 10

column headers and table names

  • access to repository with public ontologies
  • Oracle service for best matches
  • ranking based on popularity and re-use
  • cross-lingual suggestions

Oracle comes up with classes of... [scribe missed this, so it remains a mystery, you can guess…] that's why we call it Oracle

Interlink and Query data, slide 11

Interlink remote resources like open data portals

Visualise and analyse your data, slide 12

the process is actually quite automatic

provides diagrams

another screenshot for visualisations

- concept of clustering structure

Conclusion, slide 13

  • The LinDA project makes simple linked data workflows accessible to SMEs, public sector…
  • Knowledge required from user reduced to minimum

Demo outside! Come and try it!


Q: We also noticed that there is this big gap. How many SMEs use this LinDA?

A: 2 SMEs used the tool during the project

The project has finished now. It's all open source

there are 3 demos at the moment


Q: about the transform function - you have different formats. Regarding matching named entities, is it manual matching or automatic?



A: it has to be triggered by the user, but it is automatic. DBpedia is used as a reference; the user has to trigger it first


Can you take structured data that comes from different sources and…

A: yes we can if you …

we can map those to same IDs and...

(scribe missed this question and answer, sorry)



  • W3C just published CSV -> RDF
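That CSV-to-RDF mapping (the W3C CSV on the Web work) is driven by a small metadata file. A minimal sketch of one, built here as a Python dict; the "@context" and "tableSchema" keys follow the CSVW metadata vocabulary, while the file name and columns are invented:

```python
# Hedged sketch of a CSVW metadata document describing a hypothetical
# cities.csv file; the column definitions are invented for illustration.
import json

metadata = {
    "@context": "http://www.w3.org/ns/csvw",
    "url": "cities.csv",
    "tableSchema": {
        "columns": [
            {"name": "name", "datatype": "string"},
            {"name": "population", "datatype": "integer"},
        ]
    },
}
print(json.dumps(metadata, indent=2))
```

A CSVW processor reads this file alongside the CSV and emits RDF; the datatype declarations are also what makes validation of the CSV possible.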

We have a working group in W3C. It's struggling with this issue (of validation). There are 2 methods proposed

  1. One company promotes its SPIN method
  2. others, who don't like SPIN, are proposing other options

They need use cases to prove that other approaches work; can LinDA provide those?


I think we can, if we can use the right vocabularies

Q: who are the target users of this platform?


we are addressing users who have little or no experience of linked data

but who do have experience of their own data, in order to be able to set up the transformation process

Parallel Sessions D

Ontological Arguments

HS – Herbert Schentz

AM – Andras Micsik

Moved to a different session

Modelling universities.

  • organisations
  • staff
  • locations
  • research
  • courses
  • events
  • learning materials
  • statistics

Landscape slide shown. Organisation is at the centre, links to lecturers (roles), then to courses (temporal manifestation of a subject)

Use a lot of existing vocabularies:

  • (too much overhead)
  • Aiiso – for universities
  • FOAF (staff) - no address property
  • vCard
  • W3C Time
  • Dublin Core - mostly used in annotations
  • Teach

Etc. all on the slide

Goes through a demo of describing a single event (recurring event is more complex)

Template mechanism is needed for viewing and maintenance

Example with the subject in the middle, using Aiiso. But Aiiso is missing language and the connection to the course.

Herbert Schentz: Ecology research topic – many different elements, different databases, etc.

Common ontology with data mapped to it - SERONTO core

Short talks with Pictures

An Extensible Architecture For An Ecosystem Of Visualisation Web Components For Open Data

Gennaro Cordasco, Delfina Malandrino, Pina Palmieri, Andrea Petta, Donato Pirozzi,Vittorio Scarano, Luigi Serra, Carmine Spagnuolo, Luca Vicidomini

  • Why should data be open? Transparency; make a read/write society
  • The key is to visualise the data in a very user-friendly way, to enable better access, reuse and sharing
  • ROUTE-PA (Raising Open and User-friendly ???)
  • Social platform to promote sharing of data
  • Datalet – a web component that enables complex visualisation of data. It needs as input: dataset URL, query and filter. Most of the calculation is done client-side
  • Datalet Creator – copy-paste of the data structure, which is analysed, and the user can select patterns. One can include it in any HTML page (e.g. WordPress)
  • Future work: security, efficiency (caching and CDN) & smart suggestion
  • Differences with Google Drive? iframe vs web component; Google Drive cannot include open data sources, and you must trust Google

Government As A Developer – Challenges And Perils

André Lapa, AMA [abstract]

  • The state of play in 2013: lack of legal support, lack of data management, 300*n possible data sources (siloed public administration), lack of financial resources
  • Senior officials wanted the benefits of open data and politicians wanted to announce success
  • AMA’s Analysis Team evaluates interoperability in projects and rejects/accepts them
  • A transparent municipality portal aggregated data from different sources; AMA was put in the middle to help them get the data
  • Hope that the data collected will create new applications and uses
  • Over 150 high-quality datasets, which are under scrutiny
  • Opened communication channels, to get feedback on the datasets and API
  • It is important to shift the cost of opening data away from the agencies responsible for the data
  • How did you convince the minister? We sold the outcome

Bar Camp on Best Practices

Nancy briefly presented the work done so far.

Question 1: is 'Best Practice' the correct term or should we say 'Recommendation' or 'Good Practice'? After discussion, the clear view of the group (most of the people remaining at the workshop) was that we should stick with Best Practice.

Question 2: What is the process? What is the evidence? The group discussed linking to the original stories and the future localised guides. Then there's the survey that asks for evidence of use/implementation.

Question 3: What is the correct level of detail? Rather than think about length, BPs should be actionable. It's clear that some BPs need enhancement.

Wendy: We've been having conversations about... it's true that it's a challenge to work out what is a BP and what isn't. It's about having solutions to problems. There will be different approaches. By having connections to stories, one might see that the Mediterranean methods are consistent, as are the Nordic ones, but they might be different. I'd say that the problem and a 10-word solution should be in the title. On the EDP we have about 40 pages that are at a very high level.

Ben Cave - Brevity - no case study should be over 2 pages long I'd say. On balance between detail and generality, as far as possible, include numbers/metrics/indicators. People know the generalities, what they may not know is what it might save them and how quickly it can be done. The 3 easiest ways, the 5 steps to a better version.

Nancy - maybe we have a section at the end that links to the papers/where the BP came from. I'm sure all authors can find these. Keep the BPs simple and short and link to more info.

Makx: Maybe we can ask ODI to help us write one?

Ben: I'd be delighted.

Makx: someone like you with the experience... And that would be a great contribution from the ODI as a Share-PSI partner.

Jan: ODI and OK both work on generic advice and are best placed to provide it. Member States will provide more context-specific info.

Emma: We've already been looking at incorporating this into the OD Handbook.

Nancy: so you'll take up this challenge.

Noel: Good idea - go for it.

Discussion on timing. Need good example very soon that we can copy in terms of structure and style.

PeterW: The outline we have - maybe three bullet points for the kernel and then more words. Management types will be turned off by anything to do with technology, but something terse in bullet points then that can draw them in. Maybe some semiotic guide to the stage at which the BP would be followed.

Ben: Interesting. We haven't really presented it in terms of a journey.

Andras: Think in terms of a recipe, ingredients, prep, method etc.

Nancy: I'd like to suggest that we get something by 10/12 so that we can agree on the structure by Christmas.

Nancy: I'd like to see how we can enhance some existing BPs.

??: There was one on encouraging crowdsourcing. My suggestion came out of brainstorming - we wanted to engage schools in cultural heritage crowdsourcing.

Nancy: Yes, we want stories to back up the BPs. Is there something relevant happening in our environment? It's good to describe the context. If we can further analyse this...

Harris: We're going to have a field called context?

Nancy: I'd say the section would be relevant implementations


I wanted to raise the specific issue of 5 star data.

People get fixated by the 5 stars. You here know about that. But local governments that we talk to often say they really want to go for 5 stars, and they end up with just one 5-star dataset, rather than the (better) option of ten 3-star datasets.

PeterK: In Sweden, I agree that people aim for 5 stars and spend too long on it and some people then go for 1 or 2 stars. The Finlanders now have 7 stars...

Hans: We have 6 stars in NL.

PeterW: Talks about what he's been doing with RDF data Cube stats etc. and semantics

PhilA: Talked about the CSV on the Web as a way to create 5 star data in a CSV. And the importance of the network effect.

Makx: 5 stars isn't a goal, it's a means. People get what 1 and 2 stars mean; how much does it cost to go for 3+ stars? I'd de-emphasise the star system and think about what you get out of it.

Martin: I believe in standards and the Sem Web... we managed to convince the politicians to follow a 5 star strategy.

Dietmar: We have people very happy with 5 stars, others are happy with CSV files. They might have some using PHP and APIs but SPARQL is out of the question. So 5 stars isn't always the best for each case.

Jan: If you can get an expert, then it becomes easy :-)

Project meeting Friday 27

[Start first hour, Scribe: Jens]

Workshop review

  • Athina: Location track was very interesting; she has a lot to take back to her community (spatial data on the web). Absolutely great! There were good conversations, good input.
  • Johann: Portal track was also good. Many knowledgeable people there
  • Hoski's presentation had a lot of info going towards the BP on holistic measurements
  • Muriel: Barcamp was good, good that it was kept short. Overall: nice balance throughout between longer input and shorter sessions
  • Pekka: A lot of good contacts with smart city people
  • Phil: pleased there were so many people. Participation has gone up over two years. Hans X from the NL was there, which is good as he represents NL.
  • Peter W: BP time led to the collection of a lot of comments. He will interpret and transcribe those. To get back to commenters, the consortium has to move quickly in order to gather stories and more detailed comments
  • Phil: More emphasis now on the network working together instead of individual workshop hosts doing the work. Phil suggests having at least monthly calls

BP discussion

  • Nancy: Outcome of the BP discussions will be a short manual with around 15 BPs categorized along the sections of the PSI directive. By 10th of December a template for writing down the BPs will be developed by the consortium especially with contributions from Open Knowledge and ODI.
    • Some authors will have to rewrite/re-structure their BPs in order to fit the new template
    • by the 10th of December the results of the BP discussions will be rehashed
    • use the time up until Christmas to organize the future work
    • Emma: A common understanding of a "best practice" would be helpful for the consortium
    • Makx: Template is rather a style guide
    • Nancy: Communicate the consortium process of getting from workshop stories to BPs
    • Peter K: It is a lot easier working on BPs if we have an example of one finished BP
    • Johann: What about the BPs which are still under consideration?
      • Phil: Authors of these BPs can add to the BPs using the template
        • Please look at the survey if you haven't done that yet!
        • Agreement about the status of each BP should be reached before Christmas!
        • Complete consensus on the BPs means there is no strong objection and there is general support
    • Makx: We are building a collection of knowledge. We should not delete any content. Everything is part of the collection. Out of that collection some content is considered best practice.
    • Johann: Let's have a threshold of collected votes.
      • Support by more consortium members to this approach
    • How is the process with the new BPs collected in Berlin?
      • they should use the new template
    • What about localization in respect to the project timeline?
  • Phil: The DWBP has advanced a lot. Hoping that localised guides will also take these into account
  • What is the task of the Open Group about relating the PSI directive and the BPs to their relevance on SMEs?
    • provide a document explaining the relevance to the SME community. Same is true for the geospatial community
  • How to work on the BPs?

Response to the Project Review

Phil showed the meeting a draft e-mail to Carola Carstens in response to a message following the recent project review, setting out the position of the members of the consortium. The position will be represented as "unanimous" as all the people in the meeting agreed with it. It was agreed to remove the phrase referring to "preliminary comments from reviewers" as these were not formal review comments. It was agreed that Phil should send the revised draft to Philippe Rohou and ask him to send it to Carola.

Although the review was critical on some points, Phil believes that the reviewers are happy with much of what the consortium has done, and in particular the workshops.

Zagreb Meeting

There will be a further project meeting in Zagreb. It will be co-located with a meeting of the W3C Data on the Web Best Practices Working Group. The provisional dates (subject to final confirmation) are 14-16 March 2016.

The objectives of the meeting for SHARE-PSI are:

  • To ensure that the consortium members have consensus on the best practices. These should have been completed prior to the meeting.
  • To ensure that the best practices are as complete as possible
  • To ensure that consortium members are on target for delivery of the localised guides.

The meeting will be organized as follows.

  • Monday March 14 and morning of Tuesday 15th, meeting of the W3C Data on the Web Best Practices Working Group, with SHARE-PSI representatives invited to participate as observers
  • Afternoon of Tuesday 15th, joint meeting of the two groups
  • Evening of Tuesday 15th, social event of some kind
  • Wednesday 16th, meeting of the SHARE-PSI project

The detailed agenda will be published in advance of the meeting on the project wiki, as will logistics and hotel information.

The meeting will be organized by the University of Zagreb. They could invite representatives of the Croatian government and other governments, but the meeting is primarily a working meeting, so substantial time should not be devoted to interaction with government representatives. They are welcome to attend if they are interested in the meeting topics.

Future Activity Post Project Completion

It would be good to set up the best practices in such a way that they could be added to over time. This would imply some form of ongoing support structure.

Various possibilities were discussed.

One is a workshop for the European Data Portal: keynote speakers but not other participants would be funded.

Another was to establish a European Data Portal Community. The EDP project description includes establishing publisher and user communities. Yuri will investigate the possibilities, which will depend on the terms of the project.

Another was contributing to the EDP "Gold Book". ODI are producing this.

The CEF is targeted towards public administrations, and has a 50% funding model, so may not be appropriate for the SHARE-PSI community as a whole.

Project timeline

  • 10th of December 2015: Style guides and an example BP from Ben (ODI) and Emma (Open Knowledge)
  • By Christmas: Comments from Berlin must be collected. Contact commenters on BPs from the Berlin workshop. Write 0.5 pages per section of the PSI directive
  • Until Mid January: Work on BPs
  • Mid March 2016 - BPs complete (before the Zagreb meeting)
  • 15./16.3.2016 - Consortium meeting in Zagreb
  • July 2016 - Majority of EU member states must have local guides. They must cite or use the BPs from Share-PSI
  • the translators for each local guide will report back which BPs they have cited/used in their local guides

[End first hour]