
BP Narrative 2


Best Practices Narrative (version 2)

(author: Jeremy Tandy) (Linda edited this version to make the language easier to understand)

In this second version of the narrative, I have tried to pick out discrete examples that can be used to illustrate the best practices.

(there’s lots going on in this story - but we don’t need to detail _everything_ … just the parts that help us illustrate best practices)

A forecast combination of heavy rainfall, high tide and storm makes it likely that flooding will occur in the next 120 hours. Specialists use hydrological and urban flood prediction models to estimate peak water-levels, when these will occur and which areas are likely to be flooded. Input data for this analysis includes: historical weather and river gauge data (water-level) and the authoritative high-resolution Digital Elevation Model (DEM), which contains the height of the terrain. Every 6 hours new weather forecast data is used in the ongoing analysis, resulting in a new version of the flooding dataset being made available.

(Jeremy can [ask one of his more knowledgeable colleagues to] provide information about the flood prediction process, but it’s complicated and likely won’t help illustrate any of the Best Practices; recommend a short summary and treating this as an ‘upstream’ activity … no pun intended!)

(1) Publish flood inundation forecast data (the results of the urban flood prediction model) as a coverage dataset

actor: professional spatial data publisher / Web developer (?)

activity: publish flooding forecast data as a high-resolution coverage dataset (dimensions: latitude, longitude and time) suitable for use in Web applications

notes:

  • this dataset is updated every 6 hours - or, in actual fact, replaced every 6 hours when the flooding analysis is repeated with new weather forecast data … (BP4 - or is only DWBP bp8 relevant because we’re talking about versioning the _dataset_ rather than individual Features?)
  • [for the sake of this example] this high-resolution dataset is large so a RESTful API is used that enables a consumer to extract a specific part of the coverage (a.k.a. a subset) … (BP5, BP28, BP29)
  • the API includes an operation to extract the water-level (or the time-series of water-level) at a specific point, which is intended to help Emergency Responders determine which protective measures (if any) may be applicable at any given location (BP28, BP29) … this ‘spot-data’ function is a very common API pattern for accessing coverage data (see the sketch after this list)
  • applications like Reading eScience Centre’s ncWMS [1] (especially when augmented with the CoverageJSON effort [2]) might provide a good exemplar for exposing multidimensional coverage datasets on the Web?
  • should also illustrate how the entire dataset is made available for bulk download (BP27)
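
For illustration, a minimal sketch in Python of the ‘spot-data’ request described in the notes above; the endpoint, parameter names and response structure are hypothetical assumptions, not part of any published API:

    # Hypothetical example: extract a water-level time series at a single point
    # from the flood forecast coverage API. The endpoint, parameter names and
    # the response structure are illustrative assumptions only.
    import requests

    params = {
        "coords": "52.09,5.12",        # latitude,longitude of the point of interest
        "parameter": "water_level",    # which coverage parameter to extract
        "time": "2016-06-01T00:00Z/2016-06-06T00:00Z",  # subset of the forecast period
    }
    response = requests.get("https://api.example.org/flood-forecast/position", params=params)
    response.raise_for_status()

    # Print the forecast water level for each time step at this location.
    for step in response.json()["timeseries"]:
        print(step["time"], step["value"])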

To make the flooding data easier to use, a volunteer developer publishes a Web application that converts the flooding coverage dataset to discrete Features with vector geometry that represent the flooded areas. Each flooded area is linked to the administrative areas that it touches. The Water Board evaluates the developer’s application and, following successful validation, incorporates the algorithm into an operational service. The developer used an open license and is happy that her code contributes to the flood response effort.

(2) Publish information about administrative areas within the municipality

actor: professional spatial data publisher

activity: publish the information about administrative areas maintained [lvdb was: curated - is maintained a good alternative?] within the municipal Spatial Data Infrastructure - including the relationships between areas; make this information discoverable via search engines

( notes:

  • examples should follow the pattern developed in Geonovum’s testbed topic 4 “Spatial Data on the Web using the current SDI” [3] - we want the datasets (and the Features that they describe) to be discoverable via search engines (BP25, BP26)
  • details regarding how accuracy and precision of the data are described must also be included (BP10) - we want other ‘expert systems’, such as those used by government agencies and municipalities, to be able to use the data … although I’m not exactly sure how the accuracy and precision information would be used in this case … given an interest in precision and accuracy, it is also likely that this scenario would provide data in a ‘precision’ CRS (BP8)
  • identifiers used within the SDI need to be transformed to URLs (BP1, BP3)
  • this example also provides opportunity to demonstrate how spatial data may be published in many formats for the benefit of multiple communities (e.g. for expert systems to work with the data offline, and for use by web developers) … (BP6, BP7, BP12)
  • what about vocabulary choice? (not only BP12 but also the thematic vocabulary?)
  • how far do we want to pursue Linked Data / RDF?
  • we probably need a discussion about Features and real-world things … and the fact that in the end, we can be ambiguous (ref. punning) … [to be discussed!]

)

Data about administrative areas are often useful - perhaps they represent one of the most popular spatial datasets. In this case they are useful for coordinating the emergency response, i.e. predicting and tracking which neighbourhoods or districts are threatened. Because the names of local administrative areas such as neighbourhoods are very well known they are also useful for communication with citizens, i.e. letting them know if their neighbourhood is threatened by the flood or not.

Because the administrative area dataset is quite popular, all kinds of data users will want to use it - not only GIS experts. To enable them to find the data on the web, it is published in such a way that search engines can crawl it, making it findable using popular search engines.

There are two possible approaches:

  • Publish the spatial data using general web standards and best practices;
  • If an SDI with WMS, WFS, CSW services is already in place, using a proxy on top of the SDI is a good option. The proxy or layer on top serves to make the data accessible via web standards / best practices.

Example of the first approach, information about a neighbourhood published as data on the web: [btw this is experimental data]

When the URI is looked up, it resolves to a resource with all kinds of information such as the neighbourhood’s name, population data, a geometry of its boundary, and links to its parts, i.e. the districts within the municipality. To make this discoverable:

Example of a neighbourhood HTML page with schema.org markup (coordinates are stripped out in the example but go in the string value of "polygon"):

<html lang="nl">
  <head>
    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Place",
      "geo": {
        "@type": "GeoShape",
        "polygon": "..."
      }
    }
    </script>
    ....
  </head>
  <body>...</body>
</html>

In the second approach, the steps taken to get the data indexed by search engines are the same: sitemaps, links and schema.org markup. But because in this approach the data is made available on the web on top of WFS and CSW services, extra steps need to be taken:

  • automated mapping of spatial metadata to web metadata, e.g. DCAT, schema.org
  • automated mapping of spatial data from WFS to schema.org (a minimal sketch of this mapping follows the list)
  • automated creation of links between data entities
  • deployment of a proxy/layer that can perform these mappings dynamically, for example the open source LD-Proxy https://github.com/interactive-instruments/ldproxy
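
A minimal sketch in Python of the WFS-to-schema.org mapping mentioned above; the WFS endpoint, feature type and attribute names are hypothetical, and a real proxy such as ldproxy does considerably more:

    # Hypothetical sketch: fetch one feature from a WFS (as GeoJSON) and re-express
    # it as a schema.org Place in JSON-LD, ready to embed in an HTML page.
    import json
    import requests

    wfs_params = {
        "service": "WFS", "version": "2.0.0", "request": "GetFeature",
        "typeNames": "gemeente:buurten",      # assumed feature type name
        "outputFormat": "application/json",   # ask for GeoJSON output
        "count": 1,                           # a real mapping would page through all features
    }
    feature = requests.get("https://example.org/wfs", params=wfs_params).json()["features"][0]

    place = {
        "@context": "http://schema.org",
        "@type": "Place",
        "name": feature["properties"].get("naam"),   # assumed attribute name
        "geo": {
            "@type": "GeoShape",
            # assumes 2D coordinates; check the exact point encoding schema.org expects
            "polygon": " ".join(f"{lat} {lon}"
                                for lon, lat in feature["geometry"]["coordinates"][0]),
        },
    }
    print(json.dumps(place, indent=2))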

(3) Publish flooding forecast data as vector dataset and identify the administrative areas (?) that the flooding is predicted to impact

actor: web developer

activity: convert a coverage to a Feature dataset using the ‘typical’ Web environment of a javascript engine; publish each flooded area Feature with a unique, persistent URL; determine which administrative areas each flooded area touches

notes:

  • assign unique and durable identifiers to each flooded area Feature (BP1)

The flooded areas are new features, so they don’t have any identifiers yet. The developer creates an identifier for each flooded area and prefixes it with an HTTP URI base. The URI pattern s/he uses starts with a domain name, in this case http://www.example.com, followed by /id/ to indicate this is an identifier, followed by /floodedarea/ to indicate this identifies a flooded area, followed by the unique identifier that s/he has created.

For example:

http://www.example.com/id/floodedarea/100001

  • relate each flooding Feature to the impacted administrative area(s) (BP13, BP20, BP21, BP22) - noting that we’re linking between the Spatial Things _not_ the geometry objects. This can be the name of a unique location, a link to an area, etc.

[ … do we need to talk about the spatial analysis itself?]

The developer relates each flooded area to the administrative areas it touches. A spatial analysis is needed to do this; the developer can do this offline after downloading the administrative area dataset. As a result links are created from each flooded area to the admin areas that are affected by the flooding.
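
A minimal sketch of this spatial analysis in Python, using Shapely; the file names and property keys are assumptions for illustration:

    # Hypothetical sketch: for each flooded area, find the administrative areas
    # whose geometry it intersects, and record links to their URIs.
    import json
    from shapely.geometry import shape

    with open("flooded_areas.geojson") as f:     # assumed local files
        flooded = json.load(f)["features"]
    with open("admin_areas.geojson") as f:
        admin = json.load(f)["features"]

    links = {}
    for area in flooded:
        geom = shape(area["geometry"])
        links[area["properties"]["id"]] = [
            a["properties"]["uri"]               # e.g. the /id/buurt/... URIs
            for a in admin
            if shape(a["geometry"]).intersects(geom)
        ]
    print(json.dumps(links, indent=2))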

An example of such links in JSON:

{
  "id": "100001",
  "affected_area": [
    "https://geo4web.apiwise.nl/id/buurt/BU03580000",
    "https://geo4web.apiwise.nl/id/buurt/BU03580001"
  ]
}


  • follow links to the administrative area resources to find related information (BP24)

By following this link the developer can find relevant information such as the area’s name, the municipality it’s part of, and the number of people living in it.

  • the Web application only works on the _current_ flooding forecast (ref. BP4 to ensure that identifiers remain stable over time)
  • the geometry of each flooded area will change during the forecast period (BP11)
  • given their derivation from high-resolution data, the flooded area geometries may be [over] complex; discuss performance and data compactness (as described in Geonovum’s testbed topic 3 [4])

The derived flooded areas have very accurate geometries. Especially when querying collections of spatial things with their geometries, this results in very large response payloads, wasting bandwidth and causing slow response times. For the web application s/he is building, these payload sizes are unacceptable; they would make the application slow and maybe even unresponsive. First, the developer uses an easy method of degrading precision: reducing the number of decimals. This already results in smaller payloads. However, it is not enough, since the geometries are complex and each consists of thousands of coordinates. This level of detail is unnecessary in this case. To simplify the geometries so that they suit the displayed resolution, s/he uses a simplification algorithm.

Possible approaches for this are:

Ramer–Douglas–Peucker

The Ramer–Douglas–Peucker algorithm (RDP) [5] is an algorithm for reducing the number of points in a curve that is approximated by a series of points. It works by "thinking" of a line between the first and last point in the set of points that form the curve. It checks which in-between point is farthest away from this line. If that point (and therefore all other in-between points) is closer to the line than a given distance epsilon, all the in-between points are removed. If, on the other hand, this outlier point is farther away from the imaginary line than epsilon, the curve is split in two at that point and the algorithm is applied recursively to each part. See https://github.com/geo4web-testbed/topic3/wiki/images/douglas-peucker.png.

Visvalingam–Whyatt

While Douglas–Peucker is the most well-known, the Visvalingam–Whyatt algorithm [6], also used by TopoJSON, may be more effective and has a remarkably intuitive explanation: it progressively removes points with the least-perceptible change. For example, the GeoJSON file used to draw the continental United States can be reduced from 531KB to 27KB with only minor visual changes (example [7]).
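
As a concrete illustration, a minimal sketch in Python of both steps (coordinate rounding and simplification) using Shapely, whose simplify() is based on Douglas–Peucker; the input file, precision and tolerance values are illustrative assumptions:

    # Sketch: reduce coordinate precision and simplify a flooded-area polygon.
    # Shapely's simplify() is Douglas-Peucker based (a topology-preserving variant
    # is used when preserve_topology=True); the tolerance (in degrees here) is an
    # illustrative value that should be tuned to the display resolution needed.
    from shapely.geometry import Polygon
    from shapely import wkt

    with open("flooded_area_100001.wkt") as f:   # assumed input file
        detailed = wkt.loads(f.read())

    # Step 1: fewer decimals (roughly 1 m precision at these latitudes).
    rounded = Polygon(
        [(round(x, 5), round(y, 5)) for x, y in detailed.exterior.coords]
    )

    # Step 2: simplification, keeping the result topologically valid.
    simplified = rounded.simplify(tolerance=0.0001, preserve_topology=True)

    print(len(detailed.exterior.coords), "->", len(simplified.exterior.coords), "points")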

  • include provenance details for the derived Features so that (i) we know which [version of the] coverage dataset they were derived from, and (ii) the volunteer’s algorithm can be attributed …

The flooding forecast provides detailed information about areas that will be flooded - right down to the street level. Emergency teams use this information to plan their response to the flooding. First, they prioritise critical infrastructure (e.g. hospitals, electricity sub-stations, other utilities etc.) that needs to be protected from the flood and dispatch responders to deploy the appropriate measures: sandbags, flood barriers, pumps etc.

(4) Publish details of fixed assets (e.g. dikes & dams, buildings, roads, critical infrastructure etc.) and topographical features (e.g. water bodies)

actor: professional spatial data publisher

activity: expose the information about fixed assets / topographical features maintained within the municipal Spatial Data Infrastructure (BP…) plus associated metadata (BP26)

notes:

  • [add section about publishing metadata for discovery of these datasets about fixed assets]
  • this is similar to (2) above … alternatively, we could illustrate the approach that exposes the data directly via RESTful services (rather than via a proxy) using a technology like ElasticSearch?
  • demonstrate how to improve the utility of a dataset by providing a search API (BP30) that includes spatial and textual filtering - in this case to find only prioritised categories of assets that fall within the inundated areas (see the sketch after this list)
  • at a minimum, the type and location of each asset Feature must be expressed (BP6)
  • OpenSearch [8] and the Geo extension [9] provide a good example of this capability
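
A minimal sketch in Python of such a filtered search request; the parameter style follows the OpenSearch Geo ‘box’ template, but the endpoint and response structure are hypothetical:

    # Hypothetical sketch: ask the asset search API for hospitals that fall within
    # the bounding box of an inundated area. The 'box' parameter follows the
    # OpenSearch Geo convention of west,south,east,north; the endpoint and the
    # shape of the response are assumptions.
    import requests

    params = {
        "q": "hospital",                 # free-text / category filter
        "box": "4.85,51.88,4.97,51.94",  # west,south,east,north
    }
    results = requests.get("https://assets.example.org/opensearch", params=params).json()

    for asset in results["features"]:
        print(asset["properties"]["name"], asset["geometry"]["coordinates"])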

Next, the emergency teams determine the number of citizens likely to be impacted by flooding. They use census data, cross-referenced with the administrative areas flagged in the flooding dataset to determine the number of citizens that will need to be evacuated.

[ Andrea Perego (talk) 22:14, 24 May 2016 (UTC) : First try - @Bart, please check this out ]

To identify the datasets relevant to the intended purpose, data publishers should document and publish metadata (DWBP-BP1) that, besides free-text descriptions (e.g., title, abstract), include the following information:

  • the type of objects/features described - e.g., with a thematic classification (DWBP-BP2)
  • spatial coverage / temporal coverage - to identify if data match the area of interest (BP26)
  • coordinate reference system(s) used - to correctly interpret geometries (BP7, BP26)
  • spatial resolution - to identify data with the right level of detail (BP10, BP26)
  • distribution format(s) and API to get access to the data (at a different level of granularity) - to identify those datasets consumable by the intended application(s) (DWBP-BP4, DWBP-BP13, BP26, BP27, BP28)
  • date of last modification - to see whether data are up to date (DWBP-BP8)
  • the parties responsible for the creation and maintenance of the data - to verify data authoritativeness (DWBP-BP6)

To facilitate data discoverability, metadata should be published via different channels and formats (DWBP-BP22). Typically, such metadata are maintained in geospatial catalogues, encoded based on ISO 19115 [10] - the standard for geospatial metadata. In addition to this, such metadata can be served in RDF, and made queryable via a SPARQL endpoint. E.g., GeoDCAT-AP [11] provides an XSLT-based mechanism [12] to automatically transform ISO 19115 metadata into RDF, following a schema based on the W3C Data Catalog Vocabulary (DCAT).
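
As an illustration, a minimal sketch in Python (with rdflib) of the kind of DCAT record such a transformation could produce; all URIs and literal values are made up for the example:

    # Illustrative sketch: a minimal DCAT description of the administrative-areas
    # dataset, of the kind a GeoDCAT-AP transformation of an ISO 19115 record
    # might produce. All URIs and values here are invented for the example.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCAT, DCTERMS, RDF, XSD

    g = Graph()
    ds = URIRef("http://www.example.com/id/dataset/administrative-areas")

    g.add((ds, RDF.type, DCAT.Dataset))
    g.add((ds, DCTERMS.title, Literal("Administrative areas of the municipality", lang="en")))
    g.add((ds, DCTERMS.modified, Literal("2016-06-01", datatype=XSD.date)))
    g.add((ds, DCTERMS.spatial, URIRef("http://sws.geonames.org/2759794/")))  # e.g. a GeoNames area
    g.add((ds, DCAT.distribution, URIRef("http://www.example.com/data/administrative-areas.json")))

    print(g.serialize(format="turtle"))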

This solution can be further enhanced by making data discoverable and indexable via search engines (BP25). The advantage is that this would allow data consumers to discover the data even though they do not know the relevant catalogue(s), and to find alternative data sources.

This can be achieved, following Search Engine Optimisation (SEO) techniques, by embedding metadata in the catalogue’s Web pages, with mechanisms like HTML+RDFa, Microdata, and Microformats. Examples of this approach include:

  • In the Geonovum testbed [13], dataset pages from a geospatial catalogue embed metadata, represented by using the Schema.org vocabulary, directly generated from the relevant ISO 19115 records.
  • The experimental GeoDCAT-AP API [14] allows data publishers to serve ISO 19115 records in different RDF serialisation formats, including HTML+RDFa, on top of a geospatial catalogue and/or an OGC Catalog Service for the Web (CSW) (BP28).

(5) Publish census data which contains population statistics for each administrative area

actor: open data publisher

activity: publish the census data in a way that supports re-use

notes:

  • Publishing census data is typically the responsibility of a national or regional statistics agency. Population data from a census is typically broken down by area, gender, age (and perhaps other statistical dimensions) and relates to a particular time.
  • Each area should be identified by a unique HTTP URI (Best Practice 6: Use globally unique HTTP identifiers for spatial things). This allows geometry information to be associated with the URI (Best Practice 7: Provide geometries on the Web in a usable way). That allows the intersection of the flood with administrative areas to be calculated. Using established and widely used identifiers for the areas allows data on other topics or from other publishers to be easily combined with the census data.
  • A typical statistics agency will publish many datasets, so should provide a search interface that helps the user find the dataset of interest
  • As a minimum, the data should be downloadable in a popular machine-readable format, such as CSV. It is important that the structure and meaning of the data are documented, by providing a definition for each column header and information on the type of data to be expected in the cells. This should follow the approach defined in the W3C Metadata Vocabulary for Tabular Data https://www.w3.org/TR/2015/REC-tabular-metadata-20151217/
  • A download of all the data may leave the data user with a large amount of information to work with, when they are only interested in the subset of areas affected by flooding. Therefore it is desirable that the publisher makes the data available via an API, where the user can select the area of interest and retrieve relevant information, possibly also narrowing down their choice by other statistical dimensions.
  • Census data naturally takes the form of a statistical 'data cube', with statistical dimensions of area, time, gender, age range etc. A useful standards-based approach to making the data available would be to represent it as RDF, using the RDF Data Cube Vocabulary https://www.w3.org/TR/vocab-data-cube/, which offers a standards-based way to represent statistical data and associated metadata. API access to the data could be provided via a SPARQL endpoint, or a more specific API (a minimal sketch of such a query follows this list).
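
A minimal sketch in Python (with SPARQLWrapper) of querying such a Data Cube; the endpoint and the dataset, dimension and measure URIs are hypothetical:

    # Hypothetical sketch: retrieve the population of one administrative area from
    # a census dataset published as an RDF Data Cube. The endpoint and the
    # dataset/dimension/measure URIs are assumptions for illustration only.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://statistics.example.org/sparql")
    sparql.setQuery("""
    PREFIX qb: <http://purl.org/linked-data/cube#>
    SELECT ?population WHERE {
      ?obs a qb:Observation ;
           qb:dataSet        <http://statistics.example.org/data/census-2011> ;
           <http://statistics.example.org/def/dimension/refArea>
                             <https://geo4web.apiwise.nl/id/buurt/BU03580000> ;
           <http://statistics.example.org/def/measure/population> ?population .
    }
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["population"]["value"])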

How has this been done in practice?

  • How easy is it to find population data for small areas of interest, that could support our flooding scenario?
  • How easy is it to associate the population data with the geometry of the associated area?
  • How suitable is it for automation?

Some examples of good practice include:

  • the Scottish Government http://statistics.gov.scot
    • finding data by topic - search for population information
    • finding data by area - search for area or browse hierarchy
    • getting geometry details of an area
    • data cart, to select data about a number of areas at once
    • API
    • could do better if...(make some suggestions for a better match to the best practices)

Other possible examples still to investigate - might be good examples, might be what not to do!


Once the number of citizens needing refuge has been determined, the Emergency Teams designate public buildings, such as schools and sports centres, as evacuation points and define safe transit routes to those points. The evacuation plan must be discoverable by the public. The Emergency Teams work with the telecommunications providers to alert affected citizens using Wireless Emergency Alerts (i.e. broadcasting an alert message to all mobile devices served by cell towers whose coverage overlaps the predicted flooding zone; “Cell Broadcast”), and with news / media agencies.

(6) Publish the evacuation plan

actor: web developer

activity: publish simple, authoritative Web pages that describe the evacuation plans; include structured mark-up to help search engines index the rich content (BP25)

notes:

  • each evacuation plan has a URL

The aim would be for each plan to be both human readable (primarily) and machine readable. The requirement for machine readability is mostly to support automated discovery of the content via web search. The URL itself should ideally also be "human friendly", as it should be easy to share verbally in addition to being embedded in and linked to from other web pages; for example "http://www.rijkswaterstaat.nl/Rijnmond Overstromingen". This URL should be persistent, with the content - the HTML and embedded mark-up - updated during the flood event to represent the current status of the event.

  • the evacuation plans link to the Features (schools, sports centres, administrative areas etc.) published elsewhere (BP21, BP22, BP23); citizens can follow those links to find more information (BP24)

While making the plans clear and understandable to human readers is relatively straightforward, the challenge is to make the content machine readable. The use of a simple tag-based schema using Microdata, RDFa or JSON-LD is recommended. A simple first step might be to use the schema.org "Event" item tag <div class="event-wrapper" itemscope itemtype="http://schema.org/Event">, which has useful generic properties such as date, location, duration etc. Of course this is a very generic item and a more specific FloodEvent could be defined, although there is benefit in using the most generic item definition as it tends to have wider adoption. The "Event" tag is currently used across up to 250,000 domains. Following this principle, the places of evacuation (schools, sports centres etc.) should be tagged using the generic "Place".

  • the links should make it clear what is intended to be described by each linked Feature - either an evacuation area or a refuge (BP20) - would the link itself describe the feature class, or would the resolved URL content do so?
  • transit routes need to be clearly described (BP7)

Routes should ideally be provided as a textual description (perhaps machine readable using the schema.org "TravelAction" item, although this is rather limited) and as a graphical representation. Potentially, route information could be encoded using a format such as OpenLR, but this has not achieved widespread adoption.

  • the plan should include temporal information (BP14, BP11)
  • the Wireless Emergency Alerts contain links to the relevant evacuation plan
  • news and media agencies provide Web applications that help communicate the evacuation to citizens as effectively as possible; e.g. by creating simple Web applications that direct one to the correct evacuation plan based on their postal code or online mapping tools
  • media agencies may cross-reference evacuation plans with Features that have non-official identifiers; e.g. from What3Words (W3W) [15] or GeoNames [16] [as suggested by AndreaPerego]

As the flood event progresses, the Emergency Teams supplement the Wireless Emergency Alerts with door-to-door notification and evacuation assistance for the vulnerable. Spatial data is used to identify locations such as care homes where priority assistance needs to be provided. The Emergency Teams also monitor the rising water levels to ensure that these are consistent with the predictions - both in terms of timing and peak water-level. Fortunately, the prediction is sufficiently accurate that the evacuation plan remains effective.

(7) Publish real-time data-stream of water-level observations

actor: monitoring system

activity: publish a real-time data-stream of water-level captured by an automated monitoring system; publish metadata (BP18) about the data stream enabling (i) the data to be discovered, and (ii) the data to be interpreted by users (e.g. what quantity kind is being measured with which units of measurement, what is the sensor etc.) (BP18, BP14)

notes:

  • the sensor is part of a monitoring network established (by the Water [Control] Board?) to provide early warning of potential flood hazards within the municipal area
  • the narrative indicates that the intended users of this sensor data already know where to find the data - so we only need to provide cursory detail on making the content discoverable
  • relate the sensor (and the data-stream it provides) to the water body whose water level it is intended to monitor (BP12, BP13, BP16, BP21, BP22 etc.)
  • describe the sensor location (BP7). This can provide an opportunity to use relative positioning (BP9) to describe that the sensor / monitoring point is specified distance from a known Feature?
  • should we use the OGC WaterML2 Part 1 (OGC# 10-126r4 - see pp. 59) vocabulary to describe the monitoring point (based on O&M Sampling Feature)? ... mention this as a source for the vocabulary to describe the monitoring point; but don't go into details as this is both complex and somewhat out of scope (it's about describing the thematic domain rather than the spatial data)
  • the OGC SensorThings API [17] provides a mechanism to achieve this which is, AFAICT, consistent with the other best practices
  • also include SSN example :)
  • the SAO ontology [18] (BP18) can also be used to describe the data-stream and to link it to SSN observation and provenance metadata.

Example (the code is adapted from the SAO example [19]):

    @prefix ssn:  <http://purl.oclc.org/NET/ssnx/ssn#> .
    @prefix tl:   <http://purl.org/NET/c4dm/timeline.owl#> .
    @prefix sao:  <http://purl.oclc.org/NET/UNIS/sao/sao#> .
    @prefix ct:   <http://www.insight-centre.org/ct#> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix prov: <http://www.w3.org/ns/prov#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
    # (declarations of the default ':' prefix and the 'ces:' prefix are as in the original SAO example)
    
    :cityofaarhus a foaf:Organization, prov:Agent .
    
    <FloodDataStream158324> a ces:PrimitiveEventService , ssn:Sensor ;
        ssn:observes          "Water_Level" , "Measurement_Time" , "Raise_Level" ;
        prov:wasAttributedTo  :cityofaarhus .
    
    <http://unis/FloodDataStream158324FoI-001>
        ct:hasFirstNode   [ a                 ct:Node ;
                            ct:hasCity        "Hinnerup"^^xsd:string ;
                            ct:hasLatitude    "56.23172069428216"^^xsd:double ;
                            ct:hasLongtitude  "10.104986076057457"^^xsd:double ;
                            ct:hasStreet      "Århusvej"^^xsd:string
                          ] ;
        ct:hasSecondNode  [ a                 ct:Node ;
                            ct:hasCity        "Hinnerup"^^xsd:string ;
                            ct:hasLatitude    "56.22579478256016"^^xsd:double ;
                            ct:hasLongtitude  "10.116589665412903"^^xsd:double ;
                            ct:hasStreet      "Århusvej"^^xsd:string
                          ] .
    
    <http://unis/FloodDataStream158324Property-001> a <http://www.surrey.ac.uk/ics#WaterLevel> ;
        ssn:isPropertyOf  <http://unis/FloodDataStream158324FoI-001> .
    
    <http://unis/FloodDataStream158324Observation-001> a sao:Point , ssn:Observation ;
        sao:hasUnitOfMeasurement  <http://unit1:milimeter> ;   # (editorial note in the draft: "no brackets")
        sao:value   "50.0"^^xsd:double ;
        sao:time    [ a            tl:Instant ;
                      tl:at        "2014-09-30T06:00:00"^^xsd:dateTime ;
                      tl:duration  "PT5M"^^xsd:duration
                    ] ;
        ssn:observedBy        <FloodDataStream158324> ;
        ssn:observedProperty  <http://unis/FloodDataStream158324Property-001> ;
        prov:wasAssociatedWith  :cityofaarhus .

During the flood, citizens themselves become engaged with the flood event; they use social media to post geo-tagged messages regarding their observations of the flood (“the #flood has reached my location” or “Mount Pleasant Road closed at junction with Acacia Avenue #flood”, perhaps accompanied by photographs). Alternatively, citizens may offer localised help (“come charge your phone at my porch”).

(8) Publish ‘volunteer geographic information’ using social media

actor: citizen

activity: publish spatial data using social media

notes:

  • This will be used by social media platform providers to drive collection of particular information from users of the platform.
  • The key focus will be on the platform that is used for crowdsourcing. In this case people will use the web as the data sharing platform - either directly as Web resources or via an app or some other platform.
  • For example, someone might tweet: #flooding the water is 2 feet deep at my house. Analysis requires parsing two ambiguous statements. Alternatively: #flooding [2 feet] @{link to my house feature}.

Or another example: someone might publish information about her/his house flooding in a blog entry; e.g. in native HTML.

  • This requires people to do a little extra work to make their [data] contribution more usable, e.g. by using a URL for the thing or event that they report.
  • This will allow, for example, emergency teams to use social media reports to determine the flood extent.

For example (source: [20]):

#Flooding in Vulture St West End. @abcnews #brisbane

In this example #brisbane can be specified with a GeoNames concept:

#Flooding in Vulture St West End. @abcnews #brisbane [http://www.geonames.org/2174003/brisbane.html]

Or another example

#flooding in #Braunau am Inn

This can be specified with a DBpedia URL for the location:

#flooding in #Braunau am Inn [http://id.dbpedia.org/page/Braunau_am_Inn]

[ --- done [Payam 6/6/2016] ---needs to be changed to reflect the discussion of the Tue 3 May (http://www.w3.org/2016/05/03-sdwbp-minutes.html)--]


  • The key question is how best to include spatial information in social media (BP17): geo-tagging (lat-lon, geohash or geographic identifier), use of a post code, or a place name?
  • photographs may include geo-tags in the EXIF metadata
  • social media can be aggregated to provide additional situational awareness; e.g. #uksnow
  • how to reconcile place names with resources formally identified with URLs? … especially where local or informal place names (with ‘fuzzy’ boundaries) are used (e.g. “north of Downtown”); one possible approach is sketched below
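
One possible approach to the place-name reconciliation question is sketched below in Python, using the GeoNames search web service; the choice of hashtag and the 'demo' account are illustrative only:

    # Hypothetical sketch: take a place-name hashtag from a social media post and
    # look it up against the GeoNames search service, so the report can be linked
    # to a formally identified resource. A real pipeline would also have to rank
    # candidates and cope with informal or 'fuzzy' place names.
    import requests

    hashtag = "brisbane"   # e.g. extracted from "#Flooding in Vulture St West End ... #brisbane"

    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": hashtag, "maxRows": 1, "username": "demo"},  # use your own GeoNames account
    )
    match = resp.json()["geonames"][0]
    print(match["name"], match["countryName"],
          "http://www.geonames.org/%s" % match["geonameId"])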

The flood event reaches its peak. There has been no loss of life and the city’s critical infrastructure has been protected. However, there is still plenty of damage, and a clean-up operation is required with an estimated cost running into hundreds of millions of Euros. Insurance companies looking to quickly assess their financial exposure relating to the flood use observations of the flood extent, cross-referenced with the locations of their insured properties, to estimate likely costs. Social media reports of flood observations are combined with Synthetic Aperture Radar (SAR) data (from which areas of surface water can be identified) in order to identify which insured properties were affected by the flood event.

(9) Publish Synthetic Aperture Radar data as a coverage dataset

actor: professional spatial data publisher

activity: publish SAR data as a high-resolution coverage dataset (dimensions: latitude, longitude and time)

notes:

  • the SAR dataset is large; it is published as named subsets (BP5)
  • ... and more ...