As discussed during F2F3 in Amersfoort, I volunteered to write up a narrative around a flooding scenario to serve as a guide for our BP-Doc.
A flooding scenario has many angles that are relevant to our work:
- Multidisciplinary: multiple official agencies are involved in dealing with these scenarios.
- Multilingual: at least in the European context this is of great interest.
- The data around this scenario needs to be findable and indexable on the web.
- The data should be communicated in a way that lets 'mortals' use its spatial component.
- Data created by mortals should be usable by official entities.
- The data should be interoperable across different systems and multiple platforms.
- It should be linked to other historical data and to information existing on the Web and in relevant agencies' information sources.
- The data can also be linked to breaking news feeds, social media streams, alerts, and other online crowdsourced and social media data.
- Latency, granularity, quality and provenance of the data should also be available for further analysis.
- Links to other monitoring resources in neighbouring areas, interconnections between different monitoring devices and networks, and observation and measurement data in an area should also be specified.
During high water season both water boards and meteorological services monitor the levels of water and precipitation to indicate potential risks. If worst-case combinations occur, these agencies start dispatching their people to the field to monitor the real situation of dikes and dams. The people are dispatched to specific locations. When reporting, they need to report the situation at the specified location but also everything they have encountered on the way there.
In the meantime, civilians will start reporting unusual situations about the weather and water level on various social media sources. The location information in these reports is not as exact as that of the official agencies.
The agencies will communicate the actual status of the various water works through official channels, but in their own jargon. Combining all the sources to create an overall picture requires alignment and understanding of the terminology used.
The provenance, quality and trust information (if available, or if it can be determined through (semi-)automated means) can also help to use and interpret the data more efficiently.
The agencies can use this information to make more informed decisions, monitor the situation in near real time (or with lower delay) to take appropriate actions, and/or to predict upcoming events and situations. The data can be further analysed by agency staff, or it can be fed into automated prediction and analysis systems that can also be connected to actuation and control systems (with a human in the loop).
The same story, a little more fleshed out and combined with best practice references, involves the following actors:
- professional spatial data publisher
- other professionals using this data
- citizens using social media
- data analytics software
- data visualisation, integration (multi modal and heterogeneous data sources) and analysis systems
- actuation and control systems (with human in the loop)
- web/app developers
- emergency responders
- automated alert systems
- citizens who are in danger of being flooded
During high water season both water boards and meteorological services monitor the levels of water and precipitation to indicate potential risks. So, as a start, there’s always data about water level, about the weather and so on. And there is topographic data about where the dikes and dams, rivers, buildings, roads, water/land boundaries etc. are. There is also data about previous incidents and historical flooding events.
The first step in making this data available is publishing metadata about the datasets, including spatial information in the metadata (BP26). Provenance, accuracy and precision of the data should also be specified in a way that can be used directly by software applications, as well as by staff in different agencies and common users (BP10).
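The kind of dataset description this implies can be sketched as a small JSON record. This is a hypothetical example loosely modelled on DCAT-style metadata; all field names, URIs and values are illustrative assumptions, not a normative vocabulary.

```python
import json

# Hypothetical dataset metadata: spatial extent embedded directly (BP26),
# plus machine-readable provenance and accuracy hints (BP10).
dataset_metadata = {
    "title": "River water level measurements",
    "publisher": "Example Water Board",          # illustrative agency name
    "spatial": {                                 # GeoJSON-style bounding polygon
        "type": "Polygon",
        "coordinates": [[[5.0, 52.0], [5.5, 52.0], [5.5, 52.3],
                         [5.0, 52.3], [5.0, 52.0]]],
    },
    "provenance": {
        "source": "sensor network",
        "accuracy_m": 0.01,                      # vertical accuracy, metres
        "updated": "2016-02-01T10:00:00Z",
    },
}

print(json.dumps(dataset_metadata, indent=2))
```

Because the spatial extent and accuracy travel inside the metadata itself, a crawler or another agency's software can filter datasets by area and quality without downloading the data.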
It’s high water season. The water levels are being monitored. This involves sensor data, so again, metadata is published about this sensor data (BP18). The people who do this monitoring know how to do this and where the data can be found, and no one else is much interested as long as all is well, so for the moment this is enough. The data should be expressed with the right format to describe geometry. The geometry data should be expressed in a way that allows its integration with other data and its use by multiple users/agencies in a heterogeneous environment (BP7). The data and geometry publication should also use common and reusable vocabularies (BP12) to increase and improve the interoperability of the descriptions.
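A single gauge reading published along these lines might look as follows: the geometry is plain GeoJSON (BP7), and the property names are borrowed informally from the SOSA/SSN observation vocabulary (BP12). The sensor URI and values are invented for illustration.

```python
import json

# Hypothetical water-level observation as a GeoJSON Feature.
observation = {
    "type": "Feature",
    "geometry": {"type": "Point",
                 "coordinates": [5.387, 52.156]},  # lon, lat (WGS 84)
    "properties": {
        # names loosely follow SOSA/SSN; the URI is an illustrative assumption
        "madeBySensor": "http://example.org/sensors/gauge-42",
        "observedProperty": "waterLevel",
        "result": {"value": 1.87, "unit": "m"},
        "resultTime": "2016-02-01T10:00:00Z",
    },
}

print(json.dumps(observation))
```

Any GeoJSON-aware tool can put this on a map, while the shared property names let other agencies merge it with their own observations.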
Because more organizations use this data, it should provide useful information (BP6) and be published in a suitable spatial data format (BP7). The agency staff and software applications may also need background and contextual information to interpret the data correctly and efficiently. For example, if the density of the collected flood events/situations varies from location to location, this should be taken into consideration when processing and interpreting the data. The latency and freshness of the data, and the location and quality of measurements and observations, can all have an impact on the analysis and interpretation of the data. The data should come with clear metadata about the context that is required to interpret observation values (BP14).
In some scenarios, for example when agency staff are at the location monitoring events, or when high precision is not required, a common global CRS (WGS 84) can be used to specify the coordinates. However, in automated control, reaction and actuation applications and other high-precision applications (such as precision agriculture and defence), the spatial referencing must be accurate to a few metres or even centimetres. In scenarios for which high precision is required (e.g. automated response and control in flood monitoring and prevention), the coordinate reference system used to locate geospatial entities should be specified explicitly (BP8).
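A quick back-of-the-envelope calculation shows why the CRS and the coordinate precision matter at the latitudes in this scenario. The figures below use the standard approximation for metres per degree on the WGS 84 ellipsoid; the chosen latitude is illustrative.

```python
import math

# At ~52° N, one ten-thousandth of a degree (4 decimal places) spans roughly
# 11 m north-south and 7 m east-west: fine for a situation map, but not for
# automated actuation, where an explicit, precise CRS is needed (BP8).
lat = 52.0
metres_per_deg_lat = 111_320.0                                  # approximate
metres_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))

print(round(metres_per_deg_lat * 1e-4, 1),   # metres per 0.0001° latitude
      round(metres_per_deg_lon * 1e-4, 1))   # metres per 0.0001° longitude
```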
In real-time and continuously changing scenarios, such as flooding, it is difficult for agencies and software applications to work with large datasets. Some data, such as satellite imagery, can be very large: downloading a large dataset can take considerable time and requires sufficient local storage. To address this challenge, it can be useful to provide identifiers for conveniently sized (e.g. higher-granularity) subsets of large datasets that agencies and applications can work with (BP5, BP27).
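One simple way to realise such subset identifiers is to mint a stable URI per tile of the imagery, keyed by zoom level and tile row/column. The base URI and path pattern below are assumptions for illustration, not an existing service.

```python
# Sketch: stable identifiers for conveniently sized subsets of a large
# imagery dataset (BP5, BP27). The URI scheme is a hypothetical example.
BASE = "http://example.org/datasets/flood-imagery"

def subset_uri(zoom: int, col: int, row: int) -> str:
    """Return the identifier for one tile-sized subset of the dataset."""
    return f"{BASE}/tiles/{zoom}/{col}/{row}"

# An application fetches only the tiles covering its area of interest.
print(subset_uri(12, 2109, 1354))
# -> http://example.org/datasets/flood-imagery/tiles/12/2109/1354
```

Because each tile has its own identifier, clients can cache, link to, and re-request exactly the pieces they need instead of the whole dataset.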
Now a situation begins to unfold. Due to several colliding circumstances there are realistic predictions that water levels will rise to unwelcome heights. More people start to get involved in the situation and need data. They search in their usual search engine. The data is published in a form that is crawlable by search engines (BP1-3, BP25), published with explicit links to other resources (BP19), linked to similar/related resources (BP23), and can be found by following links (BP24), so they can find it.
The real situation of dikes and dams starts to be monitored in the field. People are dispatched to specific locations and report the situation at the specified location, as well as everything they have encountered on the way there. They report their findings by linking them to data entities that already exist (BP22), e.g. authoritative identifiers for dikes.
A flood occurs. The water/land boundary of the river shifts, but its URI identifier remains the same (BP4, BP11).
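This separation of a stable identifier from its changing geometry can be sketched as time-stamped geometry versions stored against one persistent URI. The URI, dates and WKT strings are invented for illustration.

```python
# Sketch: the river's geometry changes over time, but its identifier does
# not (BP4, BP11). Older reports that cite the URI stay valid.
river = {
    "uri": "http://example.org/id/river/eem",   # stable, never re-minted
    "geometries": [                             # time-stamped versions
        {"validFrom": "2016-01-01",
         "wkt": "LINESTRING(5.25 52.16, 5.30 52.20)"},
        {"validFrom": "2016-02-03",             # boundary after the flood
         "wkt": "LINESTRING(5.24 52.15, 5.31 52.21)"},
    ],
}

def current_geometry(feature: dict) -> str:
    """Return the WKT of the most recent geometry version."""
    return max(feature["geometries"], key=lambda g: g["validFrom"])["wkt"]

print(current_geometry(river))
```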
On various social media sources civilians will also start searching for information, reporting their own observations, and offering localized help ('come charge your phone at my porch') (BP17). Maybe ad-hoc apps will even quickly emerge. At this stage, discoverable, accessible data is needed that can easily be incorporated in online maps. The current flood information, and information related to the flooding, is published via APIs (BP28-30).
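A minimal sketch of what such an API endpoint might return: current flood observations as GeoJSON that an ad-hoc app can drop straight onto a map. The route, handler shape and payload are assumptions; a real deployment would sit behind a proper web framework (BP28-30).

```python
import json

# Hypothetical payload for a read-only "current floods" endpoint.
CURRENT_FLOODS = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [5.39, 52.15]},
        "properties": {"status": "flooded", "waterLevel_cm": 35},
    }],
}

def get_current_floods() -> tuple:
    """Handler sketch: status code, media type, and a GeoJSON body."""
    return 200, "application/geo+json", json.dumps(CURRENT_FLOODS)

status, ctype, body = get_current_floods()
print(status, ctype)
```

Serving the standard `application/geo+json` media type means generic web-mapping libraries can consume the response without custom parsing.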
In some cases it is necessary to describe the location of a flood event in relation to another location or to the location of another entity: for example, the junction before London Road, or next to the Spectrum leisure centre. A flood alert or social media report may specify such a relative location, in which case the event can be linked to a specific (and/or well-known) location/entity (BP9).
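A relative location can be captured as a link from the report to the well-known entity it is anchored to, plus a spatial relation. The URI, relation name and structure below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch: a citizen report located relative to a well-known entity (BP9).
report = {
    "text": "Road flooded at the junction before London Road",
    "relativeTo": "http://example.org/id/road/london-road",  # illustrative URI
    "relation": "junctionBefore",   # hypothetical relation name
}

# Software that knows where London Road is can now place this report on a
# map, even though the report itself carries no coordinates.
print(report["relativeTo"])
```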
Meanwhile, professional organizations are dealing with the crisis and now need quick access to data, as well as the ability to quickly make sense of data and merge data from different organisations and crowdsourced data. The data is published using well-known spatial vocabularies (BP12, BP13) so that the semantics can be known, and the data is linked to well-known resources (BP22, again), linked to spatial Things (BP21), using meaningful link relationships (BP20). The information can also specify links between the observation data and well-known areas/buildings/places (BP16). For example: the high street close to the supermarket is currently flooded; the water level is currently 35 centimetres.
In some cases the data goes through different pre-processing steps before it is made available to end users. These data processing workflows should also be described (BP15). This can be especially helpful for software applications and data integration components that need to follow and track the changes that have been applied to the original raw data. Agencies can also use this when they store the data in their historical archives.
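Such a workflow description can be as simple as an ordered derivation trail, loosely inspired by W3C PROV. The step names, dataset identifiers and record structure below are assumptions for illustration (BP15).

```python
# Sketch: recording each processing step applied to raw sensor data, so
# downstream users can trace a published dataset back to its raw source.
processing_log = []

def record_step(activity: str, input_id: str, output_id: str) -> None:
    """Append one derivation step to the dataset's provenance trail."""
    processing_log.append({"activity": activity,
                           "used": input_id,
                           "generated": output_id})

record_step("outlier-removal", "raw/levels-2016-02", "clean/levels-2016-02")
record_step("hourly-aggregation", "clean/levels-2016-02", "agg/levels-2016-02")

# An application (or an archiving agency) can now walk the chain from the
# aggregated product back to the raw measurements.
print([step["activity"] for step in processing_log])
```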
All BPs are now mentioned in the narrative above.