
Towards Remote Obligation Enforcement in IoT Systems

Michael Lux, Gerd Brost, Julian Schütte
Fraunhofer AISEC, Germany
{michael.lux,gerd.brost,julian.schuette}@aisec.fraunhofer.de

The Industrial Data Space initiative [1] aims to establish a
network for "secure exchange and the easy combination of data within value
networks" in the Internet of Things (IoT). The accurate enforcement of usage
control requirements is of utmost importance to any IoT application dealing
with business-critical data and decision-making. It is the prerequisite
for establishing the level of trust that enables the exchange of data across
enterprise boundaries and fosters the creation of new data-driven
business models.

While traditional usage control [6] refers mainly to the usage of
resources (e.g., a file or a web service), the IoT raises the need for
data-centric usage control, as it relies heavily on data flows across
trust boundaries. Data is created by sensors and travels across
devices, gateways and services which are not necessarily under the control of a
single user. In that respect, controlling the use of data in IoT scenarios
is similar to Digital Rights Management in the context of copyrighted works.
We investigate whether ODRL as a digital rights policy language might serve as a
standard for data usage control in IoT scenarios. For this purpose, we sketch an
exemplary use case of ODRL and raise open questions and challenges that need to
be tackled in order to apply a language from traditional digital rights
management of static assets in a priori modelled environments to highly dynamic
IoT scenarios.

Data Usage Control in IoT Scenarios

IoT scenarios are characterized by data flowing across devices and services,
typically managed by messaging and integration frameworks such as Apache Camel or Apache Kafka.
Policy frameworks like CamFlow [7] and LUCON [2,5] make it possible to control
data flows within a single message-routing instance. However,
enforcing usage restrictions on remote endpoints requires means to serialize policies
into so-called sticky policies, which are bound to the messages they refer to,
or to specify policies at the data source level. These policies then need to be enforced at
the remote site. Further, a common understanding of permissions,
obligations and duties is required. Without a fully documented, open standard,
all communicating parties must evaluate policies identically, which in practice
means they need identical implementations.

Being such an open standard, ODRL [3] enables data exchange between
any trusted parties that guarantee conformity to this standard. This makes ODRL
a promising candidate for the remote enforcement of data usage restrictions and obligations.
We thus set out to investigate whether ODRL can serve as a
policy language for regulating data usage in IoT applications and whether it can
live up to the requirements of such scenarios.

We want to demonstrate the challenges of this approach with the following
technical scenario:
A car manufacturer (OEM) and a supplier join a common IoT application that
analyses sensor data from the supplier's production line to predict upcoming
supply chain bottlenecks and support risk management on the OEM's side.
On the one hand, the supplier may be reluctant to provide this critical information,
as the OEM could use it to tune its purchasing and pricing strategy to the
disadvantage of the supplier.
On the other hand, improvements of the supply chain would also be in the
interest of the supplier.
Given suitable data usage control mechanisms, the supplier can restrict the
usage of that data to a certain purpose or specific organizational units of the OEM.

Figure 1 shows a LUCON policy that restricts data flows on the supplier's side,
depending on the intended purpose attached to the respective message.
When data leaves the supplier's network towards the endpoint http://www.example.com/service,
it is bound to an obligation that the remote end is expected to enforce.
Figure 1 also shows what the corresponding sticky ODRL policy might look like.

Figure 1 - LUCON policy with ODRL obligation
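
To illustrate the general shape of such a sticky ODRL policy, the following
Turtle sketch couples a purpose-bound permission to use the message with a
duty the receiver has to fulfil. All identifiers, the purpose value and the
concrete duty (here: delete the data after use) are placeholders and may
differ from the policy shown in the figure.

    @prefix odrl: <http://www.w3.org/ns/odrl/2/> .
    @prefix ex:   <http://www.example.com/> .

    # Sketch only: identifiers, the purpose value and the duty are placeholders.
    ex:policy-msg-4711 a odrl:Agreement ;
        odrl:permission [
            odrl:target   ex:message-4711 ;      # the message the policy sticks to
            odrl:assigner ex:supplier ;          # the sender of the message
            odrl:assignee ex:oem ;               # the receiving party
            odrl:action   odrl:use ;
            odrl:constraint [
                odrl:leftOperand  odrl:purpose ;
                odrl:operator     odrl:eq ;
                odrl:rightOperand ex:riskManagement
            ] ;
            odrl:duty [
                odrl:action odrl:delete          # e.g. erase the data after evaluation
            ]
        ] .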

Sticky ODRL Challenges

Linking of Concepts to Technical Implementations: The examples
presented for the ODRL model [3] mention actions like "anonymize"
(as a remedy in case of a prohibition violation) [4], but it is not specified
how such actions are to be linked to concrete technical means.
However, many applications require clear technical definitions of the actions
that have to be taken in case of violations or fulfilled as obligations.
This also means that implementation details have direct effects on the security guarantees.
We want to discuss how abstract actions could be linked to standardized technical measures.
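
The gap can be made concrete with the following Turtle sketch (all identifiers
are placeholders): the remedy of the prohibition only names the abstract action
odrl:anonymize and says nothing about which anonymization technique, tool or
strength the remote party is expected to apply.

    @prefix odrl: <http://www.w3.org/ns/odrl/2/> .
    @prefix ex:   <http://www.example.com/> .

    # Placeholder identifiers; prohibition and remedy only name abstract actions.
    ex:policy-42 a odrl:Set ;
        odrl:prohibition [
            odrl:target ex:sensorData ;
            odrl:action odrl:distribute ;
            odrl:remedy [
                odrl:action odrl:anonymize   # which technique, and how strong?
            ]
        ] .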

Ad-hoc Policy Support and Dynamic Identifiers: The ODRL Information
Model 2.2 [3] requires each policy to be identified by a unique identifier,
and promotes the assignment of such uid attributes to assets, parties,
constraints and other elements used in ODRL policies.
Further, some classes in ODRL require the definition of assigner,
assignee or target within policies.
In the example from Figure 1, the assigner is always
the sender of a message, the assignee always denotes the receiver,
and the target is always the message itself.
Those requirements are therefore inconvenient and cause redundancy when policies
are directly attached to the entities they refer to, especially when those
policies are generated ad hoc from a different representation.
We suggest discussing whether some of those requirements could be simplified
or made optional to avoid redundancy and make ODRL more suitable for such
use cases.
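
A reduced sticky policy might then look as follows. This sketch is purely
hypothetical and not conformant to the current ODRL 2.2 Information Model:
uid, assigner, assignee and target stay implicit because the message the
policy is attached to already determines them.

    @prefix odrl: <http://www.w3.org/ns/odrl/2/> .

    # Hypothetical, non-conformant sketch: identifier, parties and target are
    # implied by the message the policy travels with.
    [] a odrl:Agreement ;
        odrl:permission [
            odrl:action odrl:use ;
            odrl:duty [ odrl:action odrl:delete ]
        ] .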

Capability Profiles: Sometimes, defining failure paths
for failed obligations or violated restrictions is not a sufficient
approach. For instance, it could turn out that a device cannot handle some
requirements for highly sensitive data after receiving it, whilst also lacking
capabilities for failure handling, like secure erasure. This would put the system
in an invalid, forbidden state with unforeseeable consequences. A possible
solution would be to publish policies ahead of time, and let the receiving party
check whether it can fulfil all requirements beforehand.
A more efficient approach without this additional verification round-trip could be
the standardization of "capability profiles", which the parties can use
to publish sets of supported capabilities.
We would like to discuss whether such profiles would be useful and feasible.
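
As a starting point, a capability profile could be a small RDF document
enumerating the ODRL actions a party is able to enforce. The terms
cap:CapabilityProfile and cap:supportsAction below are purely hypothetical
placeholders for whatever a standardized profile vocabulary would define.

    @prefix odrl: <http://www.w3.org/ns/odrl/2/> .
    @prefix cap:  <http://www.example.com/capability#> .

    # Hypothetical sketch: cap:CapabilityProfile and cap:supportsAction are
    # not standardized terms, only placeholders for a possible vocabulary.
    cap:oem-connector a cap:CapabilityProfile ;
        cap:supportsAction odrl:delete ,       # secure erasure available
                           odrl:anonymize .    # anonymization module available

A sender could then match the duties and remedies of a sticky policy against
such a profile before releasing any data.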


[1] http://www.industrialdataspace.org/
[2] https://industrial-data-space.github.io/trusted-connector-documentation/docs/usage_control/
[3] https://www.w3.org/TR/odrl-model/
[4] https://www.w3.org/TR/odrl-model/#duty-prohib
[5] G. S. Brost and J. Schütte. LUCON: Data flow control for message-based IoT systems.
In Proceedings of the 17th IEEE International Conference on Trust, Security
and Privacy in Computing and Communications (TrustCom), 2018.
[6] J. Park and R. Sandhu. The UCON_ABC usage control model.
ACM Trans. Inf. Syst. Secur., 7(1):128–174, Feb. 2004.
[7] T. F. J. M. Pasquier, J. Singh, D. Eyers, and J. Bacon.
CamFlow: Managed data-sharing for cloud services.
IEEE Transactions on Cloud Computing, 5(3):472–484, 2017.