RDF Dataset Canonicalization and Hash Working Group Charter — Explainer and Use Cases

More details about this document
Latest published version:
https://www.w3.org/rch-wg-charter/
Latest editor's draft:
https://w3c.github.io/rch-wg-charter/explainer.html
History:
Commit history
Editors:
Ivan Herman (W3C)
Manu Sporny (Digital Bazaar)
Aidan Hogan (DCC, Universidad de Chile)
Feedback:
GitHub w3c/rch-wg-charter (pull requests, new issue, open issues)

Abstract

This is a supporting document for the proposed RDF Dataset Canonicalization and Hash Working Group Charter, providing some extra explanation of the problem space and associated use cases.

1. Terminology

1.1 Canonicalization terminology

For a precise definition of the various terms and concepts, the reader should refer to the formal RDF specification [rdf11-concepts].

RDF Datasets

R, R' and S each denote an RDF Dataset [rdf11-concepts].

Identical RDF Datasets

R = S denotes that R and S are identical RDF Datasets.

Two RDF Datasets are identical if and only if they have the same default graph (under set equality) and the same set of named graphs (under set equality).

If R and S are identical, we may equivalently say that they are the same RDF Dataset.
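As a running illustration for the sketches in this document, one can model an RDF Dataset in Python as a set of (subject, predicate, object, graph) tuples, with graph set to None for the default graph and terms written as plain strings. This is a deliberate simplification of the formal model in [rdf11-concepts], and the ex: names below are hypothetical. Identity is then plain set equality:

    # quads as (s, p, o, g) tuples; g = None marks the default graph
    R = {("ex:a", "ex:p", "ex:b", None), ("ex:a", "ex:p", "ex:c", "ex:g")}
    S = {("ex:a", "ex:p", "ex:c", "ex:g"), ("ex:a", "ex:p", "ex:b", None)}
    assert R == S  # same quads, hence the same RDF Dataset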

Isomorphic RDF Datasets

R ≈ S denotes that R and S are isomorphic RDF Datasets.

In particular, R is isomorphic with S if and only if it is possible to map (i.e., relabel) the blank nodes of R to the blank nodes of S in a one-to-one manner, generating an RDF Dataset R' such that R' = S.
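Under the simplified quad-tuple model sketched above, isomorphism can be checked by brute force. The following Python sketch is exponential in the number of blank nodes and is meant only to make the definition concrete, not to serve as a practical test:

    from itertools import permutations

    def isomorphic(r, s):
        # r and s are sets of quad tuples; blank nodes are "_:"-prefixed strings
        def bnodes(d):
            return sorted({t for q in d for t in q
                           if isinstance(t, str) and t.startswith("_:")})
        br, bs = bnodes(r), bnodes(s)
        if len(r) != len(s) or len(br) != len(bs):
            return False
        for perm in permutations(bs):
            m = dict(zip(br, perm))            # candidate one-to-one relabeling
            relabeled = {tuple(m.get(t, t) for t in q) for q in r}
            if relabeled == s:                 # this is the R' = S of the definition
                return True
        return False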

RDF Dataset Canonicalization

RDF Dataset Canonicalization is a function C that maps an RDF Dataset to an RDF Dataset in a manner that satisfies the following two properties for all RDF Datasets R and S:

  • R ≈ C(R); and
  • C(R) = C(S) if and only if R ≈ S.

We may refer to C(R) as the canonical form of R (under C).

Such a canonicalization function can be implemented, in practice, as a procedure that deterministically labels all blank nodes of an RDF Dataset in a one-to-one manner, without depending on any feature (blank node labels, order of the triples, etc.) of the serialization of the input RDF Dataset.
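The following Python sketch illustrates the idea of such a procedure with a naive, hash-based "color refinement" over the quad-tuple model used above. It is deliberately not sound and complete: in highly symmetric datasets, distinct blank nodes end up with identical hashes, which is precisely the hard case that the algorithms discussed in the next section address.

    import hashlib

    def simple_canonical_labels(quads):
        # Naive deterministic labeling sketch; NOT sound and complete.
        is_bnode = lambda t: isinstance(t, str) and t.startswith("_:")
        bnodes = {t for q in quads for t in q if is_bnode(t)}
        color = dict.fromkeys(bnodes, "")      # initially, all blank nodes look alike
        for _ in range(len(bnodes)):           # |bnodes| refinement rounds suffice
            new = {}
            for b in bnodes:
                # describe each quad mentioning b from b's point of view,
                # hiding all blank node labels behind their current colors
                sigs = sorted(
                    "|".join("@self" if t == b
                             else (color[t] if is_bnode(t) else str(t))
                             for t in q)
                    for q in quads if b in q)
                # fold in b's previous color so the partition only ever refines
                new[b] = hashlib.sha256(
                    (color[b] + "\n".join(sigs)).encode()).hexdigest()
            color = new
        # label blank nodes in order of their final hash; ties (symmetric data)
        # would require the distinguishing steps of a real algorithm
        return {b: f"_:c14n{i}"
                for i, b in enumerate(sorted(bnodes, key=lambda b: color[b]))}

Applying the returned mapping to the quads of the input yields a candidate canonical form; when the final hashes are all distinct, the result does not depend on the input blank node labels or on the order of the quads.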

Note

It is important to emphasize that the term “canonicalization” is used here in its very generic form, described as:

In computer science, canonicalization […] is a process for converting data that has more than one possible representation into a “standard”, “normal”, or canonical form.
Source: Wikipedia.

Canonicalization, as used in the context of this document and the proposed charter, is indeed defined on an abstract data model (i.e., on RDF Datasets [rdf11-concepts]), regardless of any specific serialization. (It could also be referred to as a “canonical labelling scheme”.) It is therefore very different from the usage of the term in, for example, the “Canonical XML” [xml-c14n11] or the “JSON Canonicalization Scheme” [rfc8785] specifications, which are, essentially, syntactic transformations of the respective documents. Any comparison with those documents can be misleading.

1.1.1 The General Problem Space

Though canonical labeling procedures for directed and undirected graphs have been studied for several decades, algorithms targeting RDF specifically are more recent. Algorithms for signing RDF data were proposed in [carroll-2003] and [kasten-et-al-2014], both reviewed through the anonymous scholarly peer review process; however, these approaches are not sound and complete with respect to isomorphism. Only in the past 10 years have two generalized and comprehensive approaches been proposed for RDF Graphs and RDF Datasets:

  1. The algorithms defined by Aidan Hogan in [hogan-2017], reviewed through the anonymous scholarly peer review process, and also implemented by the author.
  2. The algorithm defined by Rachel Arnold and Dave Longley, see [arnold-longley-2020], reviewed by experts at Mirabolic Consulting, and implemented and deployed via, e.g., the JSON-LD Signatures package used in several JSON-LD Signature suites.

The introduction of Aidan Hogan’s paper [hogan-2017] also contains a more thorough description of the underlying mathematical challenges.

2. Defining an RDF Dataset Hash

One possible approach to calculating the hash of an RDF Dataset R may involve the following steps:

  1. use an RDF Dataset Canonicalization function C to calculate C(R);
  2. serialize C(R) to N-Quads [n-quads] and sort the resulting set of quads;
  3. apply a (traditional) hashing function h to the result of the serialization to yield h(R); this value can be considered the cryptographic hash of the Dataset.

The second step, i.e., serializing the dataset as quads and sorting them, also requires the specification of what could be considered a “canonical” version of N-Quads [n-quads] files (handling of white space, the exact sorting algorithm to be used, canonical representation of datatyped literals, etc.). Considering the simplicity of the N-Quads format, this does not necessitate a significant specification effort, but it has value in its own right. That being said, there may be other approaches to defining a hash that do not necessarily involve a sorted N-Quads representation: the Working Group will have to determine the best approach.
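As a minimal sketch of the three steps, assuming the quads of C(R) are already given with canonical blank node labels and with every term in its N-Quads lexical form (IRI brackets, literal escaping, and the like are glossed over here):

    import hashlib

    def dataset_hash(canonical_quads):
        # step 2: one N-Quads-style line per quad (default graph: no label), sorted
        lines = sorted(" ".join(str(t) for t in q if t is not None) + " .\n"
                       for q in canonical_quads)
        # step 3: a traditional hash function over the sorted serialization
        return hashlib.sha256("".join(lines).encode("utf-8")).hexdigest()

Every choice hidden in these few lines (line syntax, sort order, character encoding, choice of hash function) is exactly what a “canonical” N-Quads specification would have to pin down.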

The main challenge for the Working Group is to provide a standard for the RDF Dataset Canonicalization function.

Note

When the hash of a file is transferred from one system to another and the file is used as is, no processing is needed beyond checking the value of the hash of the file. This is true for RDF just as for any other data format; any existing hashing function may be used on the original file. However, RDF has many serializations for datasets, notably TriG, JSON-LD, N-Quads, or, informally, CBOR-LD. The space-efficient verification use case points to a need, in some circumstances, to transform (usually to minimize) the data that is transferred. In this scenario, a hash of the original file, such as one calculated on a JSON-LD file, is not appropriate, as the conversion invalidates it. A hash of the abstract dataset, on the other hand, remains valid.

3. Use Cases and Requirements

Some typical use cases for RDF Dataset Canonicalization and/or signatures are:

Detecting changes in Datasets
When processing RDF Datasets over a period of time, determining whether information has changed is helpful. For example, knowing that information has changed helps with data cache invalidation, with detecting whether expected data has been tampered with or modified, and with debugging unexpected changes in source RDF Datasets.
Space-efficient verification of the contents of Datasets
If unique identification of RDF Datasets is possible, one can cryptographically hash the information to establish a storage-efficient way of verifying that the information has not changed over time. One property of a cryptographic hash is that it allows data integrity to be verified. For example, a small device sending an RDF Dataset to a remote storage location can compute a cryptographic hash for later use in verifying that all the data arrived intact and has not been tampered with.
(Contributed by Alan Karp.)
Secret confirmation of the contents of Datasets
Since a cryptographic hash is a one-way function, and serves as an abbreviation for the entire RDF Dataset, one can use it in places where secrecy is desired. For example, when ensuring that the transaction history on a distributed ledger is the same between two services, two systems could keep track of the list of transactions in their respective ledgers. Canonicalizing and cryptographically hashing the list of transactions should result in the same cryptographic hash without either party needing to share the list of transactions with the other.
(Contributed by Alan Karp.)
Annotating Datasets with digital signatures and other digital proofs
When publishing or transmitting an RDF Dataset, clearly articulating the entity that published the data and protecting it from undetected modification is useful for mission-critical systems. For example, understanding the issuer of a Verifiable Credential and ensuring that it is evident when a Verifiable Presentation has been tampered with underlies the trustworthiness of the encoded information. This process can be implemented by calculating the hash of the Verifiable Credential, represented as an RDF Dataset, and using a standard cryptographic signature method to ensure the integrity of the hash value (see the sketch after this list).
Generating canonical Skolem IRIs for blank nodes
Skolem IRIs have been proposed in RDF 1.1 as a way to replace blank nodes with IRIs in application scenarios where it is preferable to avoid the use of blank nodes. Rather than using an ad hoc scheme to generate Skolem IRIs to replace blank nodes, an alternative is to generate Skolem IRIs in a deterministic manner, such that compliant implementations will generate the same IRIs to replace the same blank nodes in isomorphic copies of an RDF graph or dataset. Such a procedure will produce a canonical version of a Skolemized RDF graph or dataset that can then be used in the context of several of the use cases mentioned previously (a minimal illustration follows this list).
Semantic consistency of multi-part datasets
The change detection and space-efficient verification use cases above can be leveraged in situations where a graph or dataset semantically relies on one or more other graphs, to which it refers through links. Attaching cryptographic hashes to these links would enable verification of the overall integrity of the set of interconnected graphs. One such example is the import mechanism of OWL: the ontology consumer may wish to verify that the imported ontology is the same as the one used by the author of the importing ontology; otherwise the resulting inferences may differ. Another example is an EARL test report: the consumer may wish to ensure that the test description pointed to by the report is the one that was actually used for the test.
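To make the digital proof use case concrete, the following sketch signs a dataset hash (such as one produced by the dataset_hash sketch above; the hex digest here is a hard-coded placeholder) with an Ed25519 key. It relies on the third-party Python cryptography package; key management and actual proof formats, such as those used with Verifiable Credentials, are out of scope:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # placeholder for the hex digest of a canonicalized dataset
    message = "93a23971a914e5eacbf0a8d25154cda309c3c1c72fbb9914d47c60f3cb68"
    message = message.encode("utf-8")

    issuer_key = Ed25519PrivateKey.generate()  # in practice, the publisher's key
    signature = issuer_key.sign(message)

    # the verifier recomputes the canonical hash and checks the signature
    try:
        issuer_key.public_key().verify(signature, message)
        print("dataset is intact and was signed by the key holder")
    except InvalidSignature:
        print("dataset or signature was tampered with")

Similarly, once canonical blank node labels exist, deterministic Skolem IRIs (for the Skolemization use case) follow directly. The sketch below uses the /.well-known/genid/ pattern suggested by [rdf11-concepts], with an illustrative authority:

    labels = {"_:b0": "_:c14n0"}  # e.g., output of simple_canonical_labels above
    skolem = {b: "https://example.org/.well-known/genid/" + lab.removeprefix("_:")
              for b, lab in labels.items()}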

A. References

A.1 Informative references

[arnold-longley-2020]
RDF Dataset Normalization. Rachel Arnold; Dave Longley. Report submitted to the W3C Credentials Community Group mailing list. URL: https://lists.w3.org/Archives/Public/public-credentials/2021Mar/att-0220/RDFDatasetCanonicalization-2020-10-09.pdf
[carroll-2003]
Signing RDF Graphs. Jeremy J. Carroll. International Semantic Web Conference — ISWC 2003, Springer Verlag, pp 369-384. URL: https://link.springer.com/chapter/10.1007/978-3-540-39718-2_24
[hogan-2017]
Canonical Forms for Isomorphic and Equivalent RDF Graphs: Algorithms for Leaning and Labelling Blank Nodes. Aidan Hogan. ACM Transactions on the Web, vol. 11, no. 4, pp. 22:1-22:62. URL: http://aidanhogan.com/docs/rdf-canonicalisation.pdf
[kasten-et-al-2014]
A Framework for Iterative Signing of Graph Data on the Web. Andreas Kasten; Ansgar Scherp; Peter Schauß. European Semantic Web Conference — ESWC 2014, Springer Verlag, pp. 146-160. URL: https://link.springer.com/chapter/10.1007%2F978-3-319-07443-6_11
[n-quads]
RDF 1.1 N-Quads. Gavin Carothers. W3C. 25 February 2014. W3C Recommendation. URL: https://www.w3.org/TR/n-quads/
[rdf11-concepts]
RDF 1.1 Concepts and Abstract Syntax. Richard Cyganiak; David Wood; Markus Lanthaler. W3C. 25 February 2014. W3C Recommendation. URL: https://www.w3.org/TR/rdf11-concepts/
[rfc8785]
JSON Canonicalization Scheme (JCS). A. Rundgren; B. Jordan; S. Erdtman. IETF. June 2020. Informational. URL: https://www.rfc-editor.org/rfc/rfc8785
[xml-c14n11]
Canonical XML Version 1.1. John Boyer; Glenn Marcy. W3C. 2 May 2008. W3C Recommendation. URL: https://www.w3.org/TR/xml-c14n11/