W3C | Semantic Web | SWAP

Motivations for N3

The N3 language -- and the SWAP code such as Cwm which experiments with it and demonstrates it -- was motivated by a desire to show the Semantic Web working in a practical, tryable prototype, with the minimum of fuss and clutter. Minimum fuss includes an absence of extra features: this suggests a tactic of reusing existing features as far as possible -- pushing the bounds of one style or pattern until it is demonstrably unreasonable (and then stopping!).

N3 requirements

The Semantic Web involves doing things with data and logic but using URIs for identifiers, so that the semantics of symbols can be shared with other applications and other agents across the world.

Notation3 is a language for interchange which is designed to span and integrate many levels, from data transfer to the expression of logic, proof and trust.

A data language

The first task of N3 was to allow data -- pure facts -- to be written down simply and easily in a text file -- or in an IRC chat channel, for that matter. RDF had evolved as a way of expressing any data, be it tables, trees, or mixtures of the two, in a circles-and-arrows style, but the XML format was too verbose for use in conversation or on a whiteboard.

The ";" and "," syntaxes followed from an observation that often many values were given for the same predicate of the same subject, and many statements were made about the same subject.
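For example, a subject with several properties (the vocabulary here is purely illustrative, not from any real schema) can be written without repeating the subject:

```n3
@prefix : <http://example.org/terms#> .

:pat :child :jo, :chris ;    # "," gives a second value for the same predicate
     :age   24 .             # ";" adds another predicate for the same subject
```

Without these abbreviations, each of the three statements would need its own full subject-predicate-object triple.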

Most of the information in most real systems is data (although the balance between data and rules varies widely). Even the ontology layer (OWL) is expressible using the data language. So RDF schemas and ontologies (unlike SGML and XML DTDs) are expressed in the same language.
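As a minimal sketch of this, a class hierarchy can be stated as ordinary triples (the class names below are invented for illustration; only the rdfs: namespace is real):

```n3
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :    <http://example.org/terms#> .

:Dog rdfs:subClassOf :Animal .   # schema information, written as plain data
```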

A goal of N3 was not only to make a concise and usable language for data, but also to show how much could be done by using that data language.

A framework for logic

The Semantic Web layer cake is a simplified roadmap showing how many languages of different expressive power will be useful, and must interoperate as much as possible on the Web.

One aim of N3 was to make the transition from the data representation language to the expressive power of various other languages -- including rules and queries -- as seamless as reasonable. Axioms of the RDFS and OWL layers can be represented in N3, not because the logic is a FOL whose expressivity includes OWL, but rather because the quoting and substitution used by cwm's log:includes allow any syntactic graph-to-graph transform to be expressed.
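A minimal sketch of such a rule in N3, using invented example terms, quotes two formulae in braces and connects them with the implication operator:

```n3
@prefix : <http://example.org/terms#> .

# If x has parent y, and y has sister z, then z is an aunt of x.
{ ?x :parent ?y . ?y :sister ?z . } => { ?x :aunt ?z . } .
```

The rule is itself just more data: a statement whose subject and object are quoted formulae.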


On this Web of information, much of that information is about other information. Indeed, the first driver for a common language for data was data about information resources (hence the now rather outdated name Resource Description Framework). In this environment, attempts to avoid paradox by avoiding data referring to data are not going to work. An alternative approach is to embrace quoting - to be able to make comments in one document about other documents without imposing any acyclic requirement on the whole global system.

Information work in a real world necessarily involves considering information without just adding it to the pile of things one believes. An agent makes his or her or its way through life considering who said what. This requires referential opacity to distinguish between talking about Mary and saying "Mary", to distinguish between what a document actually said and what you would now infer from what it said.
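In N3 this is done by quoting a formula in braces, so that it is mentioned rather than asserted (the :says predicate and the other terms here are illustrative only):

```n3
@prefix : <http://example.org/terms#> .

# We record what the document claimed, without asserting it ourselves.
<doc1> :says { :Mary :attended :meeting1 } .
```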


The interesting thing about Semantic Web data -- and rules -- is that they can be re-used in ways not designed by their originator. This works better for modular and declarative information than for monolithic or procedural information. Therefore, N3 is required not so much to be a scripting language (although see [1]) as to be able to express rules in a way which makes them re-usable out of context.

Cwm requirements

Cwm is designed to implement as much of the Semantic Web layer cake as possible to show (a) that it could be done and (b) what the semantic web was about.

It is not designed to be particularly efficient, or to scale well for large datasets or large rulesets. There are a host of interesting algorithms which have been used in existing systems: databases and the SQL world; forward- and backward-chaining inference systems, Prolog, and the rest of the logic programming world; and KIF and the knowledge representation world. The idea is not that the cwm inference engine is special, but rather that it is not special: you can take any inference system, webize it, and make a Semantic Web agent. Webizing involves:

Each URI scheme has different properties; making the software aware of them, and capable of implementing them, makes it a true Semantic Web application, not just a "Semantic" application.

So goals for cwm include the ability to:


[1] The Haystack project at MIT/LCS takes an N3-like language and extends it to a complete procedural programming language.

Tim BL, with his director hat off
$Id: Motivation.html,v 1.15 2003/04/08 15:54:04 timbl Exp $