Revision as of 07:23, 3 December 2012
- 1 Linked Data Platform Use Cases And Requirements
- 1.1 Steps to Complete
- 1.2 Scope and Motivation
- 1.3 Organization of this Document
- 1.4 User Stories
- 1.4.1 Maintaining Social Contact Information
- 1.4.2 Keeping Track of Personal and Business Relationships
- 1.4.3 System and Software Development Tool Integration
- 1.4.4 Library Linked Data
- 1.4.5 Municipality Operational Monitoring
- 1.4.6 Healthcare
- 1.4.7 Metadata enrichment in broadcasting
- 1.4.8 Aggregation and Mashups of Infrastructure Data
- 1.4.9 Data Sharing
- 1.4.10 Hosting POSTed Resources
- 1.4.11 LDP and Authentication/Authorization
- 1.4.12 Sharing Binary Resources and Metadata
- 1.4.13 Data catalogs
- 1.4.14 Constrained Devices and Networks
- 1.4.15 Services Supporting the Process of Science
- 1.4.16 Project Membership Information : Information Evolution
- 1.4.17 Cloud Infrastructure Management
- 1.5 Use Cases
- 1.5.1 UC1: Manage containers
- 1.5.2 UC2: Manage resources
- 1.5.3 UC3: Retrieve resource description
- 1.5.4 UC4: Update existing resource
- 1.5.5 UC5: Determine if a resource has changed
- 1.5.6 UC6: Aggregate resources
- 1.5.7 UC7: Filter resource description
- 1.5.8 UC8: Managing non-RDF Resources
- 1.6 Requirements
- 1.7 Acknowledgements
- 1.8 References
Linked Data Platform Use Cases And Requirements
This is a working document used to collect use cases and requirements for consideration by the WG. The starting point comes from Linked Data Basic Profile Use Cases and Requirements.
Steps to Complete
Nov 26? WG to confirm User Story content: add, remove, refine (see process below). Note: this is ONLY User Stories (not Use Cases, Use Case Scenarios or Requirements)
- Issues raised before Nov 26th? will be included in the FPWD
- Editors to:
- Dec 3 refine User Stories based on feedback
- Dec 5 elaborate on Use Cases in support of User Stories
- Dec 6 insert open issues into draft for FPWD
- Dec 7 - convert wiki page to ReSpec for FPWD
- Dec 10 - WG to review prior to FPWD
- Dec 17 - Publish FPWD
- Dec ?? Deadline for publications by year end 2012
Process to introduce new User Stories & Use Cases
Open an Issue in the tracker against the UC&R product. The WG will review these and decide whether these are valid.
Scope and Motivation
Linked Data was defined by Tim Berners-Lee with the following guidelines:
- Use URIs as names for things
- Use HTTP URIs so that people can look up those names
- When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL)
- Include links to other URIs, so that they can discover more things
These four rules have proven very effective in guiding and inspiring people to publish Linked Data on the web. The amount of data, especially public data, available on the web has grown rapidly, and an impressive number of extremely creative and useful “mashups” have been created using this data as a result.
There has been much less focus on the potential of Linked Data as a model for managing data on the web - the majority of the Application Programming Interfaces (APIs) available on the Internet for creating and updating data follow a Remote Procedure Call (RPC) model rather than a Linked Data model.
If Linked Data were just another model for doing something that RPC models can already do, it would be of only marginal interest. Interest in Linked Data arises from the fact that applications with an interface defined using Linked Data can be much more easily and seamlessly integrated with each other than applications that offer an RPC interface. In many problem domains, the most important problems and the greatest value are found not in the implementation of new applications, but in the successful integration of multiple applications into larger systems.
Some of the features that make Linked Data exceptionally well suited for integration include:
- A single interface – defined by a common set of HTTP methods – that is universally understood and is constant across all applications. This is in contrast with the RPC architecture where each application has a unique interface that has to be learned and coded to.
- A universal addressing scheme – provided by HTTP URLs – for both identifying and accessing all “entities”. This is in contrast with the RPC architecture where there is no uniform way to either identify or access data.
- A simple yet extensible data model – provided by RDF – for describing data about a resource in a way which doesn’t require prior knowledge of the vocabulary being used.
Experience implementing applications and integrating them using Linked Data has shown very promising results, but has also demonstrated that the original four rules defined by Tim Berners-Lee for Linked Data are not sufficient to guide and constrain a writable Linked Data API. As was the case with the original four rules, the need generally is not for the invention of fundamental new technologies, but rather for a series of additional rules and patterns that guide and constrain the use of existing technologies in the construction of a Basic Profile for Linked Data to achieve interoperability.
The following list illustrates a few of the issues that require additional rules and patterns:
- What URLs do I post to in order to create new resources?
- How do I get lists of existing resources, and how do I get basic information about them without having to access each one?
- How should I detect and deal with race conditions on write?
- What media-types/representations should I use?
- What standard vocabularies should I use?
- What primitive data types should I use?
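One recurring answer to the race-condition question is HTTP's conditional-request mechanism. The following sketch shows how a server-side check of an ETag against a client's If-Match header can reject a lost update; the helper function and status-code policy are illustrative assumptions, not something LDP defines.

```python
# Sketch of optimistic concurrency control with HTTP ETags.
# The helper and its 428 policy are illustrative, not defined by LDP.

def check_conditional_write(stored_etag, if_match_header):
    """Return the HTTP status an update attempt should receive.

    stored_etag     -- the server's current ETag for the resource
    if_match_header -- the client's If-Match value, or None if absent
    """
    if if_match_header is None:
        return 428  # Precondition Required: force clients to send If-Match
    if if_match_header == "*" or if_match_header == stored_etag:
        return 200  # precondition holds, apply the update
    return 412      # Precondition Failed: another client changed the resource

# A client that read version "v1" tries to write after another client
# has already advanced the resource to "v2":
status = check_conditional_write(stored_etag='"v2"', if_match_header='"v1"')
```

The write is refused with 412, so the slower client must re-read the resource and reapply its change rather than silently overwriting the other client's update.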
A good goal for the Basic Profile for Linked Data would be to define a specification required to allow the definition of a writable Linked Data API equivalent to the simple application APIs that are often written on the web today using the Atom Publishing Protocol (APP). APP shares some characteristics with Linked Data, such as the use of HTTP and URLs. One difference is that Linked Data relies on a flexible data model with RDF, which allows for multiple representations.
Organization of this Document
This document is organized as follows:
- User Stories capture statements about system requirements written from a user or application perspective. They are typically lightweight and informal and can run from one line to a paragraph or two (sometimes described as an 'epic'). Analysis of each user story will reveal a number of (functional) use-cases and other non-functional requirements. See Device API Access Control Use Cases and Requirements for a good example of user stories and their analysis.
- Use Cases are used to capture and model functional requirements. Use cases describe the system’s behavior under various conditions, cataloguing who does what with the system, for what purpose, but without concern for system design or implementation. Each use case is identified by a reference number to aid cross-reference from other documentation; use-case indexing in this document is based on rdb2rdf use-cases. A variety of styles may be used to capture use-cases, from a simple narrative to a structured description with actors, pre/post conditions, and step-by-step behaviours as in POWDER: Use Cases and Requirements, and non-functional requirements raised by the use-case. Use cases act like the hub of a wheel, with spokes supporting requirements analysis, scenario-based evaluation, testing, and integration with non-functional, or quality, requirements.
- Scenarios are more focused still, representing a single instance of a use case in action. Scenarios may range from lightweight narratives as seen in Use cases and requirements for Media Fragments, to being formally modeled as interaction diagrams. Each use-case should include at least a primary scenario, and possibly other alternative scenarios.
- Requirements list non-functional or quality requirements, and the use cases they may be derived from. This approach is exemplified in the Use Cases and Requirements for the Data Catalog Vocabulary.
Maintaining Social Contact Information
Many of us have multiple email accounts that include information about the people and organizations we interact with – names, email addresses, telephone numbers, instant messenger identities and so on. When someone’s email address or telephone number changes (or they acquire a new one), our lives would be much simpler if we could update that information in one spot and all copies of it would automatically be updated. In other words, those copies would all be linked to some definition of “the contact.” There might also be good reasons (like off-line email addressing) to maintain a local copy of the contact, but ideally any copies would still be linked to some central “master.”
Agreeing on a format for “the contact” is not enough, however. Even if all our email providers agreed on the format of a contact, we would still need to use each provider’s custom interface to update or replace the provider’s copy, or we would have to agree on a way for each email provider to link to the “master”. If we look outside our own personal interests, it would be even more useful if the person or organization exposed their own contact information so we could link to it.
What would work in either case is a common understanding of the resource, agreement on the few formats needed, and access guidance for these resources. This would cover how to acquire a link to a contact; how to use those links to interact with a contact (including reading, updating, and deleting it); how to easily create a new contact and add it to my contacts; and how a deleted contact would be removed from my list of contacts. It would also be good to be able to add some application-specific data about my contacts that the original design didn’t consider. Ideally we’d like to eliminate multiple copies of contacts; even so, there may be additional valuable information about my contacts stored on separate servers, and we need a simple way to link this information back to the contacts. Regardless of whether a contact collection is my own, shared by an organization, or all contacts known to an email provider (or to a single email account at an email provider), it would be nice if they all worked pretty much the same way.
Keeping Track of Personal and Business Relationships
In our daily lives, we deal with many different organizations in many different relationships, and they each have data about us. However, it is unlikely that any one organization has all the information about us. Each of them typically gives us access to the information (at least some of it), many through websites where we are uniquely identified by some string – an account number, user ID, and so on. We have to use their applications to interact with the data about us, however, and we have to use their identifier(s) for us. If we want to build any semblance of a holistic picture of ourselves (more accurately, collect all the data about us that they externalize), we as humans must use their custom applications to find the data, copy it, and organize it to suit our needs.
Would it not be simpler if at least the Web-addressable portion of that data could be linked to consistently, so that instead of maintaining various identifiers in different formats and instead of having to manually supply those identifiers to each one’s corresponding custom application, we could essentially build a set of bookmarks to it all? When we want to examine or change their contents, would it not be simpler if there were a single consistent application interface that they all supported? Of course it would.
Our set of links would probably be a simple collection. The information held by any single organization might be a mix of simple data and collections of other data, for example, a bank account balance and a collection of historical transactions. Our bank might easily have a collection of accounts for each of its collection of customers.
System and Software Development Tool Integration
System and software development tools typically come from a diverse set of vendors and are built on various architectures and technologies. These tools are purpose-built to meet the needs of a specific domain scenario (modeling, design, requirements, and so on). Often tool vendors view integrations with other tools as a necessary evil rather than as additional value for their end-users. Even more of an afterthought is how these tools’ data -- such as people, projects, customer-reported problems and needs -- integrates and relates to corporate and external applications that manage data such as customers, business priorities and market trends. The problem can be isolated by standardizing on a small set of tools or a set of tools from a single vendor, but this rarely occurs, and when it does it usually does so only within small organizations. As these organizations grow both in size and complexity, they need to work with outsourced development and with other diverse internal organizations that have their own sets of tools and processes. There is a need for better support of more complete business processes (system and software development processes) that span the roles, tasks, and data addressed by multiple tools. This demand has existed for many years, and the tools vendor industry has tried several different architectural approaches to address the problem. Here are a few:
- Implement an API for each application, and then, in each application, implement “glue code” that exploits the APIs of other applications to link them together.
- Design a single database to store the data of multiple applications, and implement each of the applications against this database. In the software development tools business, these databases are often called “repositories.”
- Implement a central “hub” or “bus” that orchestrates the broader business process by exploiting the APIs described previously.
It is fair to say that although each of those approaches has its adherents and can point to some successes, none of them is wholly satisfactory. The use of Linked Data as an application integration technology has a strong appeal. OSLC
Library Linked Data
The W3C Library Linked Data working group has a number of use cases cited in their Use Case Report. LLD-UC These referenced use cases focus on the need to extract and correlate library data from disparate sources. Variants of these use cases that can provide consistent formats, as well as ways to improve or update the data, would enable simplified methods for both efficiently sharing this data as well as producing incremental updates without the need for repeated full extractions and import of data.
The 'Digital Objects Cluster' contains a number of relevant use-cases:
- Grouping: This should "Allow the end-users to define groups of resources on the web that for some reason belong together. The relationship that exists between the resources is often left unspecified. Some of the resources in a group may not be under control of the institution that defines the groups."
- Enrichment: "Enable end-users to link resources together."
- Browsing: "Support end-user browsing through groups and resources that belong to the groups."
- Re-use: "Users should have the capability to re-use all or parts of a collection, with all or part of its metadata, elsewhere on the linked Web."
The 'Collections' cluster also contains a number of relevant use-cases:
- Collection-level description: "Provide metadata pertaining to a collection as a whole, in contrast to item-level description."
- Collections discovery: "Enable innovative collection discovery such as identification of nearest location of a physical collection where a specific information resource is found or mobile device applications ... based on collection-level descriptions."
- Community information services: Identify and classify collections of special interest to the community.
Municipality Operational Monitoring
Across cities, towns, counties, and other municipalities, there is a growing number of services managed and run by municipalities that produce and consume a vast amount of information. This information is used to help monitor services, predict problems, and handle logistics. In order to effectively and efficiently collect, produce, and analyze all this data, a fundamental set of loosely coupled standard data sources is needed. A simple, low-cost way to expose data from the diverse set of monitored services is needed, one that can be easily integrated with the municipality's other systems that inspect and analyze the data. All these services have links and dependencies on other data and services, so having a simple and scalable linking model is key.
Healthcare
For physicians to analyze, diagnose, and propose treatment for patients requires a vast amount of complex, changing and growing knowledge. This knowledge needs to come from a number of sources, including physicians’ own subject knowledge, consultation with their network of other healthcare professionals, public health sources, food and drug regulators, and other repositories of medical research and recommendations.
To diagnose a patient’s condition requires current data on the patient’s medications and medical history. In addition, recent pharmaceutical advisories about these medications are linked into the patient’s data. If the patient experiences adverse effects from medications, physicians need to publish information about this to an appropriate regulatory source. Other medical professionals require access to both validated and emerging effects of the medication. Similarly, if there are geographical patterns around outbreaks, information that raises awareness of new symptoms and treatments needs to quickly reach a very distributed and diverse set of medical information systems. Also, reporting back to these regulatory agencies regarding new occurrences of an outbreak, including additional details of symptoms and causes, is critical in producing the most effective treatment for future incidents.
Metadata enrichment in broadcasting
There are many different use cases in which broadcasters show interest in metadata enrichment:
- enrich archive or news metadata by linking facts, events, locations and personalities
- enrich metadata generated by automatic extraction tools such as person identification, etc.
- enrich definitions of terms in classification schemes or enumeration lists
This comes in support of more effective information management and data/content mining (if you can't find your content, it's as if you don't have it, and you must either recreate it or acquire it again, which is not financially effective).
However, there is a need for solutions facilitating linkage to other data sources and taking care of issues such as discovery, automation, disambiguation, etc. Other important issues that broadcasters would face are the editorial quality of the linked data, its persistence, and usage rights.
Aggregation and Mashups of Infrastructure Data
For infrastructure management (such as storage systems, virtual machine environments, and similar IaaS and PaaS concepts), it is important to provide an environment in which information from different sources can be aggregated, filtered, and visualized effectively. Specifically, the following use cases need to be taken into account:
- While some data sources are based on Linked Data, others are not, and aggregation and mashups must work across these different sources.
- Consumers of the data sources and aggregated/filtered data streams are not necessarily implementing Linked Data themselves, they may be off-the-shelf components such as dashboard frameworks for composing visualizations.
- Simple versions of this scenario are pull-based, where the data is requested from data sources. In more advanced settings, without a major change in architecture it should be possible to move to a push-based interaction model, where data sources push notifications to subscribers, and data sources provide different services that consumers can subscribe to (such as "informational messages" or "critical alerts only").
In this scenario, the important factors are to have abstractions that allow easy aggregation and filtering, are independent from the internal data model of the sources that are being combined, and can be used for pull-based interactions as well as for push-based interactions.
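The push-based variant of this scenario can be sketched as a minimal publish/subscribe layer in which consumers subscribe to named services such as "critical alerts only". The class and service names below are illustrative assumptions, not part of any LDP interface.

```python
# Minimal publish/subscribe sketch for the push-based interaction model.
# The DataSource class and service names ("info", "critical") are
# illustrative assumptions, not an LDP construct.

class DataSource:
    def __init__(self):
        self.subscribers = {}  # service name -> list of callbacks

    def subscribe(self, service, callback):
        # A consumer (e.g. a dashboard) registers for one service.
        self.subscribers.setdefault(service, []).append(callback)

    def publish(self, service, message):
        # Push the notification to every consumer of this service only.
        for callback in self.subscribers.get(service, []):
            callback(message)

source = DataSource()
alerts = []
source.subscribe("critical", alerts.append)        # dashboard wants alerts only
source.publish("info", "disk 3 at 60% capacity")   # not delivered to it
source.publish("critical", "storage pool offline") # delivered
```

The same subscribe/publish abstraction works whether the underlying transport is pull (polling on the consumer's schedule) or push (server-initiated notifications), which is the architectural flexibility the scenario asks for.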
In a downscaled context, where the use of a central data repository is replaced by several smaller servers, it is necessary to be able to ship information among the servers. A device in the network may publish information on a server with another device as the target receiver. This message will then have to be forwarded from server to server until that target is reached. A set of common standards for updating the content of containers and the description of the resources will be necessary to implement such a feature (not taking the routing aspect into consideration here).
Hosting POSTed Resources
<http://dev.example/bugs> is a factory resource for creating new bugs (well, documenting existing bugs). It accepts <Bug>s of the form:
_:newBug a <Bug> ;
    <product> <http://products.example.com/gun> ;
    <issueText> "kills people" ;
    dc:author "Bob" ;
    dc:date "2012-07-04T23:54"^^xsd:dateTime .
By this definition "hosting" means changing _:newBug to <http://dev.example/bug/7>. LDP doesn't provide any guidance around that.
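Since LDP leaves the hosting step open, one way a server might implement it is to mint a URI and substitute it for the client's blank node. The counter-based bug/<n> URI scheme below is an assumption for illustration only.

```python
# Sketch of a server minting a URI for a POSTed blank node.
# The bug/<n> numbering scheme is an assumption; LDP does not prescribe
# how the hosted URI is chosen.

import itertools

_bug_ids = itertools.count(7)  # pretend six bugs already exist

def host_posted_resource(turtle_text, blank_node="_:newBug"):
    """Replace the client's blank node with a newly minted URI."""
    minted = "<http://dev.example/bug/%d>" % next(_bug_ids)
    return minted, turtle_text.replace(blank_node, minted)

posted = '_:newBug a <Bug> ; <issueText> "kills people" .'
uri, hosted = host_posted_resource(posted)
```

After hosting, the server would typically return the minted URI in the Location header of the 201 Created response, so the client can address the bug report directly from then on.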
LDP and Authentication/Authorization
Access to the Linked Data Platform often may require authentication and authorization. Access by clients can depend on the interaction context (different resources are needed for accomplishing different goals), client identity (different clients have different levels of access), and possibly client roles (access control may be coupled to roles instead of identities, in which case client/role associations need to be established) and/or client attributes (access control may be coupled to attributes which can be used even when the client identities are unknown, assuming that the attributes can be reliably determined). On the Web, many different ways of identification (establishing naming schemes that uniquely identify entities of interest), authentication (frameworks for verifying someone's claim to have an identity), and authorization (granting access to a resource based on an identity and some access control scheme) exist. In addition, in many cases platforms must integrate into existing scenarios around these issues, and cannot freely pick a framework from scratch. Thus, LDP should provide developers with some guidance around the following issues:
- How should LDP services integrate into existing landscapes of identification, authentication, and authorization? For some established technologies, maybe we can provide guidance and patterns on how to support them.
- What is a reasonable authentication and authorization model? LDP will be used to drive many different services, and these might have different models of how clients should have access to LDP services in the context of those specific application scenarios. What is a reasonable model so that LDP providers can handle this flexibility, and still can support identification, authentication, and authorization frameworks that are dictated by the environment? In this context, is it more reasonable to deal with roles or attributes in granting the access to the clients?
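The role-versus-attribute question can be made concrete with a small sketch in which the same request is evaluated both ways. The role names, attributes, and policy rules below are invented purely for illustration.

```python
# Sketch contrasting role-based and attribute-based access decisions.
# Roles, attributes, and policy rules here are invented for illustration.

def allowed_by_role(client_roles, required_role):
    # Role-based: access is granted to anyone holding the required role,
    # so client/role associations must be established beforehand.
    return required_role in client_roles

def allowed_by_attributes(client_attrs, policy):
    # Attribute-based: every attribute demanded by the policy must match.
    # This works even when the client's identity is unknown, provided the
    # attributes can be reliably determined.
    return all(client_attrs.get(k) == v for k, v in policy.items())

roles_ok = allowed_by_role({"editor", "reviewer"}, required_role="editor")
attrs_ok = allowed_by_attributes(
    {"department": "research", "clearance": "high"},
    policy={"department": "research"},
)
```

The trade-off mirrors the question in the story: roles require the provider to manage identity-to-role mappings, while attributes push the burden onto reliably establishing claims about otherwise anonymous clients.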
Sharing Binary Resources and Metadata
When publishing datasets about stars one may want to publish links to the pictures in which those stars appear, and this may well require publishing the pictures themselves. Vice versa: when publishing a picture of space we need to know which telescope took the picture, which part of the sky it was pointing at, what filters were used, which identified stars are visible, who can read it, who can write to it, ...
If Linked Data contains information about resources that are most naturally expressed in non-RDF formats (be they binary, such as pictures or videos, or human-readable documents in XML formats), those non-RDF formats should be just as easy to publish to the Linked Data server as the RDF relations that link those resources up. A Linked Data server should therefore also allow publishing of non-Linked Data resources, and make it easy to publish and edit metadata about those resources.
The resource comes in two parts: the image itself and information about the image (which may be embedded in the image file, but is better kept external to it, as that is more general). The information about the image is vital. The resource is a compound item of image data and other data; from the platform's point of view, there is no distinction between the image and the application metadata that describes it.
Data catalogs
The Asset Description Metadata Schema (ADMS) provides the data model to describe the contents of semantic asset repositories, but this leaves many open challenges when building a federation of these repositories to serve the need of asset reuse. These include accessing and querying individual repositories and efficiently retrieving updated content without having to retrieve the whole content. Hence, we chose to build the integration solution capitalizing on the Data Warehousing integration approach. This allows us to cope with the heterogeneity of source technologies and to benefit from the optimized performance it offers, given that individual repositories do not usually change frequently. With Data Warehousing, the federation needs to:
- understand the data, i.e. understand their semantic descriptions and those of other systems
- seamlessly exchange the semantic assets metadata from different repositories
- keep itself up-to-date.
Repository owners can maintain de-referenceable URIs for their repository description and contained assets in a Linked Data compatible manner. ADMS provides the necessary data model to enable meaningful exchange of data. However, this leaves the challenge of efficient access to the data not fully addressed.
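Keeping the warehouse up-to-date without re-importing everything can be sketched as a timestamp-based incremental pull. Filtering by modification date is an assumption about how a repository exposes its changes; the URIs and dates below are invented for illustration.

```python
# Sketch of incremental harvesting from an asset repository.
# Filtering by a modification timestamp is an illustrative assumption
# about how the repository exposes changed content.

def changed_since(assets, last_sync):
    """Return only the assets modified after the previous harvest."""
    # ISO 8601 date strings compare correctly as plain strings.
    return [a for a in assets if a["modified"] > last_sync]

repository = [
    {"uri": "http://repo.example/asset/1", "modified": "2012-10-01"},
    {"uri": "http://repo.example/asset/2", "modified": "2012-11-20"},
]
delta = changed_since(repository, last_sync="2012-11-01")
```

Only the delta crosses the network on each harvest, which is what makes the warehousing approach viable for repositories that change infrequently.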
Related: Data Catalog Schema and Protocol
Constrained Devices and Networks
Information coming from resource constrained devices in the Web of Things (WoT) has been identified as a major driver in many domains, from smart cities to environmental monitoring to real-time tracking. The amount of information produced by these devices is growing exponentially and needs to be accessed and integrated in a systematic, standardized and cost-efficient way. By using the same standards as on the Web, integration with applications will be simplified and higher-level interactions among resource constrained devices, abstracting away heterogeneities, will become possible. Upcoming IoT/WoT standards such as 6LowPAN - IPv6 for resource constrained devices - and the Constrained Application Protocol (CoAP), which provides a downscaled version of HTTP on top of UDP for use on constrained devices, are already at a mature stage. The next step now is to support RESTful interfaces also on resource constrained devices, adhering to the Linked Data principles. Due to the limited resources available, both on the device and in the network (such as bandwidth, energy, memory), a solution based on SPARQL Update is not currently considered useful or feasible. An approach based on the HTTP-CoAP Mapping would enable constrained devices to directly participate in a Linked Data-based environment.
Services Supporting the Process of Science
Many fields of science now include branches with in silico data-intensive methods, e.g. bioinformatics, astronomy. To support these new methods we look to move beyond the established platforms provided by scientific workflow systems to capture, assist, and preserve the complete lifecycle from record of the experiment, through local trusted sharing, analysis, dissemination (including publishing of experimental data "beyond the PDF"), and re-use.
- Aggregations, specifically Research Objects (ROs) that are exchanged between services and clients bringing together workflows, data sets, annotations, and provenance. We use an RDF model for this. While some aggregated contents are encoded using RDF and in increasing number are linked data sources, others are not; while some are stored locally "within" the RO, others are remote (in both cases this is often due to size of the resources or access policies).
- Services that are distributed and linked. Some may be centralised, e.g. for publication; others may be local, e.g. per lab. We need lightweight services that can be simply and easily integrated into, and scale across, the wide variety of software and data used in science: we have adopted a RESTful approach where possible.
- Foundation services that collect and expose ROs for storage, modification, exploration, and reuse.
- Services that provide added value to ROs, such as seamless import/export from scientific workflow systems, automated stability evaluation, or recommendation (and therefore interact with the foundation services to retrieve/store/modify ROs).
- Compatibility with access control that can reflect the needs for privacy and publication at different stages of the research lifecycle.
Project Membership Information : Information Evolution
Information about people and projects changes as roles change, as organisations change and as contact details change. Finding the current state of a project is important in enabling people to contact the right person in the right role. It can also be useful to look back and see who was performing what role in the past.
A use of a Linked Data Platform could be to give responsibility for managing such information to the project team itself, without requiring updates to be requested of a centralised website administrator.
This could be achieved with:
- Resource descriptions for each person and project
- A container resource to describe roles/membership in the project.
To retain the history of the project, old versions of a resource, including container resources, should be retained, so there is a need to address both specific versions and also have a notion of "current".
Access to information has two aspects:
- Access to the "current" state, regardless of the version of the resource description
- Access to historical state, via access to a specific version of the resource description
See also Maintaining Social Contact Information.
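A minimal version store satisfying both access patterns above can be sketched as follows. Keeping full copies of every version is an illustrative assumption; a real platform might store diffs or named snapshots instead.

```python
# Sketch of a resource store that keeps history alongside a "current" view.
# Storing a full copy per version is an illustrative assumption.

class VersionedResource:
    def __init__(self):
        self.versions = []  # version 1 lives at self.versions[0]

    def update(self, description):
        # Every update is retained rather than overwritten.
        self.versions.append(description)
        return len(self.versions)  # the new version number

    def current(self):
        # Access the "current" state, regardless of version number.
        return self.versions[-1]

    def at_version(self, n):
        # Access a specific historical state.
        return self.versions[n - 1]

membership = VersionedResource()
membership.update({"alice": "developer"})
membership.update({"alice": "project lead"})
```

Looking back to see who performed what role in the past is then just a matter of dereferencing an earlier version number, while most clients only ever follow the "current" view.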
Cloud Infrastructure Management
Cloud operators offer API support to provide customers with remote access for infrastructure management. Infrastructure consists of Systems, Computers, Networks, Discs, etc., and the overall structure can be seen as mostly hierarchical (Cloud contains Systems, Systems contain Machines, etc.). This is complemented with crossing links (e.g. Machines connected to a Network). The IaaS scenario also imposes requirements for lifecycle management, non-instant changes, and history capture. Infrastructure management can be seen as the manipulation of the underlying graph.
Use Cases
The following use-cases are each derived from one or more of the user-stories above. These use-cases are explored in detail through the development of scenarios, each motivated by some key aspect exemplified by a single user-story. The examples they contain are included purely for illustrative purposes, and should not be interpreted normatively.
UC1: Manage containers
Containers are the primary mechanism for creating and managing resources within an application context. Resources grouped together within the same container would typically belong to the same application. A container is identified by a URI so is a resource in its own right. The properties of a container may also represent the affordances of that container, enabling clients to determine what other operations they can do on that container. These operations may include descriptions of application specific services that can be invoked by exchanging RDF documents.
Primary scenario: create container
Create a new container resource within the LDP server. In Services Supporting the Process of Science, Research Objects are semantically rich aggregations of resources that bring together data, methods and people in scientific investigations. A basic workflow research object will be created to aggregate scientific workflows and the artefacts that result from them. The research object begins life as an empty container into which workflows, datasets, results and other data will be added throughout the lifecycle of the project.
@prefix ro: <http://purl.org/wf4ever/ro#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix ore: <http://www.openarchives.org/ore/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<> a ro:ResearchObject, ore:Aggregation ;
   dct:created "2012-12-01"^^xsd:date .
Alternative scenario: create a nested container
The motivation for nested containers comes from the Hosting POSTed Resources user-story. The OSLC Change Management vocabulary allows bug reports to have attachments referenced by the membership predicate oslc_cm:attachment. The top-level container contains issues, and each issue resource has its own container of attachment resources.
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix oslc_cm: <http://open-services.net/ns/cm#> .
@prefix : <http://example.org/> .

:top-level-container rdfs:member :issue1234 .

:issue1234 a oslc_cm:ChangeRequest ;
   dcterms:identifier "1234" ;
   dcterms:type "a bug" ;
   dcterms:related :issue1235 ;
   oslc_cm:attachments :attachments123 .

:issue1235 a oslc_cm:ChangeRequest ;
   dcterms:title "a related bug" .

:attachments123 a oslc_cm:AttachmentList ;
   oslc_cm:attachment :attachment324, :attachment251 .
UC2: Manage resources
This use-case addresses the managed lifecycle of a resource and is concerned with resource ownership. The responsibility for managing resources belongs with their container. We focus on creation and deletion of resources in the context of a container, and the potential for transfer of ownership by moving resources between containers. The ownership of a resource should always be clear; no resource managed in this way should ever be owned by more than one container.
Primary scenario: create resource
Resources begin life by being created within a container. From the user-story Maintaining Social Contact Information, it should be possible to "easily create a new contact and add it to my contacts." Contact details are captured as an RDF description and its properties, including "names, email addresses, telephone numbers, instant messenger identities and so on." The description may include non-standard RDF; "data about my contacts that the original design didn’t consider." The new resource is created in a container representing "my contacts." This container is used to manage the resources within it. So, for example, if "my contacts" is deleted then a user would also reasonably expect that all contacts within it would also be deleted. The following RDF describes a contact resource including examples of same-document references.
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<> a foaf:PersonalProfileDocument ;
   foaf:primaryTopic <#me> .

<#me> a foaf:Person ;
   foaf:name "Henry" .
While the LDP has ultimate control over resource naming, some applications may require more control over naming, perhaps to provide a more human-readable URI. An LDP server may support something like the Atom Publishing Protocol slug header to convey a user defined naming 'hint'.
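The request shape for this creation-with-a-hint pattern can be sketched in Python. The container URL is hypothetical, and the Slug header is borrowed from the Atom Publishing Protocol (RFC 5023); a given LDP server may or may not honour it, so it is only a naming hint.

```python
import urllib.request

# Turtle body for the new contact resource (same example as above).
TURTLE = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<> a foaf:PersonalProfileDocument ; foaf:primaryTopic <#me> .
<#me> a foaf:Person ; foaf:name "Henry" .
"""

def make_create_request(container_url, turtle, slug=None):
    """Build (but do not send) a POST asking the container to mint a new
    member resource, optionally passing a user-defined naming hint."""
    headers = {"Content-Type": "text/turtle"}
    if slug:
        headers["Slug"] = slug  # hint only; the server decides the final URI
    return urllib.request.Request(container_url, data=turtle.encode("utf-8"),
                                  headers=headers, method="POST")

req = make_create_request("http://example.com/contacts/", TURTLE, slug="henry")
```

Whatever the hint, the server retains ultimate control: it may use the slug verbatim, disambiguate it, or ignore it entirely.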
Alternative scenario: delete resource
Delete a resource and all its properties. If the resource resides within a container it will be removed from that container; however, other links to the deleted resource may be left as dangling references. In the case where the resource is a container, the server may also delete any or all contained resources. In normal practice, a deleted resource cannot be reinstated. There are, however, edge-cases where a limited undelete may be desirable. Best practice states that "Cool URIs don't change", which means that deleted URIs shouldn't be recycled.
In practice, sensitive data will be subject to access control as described in user-story LDP and Authentication/Authorization. In this scenario authentication is based on Web Access Control. The user authenticates to the LDP server using FOAF+SSL, returning a WebID - a URI identifying the user. We assume the LDP holds an RDF Access Control List (ACL) expressed in the Basic Access Control ontology. This is used to determine whether or not the user is authorized to perform the operation, in this case a deletion (a write operation). In the ACL fragment below the agent identified as <http://example.com/card#i> is permitted access to delete <resourceX>.
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

[] acl:accessTo <resourceX> ;
   acl:mode acl:Read, acl:Write ;
   acl:agent <http://example.com/card#i> .
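The authorization decision the server makes against such an ACL can be sketched as a pure function. The tuple-based triple representation and the `is_authorized` helper are illustrative assumptions, not part of any real ACL library.

```python
# ACL triples mirroring the fragment above, as (subject, predicate, object).
ACL = [
    ("_:auth1", "acl:accessTo", "<resourceX>"),
    ("_:auth1", "acl:mode", "acl:Read"),
    ("_:auth1", "acl:mode", "acl:Write"),
    ("_:auth1", "acl:agent", "<http://example.com/card#i>"),
]

def is_authorized(acl, agent, resource, mode):
    """True if some authorization grants `mode` on `resource` to `agent`."""
    for subj, _, res in (t for t in acl if t[1] == "acl:accessTo"):
        grants = {(p, o) for s, p, o in acl if s == subj}
        if (res == resource
                and ("acl:agent", agent) in grants
                and ("acl:mode", mode) in grants):
            return True
    return False

# DELETE is a write operation, so deletion requires acl:Write.
allowed = is_authorized(ACL, "<http://example.com/card#i>", "<resourceX>", "acl:Write")
```

A real Web Access Control implementation would also consider agent classes and inherited defaults; this sketch only covers direct agent grants.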
Alternative scenario: moving contained resources
Created by ericP -- validate
Many resources may have value beyond the life of their membership in a container. For instance, the workflows, datasets and other data described in the create a container use case may be useful in other Research Objects. Cloning container members for use in other containers results in duplication of information and maintenance problems; web practice is to encourage the creation of one resource, which may be referenced in as many places as necessary. This implies methods to add references to external resources to containers, and that it may sometimes be undesirable to delete contained resources when deleting a container.
UC3: Retrieve resource description
Access the current description of a resource, containing properties of that resource and links to related resources. The representation may include descriptions of related resources that cannot be accessed directly.
Depending upon the application, an LDP may enrich the retrieved RDF with additional triples. Examples include adding incoming links, sameAs closure and type closure.
The HTTP response should also include versioning information (i.e. last update or entity tag) so that subsequent updates can ensure they are being applied to the correct version.
Primary scenario: retrieve description of a resource
Based on Maintaining Social Contact Information, a user should be able to read contact details so that they are able to interact with a contact. An LDP holds social contact information about Alice. In this example the contact details make no distinction between resources and the people they describe. The resource http://example.com/people/Alice is described by the following RDF model. A request for this resource returns an RDF representation in the desired format, which could be Turtle or another RDF serialisation.
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<http://example.com/people/Alice> a foaf:Person ;
   rdfs:label "Alice" ;
   foaf:mbox <mailto:email@example.com> .
Alternative scenario: retrieve description of a non-document resource
In many cases, the things that are of interest are not the things that are directly resolvable. The example below demonstrates how a FOAF profile may be used to distinguish between the person and the profile; the former being the topic of the latter. This raises the question of what a client should do with such non-document resources. In this case the HTTP protocol requires that the fragment part be stripped off before requesting the URI from the server. The result is a resolvable URI for the profile.
@base <http://www.w3.org/People/Berners-Lee/card> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .

<> a foaf:PersonalProfileDocument ;
   dc:title "Tim Berners-Lee's FOAF file" ;
   foaf:homepage <http://www.w3.org/People/Berners-Lee/> ;
   foaf:primaryTopic <#i> .
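The fragment-stripping rule is mechanical enough to show with Python's standard library: to dereference the person URI <#i>, a client requests the document URI with the fragment removed.

```python
from urllib.parse import urldefrag

# The non-document resource: a person identified with a hash URI.
person = "http://www.w3.org/People/Berners-Lee/card#i"

# Splitting off the fragment yields the resolvable profile document URI.
document, fragment = urldefrag(person)
```

The client then issues a GET for `document`; the response describes both the profile document and, via the `#i` fragment, the person who is its primary topic.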
UC4: Update existing resource
Change the RDF description of an LDP resource, potentially removing or overwriting existing data. This also allows applications to enrich the representation of a resource by adding additional links to other resources.
Primary scenario: enrichment
This relates to user-story Metadata enrichment in broadcasting and is based on the BBC Sports Ontology. The resource-centric view of linked-data provides a natural granularity for substituting, or overwriting a resource and its data. The simplest kind of update would simply replace what is currently known about a resource with a new representation. There are two distinct resources in the example below; a sporting event and an associated award. The granularity of the LDP would allow a user to replace the information about the award without disturbing the information about the event.
@prefix : <http://example.com/> .
@prefix sport: <http://www.bbc.co.uk/ontologies/sport/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:mens_sprint a sport:MultiStageCompetition ;
   rdfs:label "Men's Sprint" ;
   sport:award <#gold_medal> .

<#gold_medal> a sport:Award .
We can enrich the description as events unfold, linking to the winner of the gold medal by substituting the above description with the following.
@prefix : <http://example.com/> .
@prefix sport: <http://www.bbc.co.uk/ontologies/sport/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

:mens_sprint a sport:MultiStageCompetition ;
   rdfs:label "Men's Sprint" ;
   sport:award <#gold_medal> .

<#gold_medal> a sport:Award ;
   sport:awarded_to [ a foaf:Agent ; foaf:name "Chris Hoy" ] .
Alternative scenario: selective update of a resource
@prefix : <http://example.com/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:catalog a dcat:Catalog ;
   dcat:dataset <http://example.com/dataset/001> ;
   dcterms:issued "2012-12-11"^^xsd:date .
A catalog may contain multiple datasets, so when linking to new datasets it would be simpler and preferable to selectively add just the new dataset links. A Talis changeset could be used to express such a partial update. The following update would be directed to the catalogue to add an additional dataset.
@prefix : <http://example.com/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix cs: <http://purl.org/vocab/changeset/schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<change1> a cs:ChangeSet ;
   cs:subjectOfChange :catalog ;
   cs:createdDate "2012-01-01T00:00:00Z" ;
   cs:changeReason "Update catalog datasets" ;
   cs:addition [
      a rdf:Statement ;
      rdf:subject :catalog ;
      rdf:predicate dcat:dataset ;
      rdf:object <http://example.com/dataset/002>
   ] .
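How a server might apply such a changeset can be sketched as set operations over a triple store: additions are asserted, removals retracted. The tuple-based graph and the `apply_changeset` helper are illustrative assumptions, not part of any changeset library.

```python
# Current state of the catalog, as a set of (subject, predicate, object) triples.
catalog = {
    (":catalog", "rdf:type", "dcat:Catalog"),
    (":catalog", "dcat:dataset", "<http://example.com/dataset/001>"),
}

# The statements reified under cs:addition in the changeset above.
additions = [
    (":catalog", "dcat:dataset", "<http://example.com/dataset/002>"),
]

def apply_changeset(graph, additions=(), removals=()):
    """Return a new graph with removals retracted and additions asserted."""
    return (set(graph) - set(removals)) | set(additions)

updated = apply_changeset(catalog, additions=additions)
```

The existing dataset link and issued date are untouched; only the new link is asserted, which is the point of a selective update over whole-resource replacement.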
UC5: Determine if a resource has changed
It should be possible to retrieve versioning information about a resource (e.g. last modified or entity tag) without having to download a representation of the resource. This information can then be compared with previous information held about that resource to determine if it has changed. This versioning information can also be used in subsequent conditional requests to ensure they are only applied if the version is unchanged.
Based on the user-story Constrained Devices and Networks, an LDP could be configured to act as a proxy for a CoAP-based Web of Things. As an observer of CoAP resources, the LDP registers its interest so that it will be notified whenever the sensor reading changes. Clients of the LDP can interrogate the LDP to determine if the state has changed.
In this example, the information about a sensor and corresponding sensor readings can be represented as RDF resources. The first resource below represents a sensor described using the Semantic Sensor Network ontology.
@prefix : <http://example.com/energy-management/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#> .

<> a :MainsFrequencySensor ;
   rdfs:comment "Sense grid load based on mains frequency" ;
   ssn:hasMeasurementCapability [
      a :FrequencyMeasurementCapability ;
      ssn:hasMeasurementProperty <#property_1>
   ] .
The value of the sensor changes in real-time as measurements are taken. The LDP client can interrogate the resource below to determine if it has changed, without necessarily having to download the RDF representation. As different sensor properties are represented disjointly (separate RDF representations) they may change independently.
@prefix : <http://example.com/energy-management/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://example.com/energy-management#property_1> :hasMeasurementPropertyValue <> .

<> a :FrequencyValue ;
   :hasQuantityValue "50"^^xsd:float .
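The client-side check can be sketched with HTTP conditional requests: the client presents the entity tag it last saw, and the server answers 304 Not Modified if the reading has not changed. The ETag values are invented for illustration; the decision logic follows If-None-Match semantics.

```python
def conditional_get(current_etag, if_none_match=None):
    """Return (status, etag): 304 with no body when the presented ETag
    still matches the resource's current ETag, otherwise 200 with the
    new ETag for the client to cache for its next request."""
    if if_none_match is not None and if_none_match == current_etag:
        return 304, current_etag  # Not Modified: skip the representation
    return 200, current_etag

# Sensor value changed since the client last looked: full response.
status_changed, etag = conditional_get('"v42"', if_none_match='"v41"')

# Value unchanged: the client avoids downloading the RDF representation.
status_unchanged, _ = conditional_get('"v42"', if_none_match='"v42"')
```

The same ETag also supports safe updates: supplying it in an If-Match header on a later PUT ensures the update applies only to the version the client saw.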
UC6: Aggregate resources
There is a requirement to be able to manage aggregations of resources. These are (weak) aggregations, unrelated to the lifecycle management of resources, and distinct from the ownership between a resource and its container. There is a need to be able to create aggregations by adding and deleting individual membership properties.
Primary scenario: add a resource to a collection
There is an existing collection at <http://example.com/concept-scheme/subject-heading> that defines a collection of subject headings. This collection is defined as a skos:ConceptScheme, and the client wishes to insert a new concept into the scheme, which will be related to the collection via a skos:inScheme link. The new subject-heading, "outer space exploration", is not necessarily owned by a container. The following RDF would be added to the (item-level) description of the collection.
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix scheme: <http://example.com/concept-scheme/> .
@prefix concept: <http://example.com/concept/> .

scheme:subject-heading a skos:ConceptScheme .

concept:OuterSpaceExploration skos:inScheme scheme:subject-heading .
Alternative scenario: add a resource to multiple collections
Logically, a resource should not be owned by more than one container. However, it may be a member of multiple collections, which define a weaker form of aggregation. As this is simply a manipulation of the RDF description of a collection, it should be possible to add the same resource to multiple collections.
As a machine-readable collection of medical terms, the SNOMED ontology is of key importance in healthcare. SNOMED CT allows concepts to have more than one parent, forming a polyhierarchy rather than a simple tree. In the example below, the same concept falls under two different parent concepts. The example uses skos:narrowerTransitive to elide intervening concepts.
@prefix : <http://example.com/snomed/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

:_119376003 a skos:Concept ;
   skos:prefLabel "Tissue specimen" ;
   skos:narrowerTransitive :_128166000 .

:_127462005 a skos:Concept ;
   skos:prefLabel "Specimen from heart" ;
   skos:narrowerTransitive :_128166000 .

:_128166000 a skos:Concept ;
   skos:prefLabel "Tissue specimen from heart" .
UC7: Filter resource description
This use-case extends the normal behaviour of retrieving an RDF description of a resource, by dynamically excluding specific (membership) properties. For containers, it is often desirable to be able to read a collection, or item-level description that excludes the container membership.
Primary scenario: retrieve collection-level description
This scenario, based on Library Linked Data, uses the Dublin Core Metadata Initiative Collection-Level description. A collection can refer to any aggregation of physical or digital items. This scenario covers the case whereby a client can request a collection-level description as typified by the example below, without necessarily having to download a full listing of the items within the collection.
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix dcmitype: <http://purl.org/dc/dcmitype/> .
@prefix cld: <http://purl.org/cld/terms/> .

<> dc:type dcmitype:Collection ;
   dc:title "Directory of organizations working with Linked Data" ;
   dcterms:abstract "This is a directory of organisations specializing in Linked Data." ;
   cld:isLocatedAt <http://dir.w3.org> ;
   cld:isAccessedVia <http://dir.w3.org/rdf/2012/directory/directory-list.xhtml?construct> .
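One way a server could produce such a collection-level view is to filter membership triples out of the full description before serialising. The set of membership predicates below is an illustrative assumption; different applications use different predicates.

```python
# Predicates treated as "membership" for filtering purposes (assumed set).
MEMBERSHIP_PREDICATES = {"rdfs:member", "skos:inScheme"}

def collection_level(triples, membership=MEMBERSHIP_PREDICATES):
    """Drop membership triples, leaving only the collection's own metadata."""
    return [t for t in triples if t[1] not in membership]

# Full description: collection metadata plus its (possibly huge) member list.
graph = [
    ("<>", "dc:title", "Directory of organizations working with Linked Data"),
    ("<>", "rdfs:member", "<#org1>"),
    ("<>", "rdfs:member", "<#org2>"),
]
summary = collection_level(graph)
```

The client receives only the title and similar metadata, never paying the cost of transferring the member list it did not ask for.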
Alternative scenario: retrieve item-level description of a collection
This use-case scenario focuses on obtaining an item-level description of the resources aggregated by a collection. The simplest scenario is where the members of a collection are returned within a single representation, so that a client can explore the data by following these links. Different applications may use different membership predicates to capture this aggregation. The example below uses rdfs:member, but many different membership predicates are in common use, including RDF Lists. Item-level descriptions can be captured using the Functional Requirements for Bibliographic Records (FRBR) ontology.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix frbr: <http://purl.org/vocab/frbr/core#> .

<> rdfs:member <#work97>, <#work21279> .

<#work97> a frbr:LiteraryWork ;
   dc:title "Flatland: a romance of many dimensions" ;
   frbr:creator <#Abbott_Edwin> ;
   frbr:manifestation <#ebook97> .

<#work21279> a frbr:LiteraryWork ;
   dc:title "2 B R 0 2 B" ;
   frbr:creator <#Vonnegut_Kurt> ;
   frbr:manifestation <#ebook21279> .
Collections are potentially very large, so some means may be required to limit the size of RDF representation returned by the LDP (e.g. pagination).
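The paging idea can be sketched as splitting the member list into fixed-size chunks, each page carrying a pointer to the next. The page numbering scheme here is an illustrative assumption; an LDP would expose pages as URIs.

```python
def paginate(members, page_size):
    """Yield (page_number, members_on_page, next_page_number_or_None)."""
    pages = [members[i:i + page_size]
             for i in range(0, len(members), page_size)]
    for n, chunk in enumerate(pages, start=1):
        yield n, chunk, (n + 1 if n < len(pages) else None)

# Seven members split into pages of three: two full pages and a final partial one.
members = [f"<#item{i}>" for i in range(1, 8)]
pages = list(paginate(members, page_size=3))
```

A client walks the pages by following each "next" link until it is absent, reconstructing the full membership incrementally rather than in one large response.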
UC8: Managing non-RDF Resources
From the user-story Sharing Binary Resources and Metadata, it should be possible to easily add non-RDF resources to containers that accept them.
A user is trying to create a work order along with an attached image showing a faulty machine part. To the user and to the work order system, these two artifacts are managed as a set. A single request may create the work order, the attachment, and the relationship between them, atomically. When the user retrieves the work order later, they expect a single request by default to retrieve the work order plus all attachments. When the user updates the work order, e.g. to mark it completed, they only want to update the work order proper, not its attachments. Users may add/remove/replace attachments to the work order during its lifetime.
Requirements
TODO: Refine these based on use case and scenario updates
- Define a minimal set of RDF media-types/representations
- Define a limited number of literal value types
- Use standard vocabularies as appropriate
- Update resources, either RDF-based or not
- Use optimistic collision detection on updates
- Ensure clients are ready for resource format and type changes
- Apply minimal constraints for creation and update
- Add a resource to an existing container
- Remove a resource, including any associations with a container
- Get members of a container
- When getting members of a container, provide data about the members
- Get just data about a container, without all the members
- Handle a large number of members of a container, breaking up representation into pages
- Allow pages to have order information for members, within a page and across all pages