Draft Report of the SSN-XG Incubator Group
Perhaps this will migrate to the required "Abstract" of the final report
<all> please insert abbreviations here as you use them elsewhere.
- Semantic Sensor Network or Semantic Sensor Networks
- Incubator Group
- Semantic Sensor Network Incubator Group
- Commonwealth Scientific and Industrial Research Organisation, Australia
- Open Geospatial Consortium
- Sensor Web Enablement
- Southeastern Universities Research Association
Background to SSNXG
Rationale and sponsors for the W3C incubator group
It is estimated that there are today some 4 billion mobile devices that can act as sensors. These are complemented by an even larger number of fixed sensors recording observations across a wide variety of modalities. The National Research Council in the US predicts that these numbers may grow into the trillions by 2020. Sensors are increasingly being connected to Web infrastructure, and the OGC's Sensor Web Enablement (SWE) package of interoperability standards is being adopted in industry, government and academia. Meanwhile, the W3C's Web of Data concept is gaining traction through the Semantic Web initiative, which aims to model complex, dynamic, heterogeneous and distributed open information systems. Semantics is increasingly seen as a key enabler for the integration of sensor data into broader Web information systems. The analytical and reasoning capabilities afforded by Semantic Web standards and technologies are important for developing advanced applications that progress from capturing observations to recognising events and, ultimately, to comprehensive situation awareness. A W3C incubator group offers an international forum for advancing interoperability within sensor systems using W3C technologies.
The Semantic Sensor Network (SSN) Incubator Group was conceived in CSIRO with early support from the Boeing Company and Wright State University. Both have been developing architectures and software tools for sensor network systems and had semantically-enabled tools to assist with a range of challenges. Both have also experimented with the OGC Sensor Web Enablement (SWE) standards and have realised the benefits of upgrading those standards to incorporate semantic approaches and technologies.
The Group, recognising the interoperability and broader applicability benefits of a collaborative effort, has developed a formal OWL DL ontology for modelling sensor devices, systems and processes. The development was informed by a thorough review of previous sensor ontologies, and the concurrent development of an informal vocabulary of the main terms, drawing heavily on earlier vocabularies. Several approaches were investigated to harmonize OGC SWE encodings with sensor ontologies developed by this group. This included a unified semantic annotation strategy applied to SWE standards, including OGC SensorML, Observations and Measurements, and SweCommon.
Initially, we also planned to investigate mappings between OGC standards and OWL specifications of sensors, and to describe how mappings can be constructed both from these ontologies to existing standards and from suitably annotated documents complying with existing standards to the ontologies. We used the informal OGC ontology in our formal ontology development, but ....< what to say here?>.. Our coverage of "existing standards" here is limited to SensorML. <currently under discussion - needs fixing -- Kerry>
".. < suggest removing of this paragraph, or we can say.. that we show mapping of SWE XML instances to Sensor Ontology instances, which were possible because of the semantic annotation strategy.>" -- Luis
The need for semantics in Sensor Network Frameworks
Section will contain:
- overview to sensor networks (inc. stats and scales)
- existing (non-semantic) systems and approaches
- particularly OGC SWE standards
- the need for semantics in sensor networks
- applicability of W3C standards and technologies
Note that parts of the above overlap with the first paragraph of "Rationale and Sponsors" and the introduction to "Markup and Annotation". I suggest I amalgamate those here, and we move this before the Rationale, i.e. we separate the rationale for the (wider) work from the genesis and rationale for the XG itself.
I propose this section is also framed in terms of:
- application of semantics within sensor networks
- device discovery, tasking, configuration, sensor network management etc.
- exposing the outputs of sensor networks semantically
- discovering and integrating sensor data with other data sources, especially the scientific domain (ontologies) for which the sensors are deployed
- perhaps note the alignment with producer and consumer in SWE
"Kevin, this section could go either here or or possibly before the previous rationale section. Suggest you write as if here (ie take account of the introduction above). There will be an abstract which will draw on this bit, too, and that will be first. " -- Kerry
" .. I agree should be part of first paragraph " -- Luis
Participation and collaboration
Kerry Taylor and Amit Parashar of CSIRO developed the early concept for the incubator; Amit Sheth of Wright State's Kno.e.sis Center particularly built up a network of potential participants, and Amit Sheth and Michael Compton of CSIRO drafted the Charter. Carl Reed of OGC supported the Group and it was formally initiated by CSIRO, Wright State and the OGC on the 2nd March 2009. The initial chairs were Amit Sheth, Kerry Taylor and Amit Parashar, but Amit Parashar later resigned from CSIRO and was replaced as co-chair by Holger Neuhaus of CSIRO.
The Group attracted a lively average participation of 18 regular attendees and contributors from 17 member organisations, and 7 invited experts (of whom 4 contributed regularly). We met weekly by teleconference and simultaneous IRC chat using the facilities provided by the W3C. Our meeting dates and minutes are recorded at Main_Page#Meetings. We also held a full-day face-to-face meeting at the headquarters of SURA, Washington DC, USA, on 24 October 2009, for which a report is available at F2f09.
The wiki Main_Page was used to organize the information, upload files, maintain progress of the activities, write the report and to provide public information about the activity.
The public mailing list [] was used to conduct discussion between meetings, and for announcements.
As ontology design discussions became more complex and vigorous, we used the tracker [] to follow the progress of the more significant and contested issues.
Early on, we had several presentations from members of the Group and external researchers active in the area (Main_Page#Presentations). We were contacted by a representative of the MPEG-V standards development group. Following initial exchanges of information, no further action was taken.
Formally, the Group's deliverables are defined in the Charter as follows (annotated with links to the outcomes produced):
The SSN-XG will maintain a wiki (Main_Page) with relevant information such as a review of relevant ontologies (and applications to sensor systems, if any) (State_of_the_art_survey, Published paper) and research projects (Communities) that have an interest in semantic sensor networks. It will deliver one report and make a W3C submission:
- The report will describe a framework (ontologies) for semantically describing sensors and
- How mappings can be constructed both from these ontologies to existing standards and from suitably annotated documents complying with existing standards to the ontologies.
- The submission will cover specification of semantic annotation of Sensor Markup Language (SML-S).
"Suggestion ... deliverables are: 1) use cases, 2)review of existing sensor and observation ontologies, 3)a community created sensor ontology, 4 ) harmonization methodology SWE with Sensor ontology, 5) Guide to extend sensor ontology" -- Luis
In addition to these formal deliverables, and this formal report of the XG, the XG has also assembled a list of use cases that were used to drive the design of the ontology and markup (Main_Page#Use_cases), a comprehensive vocabulary of relevant terms (SSN_terms), and a reference list of events related to semantic sensor networks (Events); delivered a number of presentations on the work of the XG (#Elsewhere); sponsored a workshop at the International Semantic Web Conference; and .. <anything else?>
From an extensive list of use cases, the group selected four use cases in a voting process conducted over six weeks. The XG grouped these into three classes of use cases:
- Device discovery - Find all the devices that meet certain criteria. The criteria can be something like type, geographic region, measured phenomenon, range of measurement, availability, owner/responsible party, manufacturer etc. and also combinations of those.
- Data discovery - Find all observations that meet certain criteria. The user may use different criteria to select the spatial area and time window of interest (the spatio-temporal constraint), or may constrain the types of observations to be found.
- Process/provenance - User wants extra information about the instrument to better evaluate or process the data.
Scenarios are being developed for these use case classes. Each scenario includes a description of the use case, its primary actor(s), pre-conditions, and the success and failure end conditions. Additionally, variations, extensions and performance will be discussed.
The use cases were developed to promote ...
Review of Related Work
The group extensively reviewed ontologies and data models describing sensors and their capabilities as well as observations. Those included SensorML, OntoSensor, SWAMO, the MMI Device Ontology, CESN, and OOSTethys. A full list of reviewed ontologies can be found here. From this review, the group identified concepts that should be included, but found that none of the ontologies under review supported all of those required concepts.
The group has also produced a survey paper (Compton et al.) which was presented at the 2nd International Workshop on Semantic Sensor Networks 2009.
An exhaustive list of relevant references can be found here. Also, information on communities of practice with an interest in Sensor Web and Sensor Networks, as well as work done by other standard organisations has been collected.
As for the Ontology deliverable of the group, reference material has likewise been gathered for the Semantic Markup deliverable.
"Is it enough to point to the corresponding wiki pages or should we place the content here directly (at least for the final report)?" -- Arthur
The SSN-XG Ontology for Describing Sensors
The Group, recognising the interoperability and broader applicability benefits of a collaborative effort, has developed a formal OWL DL ontology for modelling sensor devices, systems and processes. The development was informed by a thorough review of previous sensor ontologies, and the concurrent development of an informal vocabulary of the main terms, drawing heavily on earlier vocabularies. Simultaneously, we have reviewed several approaches to semantic annotation of the SWE XML document standards, which rely on the underlying OpenGIS® Sensor Model Language Encoding Standard (SensorML) to specify information models and XML encodings. We developed a recommended uniform approach for connecting the SWE standards with formal OWL ontologies, including but not limited to our sensor ontology.
We have referred to VIM(?) and YYY, as well as to the SWE list, for vocabulary definitions, and we have also aligned with the DOLCE ontology at the upper levels. As it became clear that many in the Group could not accept some of the modelling decisions in SWE, and that SensorML in particular, our most important source model, is not being actively maintained, we have deviated from the OGC modelling in several places.
<Laurent> I have 4 categories
- common terms glossary/lexicon/thesaurus,
- controlled vocabulary/taxonomy (SKOS)
- common data or object model (UML-ish)
- common ontologies (OWL) - possibly even a community-managed "foundry"
The work I have done on terms corresponds to the first type.
OWL 2.0 ontology Deliverable
Markup and Annotation
The last few years have seen an explosion in the number and variety of sensors being deployed in all manner of environments around the globe, and this trend will continue as sensors become cheaper and more readily available. The outcome of this development is an avalanche of observational data that must be analyzed and explained in order to achieve an understanding of our environment. Currently, this data is too often stovepiped, with a strong tie between the sensor network, the observation database, and the end-user application. With the advent of projects such as the OGC Sensor Web Enablement (SWE) and the W3C Semantic Sensor Networks Incubator Group (SSN-XG), this information is now being set free and made available on the Web. With this new freedom, however, come significant challenges, such as the following:
- How do we discover, access and search sensor data on the Web?
- How do we integrate the sensor data when it comes from many heterogeneous sources?
- How do we make raw sensor data meaningful to Web applications and naive users?
SWE has taken important initial steps towards answering these questions. It includes the development of a set of XML-based languages and Web service interface specifications. The service interfaces, such as the Sensor Observation Service (SOS), provide a means to discover, access and search sensor data (to the extent possible at the level of syntactic, XML-based interoperability and through the use of standardized tags); and the languages, such as the Sensor Model Language (SensorML) and Observations and Measurements (O&M), provide a means to integrate data from heterogeneous sources in a standard format accessible to Web users. Such syntactic-level interoperability is a good start and provides a solid framework from which to begin exploring the issue of semantic-level interoperability. The latter issue falls under the charter of the W3C SSN-XG and is being explored through two separate but closely related projects -- the development of an ontology for describing sensors and sensor data, and an annotation framework for adding semantic metadata to the SWE standards. This is a description of the latter.
Motivating Use Cases
Use cases from Ontology Work package
- Sensor Discovery
- Data Discovery
- Provenance of Sensor Data
Other Use Cases
- Integration of sensor data
- Sensor data service mashups
- RDF-ization of sensor data
Why convert to RDF? - The conversion from XML-based languages, such as those provided by SWE, into RDF provides multiple benefits. The primary benefit is that while XML is a basis for syntactic interoperability, RDF elevates interoperability to a semantic level. The RDF data model is based on an expressive graph representation that includes the use of named relationships and Web-based URIs. Named relationships are fundamental to semantic representation, and RDF supports relationships as first-class objects. An RDF graph has a triple-based structure in which every statement is a subject-predicate-object triple. This structure allows for the easy integration of multiple RDF graphs by simply relating subjects in the first graph to objects in the second through a predicate (or 'relationship'). Since the subjects, predicates, and objects within an RDF graph have Web-accessible URIs, we can integrate RDF datasets across the Web. This idea is the basis of the Linking Open Data (LOD) project, and we are now beginning to see RDF sensor data on LOD.
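The triple-based integration described above can be sketched with plain Python tuples standing in for RDF statements. This is a minimal illustration only: no RDF library is assumed, and all prefixes and URIs (ex:, geo:) are hypothetical placeholders, not terms from any real vocabulary.

```python
# A minimal sketch of RDF graph merging using plain
# (subject, predicate, object) tuples. All names are illustrative.

# Graph 1: an observation from one source
graph1 = {
    ("ex:obs1", "ex:observedProperty", "ex:AirTemperature"),
    ("ex:obs1", "ex:result", "42.0"),
}

# Graph 2: background knowledge from another source
graph2 = {
    ("geo:5758442", "geo:name", "some place name"),
}

# Integration is set union plus one linking triple that relates a
# subject in the first graph to a resource in the second through a
# predicate, exactly as the text describes.
merged = graph1 | graph2 | {("ex:obs1", "ex:featureOfInterest", "geo:5758442")}

for s, p, o in sorted(merged):
    print(s, p, o)
```

Once the graphs share a URI, any query over the merged graph can traverse from the observation to the background knowledge, which is the integration benefit claimed above.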
Documents to Annotate
Sensor Web Enablement (SWE) Languages - In SWE, members of the OGC are building a unique and revolutionary framework of open standards for exploiting Web-connected sensors and sensor systems of all types. This framework is called a Sensor Web, and refers to web-accessible sensor networks and archived sensor data that can be discovered and accessed using standard protocols and application program interfaces (APIs). SWE is composed of four languages and three service specifications. The languages include the Sensor Model Language (SML or SensorML), Observations and Measurements (O&M), and Transducer Model Language (TML). The services include the Sensor Observation Service (SOS), Sensor Planning Service (SPS), and Sensor Alert Service (SAS).
Sensor Model Language (SML) - Standard models and XML Schema for describing sensor systems and processes; provides information needed for discovery of sensors, location of sensor observations, processing of low-level sensor observations, and listing of taskable properties.
Observation & Measurements (O&M) - Standard models and XML Schema for encoding observations and measurements from a sensor, both archived and real-time.
Sensor Observation Service (SOS) GetCapabilities - The Sensor Observation Service includes three core operations: GetObservation, DescribeSensor, and GetCapabilities. The GetObservation operation provides an interface to query over observation data and returns an O&M document. The DescribeSensor operation provides an interface to query for the description of a sensor and returns a SensorML document. The GetCapabilities operation provides an interface to query for the description of a Sensor Observation Service; it allows clients to retrieve service metadata about a specific service instance and returns a GetCapabilities response document.
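The three core operations above can be invoked over HTTP with key-value-pair (KVP) requests. The following sketch builds such request URLs; the endpoint URL, procedure identifier and observed property are hypothetical placeholders, and the exact parameter names and casing are an assumption based on typical SOS 1.0 KVP bindings, so a real client should consult the service's capabilities document.

```python
from urllib.parse import urlencode

# Placeholder endpoint; a real deployment supplies its own base URL.
ENDPOINT = "http://example.org/sos"

def sos_request(operation, **params):
    """Assemble a KVP request URL for an SOS operation (a sketch)."""
    query = {"service": "SOS", "version": "1.0.0", "request": operation}
    query.update(params)
    return ENDPOINT + "?" + urlencode(query)

print(sos_request("GetCapabilities"))
print(sos_request("DescribeSensor", procedure="urn:example:sensor_xyz"))
print(sos_request("GetObservation", offering="XYZ",
                  observedProperty="urn:example:temperature"))
```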
Recommended Linking and Annotation Technique
|xlink:href||link to instance|
|xlink:role||link to ontology concept|
|xlink:arcrole||link to ontology object property|
|xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"||specifies that xlink is currently being used as a model reference (or semantic annotation)|
+ Note: This xlink:arcrole link to sawsdl:modelReference originates with the SAPIENCE project.
XLink, the XML Linking Language, is an XML markup language for creating hyperlinks in XML documents. XLink is a W3C Recommendation and outlines methods of describing links between resources in XML documents. Any element in an XML document can behave as a link. XLink supports simple links (like HTML) and extended links (for linking multiple resources together). In addition, with XLink, links can be defined outside the linked files. XLink attributes can be added to SensorML and O&M documents to provide semantic annotations for the sensor data. Because XLink is already used in SWE documents, no syntactic or structural changes are required; this explains the relative success of XLink-based approaches in earlier attempts to add semantic annotations to SWE documents. One difficulty, however, is distinguishing the XLink attributes that carry semantic annotations from those used for other, permissible SWE purposes.
Note that the xlink: prefix is used throughout to stand for the declaration of the XLink namespace, whether or not a namespace declaration is present in the example.
- xlink:type - Every element defining an XLink *must* contain a "type" attribute, which specifies what type of link it is - the value for this attribute may be any one of "simple", "extended", "locator", "arc", "resource", "title" or "none".
- xlink:href - The "href" attribute is used to specify the URL of a remote resource, and is mandatory for locator links. In addition to the URL of the remote resource, it may also contain an additional "fragment identifier", which drills down to a specific location within the target document.
- xlink:show - The "show" attribute is used to define the manner in which the endpoint of a link is presented to the user. The value of this attribute may be any one of "new" (display linked resource in a new window); "replace" (display linked resource in the current window, removing whatever's currently there); "embed" (display linked resource in a specific area of the current window); "other" (display as per other, application-dependent directives); or "none" (display method unspecified)
- xlink:actuate - The "actuate" attribute is used to specify when a link is traversed - it may take any of the values "onLoad" (display linked resource as soon as loading is complete); "onRequest" (display linked resource only when expressly directed to by the user, either via a click or other input); "other" and "none".
- xlink:label - The "label" attribute is used to identify a link for subsequent use in an arc.
- xlink:from and xlink:to - The "from" and "to" attributes are used to specify the starting and ending points for an arc respectively. Both these attributes use labels to identify the links involved.
- xlink:role and xlink:arcrole - The "role" and "arcrole" attributes reference a URL which contains information on the link's role or purpose.
- xlink:title - The "title" attribute, not to be confused with the title type of link, provides a human-readable descriptive title for a link.
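To make the attributes listed above concrete, the following sketch reads XLink attributes out of a SWE-style element using only the Python standard library. The om: element and the annotation URIs are illustrative placeholders, not terms from any published ontology.

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"

# An illustrative annotated SWE-style fragment (placeholder URIs).
fragment = """
<om:observedProperty xmlns:om="http://www.opengis.net/om/1.0"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xlink:href="http://example.org/ont#temperature"
    xlink:role="http://example.org/ont#Property"/>
"""

elem = ET.fromstring(fragment)
# ElementTree exposes namespaced attributes in Clark notation: {uri}local
href = elem.get(f"{{{XLINK}}}href")
role = elem.get(f"{{{XLINK}}}role")
print(href, role)
```

Because the annotations are ordinary attributes, a harvester can process them without understanding the rest of the SWE schema.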
Use of XLink in OGC standards
The OGC was an early adopter of XLink. Within the Sensor Web Enablement (SWE) languages, XLink is used heavily as a linking mechanism between documents. Traditionally, however, the OGC has defined the use of XLink annotation as a composition by inclusion of remote resources. This definition regards annotation as a pointer to a remote resource such that the description is deferred. While this definition is useful for pointing to concepts in an ontology, it does not allow use of annotation for adding information to an existing resource description. To add further complexity to the situation, there are several (subtly) different usages of XLink in sub-communities of OGC. The GML specification authorizes four variants on the use of XLink:
- A reference to an object element in the same GML document may be encoded as:
- A reference to an object element in a remote XML document using the gml:id value of that object may be encoded as:
- A reference to an object element in a remote XML document (or GML object repository) using the gml:identifier property value of that object may be encoded as:
<myProperty xlink:href="http://my.big.org/test.xml#element(//gml:GeodeticCRS[./gml:identifier[ @codeSpace="urn:x-ogc:def:crs:EPSG:6.3:"]="4326"])"/>
- A reference to an object element with a uniform resource name may be encoded as follows (note that a URN resolver is required to resolve the URN and access the referenced object):
These four uses of XLink correspond to the definition of XLink annotation as composition by inclusion of remote resources. Within this framework, xlink:href is used to point to a target instance and xlink:role is used to point to its nature or type. When XLink annotation is instead used as semantic annotation, xlink:href points to an instance of a concept in an ontology and xlink:role points to a concept in an ontology.
XLink conversion to RDF
The XLink specification defines ways for XML documents to establish hyperlinks between resources. The Resource Description Framework (RDF) specification defines a framework for the provision of machine-understandable information about web resources. Both XLink and RDF provide a way of asserting relations between resources. RDF is primarily for describing resources and their relations, while XLink is primarily for specifying and traversing hyperlinks. However, the overlap between the two is sufficient that a mapping from XLink links to statements in an RDF model can be defined. Such a mapping allows XLink elements to be harvested as a source of RDF statements. XLink links thus provide an alternate syntax for RDF information that may be useful in some situations. A more detailed description of this conversion of XLink elements into RDF statements, and a corresponding algorithm written in XSLT, can be found within a W3C Note titled Harvesting RDF Statements from XLinks.
|xlink:href||Identifier of the resource which is the target of the association, given as a URI||rdf:about of range resource|
|xlink:role||Nature of the target resource, given as a URI||rdf:about of class of range resource|
|xlink:arcrole||Role or purpose of the target resource in relation to the present resource, given as a URI||rdf:about of object property linking domain element to range resource|
|xlink:title||Text describing the association or the target resource||rdfs:comment|
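The mapping in the table above can be sketched as a small harvesting function: for each element carrying XLink attributes, emit triples whose subject is the enclosing resource. This is a simplification of the W3C Note's XSLT algorithm; the rdf:type and rdfs:comment predicates are written as shorthand strings, and all other URIs are placeholders.

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
RDF_TYPE = "rdf:type"          # shorthand for the rdf:type predicate
RDFS_COMMENT = "rdfs:comment"  # shorthand for rdfs:comment

def harvest(subject, elem):
    """Harvest RDF triples from one element's XLink attributes (a sketch)."""
    triples = []
    href = elem.get(f"{{{XLINK}}}href")
    role = elem.get(f"{{{XLINK}}}role")
    arcrole = elem.get(f"{{{XLINK}}}arcrole")
    title = elem.get(f"{{{XLINK}}}title")
    if href and arcrole:
        triples.append((subject, arcrole, href))   # arcrole names the property
    if href and role:
        triples.append((href, RDF_TYPE, role))     # role types the target
    if href and title:
        triples.append((href, RDFS_COMMENT, title))
    return triples

doc = ET.fromstring(
    '<om:procedure xmlns:om="http://www.opengis.net/om/1.0" '
    'xmlns:xlink="http://www.w3.org/1999/xlink" '
    'xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference" '
    'xlink:role="http://example.org/ont#Sensor" '
    'xlink:href="http://example.org/ont#sensor_xyz"/>')
for t in harvest("http://example.org/obs1", doc):
    print(t)
```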
Examples of semantic annotations of sensor data with XLink
- Observations and Measurements (O&M): The following example illustrates a temperature observation from sensor_xyz with value 42.0, related to a feature of interest (a location in the Geonames dataset).
<om:Observation>
  <om:samplingTime>
    <gml:TimeInstant>...</gml:TimeInstant>
  </om:samplingTime>
  <om:procedure
      xlink:role="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#Sensor"
      xlink:href="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#sensor_xyz"/>
  <om:observedProperty xlink:href="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#temperature"/>
  <featureOfInterest xlink:href="http://sws.geonames.org/5758442/"/>
  <om:result uom="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#fahrenheit">42.0</om:result>
</om:Observation>
- Sensor Observation Service (SOS) GetCapabilities: The following example illustrates an observation offering called XYZ with sensor sensor_xyz that measures temperature and generates observations that may be related to a feature of interest location in the Geonames dataset.
<ObservationOffering gml:id="XYZ">
  <gml:name>XYZ</gml:name>
  <gml:boundedBy>...</gml:boundedBy>
  <om:procedure
      xlink:role="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#Sensor"
      xlink:href="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#sensor_xyz"/>
  <om:observedProperty xlink:href="http://www.w3.org/2009/Incubator/ssn/ontologies/SensorOntolgy.owl#temperature"/>
  <featureOfInterest xlink:href="http://sws.geonames.org/5758442/"/>
  <responseFormat>text/xml;subtype="om/1.0.0"</responseFormat>
  <resultModel xmlns:ns="http://www.opengis.net/om/1.0">ns:Observation</resultModel>
  <responseMode>inline</responseMode>
  <responseMode>resultTemplate</responseMode>
</ObservationOffering>
- Sensor Discovery Use Case: The following provides a more comprehensive example of the annotation of an SOS GetCapabilities document useful for sensor discovery.
Description - Find all the sensors that meet certain criteria. While all (or most) criteria necessary for sensor discovery can be found in a SensorML document, parsing through hundreds or thousands of XML documents to find a sensor matching a set of criteria would be terribly inefficient. Therefore, as with finding most resources on the Web, sensor discovery will likely occur through Web services rather than document searches. The Sensor Observation Service is the prominent service within the OGC Sensor Web Enablement for searching and accessing sensor data. The Sensor Observation Service, however, currently has no method for finding relevant sensors and encourages the use of catalog services for this task. By semantically annotating the GetCapabilities document of an SOS, however, we can provide the ability to discover relevant sensors through several criteria of interest for this use case (e.g., location, property, availability). The following table details the search criteria for the sensor discovery use case and the resources that contain this information.
Criteria (with Resource Availability)
|Within geographic region (location)||yes||yes|
|Range of measurement||yes|
+ Note: The Application Domain criterion is not part of the original Sensor Discovery use case description, but it is already part of SOS GetCapabilities and may be useful for discovery (e.g., Water Resource Management).
SOS DescribeSensor - The SOS DescribeSensor method returns a SensorML document with all the information about a particular sensor; however, you must already know the sensor (i.e., its sensor ID) before invoking the method. Therefore, DescribeSensor is not sufficient for finding relevant sensors based on particular attributes. The following table describes the parameters of DescribeSensor:
|outputFormat||The outputFormat attribute specifies the desired output format of the DescribeSensor operation.|
|SensorId||The sensorId parameter specifies the sensor for which the description is to be returned. This value must match the value advertised in the xlink:href attribute of a procedure element advertised in the SOS GetCapabilities response.|
|service||Service type identifier (i.e., SOS)|
|version||Specification version for operation|
SOS GetCapabilities - The SOS GetCapabilities method, on the other hand, returns a service description that contains several attributes that could be useful for sensor discovery (e.g., location, property, availability).
|time||Time period for which observations can be obtained. This supports the advertisement of historical as well as real-time observations.||Availability|
|observedProperty||The observable/phenomenon that can be requested in this offering.||Measured phenomenon|
|featureOfInterest||Features or feature collections that represent the identifiable object(s) on which the sensor systems are making observations. In the case of an in-situ sensor this may be a station to which the sensor is attached representing the environment directly surrounding the sensor. For remote sensors this may be the area or volume that is being sensed, which is not co-located with the sensor. The feature types may be generic Sampling Features (see O&M) or may be specific to the application domain of interest to the SOS. However, features should include spatial information (such as the GML boundedBy) to allow the location to be harvested by OGC service registries.||Location|
|intendedApplication||The intended category of use for this offering such as homeland security or natural resource planning||Application Domain|
|procedure||A reference to one or more procedures, including sensor systems, instruments, simulators, etc, that supply observations in this offering. The DescribeSensor operation can be called to provide a SensorML or TML description for each system.|
SOS GetCapabilities Example Document with Semantic Annotations - The following example illustrates an SOS GetCapabilities document describing a service containing an AirTemperature sensor. The sensor description is semantically annotated with model references to concepts and instances related to time, sensors, properties, and locations. The semantic annotations are described in the table, followed by example GetCapabilities document fragments:
|xlink:href||link to instance|
|xlink:role||link to ontology concept|
|xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"||specifies that xlink is currently being used as a model reference (or semantic annotation)|
+ Note: This xlink:arcrole link to sawsdl:modelReference originates with the SAPIENCE project.
- Sensor Availability
<sos:time>
  <gml:TimePeriod xlink:role="http://www.isi.edu/~pan/damltime/time-entry.owl#Interval"
      xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference">
    <gml:beginPosition xlink:role="http://www.isi.edu/~pan/damltime/time-entry.owl#begins"
        xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference">
      2005-10-18T19:54:13.000Z
    </gml:beginPosition>
    <gml:endPosition xlink:role="http://www.isi.edu/~pan/damltime/time-entry.owl#ends"
        xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference">
      2005-10-18T19:54:13.000Z
    </gml:endPosition>
  </gml:TimePeriod>
</sos:time>
- Property (or observable)
<sos:observedProperty xlink:href="http://knoesis.wright.edu/ssw/ont/weather.owl%23AirTemperature" xlink:role="http://www.w3.org/2009/SSN-XG/Ontologies/SensorBasis.owl#Property" xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"/>
- Location (as a Feature)
<sos:featureOfInterest xlink:href="http://sws.geonames.org/5248611/" xlink:role="http://www.w3.org/2009/SSN-XG/Ontologies/SensorBasis.owl#Location" xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"/>
- Sensor ID
<sos:procedure xlink:href="http://knoesis.wright.edu/ssw/System_C1988" xlink:role="http://www.w3.org/2009/SSN-XG/Ontologies/SensorBasis.owl#Sensor" xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"/>
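Annotations of this kind are easy to harvest mechanically, since they are ordinary XML attributes. The following sketch (the inline document fragment reuses values from the examples above; the helper function is an illustrative assumption, not an XG deliverable) shows how a client might collect every element whose xlink:arcrole marks it as a sawsdl:modelReference, together with the linked instance and ontology concept:

```python
# Sketch: harvesting xlink-based semantic annotations from an SOS fragment.
# The document text below is a trimmed, illustrative GetCapabilities excerpt.
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
MODEL_REF = "http://www.w3.org/ns/sawsdl#modelReference"

doc = """<sos:ObservationOffering xmlns:sos="http://www.opengis.net/sos/1.0"
    xmlns:xlink="http://www.w3.org/1999/xlink">
  <sos:observedProperty
      xlink:href="http://knoesis.wright.edu/ssw/ont/weather.owl%23AirTemperature"
      xlink:role="http://www.w3.org/2009/SSN-XG/Ontologies/SensorBasis.owl#Property"
      xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"/>
  <sos:procedure
      xlink:href="http://knoesis.wright.edu/ssw/System_C1988"
      xlink:role="http://www.w3.org/2009/SSN-XG/Ontologies/SensorBasis.owl#Sensor"
      xlink:arcrole="http://www.w3.org/ns/sawsdl#modelReference"/>
</sos:ObservationOffering>"""

def annotations(xml_text):
    """Yield (element tag, instance link, ontology concept) for each
    element whose xlink:arcrole flags it as a sawsdl model reference."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        if el.get(f"{{{XLINK}}}arcrole") == MODEL_REF:
            yield el.tag, el.get(f"{{{XLINK}}}href"), el.get(f"{{{XLINK}}}role")

for tag, href, role in annotations(doc):
    print(tag, href, role)
```

Because the annotation convention is uniform (href = instance, role = concept, arcrole = marker), the same traversal works unchanged for the time, property, location and procedure annotations above.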
Other Linking and Annotation Techniques
RDFa (Resource Description Framework in attributes) is a W3C Recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within Web documents. Its mapping to the RDF data model allows RDF triples to be embedded within XHTML documents and extracted by compliant user agents. RDFa attributes can be added to SensorML and O&M documents to provide semantic annotations for the sensor data. Approaches based on RDFa look promising at the level of SWE documents because the annotations can be processed independently of the rest of the document. Further work is required to check that the introduction of RDFa would not bring major changes for implementers of the SWE standards, and to investigate how RDFa-enabled SWE services could be integrated with other RDFa-based Web mashups.
Note that the rdfa: prefix is used throughout to stand for the declaration of the RDFa namespace, whether or not a namespace declaration is present in the example.
W3C defined syntax and semantics for RDFa
- rdfa:about - A URI or CURIE specifying the resource the metadata is about; in its absence it defaults to the current document.
- rdfa:rel and rdfa:rev - Specify a relationship or reverse relationship with another resource.
- rdfa:href, rdfa:src, and rdfa:resource - Specify the partner resource.
- rdfa:property - Specifies a property for the content of an element.
- rdfa:content - Optional attribute that overrides the content of the element when using the property attribute.
- rdfa:datatype - Optional attribute that specifies the datatype of the text specified for use with the property attribute.
- rdfa:typeof - Optional attribute that specifies the RDF type(s) of the subject (the resource that the metadata is about).
RDFa conversion to RDF
|rdfa:about||The identification of the resource (to state what the data is about)||rdf:about of domain resource|
|rdfa:typeof||RDF type(s) to associate with a resource||rdf:about of class of a resource|
|rdfa:href||Partner resource of a relationship ('resource object')||rdf:about of range resource|
|rdfa:property||Relationship between a subject and some literal text ('predicate')||rdf:about of datatype property|
|rdfa:rel||Relationship between two resources ('predicate')||rdf:about of object property|
|rdfa:rev||Reverse relationship between two resources ('predicate')||rdf:about of (inverse) object property|
|rdfa:src||Base resource of a relationship when the resource is embedded ('resource object')||rdf:about of domain resource|
|rdfa:resource||Partner resource of a relationship that is not intended to be 'clickable' ('object')||rdf:about of range resource|
|rdfa:datatype||Datatype of a property||XML type range of datatype property|
|rdfa:content||Machine-readable content ('plain literal object')||Value for datatype property|
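The table's mapping can be made concrete with a toy converter. The sketch below (the XHTML snippet and the example.org URIs are invented for illustration, and it assumes a single, non-nested annotated element; a real RDFa processor also handles CURIEs, chaining and blank nodes) shows the two central rows: about/property/content producing a literal-valued triple, and about/rel/href producing a resource-valued triple:

```python
# Toy RDFa-to-triples conversion, mirroring the mapping table above.
# property + content -> datatype property with a literal object;
# rel + href         -> object property with a resource object.
import xml.etree.ElementTree as ET

snippet = """<div xmlns="http://www.w3.org/1999/xhtml"
     about="http://example.org/sensor/C1988">
  <span property="http://example.org/ont#hasName"
        content="AirTemperature sensor C1988">sensor C1988</span>
  <a rel="http://example.org/ont#observes"
     href="http://example.org/ont#AirTemperature">air temperature</a>
</div>"""

def triples(xhtml):
    root = ET.fromstring(xhtml)
    subject = root.get("about")          # what the data is about
    out = []
    for el in root:
        if el.get("property"):           # literal object: content overrides text
            out.append((subject, el.get("property"),
                        el.get("content") or (el.text or "")))
        if el.get("rel") and el.get("href"):   # resource object
            out.append((subject, el.get("rel"), el.get("href")))
    return out

for t in triples(snippet):
    print(t)
```

Note how the human-readable element text ("sensor C1988") and the machine-readable content attribute can differ, which is exactly what makes RDFa attractive for annotating documents that must remain readable.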
Other Annotation Techniques
- GRDDL - A markup format for Gleaning Resource Descriptions from Dialects of Languages. It is a W3C Recommendation, and enables users to obtain RDF triples out of XML documents, including XHTML. It defines the syntax to include a reference to a lifting script in a source document - the lifting script can then be used to transform the document to RDF.
- Microdata - Allows nested groups of name-value pairs to be added to documents, in parallel with the existing content. A non-semantic alternative to RDFa.
- SAWSDL - A set of extension attributes for the Web Services Description Language and XML Schema definition language that allows description of additional semantics of WSDL components. Allows the user to record the mapping of WSDL elements to concepts defined in a reference ontology and to specify the lifting scripts which can be applied to the output of a service to transform it into a RDF file using the reference ontology concepts.
- hRESTs - A microformat to add additional meta-data to REST API descriptions in HTML and XHTML. Developers can directly embed meta-data from various models such as an ontology, a taxonomy or a tag cloud into their API descriptions. The embedded meta-data can be used to improve search (for example, faceted search for APIs) and data mediation (in conjunction with XML annotation), and can ease the integration of services to create mashups.
- SA-REST and Micro-WSMO - Two similar methods to semantically annotate REST services using the same microformat (hRESTs) and different target ontologies. They have a similar basis to SAWSDL (including the possibility of referencing a lifting script) but apply to an HTML-based description of a service.
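GRDDL's core mechanism is simply that a document points at its own lifting script. The sketch below (the SensorML fragment and the stylesheet URI are invented for illustration; the grddl namespace is the real one) locates that pointer; an XSLT engine would then apply the referenced stylesheet to the document to produce RDF:

```python
# Sketch: locating the GRDDL lifting-script reference in an XML document.
# GRDDL declares the transformation via a grddl:transformation attribute.
import xml.etree.ElementTree as ET

GRDDL = "http://www.w3.org/2003/g/data-view#"

doc = """<sml:SensorML xmlns:sml="http://www.opengis.net/sensorML/1.0"
    xmlns:grddl="http://www.w3.org/2003/g/data-view#"
    grddl:transformation="http://example.org/xslt/sensorml2rdf.xsl">
</sml:SensorML>"""

root = ET.fromstring(doc)
lifting_script = root.get(f"{{{GRDDL}}}transformation")
print(lifting_script)
```

The attraction for SWE is that existing SensorML or O&M documents need only gain one attribute; all of the mapping to RDF lives in the (separately maintained) stylesheet.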
Comparison of techniques
|Domain Instance||rdfa:about or rdfa:src|
|Inverse Object Property||rdfa:rev|
|Range Instance Object Property||xlink:href||rdfa:href or rdfa:resource|
|Range Value||rdfa:content or element content|
Outcome of Semantic Markup Activity
Other activities and outcomes of the XG
Working over International Time Zones
Our members came from most international time zones, with strong representation from Australia. We selected two different meeting times for alternate weeks in order to establish times that were "not too unreasonable" for everyone at least once a fortnight. This enabled the Eastern Australians, for example, to participate at 5am or 11pm starting times in both weeks, but made every second meeting impossible for Western Australia and ??. When most time zones shifted for daylight saving in October we cancelled meetings for a month to reduce confusion. We restarted with a new single time slot (still alternating the days to fit in with participants' commitments) after that, but we lost regular attendance from some of Europe. The alternation quite often created confusion, but overall the outcome was positive.
Despite the challenges, we did maintain good teleconference participation. We know of cases where Australians are effectively prevented from participation in W3C activities due to time zone challenges. The practice of alternating meetings seems to be a good idea, but are there better ways? How can we avoid excluding valued participation?
Our Group members had little prior experience of participation in W3C Incubator Groups (or Working Groups). Despite seeking early advice, and borrowing what we could see from other groups, we took several months to develop our ways of working and our (never good) skills with the tools, including the tracker, IRC, the minutes logger (RRSAgent), the email list and the wiki. In some cases we found it necessary to develop our own "how to" guides for beginners. It is hard to believe that our needs and capabilities are very different from the average -- some kind of resource book of guidelines for incubator groups could have been helpful in improving our productivity.
Typical problems included:
- Wiki response too slow for editing
- Teleconference oversubscribed
- Unskilled minute takers, no clear decision/action points in minutes
- Poor teleconference voice quality and time delays
- Web IRC being blocked (invisibly) when other network services were used.
Although we often had attendees at meetings using only one of IRC and the phone, we found that using both channels simultaneously actually worked quite well. Often the main discussion was on the phone, while the IRC enabled side issues, points or references to be accommodated simultaneously.
- vocab vs structure: revisiting decisions