Test Running Guide

From OWL
Revision as of 08:44, 1 August 2009 by Michael Schneider (Talk | contribs)


The goal of this page is to describe the steps an implementer should follow in order to generate and submit the results of running their tool over all or a subset of the OWL 2 Test Suite.

The Test Suite

The collection of test cases produced by the OWL WG is maintained in the OWL 2 Test Cases wiki. The test cases wiki is designed to provide an easy interface through which a user may browse tests and create new tests. In addition to displaying metadata about each test (and potentially links to pages with additional syntaxes), each test's page provides download links for its input ontologies. These links can be used directly by tools capable of HTTP resource dereferencing (such as the RDF validation service).

Implementers interested in processing the test cases with tools should use exports from the test case wiki. All exports are RDF/XML documents and use the test case format described in the OWL 2 Conformance Document and available as an RDF/XML ontology. A notable characteristic of this format is that all data necessary to describe a test is included in a single RDF graph and input ontologies are serialized as literals.
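
Because each export is a single RDF graph with the input ontologies embedded as literals, a test case can be processed without dereferencing any further URLs. The sketch below, using only the Python standard library, illustrates this on a minimal, hypothetical test case graph; the property names (status, rdfXmlInputOntology) are assumptions modeled on the format description, not necessarily the exact terms of the test ontology.

```python
# Sketch: pull the status and the embedded input ontology out of a test case
# export. The graph below is a made-up minimal example; property names are
# illustrative assumptions, not the authoritative test ontology vocabulary.
import xml.etree.ElementTree as ET

EXPORT = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:test="http://www.w3.org/2007/OWL/testOntology#">
  <test:TestCase rdf:about="http://example.org/TestCase-001">
    <test:status>Approved</test:status>
    <test:rdfXmlInputOntology>&lt;rdf:RDF&gt;...&lt;/rdf:RDF&gt;</test:rdfXmlInputOntology>
  </test:TestCase>
</rdf:RDF>"""

TEST = "{http://www.w3.org/2007/OWL/testOntology#}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

root = ET.fromstring(EXPORT)
for case in root.iter(TEST + "TestCase"):
    uri = case.get(RDF + "about")
    status = case.findtext(TEST + "status")
    # The input ontology is a literal in the graph, so it is already in hand:
    ontology = case.findtext(TEST + "rdfXmlInputOntology")
```

A real export would be parsed the same way after downloading it from the wiki; the point is that no second fetch is needed to obtain the input ontologies.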

The test case exports may be retrieved in two different ways

  1. On each test case's wiki page, a "Download OWL" link is provided.
  2. Via a bulk export available at http://wiki.webont.org/exports/. Note that several bulk exports are provided, some of which restrict the set of tests based on test metadata. E.g., approved/profile-RL.rdf contains all those cases with "Approved" status for which all input ontologies fit within the OWL RL profile (the specific definition of each metadata field is described in Conformance).
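
If a tool fetches bulk exports programmatically, a small helper can assemble the export path from the desired status and profile. This is a sketch generalizing from the single approved/profile-RL.rdf example above; the naming scheme for other combinations is an assumption and should be checked against the export listing.

```python
# Hypothetical helper: build a bulk export URL following the pattern of the
# one documented example, approved/profile-RL.rdf. Other status/profile
# combinations are assumed to follow the same scheme (unverified).
BASE = "http://wiki.webont.org/exports/"

def export_url(status, profile=None):
    """Return the export URL for a status, optionally restricted to a profile."""
    path = status.lower()
    if profile:
        path += "/profile-" + profile
    return path and BASE + path + ".rdf"

url = export_url("Approved", "RL")
```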

Running Tests

A software suite has been implemented in Java in order to simplify test management and running. Tools implemented in Java can use the harness to simplify test case processing, including parsing of the test case format, filtering tests based on specific metadata (e.g., RL & RDF-Based Semantics only), and generating results in the appropriate format (see below). The Java source code for the test harness is available via the download link on its public SCM page or by using git to clone the master repository git://github.com/msmithcp/owlwg-test.git .

Note that the test harness may be useful to implementers of tools in languages other than Java because it provides software elements that can be extended in many ways. E.g., the framework can be used to reformat tests into an arbitrary format that is desired by a particular tool.
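
As an illustration of the reformatting idea, the sketch below emits a simple tab-separated manifest (one test per line) from parsed test metadata, the kind of intermediate format a non-Java tool could consume. The field names are assumptions for illustration; the real harness works from the RDF exports described above.

```python
# Illustrative sketch: reformat test metadata into a tab-separated manifest
# for consumption by a non-Java tool. Field names are made up for the example.
tests = [
    {"uri": "http://example.org/TestCase-001", "status": "Approved", "profiles": ["RL"]},
    {"uri": "http://example.org/TestCase-002", "status": "Proposed", "profiles": []},
]

manifest = "\n".join(
    "%s\t%s\t%s" % (t["uri"], t["status"], ",".join(t["profiles"]))
    for t in tests
)
```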

Parties interested in extending the test harness may do so by forking the repository, a process for which github provides a step-by-step guide.

Desired Reports

The WG is interested in all results that help satisfy the established CR exit criteria. Note that because the test suite is in flux, reporting results frequently is helpful. Further, because tests progress through various statuses, implementers should run at least those tests with 'Proposed' and 'Approved' status that meet other relevant criteria.

E.g., if an implementer is interested in including the results of their OWL QL direct semantics reasoner, they should run all consistency, inconsistency, and entailment tests for which the OWL QL profile is asserted, for which the direct semantics are applicable, and which have either status 'Proposed' or 'Approved'.
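
That selection rule can be sketched as a metadata filter. The test type names, profile and semantics labels, and statuses below are illustrative stand-ins for the metadata fields defined in the Conformance document, not their exact identifiers.

```python
# Sketch of the selection rule for an OWL QL direct-semantics reasoner.
# All field and value names are illustrative assumptions.
QL_TYPES = {"ConsistencyTest", "InconsistencyTest",
            "PositiveEntailmentTest", "NegativeEntailmentTest"}

def runnable(test):
    """True if a test is applicable to an OWL QL direct-semantics reasoner."""
    return (test["type"] in QL_TYPES
            and "QL" in test["profiles"]
            and "DIRECT" in test["semantics"]
            and test["status"] in {"Proposed", "Approved"})

tests = [
    {"type": "ConsistencyTest", "profiles": ["QL", "RL"],
     "semantics": ["DIRECT"], "status": "Approved"},
    {"type": "ConsistencyTest", "profiles": ["EL"],
     "semantics": ["DIRECT"], "status": "Approved"},
]
selected = [t for t in tests if runnable(t)]
```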

Reporting Results

If an implementer would like their tool's test results to be included in the CR implementation report (see current draft), the results should be formatted using the Test Results Format (which is described with examples in the WG wiki and available as an RDF/XML ontology), compressed into a zip or compressed tar file, and emailed to msmith@clarkparsia.com with the subject "OWL 2 Test Results".
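
Packaging the results is straightforward with standard tooling; the sketch below builds a zip archive in memory. The file name and contents are placeholders, and actual result documents must follow the Test Results Format referenced above.

```python
# Sketch: bundle result documents into a zip archive for submission.
# File name and contents are placeholders, not a prescribed convention.
import io
import zipfile

results_rdf = b"<?xml version='1.0'?>..."  # placeholder for a real results document

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("owl2-test-results.rdf", results_rdf)

# Re-open the archive to confirm its contents before mailing it.
archived = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
```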