See also the Testing spec in Mercurial
- 1 Test Suite proposals
- 2 manifest test description, EARL results submission
- 3 Schemas
- 4 Procedures
1 Test Suite proposals
Reminder: the current deadline for proposals is Nov 26th.
1.1 @@my proposal@@
- champion: betehess
- environment/languages/tools to be used
- concrete examples of tests
- explain how the actual tests and the English description will be maintained (it was suggested to maintain them in one place if possible)
- say how tests will be submitted, reviewed, accepted/rejected, added
- describe how an implementer will be able to test her implementation
- if available, provide a pointer to some runnable code
- send an email to firstname.lastname@example.org with "[testing] my proposal" as title
1.2 HTTP-in-RDF and EARL
- HTTP Vocabulary in RDF 1.0
- Allows description in RDF of HTTP conversations.
- Test expectations could be formulated as SPARQL ASK queries against such conversations.
- Evaluation and Report Language (EARL) 1.0
- Allows description of test suites and recording of test results.
- Application-specific extensions are needed to state test inputs and expectations if automatic execution is desired.
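To illustrate how these two vocabularies could combine, the sketch below records one HTTP conversation using the HTTP Vocabulary in RDF. All URIs are placeholders, and the property names (http:methodName, http:requestURI, http:resp, http:statusCodeValue, http:reasonPhrase) are taken from that vocabulary's working draft:

```turtle
@prefix http: <http://www.w3.org/2011/http#> .
@prefix ex:   <http://example.org/> .

# One recorded HTTP conversation: a POST to a container
# that was answered with 201 Created.
ex:req1 a http:Request ;
    http:methodName "POST" ;
    http:requestURI "http://example.org/container1" ;
    http:resp [ a http:Response ;
                http:statusCodeValue 201 ;
                http:reasonPhrase "Created" ] .
```

A test expectation over such a conversation could then be a SPARQL ASK query, e.g. `ASK { ?req http:methodName "POST" ; http:resp [ http:statusCodeValue 201 ] }`, which yields true exactly when the recorded exchange matches the expectation.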
1.2.1 Example Usage within W3C
A number of W3C WGs have successfully used the RDF Test Metadata and Test Description vocabularies, along with EARL, for their Test Cases/Test Suites. Typically, a Test Harness is provided that enables implementers to run the tests either online or offline and then submit the resulting EARL document to the WG for producing the Implementation Report. Some of these deployments are listed below.
- RDB2RDF - Test Cases, Test Harness
- Media Fragments - Server TC, UA TC
- RDFa - Test Harness, Test Suite
1.3 Traceability from use-cases to test cases
(added by steve battle) The UC&R defines a number of user-stories from which many use-cases are derived. Each use-case is in turn fleshed out by a number of scenarios, and each scenario is expected to lead to the development of a number of test-cases. Traceability from use-cases to test-cases ensures that the test-cases are well-founded; conversely, the test-cases help to confirm the completeness of the set of use-cases.
2 manifest test description, EARL results submission
- champion: ericP
- Turtle manifest (like SPARQL) with
- input state
- expected response
- expected result state
- results posted in EARL (also like SPARQL)
<test1> mf:action [
        u:graphData [ ut:graph <paging1.ttl> ;
                      rdfs:label "http://example.org/g1" ;
                      :as :LDPContainer ] ;  # ×n
        httprdf:PST [ endpoint <bar> ; ut:graph <post1.ttl> ] ;
        httprdf:PST [ endpoint <bar> ; ut:graph <response1.ttl> ]
    ] ;
    mf:final [
        u:graphData [ ut:graph <final1.ttl> ;
                      rdfs:label "http://example.org/g1" ;
                      :as :LDPContainer ] ;  # ×n
        u:graphData [ ut:graph <resource1.ttl> ;
                      rdfs:label "http://example.org/rsrc1" ;
                      :as :LDPResource ]  # ×n
    ] .
3 Schemas
The LDP test suite, the individual tests that compose it, and the test results will all be described in RDF.
The following sections propose guidelines for the schemas to be used to describe them.
3.1 Describing the test suite
- Identifier (the URI of the test suite)
- Title (dc:title) Title of the test suite
- Description (dc:description) Description of the test suite
- Licensing (dc:rights)
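As a sketch of these guidelines (the suite URI and license URI are placeholders), a test suite description could be:

```turtle
@prefix dc: <http://purl.org/dc/terms/> .

# Illustrative test suite description; URIs are placeholders.
<http://example.org/ldp-tests/>
    dc:title "LDP Test Suite" ;
    dc:description "Tests covering the Linked Data Platform specification." ;
    dc:rights <http://example.org/ldp-tests/license> .
```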
3.2 Describing tests
- Test case (td:TestCase)
- Identifier (the URI of the test)
- Title (dc:title) Title of the test
- Description (dc:description) Description of the test
- Contributor (dc:contributor) Who contributes the test
- Status (td:reviewStatus) unreviewed, approved or rejected
- Specification reference (td:specificationReference) Pointer to the related part of the specification
- See also (rdfs:seeAlso) Pointer to issue tracker and/or to use case
- Inputs (td:input) Inputs of the tests using the HTTP Vocabulary in RDF
- Expected results (td:expectedResult) Expected results using the HTTP Vocabulary in RDF
Relevant vocabularies:
- Test Metadata 
- Test Description 
- HTTP Vocabulary in RDF 1.0 
- Representing Content in RDF 1.0 
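Putting the fields above together, a test description might look like the following sketch. The URIs, the contributor, and the linked issue are illustrative, and the td: namespace is assumed to be the W3C Test Description vocabulary:

```turtle
@prefix td:   <http://www.w3.org/2006/03/test-description#> .
@prefix dc:   <http://purl.org/dc/terms/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ldp-tests/> .

ex:Test001 a td:TestCase ;
    dc:title "POST to an LDP Container" ;
    dc:description "POST a Turtle representation to a container and expect 201 Created." ;
    dc:contributor "A. Contributor" ;                     # placeholder
    td:reviewStatus td:unreviewed ;
    td:specificationReference <http://www.w3.org/TR/ldp/> ;
    rdfs:seeAlso <http://example.org/tracker/issue-42> ;  # placeholder issue
    td:input ex:Test001-request ;         # an http:Request description
    td:expectedResult ex:Test001-response .
```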
3.3 Describing results
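A single test execution result could be recorded as an EARL 1.0 assertion, for example (all URIs and the date are illustrative):

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dc:   <http://purl.org/dc/terms/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/ldp-tests/> .

[] a earl:Assertion ;
   earl:assertedBy <http://example.org/#tester> ;       # who ran the test
   earl:subject <http://example.org/my-ldp-server> ;    # implementation under test
   earl:test ex:Test001 ;
   earl:mode earl:automatic ;
   earl:result [ a earl:TestResult ;
                 earl:outcome earl:passed ;
                 dc:date "2012-11-20"^^xsd:date ] .
```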
4 Procedures
The following test-related procedures will apply in the LDP WG.
4.1 Test definition
Anyone can contribute tests to the test suite.
The license for the tests will be identical to this one: 
Tests must be described using the schemas defined.
A file with the test description must be placed in the WG source code repository; the file must be named TestXXX.rdf, using consecutive numbers.
The test description can refer to existing RDF files; these should be placed alongside the test description and named TestXXX-*.ttl, where '*' is as meaningful as possible (e.g., TestXXX-post.ttl).
A submitted test has a status of "unreviewed" and has to undergo a review process.
4.2 Test review
Anyone can raise issues against tests. Issues will be submitted using the WG issue tracker and resolved following the usual procedure.
During the review process, a test can end up in a "rejected" status.
Two weeks before the publication of the test suite, the WG members have to review the existing tests. After those two weeks, all tests without a related issue change to the "accepted" status.
Issues can also be created for accepted tests.
4.3 Test suite publication
For the publication of the test suite only accepted tests will be taken into account.
A manifest.rdf file will be created including the descriptions of all the accepted tests.
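Assuming the SPARQL-style manifest vocabulary mentioned in section 2 (the mf: namespace from the SPARQL test suite), the manifest.rdf file could be a simple ordered collection of the accepted tests:

```turtle
@prefix mf: <http://www.w3.org/2001/sw/DataAccess/tests/test-manifest#> .
@prefix ex: <http://example.org/ldp-tests/> .

# Illustrative manifest listing the accepted tests in order.
<> a mf:Manifest ;
   mf:entries ( ex:Test001 ex:Test002 ex:Test003 ) .
```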
A technical report will be created describing the different tests.
Test execution
Whether the WG will provide validators is still to be decided.
Validators must use the schemas defined for describing tests and results.
4.4 Result submission
Anyone can submit test execution results.
Test execution results must be described using the schemas defined.
A file with the test execution results must be placed in the WG source code repository; the file must be named ResultTTT.rdf, where TTT is the tool name.
4.5 Result publication
For the publication of the results, every result file will be taken into account.
A results.rdf file will be created including the results for all the tools.
An implementation report will be created containing all the results.