SVG Conformance Test Suite --
Test Builder's Manual

Revision:  3.00

Date: October 1, 2001

By:  Lofton Henderson

Document Versions

Rev    Date        Description of Change
1.00   2000-02-08  Standing project document, 1st WG release.  Based on Cupertino docs, later conformance decisions, additional research, and core information from the Path document.
1.01   2000-04-27  Releasable HTML version.
2.00   2000-05-18  Incorporate cumulative experience from BE suite construction.
2.01   2000-08-20  Improvements to "Test Review Guidelines"; also to details on writing tests.
2.02   2000-09-27  Comment about color management artifacts; XML navigation links and harnesses.
2.03   2000-11-14  CVS online; templates & harnesses; 2nd generation stuff; serial numbers.
2.04   2001-01-27  Further edits to reflect completed BE suite, 2nd public release.
3.00   2001-10-01  Editorial changes to align with final REC SVG test suite public release.

Preface

This document is intended as a permanent reference document and user manual for designers of, and contributors to, the SVG Conformance Test Suite.  It is the repository for currently agreed methods, templates, procedures, and techniques. This is the third public release of the document, and correlates with the public release of the completed BE test suite, subsequent to publication of the SVG REC. The first release (mid-2000) corresponded to an earlier public release of a subset (slightly over half) of the full BE suite. The second public release (early 2001) was a full BE suite, and corresponded to the public CR draft of SVG.

Caveat

Some parts of this document remain incomplete. In particular, the synopses of technical content of other conformance projects, in the chapter "Related Conformance Work", have not yet been detailed. In addition, some procedures and methods documented in the first public release have now been superseded or deprecated.

Table of Contents

  1 Overview
  2 Related Conformance Work
  3 Graphics Testing Overview
  4 SVG Conformance Suite Development
  5 How to Write Tests
  6 Test Review Guidelines
  7 Glossary
  8 Bibliography

1 Overview

1.1 Goals

The ultimate goal is a comprehensive and detailed conformance test suite for SVG 1.0. 

Amongst the multiple purposes which such a suite can serve, we identify the most important as:   a publicly available suite to help implementation builders achieve interoperability.

1.2 Scope

There are at least three areas in which conformance testing is applicable:

  1. conformance of SVG document instances;
  2. conformance of SVG generators;
  3. conformance of SVG interpreters and viewers.

This project's scope is limited to the third -- a conformance test suite for interpreters and viewers.

1.3 What Kind of Suite?

At the Cupertino SVG-WG meeting (11/99), the question of what sort of suite we are building was discussed and resolved.  The options included:

  1. An SVG demo suite;
  2. A QA test suite for product developers;
  3. A publicly available conformance suite and interoperability aid;
  4. A certification test suite for a rigorous certification service.

The SVG WG decided at Cupertino:  we are building #3, a publicly available suite for such uses as informal conformance analysis and developer self-testing.

Presently, there are no plans for an SVG "certification" service, so there is no need for #4 -- W3C doesn't currently carry out certification testing, nor is any currently proposed by other entities.

1.4 Why Does It Matter?

What is the difference?  In what ways would the suite differ depending on its purpose?

Some potential differences include:

While the formality and rigor of a certification suite might not be needed, the SVG conformance suite will (eventually) embody "traceability" (see below) -- what specification in the standard justifies a given test?

1.5 Milestones & Schedules

The SVG WG agreed to two milestones:

At the time of release of this document, the BE suite is complete and the SVG 1.0 REC has been published. Some initial work has been done on DT tests, but it has not yet been integrated into the test suite structure, nor in this current (REC) test suite public release.

1.6 Roadmap to this Document 

For those interested in a quick user guide for test construction, you can skip directly to "How to Write Tests". The rest of this document provides background, detailed explanation, and motivation for the methods used.

1.7 Document Status

The material in Section 2, especially the brief synopsis of the nature and content of each existing suite, remains incomplete.

Section 3 is substantially complete.

The material in section 4 is complete for Static Rendering, but Dynamic has not been addressed.  A couple of topics, such as the overall test-suite linking structure, are still incomplete (e.g., for when substantial DT tests are generated and added).

Section 5 -- How to Write Tests -- is substantially complete for Static Rendering "how-to", including incorporation of experience from a couple of years' work on the BE test suite.

Section 6, Test Review Guidelines, is complete.

Section 7, Glossary, contains a useful subset of key terms used in this document.

2 Related Conformance Work

2.1 Motivation

There is now a substantial body of test suite experience and material, for several different standards:

These suites and the experiences of building them are useful to the SVG conformance effort, and to contributors of test materials, in a number of ways:

The "level of effort" data should be particularly interesting to the SVG group.

Note. Between the initial release of this document (4/2000) and this release (10/2001), there has also been substantial test suite work on XSLT, XSL-FO, the DOM Java binding, etc. These could be researched and included in a future version of this section.

2.2 Applicability of Previous Conformance Work

2.2.1 Previous CGM Work

See [5] and [3].

The applicability of CGM test suite experience to at least the static rendering subset of SVG is obvious. 

CGM and SVG differ in other ways: 

Note. As of this date (10/2001), work is substantially complete and a public release is pending for a test suite for REC WebCGM.

2.2.2 Previous VRML Work

See [9].

[Analysis of the potential applicability of the VRML suite and methods will be written for a future document release.]

2.2.3 CSS

See [7].

For the application of visual properties (graphical attributes), SVG borrows heavily from CSS2.  The syntax for all such properties (plus some others) is patterned on CSS, and a number of CSS2 properties (esp. font properties) are adopted directly by SVG.  The full font selection and matching machinery of CSS2 is required in conforming SVG processors (interpreters and viewers).

W3C has made a test suite for CSS1. The methods of the CSS1 test suite should clearly be applicable to some aspects of the SVG suite.  Some actual CSS test materials might be (almost) directly usable.

2.2.4 DOM

See [6].

We'll have to test the SVG DOM.  NIST's DOM suite for the Javascript binding has been released, for both XML and HTML.  The Java binding is in progress.  Methods and techniques should be applicable, and perhaps some materials can be borrowed with minimal modification.

2.2.5 XML

See [8].

A conforming SVG interpreter (hence also a conforming SVG viewer) "must be able to parse and process any XML constructs defined in [XML10] and [XML-NS]."  A conforming SVG viewer therefore incorporates XML-suite conformance, by reference.

2.3 Size and Effort of Other Suites

2.3.1 CGM V1

Size:  about 200 simple, atomic tests.

Level of effort:   difficult to determine, but probably about 1 - 1.5 FTE, external contractor plus NIST staff.

2.3.2 CGM V3

Size:  270 tests (70 new, plus extensive redesign and revision of the existing 200+ V1 tests).

Level of effort:   difficult to determine, but probably about 1 - 2 FTE, external contractor plus NIST staff.

2.3.3 VRML

Size:  Estimated about 1,000 tests.

Level of effort:  3 people full time for about 2 years at NIST.  There was a steep learning curve.  NIST released the first tests after 3 months, and thereafter released tests as soon as each node was completed.

2.3.4 XML

Size of 1st release:  1,000 XML tests -- a DTD plus 4,000 lines of XML code and 400 lines of XSL.

Level of effort:  1.5 FTE -- 2 people for approximately 9 months.  One person designed the test harness and some of the tests; the other designed some tests and spent a lot of time validating, and filling holes in, what other people contributed.

Note. The current release (10/2001) contains about 2,000 tests. The level of effort has surely increased significantly, by an amount TBD.

2.3.5 DOM

Size of 1st release (Ecmascript with XML) -- 800 tests, 30,000 lines of code (this is only the Fundamental and Extended tests).

Level of Effort:  1.5 FTE -- One person half time for 9 months, who did the test harness (following much of what was done for VRML and XML); plus another full time for about 9 months; plus a third person about 4 months full time.   

2.3.6 CSS

Unknown. To be researched for inclusion in future version of this document.

2.4 Synopsis of Design & Content of Other Suites

2.4.1 CGM

The CGM suite ([5]) consists of 269 test cases, each of which has three components:

  1. Test file instance;
  2. Reference Pictures, at least one per test case (originally color hardcopy, now GIF files);
  3. Operator Scripts, one per test case (including "Verdict Criteria").

There is no interactive harness or driver, and hence no navigation buttons to assist movement through the suite.  The operator has to invoke the viewer, access the Reference Picture, and access the Operator Script.

2.4.2 VRML

[Synopsis of content and structure of the VRML suite will be written for future document release.]

2.4.3 XML

[Synopsis of content and structure of the XML suite will be written for future document release.]

2.4.4 DOM

[Synopsis of content and structure of the DOM suite will be written for future document release.]   

[Javascript XML and HTML are finished.  The Java version is being built now.  NIST did the Javascript XML suite first, then the Javascript HTML suite, reusing some of the data file (which is a big HTML or XML document on which the DOM tests work).  They then re-used the Test Assertions and the XML document for the Java DOM-XML tests.]

2.4.5 CSS

[Synopsis of content and structure of the CSS1 suite will be written for future document release.]

3 Graphics Testing Overview

3.1 Process of Building Test Suite Content

The following basic process is applied for construction of most of the test suites referenced above -- CGM, VRML, XML, DOM, at least.  In overview:

In practice, these steps need not be overly formal.  In the case of a certification suite, formality is important.  For a conformance suite, it is less so.  In any case traceability (see below) is required.

Therefore, explicitly or implicitly, these steps are carried out -- the document is read exhaustively and decisions are made about what to test about each functionality, and how to realize these decisions in a set of test cases.

Section 4.2 of reference [9] contains an interesting discussion of TRs (which it calls SRs) and TCs -- the step of generating TPs is implicit in this reference, not explicitly treated as a formal step.

3.2 Principles Applicable to Test Suite Content

Some basic principles have been learned during the construction of previous test suites, applicable to both graphics suites and others:

  1. Simple or Atomic.  Each test purpose should be as simple as possible and narrowly focused on an atomic (simple) functionality.  Example:  choose one attribute or property and exercise it through a range, while holding other variables constant.  The advantages to this approach are:
  2. Reducing number of tests.  Without sacrificing the principle of atomic testing, the number of test instances (files) can be reduced by having a single instance combine multiple related test purposes.  Example:  for an attribute or property with a half-dozen different enumerated values, test each of the values in a "sub-test" of a single test case (test file instance).  Counter-example (poor practice):  the first CGM test suite sometimes had each instance test a single value of an attribute, so that almost 30 test file instances were needed to test the horizontal and vertical text alignment values.
  3. Progressive.  For any functionality, the tests should be organized from easy and general to harder and more specific. This avoids wasting time and resources if the implementation  is completely incapable in a functional area. 
  4. Comprehensive.  The detailed tests should try to methodically vary and test all values, plus boundary conditions and extreme conditions, of each parameter, attribute, or property.
  5. Self-documenting.  The tests should be self-documenting.  For example, a line-width test should have something like tick-marks drawn to delimit the correct width.  Graphical (displayed) text should explain and/or label the pieces of the picture.  (A sketch of such a sub-test follows this list.)
  6. Key combinations.  #1 notwithstanding, there should be some number of tests which vary more than one attribute or property at once.  Especially, thought should be given to how implementations might fail.  Example:  CGM has separate but equivalent attributes for lines and for edges of filled primitives.  Since it is common for implementations to use a single stroke generator for both purposes, it is sensible to test that the state of line/edge attributes is properly saved and restored after drawing an edge/line.
  7. Real examples.   The CGM V3 suite included some real world graphics arts and technical pictures. 
  8. Traceability.  A test must be traceable back to a statement or statements in the standard's specification.
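
To illustrate principle #5, here is a minimal sketch of what a self-documenting line-width sub-test might contain. It is not an actual suite test; the coordinates, colors, and the 10-unit width are invented for the illustration.

    <g id="linewidth-subtest">
      <!-- Line under test: nominal stroke-width of 10 user units. -->
      <line x1="50" y1="100" x2="400" y2="100" stroke="black" stroke-width="10"/>
      <!-- Tick lines delimiting where the edges of a correctly rendered
           10-unit stroke should fall (y=95 and y=105). -->
      <line x1="40" y1="95"  x2="410" y2="95"  stroke="red" stroke-width="1"/>
      <line x1="40" y1="105" x2="410" y2="105" stroke="red" stroke-width="1"/>
      <!-- Displayed text explaining the sub-test. -->
      <text x="50" y="130" font-size="12">stroke-width=10: the black line should just fill the gap between the red ticks</text>
    </g>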

4 SVG Conformance Suite Development

4.1 Modularization & Prioritization

The SVG specification divides fairly cleanly into semi-independent functional modules.  Test materials will be developed and released progressively, subject to the constraint that we have agreed to make an entire breadth-first, basic effectivity (BE) test suite release first (now finished), and a drill-down (DT) release subsequent to that.

4.1.1 Static versus Dynamic

A major natural division in the specification is:

Static rendering got first priority for development and release of BE tests, although work on dynamic proceeded in parallel, and followed closely on the static tests. Prioritization (if any) for future DT work is tbd.

4.1.2 Progressive Ordering for Static Rendering

Functionality will be ordered in the suite, for purposes of execution and navigation through the suite, from most basic to most complex -- implementations should encounter the simplest and most basic tests first, before being subjected to progressively more complex and advanced functionality. This mirrors the organization of the REC SVG document.

Given our intent to make progressive releases of test suite modules, it made sense to generally follow this ordering for the building of the materials, at least for the completion of the DT and ER tests.

The SVG WG agreed at Cupertino (11/1999) to divide up the functionality by chapter.  For static rendering, the following chapters are addressed in BE testing:

Building and executing tests in chapter order does not appear to always lead to a basic-to-complex ordering. 

From most basic (or fundamental -- basic does not necessarily mean simple), to most advanced, a rough functional ordering might be:

The issue of test suite ordering and organization was resolved for the completed BE suite -- chapter order. The issue is still open for future DT work.

4.1.3 Dynamic Module Prioritization

The decision was implicitly made for the dynamic parts of the BE suite -- chapter order, first priority assigned equally to the tests in a breadth first set touching on each of the dynamic functionalities.

4.2 Test Case Materials

4.2.1 Static Rendering Materials

Each Test Case (TC) in the static rendering module contains three principal components:

  1. .svg file -- the SVG instance, designed to test one or more Test Purposes;
  2. .png file -- a raster reference image showing a correct rendering of the SVG instance;
  3. .xml file -- XML-based TC description file, including the operator script, test suite navigation information, etc.

The operator script comprises a few sentences describing what is being tested, what the results should be, verdict criteria for pass/fail, allowable deviations from the reference image, etc.

#1, #2, and #3 are file instances, one for each Test Case. #3 includes navigation information, and is the source for generating the (HTML) navigation page.

Note.  In the earliest test suites, for CGM, the Operator Script was a rigid and rote checklist for use by (non-expert) certification testing technicians, to score each test.  In more recent conformance work it has evolved to being more informative about the test's purpose and what to look for.  It also can (and should) function to improve the accessibility of the test suite.

Details and examples of writing an Operator Script are given in the next chapter.

Other supporting materials are generated for each test case:

Note. The traceability links were postponed until the SVG spec stabilized. Now that the REC has been published, these should be added to the test suite.

See below, sections [4.4.2] and [4.4.4], for further details about the test harnesses.

4.2.2 Dynamic Module Materials

Most of the SR materials are applicable to most dynamic tests.  However, there may be cases (e.g., some DOM) which do not have graphical output, and there will be some which could (but need not necessarily) have animated graphical "reference images".

For the BE suite, the dynamic materials are the same as for static tests. This material may be further refined as more of the dynamic functionalities' tests are developed (looking forward to DT development).

4.3 Types of Tests

Four generic test categories have been decided.  These are equally applicable to static rendering and dynamic test modules:

Note. As of the date of this document (10/2001), only BE and DT tests have gotten any implementation attention.

For BE tests, an attentive reading of the applicable spec sections is required, but an exhaustive TR enumeration is not.  The generic BE test purpose is:  correct basic implementation, including major variations within the functional area.

For DT and ER tests, an exhaustive TR extraction from the SVG spec is necessarily a part of the process, and test purposes are derived from the TRs (see next chapter).

Following is a list of generic test purposes for DT tests (ER also?):

The Generic Test Purposes provide a high-level checklist for the sorts of test cases which should result from the analysis and test design of a functional area.  If any major categories are not represented, it may indicate that some implicit or explicit requirements have been missed.

4.4 Packaging, Organization, and Presentation

4.4.1 Standalone and Browser Requirement

These requirements are agreed, at least for the static rendering module:

  1. the tests must be usable for a standalone viewer,
  2. the suite should be conveniently navigable and executable from a browser.

4.4.2 Test Harnesses

The VRML suite, as well as the CSS, XML, and DOM suites, employ an interactive test harness (HTML page), which:

Unlike the VRML suite, which presents the test rendering and the reference image side-by-side in one window, the SVG WG initially decided on a two-window approach, for standard release harnesses:

So in any case, a browser would have to be available for convenient viewing of all of the materials, but it is not necessary that a viewer-under-test be a browser plug-in. 

Since the initial determination, additional harnesses have been added to the public release:

4.4.3 Test Naming Convention

A strong naming convention for the materials is useful, both for the management of the test suite repository and for requirement #2 above.

Test names are brief but informative.  The name design is:  chapter-focus-type-num.  'Type-num' is a concatenation of the test type -- BE, DT, ER, or DM -- and its ordinal in the sequence of such tests -- 01, 02, ...

Examples:  path-lines-BE-01, shapes-rect-DT-04, styling-fontProp-DM-02. 

4.4.4 Harness Details

The primary test harness (static rendering, at least), is an HTML page which identifies the test, invokes the PNG reference image, and presents the operator script.  The plug-in variant also presents the SVG beside the PNG.

Navigation buttons are provided to go back to a table of contents (and maybe an index), to navigate laterally through BE tests, to drill down to "child" tests (from the BE level to the DT, ER, and DM tests), and to go back up to "parent" tests from the lower levels.

Per an early WG decision (11/1999) and subsequent discussions, the principal HTML harness presented the operator script and the PNG reference image, but did not assume a browser-invokable SVG viewer -- the test administrator had to get the SVG image into another window, or onto a printer, or whatever was appropriate. There was a companion SVG-only harness to assist this (with exactly parallel navigation capabilities to the PNG-plus-Operator Script harness.)

With the current method for producing harness(es) -- XSLT stylesheet applied to instances of a simple XML grammar which describes each test case -- it is not difficult to produce multiple harness versions, including variants such as PNG plus rendered SVG plus operator script, for browser-plugin SVG viewers. These have now been included in the public release.

The first generation design of simple XML grammar for describing tests, and the XSLT stylesheet for producing the HTML page, have been released.  A "manual" SVG template has been released as well -- see next chapter for details.

Work was completed on a "second generation" of harness and template tools:

This evolved into a test suite editor's tool, for performing major upgrades that involve modifying the standard template in a uniform manner. Because so many exceptions arose that required manual or special processing, this line of tools has been largely abandoned in favor of simpler generic methods -- 'sed' scripts and the like.

Originally planned for the second generation tools:

After some working group experience with the BE suite and processes, it was decided that this sort of "make" overhead was undesirable.

4.4.5 SVG Template Details

Each test, at least for the static rendering module, is put into a standard template.  As just discussed, this was originally a manual process -- the test writer puts the test body content into the template, and sets the routine information items in the template. The second generation tools mentioned in the previous section relieved this requirement somewhat, as long as the content was written in the appropriate coordinate range, etc.

Features of the standard template include:

  1. Canonical Size of 450 x 450.  The template has width and height of 450, without units, and does not contain a viewBox attribute.  This configuration is to be used for all tests except those which probe size, viewBox, and related attributes and functionality.
  2. Valid XML, in particular has a DOCTYPE which references the operative SVG DTD.
  3. A comment block which records such information as author, revision history, etc.
  4. <title> element with the test case name, per convention described above.  Example.  path-lines-DT-04.
  5. <desc> element, with concise (one sentence is ideal) summary of the test purpose, i.e., what this test is exercising.  Example:  "Test that viewer correctly displays path components up to, but not including, a command containing an error."
  6. These two elements appear inside the <svg> element, before the actual test content itself.
  7. [Test body content is generated and inserted after <title> and <desc>.]
  8. Legend & frame.  A block (<g>) at the end of the SVG file, comprising simple <text> elements so that they are part of the picture -- with the following content:

#1, #3, #5, and #7 are the most critical. The template-generating tools do carry these parts forward, and generate the other parts anew at each upgrade run. An initial TC submission which has these parts can be input to the tools, to generate a fully conformant SVG test case instance (the latter is a management process for the "test suite editor", whoever that is).

The serial number is a method for ensuring that the PNG reference image, the SVG instance, and the SVG rendering are all current and all agree.  To be useful, it must unfailingly increment whenever any change is made to the SVG file -- in fact, whenever the SVG file is saved.  Originally this was a manual process. It has been replaced with the Revision keyword of CVS (the suite is now maintained in a CVS repository).
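
For orientation only, a rough sketch of the overall shape of such a template follows. It is not the released static-output-template.svg -- the comment block, legend content, and attribute values are abbreviated or invented here -- and only the structural features enumerated above (canonical size, DOCTYPE, <title>, <desc>, test-body-content group, trailing legend) are meant to be taken literally.

    <?xml version="1.0" standalone="no"?>
    <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN"
        "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
    <!-- ================================================================ -->
    <!-- Comment block: author, revision history, etc. go here.           -->
    <!-- ================================================================ -->
    <svg xmlns="http://www.w3.org/2000/svg" width="450" height="450">
      <title>path-lines-BE-01</title>
      <desc>Basic test of the straight-line path commands (L, l, H, h, V, v).</desc>
      <g id="test-body-content">
        <!-- Test body content is generated and inserted here. -->
      </g>
      <!-- Legend & frame: plain <text> elements, part of the rendered picture. -->
      <g id="test-legend" font-family="Helvetica" font-size="10">
        <text x="10" y="440">path-lines-BE-01</text>
        <text x="340" y="440">$Revision$</text>
        <rect x="1" y="1" width="448" height="448" fill="none" stroke="#000000"/>
      </g>
    </svg>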

4.4.6 Linking Order

The overall linking structure is: TOC, BE layer throughout the suite (next/previous), DT drill down from BE (child), BE pop up from DT (parent), etc.  Some details are to be worked out.  Does each BE point down to a different DT "stack"?  Or do all BEs in a chapter or chapter-focus point down to the first DT in that area?  The WG tentatively decided the latter (the structure of the suite is not likely to be regular enough to make the former widely practical).

An index of all files would be a useful addition, in some future release of the test suite.

4.5 CVS Repository

Processes and procedures still need to be designed, and a test suite editor assigned, for: 

Previously, the test suite editor was the repository, and all of these items were ensured by the editor. The repository is now in CVS. An introduction to it can be found in the document "CVS Repository for SVG Test Suite" (presently only accessible to the WG). All WG members now interact directly with the CVS repository. Experience has shown that there are varying degrees of completeness, correctness, and integrity in the results with all-WG direct access.

Once a test case is submitted, it is "owned" by the repository. All WG and public releases are made from the repository, and all maintenance changes to test cases must be applied to the latest repository version.

If the second generation tools (for automatic SVG template generation) had been made operational, adherence of test cases to the exact formatting (as opposed to functional) details of the template conventions could have been somewhat automated. However, test contributors would still have had to adhere to certain minimum requirements in structuring the initial contribution (see the previous section, plus later sections about using the templates). And, as indicated earlier, there were too many exception cases in which the automated procedures couldn't be applied.

4.6 Sources for Tests

4.6.1 SVG Specification

The examples which already exist in the SVG Specification have proved to be an excellent basis for material for some of the "Basic Effectivity" tests  -- simple tests which minimally illustrate an SVG functionality.

4.6.2 Vendor In-house work

Several of the SVG WG members have in-house development efforts.  Materials ranging from basic path and shape graphics, to filter effects tests, to DOM and animation functionality are known to exist.  Though some adaptation and integration into the SVG Test Suite framework is required, these existing QA materials have already proved to be a valuable resource for BE suite completion.

To be done:  inventory what is available within the WG (for further DT development -- BE is done).

Note. The OASIS XSLT/Xpath conformance committee has pioneered and is now (10/2001) ready for public release of tools and procedures for accepting and integrating collections of externally authored test cases. These should be investigated for any future DT test suite construction.

4.6.3 CGM Translation or Transcription

For graphical output functionality, there is substantial commonality between CGM and SVG (see comparison table in [12]).  The CGM Test Suite (for ATA, release 3.0, see [5]) has 269 tests, conforming to the test suite principles articulated above. 

There are two interesting possibilities to leverage this CGM design and implementation work:

Note that NIST-certified CGM viewers exist, as well as certified printer drivers and certified rasterizers.

Note. This potential source has never been exploited, but could be potentially useful for generating some DT tests.

4.6.4 Outside Contributions

Contributions from outside of the WG may some day be solicited.  Minimal processing for contributions should include:

4.6.5 Construct New SVG Tests

There is only one way to assure comprehensive coverage of Test Requirements: build new tests, carefully designed and targeted at specific TR(s).

There are two ways to approach this:

Experience with the first has been:  the output drivers usually don't have precise enough control of the individual pattern of elements, and manual touchup is almost always required.

4.6.6 An Interesting Source of Test Purposes

From CGM experience, an interesting source of Test Purposes, and possibly even test materials, are instances which have:

These indicate trouble areas, where implementers are likely to misinterpret or incorrectly implement the specification.

4.6.7 Other X*L Tests

Beyond static rendering module, it should be possible to leverage methodology, or tests, or both, from such resources as the DOM test suite, [6].  CSS, [7], should be applicable as well (for future DT test generation).

5 How to Write Tests

5.1 Overview

This is meant to be a cookbook for writing the test cases for a functional module. Functional modules have generally corresponded to chapters in the SVG spec.  These techniques were prototyped with the Path chapter, and have been applied in the generation of BE-level tests for the whole spec.

Note.  When we did the Path chapter, an exhaustive TR extraction was one of the first actions.  This is not necessary prior to BE test case specification, and the TR analysis is postponed until after the BE test case generation in what follows.

5.2 Outline of BE Test Generation

Implicitly or explicitly, formally or informally, in this order or another which you prefer, you will go through these steps for writing BE tests:

  1. Read the document, all sections related to your functional topic.
  2. Enumerate a list of the major pieces of functionality.
  3. Decide on the focus sub-sections (the "focus" sections) for your chapter.
  4. Figure out a set of test cases which covers the functionality list.
  5. Derive a list of test purposes (at least implicitly).
  6. Combine them into a convenient number of test cases.
  7. Write the BE test case specification for each of the test cases.
  8. For each BE test case instance, fill in the required information in the SVG template.
  9. Write the SVG code to implement the BE test case specifications, put it in the template, refine till you're happy with it.
  10. Write the XML description file, especially the Operator Script part.
  11. For each BE test case instance, generate the HTML harness (for your own use only).
  12. Generate the PNG reference image.
  13. Put it all together and tune it till you're happy with it.

5.3 Outline of DT Test Generation

DT test generation involves more rigor and more systematic methodology than BE test generation.   The basic steps for DT tests are similar, with some differences at the beginning:

  1. Read the document exhaustively, critically looking for any and all testable assertions.
  2. Enumerate a list of all Test Requirements (TRs).
  3. Design a set of Test Cases which cover all of the TRs:
  4. Derive a set of individual Test Purposes which covers all TRs, with attention to the list of Generic Test Purposes.
  5. Write the DT test case specification for each of the test cases;

The rest follows as for BE tests.  The difference between the DT outline and the BE outline is in the rigor and thoroughness of the first steps, which are deciding "what to test".

5.4 Chapter and Focus Sections

The naming convention for SVG tests is:  chapter-focus-{BE|DT|ER|DM}-NN  (NN=01, 02, ...).

Decide how to subdivide your functional area ("chapter") into subsections -- "focus" sections.

Chapter is self explanatory -- a one-word, though possibly complex, name for your document chapter or functional area. Examples: path, coordSystem, clipMaskComposite.

Focus is simple to specify in some cases:  shapes-rect-...; path-lines-...; filters-feColorMatrix-...

In some cases, focus might not seem obvious.  However, the "focus" component of the name is always required.

"{BE|DT|ER|DM}" indicates that exactly one of the two-letter test type designators is to be used, BE or DT or ER or DM. The numbering runs consecutively throughout the chapter, it does not restart with each focus subsection.

Note. Use "camel case" for compound words for chapter and focus. The first letter is lower case, and the first letter of subsequent words is upper case. Examples: gradPatt, radialGradient, textAnchor.

5.5 Designing a BE Test

Remember, at the BE level, we are only trying to verify that the interpreter or viewer has implemented the given functional area.  Therefore, we focus on the major functional pieces of the chapter.

There are no firm rules as to what comprises a BE test and what is DT -- sometimes it's a judgement call. However, here are some helpful guidelines:

Example.  The 'path' element has a "d" attribute, which can contain a number of commands:  Mm, Ll, Zz, Hh, Cc, ...  It also has the "nominalLength" attribute, which is unique to Path.  The BE tests for path give a basic exercise of these attributes and commands, including verification that the implementation understands the concept of subpath (holes and islands).
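
As an illustration of the subpath point, here is a minimal sketch of test-body content that draws a square "island" with a square "hole"; the coordinates and fill color are invented for the example.

    <!-- One path, two subpaths; with fill-rule="evenodd" the inner
         square must render as a hole in the outer square. -->
    <path d="M 100 100 L 350 100 L 350 350 L 100 350 Z
             M 175 175 L 275 175 L 275 275 L 175 275 Z"
          fill="#AACCAA" fill-rule="evenodd" stroke="#000000"/>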

Note that there are also attributes like "style", "class", and "transform", which are functionalities widely applicable to other parts of SVG.  It is a judgement call, but in the path BE tests we avoid these details -- they will in fact be attacked in their own modules, such as Styling, Transform, etc.  (In other words, we don't deal with them extensively in the Path tests, not even in the DT tests.)

We adopted a principle for the Path tests:  just enough styling -- basic colors, etc -- to make the tests visually less grim (than b/w, one-pixel wide lines, no fill). This principle is applicable for all tests -- do not unnecessarily clutter the test with functionalities which are unrelated to the functionality being tested.

When starting to design the BE tests, look at the existing very simple examples in the SVG spec for starters.

5.6 Writing BE Test Description -- Test Case Specification

The guidelines in this section apply to DT, ER, and DM tests, as well as BE tests.

The test description is only for you, the test designer, unless you're dividing the labor into specification versus production -- different people doing each -- which we actually did on the Path prototype.  Nevertheless, experience shows that the challenge of writing the description forces one to actually design the test in sufficient detail that it is then easy to write SVG content to implement it.

It is the Conceptual Description of the test (see below) which is useful at this early stage (and it might be done after you have sketched/drawn the Test Case).

In the Path prototype, the following format proved useful in describing Test Cases:

  1. Name (<title>):   using our convention, e.g., "path-curves-BE-04".  This will be the root of associated filenames for the test case (TC).
  2. Test Purpose (<desc>):  a brief one-sentence summary of what this TC addresses.
  3. Conceptual Description:   brief prose description (to TC instance builders) of the content of the TC.  Typically one-to-few paragraphs long.  See [11] for examples.
  4. Operator Script:  Summary of test purpose(s), how to execute test, what to look for in result, some specifics about what constitutes pass/fail, allowable variations from reference picture, etc. 
  5. Associated Test Requirements:  Formally or informally, this information should be available by the time the test case is designed (it is clearly applicable to DT and ER tests, less so for BE and DM tests).
  6. Document References:   Links or pointers from the test to the SVG spec.

See later section for writing the Operator Script (#4). 

The Associated Test Requirements (#5) and Document References (#6) are the crux of traceability.  You will have generated this information (implicitly, at least) by the time you have designed and written your test case. If you are writing test cases, generate and preserve this information (see, for example, next section and [11]). 

See the next chapter, "Test Review Guidelines", for a concise summary of other details which you should pay attention to, when you design and write a test case.

Note. Traceability data are not currently integrated into the completed (REC SVG) BE suite. This will have to be done (as links into the SVG spec) in a future test suite release.

5.7 Extracting Test Requirements (TR) for DT Tests

A comprehensive list of Test Requirements is the critical first step in writing a comprehensive set of DT test cases for your functional area.

There is no magic rule.  See [9], section 4.2, for an interesting discussion of this.   See [11] for a fairly thorough (as far as it went -- it's incomplete) example for the Path chapter.

It starts with an intensive reading of each sentence of your chapter.  Weigh the question, phrase by phrase:  is there a testable assertion here?  If so, highlight it and add it to your list (however you want to manage it).  You also will need to read some other chapters, at least:

and maybe others like Accessibility.  Depending on your chapter, you might be led off into other sections of the document for requirements, or into other standards (e.g., CSS2).

Build up a list of TRs -- testable assertions -- which might be applicable to a viewer or interpreter.  In fact, don't discriminate by "applicable to" at this stage.  The SVG spec is written to describe a file format, and the semantic requirements associated with data elements are not always explicitly stated.  For example, from the Path chapter:

"The command letter can be eliminated on subsequent commands if the same command is used multiple times in a row." 

This is a statement about allowable data configurations within 'path' elements.  But combined with the statement from Appendix G, "All SVG static rendering features ... must be supported and rendered ...", a testable assertion about viewers results (and a test purpose can be derived).

You'll be lucky to find any (or many) statements which jump out with a "shall" or "must" -- it doesn't occur once in my (incomplete) list of 59 TRs for Path, in [11].

So for a first pass, pick up anything and everything which looks like it might lead to a Test Requirement on an SVG viewer.

For the Path chapter, I assembled a list of Test Requirements after the first intensive reading, during which I did markup on paper.  You can follow this sort of process, or do whatever is most agreeable for you. 

Each entry in the list contained a document reference, and the text of the TR -- I did cut-and-paste against the HTML version of the document for the latter (note:  there is some danger of volatility with this, at this stage of the document). 

Simple example:

Reference:  10.3.2.Mmtable

Statement:  (x,y)+ -- Mm must be followed by one or more x,y pairs.

Lengthier example:

Reference:  10.4.p1.b2

Statement:  nominalLength= The distance measurement (A) for the given 'path' element computed at authoring time.  The SVG user agent should compute its own distance measurement (B). The SVG user agent should then scale all distance-along-a-curve computations by A divided by B. 

I used the following ad-hoc (pseudo-Xpointer) notation for referencing Test Requirements: pN.bN.sN, where  pN = paragraph N, bN = bullet N, sN = sentence N.  Plus unambiguous constructions like "Mmtable" to point at tables and table entries.

See [11] for the example of the complete listing of those Test Requirements (TR) which pertain to geometry and syntax of Path (extracted from the 19990914 SVG spec.)

5.8 Designing DT Cases

We want to turn the TR list into a number of Test Cases which exhaustively cover the requirements in the TR list.

The first step, implicitly or explicitly, is derivation of a set of Test Purposes associated with the TR list.  Example:

TR:  "Mm must be followed by one or more x,y pairs"

TP:  "Verify that interpreter correctly handles Mm with one (x,y) pair, or several pairs, or many pairs."

Note that, because of the "Error Processing" requirement for Path, this suggests another TP:  "Verify that interpreters respond correctly to invalid Mm data combinations."  (Such as "M x L x y", or "M L x y", or "M x y x y x Z".)
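
A sketch of what the body of such an error test might contain is shown below. The coordinates are invented; the expected behavior (render up to, but not including, the command containing the error) is the error-processing rule reflected in the template <desc> example given earlier.

    <!-- The third L command is missing its y coordinate.  A conforming
         viewer renders the first two segments, then stops at the bad
         command (it does not discard the whole path). -->
    <path d="M 50 50 L 400 50 L 400 200 L 50" fill="none"
          stroke="#000000" stroke-width="3"/>
    <text x="50" y="230" font-size="12">Only the top and right lines should be drawn.</text>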

This leads to another point.  The general conformance requirements of Appendix G imply a list of "Generic Test Purposes" (see [4.3]):

Keep this list at hand while you are looking at your TR list and deciding what to test, i.e., deciding Test Purposes.  (This list might be extended -- suggestions welcome.)

I have been a bit informal with my "TP list", but I still keep track of what I have covered on the TR list and the generic TP list, so that I know when I'm done.  See, for example, [11], the section, "Detailed Drill-down Tests (DT) for Line Commands."

The final principle here, once you have an idea of what you're going to test and how, is to put a reasonable number of "atomic" tests together to make a Test Case.  This will reduce the number of individual test cases and increase their content density.  The guiding principles should be: 

5.9 Generating the Template and Harness

The first release of a simple XML grammar for describing tests, and the XSLT stylesheet for producing the HTML page, featured:  

  1. SVGTestCase.dtd -- the DTD for the simple XML grammar for describing a test case.
  2. CreateHTMLHarness.xslt -- XSLT stylesheet for producing the harness (HTML page) from the XML instance.
  3. CreateSVGHarness.xslt -- XSLT stylesheet for producing the all-SVG harness from the XML instance.
  4. harness-ps-create-frameset.xslt, harness-ps-create-top.xslt, harness-ps-create-bottom.xslt -- XSLT stylesheets to generate PNG&SVG side-by-side, OS below, in frame-based page.
  5. static-output-template.xml -- sample XML instance.

These were the extent of the "first generation" production tools. If you process the XML instances with CreateHTMLHarness.xslt (#2), you will get HTML pages which pull together and present the PNG reference images, the operator scripts, and navigation buttons for the suite. Various batch commands have been supplied as well, to assist in making one or more of the harness types for one or a list of test cases.

Subsequent to the initial release of these tools, #2 - #4 have been rewritten, factored to improve maintainability, and augmented with another harness -- all-SVG which presents SVG and PNG side-by-side (but with no OS).

These tools are all in the CVS repository, in the tools subdirectory.

If you process the XML instances with CreateSVGHarness.xslt (#3), you will get a parallel set of SVG pages with SVG elements for navigation buttons, and inclusion by reference of the test case SVG instances themselves.

If you process the XML instances three times respectively through the three XSLT stylesheets of #4, you will get a set of three HTML files which generate frame-based pages. PNG is displayed side-by-side with SVG (the latter via a plugin), and the Operator Script is displayed below, in the bottom frame.

Initially I used only the XT tool of James Clark (get it from his Web site).  You can use whatever tool you prefer, but a caveat -- different XSLT processors may give inconsistent results.  Since the rewriting and revision of the stylesheets, and the addition of the 4th harness (all-SVG), I use Apache/Xalan for the two all-SVG harnesses (to avoid an anomaly involving the appearance of unneeded namespace declarations).

A set of DOS batch command files exists, to facilitate generation of the harnesses. These are documented elsewhere (TBD).

A "manual" SVG template has been released as well.

The scheme has now been developed to the point that automatic generation of the SVG skeleton file, with some of the details filled in (see next section), is now possible. A set of advanced stylesheet tools was designed to do this from a slightly expanded XML grammar (featuring things like 'desc' strings, elements for author and creation date, etc). The final second-generation tools, however, stayed with the simple initial grammar and don't automate all of the required information. Therefore the static-output-template.svg is still the place to start if you are writing a new test case.

5.10 Using the Template to Write Test Case

Starting with static-output-template.svg:

  1. put the test name into the <title> element.
  2. put a brief, one-sentence test purpose description into the <desc> element
  3. modify the preamble comment block
  4. fill in the Legend block
  5. write or otherwise generate the SVG test content and put it in the body, where indicated (test-body-content).

Note about #5: this is critical. All critical content should be inside of the "test-body-content" group. The second generation tools and maintenance processes rely on it -- most of the stuff outside of this group is automatically generated, and it would be lost if you put anything important there (the 'desc' and comment preamble will be preserved). Though these (2nd gen) tools are not in production, it would be wise to keep the option open.

Note about #2. That the test should be self-documenting implies that the picture should have a <text> element equivalent to the <desc> (across the very top is a good location, in most cases).

Use good and thorough comments in the SVG content itself to describe what everything is doing.

There may be some test purposes (e.g., the structure-emptySVG-BE-01.svg test) which require no graphical content, in which (only) cases the Legend may be omitted.

See below about the Serial Number (now superseded with CVS's Revision keyword).

See the next chapter, "Test Review Guidelines", for a concise summary of other details which you should pay attention to, when you design and write a test case.

5.11 Writing an Operator Script

The key feature of the XML instance is the Operator Script. Note when you fill in your XML instance, you will also define the name (per convention documented herein), and navigation links. The latter define the next and previous TCs in the sequence for link navigation through the suite.

Be sure that your TC's XML instance links correctly to its TC neighbors, and vice-versa! This is probably the most common mistake amongst contributors -- failure to adjust the links of neighbors when a new test case is added (or a test case is removed). (This is sufficiently annoying that an automated verification or even correction tool is on "the queue", for some future development.)

The Operator Script comprises a few sentences and is written as one or more paragraphs of the XML instance (see earlier section) for the test case. 
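
For orientation, a sketch of a test-description instance is shown below. The element and attribute names here are invented for the illustration -- the actual grammar is the one defined by SVGTestCase.dtd and exemplified by static-output-template.xml -- but the roles shown (test name, previous/next navigation links, Operator Script paragraphs) are the ones discussed in this section.

    <!-- Hypothetical instance: element and attribute names are invented. -->
    <SVGTestCase name="path-lines-BE-02"
                 previous="path-lines-BE-01" next="path-lines-BE-03">
      <OperatorScript>
        <Paragraph>
          Test of the basic lineto commands.  The rendered picture should
          match the reference image: three black polylines drawn across
          the upper half of the viewport.
        </Paragraph>
        <Paragraph>
          The test passes if every vertex falls on its cross-hair marker.
          Minor anti-aliasing differences from the PNG are acceptable.
        </Paragraph>
      </OperatorScript>
    </SVGTestCase>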

Once again, there are no firm rules.  However, the Operator Script can address any or all of:

  1. describing what is being tested, i.e., a summary of the test purpose(s);
  2. what the results should be;
  3. verdict criteria for pass/fail;
  4. allowable deviations from the reference image;
  5. how to execute test, if there are any special instructions;
  6. optionality of features;
  7. prerequisites (other functionality used in the test);
  8. accessibility aid.

#2 could conceivably be:  "picture should look like the PNG".  However, some specifics could be pointed out, such as (for an accuracy test):  "All lines should pass through the cross-hairs", or "Vertexes should be at the locations of the markers".

#3 and #4 go together.   If there are allowable deviations of the rendered SVG from the PNG, it should be stated (e.g., maybe a style falls back to the default style sheet, which can vary).

About #6, optionality.  If a test is exploring an optional or recommended feature, that should be clearly indicated right at the beginning of the operator script. 

#7 refers to a brief description of SVG functionalities, other than the one under test, which are used in the test file instance.

In addition to the other purposes of the Operator Script, a well-detailed Operator Script can be useful as an aid to accessibility (#8).

5.12 Generating the PNG

Note. The following describes first-generation methods for getting the PNG reference images. New tools are available from several sources, e.g., Adobe, CSIRO, and the Apache/batik project, which allowed direct generation of correct PNG files for all test cases in the second public release, i.e., the complete BE suite.

When you develop a test case, you submit to the repository the PNG reference image, plus a description of how you generated the PNG.

5.12.1 Screen Capture SVG Rendering & Postprocess

By far, the most common method of generating the PNG reference image is:

  1. Produce SVG.
  2. Display the SVG with an SVG Viewer, such as the Adobe plug-in, the batik viewer, etc.
  3. Do a screen capture (e.g., Alt-PrtSc to clipboard in Windows);
  4. Paste the screen capture into a tool such as Adobe ImageReady, Corel Photopaint, Macromedia Fireworks(?).
  5. Edit to trim away non-picture part, resulting in 450x450 pixel image.
  6. Depending on the capabilities of such tool:
    • save as PNG (e.g., you can do this from ImageReady).
    • or, save in some portable native format of the image editor, load that into a PNG generation tool (e.g., Adobe ImageReady 2.0, Corel Photopaint, Macromedia Fireworks), and save as PNG.

5.12.2 Screen Capture & Postprocess "Patch" File

The screen capture method relies on having an SVG viewer which can correctly display the picture. Sometimes, especially in the early days of viewer implementation development, this is not possible -- no SVG viewer handles the test instance correctly.

However, it is often the case that there is another SVG file which is exactly equivalent (pictorially). These are called "patch" files, and are named, for example: structure-nestedSVG-BE-02-patch.svg (actual example).

Example: viewer doesn't correctly establish the origin of the user space for a simple test of nested SVG elements. Then compensate for the viewer error by changing the coordinates of the innermost graphical elements so that the viewer positions them (graphically) correctly.

Example: viewer defaults something wrong. Then compensate by explicitly setting that value (assuming that the viewer does this correctly).

Example: multi-stop gradients don't work, but two-stop are correct. Make a "-patch" file where the correct picture of a multi-stop gradient is built by stringing together multiple two-stop gradients.
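
A sketch of that gradient case follows; the stop colors and geometry are invented, but the idea is that the original test paints one rectangle with a three-stop gradient, while the "-patch" file builds the pictorially identical result from two adjacent rectangles, each with a two-stop gradient.

    <!-- Original test content: one rect filled with a three-stop gradient. -->
    <linearGradient id="grad3">
      <stop offset="0"   stop-color="#FF0000"/>
      <stop offset="0.5" stop-color="#00FF00"/>
      <stop offset="1"   stop-color="#0000FF"/>
    </linearGradient>
    <rect x="50" y="100" width="300" height="100" fill="url(#grad3)"/>

    <!-- "-patch" equivalent: two adjacent rects, each with a two-stop
         gradient, producing the same picture on a viewer that cannot
         handle more than two stops. -->
    <linearGradient id="gradLeft">
      <stop offset="0" stop-color="#FF0000"/>
      <stop offset="1" stop-color="#00FF00"/>
    </linearGradient>
    <linearGradient id="gradRight">
      <stop offset="0" stop-color="#00FF00"/>
      <stop offset="1" stop-color="#0000FF"/>
    </linearGradient>
    <rect x="50"  y="100" width="150" height="100" fill="url(#gradLeft)"/>
    <rect x="200" y="100" width="150" height="100" fill="url(#gradRight)"/>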

Test contributors should submit any "-patch" files, along with the PNG files and "how to" description.

5.12.3 Handcode the Reference Picture in a Graphics Program

This technique was used by some contributors, before the development of SVG viewers was very advanced:

  1. Produce SVG, e.g., by hand-coding.
  2. Draw out the picture in a graphics program (e.g., Adobe Illustrator, CorelDraw, or Macromedia Freehand) that shows how the SVG should look; save as EPS (or another good exchange format for subsequent editing).
  3. Load the EPS into a PNG generation tool (Adobe ImageReady 2.0, Corel Photopaint, Macromedia Fireworks) and save as PNG.

With this method, you should be on guard against accuracy issues, as the SVG and the PNG result from independent and disjoint drawing pipelines. 

A variant of this is to use a graphics program to draw just the incorrect piece of the SVG rendering, and then cut-paste with a raster editor to get a complete and correct reference PNG.

5.12.4 CGM Transcoder, CGM Render, Screen Capture & Postprocess

This method has been postulated, but (to my knowledge) not yet used by anyone.  It would be equally applicable to formats other than CGM, when transcoders are available.

  1. Produce the test case/picture you want to draw as a Clear Text CGM (not as nice as SVG, but still can be hand-coded for simple stuff);
  2. If the transcoder of step #3 cannot handle clear text CGM, do encoding conversion (e.g., via CGMconvert) to Binary CGM;
  3. Convert to SVG with a CGM-to-SVG transcoder (e.g., one such is available from IBM);
  4. Take the body-content and put it into the SVG template, doing any hand editing that might be required;
  5. Render the CGM, screen capture and proceed as in previous methods.

Whether or not this will work for your test case depends on whether the result of step #3 is close enough to the SVG configuration you need for the test (and correct!), and more importantly, whether the hand-editing of #4 preserves the graphical accuracy of the rendered picture.

A variant, for simple test cases, would be to hand-code the desired SVG, hand-code (graphically) equivalent clear text CGM, and not use the transcoder at all -- just use hand-coded CGM as a route to a correct picture.

This would only be useful, in place of the previous methods, if none of the existing SVG viewers could get a correct rendering of the desired SVG test case, and it was too difficult to reproduce the desired drawing in a graphics program. 

5.12.5 Notes about PNG Generation

Three aspects of the PNG file generation should have your attention:

  1. file should be 450x450, matching the SVG test case.
  2. file should use PNG-8 for compactness, unless PNG-24 is required.
  3. avoid artifacts from color management systems.

The only exceptions for #1 are test cases which specifically deviate from the canonical 450x450 coordinate space, in order to test viewer handling of different SVG address spaces. In this case, match the SVG test instance.

8-bit PNG should suffice for most tests -- 256 colors are possible. 24-bit PNG is likely to be required for tests such as:

While it might be possible to compute the number of colors required in some of these cases, and optimize with PNG-8, nevertheless it is strongly recommended to be conservative and use PNG-24, if there is any doubt.

For these same cases which require PNG-24, it has been discovered that attention must be given to the color mode of the monitor, if screen capture is being used. On PC Windows systems, for example, noticeable color banding has occurred on some tests when using "High Color" (16-bit) mode, and it disappears if "True Color" (24-bit) mode is used.

Avoidance of color artifacts is not yet completely understood. However, be aware that color management systems on your computer may lead to incorrect colors in the PNG reference image. One commonly seen example is a faint pink tinge to areas that should be white. These color management artifacts can sometimes be detected by using tools in raster editors (such as Adobe ImageReady 2.0, Corel Photopaint, Macromedia Fireworks.)

5.13 About Serial Number

During CGM test suite development, a major annoyance and quality impact arose from not being able to keep synchronization between the reference image (the PNG files, for this SVG test suite), and the test case (SVG for us).  Changes made to the latter often weren't reflected by updating the former.  Worst of all, there was no way to detect the problem when it occurred.

A "serial number" in the SVG, which is encoded in graphical text, is the solution for this -- it is quick and easy to determine if the PNG corresponds to the SVG file and a given rendering of the SVG file (e.g., printout or screen image) ... assuming that the PNG was generated from the SVG!

The serial number is part of the Legend of the SVG file.  Previously, the serial number was manually maintained and updated, which proved to be something of an annoyance. Nevertheless, the version control benefits warranted the inconvenience. Originally, the serial number was identical to a version number -- 1, 2, 3, ... Its maintenance was solely the responsibility of the test suite editor, which somewhat alleviated the error-prone manual aspects.

Now, the serial number text string (in the Legend of test cases and templates) has been replaced with the Revision keyword of the CVS system. Every 'commit' of a changed SVG file, no matter how trivial the change, automatically updates the serial number. It is identical to the revision number in CVS.
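
For example (the revision value shown is arbitrary), the Legend's serial-number text is authored containing the bare keyword, and CVS rewrites it at each commit:

    <!-- As authored in the template: -->
    <text x="10" y="445" font-size="10">$Revision$</text>
    <!-- As it appears after checkout following a commit: -->
    <text x="10" y="445" font-size="10">$Revision: 1.7 $</text>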

Note: a very few .SVG files need to be treated as binary instead of text (e.g., ones with wide range UTF-16 character codes). Presently (10/2001), there are only two such test cases (still true?). Keyword substitution is suppressed for these. Therefore the serial number must be manually typed into the Legend after the rest of the test case production, and should match the current CVS revision number.

6 Test Review Guidelines

6.1 Motivation for the Guidelines

There are two reasons that these guidelines are provided:

  1. They comprise a brief synopsis of the most critical details, from the previous chapter, for test developers;
  2. It is an operational principle of this project that no tests will be published until they have been reviewed by someone other than the test author.

6.2 Overall Chapter Content

Considering the chapter and its set of test cases as a whole, assess:

6.3 Individual Test Case Content

Looking at the SVG test case instances individually, evaluate:

Note. About "Self-documenting (i.e., in rendered content)," the style of in-picture animation should be to describe what is being tested, but should not describe visual effect (the latter may be done in Operator Script). So for example,

6.4 XML Test Case Description

Specifically, this refers to the Operator Script, which is to be evaluated for:

In addition, evaluate the correctness of the navigation links:

6.5 PNG Reference Image

Evaluate at least these criteria for the PNG reference image:

7 Glossary

7.1 Basic Effectivity Test (BE)

A test which lightly exercises one of the basic functionalities of the SVG specification. Collectively, the BE tests of an SVG functional area (chapter) give a complete but lightweight examination of the major functional aspects of the chapter, without dwelling on fine detail or probing exhaustively. BE tests are intended to simply establish that a viewer has implemented the functional capability.

7.2 Demo Test (DM)

A test which is intended to show the capabilities of the SVG specification, or a functional area thereof, but is otherwise not necessarily tied to any particular set of testable assertions from the SVG specification.

7.3 Detailed Test (DT)

Also called drill-down tests. DT tests probe for exact, complete, and correct conformance to the most detailed specifications and requirements of SVG. Collectively, the set of DT tests is equivalent to the set of testable assertions about the SVG specification.

7.4 Drill-down Test

See Detailed Test.

7.5 Error Test (ER)

An Error Test probes the error response of viewers, especially for those cases where the SVG specification describes particular error conditions and prescribes viewer error behavior.

7.6 Semantic Requirement (SR)

See Test Requirement.

7.7 Test Assertion (TA)

See Test Requirement.

7.8 Test Requirement (TR)

A testable assertion which is extracted from a standard specification.  Also called Semantic Requirement (SR) or Test Assertion (TA) in some literature.  Example.  "Non-positive radius is an error condition."

7.9 Test Purpose (TP)

A reformulation of a Test Requirement (or, one or more TRs) as a testing directive.  Example.  "Verify that radius is positive" would be a Test Purpose for validating SVG file instances, and "Verify that interpreter treats non-positive radius as an error condition" would be a TP for interpreter or viewer testing.

7.10 Test Case (TC)

As used in this project, an executable unit of the material in the test suite which implements one or more Test Purposes (hence verifies one or more Test Requirements).  Example.  An SVG test file which contains an elliptical arc element with a negative radius.  In practice (and abstractly), the relationship of TRs to TCs is many-to-many.

7.11 Traceability

The ability, in a test suite, to trace a Test Case back to the applicable Test Requirement(s) in the standard specification.

8 Bibliography

  1. W3C Scalable Vector Graphics (SVG) 1.0 Specification, 4 September 2001, Recommendation, http://www.w3.org/TR/SVG/.
  2. Documents related to the OASIS XSLT/Xpath conformance work may be found at (http://www.oasis-open.org/committees/xslt/).
  3. Computer Graphics Metafile Conformance Testing -- Full Conformance Testing for CGM:1992/Amd.1 Model Profile, NIST SBIR Final Report, January 1995.
  4. WebCGM 1.0 Profile, W3C Recommendation, http://www.w3.org/TR/REC-WebCGM.
  5. NIST CGM Test Suite for ATA Profile,  http://www.itl.nist.gov/div897/ctg/graphics/cgm_form.htm
  6. NIST DOM Test Suite, http://xw2k.sdct.itl.nist.gov/xml/dom-test-suite.html
  7. W3C CSS Test Suite, http://www.w3.org/Style/css/Test/
  8. OASIS/NIST XML Test Suite, http://www.oasis-open.org/committees/xmltest/testsuite.htm
  9. "Interactive Conformance Testing for VRML", http://www.itl.nist.gov/div897/ctg/vrml/design.html
  10. ISO 10641, section 5, "Conformance testing requirements within graphics standards".
  11. "SVG Conformance Suite -- Preliminary Design for Path", Lofton Henderson, 12 Nov. 1999.
  12. "SVG Conformance Test Suite Design -- Outline of Proposed Approach", Lofton Henderson, 12 November 1999.
  13. "CVS Repository for SVG Test Suite", Lofton Henderson, 02 November 2000.