SVG Conformance Test Suite --
Test Builder's Manual

Revision:  2.00

Date: May 18, 2000

By:  Lofton Henderson

Document Versions

1st WG release:  Standing project document.  Based on Cupertino docs, later conformance decisions, additional research, and core information from the path document.
1.01  2000-04-27:  Releasable HTML version.
2.00  2000-05-18:  Incorporate cumulative experience from BE suite construction.


This document is intended as a permanent reference document and user manual for designers of, and contributors to, the SVG Conformance Test Suite.  As well as being the repository for currently agreed methods, templates, procedures, and techniques, it also contains the SVG Test Suite Issues Log.


Some parts of this document are still in progress. In particular, the synopses of technical content of other conformance projects, in the chapter "Related Conformance Work", have not yet been detailed.

Table of Contents

1 Overview

1.1 Goals

The ultimate goal is a comprehensive and detailed conformance test suite for SVG 1.0. 

Amongst the multiple purposes which such a suite can serve, we identify the most important as:  a publicly available suite that helps implementation builders achieve interoperability.

1.2 Scope

There are at least three areas in which conformance testing is applicable:

  1. conformance of SVG document instances;
  2. conformance of SVG generators;
  3. conformance of SVG interpreters and viewers.

This project's scope is limited to the third -- a conformance test suite for interpreters and viewers.

1.3 What Kind of Suite?

At the Cupertino SVG-WG meeting (11/99), the question of what sort of suite we are building was discussed and resolved.  The options included:

  1. An SVG demo suite;
  2. A QA test suite for product developers;
  3. A publicly available conformance suite and interoperability aid;
  4. A certification test suite for a rigorous certification service.

The SVG WG decided at Cupertino:  we are building #3, a publicly available suite for such uses as informal conformance analysis and developer self-testing.

Presently, there are no plans for an SVG "certification" service, therefore there is no need for #4 -- W3C doesn't currently carry out certification testing, nor is any currently proposed by other entities.

1.4 Why Does It Matter?

What is the difference?  In what ways would the suite differ depending on its purpose?

Some identified differences include:

While the formality and rigor of a certification suite might not be needed, the SVG conformance suite will (eventually) embody "traceability" (see below) -- what specification in the standard justifies a given test?

1.5 Milestones & Schedules

The SVG WG, as part of the extension of its charter, has agreed to two milestones:

A timetable for test suite construction, in the context of other WG activities, looks like:

1.6 Roadmap to this Document 

For those interested in a quick user guide for test construction, you can skip directly to "How to Write Tests". The rest of this document provides background, explanation, and motivation for the methods used.

1.7 Document Status

The material in Section 2, especially the brief synopsis of the nature and content of each existing suite, is incomplete.

Section 3 is substantially complete.

The material in section 4 is complete for Static Rendering, but Dynamic has not been addressed.  A couple of topics like overall test-suite linking structure are still incomplete.

Section 5 -- How to Do It -- is substantially complete for Static Rendering "how-to", including incorporation of experience from several months' work on BE tests. 

Section 6, Glossary, is mostly a placeholder so far.

Section 7, Issues Log, ditto.

2 Related Conformance Work

2.1 Motivation

There is now a substantial body of test suite experience and material, for several different standards:

These suites and the experiences of building them are useful to the SVG conformance effort, and to contributors of test materials, in a number of ways:

The "level of effort" data should be particularly interesting to the SVG group.

2.2 Applicability of Previous Conformance Work

2.2.1 Previous CGM Work

See [5] and [3].

The applicability of CGM test suite experience to at least the static rendering subset of SVG is obvious. 

CGM and SVG differ in other ways: 

2.2.2 Previous VRML Work

See [9].

[Analysis of the potential applicability of the VRML suite and methods will be written for a future document release.]

2.2.3 CSS

See [7].

For the application of visual properties (graphical attributes), SVG borrows heavily from CSS2.  The syntax for all such properties (plus some others) is patterned on CSS, and a number of CSS2 properties (esp. font properties) are adopted directly by SVG.  The full font selection and matching machinery of CSS2 is required in conforming SVG processors (interpreters and viewers).

W3C has made a test suite for CSS1. The methods of the CSS1 test suite should clearly be applicable to some aspects of the SVG suite.  Some actual CSS test materials might be (almost) directly usable.

2.2.4 DOM

See [6].

We'll have to test the SVG DOM.  NIST's DOM suite for the Javascript binding has been released, for both XML and HTML.  The Java binding is in progress.  Methods and techniques should be applicable, and maybe some materials can be borrowed with minimal modification.

2.2.5 XML

See [8].

A conforming SVG interpreter (hence also a conforming SVG viewer) "must be able to parse and process any XML constructs defined in [XML10] and [XML-NS]."  A conforming SVG viewer therefore incorporates XML-suite conformance, by reference.

2.3 Size and Effort of Other Suites

2.3.1 CGM V1

Size:  about 200 simple, atomic tests.

Level of effort:   difficult to determine, but probably about 1 - 1.5 FTE, external contractor plus NIST staff.

2.3.2 CGM V3

Size:  270 tests (70 new, plus extensive redesign and revision of the existing 200+ V1 tests).

Level of effort:   difficult to determine, but probably about 1 - 2 FTE, external contractor plus NIST staff.

2.3.3 VRML

Size:  Estimated about 1,000 tests.

Level of effort:  3 people full time for about 2 years at NIST.  There was a steep learning curve.  NIST released the first tests after 3 months, and thereafter released tests as soon as a node was completed.

2.3.4 XML

Size of 1st release:  1,000 XML tests -- a DTD plus 4,000 lines of XML code and 400 lines of XSL.

Level of effort:  1.5 FTE -- 2 people for approximately 9 months.  One person designed the test harness and some of the tests; the other designed some tests and spent considerable time validating contributions from others and filling in the holes.

2.3.5 DOM

Size of 1st release (Ecmascript with XML):  800 tests, 30,000 lines of code (the Fundamental and Extended tests only).

Level of Effort:  1.5 FTE -- one person half time for 9 months, who built the test harness (following much of what was done for VRML and XML); plus another person full time for about 9 months; plus a third person full time for about 4 months.

2.3.6 CSS

Unknown. To be researched for inclusion in future version of this document.

2.4 Synopsis of Design & Content of Other Suites

2.4.1 CGM

The CGM suite ([5]) consists of 269 test cases, each of which has three components:

  1. Test file instance;
  2. Reference Pictures, at least one per test case (originally color hardcopy, now GIF files);
  3. Operator Scripts, one per test case (including "Verdict Criteria").

There is no interactive harness or driver, and hence no navigation buttons to assist movement through the suite.  The operator has to invoke the viewer, access the Reference Picture, and access the Operator Script manually.

2.4.2 VRML

[Synopsis of content and structure of the VRML suite will be written for future document release.]

2.4.3 XML

[Synopsis of content and structure of the XML suite will be written for future document release.]

2.4.4 DOM

[Synopsis of content and structure of the DOM suite will be written for future document release.]   

[Javascript versions for XML and HTML are finished.  The Java version is being built now.  NIST did the Javascript XML tests first, then the Javascript HTML tests, reusing some of the data files (each a large HTML or XML document on which the DOM tests operate).  The Test Assertions and XML document were then re-used for the Java DOM-XML tests.]

2.4.5 CSS

[Synopsis of content and structure of the CSS1 suite will be written for future document release.]

3 Graphics Testing Overview

3.1 Process of Building Test Suite Content

The following basic process applies to the construction of most of the test suites referenced above -- CGM, VRML, XML, and DOM, at least.  In overview:

In practice, these steps need not be overly formal.  In the case of a certification suite, formality is important.  For a conformance suite, it is less so.  In any case traceability (see below) is required.

Therefore, explicitly or implicitly, these steps are carried out -- the document is read exhaustively and decisions are made about what to test about each functionality, and how to realize these decisions in a set of test cases.

Section 4.2 of reference [9] contains an interesting discussion of TRs (which it calls SRs) and TCs -- the step of generating TPs is implicit in this reference, not explicitly treated as a formal step.

3.2 Principles Applicable to Test Suite Content

Some basic principles have been learned during the construction of previous test suites, applicable to both graphics suites and others:

  1. Simple or Atomic.  Each test purpose should be as simple as possible and narrowly focused on an atomic (simple) functionality.  Example:  choose one attribute or property and exercise it through a range of values, while holding other variables constant.  This approach has several advantages.
  2. Reducing the number of tests.  Without sacrificing the principle of atomic testing, the number of test instances (files) can be reduced by having a single instance combine multiple related test purposes.  Example:  for an attribute or property with a half-dozen different enumerated values, test each of the values in a "sub-test" of a single test case (test file instance).  Counter-example (poor practice):  the first CGM test suite sometimes had each instance test one value of an attribute, so that almost 30 test file instances were needed to test the horizontal and vertical text alignment values.
  3. Progressive.  For any functionality, the tests should be organized from easy and general to harder and more specific.  This avoids wasting time and resources if the implementation is completely incapable in a functional area.
  4. Comprehensive.  The detailed tests should methodically vary and test all values, plus boundary conditions and extreme conditions, of each parameter, attribute, or property.
  5. Self-documenting.  The tests should be self-documenting.  For example, a line-width test should have something like tick-marks drawn to delimit the correct width.  Graphical (displayed) text should explain and/or label the pieces of the picture.
  6. Key combinations.  #1 notwithstanding, there should be some number of tests which do vary more than one attribute or property at once.  Especially, thought should be given to how implementations might fail.  Example:  CGM has separate but equivalent attributes for lines and for edges of filled primitives.  Since it is common for implementations to use a single stroke generator for both purposes, it is sensible to test that the state of line/edge attributes is properly saved and restored after drawing an edge/line.
  7. Real examples.  The CGM V3 suite included some real-world graphic arts and technical pictures.
  8. Traceability.  A test must be traceable back to a statement or statements in the standard's specification.

4 SVG Conformance Suite Development

4.1 Modularization & Prioritization

The SVG specification divides fairly cleanly into semi-independent functional modules.  Test materials will be developed and released progressively, subject to the constraint that we have agreed to make an entire breadth-first, basic effectivity (BE) test suite release first, and a drill-down (DT) release subsequent to that.

4.1.1 Static versus Dynamic

The major natural division in the specification is between static rendering functionality and dynamic functionality (animation, scripting, and the DOM).

Static rendering has first priority for development and release, although work on dynamic can proceed in parallel, resources permitting.

4.1.2 Progressive Ordering for Static Rendering

Functionality will be ordered in the suite, for purposes of execution and navigation through the suite, from most basic to most complex -- implementations should encounter the simplest and most basic tests first, before being subjected to progressively more complex and advanced functionality.

Given our intent to make progressive releases of test suite modules, it makes sense to generally follow this ordering for the building of the materials, at least for the completion of the DT and ER tests.

The SVG WG agreed at Cupertino to divide up the functionality by chapter.  For static rendering, the following chapters are candidates for testing (based on the document organization of the 3 March 2000 public version):

Building and executing tests in chapter order does not appear to always lead to a basic-to-complex ordering. 

From most basic (or fundamental -- basic does not necessarily mean simple), to most advanced, a rough functional ordering might be:

The issue of final test suite ordering and organization is not yet completely resolved.

4.1.3 Dynamic Module Prioritization


4.2 Test Case Materials

4.2.1 Static Rendering Materials

Each Test Case in the static rendering module will contain three principal components:

  1. .svg file -- the SVG instance, designed to test one or more Test Purposes;
  2. .png file -- a raster reference image showing a correct rendering of the SVG instance;
  3. .ops file -- the operator script, which comprises a few sentences describing what is being tested, what the results should be, verdict criteria for pass/fail, and allowable deviations from the reference image.

#1 and #2 will be file instances.  #3 could be a file instance, but is now handled as the content of a tag in a simple XML grammar which generates the HTML navigation page.

Note.  In the earliest test suites, for CGM, the Operator Script was a rigid and rote checklist used by (non-expert) certification testing technicians to score each test.  In more recent conformance work it has evolved to be more informative about the test's purpose and what to look for.  It also can (and should) function to improve the accessibility of the test suite.

Details and examples of writing an Operator Script are given in the next chapter.

Other supporting material will be generated for each test case:

Note. The traceability links may be postponed until the SVG spec stabilizes -- probably at least the PR version.

See below, sections [4.4.2] and [4.4.4], for further details about the test harness(es).

4.2.2 Dynamic Module Materials

Most of the SR materials will be applicable to most dynamic tests.  However, there may be cases (e.g., some DOM) which do not have graphical output, and there will be some which could (but need not necessarily) have animated graphical "reference images".

This material will be further refined as more of the dynamic functionalities' tests are developed.

4.3 Types of Tests

Four generic test categories have been decided.  These are equally applicable to static rendering and dynamic test modules:

For BE tests, an attentive reading of the applicable spec sections is required, but an exhaustive TR enumeration is not.  The generic BE test purpose is:  correct basic implementation, including major variations within the functional area.

For DT and ER tests, an exhaustive TR extraction from the SVG spec will be a part of the process, and test purposes will be derived from the TRs (see next chapter). 

Following is a list of generic test purposes for DT tests (ER also?):

The Generic Test Purposes provide a high-level checklist for the sorts of test cases which should result from the analysis and test design of a functional area.  If any major categories are not represented, it may indicate that some implicit or explicit requirements have been missed.

4.4 Packaging, Organization, and Presentation

4.4.1 Standalone and Browser Requirement

These requirements are agreed, at least for the static rendering module:

  1. the tests must be usable for a standalone viewer,
  2. the suite should be conveniently navigable and executable from a browser.

4.4.2 Test Harnesses

The VRML suite, as well as the CSS, XML, and DOM suites, employs an interactive test harness (HTML page), which:

Unlike the VRML suite, which presents the test rendering and the reference image side-by-side in one window, the SVG WG has decided on a two-window approach, for standard release harnesses:

So in any case, a browser will have to be available for convenient viewing of all of the materials, but it is not necessary that a viewer-under-test be a browser plug-in. 

Note. A single side-by-side (PNG plus rendered SVG) HTML harness is a simple variant, and can be generated using the harness-generation tools which have been developed.

4.4.3 Test Naming Convention

A strong naming convention for the materials is useful, both for management of the test suite repository and for requirement #2 above.

Test names will be brief but informative.  The name design is:  chapter-focus-type-num.  'Type-num' is a concatenation of the test type -- BE, DT, ER, or DM -- and its ordinal in the sequence of such tests -- 01, 02, ...

Examples:  path-lines-BE-01, shapes-rect-DT-04, styling-fontProp-DM-02. 
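
The chapter-focus-type-num convention lends itself to mechanical checking.  The following is a hypothetical helper (not part of any released suite tooling), sketched under the assumption that chapter and focus are camel-case words and the ordinal is two digits, as described above:

```python
import re

# Pattern for the chapter-focus-type-num naming convention:
# chapter and focus are camelCase words (lower-case first letter),
# type is one of BE, DT, ER, DM, and num is a two-digit ordinal.
NAME_PATTERN = re.compile(
    r"^(?P<chapter>[a-z][a-zA-Z]*)-"
    r"(?P<focus>[a-z][a-zA-Z]*)-"
    r"(?P<type>BE|DT|ER|DM)-"
    r"(?P<num>\d{2})$"
)

def is_valid_test_name(name: str) -> bool:
    """Return True if `name` follows the chapter-focus-type-num convention."""
    return NAME_PATTERN.match(name) is not None
```

For example, is_valid_test_name("path-lines-BE-01") accepts the name, while a name lacking a focus component is rejected.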

4.4.4 Harness Details

The test harness (static rendering, at least), will be an HTML page which identifies the test, invokes the PNG reference image, and presents the operator script. 

Navigation buttons will be provided to go back to a table of contents (and maybe an index), to navigate laterally through BE tests, to drill down to "child" tests (from the BE level to the DT, ER, and DM tests), and to go back up to "parent" tests from the lower levels.

Per a Cupertino decision and subsequent discussions, the principal HTML harness will present the operator script and the PNG reference image, but will not assume a browser-invokable SVG viewer -- the test administrator will have to get the SVG image into another window, or onto a printer, or whatever is appropriate.  To make this easier, a second, all-SVG navigation harness is provided (with navigation capabilities exactly parallel to those of the PNG-plus-Operator-Script harness).

With the current method for producing harness(es) -- XSLT stylesheet applied to instances of a simple XML grammar which describes each test case -- it is not difficult to produce multiple harness versions, including other possible variants (e.g., PNG plus rendered SVG plus operator script, for browser-plugin SVG viewers).
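
The XML-description-to-harness transformation can be illustrated with a small sketch.  The real generator is an XSLT stylesheet, and the element and attribute names below (testcase, name, operator-script) are hypothetical stand-ins for the actual grammar, used here only to show the shape of the transformation:

```python
import xml.etree.ElementTree as ET

# A hypothetical test-description instance (element names are illustrative,
# not the actual grammar used by the suite tooling).
DESCRIPTION = """\
<testcase name="path-lines-BE-01">
  <operator-script>Verify that straight-line path commands render
  as shown in the reference image.</operator-script>
</testcase>
"""

def make_harness_page(description_xml: str) -> str:
    """Build a minimal HTML harness page (reference image plus operator
    script) from one test-description instance."""
    tc = ET.fromstring(description_xml)
    name = tc.get("name")
    script = tc.findtext("operator-script").strip()
    return (
        f"<html><head><title>{name}</title></head><body>\n"
        f"<h1>{name}</h1>\n"
        f'<img src="{name}.png" alt="reference image"/>\n'
        f"<p>{script}</p>\n"
        f"</body></html>"
    )
```

Because the harness is derived mechanically from the description, variant harnesses (e.g., PNG plus rendered SVG) are just additional output templates over the same input.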

The first generation design of simple XML grammar for describing tests, and the XSLT stylesheet for producing the HTML page, have been released.  A "manual" SVG template has been released as well -- see next chapter for details.

Work is underway on "second generation" harnesses and templates to:

4.4.5 SVG Template Details

Each test, at least for the static rendering module, will be put into a standard template.  As just discussed, this is presently a manual process -- the test writer puts the test body content into the template.

The template incorporates these features:

The serial number is a method for ensuring that the PNG reference image, the SVG instance, and the SVG rendering are all current and all in agreement.  To be useful, it must unfailingly be incremented whenever any change is made to the SVG file -- in fact, whenever the SVG file is saved.  A way to automate this is being sought.
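
One possible automation, sketched here purely as an illustration:  if the template stored its serial number in a comment of the (hypothetical) form <!--serial: N-->, a save hook could bump it mechanically.

```python
import re

# Assumption (hypothetical): the template keeps its serial number in a
# comment of the form <!--serial: N-->.  A save hook would call bump_serial
# on the SVG source every time the file is written out.
SERIAL_RE = re.compile(r"<!--serial:\s*(\d+)-->")

def bump_serial(svg_source: str) -> str:
    """Return the SVG source with its serial-number comment incremented."""
    def repl(match: re.Match) -> str:
        return f"<!--serial: {int(match.group(1)) + 1}-->"
    return SERIAL_RE.sub(repl, svg_source, count=1)
```

Any equivalent mechanism (editor macro, version-control hook) would serve, so long as the increment is unfailing.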

The <text> elements of the Legend are as simple as possible -- ideally, defaulting all attributes and properties except size and position. [Note.  The current template does have font-family selection -- it is an issue whether this should be eliminated, or changed to a generic specification like "sans-serif", or changed to a different font.]

There may be some test purposes (e.g., if we wanted an "empty.svg" test) which require no graphical content; in those cases (only) the Legend may be omitted.

4.4.6 Linking Order

The overall linking structure is decided -- TOC (and possibly index), BE layer throughout the suite (next/previous), DT drill down from BE (child), BE pop up from DT (parent), etc.  Some details are to be worked out.  Does each BE point down to a different DT "stack"?  Or do all BEs in a chapter or chapter-focus point down to the first DT in that area?  The latter has been tentatively decided (the structure of the suite is not likely to be regular enough to make the former widely practical).

4.5 Repository

Processes and procedures are still being designed for: 

For now, the test suite editor is the repository. Once a test case is submitted, it is "owned" by the repository. All WG and public releases are from the repository, and all maintenance changes to test cases are applied to the latest repository version.

In particular, the editor releases test cases for maintenance work, and ensures the integrity of the versioning information (serial number). With pending second generation tools, adherence of test cases to the exact formatting (as opposed to functional) details of the template conventions ought to be automatable, and not a significant concern to test contributors.

4.6 Sources for Tests

4.6.1 SVG Specification

The examples which already exist in the SVG Specification have proved to be an excellent basis for some of the "Basic Effectivity" tests -- simple tests which minimally illustrate an SVG functionality.

4.6.2 Vendor In-house work

Several of the SVG WG members have in-house development efforts.  Materials ranging from basic path and shape graphics, to filter effects tests, to DOM and animation functionality are known to exist.  Though some adaptation and integration into the SVG Test Suite framework is required, these existing QA materials have already proved to be a valuable resource.

To be done:  inventory what is available within the WG.

4.6.3 CGM Translation or Transcription

For graphical output functionality, there is substantial commonality between CGM and SVG (see comparison table in [12]).  The CGM Test Suite (for ATA, release 3.0, see [5]) has 269 tests, conforming to the test suite principles articulated above. 

There are two interesting possibilities to leverage this CGM design and implementation work:

Note that NIST-certified CGM viewers exist, as well as certified printer drivers and certified rasterizers.

4.6.4 Outside Contributions

Contributions from outside the WG will be solicited, once these documents and the template materials are stabilized.  Minimal processing for contributions will include:

4.6.5 Construct New SVG Tests

There is only one way to achieve comprehensive coverage of Test Requirements: build new tests, carefully designed and targeted at specific TR(s).

There are two ways to approach this:

Experience with the first has been:  the output drivers usually don't have precise enough control of the individual pattern of elements, and manual touchup is almost always required.

4.6.6 An Interesting Source of Test Purposes

From CGM experience, an interesting source of Test Purposes, and possibly even of test materials, is instances which have:

These indicate trouble areas, where implementers are likely to misinterpret or incorrectly implement the specification.

4.6.7 Other X*L Tests

Beyond the static rendering module, we should be able to leverage methodology, or tests, or both, from such resources as the DOM test suite [6].  This we intend to do for the DOM tests.  CSS [7] should even be applicable to static rendering.

5 How to Write Tests

5.1 Overview

This is meant to be a cookbook for writing the test cases for a functional module. Functional modules will generally correspond to chapters in the SVG spec.  These techniques were prototyped with the Path chapter, and have been applied in the generation of BE-level tests for the whole spec.

Note.  When we did the Path chapter, an exhaustive TR extraction was one of the first actions.  This is not necessary prior to BE test case specification, and the TR analysis is postponed until after the BE test case generation in what follows.

5.2 Outline of BE Test Generation

Implicitly or explicitly, formally or informally, in this order or another which you prefer, you will go through these steps for writing BE tests:

  1. Read the document, all sections related to your functional topic.
  2. Enumerate a list of the major pieces of functionality.
  3. Decide on the focus sub-sections -- the "focus" sections -- for your chapter.
  4. Figure out a set of test cases which covers the functionality list.
  5. Derive a list of test purposes (at least implicitly).
  6. Combine them into a convenient number of test cases.
  7. Write the BE test case specification for each of the test cases.
  8. For each BE test case instance, fill in the required information in the SVG template.
  9. Write the SVG code to implement the BE test case specifications, put it in the template, refine till you're happy with it.
  10. Write the XML description file, especially the Operator Script part.
  11. For each BE test case instance, generate the HTML harness (for your own use only).
  12. Generate the PNG reference image.
  13. Put it all together and tune it till you're happy with it.
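
The file set produced by steps 8-12 above can be enumerated from the test name alone.  A small helper (not part of any released tooling; the description-file extension is an assumption) might look like:

```python
# Map a test case name to its companion files: the SVG instance (step 9),
# the XML description (step 10, extension assumed), the generated HTML
# harness page (step 11), and the PNG reference image (step 12).
def companion_files(test_name: str) -> dict:
    return {
        "svg": f"{test_name}.svg",
        "xml": f"{test_name}.xml",
        "html": f"{test_name}.html",
        "png": f"{test_name}.png",
    }
```

Keeping the common root name across all four files is what makes the naming convention (section 4.4.3) pay off for repository management.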

5.3 Outline of DT Test Generation

DT test generation involves more rigor and more systematic methodology than BE test generation.   The basic steps for DT tests are similar, with some differences at the beginning:

  1. Read the document exhaustively, critically looking for any and all testable assertions.
  2. Enumerate a list of all Test Requirements (TRs).
  3. Design a set of Test Cases which cover all of the TRs:
  4. Derive a set of individual Test Purposes which covers all TRs, with attention to the list of Generic Test Purposes.
  5. Write the DT test case specification for each of the test cases;

The rest follows as for BE tests.  The difference between the DT outline and the BE outline is in the rigor and thoroughness of the first steps, which are deciding "what to test".

5.4 Chapter and Focus Sections

The naming convention for SVG tests is:  chapter-focus-{BE|DT|ER|DM}-NN  (NN=01, 02, ...).

Decide how to subdivide your functional area ("chapter") into subsections -- "focus" sections.

Chapter is self explanatory -- a one-word, though possibly complex, name for your document chapter or functional area. Examples: path, coordSystem, clipMaskComposite.

Focus is simple to specify in some cases:  shapes-rect-...; path-lines-...; filters-feColorMatrix-...

In some cases, focus might not seem obvious.  However, the "focus" component of the name is always required.

"{BE|DT|ER|DM}" indicates that exactly one of the two-letter test type designators is to be used, BE or DT or ER or DM. The numbering runs consecutively throughout the chapter, it does not restart with each focus subsection.

Note. Use "camel case" for compound words for chapter and focus. The first letter is lower case, and the first letter of subsequent words is upper case. Examples: clipMaskComposite, radialGradient, textAnchor.

5.5 Designing a BE Test

Remember, at the BE level, we are only trying to verify that the interpreter or viewer has implemented the given functional area.  Therefore, we focus on the major functional pieces of the chapter.

There are no firm rules as to what comprises a BE test and what is DT -- sometimes it's a judgement call. However, here are some helpful guidelines:

Example.  The 'path' element has a "d" attribute, which can contain a number of commands:  Mm, Ll, Zz, Hh, Cc, ...  It also has the "nominalLength" attribute, which is unique to Path.  The BE tests for path give a basic exercise of these attributes and commands, including verification that the implementation understands the concept of subpath (holes and islands).

Note that there are also attributes like "style", "class", and "transform", which are functionalities widely applicable to other parts of SVG.  It is a judgement call, but in the path BE tests we avoid these details -- they will in fact be attacked in their own modules, such as Styling, Transform, etc.  (In other words, we don't deal with them extensively in the Path tests, not even in the DT tests.)

We adopted a principle for the Path tests:  just enough styling -- basic colors, etc. -- to make the tests visually less grim (than black-and-white, one-pixel-wide lines, no fill).  This principle is applicable to all tests -- do not unnecessarily clutter a test with functionalities unrelated to the functionality being tested.

When starting to design the BE tests, look at the existing very simple examples in the SVG spec for starters.

5.6 Writing BE Test Description -- Test Case Specification

The guidelines in this section apply to DT, ER, and DM tests, as well as BE tests.

The test description is only for you, the test designer, unless you're dividing the labor into specification versus production -- different people doing each -- which we actually did on the Path prototype.  Nevertheless, experience shows that the challenge of writing the description forces one to actually design the test in sufficient detail that it is then easy to write the SVG content to implement it.

It is the Conceptual Description of the test (see below) which is useful at this early stage (and it might be done after you have sketched/drawn the Test Case).

In the Path prototype, the following format proved useful in describing Test Cases:

  1. Name (<title>):   using our convention, e.g., "path-curves-BE-04".  This will be the root of associated filenames for the test case (TC).
  2. Test Purpose (<description>):  a (usually) one-sentence summary of what this TC addresses.
  3. Conceptual Description:   brief prose description (to TC instance builders) of the content of the TC.  Typically one-to-few paragraphs long.  See [11] for examples.
  4. Operator Script:  Summary of test purpose(s), how to execute test, what to look for in result, some specifics about what constitutes pass/fail, allowable variations from reference picture, etc. 
  5. Associated Test Requirements:  Formally or informally, this information should be available by the time the test case is designed (it is clearly applicable to DT and ER tests, less so for BE and DM tests).
  6. Document References:   Links or pointers from the test to the SVG spec.

See later section for writing the Operator Script (#4). 

The Associated Test Requirements (#5) and Document References (#6) are the crux of traceability.  You will have generated this information (implicitly, at least) by the time you have designed and written your test case. If you are writing test cases, generate and preserve this information (see, for example, next section and [11]). 

Note.  Traceability data are not currently integrated into the almost-complete BE suite.  This will have to be done (as links into the SVG spec) when both the spec and the test suite have stabilized significantly.

5.7 Extracting Test Requirements (TR) for DT Tests

A comprehensive list of Test Requirements is the critical first step in writing a comprehensive set of DT test cases for your functional area.

There is no magic rule.  See [9], section 4.2, for an interesting discussion of this.   See [11] for a fairly thorough (as far as it went -- it's incomplete) example for the Path chapter.

It starts with an intensive reading of each sentence of your chapter.  Weigh the question, phrase by phrase:  is there a testable assertion here?  If so, highlight it and add it to your list (however you want to manage it).  You will also need to read some other chapters, at least:

and maybe others like Accessibility.  Depending on your chapter, you might be led off into other sections of the document for requirements, or into other standards (e.g., CSS2).

Build up a list of TRs --  testable assertions -- which might be applicable to a viewer or interpreter.   In fact, don't discriminate by "applicable to" at this stage.  The SVG spec is written to describe a file format, and the semantic requirements associated with data elements are not always explicitly stated.  For example, from the Path chapter: 

"The command letter can be eliminated on subsequent commands if the same command is used multiple times in a row." 

This is a statement about allowable data configurations within 'path' elements.  But combined with the statement from Appendix G, "All SVG static rendering features ... must be supported and rendered ...", a testable assertion about viewers results (and a test purpose can be derived).
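To make that derived assertion concrete: the elided and explicit forms of the same path data must render identically. The sketch below is illustration only, not part of the suite's tooling; it handles just M/m/L/l with bare x,y pairs, a small subset of the real path grammar:

```python
import re

def expand_path(d: str) -> str:
    """Rewrite SVG path data so every coordinate pair carries an
    explicit command letter, illustrating the rule that 'the command
    letter can be eliminated on subsequent commands if the same
    command is used multiple times in a row'.

    Simplified sketch: only M/m/L/l with bare x,y pairs; the real
    path grammar also allows commas, exponents, and other commands.
    """
    tokens = re.findall(r"[MmLl]|-?\d+(?:\.\d+)?", d)
    out = []
    cmd = None
    i = 0
    while i < len(tokens):
        if tokens[i] in "MmLl":
            cmd = tokens[i]
            i += 1
        elif cmd is None:
            raise ValueError("path data must start with a command letter")
        else:
            out.append(f"{cmd} {tokens[i]} {tokens[i + 1]}")
            i += 2
            # per the spec, implicit repeats after a moveto are linetos
            cmd = {"M": "L", "m": "l"}.get(cmd, cmd)
    return " ".join(out)
```

For example, the elided "M 100 100 200 200 300 100" expands to "M 100 100 L 200 200 L 300 100"; a DT test case can render both forms side by side and require identical pictures.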

You'll be lucky to find any (or many) statements which jump out with a "shall" or "must" -- neither occurs even once in my (incomplete) list of 59 TRs for Path, in [11].

So for a first pass, pick up anything and everything which looks like it might lead to a Test Requirement on an SVG viewer.

For the Path chapter, I assembled a list of Test Requirements after the first intensive reading, during which I did markup on paper.  You can follow this, or do whatever is most agreeable for you. 

Each entry in the list contained a document reference, and the text of the TR -- I did cut-and-paste against the HTML version of the document for the latter (note:  there is some danger of volatility with this, at this stage of the document). 

Simple example:

Reference:  10.3.2.Mmtable

Statement:  (x,y)+ -- Mm must be followed by one or more x,y pairs.

Lengthier example:

Reference:  10.4.p1.b2

Statement:  nominalLength= The distance measurement (A) for the given 'path' element computed at authoring time.  The SVG user agent should compute its own distance measurement (B). The SVG user agent should then scale all distance-along-a-curve computations by A divided by B. 

I used the following ad-hoc notation for referencing Test Requirements: pN.bN.sN, where  pN = paragraph N, bN = bullet N, sN = sentence N.  Plus unambiguous constructions like "Mmtable" to point at tables and table entries.
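Should the TR list ever be maintained by script, this notation is simple enough to parse mechanically. The following sketch is hypothetical -- the function name and result layout are my own, for illustration:

```python
def parse_tr_ref(ref: str):
    """Split an ad-hoc TR reference like '10.4.p1.b2' or
    '10.3.2.Mmtable' into (section, locators).

    The pN/bN/sN codes mean paragraph/bullet/sentence; any other
    trailing component is treated as a named anchor (e.g. 'Mmtable').
    Hypothetical helper, for illustration only.
    """
    names = {"p": "paragraph", "b": "bullet", "s": "sentence"}
    section = []
    locators = {}
    for part in ref.split("."):
        if part.isdigit() and not locators:
            section.append(part)          # still inside the section number
        elif part[:1] in names and part[1:].isdigit():
            locators[names[part[0]]] = int(part[1:])
        else:
            locators["anchor"] = part     # e.g. 'Mmtable'
    return ".".join(section), locators
```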

See [11] for the example of the complete listing of those Test Requirements (TR) which pertain to geometry and syntax of Path (extracted from the 19990914 SVG spec.)

5.8 Designing DT Cases

We want to turn the TR list into a number of Test Cases which exhaustively cover the requirements in the TR list.

The first step, implicitly or explicitly, is derivation of a set of Test Purposes associated with the TR list.  Example:

TR:  "Mm must be followed by one or more x,y pairs"

TP:  "Verify that interpreters correctly handle Mm with one (x,y) pair, or several pairs, or many pairs."

Note that, because of the "Error Processing" requirement about Path, this suggests another TP:  "Verify that interpreters respond correctly to invalid Mm data combinations."  (Such as "M x L x y", or "M L x y", or "M x y x y x Z".)
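The invalid combinations above all violate one atomic condition: each M/m must be followed by an even, nonzero count of coordinates before the next command letter. A sketch of that single check (illustrative only -- not a full path-grammar validator):

```python
import re

def moveto_args_ok(d: str) -> bool:
    """One atomic check of the TR 'Mm must be followed by one or more
    x,y pairs': every M/m in the path data must be followed by an
    even, nonzero number of coordinates before the next command
    letter.  A sketch, not a full path-grammar validator."""
    for m in re.finditer(r"[Mm]([^A-Za-z]*)", d):
        coords = re.findall(r"-?\d+(?:\.\d+)?", m.group(1))
        if len(coords) == 0 or len(coords) % 2 != 0:
            return False
    return True
```

An ER test case pairs path data failing such a check with the spec's prescribed error behavior; the Operator Script then tells the operator what response to expect.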

This leads to another point.  The general conformance requirements of Appendix G imply a list of "Generic Test Purposes" (see [4.3]):

Keep this list at hand while you are looking at your TR list and deciding what to test, i.e., deciding Test Purposes.  (This list might be extended -- suggestions welcome.)

I have been a bit informal with my "TP list", but I still keep track of what I have covered on the TR list and the generic TP list, so that I know when I'm done.  See, for example, [11], the section, "Detailed Drill-down Tests (DT) for Line Commands."

The final principle here, once you have an idea of what you're going to test and how, is to put a reasonable number of "atomic" tests together to make a Test Case.  This will reduce the number of individual test cases and increase their content density.  The guiding principles should be: 

5.9 Generating the Template and Harness

The first release has been made of a simple XML grammar for describing tests, and the XSLT stylesheet for producing the HTML page.  

These are the "first generation" production tools. If you process the XML instances with CreateHTMLHarness.xslt, you will get HTML pages which pull together and present the PNG reference images, the operator scripts, and navigation buttons for the suite. If you process the XML instances with the companion SVG-harness stylesheet, you will get a parallel set of SVG pages with SVG elements for navigation buttons, and inclusion by reference of the test case SVG instances themselves.

Note. A simple modification of CreateHTMLHarness will allow you to use the (non-standard) HTML 'embed' tag to display the SVG side-by-side with the PNG, if an SVG plugin is available for your browser.

I have been using the XT tool of James Clark (get it from his Web site).  You can use whatever tool you prefer, but a caveat -- I have been warned that different XSLT processors may give inconsistent results.  This is not to say that XT is correct, but for now it is my "reference tool".

A "manual" SVG template has been released as well.

The scheme is still being developed, and will eventually lead to automatic generation of the SVG skeleton file, with some of the details filled in (see next section).

5.10 Using the Template to Write Test Cases

Starting with the static-output-template:

Use good and thorough comments in the SVG content itself to describe what everything is doing.

There may be some test purposes (e.g., the structure-emptySVG-BE-01.svg test) which require no graphical content, in which case (only) the Legend may be omitted.

See below about the Serial Number.

Note. As described earlier, work is underway on a "Second Generation" of tools, which should allow developers to ignore these template details, as long as the test case content body is submitted in a correct SVG with correct coordinate space, etc. But for now, write into the template.

5.11 Writing an Operator Script

The Operator Script comprises a few sentences and is written as one or more paragraphs of the XML instance (see earlier section) for the test case. 

Once again, there are no firm rules.  However, the Operator Script can address any or all of:

  1. describing what is being tested, i.e., a summary of the test purpose(s);
  2. what the results should be;
  3. verdict criteria for pass/fail;
  4. allowable deviations from the reference image;
  5. how to execute test, if there are any special instructions;
  6. optionality of features;
  7. prerequisites (other functionality used in the test);
  8. accessibility aid.

#2 could conceivably be:  "picture should look like the PNG".  However, some specifics could be pointed out, such as (for an accuracy test):  "All lines should pass through the cross-hairs", or "Vertexes should be at the locations of the markers".

#3 and #4 go together.   If there are allowable deviations of the rendered SVG from the PNG, it should be stated (e.g., maybe a style falls back to the default style sheet, which can vary).

About #6, optionality.  If a test is exploring an optional or recommended feature, that should be clearly indicated right at the beginning of the operator script. 

#7 refers to a brief description of SVG functionalities, other than the one under test, which are used in the test file instance.

In addition to the other purposes of the Operator Script, a well-detailed Operator Script can be useful as an aid to accessibility (#8).

5.12 Generating the PNG

This section is likely to develop and evolve further.  It should ultimately be a repository of successful methods for getting PNG reference images, which have been discovered by you, the test developers.

When you develop a test case, you submit to the repository: the PNG reference image; and, a description of how you generated the PNG.

5.12.1 Screen Capture SVG Rendering & Postprocess

By far, the most common method of generating the PNG reference image is:

  1. Produce SVG.
  2. Load the SVG into an SVG Viewer, such as the Adobe plug-in, the IBM SVGView, etc.
  3. Do a screen capture (e.g., Alt-PrtSc to clipboard in Windows);
  4. Paste the screen capture into a tool such as Adobe ImageReady, Corel Photopaint, Macromedia Fireworks(?).
  5. Edit to trim away non-picture part, resulting in 450x450 pixel image.
  6. Depending on the capabilities of such tool:
    • save as PNG (e.g., you can do this from ImageReady).
    • or, save in some portable native format of the image editor and load that into a PNG generation tool (e.g., Adobe ImageReady 2.0, Corel Photopaint, Macromedia Fireworks), and save as PNG.

5.12.2 Screen Capture & Postprocess "Patch" File

The screen capture method relies on having an SVG viewer which can correctly display the picture. Often, in these early days of implementation development, this is not possible -- no SVG viewer can handle the test instance correctly.

However, it is often the case that there is another SVG file which is exactly equivalent (pictorially). These are called "patch" files, and are named, for example: structure-nestedSVG-BE-02-patch.svg (actual example).
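Since the convention is mechanical, a trivial helper (hypothetical, for illustration) can derive the companion filename from a test case name:

```python
def patch_name(test_file: str) -> str:
    """Derive the '-patch' companion filename for a test case,
    following the naming convention described above.
    Hypothetical helper, for illustration."""
    stem, dot, ext = test_file.rpartition(".")
    return f"{stem}-patch{dot}{ext}"
```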

Example: viewer doesn't correctly establish the origin of the user space for a simple test of nested SVG elements. Then compensate for the viewer error by changing the coordinates of the innermost graphical elements so that the viewer positions them (graphically) correctly.

Example: viewer defaults something wrong. Then compensate by explicitly setting that value (assuming that the viewer does this correctly).

Example: multi-stop gradients don't work, but two-stop are correct. Make a "-patch" file where the correct picture of a multi-stop gradient is built by stringing together multiple two-stop gradients.

Test contributors should submit any "-patch" files, along with the PNG files and "how to" description.

5.12.3 Handcode the Reference Picture in a Graphics Program

This technique has been used by some contributors, before the development of SVG viewers was very advanced:

  1. Produce SVG, e.g., by hand-coding.
  2. Draw out the picture in a graphics program (e.g., Adobe Illustrator, CorelDraw, or Macromedia Freehand) that shows how the SVG should look, and save as EPS (or another good exchange format for subsequent editing);
  3. Load the EPS into a PNG generation tool (Adobe ImageReady 2.0, Corel Photopaint, Macromedia Fireworks) and save as PNG.

With this method, you should be on guard against accuracy issues, as the SVG and the PNG result from independent and disjoint drawing pipelines. 

A variant of this is to use a graphics program to draw just the incorrect piece of the SVG rendering, and then cut-paste with a raster editor to get a complete and correct reference PNG.

5.12.4 CGM Transcoder, CGM Render, Screen Capture & Postprocess

This method has been postulated, but (to my knowledge) not yet used by anyone.  It would be equally applicable to formats other than CGM, when transcoders are available.

  1. Produce the test case/picture you want to draw as a Clear Text CGM (not as nice as SVG, but still can be hand-coded for simple stuff);
  2. If the transcoder of step #3 cannot handle clear text CGM, do encoding conversion (e.g., via CGMconvert) to Binary CGM;
  3. Convert to SVG with a CGM-to-SVG transcoder (e.g., one such is available from IBM);
  4. Take the body-content and put it into the SVG template, doing any hand editing that might be required;
  5. Render the CGM, screen capture and proceed as in previous methods.

Whether or not this will work for your test case depends on whether the result of step #3 is close enough to the SVG configuration you need for the test (and correct!), and more importantly, whether the hand-editing of #4 preserves the graphical accuracy of the rendered picture.

A variant, for simple test cases, would be to hand-code the desired SVG, hand-code (graphically) equivalent clear text CGM, and not use the transcoder at all -- just use hand-coded CGM as a route to a correct picture.

This would only be useful, in place of the previous methods, if none of the existing SVG viewers could get a correct rendering of the desired SVG test case, and it was too difficult to reproduce the desired drawing in a graphics program. 

5.12.5 Notes about PNG Generation

Two aspects of the PNG file generation should have your attention:

  1. file should be 450x450, matching the SVG test case.
  2. file should use PNG-8 for compactness, unless PNG-24 is required.

The only exceptions for #1 are test cases which specifically deviate from the canonical 450x450 coordinate space, in order to test viewer handling of different SVG address spaces. In this case, match the SVG test instance.

8-bit PNG should suffice for most tests -- 256 colors are possible. 24-bit PNG is likely to be required for tests such as:

While it might be possible to compute the number of colors required in some of these cases, and optimize with PNG-8, nevertheless it is strongly recommended to be conservative and use PNG-24, if there is any doubt.

For these same cases which require PNG-24, it has been discovered that attention must be given to the color mode of the monitor, if screen capture is being used. On PC Windows systems, for example, noticeable color banding has occurred on some tests when using "High Color" (16-bit) mode, and it disappears if "True Color" (24-bit) mode is used.
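Both requirements (#1 and #2 above) can be verified directly from the PNG file header: the IHDR chunk, always first in the file, records width, height, bit depth, and color type (color type 3 is palette-based PNG-8; color type 2 with bit depth 8 is PNG-24). A sketch, using only the Python standard library; make_png_prefix exists just to exercise the parser:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_info(data: bytes):
    """Parse the IHDR chunk (always the first chunk of a PNG) and
    return (width, height, bit_depth, color_type)."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # chunk layout: 4-byte length, 4-byte type, then 13 IHDR data bytes
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR" or length != 13:
        raise ValueError("malformed PNG: IHDR not first")
    return struct.unpack(">IIBB", data[16:26])

def make_png_prefix(width, height, bit_depth, color_type):
    """Build a signature + IHDR prefix, only for exercising png_info."""
    body = struct.pack(">IIBBBBB", width, height, bit_depth,
                       color_type, 0, 0, 0)
    crc = struct.pack(">I", zlib.crc32(b"IHDR" + body) & 0xFFFFFFFF)
    return PNG_SIGNATURE + struct.pack(">I", 13) + b"IHDR" + body + crc
```

A reviewer (or a repository check-in hook) could run png_info over each submitted reference image and flag anything that is not 450x450, or that uses a color type the test does not require.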

5.13 About the Serial Number

During CGM test suite development, a major annoyance and quality impact arose from not being able to keep synchronization between the reference image (the PNG files, for this SVG test suite), and the test case (SVG for us).  Changes made to the latter often weren't reflected by updating the former.  Worst of all, there was no way to detect the problem when it occurred.

A "serial number" in the SVG, which is encoded in graphical text, is the solution for this -- it is quick and easy to determine if the PNG corresponds to the SVG file and a given rendering of the SVG file (e.g., printout or screen image) ... assuming that the PNG was generated from the SVG!

The serial number is part of the Legend of the SVG file.  The only way to use it now is to manually maintain and update it, which is something of a drawback. Nevertheless, the version control benefits warrant the inconvenience.

Currently, the serial number is identical to a version number -- 1, 2, 3, ... Its maintenance is solely the responsibility of the test suite editor, which somewhat alleviates the error-prone manual aspects.

Ultimately, automating this is a better idea, e.g., the serial number changes whenever the test case is checked into the repository (and the PNG is then recreated from that version). Automation is being looked at for the second generation of test suite tools and methods.
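Until that automation arrives, the cross-check can at least be scripted. The "Serial #N" wording in this sketch is an assumption for illustration -- adapt the pattern to whatever the template's Legend actually encodes:

```python
import re

def svg_serial(svg_source: str):
    """Pull the serial number out of an SVG test case's Legend text.
    The 'Serial #N' convention here is illustrative -- match whatever
    the suite's template actually uses.  Returns None if no serial
    number is present."""
    m = re.search(r"Serial\s*#?\s*(\d+)", svg_source)
    return int(m.group(1)) if m else None
```

A reviewer can then compare the extracted number against the serial number visible in the rendered PNG (the latter still read by eye, since it is graphical text).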

6 Test Review Guidelines

6.1 Motivation for the Guidelines

There are two reasons that these guidelines are provided:

  1. They comprise a brief synopsis of the most critical details, from the previous chapter, for test developers;
  2. And, it is an operational principle of this project that no tests will be published until they have had review by someone other than the test author.

6.2 Overall Chapter Content

Considering the chapter and its set of test cases as a whole, assess:

6.3 Individual Test Case Content

Looking at the SVG test cases instances individually, evaluate:

Note. About "Self-documenting (i.e., in rendered content)": the style of in-picture annotation should be to describe what is being tested, but should not describe the visual effect (the latter may be done in the Operator Script). So for example,

6.4 XML Test Case Description

Specifically, this refers to the Operator Script, which is to be evaluated for:

6.5 PNG Reference Image

Evaluate at least these criteria for the PNG reference image:

7 Glossary

7.1 Basic Effectivity Test (BE)

A test which lightly exercises one of the basic functionalities of the SVG specification. Collectively, the BE tests of an SVG functional area (chapter) give a complete but lightweight examination of the major functional aspects of the chapter, without dwelling on fine detail or probing exhaustively. BE tests are intended to simply establish that a viewer has implemented the functional capability.

7.2 Demo Test (DM)

A test which is intended to show the capabilities of the SVG specification, or a functional area thereof, but is otherwise not necessarily tied to any particular set of testable assertions from the SVG specification.

7.3 Detailed Test (DT)

Also called drill-down tests. DT tests probe for exact, complete, and correct conformance to the most detailed specifications and requirements of SVG. Collectively, the set of DT tests is equivalent to the set of testable assertions about the SVG specification.

7.4 Drill-down Test

See Detailed Test.

7.5 Error Test (ER)

An Error Test probes the error response of viewers, especially for those cases where the SVG specification describes particular error conditions and prescribes viewer error behavior.

7.6 Semantic Requirement (SR)

See Test Requirement.

7.7 Test Assertion (TA)

See Test Requirement.

7.8 Test Requirement (TR)

A testable assertion which is extracted from a standard specification.  Also called Semantic Requirement (SR) or Test Assertion (TA) in some literature.  Example.  "Non-positive radius is an error condition."

7.9 Test Purpose (TP)

A reformulation of a Test Requirement (or, one or more TRs) as a testing directive.  Example.  "Verify that radius is positive" would be a Test Purpose for validating SVG file instances, and "Verify that interpreter treats non-positive radius as an error condition" would be a TP for interpreter or viewer testing.

7.10 Test Case (TC)

As used in this project, an executable unit of the material in the test suite which implements one or more Test Purposes (hence verifies one or more Test Requirements).  Example.  An SVG test file which contains an elliptical arc element with a negative radius.  In practice (and abstractly), the relationship of TRs to TCs is many-to-many.

7.11 Traceability

The ability, in a test suite, to trace a Test Case back to the applicable Test Requirement(s) in the standard specification.

8 Bibliography

  1. W3C Scalable Vector Graphics (SVG) 1.0 Specification, 14 September 1999 (WD-only) draft.
  2. W3C Scalable Vector Graphics (SVG) 1.0 Specification, 12 August 1999 (Last Call) draft.
  3. Computer Graphics Metafile Conformance Testing -- Full Conformance Testing for CGM:1992/Amd.1 Model Profile, NIST SBIR Final Report, January 1995.
  4. WebCGM 1.0 Profile, W3C Recommendation.
  5. NIST CGM Test Suite for ATA Profile.
  6. NIST DOM Test Suite.
  7. W3C CSS Test Suite.
  8. OASIS/NIST XML Test Suite.
  9. "Interactive Conformance Testing for VRML".
  10. ISO 10641, section 5, "Conformance testing requirements within graphics standards".
  11. "SVG Conformance Suite -- Preliminary Design for Path", Lofton Henderson, 12 Nov. 1999.
  12. "SVG Conformance Test Suite Design -- Outline of Proposed Approach", Lofton Henderson, 12 November 1999.