XML Processing Model Requirements and Use Cases

W3C Working Draft 11 April 2006

This version:
Latest version:
Editor:
Alex Milowski, Invited Expert <alex@milowski.com>

This document is also available in these non-normative formats: XML.


This document contains requirements for the development of an XML Processing Model and Language, which are intended to describe and specify the processing relationships between XML resources.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This First Public Working Draft has been produced by the W3C XML Processing Model Working Group as part of the XML Activity, following the procedures set out for the W3C Process. The goals of the XML Processing Model Working Group are discussed in its charter.

Comments on this document should be sent to the W3C mailing list public-xml-processing-model-comments@w3.org (archive).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. This document is informative only. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Table of Contents

1 Introduction
2 Terminology
3 Design Principles
4 Requirements
    4.1 Standard Names for Component Inventory
    4.2 Allow Defining New Components and Steps
    4.3 Minimal Component Support for Interoperability
    4.4 Allow Pipeline Composition
    4.5 Iteration of Documents and Elements
    4.6 Conditional Processing of Inputs
    4.7 Error Handling and Fall-back
    4.8 Support for the XPath 2.0 Data Model
    4.9 Allow Optimization
    4.10 Streaming XML Pipelines
5 Use cases
    5.1 Apply a Sequence of Operations
    5.2 XInclude Processing
    5.3 Parse/Validate/Transform
    5.4 Document Aggregation
    5.5 Single-file Command-line Document Processing
    5.6 Multiple-file Command-line Document Generation
    5.7 Extracting MathML
    5.8 Style an XML Document in a Browser
    5.9 Run a Custom Program
    5.10 XInclude and Sign
    5.11 Make Absolute URLs
    5.12 A Simple Transformation Service
    5.13 Service Request/Response Handling on a Handheld
    5.14 Interact with Web Service (Tide Information)
    5.15 Parse and/or Serialize RSS descriptions
    5.16 XQuery and XSLT 2.0 Collections
    5.17 An AJAX Server
    5.18 Dynamic XQuery
    5.19 Read/Write Non-XML File
    5.20 Update/Insert Document in Database
    5.21 Content-Dependent Transformations
    5.22 Configuration-Dependent Transformations
    5.23 Response to XML-RPC Request
    5.24 Database Import/Ingestion
    5.25 Metadata Retrieval
    5.26 Non-XML Document Production
    5.27 Integrate Computation Components (MathML)
    5.28 Document Schema Definition Languages (DSDL) - Part 10: Validation Management
    5.29 Large-Document Subtree Iteration
    5.30 Adding Navigation to an Arbitrarily Large Document
    5.31 Fallback to Choice of XSLT Processor
    5.32 No Fallback for XQuery Causes Error


A References
B Contributors

1 Introduction

A large and growing set of specifications describe processes operating on XML documents. Many applications will depend on the use of more than one of these specifications. Considering how implementations of these specifications might interact raises many issues related to interoperability. This specification contains requirements on an XML Pipeline Language for the description of XML process interactions in order to address these issues. This specification is concerned with the conceptual model of XML process interactions, the language for the description of these interactions, and the inputs and outputs of the overall process. This specification is not generally concerned with the implementations of actual XML processes participating in these interactions.

2 Terminology

[Definition: XML Information Set or "Infoset"]

An XML Information Set or "Infoset" is the name we give to any implementation of a data model for XML which supports the vocabulary as defined by the XML Information Set recommendation [xml-infoset-rec].

[Definition: XML Pipeline]

An XML Pipeline is a conceptualization of the flow of documents through a configuration of steps and their parameters. The XML Pipeline defines a process in terms of the order, dependencies, or iteration of steps over XML information sets.

[Definition: XML Pipeline Specification Document]

An XML Pipeline Specification Document is an XML document that describes an XML pipeline.

[Definition: Step]

A step is a specification of how a component is used in a pipeline that includes inputs, outputs, and parameters.

[Definition: Component]

A component is a particular XML technology (e.g. XInclude, XML Schema Validity Assessment, XSLT, XQuery, etc.).

[Definition: Input Document]

An XML infoset that is an input to an XML Pipeline or Step.

[Definition: Output Document]

The result of processing by an XML Pipeline or Step.

[Definition: Parameter]

A parameter is input to a Step or an XML Pipeline in addition to the Input and Output Document(s) that it may access. Parameters are most often simple, scalar values such as integers, booleans, and URIs, and they are most often named, but neither of these conditions is mandatory. That is, we do not (at this time) constrain the range of values a parameter may hold, nor do we (at this time) forbid a Step from accepting anonymous parameters.

[Definition: XML Pipeline Environment]

The technology or platform environment in which the XML Pipeline is used (e.g. command-line, web servers, editors, browsers, embedded applications, etc.).

[Definition: Streaming]

The ability to parse an XML document and pass information items between components without building a full document information set.
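As an informal illustration of the definition above (not part of any required component inventory), the following Python sketch streams over a document with the standard library's iterparse, handing each completed item onward and discarding it, so a full document information set is never built:

```python
import io
import xml.etree.ElementTree as ET

doc = io.BytesIO(b"<log><entry>a</entry><entry>b</entry></log>")

seen = []
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "entry":
        seen.append(elem.text)   # hand the item to the next component
        elem.clear()             # drop the subtree to bound memory use

print(seen)  # ['a', 'b']
```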

3 Design Principles

The design principles described in this document are requirements whose fulfillment is an overall goal of the specification. It is not necessarily the case that a specific feature meets each requirement. Instead, the whole set of specifications related to this requirements document should meet the overall goal stated in each design principle.

Technology Neutral

Applications should be free to implement XML processing using appropriate technologies such as SAX, DOM, or other infoset representations.

Platform Neutral

Application computing platforms should not be limited to any particular class of platforms such as clients, servers, distributed computing infrastructures, etc. In addition, the resulting specifications should not be swayed by the specifics of use on those platforms.

Small and Simple

The language should be as small and simple as practical. It should be "small" in the sense that simple processing can be stated in a compact way, and "simple" in the sense that specifying more complex processing does not require arduous steps in the XML Pipeline Specification Document.

Infoset Processing

At a minimum, an XML document is represented and manipulated as an XML Information Set. The use of supersets, augmented information sets, or data models that can be represented or conceptualized as information sets should be allowed, and in some instances, encouraged (e.g. for the XPath 2.0 Data Model).

Straightforward Core Implementation

It should be relatively easy to build a conforming implementation of the language, but it should also be possible to build a sophisticated implementation that performs its own optimizations and integrates with other technologies.

Address Practical Interoperability

An XML Pipeline must be able to be exchanged between different software systems with a minimum expectation of the same result for the pipeline, given that the XML Pipeline Environment is the same. A reasonable resolution to platform differences for binding or serialization of resulting infosets should be expected to be addressed by this specification or by re-use of existing specifications.

Validation of XML Pipeline Documents by a Schema

The XML Pipeline Specification Document should be able to be validated by both W3C XML Schema and RelaxNG.

Reuse and Support for Existing Specifications

XML Pipelines need to support existing XML specifications and reuse common design patterns from within them. In addition, there must be support for the use of future specifications as much as possible.

Arbitrary Components

The specification should allow the use of any component technology that can consume or produce XML Information Sets.

Control of Inputs and Outputs

An XML Pipeline must allow control over specifying both the inputs and outputs of any process within the pipeline. This applies to the inputs and outputs of both the XML Pipeline and its containing steps. It should also allow for the case where there might be multiple inputs and outputs.

Control of Flow and Errors

An XML Pipeline must allow control over the explicit and implicit handling of the flow of documents between steps. When errors occur, it must be possible to handle them explicitly to allow alternate courses of action within the XML Pipeline.

4 Requirements

4.1 Standard Names for Component Inventory [req-standard-names]

The XML Pipeline Specification Document must have standard names for components that correspond to, but are not limited to, the following specifications [xml-core-wg]:

  • XML Base

  • XInclude

  • XSLT 1.0/2.0

  • XSL FO

  • XML Schema

  • XQuery

  • RelaxNG

4.2 Allow Defining New Components and Steps [req-new-components-steps]

An XML Pipeline must allow applications to define and share new steps that use new or existing components. [xml-core-wg]

4.3 Minimal Component Support for Interoperability [req-minimal-components]

There must be a minimal inventory of components defined by the specification that are required to be supported to facilitate interoperability of XML Pipelines.

4.4 Allow Pipeline Composition [req-allow-composition]

Mechanisms for XML Pipeline composition for re-use or re-purposing must be provided within the XML Pipeline Specification Document.
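One way to conceptualize composition is to treat a pipeline as a step in its own right. The following Python sketch (the step names are hypothetical placeholders, and this is not the specification's syntax) shows a pipeline reused as a step inside another pipeline:

```python
from functools import reduce
import xml.etree.ElementTree as ET

def pipeline(*steps):
    # A pipeline maps a document to a document, so it is itself a step.
    return lambda doc: reduce(lambda d, step: step(d), steps, doc)

def strip_comments(doc):       # hypothetical placeholder step
    return doc

def add_root_attr(doc):        # hypothetical placeholder step
    doc.getroot().set("processed", "yes")
    return doc

inner = pipeline(strip_comments)          # a pipeline...
outer = pipeline(inner, add_root_attr)    # ...reused as a step

result = outer(ET.ElementTree(ET.Element("doc")))
print(result.getroot().get("processed"))  # yes
```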

4.5 Iteration of Documents and Elements [req-iteration]

XML Pipelines should allow iteration of a specific set of steps over a collection of documents and/or elements within a document.

4.6 Conditional Processing of Inputs [req-conditional-processing]

To allow run-time selection of steps, XML Pipelines should provide mechanisms for conditional processing of documents or elements within documents based on expression evaluation. [xml-core-wg]
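A minimal sketch of such run-time step selection, with Python predicates standing in for expression evaluation (the element names and branches here are hypothetical):

```python
import xml.etree.ElementTree as ET

def choose(doc, branches):
    """branches: list of (predicate, step); the first true predicate wins."""
    for predicate, step in branches:
        if predicate(doc):
            return step(doc)
    return doc  # implicit identity fall-through

root = ET.fromstring("<order><priority/></order>")
result = choose(root, [
    (lambda d: d.find("priority") is not None, lambda d: "rush"),
    (lambda d: True,                           lambda d: "normal"),
])
print(result)  # rush
```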

4.7 Error Handling and Fall-back [req-error-handling-fallback]

XML Pipelines must provide mechanisms for addressing error handling and fall-back behaviors. [xml-core-wg]

4.8 Support for the XPath 2.0 Data Model [req-xdm]

XML Pipelines must support the XPath 2.0 Data Model to allow support for XPath 2.0, XSLT 2.0, and XQuery as steps.


Note: At this point, there is no consensus in the working group that minimal conforming implementations are required to support the XPath 2.0 Data Model.

4.9 Allow Optimization [req-allow-optimization]

An XML Pipeline should not inhibit a sophisticated implementation from performing parallel operations, lazy or greedy processing, and other optimizations. [xml-core-wg]

4.10 Streaming XML Pipelines [req-streaming-pipes]

An XML Pipeline should allow for the existence of streaming pipelines in certain instances as an optional optimization. [xml-core-wg]

5 Use cases

This section contains a set of use cases that support our requirements and will inform our design. While there is a desire to address all the use cases listed in this document, the first version of these specifications may not solve all of them. Unsolved use cases may be addressed in future versions of those specifications.

To aid navigation, the requirements can be mapped to the use cases of this section as follows:

Requirement → Use Cases

  • 4.9 Allow Optimization: 5.29 Large-Document Subtree Iteration, 5.30 Adding Navigation to an Arbitrarily Large Document
  • 4.10 Streaming XML Pipelines: 5.29 Large-Document Subtree Iteration, 5.30 Adding Navigation to an Arbitrarily Large Document
  • 4.2 Allow Defining New Components and Steps: 5.9 Run a Custom Program, 5.27 Integrate Computation Components (MathML)
  • 4.7 Error Handling and Fall-back: 5.32 No Fallback for XQuery Causes Error, 5.31 Fallback to Choice of XSLT Processor
  • 4.6 Conditional Processing of Inputs: 5.21 Content-Dependent Transformations, 5.30 Adding Navigation to an Arbitrarily Large Document
  • 4.1 Standard Names for Component Inventory: 5.3 Parse/Validate/Transform, 5.2 XInclude Processing
  • 4.3 Minimal Component Support for Interoperability: 5.3 Parse/Validate/Transform, 5.2 XInclude Processing
  • 4.4 Allow Pipeline Composition: 5.23 Response to XML-RPC Request, 5.24 Database Import/Ingestion, 5.15 Parse and/or Serialize RSS descriptions
  • 4.5 Iteration of Documents and Elements: 5.15 Parse and/or Serialize RSS descriptions, 5.6 Multiple-file Command-line Document Generation, 5.11 Make Absolute URLs, 5.24 Database Import/Ingestion
  • 4.8 Support for the XPath 2.0 Data Model: 5.16 XQuery and XSLT 2.0 Collections


The above table is known to be incomplete and will be completed in a later draft.

5.1 Apply a Sequence of Operations [use-case-apply-sequence]

Apply a sequence of operations such as XInclude, validation, and transformation to a document, aborting if the result or an intermediate stage is not valid.

(source: [xml-core-wg])

5.2 XInclude Processing [use-case-xinclude]

  1. Retrieve a document containing XInclude instructions.

  2. Locate documents to be included.

  3. Perform XInclude inclusion.

  4. Return a single XML document.

(source: Erik Bruchez)
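These steps can be sketched with Python's standard-library XInclude support; the in-memory loader and document name here are hypothetical stand-ins for real URI retrieval:

```python
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

MAIN = """<doc xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="chapter1.xml"/>
</doc>"""

def loader(href, parse, encoding=None):
    # Hypothetical in-memory store standing in for document retrieval.
    store = {"chapter1.xml": "<chapter>Hello</chapter>"}
    return ET.fromstring(store[href])

root = ET.fromstring(MAIN)
ElementInclude.include(root, loader=loader)   # perform the inclusion
result = ET.tostring(root, encoding="unicode")
print(result)
```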

5.3 Parse/Validate/Transform [use-case-parse-validate-transform]

  1. Parse the XML.

  2. Perform XInclude.

  3. Validate with Relax NG, possibly aborting if not valid.

  4. Validate with W3C XML Schema, possibly aborting if not valid.

  5. Transform.

(source: Norm Walsh)

5.4 Document Aggregation [use-case-document-aggregation]

  1. Locate a collection of documents to aggregate.

  2. Perform aggregation under a new document element.

  3. Return a single XML document.

(source: Erik Bruchez)
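A minimal sketch of these steps (the source documents are hypothetical stand-ins for a located collection): each document is parsed and its root attached under a new document element.

```python
import xml.etree.ElementTree as ET

sources = ["<a>1</a>", "<b>2</b>"]    # stand-ins for located documents

aggregate = ET.Element("collection")  # the new document element
for src in sources:
    aggregate.append(ET.fromstring(src))

result = ET.tostring(aggregate, encoding="unicode")
print(result)  # <collection><a>1</a><b>2</b></collection>
```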

5.5 Single-file Command-line Document Processing [use-case-simple-command-line]

  1. Read a DocBook document.

  2. Validate the document.

  3. Process it with XSLT.

  4. Validate the resulting XHTML.

  5. Save the HTML file using HTML serialization.

(source: Erik Bruchez)

5.6 Multiple-file Command-line Document Generation [use-case-multiple-command-line]

  1. Read a list of source documents.

  2. For each document in the list:

    1. Read the document.

    2. Perform a series of XSLT transformations.

    3. Serialize each result.

  3. Alternatively, aggregate the resulting documents and serialize a single result.

(source: Erik Bruchez)

5.7 Extracting MathML [use-case-extract-mathml]

Extract MathML fragments from an XHTML document and render them as images. Employ an SVG renderer for SVG glyphs embedded in the MathML.

(source: [xml-core-wg])

5.8 Style an XML Document in a Browser [use-case-style-browser]

Style an XML document in a browser with one of several different stylesheets without having multiple copies of the document containing different xml-stylesheet directives.

(source: [xml-core-wg])

5.9 Run a Custom Program [use-case-run-program]

Run a program of your own, with some parameters, on an XML file and display the result in a browser.

(source: [xml-core-wg])

5.10 XInclude and Sign [use-case-xinclude-dsig]

  1. Process an XML document through XInclude.

  2. Transform the result with XSLT using a fixed transformation.

  3. Digitally sign the result with XML Signatures.

(source: Henry Thompson)

5.11 Make Absolute URLs [use-case-make-absolute-urls]

  1. Process an XML document through XInclude.

  2. Remove any xml:base attributes anywhere in the resulting document.

  3. Schema validate the document with a fixed schema.

  4. For all elements or attributes whose type is xs:anyURI, resolve the value against the base URI to create an absolute URI. Replace the value in the document with the resulting absolute URI.

This example assumes preservation of infoset ([base URI]) and PSVI ([type definition]) properties from step to step. Also, there is no way to reorder these steps as the schema doesn't accept xml:base attributes but the expansion requires xs:anyURI typed values.

(source: Henry Thompson)
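Step 4 depends on PSVI type information; as a stand-in, this sketch treats an assumed list of attribute names as xs:anyURI-typed and resolves their values against a fixed base URI (both assumptions, not part of the use case as stated):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

URI_ATTRS = {"href", "src"}   # hypothetical: would come from the PSVI
BASE = "http://example.com/docs/"

root = ET.fromstring('<doc><link href="intro.xml"/></doc>')
for elem in root.iter():
    for name in URI_ATTRS & set(elem.attrib):
        # Replace the relative value with its absolute form.
        elem.set(name, urljoin(BASE, elem.get(name)))

print(root.find("link").get("href"))  # http://example.com/docs/intro.xml
```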

5.12 A Simple Transformation Service [use-case-simple-transform-service]

  1. Extract XML document (XForms instance) from an HTTP request body

  2. Execute XSLT transformation on that document.

  3. Call a persistence service with resulting document

  4. Return the XML document from persistence service (new XForms instance) as the HTTP response body.

(source: Erik Bruchez)

5.13 Service Request/Response Handling on a Handheld [use-case-handheld-service]

Allow an application on a handheld device to construct a pipeline, send the pipeline and some data to the server, allow the server to process the pipeline and send the result back.

(source: [xml-core-wg])

5.14 Interact with Web Service (Tide Information) [use-case-web-service]

  1. Parse the incoming XML request.

  2. Construct a URL to a REST-style web service at the NOAA (see website).

  3. Parse the resulting invalid HTML document by translating and fixing the HTML to make it XHTML (e.g. use TagSoup or tidy).

  4. Extract the tide information from a plain-text table of data in the document by applying a regular expression and creating markup from the matches.

  5. Use XQuery to select the high and low tides.

  6. Formulate an XML response from that tide information.

(source: Alex Milowski)

5.15 Parse and/or Serialize RSS descriptions [use-case-rss-descriptions]

Parse descriptions:

  1. Iterate over the RSS description elements and do the following:

    1. Gather the text children of the 'description' element.

    2. Parse the contents with a simulated document element in the XHTML namespace.

    3. Set the resulting children as the children of the 'description' element.

  2. Apply rest of pipeline steps.

Serialize descriptions:

  1. Iterate over the RSS description elements and do the following:

    1. Serialize the children elements.

    2. Generate a new text child containing the serialized contents (escaped text).

  2. Apply rest of pipeline steps.

(source: Alex Milowski)
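The "parse descriptions" direction can be sketched as follows; the sample feed is hypothetical, and a namespaced div stands in for the simulated XHTML document element:

```python
import xml.etree.ElementTree as ET

XHTML = "http://www.w3.org/1999/xhtml"
rss = ET.fromstring(
    "<channel><item><description>&lt;p&gt;Hi&lt;/p&gt;</description>"
    "</item></channel>")

for desc in rss.iter("description"):
    # Reparse the escaped text under a simulated XHTML document element.
    wrapper = ET.fromstring('<div xmlns="%s">%s</div>' % (XHTML, desc.text))
    desc.text = None
    desc.extend(wrapper)   # the parsed children replace the text

result = ET.tostring(rss, encoding="unicode")
print(result)
```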

5.16 XQuery and XSLT 2.0 Collections [use-case-collections]

In XQuery and XSLT 2.0 there is the notion of input and output collections; a pipeline must be able to consume or produce collections of documents as the inputs or outputs of both individual steps and whole pipelines.

For example, for input collections:

  1. Accept a collection of documents.

  2. Apply a single XSLT 2.0 transformation that processes the collection and produces another collection.

  3. Serialize the collection to files or URIs.

For example, for output collections:

  1. Accept a single document as input.

  2. Apply an XQuery that produces a sequence of documents (a collection).

  3. Serialize the collection to files or URIs.

5.17 An AJAX Server [use-case-ajax-server]

  1. Receive XML request with word to complete.

  2. Call a sub-pipeline that retrieves list of completions for that word.

  3. Format resulting document with XSLT.

  4. Serialize response to XML.

(source: Erik Bruchez)

5.18 Dynamic XQuery [use-case-dynamic-xquery]

  1. Dynamically create an XQuery query using XSLT, based on input XML document.

  2. Execute the XQuery against a database.

  3. Construct an XHTML result page using XSLT from the result of the query.

  4. Serialize response to HTML.

(source: Erik Bruchez)

5.19 Read/Write Non-XML File [use-case-rw-non-xml]

  1. Read a CSV file and convert it to XML.

  2. Process the document with XSLT.

  3. Convert the result to a CSV format using text serialization.

(source: Erik Bruchez)
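Steps 1 and 3 can be sketched with the standard library (the row/cell vocabulary is a hypothetical intermediate format): CSV records become elements, are available for XSLT-style processing, and then serialize back to CSV.

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(text):
    table = ET.Element("table")
    for record in csv.reader(io.StringIO(text)):
        row = ET.SubElement(table, "row")
        for value in record:
            ET.SubElement(row, "cell").text = value
    return table

def xml_to_csv(table):
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in table:
        writer.writerow(cell.text for cell in row)
    return out.getvalue()

table = csv_to_xml("a,b\n1,2\n")
# ... an XSLT-equivalent step would transform `table` here ...
print(xml_to_csv(table))
```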

5.20 Update/Insert Document in Database [use-case-update-insert-db]

  1. Receive an XML document to save.

  2. Check the database to see if the document exists.

  3. If the document exists, update the document.

  4. If the document does not exist, add the document.

(source: Erik Bruchez)

5.21 Content-Dependent Transformations [use-case-content-depend]

  1. Receive an XML document to format.

  2. If the document is XHTML, apply a theme via XSLT and serialize as HTML.

  3. If the document is XSL-FO, apply an XSL FO processor to produce PDF.

  4. Otherwise, serialize the document as XML.

(source: Erik Bruchez)
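The dispatch logic can be sketched by inspecting the input document's root element namespace; the returned branch labels are placeholders for the real formatting sub-pipelines:

```python
import xml.etree.ElementTree as ET

XHTML = "http://www.w3.org/1999/xhtml"
XSLFO = "http://www.w3.org/1999/XSL/Format"

def dispatch(doc):
    tag = doc.tag
    if tag == "{%s}html" % XHTML:
        return "theme+html"   # stand-in for XSLT theme + HTML output
    if tag.startswith("{%s}" % XSLFO):
        return "fo->pdf"      # stand-in for an XSL FO processor
    return "xml"              # default XML serialization

doc = ET.fromstring('<html xmlns="%s"/>' % XHTML)
print(dispatch(doc))  # theme+html
```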

5.22 Configuration-Dependent Transformations [use-case-config-depend]

Mobile example:

  1. Receive an XML document to format.

  2. If the configuration is "desktop browser", apply desktop XSLT and serialize as HTML.

  3. If the configuration is "mobile browser", apply mobile XSLT and serialize as XHTML.

News feed example:

  1. Receive an XML document in Atom format.

  2. If the configuration is "RSS 1.0", apply "Atom to RSS 1.0" XSLT.

  3. If the configuration is "RSS 2.0", apply "Atom to RSS 2.0" XSLT.

  4. Serialize the document as XML.

(source: Erik Bruchez)

5.23 Response to XML-RPC Request [use-case-xml-rpc]

  1. Receive an XML-RPC request.

  2. Validate the XML-RPC request with a RelaxNG schema.

  3. Dispatch to different sub-pipelines depending on the content of /methodCall/methodName.

  4. Format the sub-pipeline response to XML-RPC format via XSLT.

  5. Validate the XML-RPC response with an W3C XML Schema.

  6. Return the XML-RPC response.

(source: Erik Bruchez)

5.24 Database Import/Ingestion [use-case-import-ingestion]

Import example:

  1. Read a list of source documents.

  2. For each document in the list:

    1. Validate the document.

    2. Call a sub-pipeline to insert content into a relational or XML database.

Ingestion example:

  1. Receive a directory name.

  2. Produce a list of files in the directory as an XML document.

  3. For each element representing a file:

    1. Create an iTQL query using XSLT.

    2. Query the repository to check if the file has been uploaded.

    3. Upload if necessary.

    4. Inspect the file to check the metadata type.

    5. Transform the document with XSLT.

    6. Make a SOAP call to ingest the document.

(source: Erik Bruchez)

5.25 Metadata Retrieval [use-case-metadata]

  1. Call a SOAP service with metadata format as a parameter.

  2. Create an iTQL query with XSLT.

  3. Query a repository for the XML document.

  4. Load a list of XSLT transformations from a configuration.

  5. Iteratively execute the XSLT transformations.

  6. Serialize the result to XML.

(source: Erik Bruchez)

5.26 Non-XML Document Production [use-case-non-xml-production]

  1. A non-XML document is fed into the process.

  2. That input is converted into a well-formed XML document.

  3. A table of contents is extracted.

  4. Pagination is performed.

  5. Each page is transformed into some output language.

(source: Rui Lopes)

  1. Read a non-XML document.

  2. Transform.

(source: Norm Walsh)

5.27 Integrate Computation Components (MathML) [use-case-computations]

  1. Select a MathML content element.

  2. For that element, apply a computation (e.g. compute the kernel of a matrix).

  3. Replace the input MathML with the output of the computation.

(source: Alex Milowski)

5.28 Document Schema Definition Languages (DSDL) - Part 10: Validation Management [use-case-dsdl-validation]

This document provides a test scenario that will be used to create validation management scripts using a range of existing techniques, including those used for program compilation, etc.

The steps required to validate our sample document are:

  1. Use ISO 19757-4 Namespace-based Validation Dispatching Language (NVDL) to split out the parts of the document that are encoded using HTML, SVG and MathML from the bulk of the document, whose tags are defined using a user-defined set of markup tags.

  2. Validate the HTML elements and attributes using the HTML 4.0 DTD (W3C XML DTD).

  3. Use a set of Schematron rules stored in check-metadata.xml to ensure that the metadata of the HTML elements defined using Dublin Core semantics conform to the information in the document about the document's title and subtitle, author, encoding type, etc.

  4. Validate the SVG components of the file using the standard W3C schema provided in the SVG 1.2 specification.

  5. Use the Schematron rules defined in SVG-subset.xml to ensure that the SVG file only uses those features of SVG that are valid for the particular SVG viewer available to the system.

  6. Validate the MathML components using the latest version of the MathML schema (defined in RELAX-NG) to ensure that all maths fragments are valid. The schema will make use of the datatype definitions in check-maths.xml to validate the contents of specific elements.

  7. Use MathML-SVG.xslt to transform the MathML segments to displayable SVG and replace each MathML fragment with its SVG equivalent.

  8. Use the ISO 19757-8 Document Schema Renaming Language (DSRL) definitions in convert-mynames.xml to convert the tags in the local nameset to the form that can be used to validate the remaining part of the document using docbook.dtd.

  9. Use the ISO 19757-7 Character Repertoire Definition Language (CRDL) rules defined in mycharacter-checks.xml to validate that the correct character sets have been used for text identified as being Greek and Cyrillic.

  10. Convert the Docbook tags to HTML so that they can be displayed in a web browser using the docbook-html.xslt transformation rules.

Each validation script should allow the four streams produced by step 1 to be run in parallel, without requiring the other validations to be carried out if there is an error in another stream. This means that steps 2 and 3 should be carried out in parallel with steps 4 and 5, and/or steps 6 and 7, and/or steps 8 and 9. After completion of step 10, the HTML (both streams) and SVG (both streams) should be recombined to produce a single stream that can be fed to a web browser. The flow is illustrated in the following diagram:

[Figure: DSDL validation management flow]

(source: Martin Bryan)

5.29 Large-Document Subtree Iteration [use-case-large-document-transform]

Running XSLT on a very large document isn't typically practical. In these cases, it is often the case that only a particular element, which may be repeated many times, needs to be transformed. Conceptually, a pipeline could limit the transformation to a subtree by:

  1. Limiting the transform to a subtree of the document identified by an XPath.

  2. For each subtree, cache the subtree and build a whole document with the identified element as the document element and then run a transform to replace that subtree in the original document.

  3. For any non-matches, the document remains the same and "streams" around the transform.

This allows the transform and the tree building to be limited to a small subtree and the rest of the process to stream. As such, an arbitrarily large document can be processed in a bounded amount of memory.

(source: Alex Milowski)
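The bounded-memory behavior described above can be sketched with a streaming parser (the document and the uppercasing transform are hypothetical stand-ins): each matching subtree is built in full, transformed in isolation, and then discarded before the next match.

```python
import io
import xml.etree.ElementTree as ET

BIG = io.BytesIO(b"<dump>" + b"<record>x</record>" * 3 + b"</dump>")

def transform(elem):              # stand-in for an XSLT step on the subtree
    elem.text = elem.text.upper()

count = 0
for event, elem in ET.iterparse(BIG, events=("end",)):
    if elem.tag == "record":
        transform(elem)           # only this subtree exists as a full tree
        count += 1
        elem.clear()              # release it before the next match
print(count)  # 3
```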

5.30 Adding Navigation to an Arbitrarily Large Document [use-case-add-nav]

For a particular website, every XHTML document needs to have navigation elements added to the document. The navigation is static text that surrounds the body of the document. This navigation is added by:

  1. Matching the head and body elements using an XPath expression that can be streamed.

  2. Inserting a stub for a transformation for including the style and surrounding navigation of the site.

  3. For each of the stubs, transformations insert the markup using a subtree expansion that allows the rest of the document to stream.

In the end, the pipeline allows arbitrarily large XHTML documents to be processed at a near-constant cost.

(source: Alex Milowski)

5.31 Fallback to Choice of XSLT Processor [use-case-fallback-choice]

A step in a pipeline produces multiple output documents. In XSLT 2.0, this is a standard feature of all XSLT 2.0 processors. In XSLT 1.0, this is not standard.

A pipeline author wants to write a pipeline in which, at compile-time, the implementation chooses XSLT 2.0 when possible and degrades to XSLT 1.0 when XSLT 2.0 is not supported. In the XSLT 1.0 case, the step will use XSLT extensions to support the multiple output documents, which again may fail. Fortunately, the XSLT 1.0 transformation can be written to test for this.

(source: Alex Milowski)

5.32 No Fallback for XQuery Causes Error [use-case-no-fallback-error]

As the final step in a pipeline, XQuery is required to be run. If the XQuery step is not available, the compilation of the pipeline needs to fail. Here the pipeline author has chosen that the pipeline must not run if XQuery is not available.

(source: Alex Milowski)

A References

xml-core-wg
    XML Processing Model Requirements. Dmitry Lenkov, Norman Walsh, editors. W3C Working Group Note 05 April 2004. (See http://www.w3.org/TR/proc-model-req/.)
xml-infoset-rec
    XML Information Set (Second Edition). John Cowan, Richard Tobin, editors. W3C Recommendation 4 February 2004. (See http://www.w3.org/TR/xml-infoset/.)

B Contributors

The following members of the XML Core Working Group contributed to this specification as part of their requirements document effort within that working group: