This document provides authors of specifications, software developers, and content developers with a common reference for interoperable text manipulation on the World Wide Web. Topics addressed include encoding identification, early uniform normalization, string identity matching, string indexing, and URI conventions, building on the Universal Character Set, defined jointly by Unicode and ISO/IEC 10646. Some introductory material on characters and character encodings is also provided.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. The latest status of this series of documents is maintained at the W3C.
This is a W3C Last Call Working Draft for review by W3C Members and other interested parties. The Last Call period begins 26 January 2001 and ends 23 February 2001. The W3C Internationalization Working Group (Members only) invites comments on this specification. Due to the architectural nature of this document, it affects a large number of W3C Working Groups, but also software developers, content developers, and writers and users of specifications outside the W3C that have to interface with W3C specifications.
Comments are instrumental in the WG's deliberations, and we encourage readers to review this Last Call Working Draft carefully. Comments from the public and from organizations outside the W3C should be sent to firstname.lastname@example.org (archive). Comments from W3C Working Groups are best sent directly to the Internationalization Interest Group (email@example.com), with cross-posting to the originating Group, to facilitate discussion and resolution.
This document is published as part of the W3C Internationalization Activity by the Internationalization Working Group, with the help of the Internationalization Interest Group. The Internationalization Working Group will not allow early implementation to constrain its ability to make changes to this specification prior to final release. Publication as a Working Draft does not imply endorsement by the W3C Membership. It is inappropriate to use W3C Working Drafts as reference material or to cite them as other than "work in progress".
For information about the requirements that informed the development of important parts of this specification, see Requirements for String Identity Matching and String Indexing. A list of current W3C Recommendations and other technical documents can be found at the W3C Web site.
1.1 Goals and Scope
3.1 Perceptions of Characters
3.2 Digital Representation of Characters
3.5 Reference Processing Model
3.6 Choice and Identification of Character Encodings
3.7 Character Escaping
4 Early Uniform Normalization
4.2 Definitions for W3C Text Normalization
4.3 Responsibility for Normalization
5 Compatibility and Formatting Characters
6 String Identity Matching
7 String Indexing
8 Character Encoding in URI References
9 Referencing the Unicode Standard and ISO/IEC 10646
Appendix A Examples of Characters, Keystrokes and Glyphs
Change Log (Non-Normative)
The goal of this document is to facilitate use of the Web by all people, regardless of their language, script, writing system, and cultural conventions, in accordance with the W3C goal of Universal Access. One basic prerequisite to achieve this goal is to be able to transmit and process the characters used around the world in a well-defined and well-understood way.
The main target audience of this document are W3C specification developers. This document defines conformance requirements for other W3C specifications. This document and parts of it can also be referenced from other W3C specifications.
Other audiences of this document include software developers, content developers, and authors of specifications outside the W3C. Software developers and content developers implement and use W3C specifications. This document defines some conformance requirements for software developers and content developers that implement and use W3C specifications. It also helps software developers and content developers to understand the character-related provisions in other W3C specifications.
The character model described in this document provides authors of specifications, software developers, and content developers with a common reference for consistent, interoperable text manipulation on the World Wide Web. Working together, these three groups can build a more international Web.
Topics addressed include encoding identification, early uniform normalization, string identity matching, string indexing, and URI conventions. Some introductory material on characters and character encodings is also provided.
At the core of the model is the Universal Character Set (UCS), defined jointly by The Unicode Standard [Unicode] and ISO/IEC 10646 [ISO/IEC 10646]. In this document, Unicode is used as a synonym for the Universal Character Set. The model will allow Web documents authored in the world's scripts (and on different platforms) to be exchanged, read, and searched by Web users around the world.
All W3C specifications have to conform to this document (see section 2 Conformance). Authors of other specifications (for example, IETF specifications) are strongly encouraged to take guidance from it.
Since other W3C specifications will rely on some of the provisions of this document without repeating them, software developers implementing W3C specifications have to conform to these provisions.
This section provides some historical background on the topics addressed in this document.
Starting with Internationalization of the Hypertext Markup Language ([RFC 2070]), the Web community has recognized the need for a character model for the World Wide Web. The first step towards building this model was the adoption of Unicode as the document character set for HTML.
The choice of Unicode was motivated by the fact that Unicode:
W3C adopted Unicode as the document character set for HTML in [HTML 4.0]. The same approach was later used for specifications such as [XML 1.0] and [CSS2]. Unicode now serves as a common reference for W3C specifications and applications.
The IETF has adopted some policies on the use of character sets on the Internet (see [RFC 2277]).
As long as data transfer on the Web remained mostly unidirectional (from server to browser), and the main purpose was to render documents, the use of Unicode without specifying additional details was sufficient. However, the Web has grown:
In short, the Web may be seen as a single, very large application ([Nicol]), rather than as a collection of small independent applications.
While these developments strengthen the requirement that Unicode be the basis of a character model for the Web, they also create the need for additional specifications on the application of Unicode to the Web. Some aspects of Unicode that require additional specification for the Web include:
It should be noted that such properties also exist in legacy encodings (where legacy encoding is taken to mean any character encoding not based on Unicode), and in many cases have been inherited by Unicode in one way or another from such legacy encodings.
The remainder of this document presents additional specifications and requirements to ensure an interoperable character model for the Web, taking into account earlier work (from W3C, ISO and IETF).
In this document, requirements are expressed using the key words "MUST", "MUST NOT", "REQUIRED", "SHALL", and "SHALL NOT". Recommendations are expressed using the key words "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL". Key words are used in accordance with [RFC 2119].
In order to conform to this document, all applicable requirements MUST be satisfied. Requirements vary for content, software and specifications. All new or revised W3C specifications MUST conform to the requirements applicable to specifications.
Where this specification contains a procedural description, it is to be understood as a way to specify the desired external behavior. Implementations MAY use other ways of achieving the same results, as long as observable behavior is not affected.
The word character is used in many contexts, with different meanings. Human cultures have radically differing writing systems, leading to radically differing concepts of a character. Such wide variation in end user experience can, and often does, result in misunderstanding. This variation is sometimes mistakenly seen as the consequence of imperfect technology. Instead, it derives from the great flexibility and creativity of the human mind and the long tradition of writing as an important part of the human cultural heritage.
The developers of W3C specifications, and the developers of software based on those specifications, are likely to be more familiar with usages they have experienced and less familiar with the wide variety of usages in an international context. Furthermore, within a computing context, characters are often confused with related concepts, resulting in incomplete or inappropriate specifications and software.
This section examines some of these contexts, meanings and confusions.
The glossary entry in [Unicode 3.0] gives:
Character. (1) The smallest component of written language that has semantic value; refers to the abstract meaning and/or shape ...
In some scripts, characters have a close relationship to phonemes, while in others they are closely related to meanings. Even when characters (loosely) correspond to phonemes, this relationship may not be simple, and there is rarely a one-to-one correspondence between character and phoneme.
Example: Japanese Hiragana and Katakana are syllabaries, not phonemic alphabets. A character in these scripts corresponds not to a phoneme, but to a syllable.
Example: Korean Hangul combines symbols for phonemes into square syllabic blocks. Depending on the user and the application, either the individual symbols or the syllabic clusters can be considered to be characters.
Example: Indic scripts use semi-regular or irregular ways to combine consonants and vowels into clusters. Depending on the user and the application, either individual consonants or vowels, or the consonant or consonant-vowel clusters can be perceived as characters.
Specification writers and software developers MUST NOT assume that there is a one-to-one correspondence between characters and phonemes.
Visual rendering introduces the notion of a glyph. Glyphs can be defined as components used to generate the visible representation of characters. There is not a one-to-one correspondence between characters and glyphs:
A set of glyphs makes up a font. Glyphs can be construed as the basic units of organization of the visual rendering of text, just as characters are the basic unit of organization of encoded text.
See Appendix A for examples of the complexities of character to glyph mapping.
Specification writers and software developers MUST NOT assume a one-to-one mapping between character codes and units of displayed text.
Some scripts, in particular Arabic and Hebrew, are written from right to left. Text including characters from these scripts can run in both directions and is therefore called bidirectional text (see examples A.6 to A.8).
[Unicode] requires that characters are stored and interchanged in logical order. Protocols, data formats and APIs MUST store, interchange or process text data in logical order. Where protocols or APIs support selection, they MUST support logical selection (range of character offsets), and they SHOULD support visual selection (area on display), because many users prefer visual selection. Visual selection can give rise to discontiguous logical selections and vice versa (see examples A.6 to A.8). Protocols and APIs that specify selection MUST be able to specify discontiguous logical selections.
In keyboard input, keystrokes and input characters do not correspond one-to-one. Only a limited number of keys can fit on a keyboard, and many writing systems have too many characters to allow such a correspondence; they must rely on more complex input methods, which transform keystroke sequences into character sequences. Other languages may require special modifier keys to input some characters. See Appendix A for examples of non-trivial input.
Specification writers and software developers MUST NOT assume that a single keystroke results in a single character, nor that a single character can be input with a single keystroke (even with modifiers), nor that keyboards are the same all over the world.
What a user considers a "letter" is dependent on the script or language in question.
Example: The letter "ö" is considered a modified "o" in French, a character closely related to "o" in German, and a character completely independent from "o" in Swedish.
Even though software fundamentally works on the level of encoded characters, the capabilities of software to sort (or otherwise process) text MUST NOT be restricted to those letters that have a one-to-one relation to encoded characters.
String comparison can both aggregate a character sequence into a single collation unit with its own position in the sorting order, and can separate various aspects of a character to be sorted separately.
Example: In traditional Spanish sorting, the character sequences "ch" and "ll" are treated as atomic collation units. Although Spanish sorting, and to some extent Spanish everyday use, treat "ch" as a character, digital encodings treat it as two characters, and keyboards do the same.
Example: The sorting of text written in a bicameral script (i.e. a script which has distinct upper and lower case letters) is usually required to ignore case differences in a first pass; case then is used to break ties in a later pass.
Software developers MUST NOT implement string comparison, for example in sorting operations, as a mere one-to-one comparison of character codes.
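Example: The following Python sketch, illustrative only, shows why a plain one-to-one comparison of character codes is not adequate collation (str.casefold is used here as a rough stand-in for a real collation algorithm, not as a recommendation):

```python
# Illustrative sketch (Python 3): raw code point comparison is not collation.
# Naive sorting places "öl" after "zebra", because U+00F6 ('ö') has a
# higher code point than U+007A ('z'); most European collations would not.
assert sorted(["zebra", "öl", "apple"]) == ["apple", "zebra", "öl"]

# Case illustrates multi-pass collation: code point order separates the
# case variants, while a case-ignoring first pass treats them as ties
# (the stable sort then preserves their original relative order).
assert sorted(["ab", "Ab", "aB"]) == ["Ab", "aB", "ab"]
assert sorted(["ab", "Ab", "aB"], key=str.casefold) == ["ab", "Ab", "aB"]
```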
Computer storage and communication rely on units of physical storage and information interchange, such as bits and bytes (also known as octets). A frequent error in specifications and implementations is the equating of characters with units of physical storage. The mapping between characters and such units of storage is actually quite complex, and is discussed in the next section, 3.2 Digital Representation of Characters.
Specification writers and software developers MUST NOT assume a simple relationship between characters and units of physical storage. Depending on the circumstances, this relationship may be many-to-many, one-to-many, many-to-one, or one-to-one.
The term character is used differently in a variety of contexts and often leads to confusion when used outside of these contexts. In the context of the digital representations of text, a character can be defined informally as a small logical unit of text. Text is then defined as sequences of characters. While such an informal definition is sufficient to create or capture a common understanding in many cases, it is also sufficiently open to create misunderstandings as soon as details start to matter. In order to write effective specifications, protocol implementations, and software for end users, it is very important to understand that these misunderstandings can occur.
Specifications using the term character MUST specify which of the possible meanings they intend. Specifications SHOULD avoid the use of the term character if a more specific term is available.
To be of any use in computers, in computer communications and in particular on the World Wide Web, characters must be encoded. In fact, much of the information processed by computers over the last few decades has been encoded text, exceptions being images, audio, video and numeric data. To achieve text encoding, a large variety of encoding schemes have been devised, which can loosely be defined as mappings between the character sequences that users manipulate and the sequences of bits that computers manipulate.
Given the complexity of text encoding and the large variety of schemes for character encoding invented throughout the computer age, a more formal description of the encoding process is useful. Text encoding can be described as follows (see [UTR #17] for a more detailed description):
A set of characters to be encoded is identified. The units of encoding, the characters, are pragmatically chosen to express text and to efficiently allow various text processes in one or more target languages. They may not correspond precisely to what users perceive as letters and other characters. The set of characters is called a repertoire.
Each character in the repertoire is then associated with a (mathematical, abstract) non-negative integer, the code point (also known as a character number). The result, a mapping from the repertoire to the set of non-negative integers, is called a coded character set (CCS).
To enable use in computers, a suitable base datatype is identified (such as a byte, a 16-bit wyde or other) and a character encoding form (CEF) is used, which encodes the abstract integers of a CCS into sequences of the code units of the base datatype. The encoding form can be extremely simple (for instance, one which encodes the integers of the CCS into the natural representation of integers of the chosen datatype of the computing platform) or arbitrarily complex (a variable number of code units, where the value of each unit is a non-trivial function of the encoded integer).
To enable transmission or storage using byte-oriented devices, a serialization scheme or character encoding scheme (CES) is next used. A CES maps the integers of one or more CCSes to well-defined sequences of bytes, taking into account the necessary specification of byte-order for multi-byte base datatypes and including in some cases switching schemes between multiple CCSes (an example is ISO 2022). A CES, together with the CCSes it is used with, is identified by an IANA charset identifier. Given a sequence of bytes representing text and a charset identifier, one can in principle unambiguously recover the sequence of characters of the text.
Note: Unfortunately, there are some important cases of charset identifiers that denote a range of slight variants of an encoding scheme, where the differences may be crucial (e.g. the well-known yen/backslash case) and may vary over time. In those cases, recovery of the character sequence from a byte sequence is not totally unambiguous. See the [XML Japanese profile] for examples of such ambiguous charsets.
In very simple cases, the whole encoding process can be collapsed to a single step, a trivial one-to-one mapping from characters to bytes; this is the case, for instance, for US-ASCII and ISO 8859-1.
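Example: The layering above can be illustrated with a short Python sketch (Python and the particular encodings shown are used purely for illustration; they are not requirements of this document):

```python
# Illustrative sketch (Python 3) of the encoding layers described above.
text = "Aé"

# Coded character set (CCS): each character in the repertoire maps to a
# code point (a non-negative integer).
assert [ord(c) for c in text] == [0x41, 0xE9]

# Character encoding form/scheme: UTF-8 is variable-length, so 'A' takes
# one byte and 'é' takes two.
assert text.encode("utf-8") == b"A\xc3\xa9"

# For multi-byte base datatypes, the CES must fix the byte order: the same
# UTF-16 code units serialize differently in big- and little-endian order.
assert text.encode("utf-16-be").hex() == "004100e9"
assert text.encode("utf-16-le").hex() == "4100e900"

# The trivial single-step case: ISO 8859-1 maps these characters
# one-to-one onto bytes.
assert text.encode("iso-8859-1") == b"A\xe9"
```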
Transcoding is the process of converting text data from one character encoding to another. Transcoders work only at the level of character encoding and do not parse the text; consequently, they do not deal with character escapes such as numeric character references (see 3.7 Character Escaping) and do not fix up embedded charset information (for instance in an XML declaration or in an HTML <meta> element).
Note: Transcoding may involve one-to-one, many-to-one, one-to-many or many-to-many mappings. Because some legacy mappings are glyphic, they may not only be many-to-many, but also discontinuous: thus XYZ may map to yxz. See example A.5.
A normalizing-transcoder is a transcoder that converts from a legacy encoding to a Unicode encoding form and ensures that the result is in Unicode Normalization Form C (see 4.2.1 Unicode-normalized Text).
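Example: A normalizing-transcoder can be sketched in a few lines of Python (illustrative only; the encodings shown are assumptions of this sketch, not requirements):

```python
# Illustrative sketch of a normalizing-transcoder (Python 3): legacy
# bytes are decoded to Unicode, then normalized to NFC.
import unicodedata

def normalizing_transcode(data: bytes, source_encoding: str) -> str:
    text = data.decode(source_encoding)          # legacy bytes -> characters
    return unicodedata.normalize("NFC", text)    # ensure Normalization Form C

# "é" in ISO 8859-1 is the single byte 0xE9, already in NFC.
assert normalizing_transcode(b"\xe9", "iso-8859-1") == "\u00e9"

# Decomposed input ('e' + COMBINING ACUTE ACCENT) is recomposed by NFC.
# (UTF-8 is not a legacy encoding; it is used here only to feed the
# function a decomposed sequence.)
assert normalizing_transcode(b"e\xcc\x81", "utf-8") == "\u00e9"
```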
Various specifications use the notion of a string, sometimes without defining precisely what is meant and sometimes defining it differently from other specifications. The reason for this variability is that there are in fact multiple reasonable definitions for a string, depending on one's intended use of the notion. This section provides a few definitions which may be reused elsewhere.
Byte string: A byte string is a sequence of bytes representing characters in a particular encoding. This corresponds to a CES. As a definition for a string, this definition is most often useless, except when the textual nature is unimportant and the string is considered only as a piece of opaque data with a length in bytes. Specifications in general SHOULD NOT define a string as a byte string.
Physical string: A physical string is a sequence of code units representing characters in a particular encoding. This corresponds to a CEF. This definition is useful in APIs that expose a physical representation of string data. Example: For the [DOM Level 1], UTF-16 was chosen based on widespread implementation practice.
Character string: A character string is a sequence of characters, each represented by a code point in [Unicode]. This is usually what programmers consider to be a string, although it does not match exactly the user perception of characters. This is the highest layer of abstraction that ensures interoperability with very low implementation effort. This definition is generally the most useful and SHOULD be used by most specifications, following the examples of Production  of [XML 1.0], the SGML declaration of [HTML 4.01], and the character model of [RFC 2070].
Grapheme string: A grapheme string is a sequence of character clusters, where the clusters are defined to be as close as possible to what the user perceives as characters, but in a way that is still language-independent.
Language-dependent grapheme string: A language-dependent grapheme string is a sequence of character clusters, where the clusters are defined to be as close as possible to what the user perceives as characters, taking into account language dependencies.
The first three definitions directly correspond to digital representation of characters as described in Section 3.2. The grapheme strings do not have a direct equivalent since they represent sequences of digital representations of characters. Widely accepted definitions of this clustering do not currently exist, but information about many aspects of it is provided in Chapter 5, Implementation Guidelines, of [Unicode 3.0], and in several Unicode Technical Reports.
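Example: The distinctions above can be made concrete with a short Python sketch (Python is used for illustration only; its strings are sequences of code points, and its standard library provides no grapheme segmentation):

```python
# Illustrative sketch (Python 3) contrasting the string definitions above,
# using "é" written as base letter plus combining mark.
s = "e\u0301"    # 'e' + COMBINING ACUTE ACCENT: one perceived character

# Character string: a sequence of code points — here, two of them.
assert len(s) == 2

# Byte string (one possible CES): UTF-8 needs three bytes here.
assert len(s.encode("utf-8")) == 3

# Physical string: a supplementary character such as U+1D11E MUSICAL
# SYMBOL G CLEF is one code point but two UTF-16 code units (4 bytes).
assert len("\U0001d11e".encode("utf-16-be")) == 4

# Grapheme string: the user perceives a single character in s, but the
# standard library offers no grapheme segmentation, so this sketch
# cannot compute it directly.
```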
Many Internet protocols and data formats, most notably the very important Web formats HTML, CSS and XML, are based on text. In those formats, everything is text but the relevant specifications impose a structure on the text, giving meaning to certain constructs so as to obtain functionality in addition to that provided by plain text. HTML and XML are markup languages, defining entities entirely composed of text but with conventions allowing the separation of this text into markup and character data. Citing from [XML 1.0], section 2.4:
Text consists of intermingled character data and markup. [...] All text that is not markup constitutes the character data of the document.
For the purposes of this section, the important aspect is that everything is text, that is, a sequence of characters.
Since its early days, the Web has seen the development of a Reference Processing Model. This model was first adopted for HTML and later embraced by XML and CSS. It is applicable to any data format or protocol that is text-based as described above. The essence of the Reference Processing Model is the use of Unicode as a common reference. A specification's use of the Reference Processing Model does not require that implementations actually use Unicode; the requirement is only that implementations behave as if the processing took place as described by the Model.
A specification conforms to the Reference Processing Model if all of the following apply:
Unless there are strong reasons to do otherwise, all W3C specifications that involve text SHOULD specify conformance to the Reference Processing Model. To ensure interoperability, all specifications SHOULD also require conformance to section 4 Early Uniform Normalization.
Note: All specifications that derive from [XML 1.0] automatically inherit this Reference Processing Model. XML is entirely defined in terms of Unicode characters and mandates the UTF-8 and UTF-16 encodings while allowing any other encoding for parsed entities.
Note: When specifications choose to allow encodings other than Unicode encodings, implementers should be aware that the correspondence between the characters of a legacy encoding and Unicode characters may in practice depend on the software used for transcoding. See the [XML Japanese profile] for examples of such inconsistencies.
Because encoded text cannot be interpreted and processed without knowing the encoding, it is vitally important that the character encoding is known at all times and places where text is exchanged or processed. Specifications MUST either specify a unique encoding, or provide mechanisms such that the encoding of text is always reliably identified. When designing a new protocol, format or API, specifications SHOULD mandate a unique character encoding.
Mandating a unique character encoding is simple, efficient, and robust. There is no need for specifying, producing, transmitting, and interpreting encoding tags. At the receiver, the encoding will always be understood. There is also no ambiguity if data is transferred non-electronically and later has to be converted back to a digital representation. Even when there is a need for compatibility with existing data, systems, protocols and applications, multiple encodings can often be dealt with at the boundaries or outside a protocol, format, or API. The DOM ([DOM Level 1]) is an example of where this was done. The advantages of choosing a unique encoding become more important the smaller the pieces of text used are and the closer to actual processing the specification is.
When a unique encoding is mandated, the encoding MUST be UTF-8 or UTF-16. If compatibility with ASCII is desired, UTF-8 (see [RFC 2279]) is RECOMMENDED; on the Internet, the IETF Charset Policy [RFC 2277] specifies that "Protocols MUST be able to use the UTF-8 charset". For APIs, UTF-16 (see [RFC 2781]) is more appropriate.
If the unique encoding approach is not chosen, then protocols MUST provide for character encoding identification which SHOULD be along the lines of the [MIME] Internet specification. The MIME charset parameter is defined such that it provides sufficient information to unambiguously decode the sequence of bytes of the payload into a sequence of characters. The values are drawn from the IANA charset registry [IANA].
Note: The term charset derives from "character set", an expression with a long and tortured history (see [Connolly] for a discussion). Specifications SHOULD avoid using the expression "character set", as well as the term "charset", except when the latter is used to refer to the MIME charset parameter or its IANA-registered values. The terms character encoding or character encoding scheme are recommended.
The IANA charset registry is the official list of names and aliases for character encodings on the Internet. If the unique encoding approach is not taken, specifications SHOULD mandate the use of those names, and in particular the names labeled in the registry as MIME preferred names, to designate character encodings in protocols, data formats and APIs. The "x-" convention for unregistered names SHOULD NOT be used; it has led to abuse in the past (use of "x-" names for character encodings that were widely deployed, even long after an official registration existed). Content developers and software that tags textual data MUST use one of the names mandated by the appropriate specification (e.g. the XML specification when editing XML text) and SHOULD use the MIME preferred name of an encoding to tag data in that encoding. An IANA-registered charset name MUST NOT be used to tag textual data in an encoding other than the one identified in the IANA registration of that name.
If the unique encoding approach is not chosen, specifications MUST designate at least one of the UTF-8 and UTF-16 encoding forms of Unicode as admissible encodings and SHOULD choose at least one of UTF-8 or UTF-16 as mandated encoding forms (encoding forms that MUST be supported by implementations of the specification). They MAY define either UTF-8 or UTF-16 as a default (or both if they define suitable means of distinguishing them), but they MUST NOT use any other character encoding as a default. Also, they MUST avoid reliance on unreliable heuristics.
Receiving software MUST determine the encoding from available information. It MAY recognize as many encodings (names and aliases) as appropriate. A field-upgradeable mechanism may be appropriate for this purpose. When an IANA-registered charset name is recognized, receiving software MUST interpret the received data according to the encoding associated with that name in the IANA registry. When no charset is provided, the receiving software MUST adhere to the default encoding(s) specified in the specification.
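Example: The receiver-side behavior described above can be sketched in Python (illustrative only; SPEC_DEFAULT is a hypothetical choice standing in for whatever default the governing specification mandates):

```python
# Illustrative sketch of receiver behaviour: use the declared charset when
# present, else the default mandated by the governing specification.
import codecs

SPEC_DEFAULT = "utf-8"   # hypothetical default mandated by the specification

def decode_payload(payload: bytes, declared_charset=None) -> str:
    if declared_charset is not None:
        # codecs.lookup resolves registered names and aliases, and raises
        # LookupError for unrecognized ones.
        codec = codecs.lookup(declared_charset)
        return payload.decode(codec.name)
    return payload.decode(SPEC_DEFAULT)

assert decode_payload(b"caf\xc3\xa9") == "café"              # default used
assert decode_payload(b"caf\xe9", "ISO-8859-1") == "café"    # declared name
```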
Certain encodings are more or less associated with certain languages (e.g. Shift-JIS with Japanese); trying to support a given language or set of customers may mean that certain encodings have to be supported. The encodings that need to be supported may change over time. This document does not give any advice on which encoding may be appropriate or necessary for the support of any given language.
Implementers of software MUST fully support character encoding identification mechanisms and SHOULD make it easy to use them (for instance in HTTP servers). On interfaces to other protocols, implementers SHOULD support conversion between Unicode encoding forms as well as any other necessary conversions. Content developers MUST make use of the offered facilities by always indicating character encoding.
Because of the layered Web architecture (e.g. formats used over protocols), there may be multiple and at times conflicting information about character encoding. Specifications MUST define conflict-resolution mechanisms (e.g. priorities) for these cases, and implementers and content developers MUST follow them carefully.
Unicode designates certain ranges of code points for private use: the Private Use Area (U+E000-F8FF) and planes 15 and 16 (U+F0000-FFFFD and U+100000-10FFFD). These code points are guaranteed to never be allocated to standard characters, and are available for use by private agreement between a producer and a recipient. However, their use is strongly discouraged, since private agreements do not scale on the Web. Code points from different private agreements may collide, and a private agreement and therefore the meaning of the code points can quickly get lost.
Specifications MUST NOT define any assignments of private use code points, and MUST NOT rely on any assignments to private use code points by other parties. Specifications MUST NOT provide mechanisms for private agreement between parties. Specifications and implementations SHOULD be designed in such a way as to not disallow the use of these code points by private arrangement. As an example, XML does not disallow the use of private use code points.
Where specifications need to allow the transmission of symbols not in Unicode or need to identify specific variants of Unicode characters, they MAY define markup for this purpose.
In text-based protocols or formats where characters can be part of either character data or markup (cf. 3.5 Reference Processing Model), certain characters are often designated as having specific protocol/format functions in certain contexts (e.g. "<" and "&" serve as markup delimiters in HTML and XML). Unlike all other characters, these syntactically relevant characters cannot be used to represent themselves in text data. In addition, formats are often represented in an encoding that cannot represent all characters directly.
To express syntactically relevant or unrepresentable characters, a technique called escaping is used. This works by creating an additional syntactic construct, defining additional characters or defining character sequences that have special meaning. Escaping a character means expressing it using such a construct, appropriate to the format or protocol in which the character appears; expanding an escape (or unescaping) means replacing it with the character that it represents.
Certain guidelines apply to the way specifications define character escapes. These guidelines MUST be followed when designing new W3C protocols and formats and SHOULD be followed as much as possible when revising existing protocols and formats.
As explained at length in Requirements for String Identity Matching and String Indexing [CharReq], the existence, in many character encoding schemes, of multiple representations for what users perceive as the same string makes it necessary to define character data normalization. Without a precise specification, it is not possible to determine reliably whether or not two strings are identical. Such a specification must take into account character encoding, the way to perform normalization and where or when to perform it.
The Unicode Consortium provides four standard normalization forms (see Unicode Normalization Forms [UTR #15]). For use on the Web, this document defines W3C Text Normalization by picking the most appropriate of these (NFC) and additionally addressing the issues of legacy encodings and of character escapes (which can denormalize text when unescaped).
This document also specifies that normalization is to be performed early (by the sender) as opposed to late (by the recipient). The reasons for that choice are manifold:
Almost all legacy data as well as data created by current software is normalized (using NFC).
The number of Web components that generate or transform text is considerably smaller than the number of components that receive text and need to perform matching or other processes requiring normalized text.
Current receiving components (browsers, XML parsers, etc.) implicitly assume early normalization by not performing normalization themselves. This is a vast legacy.
Web components that generate and process text are in a much better position to do normalization than other components; in particular, they may be aware that they deal with a restricted repertoire only.
Not all components of the Web that implement functions such as string matching can reasonably be expected to do normalization. This, in particular, applies to very small components and components in the lower layers of the architecture.
Forward-compatibility issues can be dealt with more easily: less software needs to be updated, namely only the software that generates newly introduced characters.
It improves matching in cases where the character encoding is partly undefined, such as URIs [RFC 2396].
It increases interoperability and predictability when string data is exposed in an API.
Text data is, for the purposes of this specification, Unicode-normalized if it is in a Unicode encoding form and is in Unicode Normalization Form C (according to revision 18 of [UTR #15]).
Text data is W3C-normalized (normalized for short) if:
it is Unicode-normalized and does not contain any character escapes whose unescaping would cause the data to become no longer Unicode-normalized; or
it is in a legacy encoding and, after transcoding by a normalizing-transcoder, is W3C-normalized.
Note: Text in a legacy encoding is always W3C-normalized unless it contains escapes whose expansion would denormalize it.
Note: W3C-normalization is specified against the context of a markup language (or the absence thereof), which specifies the form of escapes. For plain text (no escapes) in a Unicode encoding form, W3C-normalization and Unicode-normalization are equivalent.
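Under these definitions, a normalizing-transcoder can be sketched as a decoder followed by NFC normalization. A hypothetical Python version (the function name is illustrative, and the legacy encoding is assumed to be supported by the codec library):

```python
import unicodedata

def transcode_normalizing(data: bytes, legacy_encoding: str) -> str:
    # Decode from the legacy encoding, then apply Normalization Form C.
    return unicodedata.normalize("NFC", data.decode(legacy_encoding))

# "suçon" in ISO 8859-1 (one byte per character) transcodes to the
# five-character NFC string with precomposed ç (U+00E7).
assert transcode_normalizing(b"su\xe7on", "iso-8859-1") == "su\u00e7on"
```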
The string "suçon", expressed as the sequence of five characters U+0073 U+0075 U+00E7 U+006F U+006E and encoded in a Unicode encoding form, is both Unicode-normalized and W3C-normalized. The same string encoded in a legacy encoding for which there exists a normalizing-transcoder would be W3C-normalized but not Unicode-normalized.
The string "suçon", expressed as the sequence of six characters U+0073 U+0075 U+0063 U+0327 U+006F U+006E (U+0327 is the COMBINING CEDILLA) and encoded in a Unicode encoding form, is neither W3C-normalized nor Unicode-normalized.
In an XML or HTML context, the string "suc&#x327;on" is not W3C-normalized, whatever the encoding form, because expanding "&#x327;" yields the sequence U+0073 U+0075 U+0063 U+0327 U+006F U+006E ("suc" + COMBINING CEDILLA + "on"), which is not Unicode-normalized. Note that, since Unicode-normalization doesn't take escapes into account, the string "suc&#x327;on" is Unicode-normalized if encoded in a Unicode encoding form.
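These example strings can be checked programmatically. The sketch below assumes Python 3.8+ (for unicodedata.is_normalized) and uses html.unescape as a stand-in for the expansion of XML/HTML character references:

```python
import html
import unicodedata

def is_unicode_normalized(s: str) -> bool:
    # Unicode Normalization Form C, per [UTR #15]
    return unicodedata.is_normalized("NFC", s)

def is_w3c_normalized(s: str) -> bool:
    # The text must be Unicode-normalized, and must remain so after
    # every character reference has been expanded.
    return is_unicode_normalized(s) and is_unicode_normalized(html.unescape(s))

print(is_w3c_normalized("su\u00e7on"))    # True:  precomposed ç (U+00E7)
print(is_w3c_normalized("suc\u0327on"))   # False: c + U+0327 is not in NFC
print(is_w3c_normalized("suc&#x327;on"))  # False: expanding the escape denormalizes
```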
Producers MUST produce text data in normalized form. For the purpose of W3C specifications and their implementations, the producer of text data is the sender of the data in the case of protocols and the tool that produces the data in the case of formats.
Note: Implementers of producer software in the above sense are encouraged to delegate normalization to their respective data sources wherever possible. Examples of data sources are operating systems, libraries, and keyboard drivers.
The recipients of text data MUST assume the data is normalized and MUST NOT normalize it. Recipients which transcode text data from a legacy encoding to a Unicode encoding form MUST use a normalizing-transcoder.
If a software module functions as both a producer and a recipient of text data (e.g. a browser/editor), normalization MUST be applied in the producer part but MUST NOT be applied in the recipient part.
Note: The prohibition of normalization by recipients is necessary for consistency, on which security depends. As an example of a security vulnerability, an XML document containing two elements whose names differ only in their normalization state would be interpreted differently by processes that do or do not normalize received text. The only secure alternative to this prohibition would be late normalization, where all recipients have to normalize all the time.
Intermediate (recipient/producer) components whose role involves modification of text data MUST ensure that their modifications do not result in denormalization of the data. Consequently, any modification (addition, deletion, etc.) MUST be followed by normalization of the affected text region (which includes the characters neighboring the modification point) or, OPTIONALLY, of the entire text data.
Example: If the "z" is deleted from the (normalized) string "cz¸" (where "¸" represents a combining cedilla, U+0327), normalization is necessary to turn the denormalized result "c¸" into the properly normalized "ç". Analogous cases exist for addition and concatenation.
As an optimization, the normalization operations made necessary by modifications MAY be deferred until the text needs to be exposed (sent on the network, saved to disk, returned in an API call, etc.).
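The deletion example above can be reproduced with Python's unicodedata module, as a sketch of the modify-then-renormalize rule:

```python
import unicodedata

s = "cz\u0327"                       # "c", "z", combining cedilla: NFC-normalized,
assert unicodedata.is_normalized("NFC", s)  # since there is no precomposed z-cedilla

t = s.replace("z", "")               # delete the "z" -> "c" + combining cedilla
assert not unicodedata.is_normalized("NFC", t)  # the result is denormalized

t = unicodedata.normalize("NFC", t)  # renormalize the affected region
assert t == "\u00e7"                 # the precomposed "ç"
```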
Intermediate components whose role does not involve modification of the data (e.g. caching proxies) MUST NOT perform normalization.
This specification does not address the suitability of particular characters for use in markup languages, in particular formatting characters and compatibility equivalents. For detailed recommendations about the use of compatibility and formatting characters, see [UXML].
Specifications SHOULD exclude compatibility characters in the syntactic elements (markup, delimiters, identifiers) of the formats they define (e.g. exclusion of compatibility characters for GIs in XML).
This specification does not address any further equivalents, such as case equivalents, the equivalence between katakana and hiragana, the equivalence between accented and un-accented characters, the equivalence between full characters and fallbacks (e.g., "ö" vs. "oe" in German), and the equivalence between various spellings (e.g., color vs. colour). Such equivalences are on a higher level; whether and where they are needed depends on the language, the application, and the preferences of the user.
One important operation that depends on early normalization is string identity matching [CharReq], which is a subset of the more general problem of string matching. There are various degrees of specificity for string matching, from approximate matching such as regular expressions or phonetic matching, to more specific matches such as case-insensitive or accent-insensitive matching and finally to identity matching. In the Web environment, where multiple encodings are used to represent strings, including some encodings which allow multiple representations for the same thing, identity is defined to occur if and only if the compared strings contain no user-identifiable distinctions. This definition allows strings to match when they differ only in their encodings (including escapes) or when strings in the same encoding differ only by their use of precomposed and decomposed character sequences, yet does not make equivalent strings that differ in case or accentuation.
To avoid unnecessary conversions and, more importantly, to ensure predictability, all components of the Web must use the same identity testing mechanism. To meet this requirement and support the above definition of identity, this specification mandates the following steps for string identity matching:
Early uniform normalization to W3C-normalized form, as defined in 4.2.2 W3C-normalized Text
Conversion to a common encoding of UCS, if necessary
Expansion of all escapes
Testing for bit-by-bit identity
In accordance with section 4 Early Uniform Normalization, the first step MUST be performed by the producers of the strings to be compared. This ensures 1) that the identity matching process can produce correct results using the next three steps and 2) that a minimum of effort is spent on solving the problem.
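Assuming step 1 has already been performed by the producers, the remaining steps reduce to conversion to a common encoding, escape expansion, and a bit-by-bit comparison. A hedged Python sketch (html.unescape stands in for the format's escape expansion; a Python str is already in a common UCS form, so no explicit conversion step is shown):

```python
import html

def identity_match(a: str, b: str) -> bool:
    # Early uniform normalization is the producer's duty and is assumed
    # to have happened; the recipient expands escapes and compares.
    return html.unescape(a) == html.unescape(b)

# Both producers normalized early, so matching is a plain equality test.
assert identity_match("su\u00e7on", "su&#xE7;on")
# A producer that emitted the decomposed form violated early normalization,
# and the strings correctly fail to match:
assert not identity_match("su\u00e7on", "suc\u0327on")
```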
There are many situations where a software process needs to access a substring or to point within a string and does so by the use of indices, i.e. numeric "positions" within a string. Where such indices are exchanged between components of the Web, there is a need for an agreed-upon definition of string indexing in order to ensure consistent behavior. The requirements for string indexing are discussed in [CharReq], section 4. The two main questions that arise are: "What is the unit of counting?" and "Do we start counting at 0 or 1?".
Depending on the particular requirements of a process, the unit of counting may correspond to any of the definitions of a string provided in section 3.4 Strings (which all follow the pattern "a <foo string> is a sequence of <unit>s").
It is noteworthy that there exist other, non-numeric ways of identifying substrings which have favorable properties. For instance, substrings based on string matching are quite robust against small edits; substrings based on document structure (in structured formats such as XML) are even more robust against edits and even against translation of a document from one language to another. Consequently, specifications that need a way to identify substrings or point within a string SHOULD provide ways other than string indexing to perform this operation. Users of such specifications (software developers, content developers) SHOULD prefer those other ways whenever possible.
Experience shows that more general, flexible and robust specifications result when individual characters are understood and processed as substrings, identified by a position before and a position after the substring. Understanding indices as boundary positions between the counting units also makes it easier to relate the indices resulting from the different string definitions. Specifications SHOULD use this form of indexing, regardless of the choice of counting units. In addition, APIs SHOULD NOT specify single characters or single encoding units as argument types.
The issue of index origin, i.e. whether we count from 0 or 1, actually arises only after a decision has been made on whether it is the units themselves that are counted or the positions between the units. With the latter, starting with an index of 0 for the position at the start of the string is the RECOMMENDED solution, with the last index then being equal to the number of counting units in the string.
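Python string slicing happens to follow exactly this model: indices are zero-based boundary positions between counting units, and a substring is identified by a (before, after) pair. For example:

```python
# Positions between the units of the 5-character string "suçon":
#
#   position: 0   1   2   3   4   5
#   unit:       s   u   ç   o   n
s = "su\u00e7on"
assert len(s) == 5             # the last index equals the number of units
assert s[0:2] == "su"          # substring between positions 0 and 2
assert s[2:3] == "\u00e7"      # a single character as a (2, 3) substring

# The choice of counting unit matters: a character outside the BMP is one
# character but two UTF-16 code units.
assert len("\U0001D11E") == 1                            # one character
assert len("\U0001D11E".encode("utf-16-be")) // 2 == 2   # two UTF-16 code units
```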
According to the definition in [RFC 2396], URI references are restricted to a subset of US-ASCII. This RFC also specifies an escaping mechanism to encode arbitrary byte values, using the %HH convention. However, because the RFC does not define the mapping from characters to bytes, the %HH convention by itself is of limited use. This chapter defines how to address this issue in W3C specifications in a way consistent with the model defined in this document and with deployed practice.
W3C specifications that define protocol or format elements (e.g. HTTP headers, XML attributes,...) whose role is that they be interpreted as URI references (or specific subsets of URI references, such as absolute URI references, URIs,...) MUST allow these protocol or format elements to contain characters disallowed by the URI syntax. The disallowed characters include all non-ASCII characters, plus the excluded characters listed in Section 2.4 of [RFC 2396], except for the number sign (#) and percent sign (%) characters and the square bracket characters re-allowed in [RFC 2732].
When passing such protocol or format elements to software components that cannot deal with anything other than the characters legal in URI references, a conversion is needed. W3C specifications MUST define when this conversion is to be made. The conversion MUST take place as late as possible, i.e. all characters should be preserved as long as possible. W3C specifications MUST specify that conversion to a URI reference is carried out as follows:
- Each disallowed character is converted to UTF-8, resulting in one or more bytes.
- The resulting bytes are escaped using the URI escaping mechanism (that is, each byte is converted to %HH, where HH is the byte value expressed using hexadecimal notation).
- The original character is replaced by the resulting character sequence.
Example: The string "http://www.w3.org/People/Dürst/" is not a legal URI because the character "ü" is not allowed in URIs. The representation of "ü" in UTF-8 consists of two bytes with the values 0xC3 and 0xBC. The string is therefore converted to "http://www.w3.org/People/D%C3%BCrst/".
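The three conversion steps can be sketched with urllib.parse.quote, which by default converts to UTF-8 and %HH-escapes each resulting byte. The safe set below is a simplification of the RFC 2396 syntax for illustration, not a normative list:

```python
from urllib.parse import quote

def to_uri_reference(s: str) -> str:
    # Disallowed characters are converted to UTF-8, and each resulting
    # byte is escaped as %HH. The `safe` argument lists characters that
    # are legal in URI references and must be left untouched (simplified).
    return quote(s, safe=":/?#[]@!$&'()*+,;=-._~%")

assert to_uri_reference("http://www.w3.org/People/D\u00fcrst/") == \
    "http://www.w3.org/People/D%C3%BCrst/"
```

Keeping "%" in the safe set means already-escaped input is passed through unchanged, which matches the late-conversion rule above but would be wrong for text containing a literal percent sign.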
Note: [I-D URI-I18N] recently proposed the term Internationalized Resource Identifiers (IRI). A future version of this specification will adopt this term if it proves to be useful.
Request for feedback: There is currently a small but serious difference between [I-D URI-I18N] and the provisions above. US-ASCII characters which are not allowed in URIs (such as space, backslash, and curly brackets) are also not allowed in IRIs, but are allowed in protocol and format elements as defined above because they are escaped (as described above) during the conversion to a legal URI. We are requesting feedback on how to deal with this discrepancy.
Note: The intent of this chapter is not to limit URI references to a subset of US-ASCII characters forever, but to ensure that W3C technology correctly and predictably interacts with systems that are based on the definition of URI references while taking advantage of the capabilities of W3C technology to handle the UCS repertoire.
Note: The ability to use URI references with encodings other than UTF-8 is not affected by this chapter. However, in such a case, the %HH-escaped form always has to be used. As an example, if the HTTP server at www.w3.org used only ISO-8859-1, the above string, when used in a protocol or format element defined by a W3C specification, would always have to be given in the escaped form, "http://www.w3.org/People/D%FCrst/" ('ü' in ISO-8859-1 is 0xFC).
Note: Many current W3C specifications already contain provisions in accordance with this chapter. For [XML 1.0], see Section 4.2.2, External Entities. For [HTML 4.01], see Appendix B.2.1: Non-ASCII characters in URI attribute values, which also contains some provisions for backwards compatibility. For [XLink], see Section 5.4, Locator Attribute (href). Further information and links can be found at [Info URI-I18N].
A W3C specification that defines new syntax for URIs, such as a new kind of fragment identifier, MUST specify that characters outside the US-ASCII repertoire are encoded in URIs using UTF-8 and %HH-escaping. This makes such new syntax fully interoperable with the above provisions.
Example: [XPointer] defines fragment identifiers for XML documents. An XPointer to an element with the ID 'résumé' is xpointer(id('résumé')). The resulting fragment identifier is xpointer(id('r%C3%A9sum%C3%A9')), as the escaped UTF-8 of 'é' is %C3%A9. A URI reference to that element in a document called doc.xml can be expressed in [XLink]; assuming the encoding of the XML document can represent 'é' directly, this can be written as xlink:href="doc.xml#xpointer(id('résumé'))". This looks very convenient and obvious, but it should be noted that it is only possible because the encoding is aligned between representation (in the XLink href) and interpretation (in XPointer).
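The escaping used in this example can be verified with urllib.parse.quote, which applies the UTF-8 plus %HH convention by default (the safe set is chosen here only to keep the XPointer delimiters intact):

```python
from urllib.parse import quote

# 'é' is U+00E9; its UTF-8 form is the two bytes 0xC3 0xA9.
assert "\u00e9".encode("utf-8") == b"\xc3\xa9"
assert quote("r\u00e9sum\u00e9") == "r%C3%A9sum%C3%A9"

# Escaping only the non-ASCII characters of the XPointer yields the
# fragment identifier shown in the example.
assert quote("xpointer(id('r\u00e9sum\u00e9'))", safe="()'") == \
    "xpointer(id('r%C3%A9sum%C3%A9'))"
```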
Specifications often need to make references to the Unicode standard or International Standard ISO/IEC 10646. Such references must be made with care, especially when normative. The questions to be considered are which of the two standards to reference, and whether to reference a specific version or the standard in general.
ISO/IEC 10646 is developed and published jointly by ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission). The Unicode Standard is developed and published by the Unicode Consortium, an organization of major computer corporations, software producers, database vendors, national governments, research institutions, international agencies, various user groups, and interested individuals. The Unicode Standard is comparable in standing to W3C Recommendations.
ISO/IEC 10646 and Unicode define exactly the same CCS (same repertoire, same code points) and encoding forms. This synchronism is actively maintained by liaisons and overlapping membership between the respective technical committees. In addition to the jointly defined CCS and encoding forms, the Unicode Standard adds normative and informative lists of character properties, normative character equivalence and normalization specifications, a normative algorithm for bidirectional text and a large amount of useful implementation information. In short, Unicode adds semantics to the characters that ISO/IEC 10646 merely enumerates. Conformance to Unicode implies conformance to ISO/IEC 10646, see [Unicode 3.0] Appendix C.
Since specifications in general need both a definition for their characters and the semantics associated with these characters, specifications SHOULD include a reference to the Unicode Standard, whether or not they include a reference to ISO/IEC 10646. By providing a reference to the Unicode Standard, a specification ensures that implementers can benefit from the wealth of information provided in the standard and on the Unicode Consortium Web site.
The fact that both ISO/IEC 10646 and Unicode are evolving (in synchronism) raises the issue of versioning: should a specification refer to a specific version of the standard, or should it make a generic reference, so that the normative reference is to the version current at the time of reading the specification? In general the answer is both. A generic reference MUST be made if it is desired that characters allocated after a specification is published are usable with that specification. A specific reference MAY be included to ensure that functionality depending on a particular version is available and will not change over time (an example would be the set of characters acceptable as Name characters in [XML 1.0], which is an enumerated list that parsers must implement to validate names).
A generic reference to the Unicode Standard does not carry a version number. All generic references to Unicode MUST be understood to refer to version 3.0 or later. Generic references to ISO/IEC 10646 MUST be written such that they make allowance for the future publication of additional parts of the standard. They MUST be understood to refer to [ISO/IEC 10646-1:2000] or later, including any amendments.
Whether such a reference is included in the bibliography section of a specification or given as a simpler reference with explanatory text in the body of the specification is an editorial matter best left to each specification. Examples of the latter, as well as a discussion of the versioning issue with respect to MIME charset parameters for UCS encodings, can be found in [RFC 2279] and [RFC 2781].
A few examples will help make sense of all this complexity (which is mostly a reflection of the complexity of human writing systems). Let us start with a very simple example: a user, equipped with a US-English keyboard, types "Foo", which the computer encodes as 16-bit values (the UTF-16 encoding of Unicode) and displays on the screen.
Encoded characters (UTF-16 code units, in hex): 0x0046 0x006F 0x006F
The only complexity here is the use of a modifier (Shift) to input the capital F.
A slightly more complex example is a user typing "çé" on a French-Canadian keyboard, which the computer again encodes in UTF-16 and displays. We assume that this particular computer uses a fully composed form of UTF-16.
Encoded characters (UTF-16 code units, in hex): 0x00E7 0x00E9
A few interesting things are happening here: when the user types the cedilla (¸), nothing happens except for a change of state of the keyboard driver; the cedilla is a dead key. When the driver gets the c, it provides a complete ç character to the system, which represents it as a single 16-bit code unit and displays a ç glyph. The user then presses the dedicated é key, which results in, again, a character represented by a single 16-bit code unit. Most systems will display this as one glyph, but it is also possible to combine two glyphs (the base letter and the accent) to obtain the same rendering.
On to a Japanese example: our user employs an input method to type "日本語" (nihongo, "the Japanese language"), which the computer encodes in UTF-16 and displays.
Keystrokes: n i h o n g o <space> <return>
Encoded characters (UTF-16 code units, in hex): 0x65E5 0x672C 0x8A9E
The interesting aspect here is input, where the user has to type a total of nine keystrokes before the three characters are produced, which are then encoded and displayed rather trivially. An Arabic example will show different phenomena:
Encoded characters (UTF-16 code units, in hex): 0x0644 0x0627 0x0644 0x0627 0x0639 0x0639
Here the first two keystrokes each produce an input character and an encoded character, but the pair is displayed as a single glyph (لا, a lam-alif ligature). The next keystroke is a lam-alif, which some Arabic keyboards have; it produces the same two characters which are displayed similarly, but this second lam-alif is placed to the left of the first one. The last two keystrokes produce two identical characters which are rendered by two different glyphs (a medial form followed to its left by a final form). We thus have 5 keystrokes producing 6 characters and 4 glyphs laid out right-to-left.
A final example in Tamil, typed with an ISCII keyboard, will illustrate some additional phenomena:
Encoded characters (UTF-16 code units, in hex): 0x0B9F 0x0BBE 0x0B99 0x0BCD 0x0B95 0x0BCB
Here input is straightforward, but note that contrary to the preceding accented Latin example, the diacritic (virama, vowel killer) is entered after the consonant to which it applies. Rendering is interesting for the last two characters. The last one (the vowel sign OO, U+0BCB) clearly consists of two glyphs which surround the glyph of the next-to-last character (the consonant KA, U+0B95).
A number of operations routinely performed on text can be impacted by the complexities of the world's writing systems. An example is the operation of selecting text on screen by a pointing device in a bidirectional (bidi) context. First, let's have some bidi text, in this case Arabic letters (written right-to-left) mixed with Arabic-Hindi digits (left-to-right):
In memory (logical order): two Arabic words and a number in Arabic-Hindi digits, separated by spaces.
In the presence of bidi text, two possible selection modes must be considered. The first is logical selection mode, which selects all the characters logically located between the end-points of the user's mouse gesture. Here the user selects from between the first and second letters of the second word to the middle of the number. Logical selection looks like this:
It is a consequence of the bidirectionality of the text that a single, continuous logical selection in memory results in a discontinuous selection appearing on the screen. This discontinuity, as well as the somewhat unintuitive behavior of the cursor, makes many users prefer a visual selection mode, which selects all the characters visually located between the end-points of the user's mouse gesture. With the same mouse gesture as before, we now obtain:
In this mode, popular with users, a single visual selection range results in two logical ranges, which MUST be accommodated by protocols, APIs and implementations.
Special thanks go to Ian Jacobs for ample help with editing. Tim Berners-Lee and James Clark provided important details in the section on URIs. The W3C I18N WG and IG, as well as others, provided many comments and suggestions.
Major restructuring and rewriting.