Strings on the Web: Language and Direction Metadata

W3C First Public Working Draft

This version:
https://www.w3.org/TR/2019/WD-string-meta-20190416/
Latest published version:
https://www.w3.org/TR/string-meta/
Latest editor's draft:
https://w3c.github.io/string-meta/
Editors:
(Amazon.com)
(W3C)
Participate:
GitHub w3c/string-meta
File a bug
Commit history
Pull requests

Abstract

This document describes the best practices for identifying language and base direction for strings used on the Web.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

We welcome comments on this document, but to make it easier to track them, please raise separate issues for each comment, and point to the section you are commenting on using a URL.

This document was published by the Internationalization Working Group as a First Public Working Draft.

GitHub Issues are preferred for discussion of this specification.

Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 March 2019 W3C Process Document.

1. Introduction

This document was developed as a result of observations by the Internationalization Working Group over a series of specification reviews related to formats based on JSON, WebIDL, and other non-markup data languages. Unlike markup formats, such as XML, these data languages generally do not provide extensible attributes and were not conceived with built-in language or direction metadata.

The concepts in this document apply any time strings are used on the Web, whether as part of a formalised data structure or simply originating from JavaScript scripting or any stored list of strings.

Natural language information on the Web depends on and benefits from the presence of language and direction metadata. Along with support for Unicode, mechanisms for including and specifying the base direction and the natural language of spans of text are among the key internationalization considerations when developing new formats and technologies for the Web.

Markup formats, such as HTML and XML, as well as related styling languages, such as CSS and XSL, are reasonably mature and provide support for the interchange and presentation of the world's languages via built-in features. Strings and string-based data formats need similar mechanisms in order to ensure complete and consistent support for the world's languages and cultures.

1.1 Terminology

This section defines terminology necessary to understand the contents of this document. The terms defined here are specific to this document.

A producer is any process where natural language string data is created for later storage, processing, or interchange.

A consumer is any process that receives natural language strings, either for display or processing.

A serialization agreement (or "agreement" for short) is the common understanding between a producer and consumer about the serialization of string metadata: how it is to be understood, serialized, read, transmitted, removed, etc.

Language negotiation is any process which selects or filters content based on language. Usually this implies selecting content in a single language (or falling back to some meaningful default language that is available) by finding the best matching values when several languages or locales [LTLI] are present in the content. Some common language negotiation algorithms include the Lookup algorithm in [BCP47] or the BestFitMatcher in [ECMA-402].

LTR stands for "left-to-right" and refers to the inline base direction of left-to-right [UAX9]. This is the base text direction used by languages whose character progression begins on the left side of the page in horizontal text. It's used for scripts such as Latin, Cyrillic, Devanagari, and many others.

RTL stands for "right-to-left" and refers to the inline base direction of right-to-left [UAX9]. This is the base text direction used by languages whose character progression begins on the right side of the page in horizontal text. It's used for scripts such as Arabic, Hebrew, Syriac, and a few others.

Note

If you are unfamiliar with bidirectional or right-to-left text, there is a basic introduction here. Additional materials can be found in the Internationalization Working Group's Techniques Index.

1.2 The String Lifecycle

It's not possible to consider alternatives for handling string metadata in a vacuum: we need to establish a framework for talking about string handling and data formats.

1.2.1 Producers

A string can be created in a number of ways, including a content author typing strings into a plain text editor, text message, or editing tool; or a script scraping text from web pages; or acquisition of an existing set of strings from another application or repository. In the data formats under consideration in this document, many strings come from back end data repositories or databases of various kinds. Sources of strings often provide an interface, API, or metadata that includes information about the base direction and language of the data. Some also provide a suitable default for when the direction or language is not provided or specified. In this document, the producer of a string is the source, be it human or a mechanism, that creates or provides a string for storage or transmission.

When a string is created, it's necessary to (a) detect or capture the appropriate language and base direction to be associated with the string, and (b) take steps, where needed, to set the string up in a way that stores and communicates the language and base direction.

For example, in the case of a string that is extracted from an HTML form, the base direction can be detected from the computed value of the form's field. Such a value could be inherited from an earlier element, such as the html element, or set using markup or styling on the input element itself. The user could also set the direction of the text by using keyboard shortcut keys to change the direction of the form field. The dirname attribute provides a way of automatically communicating that value with a form submission.

Similarly, language information in an HTML form would most likely be inherited from the lang attribute on the html tag, or any element in the tree with a lang attribute.
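To make this concrete, the following is a minimal sketch of an HTML form that captures both kinds of information (the field name and action URL are illustrative, not taken from any particular specification). The dirname attribute causes the browser to submit the field's direction (for example title.dir=rtl) alongside its value, and the lang attribute on the form supplies the language that would otherwise be inherited from the page.

<form action="/submit-title" method="post" lang="ar" dir="rtl">
  <!-- dirname asks the browser to submit this field's base direction with the form data -->
  <input type="text" name="title" dirname="title.dir">
</form>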

If the producer receives the string from a location where it was stored by another producer, and where the base direction and language have already been established, it needs to recognise that these values are already set and to convert or encode that information for its consumers.

1.2.2 Consumers

A consumer is an application or process that receives a string for processing and possibly places it into a context where it will be exposed to a user. For display purposes, it must ensure that the base direction and language of the string is correctly applied to the string in that context. For processing purposes, it must at least persist the language and direction and may need to use the language and direction data in order to perform language-specific operations.

Displaying the string usually involves applying the base direction and language by constructing additional markup, adding control codes, or setting display properties. This indicates to rendering software the base direction or language that should be applied to the string in this display context to get the string to appear correctly. For text direction, it must also isolate embedded strings from the surrounding text to avoid spill-over effects of the bidi algorithm [UAX9]. For language, it must make clear the boundaries for the range of text to which the language applies.

Note that a consumer of one document format might be a producer of another document format.

1.2.3 Serialization Agreements

Between any producer and consumer, there needs to be an agreement about what the document format contains and what the data in each field or attribute means. Any time a producer of a string takes special steps to collect and communicate information about the base direction or language of that string, it must do so with the expectation that the consumer of the string will understand how the producer encoded this information. If no action is taken by the producer, the consumer must still decide what rules to follow in order to decide on the appropriate base direction and language, even if it is only to provide some form of default value.

In some systems or document formats, the necessary behaviour of the producers and consumers of a string is fully specified. In others, such agreements are not available; it is up to users to provide an agreement for how to encode, transmit, and later decode the necessary language or direction information. Low-level specifications, such as JSON, do not provide a string metadata structure by default, so any document formats based on these need to provide the "agreement" themselves.

1.3 Why is this important?

Information about the language of content is important when processing and presenting natural language data for a variety of reasons. When language information is not present, the resulting degradation in appearance or functionality can frustrate users, render the content unintelligible, or disable important features. Affected processes include spell checking, hyphenation and line breaking, font selection and rendering, case conversion, voice browsing, and full-text search.

Similarly, direction metadata is important to the Web. When a string contains text in a script that runs right-to-left (RTL), it must be possible to eventually display that string correctly when it reaches an end user. For that to happen, it is necessary to establish what base direction needs to be applied to the string as a whole. The appropriate base direction cannot always be deduced by simply looking at the string; even if it were possible, the producer and consumer of the string would need to use the same heuristics to interpret its direction.

Static content, such as the body of a Web page or the contents of an e-book, often has language or direction information provided by the document format or as part of the content metadata. Data formats found on the Web generally do not supply this metadata. Base specifications such as Microformats, WebIDL, JSON, and more, have tended to store natural language text in string objects, without additional metadata.

This places a burden on application authors and data format designers to provide the metadata on their own initiative. When standardized formats do not address these issues, the data may arrive intact while the information needed to process or present it correctly cannot be fully recovered.

In a distributed Web, any consumer can also be a producer for some other process or system. Thus, a given consumer might need to pass language and direction metadata from one document format (and using one agreement) to another consumer using a different document format. Lack of consistency in representing language and direction metadata in serialization agreements poses a threat to interoperability and a barrier to consistent implementation.

1.3.1 An Example

Suppose that you are building a Web page to show a customer's library of e-books. The e-books exist in a catalog of data and consist of the usual data values. A JSON file for a single entry might look something like:

{
    "id": "978-111887164-5",
    "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "authors": [ "Jon Duckett" ],
    "language": "ar",
    "pubDate": "2008-01-01",
    "publisher": "مكتبة",
    "coverImage": "https://example.com/images/html_and_css_cover.jpg",
    // etc.
},

Each of the above is a data field in a database somewhere. There is even information about what language the book is in: ("language": "ar").

A well-internationalized catalog would include additional metadata to what is shown above. That is, for each of the fields containing natural language text, such as the title and authors fields, there should be language and base direction information stored as metadata. (There may be other values as well, such as pronunciation metadata for sorting East Asian language information.) These metadata values are used by consumers of the data to influence the processing and enable the display of the items in a variety of ways. As the JSON data structure provides no place to store or exchange these values, it is more difficult to construct internationalized applications.
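For illustration only, if the catalog format defined a place for such metadata, a single natural language field might be serialized along the following lines (the object shape and member names here are a sketch, not a normative format):

    "title": {
        "value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
        "lang": "ar",
        "dir": "rtl"
    },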

One work-around might be to encode the values using a mix of HTML and Unicode bidi controls, so that a data value might look like one of the following:

// following examples are NOT recommended
// contains HTML markup
"title": "<span lang='ar' dir='rtl'>HTML و CSS: تصميم و إنشاء مواقع الويب</span>",
// contains LRM as first character
"authors": [ "\u200eJon Duckett" ],

But JSON is a data interchange format: the content might not end up with the title field being displayed in an HTML context. The JSON above might very well be used to populate, say, a local data store which uses native controls to show the title and these controls will treat the HTML as string contents. Producers and consumers of the data might not expect to introspect the data in order to supply or remove the extra data or to expose it as metadata. Most JSON libraries don't know anything about the structure of the content that they are serializing. Producers want to generate the JSON file directly from a local data store, such as a database. Consumers want to store or retrieve the value for use without additional consideration of the content of each string. In addition, either producers or consumers can have other considerations, such as field length restrictions, that are affected by the insertion of additional controls or markup. Each of these considerations places special burden on implementers to create arbitrary means of serializing, deserializing, managing, and exchanging the necessary metadata, with interoperability as a casualty along the way.

(As an aside, note that the markup shown in the above example is actually needed to make the title as well as the inserted markup display correctly in the browser.)

1.4 Isn't Unicode Enough?

[Unicode] and its character encodings (such as UTF-8) are key elements of the Web and its formats. They provide the ability to encode and exchange text in any language consistently throughout the Internet. However, Unicode by itself does not guarantee perfect presentation and processing of natural language text, even though it does guarantee perfect interchange.

Several features of Unicode are sometimes suggested as part of the solution to providing language and direction metadata. Specifically, Unicode bidi controls are suggested for handling direction metadata. In addition, there are "tag" characters in the U+E0000 block of Unicode originally intended for use as language tags (although this use is now deprecated).

There are a variety of reasons why the addition of characters to data in an interchange format is not a good idea. These include:

  1. The characters are invisible and difficult for content authors and maintainers to enter, detect, and manage.
  2. They change the identity and length of the string, which can affect comparison, truncation, and length-restricted fields.
  3. Producers and consumers generally expect to store and transmit strings without modifying them; inserting or removing characters breaks that expectation.

Note

This last consideration is important to call out: document formats are often built and serialized using several layers of code. Libraries, such as general purpose JSON libraries, are expected to store and retrieve faithfully the data that they are passed. Higher-level implementations also generally concern themselves with faithful serialization and de-serialization of the values that they are passed. Any process that alters the data itself introduces variability that is undesirable. For example, consider an application's unit test that checks if the string returned from the document is identical to the one in the data catalog used to generate the document. If bidi controls, HTML markup, or Unicode language tags have been inserted, removed, or changed, the strings might not compare as equal, even though they would be expected to be the same.

2. Best Practices, Recommendations, and Gaps

Editor's note

This section is being actively developed. Comments on it are incredibly welcome but take the stuff in here with a grain of salt.

The TAG and I18N WG are currently discussing what the best practice recommendations should be. This section represents our current understanding.

This section consists of the Internationalization (I18N) Working Group's set of best practices for identifying language and base direction in data formats on the Web. In some cases, there are gaps in existing standards, where the recommendations of the I18N WG require additional standardization or there might be barriers to full adoption.

Note

The main issue is how to establish a common serialization agreement between producers and consumers of data values so that each knows how to encode, find, and interpret the language and base direction of each data field. The use of metadata for supplying both the language and base direction of natural language string fields ensures that the necessary information is present, can be supplied and extracted with the minimal amount of processing, and does not require producers or consumers to scan or alter the data.

Many resources use only a single language and have a consistent base text direction. For efficiency, the following are best practices:

Define a field to provide the default language and base direction for all strings in a given resource.

Specifications MUST NOT assume that a document-level default is sufficient.

Document level defaults, when combined with per-field metadata, can reduce the overall complexity of a given document instance, since the language and direction values don't have to be repeated across many fields. However, they do not solve all language or directionality problems, and so it must be possible to override the default on a string-by-string basis.

Use metadata to indicate the language and the base direction for each natural language string.

There is widespread low-level support for natural language string metadata because the use of metadata for storage and interchange of the language of data values is long-established and widely supported in the basic infrastructure of the Web. This includes language attributes in [XML] and [HTML]; string types in schema languages (e.g. [xmlschema11-2]) or the various RDF specifications including [JSON-LD]; or protocol- or document format-specific provisions for language.

The use of metadata for indicating base direction is preferred, because it avoids requiring the consumer to interpolate the direction using methods such as first-strong detection, or methods which require modification of the data itself (such as the insertion of RLM/LRM markers or bidirectional controls).

Issue 1

Schema languages, such as the RDF suite of specifications, have no in-built mechanism for associating base direction metadata with natural language string values.

Issue 2

There is no built-in attribute for base direction in [JSON-LD]. There needs to be a corresponding built-in attribute (e.g. an @dir) or de facto convention for indicating document-level base direction.

For [WebIDL]-defined data structures, define each natural language text field as a Localizable.

This combines both language and direction metadata and, if consistently adopted, makes interchange between different formats easier. By naming field attributes in the same way and adopting the same semantics, different specifications can more easily extract values from or add values into resources from other data sources.
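As a rough sketch only (the exact member names and types are still under discussion and are shown here as assumptions), a Localizable dictionary might look like:

dictionary Localizable {
  DOMString value; // the natural language string itself
  DOMString lang;  // a [BCP47] language tag
  DOMString dir;   // the base direction: "ltr", "rtl", or "auto"
};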

Use of [JSON-LD] @context and the built-in @language attribute is RECOMMENDED as a document level default.

For document formats that use it, [JSON-LD] includes some data structures that are helpful in assigning language (but not base direction) metadata to collections of strings (including entire resources). Notably, it defines what it calls string internationalization in the form of a context-scoped @language value which can be associated with blocks of JSON or within individual objects. There is no definition for base direction, so the @context mechanism does not currently address all concerns raised by this document.
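For example, the following (abbreviated) document declares Arabic as the default language for its string values. The title and publisher terms are illustrative, and a complete context would also map them to IRIs:

{
  "@context": {
    "@language": "ar"
  },
  "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",
  "publisher": "مكتبة"
}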

If metadata is not available, consumers of strings should use heuristics, preferably based on the Unicode Standard's first-strong detection algorithm, to detect the base direction of a string.

The first-strong algorithm looks for the first strongly-directional character in a string (skipping certain preliminary substrings), and assumes that it represents the base direction for the string as a whole. The first strong directional character doesn't always coincide with the required base direction for the string as a whole, so it should be possible to provide metadata, where needed, to address this problem.

If metadata is not available and cannot otherwise be provided, specifications MAY allow a base direction to be interpolated from available language metadata.

Not all resources make use of the available metadata mechanisms. The script subtag of a language tag (or the "likely" script subtag based on [BCP47] and [LDML]) can sometimes be used to provide a base direction when other data is not available. Note that using language information is a "last resort" and specifications SHOULD NOT use it as the primary way of indicating direction: make the effort to provide for metadata.

Specifications MUST NOT require the production or use of paired bidi controls.

Another way to say this is: do not require implementations to modify data passing through them. Unicode bidi control characters might be found in a particular piece of string content, where the producer or data source has used them to make the text display properly. That is, they might already be part of the data. Implementations should not disturb any controls that they find—but they shouldn't be required to produce additional controls on their own.

Specifications SHOULD recommend the use of language indexing when Localizable strings can be supplied in multiple languages for the same value.

Producers sometimes need to supply multiple language values (see Localization Considerations) for the same content item or data record. One use for this is language negotiation by the consumer.
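For example, [JSON-LD] language maps allow a term to be declared with "@container": "@language" so that values are indexed by language tag. The sketch below is abbreviated (a complete context would also map the title term to an IRI), and the English rendering of the title is shown purely for illustration:

{
  "@context": {
    "title": { "@container": "@language" }
  },
  "title": {
    "ar": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "en": "HTML and CSS: Design and Build Websites"
  }
}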

Issue 3

[JSON-LD] language indexing should be modified to support the use of Localizable values in language indexing.

3. Requirements and Use Cases

This section of the document describes in depth the need for language and direction metadata and various use cases helpful in understanding the best practices and alternatives listed above.

3.1 Identifying the Language of Content

3.1.1 Definitions

Language metadata typically indicates the intended linguistic audience or user of the resource as a whole, and it's possible to imagine that this could, for a multilingual resource, involve a property value that is a list of languages. A property that holds language metadata may have more than one value, since it aims to describe all potential users of the information.

The text-processing language is the language of a particular range of text (which could be a whole resource or just part of it). A property that represents the text-processing language needs to have a single value, because it describes the text content in such a way that tools such as spell-checkers, default font applicators, hyphenation and line breakers, case converters, voice browsers, and other language-sensitive applications know which set of rules or resources to apply to a specific range of text. Such applications generally need an unambiguous statement about the language they are working on.

3.1.2 Language Tagging Use Cases

Kensuke is reading an old Tibetan manuscript from the Dunhuang collection. The tool he is using to read the manuscript has access to annotations created by scholars working in the various languages of the International Dunhuang Project, who are commenting on the text. The section of the manuscript he is currently looking at has commentaries by people writing in Chinese, Japanese, and Russian. Each of these commentaries is stored in a separate annotation, but the annotations point to the same point in the target document. Each commentary is mainly written in the language of the scholar, but may contain excerpts from the manuscript and other sources written in Tibetan, as well as quoted text in Chinese and English. Some commentaries may contain parallel annotations, each in a different language. For example, there are some with the same text translated into Japanese, Chinese and Tibetan.

Kensuke speaks Japanese, so he generally wants to be presented with the Japanese commentary.

3.1.2.1 Capturing the language of the audience

The annotations containing the Japanese commentary have a language property set to "ja" (Japanese). The tool he is using knows that he wants to read the Japanese commentaries, and it uses this information to select and present to him the text contained in that body. This is language information being used as metadata about the intended audience – it indicates to the application doing the retrieval that the intended consumer of the information wants Japanese.

Some of the annotations contain text in more than one language. For example, there are several with commentary in Chinese, Japanese and Tibetan. For these annotations, it's appropriate to set the language property to "ja,zh,bo" – indicating that Japanese, Chinese, and Tibetan readers may all want to find it.

The language tagging that is happening here is likely to be at the resource level, rather than the string level. It's possible, however, that the text-processing language for strings inside the resource may be assumed by looking at the resource level language tag – but only if it is a single language tag. If the tag contains "ja,zh,bo" it's not clear which strings are in Japanese, which are in Chinese, and which are in Tibetan.

3.1.2.2 Capturing the text-processing language

Having identified the relevant annotation text to present to Kensuke, his application has to then display it so that he can read it. It's important to apply the correct font to the text. The example below shows the same code points; labeled ja (Japanese) they are rendered with Japanese glyph forms, and labeled zh-Hant (Traditional Chinese) they are rendered with Chinese forms, demonstrating systematic differences between how those and similar code points are rendered in Japanese vs. Chinese fonts. It's important to associate the right forms with the right language, otherwise you can make the reader uncomfortable or possibly unhappy.

雪, 刃, 直, 令, 垔

So, it's important to apply a Japanese font to the Japanese text that Kensuke is reading. There are also language-specific differences in the way text is wrapped at the end of a line. For these reasons we need to identify the actual language of the text to which the font or the wrapping algorithm will be applied.

Another consideration that might apply is the use of text-to-speech. A voice browser will need to know whether to use Japanese or Chinese pronunciations, voices, and dictionaries for the ideographic characters contained in the annotation body text.

Various other text rendering or analysis tools need to know the language of the text they are dealing with. Many different types of text processing depend on information about the language of the content in order to provide the proper processing or results and this goes beyond mere presentation of the text. For example, if Kensuke wanted to search for an annotation, the application might provide a full text search capability. In order to index the words in the annotations, the application would need to split the text according to word boundaries. In Japanese and Chinese, which do not use spaces in-between words, this often involves using dictionaries and heuristics that are language specific.

We also need a way to indicate the change of language to Chinese and Tibetan later in the commentary for some annotations, so that appropriate fonts and wrapping algorithms can be applied there.

3.1.2.3 Additional Requirements for Localization

Having viewed the commentaries he is interested in, Kensuke realizes that he needs another reference work, but he's not sure of the catalog number. He uses an application for searching out catalog entries. This application is written in JavaScript and can be switched between several languages, according to the user preference. One way to accomplish this would be to reload the application's user interface from the server each time the user selects a new language. However, because this application is relatively small, the developer has elected to package all of the translations with the JavaScript (there are several open source projects that allow runtime selection of locale). Similarly, the catalog search service sends records back in all of the available languages, rather than pre-selecting according to the user's current language preference.

The original example shows a data record available in a single language. But some applications, such as the catalog search tool and its supporting service, might need the ability to send multiple languages for the same field, such as when localizing an application or when multilingual data is available. This is particularly true in cases like this, when the producer needs to support consumers that perform their own language negotiation or when the consumer cannot know which language or languages will be selected for display.

Serialization agreements to support this therefore need to represent several different language variations of the same field. For instance, in the example above the values title or description might each have translations available for display to users who speak a language other than English. Or an application might have localized strings that the consumer can select at runtime. In some cases, all language variations might be shown to the user. In other cases, the different language values might be matched to user preferences as part of language negotiation to select the most appropriate language to show.

When multiple language representations are possible, a serialization might provide a means (defined in the specification for that document format) for setting a default value for language or direction for the whole of the document. This allows the serialized document to omit language and direction metadata from individual fields in cases where they match the default.

3.2 Identifying the Base Direction of Content

In order for a consumer to correctly display bidirectional text, such as the strings in the following use cases, there must be a way for the consumer to determine the required base direction for each string. It is not enough to rely on the Unicode Bidirectional Algorithm to solve these issues. What is needed is a way to establish the overall directional context in which the string will be displayed (which is what 'base direction' means).

These use cases illustrate situations where a failure to apply the necessary base direction creates a problem.

3.2.1 Final punctuation

This use case consists of a string containing Hebrew text followed by punctuation – in this case an exclamation mark. The characters in this string are shown here in the order in which they are stored in memory.

"בינלאומי!"

If the string is dropped into a LTR context, it will display like this, which is incorrect – the exclamation mark is on the wrong side:

Result: "בינלאומי!"

Dropped into a RTL context, this will be the result, which is correct:

תוצאה: "בינלאומי!"

The Hebrew characters are automatically displayed right-to-left by applying the Unicode Bidirectional Algorithm (UBA). However, in a LTR context the UBA cannot make the exclamation mark appear to the left of the Hebrew text, where it belongs, unless the base direction is set to RTL around the inserted string.

In HTML this can be done by inserting the string into an element with a dir attribute set to rtl. That yields the following:

Result: "בינלאומי!"

3.2.2 Initial Latin

In this case the Hebrew word is preceded by some Latin text (such as a hashtag). The characters are shown in the order in which they are stored in memory.

"bidi בינלאומי"

If the string is dropped into a LTR context, it will display like this, which is incorrect – the word 'bidi' should be to the right:

bidi בינלאומי

Dropped into a RTL context, this will be the result, which is correct:

bidi בינלאומי

The Hebrew characters are reversed by applying the Unicode Bidirectional Algorithm (UBA). However, in a LTR context the UBA cannot make the 'bidi' word appear to the right of the Hebrew text, where it belongs, unless the base direction is set to RTL around it.

Notice how our original example demonstrates this. The title of the book was displayed in an LTR context like this:

Title: HTML و CSS: تصميم و إنشاء مواقع الويب

However, the title is not displayed properly. The first word in the title is "HTML" and it should show on the right side, like this:

Title: HTML و CSS: تصميم و إنشاء مواقع الويب

This has an additional complication. Often, applications will test the first strong character in the string in order to guess the base direction that needs to be applied. In this case, that heuristic will produce the wrong result.

The example that follows is in a RTL context, but the injected string has been given a base direction based on the first strong directional character, and again the words 'HTML' and 'CSS' are in the wrong place.

عنوان كتاب: HTML و CSS: تصميم و إنشاء مواقع الويب

3.2.3 Bidirectional text ordering

In this case the string contains three words with different directional properties. Here are the characters in the order in which they are stored in memory.

"one שתיים three"

If the string is dropped into a LTR context, it will display like this:

one שתיים three

Dropped into a RTL context, this will be the result – the order of the items has changed:

one שתיים three

If a bidirectional string is inserted into a LTR context without specifying the RTL base direction for the inserted string, it can produce unreadable text. This is an example.

Translation is: "في XHMTL 1.0 يتم تحقيق ذلك بإضافة العنصر المضمن bdo."

What you should have seen is:

Translation is: "في XHMTL 1.0 يتم تحقيق ذلك بإضافة العنصر المضمن bdo."

This can be much worse when combined with punctuation, or in this case markup. Take the following example of source code, presented to a user in an educational context in a RTL page: <span>one שתיים three</span>. If the base direction of the string is not specified as LTR, you will see the result below.

<span>one שתיים three</span>

(This happens because the Unicode bidi algorithm sees span>one as a single directional run, and three</span as another. The outermost angle brackets are balanced by the algorithm.)

3.2.4 Interpreted HTML

The characters in this string are shown in the order in which they are stored in memory.

"<span dir='ltr'>one שתיים three</span>"

This use case is for applications that will parse the string and convert any HTML markup to the DOM. In this case, the text should be rendered correctly in an HTML context because the dir attribute indicates the base direction to be applied within the markup. (It also applies bidi isolation to the text in browsers that fully support bidi markup, avoiding any spill-over effects.) It relies, however, on a system where the consumer expects to receive HTML, and knows how to handle bidi markup.

It also requires the producer to take explicit action to identify the appropriate base direction and set up the required markup to indicate that.

3.2.5 Neutral LTR text

The text in this use case could be a phone number, product catalogue number, MAC address, etc. The characters in this string are shown in the order in which they are stored in memory.

"123 456 789"

If the string is dropped into a LTR context, it will display like this, which is correct:

123 456 789

Dropped into a RTL context, this will be the result, which is incorrect – the sequencing is wrong, and this may not even be apparent to the reader:

123 456 789

When presented to a user, the order of the numbers must remain the same even when the directional context of the surrounding text is RTL. There are no strong directional characters in this string, and the need to preserve a strong LTR base direction is more to do with the type of information in the string than with the content.

3.2.6 Spill-over effects

A common use for strings is to provide data that is inserted into a page or user interface at runtime. Consider a scenario where, in a LTR application environment, you are displaying book names and the number of reviews each book has received. The display should produce something ordered like this:

$title - $numReviews reviews

Then you insert a book with a title like that in the original example. You would expect to see this:

HTML: تصميم و إنشاء مواقع الويب - 4 reviews

What you would actually see is this:

HTML: تصميم و إنشاء مواقع الويب - 4 reviews

This problem is caused by spillover effects as the Unicode bidirectional algorithm operates on the text inside and outside the inserted string without making any distinction between the two.

The solution to this problem is called bidi isolation. The title needs to be directionally isolated from the rest of the text.

3.2.7 What consumers need to do

Given the use cases in this section it will be clear that a consumer cannot simply insert a string into a target location without some additional work or preparation taking place, first to establish the appropriate base direction for the string being inserted, and secondly to apply bidi isolation around the string.

This requires the presence of markup or Unicode formatting controls around the string. If the string's base direction is opposite that into which it is being inserted, the markup or control codes need to tightly wrap the string. Strings that are inserted adjacent to each other all need to be individually wrapped in order to avoid the spillover issues we saw in the previous section.

[HTML5] provides base direction controls and isolation for any inline element when the dir attribute is used, or when the bdi element is used. When inserting strings into plain text environments, isolating Unicode formatting characters need to be used. (Unfortunately, support for the isolating characters, which the Unicode Standard recommends as the default for plain text/non-markup applications, is still not universal.)
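For example (an illustrative sketch reusing the book title from the original example), a consumer inserting the string into HTML might wrap it as follows; in a plain text context the same effect can be approximated by surrounding the string with U+2067 RLI ... U+2069 PDI:

<p>Title: <bdi dir="rtl">HTML و CSS: تصميم و إنشاء مواقع الويب</bdi> - 4 reviews</p>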

The trick is to ensure that the direction information provided by the markup or control characters reflects the base direction of the string.

4. Approaches Considered for Identifying the Base Direction

The fundamental problem for bidirectional text values is how a consumer of a string will know what base direction should be used for that string when it is eventually displayed to a user. Note that some of these approaches for identifying or estimating the base direction have utility in specific applications and are in use in different specifications such as [HTML5]. The issue here is which are appropriate to adopt generally and specify for use as a best practice in document formats.

4.1 First-strong property detection (alone)

This approach is NOT recommended.

This section looks at the use of first-strong detection as the sole method for identifying base direction for a string.

4.1.1 How it works

A producer doesn't need to do anything.

The string is stored as it is.

Consumers must look for the first character in the string with a strong Unicode directional property, and set the base direction to match it. They then take appropriate action to ensure that the string will be displayed as needed. This is not quite so simple as it may appear, for the following reasons (a minimal sketch of such detection appears after this list):

  1. Characters at the start of string without a strong direction (eg. punctuation, numbers, etc) and isolated sequences (ie. sequences of characters surrounded by RLI/LRI/FSI...PDI formatting characters) within a string must be skipped in order to find the first strong character.
  2. The detection algorithm needs to be able to handle markup at the start of the string. It needs to be able to tell whether the markup is just string text, or whether the markup needs to be parsed in the target location – in which case it must understand the markup, and understand any direction-related information that is carried in the markup.
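The following is a minimal sketch of such detection, written in TypeScript. It is illustrative only: it uses rough character ranges rather than Unicode Bidi_Class property data, and it makes no attempt to recognise or skip markup at the start of the string.

// Sketch of first-strong base direction detection (not production quality).
function firstStrongDirection(text: string): "ltr" | "rtl" | "auto" {
  // Rough ranges for strongly RTL scripts (Hebrew, Arabic, Syriac, Thaana, NKo, ...).
  const strongRtl = /[\u0590-\u08FF\uFB1D-\uFDFF\uFE70-\uFEFF]/;
  // Rough approximation of strongly LTR characters; real code should use Bidi_Class data.
  const strongLtr = /[A-Za-z\u00C0-\u02AF\u0370-\u058F\u0900-\u1FFF\u3040-\uD7FF]/;
  let isolateDepth = 0;
  for (const ch of text) {
    if (ch === "\u2066" || ch === "\u2067" || ch === "\u2068") { isolateDepth++; continue; } // LRI/RLI/FSI
    if (ch === "\u2069") { if (isolateDepth > 0) isolateDepth--; continue; }                 // PDI
    if (isolateDepth > 0) continue;           // characters inside isolated sequences are skipped
    if (strongRtl.test(ch)) return "rtl";
    if (strongLtr.test(ch)) return "ltr";
  }
  return "auto";                              // no strong directional character found
}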

4.1.2 Advantages

Where it is reliable, information about direction can be obtained without any changes to the string, and without the agreements and structures that would be needed to support out-of-band metadata.

4.1.3 Issues

The main problem with this approach is that it produces the wrong result for

  1. strings that begin with a strong character with a different directionality than that needed for the string overall (eg. an Arabic tweet that starts with a hashtag),
  2. strings that don't have any strong directional character (such as a telephone number), which are likely to be displayed incorrectly in a RTL context, and
  3. strings that begin with markup, such as <span>, since the first strong character is always going to be LTR.

In cases where the entire string starts and ends with RLI/LRI/FSI...PDI formatting characters, it is not possible to detect the first strong character by following the Unicode Bidirectional Algorithm. This is because the algorithm requires that bidi-isolated text be excluded from the detection.

If no strong directional character is found in the string, the direction should probably be assumed to be LTR, and the consumer should act on that basis. This has not been tested fully, however.

If a string contains markup that will be parsed by the consumer as markup, there are additional problems. Any such markup at the start of the string must also be skipped when searching for the first strong directional character.

If parseable markup in the string contains information about the intended direction of the string (for example, a dir attribute with the value rtl in HTML), that information should be used rather than relying on first-strong heuristics. This is problematic in a couple of ways: (a) it assumes that the consumer of the string understands the semantics of the markup, which may be ok if there is an agreement between all parties to use, say, HTML markup only, but would be problematic, for example, when dealing with random XML vocabularies, and (b) the consumer must be able to recognise and handle a situation where only the initial part of the string has markup, ie. the markup applies to an inline span of text rather than the string as a whole.

If, however, there is angle bracket content that is intended to be an example of markup, rather than actual markup, the markup must not be skipped – trying to display markup source code in a RTL context yields very confusing results! It isn't clear how a consumer of the string would always know the difference between examples and parseable strings.

4.1.4 Additional notes

Although first-strong detection is outlined in the Unicode Bidirectional Algorithm (UBA) [UAX9], it is not the only possible higher-level protocol mentioned for estimating string direction. For example, Twitter and Facebook currently use different default heuristics for guessing the base direction of text – neither use just simple first-strong detection, and one uses a completely different method.

4.2 Metadata

This approach is recommended.

4.2.1 How it works

A producer ascertains the base direction of the string and adds that to a metadata field that accompanies the string when it is stored or transmitted.

There are a couple of possible approaches:

  1. Label every string for base direction.
  2. Rely on the consumer to do first-strong detection, and label only those strings which would produce the wrong result (ie. a RTL string that starts with LTR strong characters).

If storing or transmitting a set of strings at a time, it helps to have a field for the resource as a whole that sets a global, default base direction which can be inherited by all strings in the resource. Note that in addition to a global field, you still need the possibility of attaching string-specific metadata fields in cases where a string's base direction is not that of the default. The base direction set on an individual string must override the default.
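Sketched as JSON (every field name here is illustrative rather than part of any defined format), a resource-level default with a per-string override might look like:

{
    "lang": "ar",
    "dir": "rtl",
    "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "authors": [ { "value": "Jon Duckett", "lang": "en", "dir": "ltr" } ]
}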

Consumers would need to understand how to read the metadata sent with a string, and would need to apply first-strong heuristics in the absence of metadata.

The use of the Localizable dictionary structure is RECOMMENDED for individual values in JSON-based document formats, as it combines both language and direction metadata and, if consistently adopted, makes interchange between different formats easier.

Note

As noted above, [JSON-LD] includes some data structures that are helpful in assigning language (but not base direction) metadata to collections of strings (including entire resources). These gaps in support for pre-built metadata at the resource or item level are one of the key reasons for this document's development.

4.2.2 Advantages

Passing metadata as a data value separate from the string provides a simple, effective, and efficient method of communicating the intended base direction without affecting the actual content of the string.

If every string is labelled for direction, or the direction for all strings can be ascertained by applying the global setting and any string-specific deviations, it avoids the need to inspect and run heuristics on the string to determine its base direction.

4.2.3 Issues

Out-of-band information needs to be associated with and kept with strings. This may be problematic for some sets of string data which are not part of a defined framework.

In particular, JSON-LD doesn't allow direction to be associated with individual strings in the same way as it works for language.

4.3 Augmenting first-strong by inserting RLM/LRM markers

This approach is NOT recommended.

4.3.1 How it works

A producer ascertains the base direction of the string and adds a marker character (either U+200F RIGHT-TO-LEFT MARK (RLM) or U+200E LEFT-TO-RIGHT MARK (LRM)) to the beginning of the string. The marker is not functional, ie. it will not automatically apply a base direction to the string that can be used by the consumer; it is simply a marker.

There are a number of possible approaches:

  1. Add a marker to every string.
  2. Rely on the consumer to do first-strong detection, and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).
  3. Assume a default of LTR (no marker), and apply only RLM markers.

Consumers apply first-strong heuristics to detect the base direction for the string. The RLM and LRM characters are strongly typed, directionally, and should therefore indicate the appropriate base direction.

4.3.2 Advantages

It provides a reliable way of indicating base direction, as long as the producer can reliably apply markers.

In theory, it should be easier to spot the first-strong character in strings that begin with markup, as long as the correct RLM/LRM is prepended to the string.

4.3.3 Issues

If the producer is a human, they could theoretically apply one of these characters when creating a string in order to signal the directionality. One problem, especially on mobile devices, is the availability or inconvenience of inputting an RLM/LRM character. Perhaps more important, because the characters are invisible and because Unicode bidi is complicated, it can be difficult for the user to know that a bidi control will be necessary (or even what it is).

Furthermore, if a person types information into, say, an HTML form and relies on the form's base direction (in a RTL page) or use of shortcut keys to make the string look correct in the form field, they would not need to add RLM/LRM to make the string 'look correct' for themselves. However, outside of that context the string would look incorrect unless an appropriate strong character was added to it. Similarly, strings scraped from a web page that has dir=rtl set in the html element would not normally have or need an RLM/LRM character at the start of the string in HTML.

Another issue with this approach is that it changes the string value and identity. This may also create problems for working with string length or pointer positions, especially if some producers add markers and others don't.

If directional information is contained in markup that will be parsed as such by the consumer (for example, dir=rtl in HTML), the producer of the string needs to understand that markup in order to set or not set an RLM/LRM character as appropriate. If the producer always adds RLM/LRM to the start of such strings, the consumer is expected to know that. If the producer relies instead on the markup being understood, the consumer is expected to understand the markup.

The producer of a string should not automatically apply RLM or LRM to the start of the string, but should test whether it is needed. For example, if there's already an RLM in the text, there is no need to add another. If the context is correctly conveyed by first-strong heuristics, there is no need to add additional characters either. Note, however, that testing whether supplementary directional information of this kind is needed is only possible if the producer has access, and knows that it has access, to the original context of the string. Many document formats are generated from data stored away from the original context. For example, the catalog of books in the original example above is disconnected from the user inputting the bidirectional text.

4.4 Paired formatting characters

This approach is NOT recommended.

4.4.1 How it works

A producer ascertains the base direction of the string and adds a directional formatting character (one of U+2066 LEFT-TO-RIGHT ISOLATE (LRI), U+2067 RIGHT-TO-LEFT ISOLATE (RLI), U+2068 FIRST STRONG ISOLATE (FSI), U+202A LEFT-TO-RIGHT EMBEDDING (LRE), or U+202B RIGHT-TO-LEFT EMBEDDING (RLE)) to the beginning of the string, and U+2069 POP DIRECTIONAL ISOLATE (PDI) or U+202C POP DIRECTIONAL FORMATTING (PDF) to the end.

There are a number of possible approaches:

  1. Add the formatting codes to every string.
  2. Rely on the consumer to do first-strong detection, and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).

Consumers would theoretically just insert the string in the place it will be displayed, and rely on the formatting codes to apply the base direction. However, things are not quite so simple (see below).

There are two types of paired formatting characters. The original set of controls provides the ability to add an additional level of bidirectional "embedding" to the Unicode Bidirectional Algorithm. More recently, Unicode added a complementary set of "isolating" controls. Isolating controls are used to surround a string. The inside of the string is treated as its own bidirectional sequence, and the string is protected against spill-over effects related to any surrounding text. The enclosing text treats the entire surrounded string as a single unit that is ignored for bidi reordering. This issue is described here.

Embedding controls:
  U+202A LRE Left-to-Right Embedding
  U+202B RLE Right-to-Left Embedding
  U+202C PDF Pop Directional Formatting (ends an embedding)

Isolating controls:
  U+2066 LRI Left-to-Right Isolate
  U+2067 RLI Right-to-Left Isolate
  U+2068 FSI First Strong Isolate
  U+2069 PDI Pop Directional Isolate (ends an isolate)

If paired formatting characters are used, they should be isolating, ie. starting with RLI, LRI, FSI, and not with RLE or LRE.

4.4.2 Advantages

There are no real advantages to using this approach.

4.4.3 Issues

This approach is only appropriate if it is acceptable to change the value of the string. In addition to possible issues such as changed string length or pointer positions, this approach runs a real and serious risk of one of the paired characters getting lost, either through handling errors, or through text truncation, etc.

A producer and a consumer of a string would need to recognise and handle a situation where a string begins with a paired formatting character but doesn't end with it because the formatting characters only describe a part of the string.

Unicode specifies a limit to the number of embeddings that are effective, and embeddings could build up over time to exceed that limit.

Consuming applications would need to recognise and appropriately handle the isolating formatting characters. At the moment such support for RLI/LRI/FSI is far from pervasive.

This approach would disqualify the string from being amenable to UBA first-strong heuristics if used by a non-aware consumer, because the Unicode bidi algorithm is unable to ascertain the base direction for a string that starts with RLI/LRI/FSI and ends with PDI. This is because the algorithm skips over isolated sequences and treats them as a neutral character. A consumer of the string would have to take special steps, in this case, to uncover the first-strong character.

4.5 Script subtags

This approach is only recommended as a workaround for situations that prevent the use of metadata.

4.5.1 How it works

A producer supplies language metadata for strings, specifying, where necessary, the script in use.

There are a number of possible approaches:

  1. Label every string for language, including a script subtag as needed. Consumers may need to compute the script subtag when the producer does not provide one.
  2. It might be reasonable to assume a default of LTR for all strings unless marked with a language tag whose script subtag (either present or implied) indicates RTL.
  3. Alternatively, limit the use of script subtag metadata to situations where first-strong heuristics are expected to fail — provided that such cases can be identified, and appropriate action taken by the producer (not always reliable). Consumers would then need to use first-strong heuristics in the absence of a script subtag in order to identify the appropriate base direction. The use of script subtags should not, however, be restricted to strings that need to indicate direction; it is perfectly valid to associate a script subtag with any string.
  4. Set a default language for a set of strings at a higher level, but provide a mechanism to override that default for a given string where needed.

Consumers extract the script subtag from the language tag associated with each string, computing the string's base direction as necessary. Script subtags associated with RTL scripts are used to assign a base direction of RTL to their associated strings.

Language information MUST use [BCP47] language tags. The portion of the language tag that carries the information is the script subtag, not the primary language subtag. For example, Azeri may be written LTR (with the Latin or Cyrillic scripts) or RTL (with the Arabic script). Therefore, the subtag az is insufficient to clarify intended base direction. A language tag such as az-Arab (Azeri as written in the Arabic script), however, can generally be relied upon to indicate that the overall base direction should be RTL.
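The following TypeScript sketch illustrates the derivation. The script list is illustrative and incomplete, and production code should also consult likely-subtags data from [LDML] when no script subtag is present:

// Derive a default base direction from a BCP 47 language tag (illustrative only).
const RTL_SCRIPTS = new Set(["arab", "hebr", "syrc", "thaa", "nkoo", "adlm", "rohg"]);
function directionFromLanguageTag(tag: string): "rtl" | "ltr" {
  // A script subtag is a four-letter alphabetic subtag following the primary language subtag.
  const script = tag.split("-").slice(1).find(s => /^[A-Za-z]{4}$/.test(s));
  return script !== undefined && RTL_SCRIPTS.has(script.toLowerCase()) ? "rtl" : "ltr";
}
// directionFromLanguageTag("az-Arab") === "rtl"; directionFromLanguageTag("az-Latn") === "ltr"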

4.5.2 Advantages

There is no need to inspect or change the string itself.

This approach avoids the issues associated with first-strong detection when the first-strong character is not indicative of the necessary base direction for the string, and avoids issues relating to the interpretation of markup.

Note that a string that begins with markup that sets a language for the string text content (eg. <cite lang="zh-Hans">) is not problematic here, since that language declaration is not expected to play into the setting of the base direction.

4.5.3 Issues

The use of metadata, as described in § 4.2 Metadata, is a much better approach, if it is available. This script-related approach is only for use where that approach is unavailable, for legacy reasons.

There are many strings which are not language-specific but which absolutely need to be associated with a particular base direction for correct consumption. For example, MAC addresses inserted into an RTL context need to be displayed with an LTR overall base direction and isolation from the surrounding text. It's not clear how to distinguish these cases from others (in a way that would be feasible when using direction metadata). Special language tags, such as zxx (Non-Linguistic), exist for identifying this type of content, but usually data fields of this type omit language information altogether, since it is not applicable.

The list of script subtags may be added to in future. In that case, any subtags that indicate a default RTL direction need to be added to the lists used by the consumers of the strings.

There are some rare situations where the appropriate base direction cannot be identified from the script subtag, but these are really limited to archaic usage of text. For example, Japanese and Chinese text prior to World War 2 was often written RTL, rather than LTR. Scripts such as Egyptian Hieroglyphs or Tifinagh (used for Berber languages) could formerly be written either LTR or RTL; however, the default for scholarly use tends to be LTR.

4.5.4 Other comments

The approach outlined here is only appropriate when declaring information about the overall base direction to be associated with a string. We do not recommend use of language data to indicate text direction within strings, since the usage patterns are not interchangeable.

4.6 Require bidi markup for content

This approach is NOT recommended except under agreements that expect to exclusively interchange HTML or XML markup data.

4.6.1 How it works

The producer ensures that all strings begin and end with markup which indicates the appropriate base direction for that string. This requires the producer to examine the string. If the string is not bounded by markup with directional information, the producer must wrap the string with elements that have the dir or its:direction [ITS20] attributes, or other markup appropriate to a given XML application. If the string is bounded by markup, but it is something such as an HTML h1 element, the producer needs to introduce directional information into the existing markup, rather than simply surround the string with a span.

This example uses HTML markup; the Arabic title shown is purely illustrative. (Simply to make the example easier to read, it shows the text content of the string as it should be displayed, rather than in the order in which the characters are stored.)
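
  <!-- Sketch: the producer has wrapped the string and added dir="rtl" so that
       consumers display it with a right-to-left base direction, even though
       the string begins with the left-to-right characters "HTML". -->
  <p dir="rtl">HTML و CSS: تصميم و إنشاء مواقع الويب</p>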

The consumer then relies on the markup to set the base direction around the text content of the string when it is displayed. (Note that, unless additional metadata is provided, the consumer cannot remove the markup before integrating the string in the target location, because it cannot tell what markup has been added by the producer and what was already there. In general, however, such added markup is harmless.)

4.6.2 Advantages

The benefit for content that already uses markup is clear. The content will already provide complete markup necessary for the display and processing of the text or it can be extracted from the source page context. HTML and XML processors already know how to deal with this markup and provide ready validation.

For HTML, the dir attribute bidirectionally isolates the content from the surrounding text, which removes spillover conflicts. This reduces the work of the consumer.

Markup can also be used for string-internal directional information, something base direction on its own cannot solve.

4.6.3 Issues

Effectively, all levels of the implementation stack have to participate in understanding the markup (or ensure that they do no harm).

If the system uses HTML, end to end, then appropriate markup is available and its semantics are understood (ie. the dir attribute, and the bdi and bdo elements). For XML applications, however, there is no standard markup for bidi support. Such markup would need to first be defined, and then understood by both the producer and consumer.

A key downside of this approach is that many data values are just strings. As with adding Unicode tags or Unicode bidi controls, the addition of markup to strings alters the original string content. Altering the length of the content can cause problems with processes that enforce arbitrary limits or with processes that "sanitize" content by escaping HTML/XML unsafe characters such as angle brackets.

Another issue is the work and sophistication required for producers to examine strings and add markup as needed.

There are limits to the number of embeddings allowed by the Unicode bidirectional algorithm. Consumers would need to ensure that this limit is not exceeded when embedding strings into a wider context.

The addition of markup also requires consumers to guard against the usual problems with markup insertion, such as XSS attacks.

4.7 Create a new bidi datatype

This approach is not currently available.

4.7.1 How it works

This is similar to the idea of sending metadata with a string, as discussed previously; however, the metadata is not stored in a completely separate field (as in § 4.2 Metadata), or inserted into the string itself (as in § 4.3 Augmenting first-strong by inserting RLM/LRM markers), but is associated with the string as part of the string format itself.

Some datatypes, such as [RDF-PLAIN-LITERAL], already exist that allow for language metadata to be serialized as part of a string value. However, these do not include a consideration for base direction. This might be addressed by defining a new datatype (or extending an existing one) that document formats could then use to serialize natural language strings that includes both language and direction metadata.
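
As a sketch, consider a purely hypothetical serialization in which the language tag and base direction are appended to the string value itself (the "@language^direction" suffix, the field names, and the values below are invented for illustration and are not part of any defined format):

  {
    "title":    "HTML و CSS: تصميم و إنشاء مواقع الويب@ar^rtl",
    "subtitle": "Design and Build Websites@en^ltr",
    "macAddr":  "A5-C7-22-04-68-FD@^ltr"
  }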

Note that the last string does not include language information because it is an internal data value, but does include direction information because strings of this kind must be presented in the LTR order.

Producers would need to attach the direction information to a string.

Again, it would be sensible to establish rules that expect the consumer to use first-strong heuristics for those strings that are amenable to that approach, and for the producer to only add directional information if the first-strong approach would otherwise produce the wrong result. This would greatly simplify the management of strings and the amount of data to be transmitted, because the number of strings requiring metadata is relatively small.

The consumer would look to see whether the string has metadata associated with it, in which case it would set the indicated base direction. Otherwise, it would use first-strong heuristics to determine the base direction of the string.

4.7.2 Advantages

If a new datatype were added to JSON to support natural language strings, then specifications could easily specify that type for use in document formats. Since the format is standardized, producers and consumers would not need to guess about direction or language information when it is encoded.

4.7.3 Issues

Apart from the fact that no such datatype currently exists, the downside of adding one is that JSON is a widely implemented format, including many ad hoc implementations. Any new serialization form would likely break or cause interoperability problems with these existing implementations. JSON is not designed to be a "versioned" format. Any serialization form used would need to be transparent to existing JSON processors, and thus could introduce unwanted data or data corruption into existing strings and formats.

5. Approaches Considered for Identifying the Language of Content

This section deals with different means of determining or conveying the language of string values.

5.1 Metadata

This approach is recommended.

5.1.1 How it works

A producer ascertains the language of the string (generally from metadata supplied upstream) and includes this information in a metadata field that accompanies the string when it is stored or transmitted.

When storing or transmitting a set of strings at a time, it helps to have a field for the resource as a whole that sets a language which can be inherited by all strings in the resource. Note that in addition to a global field, you still need the possibility of attaching string-specific metadata fields in cases where a string's language is not that of the default. The language set on an individual string must override any resource-level value.
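
For example (the field names and the French value are illustrative), a resource might declare a default language that an individual string overrides only when its language differs:

  {
    "language": "en",
    "title": "Learning Web Design",
    "description": {
      "value": "Apprendre la conception de sites web",
      "lang": "fr"
    }
  }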

A consumer needs to understand how to read the metadata associated with a string and apply it to the display, processing, or data structures that it generates. Note that this might include the need to apply a resource-level default language when serializing or exchanging an individual value.

5.1.2 Advantages

Using a consistent and well-defined data structure makes it more likely that different standards are composable and will work together seamlessly.

Metadata can be supplied without affecting the content itself.

Where metadata is unavailable, it can be omitted.

Consumers and producers do not have to introspect the data outside of their normal processing.

5.1.3 Issues

Serialized files that use a metadata dictionary (such as Localizable) for their data values will contain additional fields and can be more difficult to read as a result.

For existing document formats, it represents a change to the values being exchanged.

5.2 Require markup for content

This approach is NOT recommended except in special cases where the content being exchanged is expected to consist of and is restricted to literal values in a given markup language.

5.2.1 How it works

When a document is expected to consist of HTML or XML fragments and will be processed and displayed strictly in a markup context, the producer can use markup to convey the language of the content by wrapping strings with elements that have the lang or xml:lang attributes.
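
A sketch in HTML (the French string is illustrative); an XML format would typically use xml:lang instead:

  <span lang="fr">Apprendre la conception de sites web</span>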

5.2.2 Advantages

This approach, and thus its advantages, are effectively the same as those described in § 4.6 Require bidi markup for content.

5.2.3 Issues

See the issues described in § 4.6 Require bidi markup for content.

5.3 Use Unicode language tag characters

This approach is NOT recommended.

5.3.1 How it works

Producers insert Unicode tag characters into the data to tag strings with a language.

Consumers process the Unicode tag characters and use them to assign the language.

Unicode defines special characters that can be used as language tags. These characters are "default ignorable" and should have no visual appearance. Here is how Unicode tags are supposed to work:

  • Each tag is a character sequence. The sequence begins with a tag identification character; the only one currently defined is U+E0001, which identifies [BCP47] language tags. Other types of tags are possible via private agreement.
  • The remainder of the Unicode block for forming tags mirrors the printable ASCII characters. That is, U+E0020 is space (mirroring U+0020), U+E0041 is capital A (mirroring U+0041), and so forth.
  • Following the tag identification character, producers use the tag characters to spell out a [BCP47] language tag using the upper- and lowercase letters, digits, and the hyphen character. A given source language tag, which is composed of ASCII letters, digits, and hyphens, can be converted into tag characters by adding 0xE0000 to each character's code point.
  • Additional structure, such as a language priority list (see [RFC4647]), might be constructed using other characters such as comma or semicolon, although Unicode does not define or even necessarily permit this.

The end of a tag's scope is signalled by the end of the string, or can be signalled explicitly using the cancel tag character U+E007F, either alone (to cancel all tags) or preceded by the language tag identification character U+E0001 (i.e. the sequence <U+E0001,U+E007F> to end only language tags).
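
For example, the language tag en would, in principle, be encoded as the three-character sequence:

  • U+E0001 LANGUAGE TAG
  • U+E0065 TAG LATIN SMALL LETTER E
  • U+E006E TAG LATIN SMALL LETTER N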

Tags therefore have a minimum of three characters, and can easily be 12 or more. Furthermore, these characters are supplementary characters. That is, they are encoded using 4-bytes per character in UTF-8 and they are encoded as a surrogate pair (two 16-bit code units) in UTF-16. Surrogate pairs are needed to encode these characters in string types for languages such as Java and JavaScript that use UTF-16 internally. The use of surrogates makes the strings somewhat opaque. For example, U+E0020 is encoded in UTF-16 as 0xDB40.DC20 and in UTF-8 as the byte sequence 0xF3.A0.80.A0.

5.3.2 Advantages

These language tag characters could be used as part of normal Unicode text without modification to the structure of the document format.

5.3.3 Issues

Unicode tag characters are strongly deprecated by the Unicode Consortium. These tag characters were intended for use in language tagging within plain text contexts and are often suggested as an alternate means of providing in-band non-markup language tagging. We are unaware of any implementations that use them as language tags.

Applications that treat the characters as unknown Unicode characters will display them as tofu (hollow box replacement characters) and may count them towards length limits, etc. So they are only useful when applications or interchange mechanisms are fully aware of them and can remove them or disregard them appropriately. Although the characters are not supposed to be displayed or have any effect on text processing, in practice they can interfere with normal text processes such as truncation, line wrapping, hyphenation, spell-checking, and so forth.

By design, [BCP47] language tags are intended to be ASCII case-insensitive. Applications handling Unicode tag characters would have to apply similar case-insensitivity to ensure correct identification of the language. (The Unicode data doesn't specify case conversion pairings for these characters; this complicates the processing and matching of language tag values encoded using the tag characters.)

Moreover, language tags need to be formed from valid subtags to conform to [BCP47]. Valid subtags are kept in an IANA registry and new subtags are added regularly, so applications dealing with this kind of tagging would need to always check each subtag against the latest version of the registry.

The language tag characters do not allow nesting of language tags. For example, if a string contains two languages, such as a quote in French inside an English sentence, Unicode tag characters can only indicate where one language starts. To indicate nested languages, tags would need to be embedded into the text not just prefixed to the front.

Although never implemented, other types of tags could be embedded into a string or document using Unicode tag characters. It is possible for these tags to overlap sections of text tagged with a language tag.

Finally, Unicode has recently "recycled" these characters for use in forming sub-regional flags, such as the flag of Scotland, which is made of the sequence:

  • 🏴 [U+1F3F4 WAVING BLACK FLAG]
  • 󠁧 [U+E0067 TAG LATIN SMALL LETTER G]
  • 󠁢 [U+E0062 TAG LATIN SMALL LETTER B]
  • 󠁳 [U+E0073 TAG LATIN SMALL LETTER S]
  • 󠁣 [U+E0063 TAG LATIN SMALL LETTER C]
  • 󠁴 [U+E0074 TAG LATIN SMALL LETTER T]
  • 󠁿 [U+E007F CANCEL TAG]
Editor's note

The above is a new feature of emoji added in Unicode 10.0 (version 5.0 of UTR#51) in June 2017. Proper display depends on your system's adoption of this version.

5.4 Use a language detection heuristic

This approach is NOT recommended.

5.4.1 How it works

Producers do nothing.

Consumers run a language detection algorithm to determine the language of the text. These are usually statistically based heuristics, such as using n-gram frequency in a language, possibly coupled with other data.

5.4.2 Advantages

There are no fundamental advantages to this approach.

5.4.3 Issues

Heuristics are more accurate the longer and more representative the text being scanned is. The language of short strings may not be detected reliably.

Language detection is limited to the languages for which one has a detector.

Inclusions, such as personal or brand names in another language or script, can throw off the detection.

Language detection tends to be slow and can be memory intensive. Simple consumers probably can't afford the complexity needed to determine the language.

6. Localization Considerations

Many specifications need to allow multiple different language values to be returned for a given field. This might be to support runtime localization or because the producer has multiple different language values and cannot select or distinguish them appropriately. There are several ways that multiple language values could be organized. For speed and ease of access, the use of language indexing is a useful strategy.

In language indexing, a given field's value is a map of key-value pairs, in which the keys are language tags and the values are strings or, ideally, Localizable objects. Here is a sketch of what a language-indexed title field might look like (the French value is illustrative):
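
  {
    "title": {
      "en": { "value": "Learning Web Design", "lang": "en" },
      "fr": { "value": "Apprendre la conception de sites web", "lang": "fr" }
    }
  }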

Using the language tag as a key into the map allows for rapid selection of the correct value for a given request. Notice that, if the value associated with the language tag is a Localizable, the language might be repeated in the data structure.

For example, if the language requested were U.S. English (en-US), this format makes it easier to match and extract the best-fitting title object {"value": "Learning Web Design", "lang": "en"}. An additional potential advantage is that the indexed language tag can indicate the intended audience of the value separately from the language tag of the actual data value. An example of this might be the use of language ranges from [RFC4647], where a more specific language value is wrapped with a less-specific language tag. In the following sketch (the German value is illustrative), the content has been labeled with a specific language tag (de-DE), but is available and applicable to users who speak other variants of German, such as de-CH or de-AT:
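
  {
    "title": {
      "de": { "value": "Webdesign lernen", "lang": "de-DE" }
    }
  }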

A less common example would be when a system supplies a specific value in a different ("wrong") language from the indexing language tag, perhaps because the actual translated value is missing, as in this sketch:
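
  {
    "title": {
      "fr": { "value": "Learning Web Design", "lang": "en" }
    }
  }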

The primary issue with this approach is the need to extract the indexing language tag from the content in order to generate the index. Producers might also need to have a serialization agreement with consumers about whether the indexing language tag will be in any way canonicalized. For example, the language tag cel-gaulish is one of the [BCP47] grandfathered language tags. Some implementations, such as those following the rules in [CLDR], would prefer that this tag be replaced with a modern equivalent (xtg-x-cel-gaulish in this case) for the purposes of language negotiation.

[JSON-LD] defines a specific implementation of language indexing, which depends on the use of the @context structure. This structure does not support the use of Localizable values (only strings or arrays of strings are supported), so changes would be needed to allow some of the above capabilities in [JSON-LD] documents.

A. The Localizable WebIDL Dictionary

This section contains a WebIDL definition for a Localizable dictionary.

To be effective, specification authors should consistently use the same formats and data structures so that the majority of data formats are interoperable (in other words, so that data can be copied between many formats without having to apply additional processing). We recommend adoption of the Localizable WebIDL "dictionary" as the best available format for JSON-derived formats to do that.

By defining the language and direction in a WebIDL dictionary form, specifications can incorporate language and direction metadata for a given String value succinctly. Implementations can reuse the dictionary definition straightforwardly.
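
A minimal sketch of what such a dictionary might look like (the member names shown here are illustrative, not normative):

  dictionary Localizable {
    DOMString value; // the natural language string itself
    DOMString lang;  // a [BCP47] language tag
    DOMString dir;   // the base direction of the string, e.g. "ltr" or "rtl"
  };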

B. Acknowledgements

The Internationalization (I18N) Working Group would like to thank the following contributors to this document: Mati Allouche, David Baron, Ivan Herman, Tobie Langel, Sangwhan Moon, Felix Sasaki, Najib Tounsi, and many others.

The following pages formed the initial basis of this document:

C. References

C.1 Informative references

[BCP47]
Tags for Identifying Languages. A. Phillips; M. Davis. IETF. September 2009. IETF Best Current Practice. URL: https://tools.ietf.org/html/bcp47
[CLDR]
Unicode Common Locale Data Repository. Unicode Consortium. URL: http://cldr.unicode.org/
[ECMA-402]
ECMAScript Internationalization API Specification. Ecma International. URL: https://tc39.github.io/ecma402/
[HTML]
HTML Standard. Anne van Kesteren; Domenic Denicola; Ian Hickson; Philip Jägenstedt; Simon Pieters. WHATWG. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[HTML5]
HTML5. Ian Hickson; Robin Berjon; Steve Faulkner; Travis Leithead; Erika Doyle Navara; Theresa O'Connor; Silvia Pfeiffer. W3C. 27 March 2018. W3C Recommendation. URL: https://www.w3.org/TR/html5/
[ITS20]
Internationalization Tag Set (ITS) Version 2.0. David Filip; Shaun McCance; David Lewis; Christian Lieske; Arle Lommel; Jirka Kosek; Felix Sasaki; Yves Savourel. W3C. 29 October 2013. W3C Recommendation. URL: https://www.w3.org/TR/its20/
[JSON-LD]
JSON-LD 1.0. Manu Sporny; Gregg Kellogg; Markus Lanthaler. W3C. 16 January 2014. W3C Recommendation. URL: https://www.w3.org/TR/json-ld/
[LDML]
Unicode Technical Standard #35: Unicode Locale Data Markup Language (LDML). Mark Davis; CLDR Contributors. URL: https://unicode.org/reports/tr35/
[LTLI]
Language Tags and Locale Identifiers for the World Wide Web. Felix Sasaki; Addison Phillips. W3C. 23 April 2015. W3C Working Draft. URL: https://www.w3.org/TR/ltli/
[RDF-PLAIN-LITERAL]
rdf:PlainLiteral: A Datatype for RDF Plain Literals (Second Edition). Jie Bao; Sandro Hawke; Boris Motik; Peter Patel-Schneider; Axel Polleres. W3C. 11 December 2012. W3C Recommendation. URL: https://www.w3.org/TR/rdf-plain-literal/
[RFC2119]
Key words for use in RFCs to Indicate Requirement Levels. S. Bradner. IETF. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[RFC4647]
Matching of Language Tags. A. Phillips; M. Davis. IETF. September 2006. Best Current Practice. URL: https://tools.ietf.org/html/rfc4647
[UAX9]
Unicode Bidirectional Algorithm. Mark Davis; Aharon Lanin; Andrew Glass. Unicode Consortium. 14 May 2017. Unicode Standard Annex #9. URL: https://www.unicode.org/reports/tr9/tr9-37.html
[Unicode]
The Unicode Standard. Unicode Consortium. URL: https://www.unicode.org/versions/latest/
[WebIDL]
Web IDL. Boris Zbarsky. W3C. 15 December 2016. W3C Editor's Draft. URL: https://heycam.github.io/webidl/
[XML]
Extensible Markup Language (XML) 1.0 (Fifth Edition). Tim Bray; Jean Paoli; Michael Sperberg-McQueen; Eve Maler; François Yergeau et al. W3C. 26 November 2008. W3C Recommendation. URL: https://www.w3.org/TR/xml/
[xmlschema11-2]
W3C XML Schema Definition Language (XSD) 1.1 Part 2: Datatypes. David Peterson; Sandy Gao; Ashok Malhotra; Michael Sperberg-McQueen; Henry Thompson; Paul V. Biron et al. W3C. 5 April 2012. W3C Recommendation. URL: https://www.w3.org/TR/xmlschema11-2/