This specification defines a Hypothetical Render Model (HRM) that constrains the presentation complexity of documents that conform to any of the TTML Profiles for Internet Media Subtitles and Captions ([IMSC]).
The model is not intended as a specification of the processing requirements for implementations. For instance, while the model defines a glyph buffer for the purpose of limiting the number of glyphs displayed at any given point in time, it neither requires the implementation of such a buffer, nor models the sub-pixel glyph positioning and anti-aliased glyph rendering that can be used to produce text output.
Furthermore, the model is not intended to constrain readability complexity.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document was published by the Timed Text Working Group as a Working Draft using the Recommendation track.
Publication as a Working Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 2 November 2021 W3C Process Document.
This specification defines a Hypothetical Render Model (HRM) that constrains the presentation complexity of an IMSC Document Instance.
This specification uses the same conventions as [IMSC].
character. The character code property of a Character Information Item.
code point. As defined by [i18n-glossary].
empty ISD. An Intermediate Synchronic Document with no presented region.
non-empty ISD. An Intermediate Synchronic Document with at least one presented region.
error. A failure to conform to the constraints defined by this specification.
IMSC Document Instance. A Document Instance that conforms to any profile defined in any edition of [IMSC].
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key word SHALL in this document is to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, it appears in all capitals, as shown here.
Unless noted otherwise, this specification applies to an IMSC Document Instance.
A sequence of consecutive Intermediate Synchronic Documents conforms to the Hypothetical Render Model if it is processed without error.
This section is non-normative.
The objective of the HRM is to allow subtitle and caption authors and providers to verify that the content they provide does not exceed defined complexity levels, so that playback systems can render the content synchronized with the author-specified display times.
Playback systems include desktop computers, mobile devices and home theatre devices.
The HRM is not a new concept: it has been included in all versions and editions of [IMSC] and has remained substantially unchanged. It is refactored herein to simplify document maintenance.
IMSC Document Instances are typically authored by a first party and rendered by a second party. Unless both parties agree on the maximum complexity of an IMSC Document Instance, it is likely that:
As illustrated in Figure 1, by defining a method (the HRM) to compute a proxy for the complexity of an IMSC Document Instance and specifying a complexity limit based on such proxy:
The HRM supplements the syntactic and structural constraints imposed in [IMSC] by imposing constraints on the contents of the presentation.
Because of the temporal and spatial variability of subtitles and captions across types of content, territories and languages, it is not possible to limit the complexity of an IMSC Document Instance using only average values.
An average-based constraint of 840 characters per minute could be met in multiple ways, with different rendering complexities. Contrast two potential approaches:
In the first, 5 characters are presented for a fraction of a second, followed by 835 characters that are then presented for over 59 seconds. This generates a high rendering complexity for the 835 characters, since there is only a brief time available to paint them.
In the second, 210 characters must be painted every 15 seconds, giving 15 seconds to prepare for the next presentation. This has a much lower rendering complexity.
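The contrast between the two approaches can be sketched numerically. The schedule durations below are illustrative assumptions (not HRM parameters); the point is that the peak painting load, characters divided by the time available to prepare them, differs by two orders of magnitude even though both schedules average 840 characters per minute.

```python
# Two ways to meet an average limit of 840 characters per minute,
# with very different instantaneous rendering loads.
# (Illustrative only; numbers below are assumptions, not HRM parameters.)

# Scenario 1: 5 characters shown briefly, then 835 characters that must
# be painted in the short window before their presentation.
burst = [(5, 0.5), (835, 59.5)]   # (characters, seconds displayed)

# Scenario 2: 210 characters every 15 seconds (4 x 210 = 840 per minute).
steady = [(210, 15.0)] * 4

def peak_paint_rate(schedule):
    """Maximum characters-per-second of preparation time, where the
    preparation time for a subtitle is the display time of the one
    before it."""
    rates = []
    prep = schedule[0][1]
    for chars, dur in schedule[1:]:
        rates.append(chars / prep)
        prep = dur
    return max(rates) if rates else 0.0

print(peak_paint_rate(burst))   # 835 characters in a 0.5 s window
print(peak_paint_rate(steady))  # 210 characters in a 15 s window
```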
The HRM achieves a more accurate representation of the complexity of an IMSC Document Instance at any given time by taking into account its past complexity in addition to its instantaneous complexity. The same approach is commonly used in video to limit bitstream complexity, e.g., the Hypothetical Reference Decoder (HRD) specified in [iso14496-10].
The HRM defines a simple model for the rendering of subtitles and captions, and uses the time it takes to render subtitles and captions according to that model as a proxy for the complexity of the subtitles and captions. Rendering includes drawing region backgrounds, rendering and copying text, and decoding and copying images. Complexity is then limited by requiring that the time to render one subtitle or caption is shorter than the time elapsed since the previous subtitle or caption.
This simple model requires only a static analysis of the IMSC Document Instance, requires no fetching of external resources and does not require the IMSC Document Instance to be actually rendered. Several simplifying assumptions are made to achieve this. For example, the model assumes that each character is drawn independently, and accounts for that assumption being, in many cases, false, by assigning different render speeds for different scripts. In general the model is not intended to capture the actual time that an implementation takes to render subtitles and captions, but rather scale with it: a document that is twice as complex according to the model would require roughly twice as many resources to actually render.
The HRM is typically used prior to distribution of the IMSC Document Instance to the end-user, as an integral part of authoring and as a quality check before distribution.
When the HRM is used, the consequences of an IMSC Document Instance exceeding the HRM limits depends on the context:
The HRM is not intended to be used when the IMSC Document Instance is presented to end-users since:
This section is non-normative.
The model illustrated in Figure 2 operates on successive Intermediate Synchronic Documents Ei obtained from an input IMSC Document Instance:
The model specifies a (hypothetical) time required for completely painting a non-empty ISD as a proxy for complexity. Painting includes clearing the Back Buffer, drawing region backgrounds, rendering and copying glyphs, and decoding and copying images. Complexity is then limited by requiring that painting of non-empty ISD En begins no earlier than the presentation time of the previous non-empty ISD Em and completes by the presentation time of En.
In contrast, there is no complexity involved in connecting and disconnecting the Front Buffer from the display, and thus no complexity associated with empty ISDs.
Whenever applicable, constraints are specified relative to Root Container Region dimensions, allowing subtitle sequences to be authored independently of Related Video Object resolution.
To enable scenarios where the same glyphs are used in multiple successive Intermediate Synchronic Documents, e.g. to convey a CEA-608/708-style roll-up (see [CEA-608] and [CEA-708]), the Glyph Buffers Gn and Gn-1 store rendered glyphs across Intermediate Synchronic Documents, allowing glyphs to be copied into the Presentation Buffer instead of rendered anew, which is a more costly operation.
Similarly, Decoded Image Buffers Dn and Dn-1 store decoded images across Intermediate Synchronic Documents, allowing images to be copied into the Presentation Buffer instead of decoded.
The Presentation Compositor SHALL render in the Back Buffer each successive non-empty ISD En using the following steps in order:
The Presentation Compositor SHALL start rendering En:
The Presentation Compositor never begins rendering an ISD more than IPD ahead of its presentation time.
The duration DUR(En) for painting an Intermediate Synchronic Document En in the Back Buffer SHALL be:
DUR(En) = S(En) / BDraw + DURT(En) + DURI(En)
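The formula above can be sketched as a small calculation. The terms S(En), DURT(En) and DURI(En) are defined in the following sections; the values passed in here are arbitrary placeholders, not values mandated by this specification.

```python
# Sketch of the top-level painting-duration formula:
#   DUR(En) = S(En) / BDraw + DURT(En) + DURI(En)

BDRAW = 12.0  # normalized background drawing performance factor, per second

def dur(s_en, durt_en, duri_en, bdraw=BDRAW):
    """Hypothetical time to paint Intermediate Synchronic Document En."""
    return s_en / bdraw + durt_en + duri_en

# Example: one region covering a quarter of the root container with a
# doubled background count plus the mandatory clear gives S(En) = 1.5;
# text and image durations of 0.2 s and 0 s are placeholders.
print(dur(1.5, 0.2, 0.0))  # 1.5 / 12 + 0.2 = 0.325 s
```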
The contents of the Back Buffer SHALL be transferred instantaneously to the Front Buffer at the presentation time of a non-empty ISD En, making the latter available for display.
The Front Buffer SHALL be:
It is possible for the contents of the Front Buffer to never be displayed. This can happen, for example, if the Back Buffer is copied twice to Front Buffer between two consecutive video frame boundaries of the Related Video Object.
It SHALL be an error for the Presentation Compositor to fail to complete painting pixels for non-empty ISD En before its presentation time.
Unless specified otherwise, the following table SHALL specify values for IPD and BDraw.
| Parameter | Value |
|---|---|
| Initial Painting Delay (IPD) | 1 s |
| Normalized background drawing performance factor (BDraw) | 12 s⁻¹ |
BDraw effectively sets a limit on filling regions. For example, assuming that the Root Container Region is ultimately rendered at 1920×1080 resolution, a BDraw of 12 s⁻¹ corresponds to a fill rate of 1920 × 1080 × 12 / s ≈ 23.7 × 2²⁰ pixels s⁻¹.
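The fill-rate arithmetic in the note above can be checked directly (the 1920×1080 resolution is the illustrative assumption from the note):

```python
# BDraw = 12 s^-1 applied to a 1920x1080 rendering of the Root Container
# Region: twelve full-container fills per second.
fill_rate = 1920 * 1080 * 12   # pixels per second
print(fill_rate / 2**20)       # approximately 23.73, i.e. 23.7 x 2^20 pixels/s
```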
The total normalized drawing area S(En) for Intermediate Synchronic Document En SHALL be
S(En) = CLEAR(En) + PAINT(En)
where CLEAR(En) = 1.
PAINT(En) SHALL be the normalized area to be painted for all regions that are used in Intermediate Synchronic Document En according to:
PAINT(En) = ∑Ri∈Rp NSIZE(Ri) ∙ NBG(Ri)
where Rp SHALL be the set of presented regions in the Intermediate Synchronic Document En.
NSIZE(Ri) SHALL be given by:
NSIZE(Ri) = (width of Ri ∙ height of Ri ) ÷ (Root Container Region height ∙ Root Container Region width)
NBG(Ri) SHALL be the total number of elements within the tree rooted at region Ri that satisfy the following criteria:
NBG(Ri) counts the number of tts:backgroundColor attributes specified. In a common scenario illustrated below, this results in the complexity of painting (relatively small) span backgrounds being equal to that of painting the background of a (relatively much larger) region that essentially fills the root container. This could be addressed by excluding span elements from the NBG(Ri) computation, and instead including tts:backgroundColor in the list of glyph properties at https://www.w3.org/TR/ttml-imsc1.1/#paint-text.
An element and its parent that satisfy the criteria above and share identical computed values of tts:backgroundColor are counted as two distinct elements for the purpose of computing NBG(Ri).
The set element is not included in the computation of NBG(Ri). While it can affect the computed values of tts:backgroundColor, it is removed during the construction of Intermediate Synchronic Documents.
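Putting the region-drawing definitions together, S(En) can be sketched as follows. Each region is represented here as a (width, height, nbg) tuple, where nbg stands for the NBG(Ri) count; the region and root-container dimensions are illustrative assumptions.

```python
# Sketch of S(En) = CLEAR(En) + PAINT(En), with
#   PAINT(En) = sum over presented regions Ri of NSIZE(Ri) * NBG(Ri)

ROOT_W, ROOT_H = 1920, 1080   # assumed Root Container Region dimensions

def nsize(w, h):
    """NSIZE(Ri): region area normalized to the Root Container Region."""
    return (w * h) / (ROOT_W * ROOT_H)

def s_en(regions):
    clear = 1.0   # CLEAR(En) = 1: the whole Back Buffer is cleared
    paint = sum(nsize(w, h) * nbg for (w, h, nbg) in regions)
    return clear + paint

# One full-width caption strip (1920x270) in which the region, a p and a
# span each specify tts:backgroundColor, so NBG(Ri) = 3.
print(s_en([(1920, 270, 3)]))  # 1 + 0.25 * 3 = 1.75
```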
The Presentation Compositor SHALL paint into the Back Buffer all visible pixels of presented images of Intermediate Synchronic Document En.
For each presented image, the Presentation Compositor SHALL either:
Two images SHALL be identical if and only if they reference the same encoded image source.
The duration DURI(En) for painting images of an Intermediate Synchronic Document En in the Back Buffer SHALL be as follows:
DURI(En) = ∑Ii ∈ Ic NRGA(Ii) / ICpy + ∑Ij ∈ Id NSIZ(Ij) / IDec
NRGA(Ii) is the Normalized Image Area of presented image Ii and SHALL be equal to:
NRGA(Ii)= (width of Ii ∙ height of Ii ) ÷ ( Root Container Region height ∙ Root Container Region width )
NSIZ(Ii) SHALL be the number of pixels of presented image Ii.
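The DURI(En) formula distinguishes copied images (cost NRGA/ICpy) from newly decoded images (cost NSIZ/IDec). A sketch, where the image dimensions and the split between the copied set Ic and the decoded set Id are illustrative assumptions:

```python
# Sketch of DURI(En) = sum(NRGA(Ii)/ICpy for Ii in Ic)
#                    + sum(NSIZ(Ij)/IDec for Ij in Id)

ROOT_W, ROOT_H = 1920, 1080   # assumed Root Container Region dimensions
ICPY = 6.0                    # normalized image copy performance factor
IDEC = 1 * 2**20              # image decoding rate, pixels per second

def nrga_image(w, h):
    """NRGA(Ii): image area normalized to the Root Container Region."""
    return (w * h) / (ROOT_W * ROOT_H)

def duri(copied, decoded):
    copy_time = sum(nrga_image(w, h) / ICPY for (w, h) in copied)
    decode_time = sum((w * h) / IDEC for (w, h) in decoded)
    return copy_time + decode_time

# A 960x540 image carried over from the previous ISD (copied) and a
# 480x270 image that must be decoded anew.
print(duri(copied=[(960, 540)], decoded=[(480, 270)]))
```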
The contents of the Decoded Image Buffer Dn SHALL be transferred instantaneously to Decoded Image Buffer Dn-1 at the presentation time of Intermediate Synchronic Document En.
The total size occupied by images stored in Decoded Image Buffers Dn or Dn-1 SHALL be the sum of their Normalized Image Areas.
The size of Decoded Image Buffers Dn or Dn-1 SHALL be the Normalized Decoded Image Buffer Size (NDIBS).
Unless specified otherwise, the following table SHALL specify values for ICpy, IDec, and NDIBS.
| Parameter | Value |
|---|---|
| Normalized image copy performance factor (ICpy) | 6 |
| Image Decoding rate (IDec) | 1 × 2²⁰ pixels s⁻¹ |
| Normalized Decoded Image Buffer Size (NDIBS) | 0.9885 |
In the context of this section, a glyph is a tuple consisting of (i) one character and (ii) the computed values of the following style properties:
The Hypothetical Render Model defines a one-to-one mapping between characters and glyphs (using the definition of glyph from this document). While a one-to-one mapping between code points and glyphs (using the definition of glyph from [i18n-glossary]) is common in some scripts (such as the Latin script), the actual relationship is more complex. Some scripts, such as Arabic, use different glyphs for a given character, depending on its position in a word. Some scripts require combining marks or use a sequence of code points to form a glyph. Cases exist where a given sequence of code points can have different glyph representations depending on context. This complexity is accounted for by reducing the performance of the glyph buffer for scripts where a one-to-one mapping is not the general rule (see GCpy below).
For each glyph associated with a character in a presented region of Intermediate Synchronic Document En, the Presentation Compositor SHALL:
The duration DURT(En) for rendering the text of an Intermediate Synchronic Document En in the Back Buffer SHALL be as follows:
DURT(En) = ∑gi ∈ Γr NRGA(gi) / Ren(gi) + ∑gj ∈ Γc NRGA(gj) / GCpy
The Normalized Rendered Glyph Area NRGA(gi) of a glyph gi SHALL be equal to:
NRGA(gi) = (fontSize of gi as percentage of Root Container Region height)2
NRGA(gi) does not take into account decorations (e.g. underline), effects (e.g. outline) or the actual typographical glyph aspect ratio. An implementation can determine actual buffer size needs based on worst-case glyph size complexity.
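The DURT(En) formula can be sketched using the "any other value" factors from the table below (Ren = 1.2, GCpy = 3); the glyph counts and font sizes are illustrative assumptions.

```python
# Sketch of DURT(En) = sum(NRGA(gi)/Ren(gi) for gi newly rendered)
#                    + sum(NRGA(gj)/GCpy   for gj copied from Gn-1)

def nrga_glyph(font_size_pct):
    """NRGA(gi): square of fontSize expressed as a fraction of the
    Root Container Region height."""
    return (font_size_pct / 100.0) ** 2

def durt(rendered, copied, ren=1.2, gcpy=3.0):
    render_time = sum(nrga_glyph(fs) / ren for fs in rendered)
    copy_time = sum(nrga_glyph(fs) / gcpy for fs in copied)
    return render_time + copy_time

# Ten newly rendered glyphs and twenty glyphs copied from the Glyph
# Buffer, all at a fontSize of 5% of the root container height.
print(durt(rendered=[5.0] * 10, copied=[5.0] * 20))  # 0.0375 s
```

Note how copying is cheaper than rendering: the twenty copied glyphs cost less than the ten rendered ones, which is what makes roll-up-style reuse of glyphs affordable under the model.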
The contents of the Glyph Buffer Gn SHALL be copied instantaneously to Glyph Buffer Gn-1 at the presentation time of Intermediate Synchronic Document En.
It SHALL be an error for the sum of NRGA(gi) over all glyphs in Glyph Buffer Gn to be larger than the Normalized Glyph Buffer Size (NGBS).
Unless specified otherwise, the following table SHALL specify values of GCpy, Ren and NGBS.
Normalized glyph copy performance factor (GCpy):

| Script property, as defined at [UAX24], for the character of gi | GCpy |
|---|---|
| Latin, Greek, Cyrillic, Hebrew or Common | 12 |
| any other value | 3 |

Text rendering performance factor Ren(gi):

| Script property, as defined at [UAX24], for the character of gi | Ren(gi) |
|---|---|
| Latin, Greek, Cyrillic, Hebrew or Common | 2.4 |
| any other value | 1.2 |

| Normalized Glyph Buffer Size (NGBS) | 1 |
While DURT(En) is not affected, the choice of font by the presentation processor can increase actual rendering complexity at time of presentation. For instance, a cursive font might select different glyphs for a given grapheme (in order to maintain joining or for the start/end of the word), even in the Latin script. Conversely, the rendering of scripts that fall in the any other value category can in practice achieve performance comparable to, say, the Latin script.
This section is non-normative.
In a system where IMSC Document Instances are expected to conform to the Hypothetical Render Model, an IMSC Document Instance that does not conform to the Hypothetical Render Model might negatively impact accessibility during presentation of the IMSC Document Instance and its associated content.
This specification does not attempt to model any additional complexity for presentation processors that might arise due to the user customisation of presentation, for example as described by [media-accessibility-reqs]; such user customisation is not defined by [IMSC].
Implementers of presentation processors that support user customisation of presentation should ensure that those processors are able to present IMSC Document Instances that conform to the Hypothetical Render Model, even if the customisation effectively increases the complexity of presentation.
This section is non-normative.
This specification has no inherent security or privacy implications.
The algorithm defined within this specification is used for static analysis of a resource. This specification does not define any protocol or interface for obtaining such a resource, and it does not define any interface for exposing the results of the analysis. No personal or sensitive information is processed as part of the algorithm, other than any such information that might happen to be part of the IMSC Document Instance being analysed. No information is exposed by the algorithm to any origin. No scripts are loaded or processed as part of the algorithm and no links to external resources are dereferenced.
Implementers of this specification should capture and meet privacy and security requirements for their intended application. For example, an implementation could, when reporting on an error encountered during processing of an IMSC Document Instance, include a section of the content of an IMSC Document Instance to elaborate the error. If that content could include sensitive or personal information, the implementation should ensure that any such output is provided using appropriately secure protocols. No such reporting is defined or required by this specification.
This section is non-normative.
This specification does not define how, or even if, errors should be reported.
For example, an implementation could stop on the first error encountered, or continue to process the IMSC Document Instance and report every error. Or an implementation could exit with an appropriate status code without reporting any details at all.
This specification does not define any runtime exceptions, or how such exceptions should be handled.
This section is non-normative.
This section is non-normative.
In order to allow short (less than 100 ms) gaps between subtitles, which is common practice, the complexity of presenting empty ISDs has been reduced to zero: instead of being drawn into the Back Buffer, an empty ISD merely disconnects the Front Buffer from the display while it is presented.
The first Intermediate Synchronic Document is no longer treated differently and incurs a cost for clearing the Back Buffer.