
Incremental Font Transfer

Editor’s Draft,

This version:
https://w3c.github.io/IFT/Overview.html
Latest published version:
https://www.w3.org/TR/IFT/
Feedback:
public-webfonts-wg@w3.org with subject line “[IFT] … message topic …” (archives)
GitHub
Editors:
Chris Lilley (W3C)
(Google Inc.)
(Adobe Inc.)

Abstract

This specification defines a method to incrementally transfer fonts from server to client. Incremental transfer allows clients to load only the portions of the font they actually need, which speeds up font loads and reduces the data transferred. A font can be loaded over multiple requests, where each request incrementally adds additional data.

Status of this document

This is a public copy of the editors’ draft. It is provided for discussion only and may change at any moment. Its publication here does not imply endorsement of its contents by W3C. Don’t cite this document other than as work in progress.

If you wish to make comments regarding this document, please file an issue on the specification repository.

This document was produced by the Web Fonts Working Group

This document was produced by groups operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 03 November 2023 W3C Process Document.

1. Introduction

This section is not normative.

Incremental Font Transfer (IFT) is a technology to improve the latency of remote fonts (or "web fonts") on the web. Without this technology, a browser needs to download every last byte of a font before it can render any characters using that font. IFT allows the browser to download only some of the bytes in the file, thereby decreasing the perceived latency between the time when a browser realizes it needs a font and when the necessary text can be rendered with that font. Unlike traditional font subsetting approaches Incremental Font Transfer retains the encoding of layout rules between segments (Progressive Font Enrichment: Evaluation Report § fail-subset).

The success of WebFonts is unevenly distributed. This specification allows WebFonts to be used where slow networks, very large fonts, or complex subsetting requirements currently preclude their use. For example, even using WOFF 2 [WOFF2], fonts for CJK languages can be too large to be practical.

1.1. Technical Motivation: Evaluation Report

See the Progressive Font Enrichment: Evaluation Report [PFE-report] for the investigation which led to this specification.

1.2. Overview

An incremental font is a regular OpenType font that is reformatted to include incremental functionality, chiefly through two additional tables. Using these new tables the font can be augmented (e.g. to cover more codepoints) by loading and applying patches to it.

The IFT technology has four main pieces:

At a high level an incremental font is used like this:

  1. The client downloads an initial font file, which contains some initial subset of data from the full version of the font along with embedded data describing the set of patches which can be used to extend the font.

  2. Based on the content to be rendered, the client selects, downloads, and applies patches to extend the font to cover additional characters, layout features, and/or variation space. This step is repeated each time there is new content.
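The two steps above can be sketched as a client-side loop. This is a non-normative illustration only; the function names (`fetch`, `select_patches`, `apply_patch`) are hypothetical placeholders and not part of this specification.

```python
# Sketch of the high-level client flow described above. All callables are
# hypothetical placeholders supplied by the caller, not spec-defined APIs.

def load_incrementally(font_url, content_codepoints, fetch, select_patches, apply_patch):
    """Download an initial font, then extend it as new content appears."""
    font = fetch(font_url)                      # step 1: initial font file
    for needed in content_codepoints:           # step 2: repeat per new content
        for patch in select_patches(font, needed):
            font = apply_patch(font, patch)     # extend coverage
    return font
```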

1.3. Creating an Incremental Font

It is expected that the most common way to produce an incremental font will be to convert an existing font to use the incremental encoding defined in this specification. At a high level converting an existing font to be incremental will look like this:

  1. Choose the content of the initial subset. This will be included in the initial font file the client loads, and usually consists of any data from the original font that is expected to always be needed.

  2. Choose the patch type or types. Different patch types have different qualities, so different use cases can call for different patch types, or in some cases a mix of two types.

  3. Choose a segmentation of the font. Individual segments will be added to the base subset by the client using patches. Choosing an appropriate segmentation is one of the most important parts of producing an efficient encoding.

  4. Based on these choices, generate a set of patches, where each patch adds the data for a particular segment relative to either the initial font file or a previous patch.

  5. Generate the initial font file including the initial subset and a patch mapping. This mapping lists all of the available patch files, the URL at which each resides, and information on what data each patch will add to the font.

Note: this is a highly simplified description of creating an incremental font; a more in-depth discussion of generating an encoding, and the requirements on an encoding, can be found in the § 7 Encoding section.
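The conversion steps above can be outlined in code. This is a non-normative sketch; every function here, and the patch URI naming scheme, are hypothetical stand-ins for encoder tooling.

```python
# Illustrative outline of the encoding steps above. make_subset and
# make_patch are hypothetical placeholders for real subsetting/diffing tools.

def encode_incremental_font(full_font, initial_definition, segments, make_subset, make_patch):
    """Produce an initial font file plus a patch for each segment."""
    initial = make_subset(full_font, initial_definition)      # step 1
    patches = {}
    for i, segment in enumerate(segments):                    # steps 3 and 4
        uri = f"patch-{i}.ift"                                # hypothetical URI scheme
        patches[uri] = make_patch(initial, full_font, segment)
    # step 5: the patch mapping lists each URI and what data it adds
    mapping = {uri: seg for uri, seg in zip(patches, segments)}
    return initial, patches, mapping
```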

1.4. Performance Considerations and the use of Incremental Font Transfer

Using incremental transfer may not always be beneficial, depending on the characteristics of the font, the network, and the content being rendered. This section provides non-normative guidance to help decide when incremental transfer should be utilized.

It is common for an incremental font to trigger the loading of multiple patches in parallel. So to maximize performance, when serving an incremental font it is recommended that an HTTP server which is capable of multiplexing (such as [rfc9113] or [rfc9114]) is used.

Incrementally loading a font has a fundamental performance trade-off versus loading the whole font. Simplistically, under incremental transfer fewer bytes may be transferred, at the potential cost of an increased total number of network requests and/or increased request processing latency. In general, incremental font transfer will be beneficial where the reduction in latency from sending fewer bytes outweighs the additional latency introduced by the incremental transfer method.

The first factor to consider is the language of the content being rendered. The evaluation report contains the results of simulating incremental font transfer across three categories of languages (Progressive Font Enrichment: Evaluation Report § langtype). See its conclusions (Progressive Font Enrichment: Evaluation Report § conclusions) for a discussion of the anticipated performance of incremental font transfer across the language categories.

Next, how much of the font is expected to be needed? If it’s expected that most of the font will be needed to render the content, then incremental font transfer is unlikely to be beneficial. In many cases however only part of a font is expected to be needed. For example:

An alternative to incremental transfer is to break a font into distinct subsets (typically by script) and use the unicode-range feature of @font-face to load only the subsets needed. However, this can alter the rendering of some content (Progressive Font Enrichment: Evaluation Report § fail-subset) if there are layout rules between characters in different subsets. Incremental font transfer does not suffer from this issue, as it can encompass the original font and all of its layout rules.

1.4.1. Reducing the Number of Network Requests

As discussed in the previous section, the most basic implementation of incremental font transfer will tend to increase the total number of requests made versus traditional font loading. Since each augmentation will typically require at least one round trip, performance can be negatively impacted if too many requests are made. Depending on which patch types are available and how much information is provided in the patch mapping, a client might preemptively request patches for codepoints that are not currently needed but are expected to be needed in the future. Intelligent use of this feature by an implementation can help reduce the total number of requests being made. The evaluation report explored this by testing the performance of a basic character-frequency based codepoint prediction scheme and found that it improved overall performance.
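A frequency-based prediction scheme in the spirit of the one explored by the evaluation report might look like the following sketch. The frequency table and the cutoff are hypothetical inputs, not defined by this specification.

```python
# Non-normative sketch of character-frequency based prefetching: augment
# the set of currently needed codepoints with the codepoints most likely
# to be needed soon, so their patches can be requested in the same batch.

def predict_codepoints(needed, frequency, top_n=10):
    """Return `needed` plus the top_n most frequent codepoints overall.
    `frequency` is a hypothetical codepoint -> usage-count mapping."""
    predicted = sorted(frequency, key=frequency.get, reverse=True)[:top_n]
    return set(needed) | set(predicted)
```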

2. Opt-In Mechanism

Web pages can choose to opt in to incremental transfer for a font via the use of a CSS font tech keyword (CSS Fonts 4 § 11.1 Font tech) inside the @font-face block. The keyword incremental is used to indicate that the referenced font contains IFT data and should only be loaded by a user agent which supports incremental font transfer.

@font-face {
    font-family: "MyCoolWebFont";
    src: url("MyCoolWebFont.otf") tech(incremental);
}

Note: Each individual @font-face block may or may not opt in to IFT. This is due to the variety of ways fonts are used on web pages. Authors have control over which fonts they want to use this technology with, and which they do not.

Note: the IFT tech keyword can be used in conjunction with other font tech specifiers to perform font feature selection. For example, a @font-face could include two URIs: one with tech(incremental, color-COLRv1) and the other with tech(incremental, color-COLRv0).

2.1. Offline Usage

Special consideration must be taken when saving a page for offline usage that uses an incrementally transferred font, since the saved page won’t be able to increment the font if content changes (e.g. due to JavaScript execution). In these cases the page saving mechanism should fully expand the incremental font by invoking Fully Expand a Font Subset and replace references to the incremental font with the fully expanded one.

3. Definitions

3.1. Font Subset

A font subset is a modified version of a font file [iso14496-22] that contains only the data needed to render a subset of the codepoints, layout features, and design-variation space supported by the original font. When a subsetted font is used to render text using any combination of the subset codepoints, layout features, or design-variation space, it should render identically to the original font. This includes rendering with the use of any optional typographic features that a renderer may choose to use from the original font, such as hinting instructions. Design variation spaces are specified using the user-axis scales (OpenType Specification § otvaroverview#coordinate-scales-and-normalization).

A font subset definition describes the minimum data (codepoints, layout features, variation axis space) that a font subset should support.

Note: For convenience the remainder of this document links to the [open-type] specification which is a copy of [iso14496-22].

3.2. Font Patch

A font patch is a file which encodes changes to be made to an IFT-encoded font. Patches are used to extend an existing font subset and provide expanded coverage.

A patch format is a specified encoding of changes to be applied relative to a font subset. A set of changes encoded according to the format is a font patch. Each patch format has an associated patch application algorithm which takes a font subset and a font patch encoded in the patch format as input and outputs an extended font subset.

3.3. Patch Map

A patch map is an OpenType table which encodes a collection of mappings from font subset definitions to URIs which host patches that extend the incremental font. A patch map table encodes a list of patch map entries, where each entry has a key and a value. The key is a font subset definition; the value is a URI, the § 6 Font Patch Formats used by the data at the URI, and the compatibility ID of the patch map table. More details of the format of patch maps can be found in § 5 Extensions to the Font Format.

Patch Map Entry summary:

Key Value

3.4. Explanation of Data Types

Encoded data structures in the remainder of this specification are described in terms of the data types defined in OpenType Specification § otff#data-types. As with the rest of OpenType, all fields use "big-endian" byte ordering.

4. Extending a Font Subset

This section defines the algorithm that a client uses to extend an incremental font subset to cover additional codepoints, layout features and/or design space.

4.1. Patch Invalidations

The application of a patch to an incremental font subset may invalidate some or all of the other mapped patches, so that if one of those were to be applied the result would be a corrupt font. Patch validity is tracked by the compatibility ID from the § 5.2 Patch Map Table. Every patch has a compatibility ID encoded within it which needs to match the compatibility ID from the § 5.2 Patch Map Table which lists that patch. Every patch is categorized into one of three types, depending on how it does or does not invalidate other patches in the map(s):

The invalidation behavior of a specific patch is encoded in its format number, which can be found in § 6.1 Formats Summary.

4.2. Incremental Font Extension Algorithm

Extend an Incremental Font Subset

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Set extended font subset to font subset.

  2. Load the 'IFT ' and 'IFTX' (if present) mapping tables from extended font subset. Both tables are formatted as a § 5.2 Patch Map Table. Check that they are valid according to the requirements in § 5.2 Patch Map Table. If either table is not valid, invoke Handle errors. If extended font subset does not have an 'IFT ' table, then it is not an incremental font and cannot be extended, return extended font subset.

  3. For each of tables 'IFT ' and 'IFTX' (if present): convert the table into a list of entries by invoking Interpret Format 1 Patch Map or Interpret Format 2 Patch Map. Concatenate the returned entry lists into a single list, entry list.

  4. For each entry in entry list invoke Check entry intersection with entry and target subset definition as inputs, if it returns false remove entry from entry list.

  5. Remove any entries in entry list which have a patch URI which was loaded and applied previously during the execution of this algorithm.

  6. If entry list is empty, then the extension operation is finished, return extended font subset.

  7. Pick one entry from entry list with the following procedure:

    • If entry list contains one or more patch map entries which have a patch format that is Full Invalidation then, select exactly one of the Full Invalidation entries in entry list. The criteria for selecting the single entry is left up to the implementation to decide.

    • Otherwise if entry list contains one or more patch map entries which have a patch format that is Partial Invalidation then, select exactly one of the Partial Invalidation entries in entry list. The criteria for selecting the single entry is left up to the implementation to decide.

    • Otherwise select exactly one of the No Invalidation entries in entry list. The criteria for selecting the single entry is left up to the implementation to decide.

  8. Load patch file by invoking Load patch file with the original font subset URI as the original font URI and the entry patch URI as the patch URI. The number of patches that a client can load and apply during a single execution of this algorithm is limited. If either count has been exceeded, this is an error; invoke Handle errors.

  9. Apply patch file to extended font subset using the appropriate application algorithm (matching the patch format in entry) from § 6 Font Patch Formats, together with the patch URI and the compatibility ID from entry.

  10. Go to step 2.

Note: the algorithm here presents patch loads as being done one at a time; however, to improve performance client implementations are encouraged to pre-fetch patch files that will be applied in later iterations of the algorithm. The invalidation categories can be used to predict which intersecting patches from step 4 will remain valid to apply. For example: in a case where there are only "No Invalidation" intersecting patches, the client could safely load all intersecting patches in parallel, since no patch application will invalidate any of the other intersecting patches.
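The selection procedure in step 7 can be sketched as follows. The category constants and the "first candidate" tie-break are illustrative choices only, since the specification leaves the selection criteria to implementations.

```python
# Sketch of step 7: prefer Full Invalidation entries, then Partial, then
# No Invalidation. `invalidation_of` is a hypothetical callable that
# returns the invalidation category for a patch map entry.

FULL, PARTIAL, NONE = "full", "partial", "none"   # invalidation categories

def select_entry(entry_list, invalidation_of):
    """Pick exactly one entry from entry_list, or None if it is empty."""
    for category in (FULL, PARTIAL, NONE):
        candidates = [e for e in entry_list if invalidation_of(e) == category]
        if candidates:
            return candidates[0]   # implementation-defined tie-break
    return None
```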

Check entry intersection

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. For each set in subset definition (codepoints, feature tags, design space) check if the set intersects the corresponding set from mapping entry. A set intersects when:

    • If the mapping entry set is empty: the sets intersect (whether or not the subset definition set is empty).

    • If the mapping entry set is not empty and the subset definition set is empty: the sets do not intersect.

    • If both sets are non-empty: the sets intersect if they share at least one member.
  2. If all sets checked in step 1 intersect, then return true for intersects otherwise false.
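The rule in step 1 and the conjunction in step 2 translate directly into code. This non-normative sketch treats each component of the subset definition as a simple set; a real implementation would intersect design-space ranges rather than set members.

```python
# Direct transcription of the intersection rule: an empty mapping-entry set
# matches everything; a non-empty mapping-entry set never matches an empty
# subset-definition set; two non-empty sets must share at least one member.

def sets_intersect(subset_def_set, mapping_entry_set):
    if not mapping_entry_set:
        return True
    if not subset_def_set:
        return False
    return bool(subset_def_set & mapping_entry_set)

def entry_intersects(subset_definition, mapping_entry):
    """Step 2: true only if all three set pairs intersect."""
    return all(
        sets_intersect(subset_definition[k], mapping_entry[k])
        for k in ("codepoints", "features", "design_space")
    )
```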

Load patch file

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Perform reference resolution on Patch URI using Original Font URI as the base URI to produce the target URI.

  2. Retrieve the contents of target URI using the fetching capabilities of the implementing user agent. For web browsers, [fetch] should be used. When using [fetch] a request for patches should use the same CORS settings as the initial request for the IFT font. This means that for a font loaded via CSS the patch request would follow: CSS Fonts 4 § 4.8.2 Font fetching requirements.

  3. Return the retrieved contents as patch file, or an error if the fetch resulted in an error.
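Step 1 is ordinary URI reference resolution against a base URI. For illustration, Python's `urljoin` implements this behavior:

```python
# Non-normative illustration of step 1: resolve the (possibly relative)
# patch URI against the original font URI to produce the target URI.
from urllib.parse import urljoin

def resolve_patch_uri(original_font_uri, patch_uri):
    """Return the target URI for the patch fetch."""
    return urljoin(original_font_uri, patch_uri)
```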

Handle errors

If the process of extending the font subset has failed with an error, some of the data within the font may not be fully loaded, and as a result rendering content which relies on the missing data may result in incorrect renderings. The client may choose to continue using the font, but should only use it for the rendering of codepoints, features, and design space that are fully loaded. An element of the font can be considered fully loaded if data for that element is present in the font (e.g. for codepoints, listed in the cmap) and there are no mapping entries in the font subset which reference that element. Rendering of all other content should fall back to a different font following normal client fallback logic.

If the error occurred during Load patch file then, the client may continue trying to extend the font subset if there are remaining patches available other than the one(s) that failed to load. In the case of all other errors the client must not attempt to further extend the font subset.

4.3. Fully Expanding a Font

This section defines an algorithm that can be used to transform an incremental font into a fully expanded, non-incremental font. This process loads all available data provided by the incremental font and produces a single static font file with no further patches to be applied.

Fully Expand a Font Subset

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Invoke Extend an Incremental Font Subset with font subset. The input target subset definition is a special one which is considered to intersect all entries in the Check entry intersection step. Return the resulting font subset as the expanded font.

5. Extensions to the Font Format

An incremental font follows the existing OpenType format, but includes two new tables identified by the 4-byte tags 'IFT ' and 'IFTX'. These new tables are both patch maps. All incremental fonts must contain the 'IFT ' table. The 'IFTX' table is optional. When both tables are present, the mapping of the font as a whole is the union of the mappings of the two tables. The two new tables are used only in this specification and are not being added to the OpenType specification.

Note: allowing the mapping to be split between two distinct tables allows an incremental font to more easily make use of multiple patch types. For example all patches of one type can be specified in the 'IFT ' table, and all patches of a second type in the 'IFTX' table. Those patches can make updates only to one of the mapping tables and avoid making conflicting updates.

5.1. Incremental Font Transfer and WOFF2

[WOFF2] is commonly used to encode fonts for transfer on the web. Incremental fonts can be encoded with WOFF2 as long as special care is taken. If an incremental font will be encoded by WOFF2 for transfer:

  1. The incremental font should not make use of § 6.2 Brotli Patch patches. The WOFF2 format does not guarantee the ordering of tables in the decoded font. § 6.2 Brotli Patch patches are relative to a specific fixed set of bytes and thus cannot be used if the decoded font has unpredictable decoded bytes. § 6.3 Per Table Brotli patches do not depend on a specific table ordering and may be used.

  2. If the WOFF2 encoding will include a transformed glyf and loca table (WOFF 2.0 § 5.1 Transformed glyf table format) then, the incremental font should not contain § 6.3 Per Table Brotli patches which modify either the glyf or loca table. The WOFF2 format does not guarantee the specific bytes that result from decoding a transformed glyf and loca table, so, as with #1, Brotli patches cannot be used. § 6.4 Glyph Keyed patches may be used in conjunction with a transformed glyf and loca table.

  3. When utilizing a WOFF2-encoded IFT font, the client must first fully decode the WOFF2 font before performing § 4 Extending a Font Subset.

The 'IFT ' and 'IFTX' tables can be processed and Brotli-encoded by a WOFF2 encoder following the standard process defined in WOFF 2.0 § 5 Compressed data format.

5.2. Patch Map Table

A patch map is encoded in one of two formats:

Each format defines an algorithm for interpreting bytes encoded with that format to produce the list of entries it represents. The Extend an Incremental Font Subset algorithm invokes the interpretation algorithms and operates on the resulting entry list. The encoded bytes are the source of truth at all times for the patch map. Patch application during subset extension will alter the encoded bytes of the patch map and as a result the entry list derived from the encoded bytes will change. The extension algorithm reinterprets the encoded bytes at the start of every iteration to pick up any changes made in the previous iteration.

5.2.1. Patch Map Table: Format 1

Format 1 Patch Map encoding:

Type Name Description
uint8 format Set to 1, identifies this as format 1.
uint32 reserved Not used, set to 0.
uint32 compatibilityId[4] Unique ID used to identify patches that are compatible with this font (see § 4.1 Patch Invalidations). The encoder chooses this value. The encoder should set it to a random value which has not previously been used while encoding the IFT font.
uint32 entryCount Number of entries encoded in this table.
uint32 glyphCount Number of glyphs that mappings are provided for. Must match the numGlyphs field from maxp.
Offset32 glyphMapOffset Offset to a Glyph Map sub table. Offset is from the start of this table.
Offset32 featureMapOffset Offset to a Feature Map sub table. Offset is from the start of this table.
uint8 appliedEntriesBitMap[(entryCount + 7)/8] A bit map which tracks which entries have been applied. If bit i is set that indicates the patch for entry i has been applied to this font. Bit 0 is the least significant bit of appliedEntriesBitMap[0], while bit 7 is the most significant bit. Bit 8 is the least significant bit of appliedEntriesBitMap[1] and so on.
uint16 uriTemplateLength Length of the uriTemplate string.
uint8 uriTemplate[uriTemplateLength] A [UTF-8] encoded string. Contains a § 5.2.3 URI Templates which is used to produce URIs associated with each entry.
uint8 patchEncoding Specifies the format of the patches linked to by uriTemplate. Using the ID number from the § 6.1 Formats Summary table.
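The bit packing of appliedEntriesBitMap described above can be checked with simple arithmetic. This non-normative helper illustrates the byte and bit addressing:

```python
# Bit i of appliedEntriesBitMap lives in byte i // 8, at bit position
# i % 8 counted from the least significant bit, as specified above.

def entry_applied(bitmap: bytes, entry_index: int) -> bool:
    """Return True if the patch for entry_index has been applied."""
    return bool(bitmap[entry_index // 8] & (1 << (entry_index % 8)))
```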

Glyph Map encoding:

A glyph map table associates each glyph index in the font with an entry index.

Type Name Description
uint16 firstMappedGlyph All glyph indices less than firstMappedGlyph are implicitly mapped to entry index 0.
uint8/uint16 entryIndex[glyphCount - firstMappedGlyph] The entry index for glyph i is stored in entryIndex[i - firstMappedGlyph]. Array members are uint8 if entryCount is less than 256, otherwise they are uint16.
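The following sketch illustrates decoding the entryIndex array, including the uint8/uint16 width selection, and the implicit mapping of low glyph indices to entry 0. The function names are illustrative only.

```python
# Non-normative decoding of the Glyph Map sub table fields described above.
import struct

def read_entry_index(data, glyph_count, first_mapped_glyph, entry_count):
    """Decode entryIndex: uint8 members if entryCount < 256, else uint16
    (big-endian, as with all OpenType fields)."""
    n = glyph_count - first_mapped_glyph
    if entry_count < 256:
        return list(struct.unpack(f">{n}B", data[:n]))
    return list(struct.unpack(f">{n}H", data[:2 * n]))

def glyph_entry_index(glyph_id, first_mapped_glyph, entry_index):
    """Glyphs below firstMappedGlyph are implicitly mapped to entry 0."""
    if glyph_id < first_mapped_glyph:
        return 0
    return entry_index[glyph_id - first_mapped_glyph]
```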

Feature Map encoding:

A feature map table associates combinations of feature tags and glyphs with an entry index.

Type Name Description
uint16 featureCount Number of featureRecords.
FeatureRecord featureRecords[featureCount] Provides mappings for a specific feature tag. Must be sorted by featureTag with any feature tag occurring at most once.
EntryMapRecord entryMapRecords[variable] Provides the key (entry index) for each feature mapping. The entryMapRecords array contains as many entries as the sum of the entryMapCount fields in the featureRecords array, with entryMapRecords[0] corresponding to the first entry of featureRecords[0], entryMapRecords[featureRecord[0].entryMapCount] corresponding to the first entry of featureRecords[1], entryMapRecords[featureRecords[0].entryMapCount + featureRecord[1].entryMapCount]] corresponding to the first entry of featureRecords[2], and so on.

FeatureRecord encoding:

Type Name Description
Tag featureTag The feature tag this mapping is for.
uint8/uint16 firstEntryIndex uint8 if entryCount is less than 256, otherwise uint16. The first entry index this record maps to.
uint8/uint16 entryMapCount uint8 if entryCount is less than 256, otherwise uint16. The number of EntryMapRecords associated with this feature.

EntryMapRecord encoding:

Type Name Description
uint8/uint16 firstEntryIndex uint8 if entryCount is less than 256, otherwise uint16.
uint8/uint16 lastEntryIndex uint8 if entryCount is less than 256, otherwise uint16. Must be greater than or equal to firstEntryIndex.

An entry map record matches any entry indices that are greater than or equal to firstEntryIndex and less than or equal to lastEntryIndex.

5.2.1.1. Interpreting Format 1

This algorithm is used to convert a format 1 patch map into a list of patch map entries.

Interpret Format 1 Patch Map

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Check that the patch map has format equal to 1 and is valid according to the requirements in § 5.2.1 Patch Map Table: Format 1. If it is not return an error.

  2. For each unique entry index in entryIndex:

    • If the bit for entry index in appliedEntriesBitMap is set to 1, skip this entry index.

    • Collect the set of glyph indices that map to the entry index.

    • Convert the set of glyph indices to a set of unicode code points using the code point to glyph mapping in the cmap table of font subset. Ignore any glyph indices that are not mapped by cmap.

    • Convert entry index into a URI by applying uriTemplate following § 5.2.3 URI Templates.

    • Add an entry to entry list whose subset definition contains only the unicode code point set and maps to the generated URI, the patch encoding specified by patchEncoding, and compatibilityId.

  3. For each FeatureRecord and associated EntryMapRecord in featureRecords and entryMapRecords:

  4. Return entry list.
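Step 2 of the algorithm can be sketched as follows, using plain dictionaries and callables as simplified stand-ins for the glyph map, the cmap table, the appliedEntriesBitMap, and URI template expansion:

```python
# Non-normative sketch of step 2: for each entry index, skip applied
# entries, collect its glyphs, translate them to code points via cmap,
# and emit an entry mapping the code point set to the entry's URI.

def format1_entries(entry_for_glyph, cmap, applied, make_uri):
    """entry_for_glyph: glyph id -> entry index; cmap: code point -> glyph id;
    applied: entry index -> bool (appliedEntriesBitMap); make_uri: template."""
    glyph_to_codepoints = {}
    for cp, gid in cmap.items():                       # invert cmap
        glyph_to_codepoints.setdefault(gid, set()).add(cp)
    entries = {}
    for gid, entry_index in entry_for_glyph.items():
        if applied(entry_index):                       # already applied: skip
            continue
        cps = glyph_to_codepoints.get(gid, set())      # unmapped glyphs ignored
        entries.setdefault(entry_index, set()).update(cps)
    return {idx: (make_uri(idx), cps) for idx, cps in entries.items()}
```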

5.2.1.2. Remove Entries from Format 1

This algorithm is used to remove entries from a format 1 patch map. This removal modifies the bytes of the patch map but does not change the number of bytes.

Remove Entries from Format 1 Patch Map

The inputs to this algorithm are:

The algorithm:

  1. Check that the patch map has format equal to 1 and is valid according to the requirements in § 5.2.1 Patch Map Table: Format 1. If it is not return an error.

  2. For each unique entry index in entryIndex of patch map:

5.2.2. Patch Map Table: Format 2

Format 2 Patch Map encoding:

Type Name Description
uint8 format Set to 2, identifies this as format 2.
uint32 reserved Not used, set to 0.
uint32 compatibilityId[4] Unique ID used to identify patches that are compatible with this font (see § 4.1 Patch Invalidations). The encoder chooses this value. The encoder should set it to a random value which has not previously been used while encoding the IFT font.
uint8 defaultPatchEncoding Specifies the format of the patches linked to by uriTemplate (unless overridden by an entry). Uses the ID numbers from the § 6.1 Formats Summary table.
uint24 entryCount Number of entries encoded in this table.
Offset32 entries Offset to a Mapping Entries sub table. Offset is from the start of this table.
Offset32 entryIdStringData Offset to a block of data containing the concatenation of all of the entry ID strings. May be null (0). Offset is from the start of this table.
uint16 uriTemplateLength Length of the uriTemplate string.
uint8 uriTemplate[uriTemplateLength] A [UTF-8] encoded string. Contains a § 5.2.3 URI Templates which is used to produce URIs associated with each entry.

Mapping Entries encoding:

Type Name Description
uint8 entries[variable] Byte array containing the encoded bytes of entryCount Mapping Entry structures. Each entry has a variable length, which is determined following Interpret Format 2 Patch Map Entry.

Mapping Entry encoding:

Type Name Description
uint8 formatFlags A bit field. Bit 0 (least significant bit) through bit 5 indicate the presence of optional fields. If bit 6 is set this entry is ignored. Bit 7 is reserved for future use and set to 0.
uint8 featureCount Number of feature tags in the featureTags list. Only present if formatFlags bit 0 is set.
Tag featureTags[featureCount] List of feature tags in the entry’s font subset definition. Only present if formatFlags bit 0 is set.
uint16 designSpaceCount Number of elements in the design space list. Only present if formatFlags bit 0 is set.
Design Space Segment designSpaceSegments[designSpaceCount] List of design space segments in the entry’s font subset definition. Only present if formatFlags bit 0 is set.
uint8 copyCount Number of entries in the copyIndices list. Only present if formatFlags bit 1 is set.
uint24 copyIndices[copyCount] List of indices from the entries array whose font subset definition should be copied into this entry. All values must be less than the index of this Mapping Entry in entries. Only present if formatFlags bit 1 is set.
int24 entryIdDelta Signed delta which is used to calculate the id for this entry. The id for this entry is the entry id of the previous Mapping Entry + 1 + entryIdDelta. Only present if formatFlags bit 2 is set and entryIdStringData is null (0). If not present, the delta is assumed to be 0.
uint16 entryIdStringLength The number of bytes that the id string for this entry occupies in the entryIdStringData data block. Only present if formatFlags bit 2 is set and entryIdStringData is not null (0). If not present, the length is assumed to be 0.
uint8 patchEncoding Specifies the format of the patch linked to by this entry. Uses the ID numbers from the § 6.1 Formats Summary table. Overrides defaultPatchEncoding. Only present if formatFlags bit 3 is set.
uint16/uint24 bias Bias value which is added to all codepoint values in the codepoints set. If formatFlags bit 4 is 0 and bit 5 is 1, this field is present and is a uint16. If formatFlags bit 4 is 1 and bit 5 is 1, this field is present and is a uint24. Otherwise it is not present.
uint8 codepoints[variable] Set of codepoints for this mapping. Encoded as a Sparse Bit Set. Only present if formatFlags bit 4 and/or 5 is set. The length is determined by following the decoding procedures in § 5.2.2.3 Sparse Bit Set.

If an encoder is producing patches that will be stored on a file system and then served, it is recommended that only numeric entry IDs be used (via entryIdDelta), as these will generally produce the smallest encoding of the format 2 patch map. String IDs are useful in cases where patches are not stored in advance, and the ID strings can then be used to encode information about the patch being requested.

Design Space Segment encoding:

Type Name Description
Tag tag Axis tag value.
Fixed start Start (inclusive) of the segment. This value uses the user axis scale: OpenType Specification § otvaroverview#coordinate-scales-and-normalization.
Fixed end End (inclusive) of the segment. Must be greater than or equal to start. This value uses the user axis scale: OpenType Specification § otvaroverview#coordinate-scales-and-normalization.
5.2.2.1. Interpreting Format 2

This algorithm is used to convert a format 2 patch map into a list of patch map entries.

Interpret Format 2 Patch Map

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Check that the patch map has format equal to 2 and is valid according to the requirements in § 5.2.2 Patch Map Table: Format 2. If it is not return an error.

  2. Initialize last entry id to 0, current byte to 0, and current id string byte to 0.

  3. Invoke Interpret Format 2 Patch Map Entry, entryCount times. For each invocation:

    • pass in the bytes from patch map starting from entries[current byte] to the end of patch map, the bytes from patch map starting from entryIdStringData[current id string byte] to the end of patch map if entryIdStringData is non-zero, last entry id, defaultPatchEncoding, and uriTemplate.

    • Set last entry id to the returned entry id.

    • Add the returned consumed byte count to current byte.

    • Add the returned consumed id string byte count to current id string byte.

    • If the returned value of ignored is false, then set the compatibility ID of the returned entry to compatibilityId and add the entry to entry list.

  4. Return entry list.

Interpret Format 2 Patch Map Entry

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. For all of the following steps, whenever data is loaded from entry bytes, increment consumed bytes by the number of bytes read.

  2. If id string bytes is not present then, set entry id = last entry id + 1.

  3. Set the patch encoding of entry to default patch encoding.

  4. Read formatFlags from entry bytes.

  5. If formatFlags bit 0 is set, then the feature tag and design space lists are present:

    • Read the feature tag list specified by featureCount and featureTags from entry bytes and add the loaded tags to entry.

    • Read the design space segment list specified by designSpaceCount and designSpaceSegments from entry bytes and add the design space segments to entry. Each segment defines an interval from start to end inclusive for the axis identified by tag.

  6. If formatFlags bit 1 is set, then the copy indices list is present:

    • Read the copy indices list specified by copyCount and copyIndices from entry bytes.

    • The copy indices refer to previously loaded entries. 0 is the first Mapping Entry in entries, 1 the second and so on. For each index in copyIndices locate the previously loaded entry with a matching index and add all code points, feature tags, and design space segments from it to entry.

  7. If formatFlags bit 2 is set, then an id delta or id string length is present:

    • If id string bytes is not present then, read the id delta specified by entryIdDelta from entry bytes and add the delta to entry id. If entry id is negative this is an error and the mapping is malformed.

    • Otherwise if id string bytes is present then, read entryIdStringLength bytes from id string bytes and set entry id to the result.

  8. If formatFlags bit 3 is set, then a patch encoding is present. Read the encoding specified by patchEncoding from entry bytes and set the patch encoding of entry to the read value.

  9. If one or both of formatFlags bit 4 and bit 5 are set, then a codepoint list is present:

    • If formatFlags bit 4 is 0 and bit 5 is 1, then read the 2 byte (uint16) bias value from entry bytes.

    • If formatFlags bit 4 is 1 and bit 5 is 1, then read the 3 byte (uint24) bias value from entry bytes.

    • Otherwise the bias is 0.

    • Read the sparse bit set codepoints from entry bytes following § 5.2.2.3 Sparse Bit Set. For each codepoint add the bias value to it and then add the result to the codepoint set in entry.

  10. If formatFlags bit 6 is set, then set ignored to true. Otherwise ignored is false.

  11. Convert entry id into a URI by applying uri template following § 5.2.3 URI Templates. Set the patch uri of entry to the generated URI.

  12. Return entry id, entry, consumed bytes, entryIdStringLength as consumed id string bytes, and ignored.

5.2.2.2. Remove Entries from Format 2

This algorithm is used to remove entries from a format 2 patch map. This removal modifies the bytes of the patch map but does not change the number of bytes.

Remove Entries from Format 2 Patch Map

The inputs to this algorithm are:

This algorithm is a modified version of Interpret Format 2 Patch Map, invoke Interpret Format 2 Patch Map with patch map as an input but with the following changes:

5.2.2.3. Sparse Bit Set

A sparse bit set is a data structure which compactly stores a set of distinct unsigned integers. The set is represented as a tree where each node has a fixed number of children that recursively sub-divides an interval into equal partitions. A tree of height H with branching factor B can store set membership for integers in the interval [0, B^H - 1] inclusive. The tree is encoded into an array of bytes for transport.

Sparse Bit Set encoding:

Type Name Description
uint8 header Bits 0 (least significant) and 1 encode the tree's branching factor B via Branch Factor Encoding. Bits 2 through 6 are a 5-bit unsigned integer which encodes the value of H. Bit 7 is set to 0 and reserved for future use.
uint8 treeData[variable] Binary encoding of the tree.

The exact length of treeData is initially unknown; its length is determined by executing the decoding algorithm. When using branch factors of 2 or 4 the last node may only partially consume the bits in a byte. In that case all remaining bits are unused and ignored.

Branch Factor Encoding:

Bit 1 Bit 0 Branch Factor (B)
0 0 2
0 1 4
1 0 8
1 1 32

Decoding sparse bit set treeData

The inputs to this algorithm are:

The algorithm outputs:

The algorithm, using a FIFO (first in first out) queue Q:

  1. If H is equal to 0, then this is an empty set and no bytes of treeData are consumed. Return an empty set.

  2. Insert the tuple (0, 1) into Q.

  3. Initialize S to an empty set.

  4. treeData is interpreted as a string of bits where the least significant bit of treeData[0] is the first bit in the string, the most significant bit of treeData[0] is the 8th bit, and so on.

  5. If Q is empty return S.

  6. Extract the next tuple t from Q. The first value in t is start and the second value is depth.

  7. Remove the next B bits from the treeData bit string. The first removed bit is v1, the second is v2, and so on until the last removed bit which is vB. If prior to removal there were fewer than B bits left in treeData, then treeData is malformed, return an error.

  8. If all bits v1 through vB are 0, then insert all integers in the interval [start, start + B^(H - depth + 1)) into S. Go to step 5.

  9. For each vi which is equal to 1 in v1 through vB: If depth is equal to H add the integer start + i - 1 to S. Otherwise, insert the tuple (start + (i - 1) * B^(H - depth), depth + 1) into Q.

  10. Go to step 5.

Note: when encoding sparse bit sets the encoder can use any of the possible branching factors, but it is recommended to use 4 as that has been shown to give the smallest encodings for most Unicode codepoint sets typically encountered.

The set {2, 33, 323} in a tree with a branching factor of 8 can be encoded as the bit string:
bit string:
|-- header --|- lvl 0 |---- level 1 ----|------- level 2 -----------|
| B=8 H=3    |   n0   |   n1       n2   |   n3       n4       n5    |
[ 01  11000 0 10000100 10001000 10000000 00100000 01000000 00010000 ]

Which then becomes the byte string:
[
  0b00001110,
  0b00100001,
  0b00010001,
  0b00000001,
  0b00000100,
  0b00000010,
  0b00001000
]
The empty set in a tree with a branching factor of 2 is encoded as the bit string:
bit string:
|-- header -- |
| B=2 H=0     |
[ 00  00000 0 ]

Which then becomes the byte string:
[
  0b00000000,
]
The set {0, 1, 2, ..., 17} can be encoded with a branching factor of 4 as:
bit string:
|-- header --| l0 |- lvl 1 -| l2  |
| B=4 H=3    | n0 | n1 | n2 | n3  |
[ 10  11000 0 1100 0000 1000 1100 ]

byte string:
[
  0b00001101,
  0b00000011,
  0b00110001
]
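The decoding algorithm above can be sketched in Python (a non-normative illustration; the function name is hypothetical). It reproduces the three worked examples:

```python
from collections import deque

def decode_sparse_bit_set(data: bytes) -> set[int]:
    """Decode a sparse bit set into a set of unsigned integers.

    Raises ValueError if treeData runs out of bits (malformed input).
    """
    if not data:
        raise ValueError("missing header byte")
    header = data[0]
    branch = [2, 4, 8, 32][header & 0b11]  # Branch Factor Encoding table
    height = (header >> 2) & 0b11111       # bits 2 through 6: H

    if height == 0:
        return set()                       # empty set; no treeData consumed

    # treeData as a bit string, least significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in data[1:] for i in range(8)]

    queue = deque([(0, 1)])                # FIFO of (start, depth) tuples
    result: set[int] = set()
    pos = 0                                # next unread bit

    while queue:
        start, depth = queue.popleft()
        if pos + branch > len(bits):
            raise ValueError("treeData is malformed")
        node = bits[pos:pos + branch]
        pos += branch
        if not any(node):
            # All-zero node: the entire interval below it is in the set.
            result.update(range(start, start + branch ** (height - depth + 1)))
            continue
        for i, v in enumerate(node, start=1):
            if not v:
                continue
            if depth == height:
                result.add(start + i - 1)
            else:
                queue.append((start + (i - 1) * branch ** (height - depth),
                              depth + 1))
    return result
```

For instance, the seven-byte encoding of {2, 33, 323} shown above decodes back to exactly that set.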

5.2.3. URI Templates

URI templates [rfc6570] are used to convert numeric or string IDs into URIs where patch files are located. A string ID is a sequence of Unicode code point values obtained from decoding a [UTF-8] string (see URI Template § section-1.6). Several variables are defined which are used to produce the expansion of the template:

Variable Value
id The input id encoded as a base32hex string (using the digits 0-9, A-V) with padding omitted. When the id is an unsigned integer it must first be converted to a big endian 64 bit unsigned integer, but then all leading bytes that are equal to 0 are removed before encoding. (For example, when the integer is less than 256 only one byte is encoded.) When the input id is a string the raw bytes are encoded as base32hex.
d1 The last character of the string in the id variable. If id variable is empty then, the value is the character _ (U+005F).
d2 The second last character of the string in the id variable. If the id variable has less than 2 characters then, the value is the character _ (U+005F).
d3 The third last character of the string in the id variable. If the id variable has less than 3 characters then, the value is the character _ (U+005F).
d4 The fourth last character of the string in the id variable. If the id variable has less than 4 characters then, the value is the character _ (U+005F).
id64 The input id encoded as a base64url string (using the digits A-Z, a-z, 0-9, - (minus) and _ (underline)) with padding included. Because the padding character is '=', it must be URL-encoded as "%3D". When the id is an unsigned integer it must first be converted to a big endian 64 bit unsigned integer, but then all leading bytes that are equal to 0 are removed before encoding. (For example, when the integer is less than 256 only one byte is encoded.) When the input id is a string its raw bytes are encoded as base64url.

Some example inputs and the corresponding expansions:

Template Input ID Expansion
//foo.bar/{id} 123 //foo.bar/FC
//foo.bar{/d1,d2,id} 478 //foo.bar/0/F/07F0
//foo.bar{/d1,d2,d3,id} 123 //foo.bar/C/F/_/FC
//foo.bar{/d1,d2,d3,id} baz //foo.bar/K/N/G/C9GNK
//foo.bar{/d1,d2,d3,id} z //foo.bar/8/F/_/F8
//foo.bar{/d1,d2,d3,id} àbc //foo.bar/0/0/4/OEG64OO
//foo.bar{/id64} àbc //foo.bar/w6BiYw%3D%3D
//foo.bar{/+id64} àbcd //foo.bar/w6BiY2Q=

6. Font Patch Formats

In incremental font transfer font subsets are extended by applying patches. This specification defines three patch formats, each appropriate to its own set of augmentation scenarios. A single encoding can make use of more than one patch format.

6.1. Formats Summary

The following patch formats are defined by this specification:

Name Format Number Invalidation Description
§ 6.2 Brotli Patch 1 Full Invalidation A brotli encoded binary diff that uses the entire font subset as a base.
§ 6.3 Per Table Brotli 2 Full Invalidation A collection of brotli encoded binary diffs that use tables from the font subset as bases.
§ 6.3 Per Table Brotli 3 Partial Invalidation A collection of brotli encoded binary diffs that use tables from the font subset as bases.
§ 6.4 Glyph Keyed 4 No Invalidation Contains a collection of opaque binary blobs, each associated with a glyph id and table.

More detailed descriptions of each algorithm can be found in the following sections.

6.2. Brotli Patch

In a brotli patch the target file is encoded with brotli compression using the source file as a shared LZ77 dictionary. A brotli encoded patch consists of a short header followed by brotli encoded data.

Brotli patch encoding:

Type Name Description
Tag format Identifies the encoding as brotli, set to 'ifbr'
uint32 reserved Reserved for future use, set to 0.
uint32 compatibilityId[4] The compatibility id of the font subset which this patch can be applied to. See § 4.1 Patch Invalidations.
uint32 length The uncompressed length of brotliStream.
uint8 brotliStream[variable] Brotli encoded byte stream.

6.2.1. Applying Brotli Patches

This patch application algorithm is used to apply a brotli patch to extend a font subset to cover additional codepoints, features, and/or design-variation space.

Apply brotli patch

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Check that the format field in patch is equal to 'ifbr', if it is not equal, then patch is not correctly formatted. Patch application has failed, return an error.

  2. Check that the compatibilityId field in patch is equal to compatibility id. If there is no match, or base font subset does not have either an 'IFT ' or 'IFTX' table, then patch application has failed, return an error.

  3. Decode the brotli encoded data in brotliStream following Brotli Compressed Data Format § section-10 and using the base font subset as a shared LZ77 dictionary. Return the decoded result as the extended font subset.

6.3. Per Table Brotli

A per table brotli patch contains a collection of patches which are applied to the individual font tables in the input font file. Each table patch is encoded with brotli compression using the corresponding table from the input font file as a shared LZ77 dictionary. A per table brotli encoded patch consists of a short header followed by one or more brotli encoded patches. In addition to patching tables, patches may also replace (existing table data is not used) or remove tables in a font subset.

Per table brotli patch encoding:

Type Name Description
Tag format Identifies the encoding as per table brotli, set to 'ifbt'
uint32 reserved Reserved for future use, set to 0.
uint32 compatibilityId[4] The id of the font subset which this patch can be applied to. See § 4.1 Patch Invalidations.
uint16 patchesCount The number of entries in the patches array.
Offset32 patches[patchesCount+1] Each entry is an offset from the start of this table to a TablePatch. Offsets must be sorted in ascending order.

The difference between two consecutive offsets in the patches array gives the size of that TablePatch.

TablePatch encoding:

Type Name Description
Tag tag The tag that identifies the font table this patch applies to.
uint8 flags Bit-field. If bit 0 (least significant bit) is set this patch replaces the existing table. If bit 1 is set this table is removed.
uint32 length The uncompressed length of brotliStream.
uint8 brotliStream[variable] Brotli encoded byte stream.

6.3.1. Applying Per Table Brotli Patches

This patch application algorithm is used to apply a per table brotli patch to extend a font subset to cover additional codepoints, features, and/or design-variation space.

Apply per table brotli patch

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Initialize extended font subset to be an empty font with no tables.

  2. Check that the format field in patch is equal to 'ifbt', if it is not equal then patch is not correctly formatted. Patch application has failed, return an error.

  3. Check that the compatibilityId field in patch is equal to compatibility id. If there is no match, or base font subset does not have either an 'IFT ' or 'IFTX' table, then patch application has failed, return an error.

  4. In the following steps, adding a table to extended font subset consists of adding the table’s data to the font and inserting a new entry into the table directory according to the requirements of the OpenType specification. That entry includes a checksum for the table data. When an existing table is copied unmodified, the client can re-use the checksum from the entry in the source font. Otherwise a new checksum will need to be computed.

  5. For each entry in patches, with index i:

  6. For each table in base font subset which has a tag that was not found in any of the entries processed in step 5, add a copy of that table to extended font subset.

6.4. Glyph Keyed

A glyph keyed patch contains a collection of data chunks that are each associated with a glyph index and a font table. The encoded data replaces any existing data for that glyph index in the referenced table. Glyph keyed patches can encode data for glyf/loca, gvar, CFF, and CFF2 tables.

Glyph keyed patch encoding:

Type Name Description
Tag format Identifies the encoding as glyph keyed, set to 'ifgk'
uint32 reserved Reserved for future use, set to 0.
uint8 flags Bit-field. If bit 0 (least significant bit) is set then glyphIds uses uint24’s, otherwise it uses uint16’s.
uint32 compatibilityId[4] The compatibility id of the font subset which this patch can be applied to. See § 4.1 Patch Invalidations.
uint32 length The uncompressed length of brotliStream.
uint8 brotliStream[variable] Brotli encoded GlyphPatches table.

GlyphPatches encoding:

Type Name Description
uint32 glyphCount The number of glyphs encoded in the patch.
uint8 tableCount The number of tables the patch has data for.
uint16/uint24 glyphIds[glyphCount] An array of glyph indices included in the patch. Elements are uint24’s if bit 0 (least significant bit) of flags is set, otherwise elements are uint16’s.
Tag tables[tableCount] An array of tables (by tag) included in the patch.
Offset32 glyphDataOffsets[glyphCount * tableCount + 1] An array of offsets to glyph data for each table. The first glyphCount offsets correspond to tables[0], the next glyphCount offsets (if present) correspond to tables[1], and so on. All offsets are from the start of the GlyphPatches table. Offsets must be sorted in ascending order.
uint8 glyphData[variable] The actual glyph data picked out by the offsets.

The difference between two consecutive offsets in the glyphDataOffsets array gives the size of that glyph data.
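Given the GlyphPatches layout above, locating the data for tables[i] and glyphIds[j] is simple index arithmetic (a non-normative sketch with hypothetical names):

```python
def glyph_data_slice(glyph_data_offsets, glyph_count, i, j):
    """Offset and length of the data for tables[i] and glyphIds[j].

    Offsets are from the start of the GlyphPatches table; the length is
    the difference between two consecutive glyphDataOffsets entries.
    """
    start = glyph_data_offsets[i * glyph_count + j]
    end = glyph_data_offsets[i * glyph_count + j + 1]
    return start, end - start
```

A zero-length slice (two equal consecutive offsets) simply means the patch carries no data for that glyph in that table.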

6.4.1. Applying Glyph Keyed Patches

This patch application algorithm is used to apply a glyph keyed patch to extend a font subset to cover additional codepoints, features, and/or design-variation space.

Apply glyph keyed patch

The inputs to this algorithm are:

The algorithm outputs:

The algorithm:

  1. Check that the format field in patch is equal to 'ifgk', if it is not equal, then patch is not correctly formatted and patch application has failed, return an error.

  2. Check that the compatibilityId field in patch is equal to compatibility id. If there is no match, or base font subset does not have either an 'IFT ' or 'IFTX' table, then patch application has failed, return an error.

  3. Decode the brotli encoded data in brotliStream following Brotli Compressed Data Format § section-10. The decoded data is a GlyphPatches table.

  4. For each font table listed in tables, with index i:

    • Using the corresponding table in base font subset, synthesize a new table where the data for each glyph is replaced with the data corresponding to that glyph index from glyphData if present, otherwise copied from the corresponding table in base font subset for that glyph index.

    • The patch glyph data for a glyph index is located by finding glyphIds[j] equal to the glyph index. The offset to the associated glyph data is glyphDataOffsets[i * glyphCount + j]. The length of the associated glyph data is glyphDataOffsets[i * glyphCount + j + 1] minus glyphDataOffsets[i * glyphCount + j].

    • The specific process for synthesizing the new table depends on the specified format of the table. Any non-glyph associated data should be copied from the table in base font subset. Tables of the type glyf/loca, gvar, CFF, or CFF2 are supported. Entries for tables of any other types must be ignored.

    • If base font subset does not have a matching table, return an error.

    • Insert the synthesized table into extended font subset.

  5. Locate the § 5.2 Patch Map Table which has the same compatibilityId as compatibility id. If it is a format 1 patch map then, invoke Remove Entries from Format 1 Patch Map with the patch map table and patch uri as an input. Otherwise if it is a format 2 patch map then, invoke Remove Entries from Format 2 Patch Map with the patch map table and patch uri as an input. Copy the modified patch map table into extended font subset.

  6. For each table in base font subset which has a tag that was not found in any of the entries processed in step 4 or 5, add a copy of that table to extended font subset.

  7. If the contents of any font table were modified during the previous steps then, for each modified table: update the checksums in the font’s table directory to match the table’s new contents.

7. Encoding

An encoder is a tool which produces an incremental font and set of associated patches. "Encoding" refers to the process of using an encoder, including whatever parameters an encoder requires or allows to influence the result in a particular case. The incremental font and associated patches produced by a compliant encoder:

  1. Must meet all of the requirements in § 5 Extensions to the Font Format and § 6 Font Patch Formats.

  2. Must be consistent, that is: for any possible font subset definition the result of invoking Extend an Incremental Font Subset with that subset definition and the incremental font must always be the same regardless of the particular order of patch selection chosen in step 6 of Extend an Incremental Font Subset.

  3. When an encoder is used to transform an existing font into an incremental font the associated fully expanded font should be equivalent to the existing font. An equivalent fully expanded font should have all of the same tables as the existing font (excluding the incremental IFT/IFTX tables) and each of those tables should be functionally equivalent to the corresponding table in the existing font. Note: the fully expanded font may not always be an exact binary match with the existing font.

  4. Should preserve the functionality of the fully expanded font throughout the augmentation process, that is: given the fully expanded font derived from the incremental font and any content, then the font subset produced by invoking Extend an Incremental Font Subset with the incremental font and the minimal subset definition covering that content should render identically to the fully expanded font for that content.

When an encoder is used to transform an existing font file into an incremental font and a client is implemented according to the other sections of this document, the intent of the IFT specification is that the appearance and behavior of the font in the client will be the same as if the entire file were transferred to the client. A primary goal of the IFT specification is that the IFT format and protocol can serve as a neutral medium for font transfer, comparable to WOFF2. If an encoder produces an encoding from a source font which meets all of the above requirements (1. through 4.), then the encoding will preserve all of the functionality of the original font. Requirement 3 above ensures that all of the functionality in the original font can be reached. This works in conjunction with requirement 4, which requires that partial versions of an IFT font have functionality equivalent to the full version (the original font here) for content which is a subset of the subset definition used to derive the partial font.

This may be important for cases where a foundry or other rights-owner of a font wants to be confident that the encoding and transfer of that font using IFT will not change its behavior and therefore the intent of the font’s creators. Licenses or contracts might then include requirements about IFT conformance, and situations in which re-encoding a font in WOFF2 format is de facto permissible due to its content-neutrality might also permit IFT encoding of that font.

However, nothing about these requirements on encoding conformance is meant to rule out or deprecate the possibility and practical use of encodings that do not preserve all of the functionality of a source font. Any encoding meeting the minimum requirements (1. and 2. above) is valid and may have an appropriate use. Under some circumstances it might be desirable for an encoded font to omit support for some functionality/data from all of its patch files even if those were included in the original font file. In other cases a font might be directly encoded into the IFT format from font authoring source files. In cases where an encoder chooses not to meet requirement 3 above it is still strongly encouraged to meet 4, which ensures consistent behavior of the font throughout the augmentation process.

7.1. Encoding Considerations

This section is not normative.

The details of the encoding process may differ by encoder and are beyond the scope of this document. However, this section provides guidance that encoder implementations may want to consider, and that can be important to reproducing the appearance and behavior of an existing font file when producing an incremental version of that font.

About § 6.2 Brotli Patch and § 6.3 Per Table Brotli patches

A § 6.2 Brotli Patch patch changes the content of an initial incremental font or subsequently updated font on a whole-file basis. This type of patch is therefore more or less incompatible with the § 6.4 Glyph Keyed patch type, although in theory a § 6.2 Brotli Patch patch could add support for § 6.4 Glyph Keyed patches. § 6.2 Brotli Patch patches also tend to mix poorly with § 6.3 Per Table Brotli patches. Therefore, a § 6.2 Brotli Patch patch will typically mix only with other § 6.2 Brotli Patch patches and will be Fully Invalidating, as the application of a § 6.2 Brotli Patch patch only yields the right result relative to a specific file content.

A § 6.3 Per Table Brotli patch can change the contents of some font tables and not others. Each patched table typically needs to be relative to a specific table content, but other tables can have different contents. Therefore as long as a § 6.3 Per Table Brotli patch does not alter the tables containing glyph data it can be compatible with § 6.4 Glyph Keyed patches and therefore be only Partially Invalidating (in that it will invalidate other § 6.3 Per Table Brotli patches but not § 6.4 Glyph Keyed patches).

Generally speaking the advantage of a § 6.2 Brotli Patch patch over a § 6.3 Per Table Brotli patch is that the content of the patch is compressed together, and can use the full content of the file it patches as a compression "dictionary". The typical advantage of a § 6.3 Per Table Brotli patch over a § 6.2 Brotli Patch patch is that it can be compatible with § 6.4 Glyph Keyed patches. The patch types leave the possibility of less typical patterns open. For example, one could use § 6.3 Per Table Brotli patches for all content other than the glyph tables but then use another set of § 6.3 Per Table Brotli patches for those tables rather than § 6.4 Glyph Keyed patches, and each of these could in theory be Partially Invalidating—leaving them mutually dependent but independent of one another.

The two patch types are similar in that they are at least Partially Invalidating, so that the choice of one from a table (IFT or IFTX) means that all others in that table are invalidated. Assuming the patched result does not represent the fully expanded font, that patch will typically alter the IFT or IFTX table it was listed in to add a new set of patches to further extend the font. This means that the total set of § 6.2 Brotli Patch or § 6.3 Per Table Brotli patches forms a graph, in which each font subset in the segmentation is a node and each patch is an edge. This also means that patches of these types are typically downloaded and applied in series, which has implications for the performance of these patch types relative to latency.

About § 6.4 Glyph Keyed patches

§ 6.4 Glyph Keyed patches are quite distinct from the other patch types. First, § 6.4 Glyph Keyed patches can only modify tables containing glyph outline data, and therefore an incremental font that only uses § 6.4 Glyph Keyed patches must include all other font table data in the initial font file. Second, § 6.4 Glyph Keyed patches are not Invalidating, and can therefore be downloaded and applied independently. This independence means multiple patches can be downloaded in parallel which can significantly reduce the number of round trips needed relative to the invalidating patch types.

Choosing patch formats for an encoding

All encodings must choose one or more patch types to use. § 6.2 Brotli Patch and § 6.3 Per Table Brotli patches allow all types of data in the font to be patched, but because these types are at least Partially Invalidating, the total number of patches needed increases exponentially with the number of segments rather than linearly. § 6.4 Glyph Keyed patches are limited to updating outline and variation delta data, but the number needed scales linearly with the number of segments.

In addition to the number of patches, the encoder should also consider the number of network round trips that will be needed to get patches for typical content. For invalidating patch types it is necessary to make patch requests in series. This means that if some content requires multiple segments then, multiple network round trips may be needed. Glyph keyed patches on the other hand are not invalidating and the patches can be fetched in parallel, needing only a single round trip.

At the extremes the two types of Brotli patches are most appropriate for fonts with sizable non-outline data that only require a small number of patches, with the § 6.2 Brotli Patch type providing better compression. Glyph keyed patches are most appropriate for fonts where the vast majority of data consists of glyph outlines, which is true of many existing CJK fonts.

For fonts that are in-between, or in cases where fine-grained segmentation of glyph data is desired but segmentation of data in other tables is still needed, it can be desirable to mix the § 6.3 Per Table Brotli and § 6.4 Glyph Keyed patch types in this way:

  1. Keep all brotli patch entries in one mapping table and all glyph keyed entries in the other mapping table.

  2. Use per-table brotli patches to update all tables except for the tables touched by the glyph keyed patches (outline, variation deltas, and the glyph keyed patch mapping table). These patches should use a small number of large segments to keep the patch count reasonable.

  3. Because glyph keyed patches reference the specific glyph IDs that are updated, the per-table brotli patches must not change the glyph to glyph ID assignments used in the original font; otherwise, the glyph IDs listed in the glyph keyed patches may become incorrect. In font subsetters this is often available as an option called "retain glyph IDs".

  4. Lastly, use glyph keyed patches to update the remaining tables, here much smaller fine-grained segments can be utilized without requiring too many patches.

The mixed patch type encoding is expected to be a commonly used approach since many fonts will fall somewhere in between the two extremes.

Reducing round trips with invalidating patches

One way to reduce the number of round trips needed with a segmentation that uses one of the invalidating patch types is to provide patches that add multiple segments at once (in addition to patches for single segments). For example consider a font which has 4 segments: A, B, C, and D. The patch table could list patches that add: A, B, C, D, A + B, A + C, A + D, B + C, B + D, and C + D. This would allow any two segments to be added in a single round trip. The downside to this approach is that it further increases the number of unique patches needed.
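The cost of this approach can be quantified: with one patch per single segment plus one per unordered pair of segments, the patch count grows quadratically with the segment count (a non-normative sketch):

```python
from math import comb

def pairwise_patch_count(n_segments: int) -> int:
    """Patches needed so that any one or any two segments can be
    added in a single round trip: singles plus unordered pairs."""
    return n_segments + comb(n_segments, 2)
```

For the four-segment example above this gives 10 patches (4 singles plus 6 pairs), while 100 segments would already require 5050 patches.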

Managing the number of patches

Using brotli patches alongside a large number of segments can result in a very large number of patches being needed, which can have two negative effects. First, the storage space needed for all of the pre-generated patches could be undesirably large. Second, more patches will generally mean lower CDN cache performance, because a higher number of patches represents a higher number of paths from a given subset to a given subset, with different paths being taken by different users depending on the content they access. There are some techniques which can be used to reduce the total number of pre-generated patches:

  1. Use a maximum depth for the patch graph: once this limit is reached, the font is patched to add all remaining parts of the full original font. This will cause the whole remaining font to be loaded after a certain number of segments have been added. Limiting the depth of the graph will reduce the combinatorial explosion of patches at the lower depths.

  2. Alternatively, at lower depths the encoder could begin combining multiple segments into single patches to reduce the fan out at each level.

Choosing segmentations

One of the most important and complex decisions an encoder needs to make is how to segment the data in the encoded font. The discussion above focused on the number of segments, but the performance of an incremental font depends much more on the grouping of data within segments. To maximize efficiency, an encoder needs to group data (e.g. codepoints) that is commonly used together into the same segments. This will reduce the amount of unneeded data loaded by clients when extending the font. The encoder must also decide the size of segments. Smaller segments will produce more patches, and thus incur more overhead by requiring more network requests, but will typically result in less unneeded data in each segment compared to larger segments. When segmenting codepoints, data on codepoint usage frequency can be helpful to guide segmentation.

Some codepoints may have self-evident segmentations, or at least there may be little question as to what codepoints to group together. For example, upper and lowercase letters of the Latin alphabet form a natural group. Other cases may be more complicated. For example, Chinese, Japanese, and Korean share some codepoints, but a codepoint that is high-frequency in Japanese may be lower-frequency in Chinese. In some cases one might choose to optimize an encoding for a single language. Another approach is to produce a compromise encoding. For example, when segmenting an encoder could put codepoints that are frequent in Japanese, Chinese, and Korean into one segment, and then those that are frequent only in Japanese and Chinese into another segment, and so on. Then the codepoints that are frequent in only one language can be handled in the usual way. This will result in less even segment sizes, but means that loading high-frequency patches for any one of the languages will not pull in lower-frequency glyphs.
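The compromise encoding described above can be sketched by grouping codepoints by the exact set of languages in which they are high-frequency. The frequency data below is hypothetical; a real encoder would derive it from text corpora (Python sketch):

```python
from collections import defaultdict

# Hypothetical frequency data: for each codepoint, the set of languages
# in which it is high-frequency.
freq_langs = {
    0x4E00: {"ja", "zh", "ko"},  # frequent in all three languages
    0x4E86: {"ja", "zh"},        # frequent in Japanese and Chinese only
    0x3042: {"ja"},              # frequent only in Japanese
    0xAC00: {"ko"},              # frequent only in Korean
}

# Group codepoints by the exact set of languages using them frequently;
# each group becomes a candidate segment of the compromise encoding.
segments = defaultdict(set)
for cp, langs in freq_langs.items():
    segments[frozenset(langs)].add(cp)
```

With this grouping, loading the high-frequency patches for one language never pulls in codepoints that are frequent only in another.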

Maintaining Functional Equivalence

As discussed in § 7 Encoding, an encoder should preserve the functionality of the original font. Fonts are complex and often contain interactions between codepoints, so maintaining functional equivalence with a partial copy of the font can be tricky. The next two subsections discuss maintaining functional equivalence using the different patch types.

Brotli patches

When preparing § 6.2 Brotli Patch or § 6.3 Per Table Brotli patches, one means of achieving functional equivalence is to leverage an existing font subsetter implementation to produce font subsets that retain the functionality of the original font. The IFT patches can then be derived from these subsets.

A font subsetter produces a font subset from an input font based on a desired font subset definition. The practice of reliably subsetting a font is well understood and has multiple open-source implementations (a full formal description is beyond the scope of this document). It typically involves a reachability analysis, where the data in tables is examined relative to the font subset definition to see which portions can be reached by any possible content covered by the subset definition. Any reachable data is retained in the generated font subset, while any unreachable data may be removed.

In the following example pseudocode, a font subsetter is used to generate an IFT encoded font that utilizes only Brotli patches:

# Encode a font (full_font) into an incremental font that starts at base_subset_def
# and can incrementally add any of subset_definitions. Returns the IFT encoded font
# and set of associated patches.
encode_as_ift(full_font, base_subset_def, subset_definitions):
  base_font = subset(full_font, base_subset_def)
  base_font, patches  = encode_node(full_font, base_font, base_subset_def, subset_definitions)
  return base_font, patches

# Update base_font to add all of the ift patch mappings to reach any of
# subset_definitions and produces the associated patches.
encode_node(full_font, base_font, cur_def, subset_definitions):
  patches = []
  next_fonts = []
  
  for each subset_def in subset_definitions not fully covered by cur_def:
    next_def = subset_def union cur_def
    next_font = subset(full_font, next_def)
    let patch_url be a new unique url

    add a mapping from (subset_def - cur_def) to patch_url into base_font
    next_font, sub_patches = encode_node(full_font, next_font, next_def, subset_definitions)
    patches += sub_patches

    next_fonts += (next_font, next_def, patch_url)

  for each (next_font, next_def, patch_url) in next_fonts:
    patch = brotli_patch_diff(base_font, next_font)
    patches += (patch, patch_url)
  
  return base_font, patches

In this example implementation, if the union of the input base subset definition and the list of subset definitions fully covers the input full font, and the subsetter implementation used correctly retains all functionality, then the above implementation should meet the requirements in § 7 Encoding to be a neutral encoding. This basic encoder implementation is for demonstration purposes and is not meant to be representative of all possible encoder implementations. Notably, it does not make use of or demonstrate glyph keyed patches. Most encoders will likely be more complex and need to consider additional factors, some of which are discussed in the remaining sections.

§ 6.4 Glyph Keyed patches

Because they are parameterized by codepoints and feature tags but can be applied independently of one another, § 6.4 Glyph Keyed patches have additional requirements and cannot be directly derived by using a subsetter implementation. However, such an implementation can help clarify what an encoder needs to do to maintain functional equivalence when producing this type of patch. Consider the result of producing the subset of a font relative to a given font subset definition. We can define the glyph closure of that font subset definition as the full set of glyphs included in the subset, which the subsetter has determined is needed to render any combination of the described codepoints and layout features.

Using that definition, the glyph closure requirement on the full set of § 6.4 Glyph Keyed patches is:

Assuming the subsetter does its job accurately, the glyph closure requirement is a consequence of the requirement for equivalent behavior: Suppose there is a font subset definition such that the subsetter includes glyph *i* in its subset, but an encoder that produces § 6.4 Glyph Keyed patches omits glyph *i* from the set of patches that correspond to that definition. If the subsetter is right, that glyph must be present to retain equivalent behavior when rendering some combination of the codepoints and features in the definition, which means that the incremental font will not behave equivalently when rendering that combination.

Therefore, when generating an encoding utilizing glyph-keyed patches the encoder must determine how to distribute glyphs between all of the patches in a way that meets the glyph closure requirement. This is primarily a matter of looking at the codepoints assigned to a segment and determining what other glyphs must be included in the patch that corresponds to it, as when a glyph variant can be substituted for a glyph included in the segment by codepoint. In some cases a glyph might only be needed when multiple segments have been loaded, in which case that glyph can be added to the patch corresponding to any of those segments. (This can be true of a ligature or a pre-composed accented character.) Finally, after the initial analysis of segments the same glyph might be needed when loading the patches for two or more segments. There are three main strategies for dealing with that situation:

  1. Two or more segments can be combined to be contained within a single patch. This avoids duplicating the common glyphs, but increases the size of the resulting patch.

  2. Alternatively, the glyph can be included in more than one of the patches that correspond to those segments. This will keep the segments smaller, at the cost of duplicating the glyph’s data into multiple patches.

  3. Lastly, the common glyph can be moved into the initial font. This avoids increasing segment sizes and duplicating the glyph data, but will increase the size of the initial font. It also means that the glyph’s data is always loaded, even when not needed. This can be useful for glyphs that are required in many segments or are otherwise complicating the segmentation.
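Strategies 2 and 3 can be sketched by computing which glyphs appear in more than one segment's closure. The segment names and glyph ids below are hypothetical (Python sketch):

```python
from collections import Counter

# Hypothetical glyph closures: the glyph ids each segment's patch must
# carry to satisfy the glyph closure requirement.
closures = {
    "latin":   {1, 2, 3, 40},
    "accents": {4, 5, 40},  # glyph 40 is needed by both segments
}

counts = Counter(g for glyphs in closures.values() for g in glyphs)
shared = {g for g, n in counts.items() if n > 1}

# Strategy 2: duplicate each shared glyph into every patch that needs it
# (the closures above already include it, so patches are used unchanged).
patches_duplicated = {seg: set(glyphs) for seg, glyphs in closures.items()}

# Strategy 3: move shared glyphs into the initial font and strip them
# from the individual patches.
initial_font_glyphs = set(shared)
patches_stripped = {seg: glyphs - shared for seg, glyphs in closures.items()}
```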

Pre-loading data in the Initial Font

In some cases it might be desirable to avoid the overhead of applying patches to an initial file. For example, it could be desirable that the font loaded on a company home page already be able to render the content on that page. The main benefit of such a file is reduced rendering latency: the content can be rendered after a single round-trip.

There are two approaches to including data in the downloaded font file. One is to simply encode the incremental font as a whole so that the data is in the initial file. Any such data will always be available in any patched version of the font. This can be helpful in cases when the same data would otherwise be needed in many different segments.

The other approach is to download an already-patched font. That is, one can encode a font with little or no data in the "base" file but then apply patches to that file on the server side, so that the file downloaded already includes the data contained in those patches.

When only one pre-loaded version of a font is needed these strategies will have roughly equivalent results, but the first is both simpler and in some cases more specific. However, when more than one pre-loaded font is needed, the pre-patching approach will often be better. When using the first approach, one would need to produce multiple encodings, one for each preloaded file. When using the second approach, all of the preloaded files will still share the same overall patch graph, which both reduces the total space needed to store the patches and improves CDN cache efficiency, as all the pre-loaded files will choose subsequent patches from the same set.

Table ordering

In the initial font file (whether encoded as woff2 or not) it is possible to customize the order of the actual table bytes within the file. Encoders should consider placing the mapping tables (IFT and IFTX) plus any additional tables needed to decode the mapping tables (cmap) as early as possible in the file. This will allow optimized client implementations to access the patch mapping prior to receiving all of the font data and potentially initiate requests for any required patches earlier.

Likewise, per-table brotli patches have a separate brotli stream for each patched table, and the format allows these streams to be placed in any order in the patch file. So, for the same reasons, encoders should consider placing updates to the mapping tables, plus any additional tables needed to decode the mapping tables, as early as possible in the patch file.
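The ordering advice above can be sketched as a stable sort over table tags. OpenType tags are four bytes, so the mapping tables are assumed here to be tagged "IFT " (with a trailing space) and "IFTX" (Python sketch):

```python
# Tables to move to the front of the file, in priority order. The exact
# four-byte tags ("IFT " with a trailing space) are an assumption here.
PRIORITY = {"IFT ": 0, "IFTX": 1, "cmap": 2}

def order_tables(tags: list[str]) -> list[str]:
    # sorted() is stable: prioritized tables move to the front while all
    # remaining tables keep their original relative order.
    return sorted(tags, key=lambda t: PRIORITY.get(t, len(PRIORITY)))
```

For example, `order_tables(["glyf", "IFTX", "cmap", "IFT "])` places the mapping tables and cmap first, leaving glyf last.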

Choosing the input ID encoding

The specification supports two encodings for embedding patch IDs in the URL template. The first is base32hex, which is a good match for pre-generated patches that will typically be stored in a filesystem. Base32hex encoding only uses the letters 0-9 and A-V, which are safe letters for a file name in every commonly used filesystem, with no risk of collisions due to case-insensitivity. Because the string is embedded without padding, this format cannot be reliably decoded, so it may be a poor choice for dynamically generated patches. The other encoding is base64url, a variant of base64 encoding appropriate for embedding in a URL or case-sensitive filesystem. When using this encoding, the id is embedded with padding so that the value can be reliably decoded.

The individual character selectors d1 through d4 are relative to the base32hex encoded id only. These are typically used to reduce the number of files stored in a single filesystem directory by spreading related files out into one or more levels of subdirectory according to the trailing letters of the id encoding. These will tend to be evenly distributed among the digits when using integer ids, but may be unevenly distributed or even constant for string ids. Encoders that wish to use string ids with d1 through d4 should take care to make the ends of the id strings vary. It is valid to mix d1 through d4 with a base64url-encoded id.

Privacy Considerations

Content inference from character set

IFT exposes, to the server hosting a Web font, information on the set of characters that the browser wants to render with the font (for details, see § 4 Extending a Font Subset).

For some languages, which use a very large character set (Chinese and Japanese are examples) the vast reduction in total bytes transferred means that Web fonts become usable, including on mobile networks, for the first time. However, for those languages, it is possible that individual requests might be analyzed by a rogue font server to obtain intelligence about the type of content which is being read. It is unclear how feasible this attack is, or what computational complexity would be required to exploit it, unless the characters being requested are very unusual.

More specifically, an IFT font includes a collection of Unicode codepoint groups, and requests will be made for groups that intersect the content being rendered. This provides information to the hosting server that at least one codepoint in a group was needed, but does not reveal which specific codepoints within a group were needed. This is functionally quite similar to the existing CSS Fonts 4 § 4.5 Character range: the unicode-range descriptor and has the same privacy implications. Discussion of the privacy implications of unicode-range can be found in the CSS Fonts 4 specification:

Per-origin restriction avoids fingerprinting

As required by [css-fonts-4]:

"A Web Font must not be accessible in any other Document from the one which either is associated with the @font-face rule or owns the FontFaceSet. Other applications on the device must not be able to access Web Fonts." - CSS Fonts 4 § 10.2 Web Fonts

Since IFT fonts are treated the same as regular fonts in CSS (§ 2 Opt-In Mechanism) these requirements apply and avoid information leaking across origins.

Similar requirements apply to font palette values:

"An author-defined font color palette must only be available to the documents that reference it. Using an author-defined color palette outside of the documents that reference it would constitute a security leak since the contents of one page would be able to affect other pages, something an attacker could use as an attack vector." - CSS Fonts 4 § 9.2 User-defined font color palettes: The @font-palette-values rule

Security Considerations

One security concern is that IFT fonts could potentially generate a large number of network requests for patches. This could cause problems on the client or on the service hosting the patches. The IFT specification contains a couple of mitigations to limit an excessive number of requests:

  1. § 4 Extending a Font Subset: disallows re-requesting the same URI multiple times and has limits on the total number of requests that can be issued during the extension process.

  2. Load patch file: specifies the use of [fetch] in implementing web browsers and matches the CORS settings for the initial font load. As a result cross-origin requests for patch files are disallowed unless the hosting service opts in via the appropriate access control headers.

Changes

Since the Working Draft of 30 May 2023 (see commit history):

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

References

Normative References

[CSS-FONTS-4]
Chris Lilley. CSS Fonts Module Level 4. URL: https://drafts.csswg.org/css-fonts-4/
[ISO14496-22]
Information technology — Coding of audio-visual objects — Part 22: Open Font Format. Under development. URL: https://www.iso.org/standard/87621.html
[OPEN-TYPE]
OpenType Specification. December 2021. Note. URL: https://docs.microsoft.com/en-us/typography/opentype/spec
[PFE-report]
Chris Lilley. Progressive Font Enrichment: Evaluation Report. 15 October 2020. Note. URL: https://www.w3.org/TR/PFE-evaluation/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[RFC3986]
T. Berners-Lee; R. Fielding; L. Masinter. Uniform Resource Identifier (URI): Generic Syntax. January 2005. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc3986
[RFC4648]
S. Josefsson. The Base16, Base32, and Base64 Data Encodings. October 2006. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc4648
[RFC6570]
J. Gregorio; et al. URI Template. March 2012. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc6570
[RFC7932]
J. Alakuijala; Z. Szabadka. Brotli Compressed Data Format. July 2016. Informational. URL: https://www.rfc-editor.org/rfc/rfc7932
[Shared-Brotli]
J. Alakuijala; et al. Shared Brotli Compressed Data Format. 27 Jul 2021. Internet Draft. URL: https://datatracker.ietf.org/doc/html/draft-vandevenne-shared-brotli-format-09
[UNICODE]
The Unicode Standard. URL: https://www.unicode.org/versions/latest/
[UTF-8]
F. Yergeau. UTF-8, a transformation format of ISO 10646. November 2003. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc3629
[WOFF2]
Vladimir Levantovsky; Raph Levien. WOFF File Format 2.0. URL: https://w3c.github.io/woff/woff2/

Informative References

[FETCH]
Fetch Standard. 22 May 2023. Living Standard. URL: https://fetch.spec.whatwg.org/
[RFC9113]
M. Thomson, Ed.; C. Benfield, Ed.. HTTP/2. June 2022. Proposed Standard. URL: https://httpwg.org/specs/rfc9113.html
[RFC9114]
M. Bishop, Ed.. HTTP/3. June 2022. Proposed Standard. URL: https://httpwg.org/specs/rfc9114.html