As the early Web matured, web sites evolved from simple documents to active programs, changing the web browser's role from a simple document renderer to an operating system for programs. Modern browsers like Chromium use multiple operating system processes to manage this workload, improving stability, security, and performance.
Chromium's process model determines how documents, workers, and other web content are divided into processes. First, the process model must identify which parts of a “program” on the web need to coexist in a single process. Somewhat surprisingly, a program on the web is not a single document plus its subresources, but rather a group of same (or similar) origin documents that can fully access each other's contents. Once these atomic groups are defined, the process model can then decide which groups will share a process. These decisions can be tuned based on platform, available resources, etc., to achieve the right level of isolation for different scenarios.
This document outlines the goals and design of Chromium's process model and the various ways it is used today, including its support for Site Isolation.
At a high level, Chromium aims to use separate processes for different instances of web sites when possible. A web site instance is a group of documents or workers that must share a process with each other to support their needs, such as cross-document scripting. (This roughly corresponds to an “agent cluster” from the HTML Standard, as described below.)
For stability, putting web site instances in separate processes limits the impact of a renderer process crash or hang, allowing other content to continue working. For performance, this allows different web site instances to run in parallel with better responsiveness, at the cost of some memory overhead for each process.
For security, strictly using separate processes for different web sites allows significantly stronger defenses against malicious web sites. In addition to running web content within a low-privilege sandbox that limits an attacker's access to the user's machine, Chromium's multi-process architecture can support Site Isolation, which locks each renderer process to documents from a single site and filters certain cross-site data from each process.
Chromium uses several abstractions to track which documents and workers need synchronous access to each other, as a constraint for process model decisions.
Security Principal (implemented by SiteInfo): In security terminology, a principal is an entity with certain privileges. Chromium associates a security principal with execution contexts (e.g., documents, workers) to track which data their process is allowed to access. This principal is typically a “site” (i.e., scheme plus eTLD+1, such as https://example.com), because web pages can modify their document.domain value to access other same-site documents, and not just same-origin documents. In some cases, though, the principal may be an origin or have a coarser granularity (e.g., file:). The SiteInfo class tracks all values that identify a security principal.
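For example, here is a browser-side sketch (hypothetical hosts; note that document.domain modification is deprecated and may require opting out of origin keying in newer browsers) of how two same-site but cross-origin documents gain synchronous access to each other:

```ts
// Runs in a document on https://a.example.com that embeds an iframe from
// https://b.example.com (hypothetical hosts). Both documents relax their
// origin to the site (eTLD+1), gaining synchronous access to each other.
document.domain = 'example.com';

const frame = document.querySelector('iframe') as HTMLIFrameElement;
frame.addEventListener('load', () => {
  // Assuming the child frame has also run `document.domain = 'example.com'`,
  // this cross-document DOM access succeeds synchronously, which is why
  // both documents must share a renderer process.
  console.log(frame.contentDocument?.title);
});
```

This is why the process model must group documents by site rather than by origin: either document can opt into this access at any time.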
Principal Instance (implemented by SiteInstance): A principal instance is the core unit of Chromium's process model. Any two documents with the same principal in the same browsing context group (see below) must live in the same process, because they have synchronous access to each other's content. This access includes cross-document scripting and synchronous communication through shared memory (e.g., SharedArrayBuffer). If such documents were in different processes, data races or deadlocks would occur if they concurrently accessed objects in their shared DOM or JavaScript heaps.
This roughly corresponds to the agent cluster concept in the spec, although they do not match exactly: multiple agent clusters may sometimes share a principal instance (e.g., with data: URLs in the same principal instance as their creator), and principals may keep track of more factors than agent cluster keys (e.g., whether the StoragePartition differs).
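To illustrate the shared-memory case, here is a minimal sketch, assuming a crossOriginIsolated page containing a same-origin iframe:

```ts
// Sketch: two same-origin documents sharing one block of memory. Requires
// a crossOriginIsolated page; the iframe is assumed to be same-origin.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

const frame = document.querySelector('iframe') as HTMLIFrameElement;
// The child receives a reference to the *same* memory, not a copy...
frame.contentWindow?.postMessage(sab, location.origin);

// ...so both documents can update it concurrently. If they lived in
// different processes, such writes could not be kept coherent.
Atomics.add(counter, 0, 1);
```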
Note that the user may visit multiple instances of a given principal in the browser, sometimes in unrelated tabs (i.e., separate browsing context groups). These separate instances do not need synchronous access to each other and can safely run in separate processes.
Browsing Context Group (implemented by BrowsingInstance): A browsing context group is a group of tabs and frames (i.e., containers of documents) that have references to each other (e.g., frames within the same page, popups with window.opener references, etc.). Any two documents within a browsing context group may find each other by name, so it is important that any same-principal documents in the group live in the same process. In other words, there is only one principal instance per principal in a given browsing context group. Note that a tab may change its browsing context group on some types of navigations (e.g., due to a Cross-Origin-Opener-Policy header, browser-initiated cross-site navigations, and other reasons).
From an implementation perspective, Chromium keeps track of the SiteInstance of each RenderFrameHost, to determine which renderer process to use for the RenderFrameHost's documents. SiteInstances are also tracked for workers, such as with ServiceWorkerHost or SharedWorkerHost.
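The following toy model (illustrative TypeScript only, not Chromium's actual C++ classes) shows how these abstractions relate:

```ts
// A toy model of how principals, principal instances, and browsing context
// groups relate: one SiteInstance per site within a BrowsingInstance.
type Site = string; // e.g. "https://example.com" (scheme + eTLD+1)

class BrowsingInstance {
  private siteInstances = new Map<Site, SiteInstance>();

  // Returns the single SiteInstance for this site in this group,
  // creating it on first use.
  getSiteInstanceFor(site: Site): SiteInstance {
    let instance = this.siteInstances.get(site);
    if (!instance) {
      instance = new SiteInstance(site, this);
      this.siteInstances.set(site, instance);
    }
    return instance;
  }
}

class SiteInstance {
  constructor(readonly site: Site, readonly group: BrowsingInstance) {}
}

// Two same-site documents in one browsing context group share a
// SiteInstance (and therefore a process)...
const groupA = new BrowsingInstance();
console.assert(
    groupA.getSiteInstanceFor('https://example.com') ===
    groupA.getSiteInstanceFor('https://example.com'));

// ...but the same site in an unrelated tab gets a separate instance,
// which may safely run in a separate process.
const groupB = new BrowsingInstance();
console.assert(
    groupA.getSiteInstanceFor('https://example.com') !==
    groupB.getSiteInstanceFor('https://example.com'));
```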
Used on: Desktop platforms (Windows, Mac, Linux, ChromeOS).
In (one-)site-per-process mode, each process is locked to documents from a single site. Sites are defined as scheme plus eTLD+1, since different origins within a given site may have synchronous access to each other if they each modify their document.domain. This mode provides all sites with protection against compromised renderers and Spectre-like attacks, without breaking backwards compatibility.
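As an illustration, here is a minimal sketch of deriving this site key from a URL; the eTLD+1 helper below is a naive stand-in, since real code must consult the Public Suffix List:

```ts
// Sketch: compute a "site" (scheme + eTLD+1) for process model decisions.
function siteFor(urlString: string): string {
  const url = new URL(urlString);
  return `${url.protocol}//${etldPlusOne(url.hostname)}`;
}

// Naive stand-in for eTLD+1: takes the last two labels. Real code must use
// the Public Suffix List (e.g., under it, "foo.co.uk" is its own eTLD+1).
function etldPlusOne(host: string): string {
  return host.split('.').slice(-2).join('.');
}

console.log(siteFor('https://accounts.example.com/login')); // https://example.com
```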
This mode can be enabled on Android using chrome://flags/#enable-site-per-process.
Used on: Chrome for Android (2+ GB RAM).
On platforms like Android with more significant resource constraints, Chromium only uses dedicated (locked) processes for some sites, putting the rest in unlocked processes that can be used for any web site. (Note that there is a threshold of about 2 GB of device RAM required to support any level of Site Isolation on Android.)
Locked processes are only allowed to access data from their own site. Unlocked processes can generally access data from any site that does not require a locked process. Chromium usually creates one unlocked process per browsing context group.
Currently, several heuristics are used to isolate the sites that are most likely to have user-specific information. As on all platforms, privileged pages like WebUI are always isolated. Chromium also isolates sites that users commonly log into, as well as sites on which a given user has entered a password, logged in via an OAuth provider, or encountered a Cross-Origin-Opener-Policy (COOP) header.
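A simplified sketch of the resulting process choice (illustrative only; the site names are hypothetical):

```ts
// Illustrative sketch of partial Site Isolation: sites flagged by a
// heuristic get a dedicated locked process, while everything else shares
// an unlocked process per browsing context group.
const isolatedSites = new Set<string>([
  'https://bank.example', // e.g. the user entered a password here
]);

function processKeyFor(site: string, browsingGroupId: number): string {
  return isolatedSites.has(site)
      ? `locked:${site}`               // only this site may load here
      : `unlocked:${browsingGroupId}`; // shared by non-isolated sites
}

console.log(processKeyFor('https://bank.example', 1)); // locked:https://bank.example
console.log(processKeyFor('https://news.example', 1)); // unlocked:1
```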
Used on: Low-memory Chrome for Android (<2 GB RAM), Android WebView, Chrome for iOS.
On some platforms, Site Isolation is not available, due to implementation or resource constraints.
Available on: Desktop platforms, Chrome for Android (2+ GB RAM).
There are several optional ways to lock processes at an origin granularity rather than a site granularity, with various tradeoffs for compatibility (e.g., breaking pages that modify document.domain). These are available on platforms that support some level of Site Isolation. For example, specific origins can be isolated at runtime via the command line (--isolate-origins=...), chrome://flags#isolate-origins, or enterprise policy (IsolateOrigins or IsolateOriginsAndroid).

Certain powerful web platform features now require an opt-in CrossOriginIsolated mode, which ensures that all cross-origin content (e.g., documents and workers, as well as subresources like media or scripts) consents to being loaded in the same process as an origin using these features. This opt-in is required because these powerful features (e.g., SharedArrayBuffers) can be used for very precise timing, which can make attacks that leak data from the process (e.g., using Spectre or other transient execution attacks) more effective. This mode is important because not all browsers support out-of-process iframes for cross-origin documents, and not all cross-origin subresources can be put in a separate process.
CrossOriginIsolated mode requires the main document to have Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy headers. These headers impose restrictions on all content that may load within the page or process (e.g., requiring similar headers on subframes, and CORS, CORP, or a credentialless mode for subresources).
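For example, here is a minimal Node server sketch (file path hypothetical) that serves a page with both headers, so that self.crossOriginIsolated reports true in the page:

```ts
// Sketch: serving a page that opts into crossOriginIsolated mode.
import { createServer } from 'node:http';
import { readFileSync } from 'node:fs';

createServer((req, res) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  res.setHeader('Content-Type', 'text/html');
  res.end(readFileSync('./index.html')); // hypothetical page
}).listen(8080);

// In the served page:
//   console.log(self.crossOriginIsolated); // true
//   const sab = new SharedArrayBuffer(64); // now permitted
```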
Before Site Isolation was introduced, Chromium initially supported a few other process models that affected the number of renderer processes, such as a process-per-site mode that consolidated all instances of a site into a single process, which is still used for certain cases like chrome:// URLs.

Chromium provides several ways to view the current state of the process model:

* chrome://process-internals/#web-contents: an internal diagnostic page which shows information about the SiteInstances and processes for each open document.
* chrome://discards/graph: an internal diagnostic page that includes a visualization of how the open documents and workers map to processes. Clicking on any node provides more details.

For performance, Chromium attempts to strike a balance between using more processes to improve parallelism and using fewer processes to conserve memory. There are some cases where a new process is always required (e.g., for a cross-site page when Site Isolation is enabled), and other cases where heuristics can determine whether to create a new process or reuse an old one. Generally, process reuse can only happen in suitable cases, such as within a given profile or when respecting a process lock. Several factors go into this decision.
* Soft process limit: On desktop, Chromium sets a soft limit on the number of renderer processes based on available memory, and reuses an existing same-site process once the limit is reached. For example, if the limit is 100 and there are 50 open tabs to example.com and 50 open tabs to example.org, then a new example.com tab will share a process with a random existing example.com tab, while a chromium.org tab will create a 101st process. Note that Chromium on Android does not set this soft process limit, and instead relies on the OS to discard processes.
* Subframe process reuse: Out-of-process iframes are placed more aggressively into existing same-site processes: an example.com iframe in a cross-site page will be placed in an existing example.com process (in any browsing context group), even if the process limit has not been reached. This keeps the process count lower, based on the assumption that most iframes/fenced frames are less resource demanding than top-level documents. Similarly, ServiceWorkers are generally placed in the same process as a document that is likely to rely on them.

Chromium assigns a ProcessLock to some or all RenderProcessHosts, to restrict which sites are allowed to load in the process and which data the process has access to. A RenderProcessHost is an object in the browser process that represents a given renderer process, though it can be reused if that renderer process crashes and is restarted. Some ProcessLock cases are used on all platforms (e.g., chrome:// URLs are never allowed to share a process with other sites), while other cases may depend on the mode (e.g., Full Site Isolation requires all processes to be locked, once content has been loaded in the process).
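A toy sketch (not Chromium's actual algorithm) tying together the soft process limit and process locks described above:

```ts
// Toy sketch: once the soft limit is reached, a new SiteInstance reuses a
// process that is already locked to its site; otherwise it gets a fresh
// process. The limit value here is an assumption; the real one is
// derived from available memory.
interface ProcessInfo { id: number; lock: string | null; } // null = unlocked

const SOFT_PROCESS_LIMIT = 100;
const processes: ProcessInfo[] = [];

function processForSite(site: string): ProcessInfo {
  if (processes.length >= SOFT_PROCESS_LIMIT) {
    // Prefer a random existing process already locked to this site.
    const candidates = processes.filter(p => p.lock === site);
    if (candidates.length > 0) {
      return candidates[Math.floor(Math.random() * candidates.length)];
    }
  }
  // Otherwise create a new process locked to the site (Full Site
  // Isolation); a site with no existing process may exceed the soft limit.
  const fresh = { id: processes.length + 1, lock: site };
  processes.push(fresh);
  return fresh;
}
```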
ProcessLocks may have varying granularity, such as a single site (e.g., https://example.com), a single origin (e.g., https://accounts.example.com), an entire scheme (e.g., file://), or a special “allow-any-site” value for processes allowed to host multiple sites (which may have other restrictions, such as whether they are crossOriginIsolated). RenderProcessHosts begin with an “invalid” or unlocked ProcessLock before one is assigned.
ProcessLocks are always assigned before any content is loaded in a renderer process, either at the start of a navigation or at OnResponseStarted time, just before a navigation commits. Note that a process may initially receive an “allow-any-site” lock for some empty document schemes (e.g., about:blank), which may later be refined to a site-specific lock when the first actual content commits. Once a site-specific lock is assigned, it remains constant for the lifetime of the RenderProcessHost, even if the renderer process itself exits and is recreated.
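This lifecycle can be sketched as a toy state model (not Chromium's real ProcessLock class):

```ts
// Toy model of ProcessLock refinement: a process starts unlocked, may get
// an "allow-any-site" lock for empty-document schemes, and is fixed once
// a site-specific lock is assigned.
type Lock =
  | { kind: 'invalid' }
  | { kind: 'allow-any-site' }
  | { kind: 'locked-to-site'; site: string };

class RenderProcessHostModel {
  lock: Lock = { kind: 'invalid' };

  setLock(next: Lock): void {
    if (this.lock.kind === 'locked-to-site') {
      // A site-specific lock never changes for the host's lifetime.
      throw new Error('ProcessLock already finalized');
    }
    this.lock = next;
  }
}

const host = new RenderProcessHostModel();
host.setLock({ kind: 'allow-any-site' });                    // e.g. about:blank commits
host.setLock({ kind: 'locked-to-site',
               site: 'https://example.com' });               // real content commits
```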
Note that content may be allowed in a locked process based on its origin (e.g., an about:blank page with an inherited https://example.com origin is allowed in a process locked to https://example.com). Also, some opaque origin cases are allowed into a locked process as well, such as data: URLs created within that process, or same-site sandboxed iframes.
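For instance, in a document on https://example.com, an about:blank popup inherits the opener's origin, so it can safely live in the opener's locked process:

```ts
// Sketch: an about:blank popup inherits its creator's origin, giving the
// opener full synchronous access; both must therefore share a process.
const popup = window.open('about:blank');
if (popup) {
  // Succeeds because the popup's origin was inherited from this document.
  popup.document.body.textContent = 'written synchronously by the opener';
  console.log(popup.origin === self.origin); // true
}
```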
There are many special cases to consider in Chromium's process model, which may affect invariants or how features are designed.

* WebUI pages: Chromium's own pages, such as chrome://settings, are considered part of Chromium and are highly privileged, usually hosted in the chrome:// scheme. They are strictly isolated from non-WebUI pages as well as other types of WebUI pages (based on “site”), on all platforms. They are also generally not allowed to load content from the network (apart from a shrinking list of allowlisted pages), unless it is from a separate unprivileged chrome-untrusted:// document. Additionally, normal web pages are not allowed to navigate to WebUI pages, which makes privilege escalation attacks more difficult.
* New Tab Page: On desktop, the default NTP is a WebUI page that embeds network content via chrome-untrusted:// iframes. Third party NTPs are also possible, which load a “remote” non-WebUI web page with limited privileges. On Android, the NTP is instead a native Android surface with no privileged renderer process. Chrome on Android creates an unused renderer process in the background while the NTP surface is visible, so that the next page can use it.
* Hosted apps: A hosted app's web content, such as https://example.com/app/, will have an “effective URL” that looks like a chrome-extension:// URL, causing it to be treated differently in the process model. This support may eventually be removed.
* Guest views: The <webview> tag and similar cases like MimeHandlerView and ExtensionOptionsGuest embed one WebContents within another. All of these cases use strict site isolation for content they embed. Note that Chrome Apps allow <webview> tags to load normal web pages and the app's own data: or chrome-extension:// URLs, but not URLs from other extensions or apps.
* Sandboxed documents: Documents sandboxed without allow-same-origin (either iframes or popups) may be same-site with their parent or opener but use an opaque origin. Chromium currently keeps these documents in the same process as their parent or opener, but this may change in bug 510122.
* data: URLs: Chromium generally keeps documents with data: URLs in the same process as the site that created them, since that site has control over their content. The exception is when restoring a previous session, in which case each document with a data: URL ends up in its own process.
* file:// URLs: Chromium treats all file:// URLs as part of the same site. Normal web pages are not allowed to load file:// URLs, and renderer processes are only granted access to particular file:// URLs via file chooser dialogs (e.g., for uploads). These URLs may be further isolated from each other in bug 780770.

Several academic papers have covered topics about Chromium's process model.
Security Architecture of the Chromium Browser
Adam Barth, Collin Jackson, Charles Reis, and The Google Chrome Team. Stanford Technical Report, September 2008.
Abstract:
Most current web browsers employ a monolithic architecture that combines “the user” and “the web” into a single protection domain. An attacker who exploits an arbitrary code execution vulnerability in such a browser can steal sensitive files or install malware. In this paper, we present the security architecture of Chromium, the open-source browser upon which Google Chrome is built. Chromium has two modules in separate protection domains: a browser kernel, which interacts with the operating system, and a rendering engine, which runs with restricted privileges in a sandbox. This architecture helps mitigate high-severity attacks without sacrificing compatibility with existing web sites. We define a threat model for browser exploits and evaluate how the architecture would have mitigated past vulnerabilities.
Isolating Web Programs in Modern Browser Architectures
Charles Reis and Steven D. Gribble (both authors at UW + Google). EuroSys, April 2009.
Abstract:
Many of today's web sites contain substantial amounts of client-side code, and consequently, they act more like programs than simple documents. This creates robustness and performance challenges for web browsers. To give users a robust and responsive platform, the browser must identify program boundaries and provide isolation between them.
We provide three contributions in this paper. First, we present abstractions of web programs and program instances, and we show that these abstractions clarify how browser components interact and how appropriate program boundaries can be identified. Second, we identify backwards compatibility tradeoffs that constrain how web content can be divided into programs without disrupting existing web sites. Third, we present a multi-process browser architecture that isolates these web program instances from each other, improving fault tolerance, resource management, and performance. We discuss how this architecture is implemented in Google Chrome, and we provide a quantitative performance evaluation examining its benefits and costs.
Site Isolation: Process Separation for Web Sites within the Browser
Charles Reis, Alexander Moshchuk, and Nasko Oskov, Google. USENIX Security, August 2019.
Abstract:
Current production web browsers are multi-process but place different web sites in the same renderer process, which is not sufficient to mitigate threats present on the web today. With the prevalence of private user data stored on web sites, the risk posed by compromised renderer processes, and the advent of transient execution attacks like Spectre and Meltdown that can leak data via microarchitectural state, it is no longer safe to render documents from different web sites in the same process. In this paper, we describe our successful deployment of the Site Isolation architecture to all desktop users of Google Chrome as a mitigation for process-wide attacks. Site Isolation locks each renderer process to documents from a single site and filters certain cross-site data from each process. We overcame performance and compatibility challenges to adapt a production browser to this new architecture. We find that this architecture offers the best path to protection against compromised renderer processes and same-process transient execution attacks, despite current limitations. Our performance results indicate it is practical to deploy this level of isolation while sufficiently preserving compatibility with existing web content. Finally, we discuss future directions and how the current limitations of Site Isolation might be addressed.