Please see the VRP FAQ page.
We must balance a commitment to openness with a commitment to avoiding unnecessary risk for users of widely-used open source libraries.
Our goal is to open security bugs to the public once the bug is fixed and the fix has been shipped to a majority of users. However, many vulnerabilities affect products besides Chromium, and we don’t want to put users of those products unnecessarily at risk by opening the bug before fixes for the other affected products have shipped.
Therefore, we make all security bugs public within approximately 14 weeks of the fix landing in the Chromium repository. The exception to this is in the event of the bug reporter or some other responsible party explicitly requesting anonymity or protection against disclosing other particularly sensitive data included in the vulnerability report (e.g. username and password pairs).
Vendors of products based on Chromium, distributors of operating systems that bundle Chromium, and individuals and organizations that significantly contribute to fixing security bugs can be added to a list for earlier access to these bugs. You can email us at security@chromium.org to request to join the list if you meet the above criteria. In particular, vendors of anti-malware, IDS/IPS, vulnerability risk assessment, and similar products or services do not meet this bar.
Please note that the safest version of Chrome/Chromium is always the latest stable version — there is no good reason to wait to upgrade, so enterprise deployments should always track the latest stable release. When you do this, there is no need to further assess the risk of Chromium vulnerabilities: we strive to fix vulnerabilities quickly and release often.
Chrome is built with mitigations and hardening which aim to prevent or reduce the impact of security issues. We classify bugs as security issues if they are known to affect a version and configuration of Chrome that we ship to the public. Some classes of bug might present as security issues if Chrome was compiled with different flags, or linked against a different C++ standard library, but do not with the toolchain and configuration that we use to build Chrome. We discuss some of these cases elsewhere in this FAQ.
If we become aware of them, these issues may be triaged as Type=Bug-Security, Security_Impact=None, or as Type=Bug, because they do not affect the production version of Chrome. They may or may not be immediately visible to the public in the bug tracker, and may or may not be identified as security issues. If fixes are landed, they may or may not be merged from HEAD to a release branch. Chrome will only label, fix, and merge security issues in Chrome, but attackers can still analyze public issues or commits in the Chromium project to identify bugs that might be exploitable in other contexts.
Chromium embedders and other downstream projects may build with different compilers, compile options, target operating systems, standard libraries, or additional software components. It is possible that some issues Chrome classifies as functional issues will manifest as security issues in a product embedding Chromium; it is the responsibility of any such project to understand what code they are shipping and how it is compiled. We recommend using Chrome's configuration whenever possible.
Many developers of other projects use V8, Chromium, and sub-components of Chromium in their own projects. This is great! We are glad that Chromium and V8 suit your needs.
We want to open up fixed security bugs (as described in the previous answer), and will generally give downstream developers access sooner. However, please be aware that backporting security patches from recent versions to old versions does not always work. (There are several reasons for this: the patch won't apply to old versions; the solution was to add or remove a feature or change an API; the issue may seem minor until it's too late; and so on.) We believe the latest stable versions of Chromium and V8 are the most stable and secure. We also believe that tracking the latest stable upstream is usually less work for greater benefit in the long run than backporting. We strongly recommend that you track the latest stable branches, and we support only the latest stable branch.
See the severity guidelines for more information. Only security issues are considered under the security vulnerability rewards program. Other types of bugs, which we call “functional bugs”, are not.
Some timing attacks are considered security vulnerabilities, and some are considered privacy vulnerabilities. Timing attacks vary significantly in terms of impact, reliability, and exploitability.
Some timing attacks weaken mitigations like ASLR (e.g. Issue 665930). Others attempt to circumvent the same origin policy, for instance, by using SVG filters to read pixels cross-origin (e.g. Issue 686253 and Issue 615851).
Many timing attacks rely upon the availability of high-resolution timing information (e.g. Issue 508166); such timing data often has legitimate uses in non-attack scenarios, making it unappealing to remove.
Timing attacks against the browser's HTTP cache (like Issue 74987) can potentially leak information about which sites the user has previously loaded. The browser could attempt to protect against such attacks (e.g. by bypassing the cache) at the cost of performance and thus user experience. To mitigate such timing attacks, end users can delete browsing history and/or browse sensitive sites using Chrome's Incognito or Guest browsing modes.
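To make the cache-probing idea concrete, here is a minimal, deliberately simplified sketch (not a working exploit) of how a page might time a cross-site resource load to guess whether the resource is already in the HTTP cache. The URL and timing threshold are placeholders; real attacks need many samples and careful statistics, and cache partitioning plus the mitigations above further blunt the technique.

```ts
// Hypothetical sketch: time a cross-site fetch and guess that a fast response
// means the resource was already in the browser's HTTP cache.
async function probeCache(resourceUrl: string): Promise<boolean> {
  const start = performance.now();
  // 'no-cors' lets the request complete even though the response is opaque;
  // only the elapsed time is observable to the probing page.
  await fetch(resourceUrl, { mode: "no-cors", cache: "default" });
  const elapsed = performance.now() - start;
  return elapsed < 20; // threshold chosen arbitrarily for illustration
}

// Placeholder URL; a real attack would probe resources unique to a target site.
probeCache("https://victim.example/logo.png").then((likelyCached) => {
  console.log(likelyCached ? "resource was probably cached" : "probably not cached");
});
```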
Other timing attacks can be mitigated via clever design changes. For instance, Issue 544765 describes an attack whereby an attacker can probe for the presence of HSTS rules (set by prior site visits) by timing the load of resources with URLs “fixed up” by HSTS. Prior to Chrome 64, HSTS rules were shared between regular browsing and Incognito mode, making the attack more interesting. The attack was mitigated by changing Content-Security-Policy such that secure URLs will match rules demanding non-secure HTTP URLs, a fix that has also proven useful in unblocking migrations to HTTPS. Similarly, Issue 707071 describes a timing attack in which an attacker could determine what Android applications are installed; the attack was mitigated by introducing randomness in the execution time of the affected API.
If Chrome or any of its components (e.g. updater) can be abused to perform a local privilege escalation, then it may be treated as a valid security vulnerability.
However, running any Chrome component with higher privileges than intended is not a security bug, and we do not recommend running Chrome as Administrator on Windows or as root on POSIX.
Update, August 2019: Please note that this answer has changed. We have updated our threat model to include fingerprinting.
Although we do not consider fingerprinting issues to be security vulnerabilities, we do now consider them to be privacy bugs that we will try to resolve. We distinguish two forms of fingerprinting.
For passive fingerprinting, our ultimate goal is (to the extent possible) to reduce the information content available to below the threshold for usefulness.
For active fingerprinting, our ultimate goal is to establish a privacy budget and to keep web origins below the budget (such as by rejecting some API calls when the origin exceeds its budget). To avoid breaking rich web applications that people want to use, Chrome may increase an origin's budget when it detects that a person is using the origin heavily. As with passive fingerprinting, our goal is to set the default budget below the threshold of usefulness for fingerprinting.
These are both long-term goals. As of this writing (August 2019) we do not expect that Chrome will immediately achieve them.
For background on fingerprinting and the difficulty of stopping it, see Arvind Narayanan's site and Peter Eckersley's discussion of the information theory behind Panopticlick. There is also a pretty good analysis of in-browser fingerprinting vectors.
Malicious sites not yet blocked by Safe Browsing can be reported via https://www.google.com/safebrowsing/report_phish/. Safe Browsing is primarily a blocklist of known-unsafe sites; the feature warns the user if they attempt to navigate to a site known to deliver phishing or malware content. You can learn more about this feature in these references:
In general, it is not considered a security bug if a given malicious site is not blocked by the Safe Browsing feature, unless the site is on the blocklist but is allowed to load anyway. For instance, if a site found a way to navigate through the blocking red warning page without user interaction, that would be a security bug. A malicious site may exploit a security vulnerability (for instance, spoofing the URL in the Location Bar). This would be tracked as a security vulnerability in the relevant feature, not Safe Browsing itself.
Chrome tries to warn users before they open files that might modify their system. What counts as a dangerous file will vary depending on the operating system Chrome is running on, the default set of file handlers, Chrome settings, Enterprise policy and verdicts on both the site and the file from Safe Browsing. Because of this it will often be okay for a user to download and run a file. However, if you can clearly demonstrate how to bypass one of these protections then we’d like to hear about it. You can see if a Safe Browsing check happened by opening chrome://safe-browsing before starting the download.
The file type policy controls some details of which security checks to enable for a given file extension. Most importantly, it controls whether we contact Safe Browsing about a download, and whether we show a warning for all downloads of that file type. Starting in M74, the default for unknown file types has been to contact Safe Browsing. This prevents large-scale abuse from a previously unknown file type. Starting in M105, showing a warning for all downloads of an extension became reserved for exceptionally dangerous file types that can compromise a user without any user interaction with the file (e.g. DLL hijacking). If you discover a new file type that meets that condition, we’d like to hear about it.
The File System Access API maintains a blocklist of directories and files that may be sensitive, such as system files. If a user chooses a file or directory that matches the blocklist on a site using the File System Access API, access is blocked.
The blocklist is a defense-in-depth measure designed to mitigate accidental grants by users by listing well-known, security-sensitive locations. Therefore, a gap in blocklist coverage is not deemed a security bug, especially as access requires the user's explicit selection of a file or directory from the file picker.
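As a rough illustration (a sketch, not a spec-accurate reference), a page requests access through the browser's picker and only ever receives a handle for locations the user selected and the browser allowed; exactly how a refused, blocklisted selection surfaces to the page is browser-dependent.

```ts
// Minimal sketch of requesting file access via the File System Access API.
// The sensitive-location blocklist is enforced inside the browser's picker,
// so the page never receives a handle for a refused selection.
async function openUserChosenFile(): Promise<void> {
  try {
    // Requires a user gesture and a secure context.
    const [handle] = await (window as any).showOpenFilePicker();
    const file: File = await handle.getFile();
    console.log(`Granted access to ${file.name} (${file.size} bytes)`);
  } catch (err) {
    // The user cancelled, or no selection was granted.
    console.log("No file access was granted:", err);
  }
}
```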
Chrome tries to let users know what they will be saving and downloading before they do so. Often operating systems will obscure a file’s type or extension and there is little we can do about that. Chrome shows information to help users make these decisions, both in Chrome-owned UI and in information that Chrome passes to OS-owned UI. If this information can be manipulated from a web site to mislead a user, then we’d like to hear about it. Example.
Chrome attempts to label files downloaded from the internet with metadata using operating system APIs where these are available – for instance applying the Mark of the Web on Windows. This is often not possible (for instance on non-NTFS file systems on Windows, or for files inside downloaded archives) or may be disabled by policy. If a web site can cause Chrome to download a file without Chrome adding this metadata as it normally would, we'd like to hear about it.
Chrome should not allow filesystem links to be created by initiating a download. Example. Example.
Chrome tries to design its prompts to select safe defaults. If a prompt can accidentally be accepted without the user having an opportunity to make a decision about the prompt then we’d like to know. Examples might include poor defaults so that a user holding down an enter key might accept a dialog they would want to dismiss. Example.
Note that a user navigating to a download will cause a file to be downloaded.
No. The Chrome Privacy team treats privacy issues, such as leaking information from Incognito, fingerprinting, and bugs related to deleting browsing data as functional bugs.
Privacy issues are not considered under the security vulnerability rewards program; the severity guidelines outline the types of bugs that are considered security vulnerabilities in more detail.
Bugs in Incognito mode are tracked as privacy bugs, not security bugs.
The Help Center explains what privacy protections Incognito mode attempts to enforce. In particular, please note that Incognito is not a “do not track” mode, and it does not hide aspects of your identity from web sites. Chrome does offer a way to send a Do Not Track request to servers; see chrome://settings/?search=do+not+track
When in Incognito mode, Chrome does not store any new history, cookies, or other state in non-volatile storage. However, Incognito windows will be able to access some previously-stored state, such as browsing history.
No. Chromium once contained a reflected XSS filter called the XSSAuditor that was a best-effort second line of defense against reflected XSS flaws found in web sites. The XSS Auditor was removed in Chrome 78. Consequently, Chromium no longer takes any special action in response to an X-XSS-Protection header.
No. Denial of Service (DoS) issues are treated as abuse or stability issues rather than security vulnerabilities.
DoS issues are not considered under the security vulnerability rewards program; the severity guidelines outline the types of bugs that are considered security vulnerabilities in more detail.
People sometimes report that they can compromise Chrome by installing a malicious DLL in a place where Chrome will load it, by hooking APIs (e.g. Issue 130284), or by otherwise altering the configuration of the device.
We consider these attacks outside Chrome's threat model, because there is no way for Chrome (or any application) to defend against a malicious user who has managed to log into your device as you, or who can run software with the privileges of your operating system user account. Such an attacker can modify executables and DLLs, change environment variables like PATH, change configuration files, read any data your user account owns, email it to themselves, and so on. Such an attacker has total control over your device, and nothing Chrome can do would provide a serious guarantee of defense. This problem is not special to Chrome — all applications must trust the physically-local user.
There are a few things you can do to mitigate risks from people who have physical control over your computer, in certain circumstances.
There is almost nothing you can do to mitigate risks when using a public computer.
Although the attacker may now be remote, the consequences are essentially the same as with physically-local attacks. The attacker's code, when it runs as your user account on your machine, can do anything you can do. (See also Microsoft's Ten Immutable Laws Of Security.)
Other cases covered by this section include leaving a debugger port open to the world, remote shells, and so forth.
No. Chrome does not attempt to prevent the user from knowingly running script against loaded documents, either by entering script in the Developer Tools console or by typing a javascript: URI into the URL bar. Chrome and other browsers do undertake some efforts to prevent pasting of script URLs into the URL bar (to limit social engineering), but users are otherwise free to invoke script against pages using either the URL bar or the DevTools console.
No. Chromium allows users to create bookmarks to JavaScript URLs that will run on the currently-loaded page when the user clicks the bookmark; these are called bookmarklets.
Similarly, the Home button may be configured to invoke a JavaScript URL when clicked.
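As a simple illustration, a bookmarklet is just a bookmark whose URL is a javascript: URL; clicking it runs that script against the currently loaded page, exactly as if it had been typed into the DevTools console. The example below is hypothetical.

```ts
// Saved as a bookmark's URL, the following runs when the bookmark is clicked:
//
//   javascript:alert(document.links.length + " links on this page")
//
// It is equivalent to running this statement in the DevTools console:
alert(document.links.length + " links on this page");
```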
No. PDF files have the ability to run JavaScript, usually to facilitate field validation during form fill-out. Note that the set of bindings provided to a PDF is more limited than those provided by the DOM to HTML documents, and PDFs do not get any ambient authority based upon the domain from which they are served (e.g. no access to document.cookie).
No. PDF files have some powerful capabilities, including invoking printing or posting form data. To mitigate abuse of these capabilities, such as beaconing upon document open, we require interaction with the document (a “user gesture”) before allowing their use.
We try to balance the needs of our international userbase with protecting users against confusable homograph attacks. Despite this, there is a list of known IDN display issues we are still working on.
This topic has been moved to the Extensions Security FAQ.
Null pointer dereferences with consistent, small, fixed offsets are not considered security bugs. A read or write to the NULL page results in a non-exploitable crash. If the offset is larger than 32KB, or if there's uncertainty about whether the offset is controllable, it is considered a security bug.
All supported Chrome platforms do not allow mapping memory in at least the first 32KB of address space:
- On macOS and iOS, the __PAGEZERO segment for 64-bit binaries.
- On Linux, the default mmap_min_addr value for supported distributions is at least 64KB.
- On Android, mmap_min_addr is set to exactly 32KB.
- Other supported Linux-based platforms set the mmap_min_addr value to at least 32KB.

std::vector and other containers are now protected by libc++ hardening on all platforms (crbug.com/1335422). Indexing these containers out of bounds is now a safe crash; if a proof-of-concept reliably causes a crash in production builds we consider these to be functional rather than security issues.
No. Guard pages mean that stack overflows are considered unexploitable, and are regarded as denial of service bugs. The only exception is if an attacker can jump over the guard pages allocated by the operating system and avoid accessing them, e.g.:
- A call to alloca() with an attacker-controlled size.

Chrome can't guard against local attacks. Enterprise administrators often have full control over the device. Does Chrome assume that enterprise administrators are as privileged and powerful as other local users? It depends:
Chrome administrators can force-install Chrome extensions without permissions prompts, so the same restrictions must apply to the Chrome extension APIs.
Chrome has a long history of policy support with many hundreds of policies. We recognize that there may exist policies or policy combinations that can provide capabilities outside of the guidance provided here. In cases of clear violation of user expectations, we will attempt to remedy these policies and we will apply the guidance laid out in this document to any newly added policies.
See the Web Platform Security guidelines for more information on how enterprise policies should interact with Web Platform APIs.
There are known compatibility problems between Microsoft's EMET anti-exploit toolkit and some versions of Chrome. These can prevent Chrome from running in some configurations. Moreover, the Chrome security team does not recommend the use of EMET with Chrome because its most important security benefits are redundant with or superseded by built-in attack mitigations within the browser. For users, the very marginal security benefit is not usually a good trade-off for the compatibility issues and performance degradation the toolkit can cause.
The topmost portion of the browser window, consisting of the Omnibox (or Location Bar), navigation icons, menu icon, and other indicator icons, is sometimes called the browser chrome (not to be confused with the Chrome Browser itself). Actual security indicators can only appear in this section of the window. There can be no trustworthy security indicators elsewhere.
Furthermore, Chrome can only guarantee that it is correctly representing URLs and their origins at the end of all navigation. Quirks of URL parsing, HTTP redirection, and so on are not security concerns unless Chrome is misrepresenting a URL or origin after navigation has completed.
Browsers present a dilemma to the user since the output is a combination of information coming from both trustworthy sources (the browser itself) and untrustworthy sources (the web page), and the untrustworthy sources are allowed virtually unlimited control over graphical presentation. The only restriction on the page's presentation is that it is confined to the large rectangular area directly underneath the chrome, called the viewport. Things like hover text and URL preview(s), shown in the viewport, are entirely under the control of the web page itself. They have no guaranteed meaning, and function only as the page desires. This can be even more confusing when pages load content that looks like chrome. For example, many pages load images of locks, which look similar to the meaningful HTTPS lock in the Omnibox, but in fact do not convey any meaningful information about the transport security of that page.
When the browser needs to show trustworthy information, such as the bubble resulting from a click on the lock icon, it does so by making the bubble overlap chrome. This visual detail can't be imitated by the page itself since the page is confined to the viewport.
Some types of software intercept HTTPS connections. Examples include anti-virus software, corporate network monitoring tools, and school censorship software. In order for the interception to work, you need to install a private trust anchor (root certificate) onto your computer. This may have happened when you installed your anti-virus software, or when your company's network administrator set up your computer. If that has occurred, your HTTPS connections can be viewed or modified by the software.
Since you have allowed the trust anchor to be installed onto your computer, Chrome assumes that you have consented to HTTPS interception. Anyone who can add a trust anchor to your computer can make other changes to your computer, too, including changing Chrome. (See also Why aren't physically-local attacks in Chrome's threat model?)
A key guarantee of HTTPS is that Chrome can be relatively certain that it is connecting to the true web server and not an impostor. Some sites request an even higher degree of protection for their users (i.e. you): they assert to Chrome (via Strict Transport Security — HSTS — or by other means) that any server authentication error should be fatal, and that Chrome must close the connection. If you encounter such a fatal error, it is likely that your network is under attack, or that there is a network misconfiguration that is indistinguishable from an attack.
The best thing you can do in this situation is to raise the issue to your network provider (or corporate IT department).
Chrome shows non-recoverable HTTPS errors only in cases where the true server has previously asked for this treatment, and when it can be relatively certain that the current server is not the true server.
To enable certificate chain validation, Chrome has access to two stores of trust anchors (i.e., certificates that are empowered as issuers). One trust anchor store is used to authenticate public internet servers; depending on the version of Chrome being used and the platform it is running on, this may be the Chrome Root Store. The other, private store contains certificates installed by the user or the administrator of the client machine. Private intranet servers should authenticate themselves with certificates issued by a private trust anchor.
Chrome’s key pinning feature is a strong form of web site authentication that requires a web server’s certificate chain not only to be valid and to chain to a known-good trust anchor, but also that at least one of the public keys in the certificate chain is known to be valid for the particular site the user is visiting. This is a good defense against the risk that any trust anchor can authenticate any web site, even if not intended by the site owner: if an otherwise-valid chain does not include a known pinned key (“pin”), Chrome will reject it because it was not issued in accordance with the site operator’s expectations.
Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
We deem this acceptable because the proxy or MITM can only be effective if the client machine has already been configured to trust the proxy’s issuing certificate — that is, the client is already under the control of the person who controls the proxy (e.g. the enterprise’s IT administrator). If the client does not trust the private trust anchor, the proxy’s attempt to mediate the connection will fail as it should.
Key pinning is enabled for Chrome-branded, non-mobile builds when the local clock is within ten weeks of the embedded build timestamp. Key pinning is a useful security measure but it tightly couples client and server configurations and completely breaks when those configurations are out of sync. In order to manage that risk we need to ensure that we can promptly update pinning clients in an emergency and ensure that non-emergency changes can be deployed in a reasonable timeframe.
Each of the conditions listed above helps ensure those properties: Chrome-branded builds are those that Google provides, and they all have an auto-update mechanism that can be used in an emergency. However, auto-update on mobile devices is significantly less effective, so mobile builds are excluded. Even in cases where auto-update is generally effective, there are still non-trivial populations of stragglers for various reasons. The ten-week timeout prevents those stragglers from causing problems for regular, non-emergency changes and allows stuck users to still, for example, conduct searches and access Chrome's homepage to hopefully get unstuck.
In order to determine whether key pinning is active, try loading https://pinning-test.badssl.com/. If key pinning is active the load will fail with a pinning error.
Just as pinning only applies to publicly-trusted trust anchors, Chrome only evaluates Certificate Transparency (CT) for publicly-trusted trust anchors. Thus private trust anchors, such as for enterprise middle-boxes and AV proxies, do not need to be publicly logged in a CT log.
The full answer is here: we Prefer Secure Origins For Powerful New Features. In short, many web platform features give web origins access to sensitive new sources of information, or significant power over a user's experience with their computer/phone/watch/etc. We would therefore like to have some basis to believe the origin meets a minimum bar for security, that the sensitive information is transported over the Internet in an authenticated and confidential way, and that users can make meaningful choices to trust or not trust a web origin.
Note that the reason we require secure origins for WebCrypto is slightly different: An application that uses WebCrypto is almost certainly using it to provide some kind of security guarantee (e.g. encrypted instant messages or email). However, unless the JavaScript was itself transported to the client securely, it cannot actually provide any guarantee. (After all, a MITM attacker could have modified the code, if it was not transported securely.)
See the Web Platform Security guidelines for more information on security guidelines applicable to web platform APIs.
Secure origins are those that match at least one of the following (scheme, host, port) patterns:

- (https, *, *)
- (wss, *, *)
- (*, localhost, *)
- (*, 127/8, *)
- (*, ::1/128, *)
- (file, *, -)
- (chrome-extension, *, -)
That is, secure origins are those that load resources either from the local machine (necessarily trusted) or over the network from a cryptographically-authenticated server. See Prefer Secure Origins For Powerful New Features for more details.
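As a small illustration of the consequence for web developers: APIs gated on secure contexts, such as WebCrypto's crypto.subtle, are simply unavailable on insecure origins, and a page can check window.isSecureContext before relying on them. This is a sketch, not an exhaustive list of gated APIs.

```ts
// Minimal sketch: powerful APIs like crypto.subtle only exist in secure contexts.
async function hashIfSecure(): Promise<void> {
  if (!window.isSecureContext) {
    console.warn("Not a secure context; crypto.subtle is unavailable here.");
    return;
  }
  const data = new TextEncoder().encode("hello");
  const digest = await crypto.subtle.digest("SHA-256", data);
  console.log(`SHA-256 digest is ${digest.byteLength} bytes`);
}

hashIfSecure();
```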
Chrome's primary mechanism for checking certificate revocation status is CRLsets. Additionally, by default, stapled Online Certificate Status Protocol (OCSP) responses are honored.
“Online” certificate revocation status checks using Certificate Revocation List (CRL) or OCSP URLs included in certificates are disabled by default. This is because unless a client, like Chrome, refuses to connect to a website if it cannot get a valid response, online checks offer limited security value.
Unfortunately, there are many widely prevalent reasons why a client might be unable to get a valid certificate revocation status response, including:
Additional concerns with OCSP checks relate to privacy. OCSP requests reveal details of individuals' browsing history to the operator of the OCSP responder (i.e., a third party). These details can be exposed accidentally (e.g., via data breach of logs) or intentionally (e.g., via subpoena). Chrome used to perform revocation checks for Extended Validation certificates, but that behavior was disabled in 2022 for privacy reasons.
For more discussion on challenges with certificate revocation status checking, explained by Adam Langley, see https://www.imperialviolet.org/2014/04/29/revocationagain.html and https://www.imperialviolet.org/2014/04/19/revchecking.html.
The following enterprise policies can be used to change the default revocation checking behavior in Chrome, though these may be removed in the future:
One of the most frequent reports we receive is password disclosure using the Inspect Element feature (see Issue 126398 for an example). People reason that “If I can see the password, it must be a bug.” However, this is just one of the physically-local attacks described in the previous section, and all of those points apply here as well.
The reason the password is masked is only to prevent disclosure via “shoulder-surfing” (i.e. the passive viewing of your screen by nearby persons), not because it is a secret unknown to the browser. The browser knows the password at many layers, including JavaScript, developer tools, process memory, and so on. When you are physically local to the computer, and only when you are physically local to the computer, there are, and always will be, tools for extracting the password from any of these places.
Not at this time. Chrome supports HTTP and HTTPS URIs with username and password information embedded within them for compatibility with sites that require this feature. Notably, Chrome will suppress display of the username and password information after navigation in the URL box to limit the effectiveness of spoofing attacks that may try to mislead the user. For instance, navigating to http://trustedsite.com@evil.example.com will show an address of http://evil.example.com after the page loads.
Note: We often receive reports calling this an “open redirect”. However, it has nothing to do with redirection; rather the format of URLs is complex and the userinfo may be misread as a host.
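For illustration, the standard URL parser (as exposed to pages via the URL constructor) shows why: everything before the “@” is treated as userinfo, and the real host is what follows it.

```ts
// The userinfo component is easy to misread as the host.
const url = new URL("http://trustedsite.com@evil.example.com/login");
console.log(url.hostname); // "evil.example.com"  (the real host)
console.log(url.username); // "trustedsite.com"   (just userinfo, not a host)
```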
Ignoring autocomplete='off' for password fields allows the password manager to give more power to users to manage their credentials on websites. It is the security team's view that this is very important for user security by allowing users to have unique and more complex passwords for websites. As it was originally implemented, autocomplete='off' for password fields took control away from the user and gave control to the web site developer, which was also a violation of the priority of constituencies. For a longer discussion on this, see the mailing list announcement.
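As a hypothetical illustration of the behavior described above (the field and form here are made up), a site can still set the attribute, but Chrome's password manager may nonetheless offer to save and later fill the credential, keeping the user in control:

```ts
// Even with autocomplete="off", Chrome's password manager may still offer to
// save and fill this field; the attribute no longer disables it.
const field = document.createElement("input");
field.type = "password";
field.name = "password"; // hypothetical field name
field.setAttribute("autocomplete", "off");
document.querySelector("form")?.appendChild(field);
```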
If you have signed into Chrome and subsequently sign out of Chrome, previously saved passwords and other data are not deleted from your device unless you select that option when signing out of Chrome.
If you change your Google password, synced data will no longer be updated in Chrome instances until you provide the new password to Chrome on each device configured to sync. However, previously synced data remains available on each previously-syncing device unless manually removed.
In its default mode, Chrome Sync uses your Google password to protect all the other passwords in the Chrome Password Manager.
In general, it is a bad idea to store the credential that protects an asset in the same place as the asset itself. An attacker who could temporarily compromise the Chrome Password Manager could, by stealing your Google password, obtain continuing access to all your passwords. Imagine you store your valuables in a safe, and you accidentally forget to close the safe. If a thief comes along, they might steal all of your valuables. That’s bad, but imagine if you had also left the combination to the safe inside as well. Now the bad guy has access to all of your valuables and all of your future valuables, too. The password manager is similar, except you probably would not even know if a bad guy accessed it.
To prevent this type of attack, Chrome Password Manager does not save the Google password for the account you sync with Chrome. If you have multiple Google accounts, the Chrome Password Manager will save the passwords for accounts other than the one you are syncing with.
Chrome generally tries to use the operating system's user storage mechanism wherever possible and stores passwords encrypted on disk, but the details are platform specific:
No. If an attacker has control of your login on your device, they can get to your passwords by inspecting Chrome disk files or memory. (See why aren't physically-local attacks in Chrome's threat model.)
On some platforms we ask for a password before revealing stored passwords, but this is not considered a robust defense. It exists historically to stop users inadvertently revealing their passwords on screen, for example if they're screen sharing. We don't do this on all platforms because we consider such risks greater on some than on others.
See our dedicated Service Worker Security FAQ.
See our dedicated Extensions Security FAQ.
See our Chrome Custom Tabs security FAQ.
Yes - see our updates FAQ.
We always recommend being on the most recent Chrome stable version - see our updates FAQ.
If you want to make a browser based on Chromium, you should stay up to date with Chromium's security fixes. There are adversaries who weaponize fixed Chromium bugs (“n-day vulnerabilities”) to target browsers which haven’t yet absorbed those fixes.
Decide whether your approach is to stay constantly up to date with Chromium releases, or to backport security fixes onto some older version, upgrading Chromium versions less frequently.
Backporting security fixes sounds easier than forward-porting features, but in our experience this is false. Chromium releases 400+ security bug fixes per year (example query). Some downstream browsers take risks by backporting only Medium+ severity fixes, but that's still over 300 (example query). Most are trivial cherry-picks, but others require rework and call for versatile engineers who can make good decisions about any part of a large codebase.
Our recommendation is to stay up-to-date with Chrome's released versions. You should aim to release a version of your browser within just a few days of each Chrome stable release. If your browser is sufficiently widely-used, you can apply for advance notice of fixed vulnerabilities to make this a little easier.
Finally, if you choose the backporting approach, please explain the security properties to your users. Some fraction of security improvements cannot be backported. This can happen for several reasons, for example: because they depend upon architectural changes (e.g. breaking API changes); because the security improvement is a significant new feature; or because the security improvement is the removal of a broken feature.