

At Google, we strive to make the internet safer and that includes recognizing and rewarding security improvements that are vital to the health of the entire web. In 2020, we are building on this commitment by launching a new iteration of our Patch Rewards program for third-party open source projects.

Over the last six years, we have rewarded open source projects for security improvements after they have been implemented. While this has led to overall improved security, we want to take this one step further.

Introducing upfront financial help
Starting on January 1, 2020, we’re not only going to reward proactive security improvements after the work is completed, but we will also complement the program with upfront financial support to provide an additional resource for open source developers to prioritize security work. For example, if you are a small open source project and you want to improve security, but don’t have the necessary resources, this new reward can help you acquire additional development capacity.

We will start off with two support levels:
  • Small ($5,000): Meant to motivate and reward a project for fixing a small number of security issues. Examples: improvements to privilege separation or sandboxing, cleanup of integer arithmetic, or more generally fixing vulnerabilities identified in open source software by bug bounty programs such as EU-FOSSA 2 (see ‘Qualifying submissions’ here for more examples).
  • Large ($30,000): Meant to incentivize a larger project to invest heavily in security, e.g. providing support to find additional developers, or implement a significant new security feature (e.g. new compiler mitigations).
Nomination process

Anyone can nominate an open source project for support by filling out the form at http://goo.gle/patchz-nomination. Our Patch Reward Panel will review submissions on a monthly basis and select a number of projects that meet the program criteria. The panel will let submitters know if a project has been chosen and will start working with the project maintainers directly.

Projects in scope

Any open source project can be nominated for support. When selecting projects, the panel will put an emphasis on projects that either are vital to the health of the Internet or are end-user projects with a large user base.

What do we expect in return?

We expect to see security improvements to open source software. Ideally, the project can provide us with a short blurb or pointers to some of the completed work that was possible because of our support. We don’t want to add bureaucracy, but would like to measure the success of the program.
What about the existing Patch Rewards program?
This is an addition to the existing program; the current Patch Rewards program will continue as it stands today.


At Google, the safety of user data is our paramount concern and we strive to protect it comprehensively. That includes protection from insider risk: the potential for employees to use their organizational knowledge or access to perform malicious acts. Insider risk also covers the scenario where an attacker has compromised the credentials of someone at Google to facilitate their attack. There are times when it’s necessary for our services and personnel to access user data as part of fulfilling our contractual obligations to you: as part of their role, such as user support; and programmatically, as part of a service. Today, we’re releasing a whitepaper, “Binary Authorization for Borg: how Google verifies code provenance and implements code identity,” that explains one of the mechanisms we use to protect user data from insider risks on Google's cluster management system Borg.

Binary Authorization for Borg is a deploy-time enforcement check

Binary Authorization for Borg, or BAB, is an internal deploy-time enforcement check that reduces insider risk by ensuring that production software and configuration deployed at Google is properly reviewed and authorized, especially when that code has the ability to access user data. BAB ensures that code and configuration deployments meet certain standards prior to being deployed. BAB includes both a deploy-time enforcement service to prevent unauthorized jobs from starting, and an audit trail of the code and configuration used in BAB-enabled jobs.

BAB ensures that Google's official software supply chain process is followed. First, a code change is reviewed and approved before being checked into Google's central source code repository. Next, the code is verifiably built and packaged using Google's central build system. This is done by creating the build in a secure sandbox and recording the package's origin in metadata for verification purposes. Finally, the job is deployed to Borg, with a job-specific identity. BAB rejects any package that lacks proper metadata, that did not follow the proper supply chain process, or that otherwise does not match the identity’s predefined policy.

Binary Authorization for Borg allows for several kinds of security checks

BAB can be used for many kinds of deploy-time security checks. Some examples include:
  • Is the binary built from checked in code?
  • Is the binary built verifiably?
  • Is the binary built from tested code?
  • Is the binary built from code intended to be used in the deployment?
After deployment, a job is continuously verified for its lifetime, to check that jobs that were started (including any that may still be running) conform to updates to their policies.

Binary Authorization for Borg provides other security benefits
Though the primary purpose of BAB is to limit the ability of a potentially malicious insider to run an unauthorized job that could access user data, BAB has other security benefits. BAB provides robust code identity for jobs in Google’s infrastructure, tying a job’s identity to specific code, and ensuring that only the specified code can be used to exercise the job identity’s privileges. This allows for a transition from a job identity—trusting an identity and any of its privileged human users transitively—to a code identity—trusting a specific piece of reviewed code to have specific semantics and which cannot be modified without an approval process.

BAB also dictates a common language for data protection, so that multiple teams can understand and meet the same requirements. Certain processes, such as those for financial reporting, need to meet certain change management requirements for compliance purposes. Using BAB, these checks can be automated, saving time and increasing the scope of coverage.

Binary Authorization for Borg is part of the BeyondProd model
BAB is one of several technologies used at Google to mitigate insider risk, and one piece of how we secure containers and microservices in production. By using containerized systems and verifying their BAB requirements prior to deployment, our systems are easier to debug, more reliable, and have a clearer change management process. More details on how Google has adopted a cloud-native security model are available in another whitepaper we are releasing today, “BeyondProd: A new approach to cloud-native security.”

In summary, implementing BAB, a deploy-time enforcement check, as part of Google’s containerized infrastructure and continuous integration and deployment (CI/CD) process has enabled us to verify that the code and configuration we deploy meet certain standards for security. Adopting BAB has allowed Google to reduce insider risk, prevent possible attacks, and also support the uniformity of our production systems. For more information about BAB, read our whitepaper, “Binary Authorization for Borg: how Google verifies code provenance and implements code identity.”

Additional contributors to this whitepaper include Kevin Chen, Software Engineer; Tim Dierks, Engineering Director; Maya Kaczorowski, Product Manager; Gary O’Connor, Technical Writing; Umesh Shankar, Principal Engineer; Adam Stubblefield, Distinguished Engineer; and Wilfried Teiken, Software Engineer; with special recognition to the entire Binary Authorization for Borg team for their ideation, engineering, and leadership.



Today, we announced better password protections in Chrome, gradually rolling out with release M79. Here are the details of how they work.


Warnings about compromised passwords
Google first introduced password breach warnings as a Password Checkup extension early this year. It compares passwords and usernames against over 4 billion credentials that Google knows to have been compromised. You can read more about it here. In October, Google built the Password Checkup feature into the Google Account, making it available from passwords.google.com.

Chrome’s integration is a natural next step to ensure we protect even more users as they browse the web. Here is how it works:
  • Whenever Google discovers a username and password exposed by another company’s data breach, we store a hashed and encrypted copy of the data on our servers with a secret key known only to Google.
  • When you sign in to a website, Chrome will send a hashed copy of your username and password to Google encrypted with a secret key only known to Chrome. No one, including Google, is able to derive your username or password from this encrypted copy.
  • In order to determine if your username and password appear in any breach, we use a technique called private set intersection with blinding that involves multiple layers of encryption. This allows us to compare your encrypted username and password with all of the encrypted breached usernames and passwords, without revealing your username and password, or revealing any information about any other users’ usernames and passwords. In order to make this computation more efficient, Chrome sends a 3-byte SHA256 hash prefix of your username to reduce the scale of the data joined from 4 billion records down to 250 records, while still ensuring your username remains anonymous (see the sketch after this list).
  • Only you discover if your username and password have been compromised. If they have been compromised, Chrome will tell you, and we strongly encourage you to change your password.
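To make the hash-prefix step concrete, here is a minimal sketch (our illustration, not Chrome’s actual code) of deriving a 3-byte SHA256 prefix from a username using OpenSSL; the username shown is a hypothetical example:

/* Sketch: derive the 3-byte SHA256 prefix of a username, as described
 * above. Illustrative only; not Chrome's implementation. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *username = "alice@example.com";  /* hypothetical input */
    unsigned char digest[SHA256_DIGEST_LENGTH];

    SHA256((const unsigned char *)username, strlen(username), digest);

    /* Only these first 3 bytes leave the device, so many usernames share
     * the same prefix and the server cannot tell which one is being
     * checked. */
    printf("prefix: %02x%02x%02x\n", digest[0], digest[1], digest[2]);
    return 0;
}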
You can control this feature in the “Sync and Google Services” section of Chrome Settings. Enterprise admins can control this feature using the Password​Leak​Detection​Enabled policy setting.


Real-time phishing protection: checking with Safe Browsing’s blocklist in real time
Chrome’s new real-time phishing protection is also expanding existing technology — in this case it’s Google’s well-established Safe Browsing.

Every day, Safe Browsing discovers thousands of new unsafe sites and adds them to the blocklists shared with the web industry. Chrome checks the URL of each site you visit or file you download against this local list, which is updated approximately every 30 minutes. If you navigate to a URL that appears on the list, Chrome checks a partial URL fingerprint (the first 32 bits of a SHA-256 hash of the URL) with Google for verification that the URL is indeed dangerous. Google cannot determine the actual URL from this information.

However, we’re noticing that some phishing sites slip through our 30-minute refresh window, either by switching domains very quickly or by hiding from Google's crawlers.

That’s where real-time phishing protections come in. These new protections check the URLs of the pages you visit against Safe Browsing’s servers in real time. When you visit a website, Chrome first checks it against a list, stored on your computer, of thousands of popular websites that are known to be safe. If the website is not on the safe-list, Chrome checks the URL with Google (after dropping any username or password embedded in the URL) to find out if you're visiting a dangerous site. Our analysis has shown that warning users on brand-new malicious sites results in a 30% increase in protections.

We will be initially rolling out this feature to people who have already opted in to the “Make searches and browsing better” setting in Chrome. Enterprise administrators can manage this setting via the Url​Keyed​Anonymized​Data​Collection​Enabled policy setting.


Expanding predictive phishing protection
Your password is the key to your online identity and data. If this key falls into the hands of attackers, they can easily impersonate you and get access to your data. We launched predictive phishing protections to warn users who are syncing history in Chrome when they enter their Google Account password into suspected phishing sites that try to steal their credentials.

With this latest release, we’re expanding this protection to everyone signed in to Chrome, even if you have not enabled Sync. In addition, this feature will now work for all the passwords you have stored in Chrome’s password manager.

If you type one of your protected passwords (this could be a password you stored in Chrome’s password manager, or the Google Account password you used to sign in to Chrome) into an unusual site, Chrome classifies this as a potentially dangerous event.

In such a scenario, Chrome checks the site against a list on your computer of thousands of popular websites that are known to be safe. If the website is not on the safe-list, Chrome checks the URL with Google (after dropping any username or password embedded in the URL). If this check determines that the site is indeed suspicious or malicious, Chrome will immediately show you a warning and encourage you to change your compromised password. If it was your Google Account password that was phished, Chrome also offers to notify Google so we can add additional protections to ensure your account isn't compromised.

By watching for password reuse, Chrome can give heightened security in critical moments while minimizing the data it shares with Google. We think predictive phishing protection will protect hundreds of millions more people.



#!/bin/sh
cat /home/user/foo


What can go wrong if this command runs as root? Does it change anything if foo is a symbolic link to /etc/shadow? How is the output going to be used?

Depending on the answers to the questions above, accessing files this way could be a vulnerability. The vulnerability exists in syscalls that operate on file paths, such as open, rename, chmod, or exec. For a vulnerability to be present, part of the path has to be user controlled and the program that executes the syscall has to run at a higher privilege level. In a potential exploit, the attacker can replace part of the path with a symlink and thereby create, remove, or execute a file. In many cases, it's possible for an attacker to create the symlink before the syscall is executed.
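For illustration, here is a minimal C sketch (our example, not taken from PathAuditor) of one defensive measure a privileged program can take: refusing to follow a symlink in the final path component.

/* Sketch: open a file while refusing to follow a symlink in the final
 * path component. Illustrative; it does not defend against symlinks in
 * earlier path components or races on them. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* O_NOFOLLOW makes open() fail with ELOOP if "foo" is a symlink. */
    int fd = open("/home/user/foo", O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* ... use fd; the final component cannot have been a symlink ... */
    close(fd);
    return 0;
}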

At Google, we have been working on a solution to find these potentially problematic issues at scale: PathAuditor. In this blog post we'll outline the problem and explain how you can avoid it in your code with PathAuditor.

Let’s take a look at a real world example. The tmpreaper utility contained the following code to check if a directory is a mount point:
/* From tmpreaper: detect whether a directory is a mount point by
   renaming it into itself and checking for EXDEV. */
if ((dst = malloc(strlen(ent->d_name) + 3)) == NULL)
       message (LOG_FATAL, "malloc failed.\n");
strcpy(dst, ent->d_name);
strcat(dst, "/X");                /* dst is now "<dir>/X" */
rename(ent->d_name, dst);         /* kernel resolves both paths */
if (errno == EXDEV) {             /* EXDEV: rename across filesystems */
[...]


This code will call rename("/tmp/user/controlled", "/tmp/user/controlled/X"). Under the hood, the kernel will resolve the path twice, once for the first argument and once for the second, then perform some checks to determine whether the rename is valid, and finally try to move the file from one directory to the other.

However, the problem is that the user can race the kernel code and replace “/tmp/user/controlled” with a symlink in the window between the two path resolutions.

A successful attack would look roughly like this:
  • Make “/tmp/user/controlled” a file with controlled content.
  • The kernel resolves that path for the first argument to rename() and sees the file.
  • Replace “/tmp/user/controlled” with a symlink to /etc/cron.
  • The kernel resolves the path again for the second argument and ends up in /etc/cron.
  • If both the tmp and cron directories are on the same filesystem, the kernel will move the attacker-controlled file to /etc/cron, leading to code execution as root. (A sketch of the attacker’s side of this race follows below.)
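Sketched in C, the attacker’s side of that race is essentially a loop that flips the path between a regular file and a symlink until the race is won (illustrative only; the paths and file content are hypothetical):

/* Illustrative sketch of the attacker's side of the race described
 * above; paths and content are hypothetical. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    for (;;) {  /* loop until the race is won */
        /* Phase 1: make the path a regular file with attacker-chosen
         * content, so the kernel's first lookup sees a file. */
        unlink("/tmp/user/controlled");
        FILE *f = fopen("/tmp/user/controlled", "w");
        if (f) {
            fputs("attacker-controlled contents\n", f);
            fclose(f);
        }
        /* Phase 2: swap in a symlink, hoping the kernel's second lookup
         * resolves the rename destination through it into /etc/cron. */
        unlink("/tmp/user/controlled");
        symlink("/etc/cron", "/tmp/user/controlled");
    }
}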
Can we find such bugs via automated analysis? Well, yes and no. As shown in the tmpreaper example, exploiting these bugs can require some creativity, and whether they’re vulnerabilities in the first place depends on the context. Automated analysis can uncover instances of this access pattern and will gather as much information as it can to help with further investigation. However, it will also naturally produce false positives.

We can’t tell if a call to open("/user/controlled", O_RDONLY) is a vulnerability without looking at the context. It depends on whether the contents are returned to the user or are used in some security-sensitive way. A call to chmod("/user/controlled", mode) can, depending on the mode, be either a DoS or a privilege escalation. Accessing files in sticky directories (like /tmp) can become a vulnerability if the attacker has also found a bug that lets them delete arbitrary files.

How PathAuditor works

To find issues like this at scale we wrote PathAuditor, a tool that monitors file accesses and logs potential vulnerabilities. PathAuditor is a shared library that can be loaded into processes using LD_PRELOAD. It then hooks all filesystem-related libc functions and checks whether the access is safe. For that, we traverse the path and check whether any component could be replaced by an unprivileged user, for example if a directory is user-writable. If we detect such a pattern, we log it to syslog for manual analysis.
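Conceptually, the interposition works like the following sketch (greatly simplified, and not PathAuditor’s actual implementation; the path check here is a crude placeholder):

/* Greatly simplified sketch of LD_PRELOAD interposition in the spirit
 * of PathAuditor; not the actual implementation. Build as a shared
 * library and link with -ldl. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <syslog.h>

typedef int (*open_fn)(const char *, int, ...);

/* Placeholder: the real tool walks every path component and checks
 * whether an unprivileged user could replace it. */
static int path_is_user_controllable(const char *path) {
    return strncmp(path, "/tmp/", 5) == 0;
}

int open(const char *path, int flags, ...) {
    static open_fn real_open;
    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");  /* the libc open() */

    if (path_is_user_controllable(path))
        syslog(LOG_WARNING, "potentially unsafe open() of %s", path);

    mode_t mode = 0;
    if (flags & O_CREAT) {  /* the mode argument exists only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}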

Here's how you can use it to find vulnerabilities in your code:
  • LD_PRELOAD the library to your binary and then analyse its findings in syslog. You can also add the library to /etc/ld.so.preload, which will preload it in all binaries running on the system.
  • It will then gather the PID and the command line of the calling process, arguments of the vulnerable function, and a stack trace, which provides a starting point for further investigation. You can use the stack trace to find the code path that triggered the violation and manually analyse what would happen if you pointed the path at an arbitrary file or directory.
  • For example, if the code is opening a file and returning the content to the user then you could use it to read arbitrary files. If you control the path of chmod or chown, you might be able to change the permissions of chosen files and so on.
PathAuditor has proved successful at Google and we're excited to share it with the community. The project is still in the early stages and we are actively working on it. We look forward to hearing about any vulnerabilities you discover with the tool, and hope to see pull requests with further improvements.

Try out the PathAuditor tool here.

Marta Rożek was a Google Summer intern in 2019 and contributed to this blog post and the PathAuditor tool.

Posted by Bram Bonné, Senior Software Engineer, Android Platform Security & Chad Brubaker, Staff Software Engineer, Android Platform Security


Android is committed to keeping users, their devices, and their data safe. One of the ways that we keep data safe is by protecting network traffic that enters or leaves an Android device with Transport Layer Security (TLS).

Android 7 (API level 24) introduced the Network Security Configuration in 2016, allowing app developers to configure the network security policy for their app through a declarative configuration file. To ensure apps are safe, apps targeting Android 9 (API level 28) or higher automatically have a policy set by default that prevents unencrypted traffic for every domain.

Today, we’re happy to announce that 80% of Android apps are encrypting traffic by default. The percentage is even greater for apps targeting Android 9 and higher, with 90% of them encrypting traffic by default.

Percentage of apps that block cleartext by default.


Since November 1, 2019, all app updates, as well as all new apps on Google Play, must target at least Android 9. As a result, we expect these numbers to continue improving. Network traffic from these apps is secure by default and any use of unencrypted connections is the result of an explicit choice by the developer.

The latest releases of Android Studio and Google Play’s pre-launch report warn developers when their app includes a potentially insecure Network Security Configuration (for example, when they allow unencrypted traffic for all domains or when they accept user provided certificates outside of debug mode). This encourages the adoption of HTTPS across the Android ecosystem and ensures that developers are aware of their security configuration.

Example of a warning shown to developers in Android Studio.


Example of a warning shown to developers as part of the pre-launch report.


What can I do to secure my app?

For apps targeting Android 9 and higher, the out-of-the-box default is to encrypt all network traffic in transit and trust only certificates issued by an authority in the standard Android CA set without requiring any extra configuration. Apps can provide an exception to this only by including a separate Network Security Config file with carefully selected exceptions.

If your app needs to allow traffic to certain domains, it can do so by including a Network Security Config file that only includes these exceptions to the default secure policy. Keep in mind that you should be cautious about the data received over insecure connections as it could have been tampered with in transit.

<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">insecure.example.com</domain>
        <domain includeSubdomains="true">insecure.cdn.example.com</domain>
    </domain-config>
</network-security-config>

If your app needs to accept user-specified certificates for testing purposes (for example, connecting to a local server during testing), make sure to wrap your <certificates> element inside a <debug-overrides> element. This ensures the connections in the production version of your app are secure.

<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <certificates src="user"/>
        </trust-anchors>
    </debug-overrides>
</network-security-config>

What can I do to secure my library?

If your library directly creates secure/insecure connections, make sure that it honors the app's cleartext settings by checking isCleartextTrafficPermitted before opening any cleartext connection.

Android’s built-in networking libraries and other popular HTTP libraries such as OkHttp or Volley have built-in Network Security Config support.

Giles Hogben, Nwokedi Idika, Android Platform Security, Android Studio and Pre-Launch Report teams

The Android Security Rewards (ASR) program was created in 2015 to reward researchers who find and report security issues to help keep the Android ecosystem safe. Over the past 4 years, we have awarded over 1,800 reports, and paid out over four million dollars.

Today, we’re expanding the program and increasing reward amounts. We are introducing a top prize of $1 million for a full chain remote code execution exploit with persistence which compromises the Titan M secure element on Pixel devices. Additionally, we will be launching a specific program offering a 50% bonus for exploits found on specific developer preview versions of Android, meaning our top prize is now $1.5 million.

As mentioned in a previous blog post, in 2019 Gartner rated the Pixel 3 with Titan M as having the most “strong” ratings in the built-in security section out of all devices evaluated. This is why we’ve created a dedicated prize to reward researchers for exploits found to circumvent the secure element’s protections.

In addition to exploits involving Pixel Titan M, we have added other categories of exploits to the rewards program, such as those involving data exfiltration and lockscreen bypass. These rewards go up to $500,000 depending on the exploit category. For full details, please refer to the Android Security Rewards Program Rules page.

Now that we’ve covered some of what’s new, let’s take a look back at some milestones from this year. Here are some highlights from 2019:

  • Total payouts in the last 12 months have been over $1.5 million.
  • Over 100 participating researchers have received an average reward amount of over $3,800 per finding (46% increase from last year). On average, this means we paid out over $15,000 (20% increase from last year) per researcher!
  • The top reward paid out in 2019 was $161,337.

Top Payout

The highest reward paid out to a member of the research community was for a report from Guang Gong (@oldfresher) of Alpha Lab, Qihoo 360 Technology Co. Ltd. This report detailed the first reported 1-click remote code execution exploit chain on the Pixel 3 device. Guang Gong was awarded $161,337 from the Android Security Rewards program and $40,000 from the Chrome Rewards program, for a total of $201,337. The $201,337 combined reward is also the highest reward for a single exploit chain across all Google VRP programs. The Chrome vulnerabilities leveraged in this report were fixed in Chrome 77.0.3865.75 and released in September, protecting users against this exploit chain.

We’d like to thank all of our researchers for contributing to the security of the Android ecosystem. If you’re interested in becoming a researcher, check out our Bughunter University for information on how to get started.

Starting today, November 21, 2019, the new rewards take effect. Any reports submitted before November 21, 2019 will be rewarded based on the previously existing rewards table.

Happy bug hunting!



We previously announced that starting with Chrome 76, most latest-generation Chromebooks gained the option to enable a built-in FIDO authenticator backed by hardware-based Titan security. For supported services (e.g. G Suite, Google Cloud Platform), enterprise administrators can now allow end users to use the power button on these devices to protect against certain classes of account takeover attempts. This feature is disabled by default; however, administrators can enable it by changing the DeviceSecondFactorAuthentication policy in the Google Admin console.

Before we dive deeper into this capability, let’s first cover the main use cases FIDO technology solves, and then explore how this new enhancement can satisfy an advanced requirement that can help enterprise organizations.

Main use cases
FIDO technology aims to solve three separate use cases for relying parties (otherwise referred to as internet services) by helping to:
  1. Prevent phishing during initial login to a service on a new device;
  2. Reverify a user’s identity to a service on a device they’ve already logged in to;
  3. Confirm that the device a user is connecting from is still the original device from which they previously logged in. This is typically needed in the enterprise.
Security-savvy professionals may interpret the third use case as a special instance of use case #2. However, there are some differences, which we break down a bit further below:
  • In case #2, the problem that FIDO technology tries to solve is re-verifying a user’s identity by unlocking a private key stored on the device.
  • In case #3, FIDO technology helps to determine whether a previously created key is still available on the original device without any proof of who the user is.
How use case #1 works: Roaming security keys 

Because the whole premise of this use case is one in which the user logs in on a brand new device they’ve never authenticated before, it requires the user to have a FIDO security key (a removable, cross-platform device, also known as a roaming authenticator). By this definition, a built-in FIDO authenticator on Chrome OS devices would not be able to satisfy this requirement, because it could not help verify the user’s identity without having been set up previously. Upon initial log-in, the user’s identity is verified together with the presence of a security key (such as Google’s Titan Security Key) previously tied to their account.

Titan Security Keys 


Once the user is successfully logged in, trust is conferred from the security key to the device on which the user is logging on, usually by placing a cookie or other token on the device in order for the relying party to “remember” that the user already performed second-factor authentication on this device. Once this step is completed, it is no longer necessary to require a physical second factor on this device because the presence of the cookie signals to the relying party that this device is to be trusted.

Optionally, some services might require the user to still periodically verify that it’s the correct user in front of the already recognized device (for example, particularly sensitive and regulated services such as financial services companies). In almost all cases, it shouldn’t be necessary for the user, in addition to providing their knowledge factor (such as a password), to also re-present their second factor when re-authenticating, as they’ve already done that during initial bootstrapping.

Note that on Chrome OS devices, your data is encrypted when you’re not logged on, which further protects your data against malicious access.

How use case #2 works: Re-authentication 

Frequently referred to as “re-authentication,” use case #2 allows a relying party to reverify that the same user is still interacting with the service from a previously verified device. This mainly happens when a user performs an action that’s particularly sensitive, such as changing their password or when interacting with regulated services, such as financial services companies. In this case, a built-in biometric authenticator (e.g. a fingerprint sensor or PIN on Android devices) can be registered, which offers users a more convenient way to re-verify their identity to the service in question. In fact, we have recently enabled this use case on Android devices for some Google services.

Additionally, there are security benefits to this particular solution, as the relying party doesn’t only have to trust a previously issued cookie, but can now both verify that the right user is present (by means of a biometric) and that a particular private key is available on this particular device. Sometimes this promise is made based on key material stored in hardware (e.g. Titan security in Pixel Slate), which can be a strong indicator that the relying party is interacting with the right user on the right device.

How use case #3 works: Built-in device authenticator

Verifying that the device a user previously logged in on is still the device from which they’re interacting with the relying party is the challenge that the built-in FIDO authenticator on most latest-generation Chromebooks is able to help solve.

Earlier we noted that upon initial log-in, relying parties regularly place cookies or tokens on a user’s device, so they can remember that a user has previously authenticated. Under some circumstances, such as when there’s malware present on a device, it might be possible for these tokens to be exfiltrated. Asking for the “touch of a built-in authenticator” at regular intervals helps the relying party know that the user is still interacting from a legitimate device which has previously been issued a token. It also helps verify that the token has not been exfiltrated to a different device, since FIDO authenticators offer increased protection against exfiltration of the private key, which is usually housed in the hardware itself. For example, in the case of most latest-generation Chromebooks (e.g. Pixel Slate), it’s protected by hardware-based Titan security.

Pixel Slate devices are built with hardware-based Titan security 

In the case of our implementation on Chrome OS, the FIDO keys are also scoped to the specific logged in user, meaning that every user on the device essentially gets their own FIDO authenticator that can’t be accessed across user boundaries. We expect this use case to be particularly useful in enterprise environments, which is why the feature is not enabled by default. Administrators can enable it in the Google Admin console.

We still highly recommend that users have a primary FIDO security key, such as a Titan Security Key or an Android phone. This should be used in conjunction with a “FIDO re-authentication” policy, which is supported by G Suite.

Enabling the built-in FIDO authenticator in the Google Admin console

Even though it’s technically possible to register the built-in FIDO authenticator on a Chrome OS device as a “security key” with services, it’s best to avoid doing so, as users run an increased risk of account lockout if they ever need to sign in to the service from a different machine.

Supported Chromebooks
Starting with Chrome 76, most latest-generation Chromebooks gained the option to enable a built-in FIDO authenticator backed by hardware-based Titan security. To see if your Chromebook can be enabled with this capability, you can navigate to chrome://system and check the “tpm-version” entry. If “vendor” equals “43524f53” (the hexadecimal ASCII encoding of “CROS”), then your Chromebook is backed by Titan security.

Navigating to chrome://system on your Chromebook

Summary

In summary, we believe that this new enhancement can provide value to enterprise organizations that want to confirm that the device a user is connecting from is still the original device from which a user logged in from in the past. Most users, however, should be using roaming FIDO security keys, such as Titan Security Key, their Android phone, or security keys from other vendors, in order to avoid account lockouts.

Memory safety errors, like use-after-frees and out-of-bounds reads/writes, are a leading source of vulnerabilities in C/C++ applications. Despite investments in preventing and detecting these errors in Chrome, over 60% of high severity vulnerabilities in Chrome are memory safety errors. Some memory safety errors don’t lead to security vulnerabilities but simply cause crashes and instability.

Chrome uses state-of-the-art techniques to prevent these errors, including:

  • Coverage-guided fuzzing with AddressSanitizer (ASan)
  • Unit and integration testing with ASan
  • Defensive programming, like custom libraries to perform safe math or provide bounds checked containers
  • Mandatory code review

Chrome also makes use of sandboxing and exploit mitigations to complicate exploitation of memory errors that go undetected by the methods above.

AddressSanitizer is a compiler instrumentation that finds memory errors occurring on the heap, stack, or in globals. ASan is highly effective, and it has one of the lowest overheads of any instrumentation that detects this class of errors; however, it still incurs an average 2-3x performance and memory overhead. This makes it suitable for use with unit tests or fuzzing, but not for deployment to end users. Chrome used to deploy SyzyASAN instrumented binaries to detect memory errors. SyzyASAN had a similar overhead, so it was only deployed to a small subset of users on the canary channel. It was discontinued after the Windows toolchain switched to LLVM.

GWP-ASan, also known by its recursive backronym, GWP-ASan Will Provide Allocation Sanity, is a sampling allocation tool designed to detect heap memory errors occurring in production with negligible overhead. Because of its negligible overhead we can deploy GWP-ASan to the entire Chrome user base to find memory errors happening in the real world that are not caught by fuzzing or testing with ASan. Unlike ASan, GWP-ASan cannot find memory errors on the stack or in globals.

GWP-ASan is currently enabled for all Windows and macOS users for allocations made using malloc() and PartitionAlloc. It is only enabled for a small fraction of allocations and processes to reduce performance and memory overhead to a negligible amount. At the time of writing it has found over sixty bugs (many of the reports are still restricted). About 90% of the issues GWP-ASan has found are use-after-frees. The rest are out-of-bounds reads and writes.
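The core mechanism (greatly simplified in this sketch, which is our illustration rather than GWP-ASan’s actual code) is to serve a sampled subset of allocations from pages flanked by inaccessible guard pages, so that out-of-bounds accesses and use-after-frees fault immediately:

/* Greatly simplified sketch of a guard-page allocator in the spirit of
 * GWP-ASan; illustrative, not the actual implementation. Real tools
 * also preserve alignment, sample allocations, and record stack traces. */
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static void *guarded_alloc(size_t size) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t span = ((size + page - 1) / page) * page;

    /* Reserve guard page + data pages + guard page, all inaccessible. */
    uint8_t *base = mmap(NULL, span + 2 * page, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    /* Make only the data pages usable; touching a guard page faults. */
    mprotect(base + page, span, PROT_READ | PROT_WRITE);

    /* Right-align the allocation so an out-of-bounds access past the
     * end lands on the trailing guard page. */
    return base + page + (span - size);
}

static void guarded_free(void *ptr, size_t size) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    uintptr_t data = ((uintptr_t)ptr / page) * page;

    /* Revoke access instead of reusing the memory, so any later
     * use-after-free faults immediately. */
    mprotect((void *)data, ((size + page - 1) / page) * page, PROT_NONE);
}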

To learn more, check out our full write up on GWP-ASan here.


Fighting against bad actors in the ecosystem is a top priority for Google, but we know there are others doing great work to find and protect against attacks. Our research partners in the mobile security world have built successful teams and technology, helping us in the fight. Today, we’re excited to take this collaboration to the next level, announcing a partnership between Google, ESET, Lookout, and Zimperium. It’s called the App Defense Alliance and together, we’re working to stop bad apps before they reach users’ devices.
The Android ecosystem is thriving with over 2.5 billion devices, but this popularity also makes it an attractive target for abuse. This is true of all global platforms: where there is software with worldwide proliferation, there are bad actors trying to attack it for their gain. Working closely with our industry partners gives us an opportunity to collaborate with some truly talented researchers in our field and the detection engines they’ve built. This is all with the goal of, together, reducing the risk of app-based malware, identifying new threats, and protecting our users.
What will the App Defense Alliance do?
Our number one goal as partners is to ensure the safety of the Google Play Store, quickly finding potentially harmful applications and stopping them from being published.
As part of this Alliance, we are integrating our Google Play Protect detection systems with each partner’s scanning engines. This will generate new app risk intelligence as apps are being queued to publish. Partners will analyze that dataset and act as another, vital set of eyes prior to an app going live on the Play Store.
Who are the partners?
All of our partners work in the world of endpoint protection, and offer specific products to protect mobile devices and the mobile ecosystem. Like Google Play Protect, our partners’ technologies use a combination of machine learning and static/dynamic analysis to detect abusive behavior. Multiple heuristic engines working in concert will increase our efficiency in identifying potentially harmful apps.
We hand-picked these partners based on their successes in finding potential threats and their dedication to improving the ecosystem. These partners are regularly recognized in analyst reports for their work.
Industry collaboration is key
Knowledge sharing and industry collaboration are important aspects in securing the world from attacks. We believe working together is the ultimate way we will get ahead of bad actors. We’re excited to work with these partners to arm the Google Play Store against bad apps.
Want to learn more about the App Defense Alliance’s work? Visit us here.


Security begins with secure infrastructure. To have higher confidence in the security and integrity of the infrastructure, we need to anchor our trust at the foundation: in a special-purpose chip.

Today, along with our partners, we are excited to announce OpenTitan - the first open source silicon root of trust (RoT) project. OpenTitan will deliver a high-quality RoT design and integration guidelines for use in data center servers, storage, peripherals, and more. Open sourcing the silicon design makes it more transparent, trustworthy, and ultimately, secure.

The OpenTitan logo



Anchoring trust in silicon

Silicon RoT can help ensure that the hardware infrastructure and the software that runs on it remain in their intended, trustworthy state by verifying that the critical system components boot securely using authorized and verifiable code. Silicon RoT can provide many security benefits by helping to:
  • Ensure that a server or a device boots with the correct firmware and hasn't been infected by low-level malware.
  • Provide a cryptographically unique machine identity, so an operator can verify that a server or a device is legitimate.
  • Protect secrets like encryption keys in a tamper-resistant way even for people with physical access (e.g., while a server or a device is being shipped).
  • Provide authoritative, tamper-evident audit records and other runtime security services.
The silicon RoT technology can be used in server motherboards, network cards, client devices (e.g., laptops, phones), consumer routers, IoT devices, and more. For example, Google has relied on a custom-made RoT chip, Titan, to help ensure that machines in Google’s data centers boot from a known trustworthy state with verified code; it is our system root of trust. Recognizing the importance of anchoring the trust in silicon, together with our partners we want to spread the benefits of reliable silicon RoT chips to our customers and the rest of the industry. We believe that the best way to accomplish that is through open source silicon.
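In code, the boot-verification idea above reduces to something like the following conceptual sketch (our illustration, with hypothetical helper and symbol names; not OpenTitan’s actual firmware):

/* Conceptual sketch of the secure-boot check a silicon RoT performs.
 * All names below are hypothetical primitives assumed to exist in ROM. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern void sha256(const uint8_t *data, size_t len, uint8_t digest[32]);
extern bool sig_verify(const uint8_t digest[32],
                       const uint8_t *sig, const uint8_t *pubkey);
extern const uint8_t BAKED_IN_PUBKEY[];   /* immutable, stored in ROM */

bool verify_next_boot_stage(const uint8_t *image, size_t len,
                            const uint8_t *sig) {
    uint8_t digest[32];

    /* Measure the next boot stage, then check the measurement against a
     * signature made with a key the RoT already trusts. Boot proceeds
     * only if verification succeeds. */
    sha256(image, len, digest);
    return sig_verify(digest, sig, BAKED_IN_PUBKEY);
}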

Raising the transparency and security bar

Similar to open source software, open source silicon can:
  1. Enhance trust and security through design and implementation transparency. Issues can be discovered early, and the need for blind trust is reduced.
  2. Enable and encourage innovation through contributions to the open source design.
  3. Provide implementation choice and preserve a set of common interfaces and software compatibility guarantees through a common, open reference design.
The OpenTitan project is managed by the lowRISC CIC, an independent not-for-profit company with a full-stack engineering team based in Cambridge, UK, and is supported by a coalition of like-minded partners, including ETH Zurich, G+D Mobile Security, Google, Nuvoton Technology, and Western Digital.


The founding partners of the OpenTitan project
OpenTitan is an active engineering project staffed by a team of engineers representing a coalition of partners who bring ideas and expertise from many perspectives. We are transparently building the logical design of a silicon RoT, including an open source microprocessor (the lowRISC Ibex, a RISC-V-based design), cryptographic coprocessors, a hardware random number generator, a sophisticated key hierarchy, memory hierarchies for volatile and non-volatile storage, defensive mechanisms, IO peripherals, secure boot, and more. With OpenTitan, a coalition of partners have come together to deliver a more open, transparent, and high-quality RoT.



A comparison of the major design components of a traditional RoT and an OpenTitan RoT


The OpenTitan project is rooted in three key principles:
  • Transparency - anyone can inspect, evaluate, and contribute to OpenTitan’s design and documentation to help build more transparent, trustworthy silicon RoT for all.
  • High quality - we are building a high-quality logically-secure silicon design, including reference firmware, verification collateral, and technical documentation.
  • Flexibility - adopters can reduce costs and reach more customers by using a vendor- and platform-agnostic silicon RoT design that can be integrated into data center servers, storage, peripheral and other devices.

Participating in the OpenTitan project

OpenTitan will be helpful for chip manufacturers, platform providers, and security-conscious enterprise organizations that want to enhance their infrastructure with silicon-based security. Visit our GitHub repository today.

If you are interested in actively collaborating on OpenTitan to help make secure open source silicon a reality, we encourage you to contact the OpenTitan team. If you would like your product to be considered for a pilot OpenTitan RoT integration, the team would be excited to hear from you.



Intro
This is the final post in a series of four, in which we set out to revisit various BeyondCorp topics and share lessons that were learnt along the internal implementation path at Google.

The first post in this series focused on providing necessary context for how Google adopted BeyondCorp, Google’s implementation of the zero trust security model. The second post focused on managing devices - how we decide whether or not a device should be trusted and why that distinction is necessary. The third post focused on tiered access - how to define access tiers and rules and how to simplify troubleshooting when things go wrong.

This post introduces the concept of gated services, how to identify and, subsequently, migrate them, and the associated lessons we learned along the way.

High level architecture for BeyondCorp

Identifying and gating services

How do you identify and categorize all the services that should be gated?

Google began as a web-based company, and as it matured in the modern era, most internal business applications were developed with a web-first approach. These applications were hosted on internal architecture similar to that of our external services, with the exception that they could only be accessed on corporate office networks. Thus, identifying services to be gated by BeyondCorp was easier for us because most internal services were already properly inventoried and hosted via standard, central solutions. Migration, in many cases, was as simple as a DNS change. Solid IT asset inventory systems and maintenance are critical to migrating to a zero trust security model.

Enforcement of zero trust access policies began with services which we determined would not be meaningfully impacted by the change in access requirements. For most services, requirements could be gathered via typical access log analysis or consulting with service owners. Services which could not be readily gated by default ACL requirements required service owners to develop strict access groups and/or eliminate risky workflows before they could be migrated.

How do you know which trust tier is needed for every service?
As discussed in our previous blog post, Google makes internal services available based on device trust tiers. Today, those services are accessible by the highest trust tier by default.

When the intent of the change is to restrict access to a service to a specific group or team, service owners are free to propose access changes to add or remove restrictions to their service. Access changes which are deemed to be sufficiently low risk can be automatically approved. In all other cases, such as where the owning team wants to expose a service to a risky device tier, they must work with security engineers to follow the principle of least privilege and devise solutions.

What do you do with services that are incompatible with BeyondCorp ideals?

It may not always be possible to gate an application by the preferred zero trust solution. Services that cannot be easily gated typically fall into these categories:
  • Type 1: "Non-proxyable protocols", e.g. non-HTTP/HTTPS traffic.
  • Type 2: Low latency requirements or localized high throughput traffic.
  • Type 3: Administrative and emergency access networks.
The typical first step in finding a solution for these cases is finding a way to remove the need for that service altogether. In many cases, this was made possible by deprecating or replacing systems which could not be made compatible with the BeyondCorp implementation.

When that was not an option, we found that no single solution would work for all critical requirements:
  • Solutions for the "Type 1" traffic have generally involved maintaining a specialized client tunneling solution that strongly enforces authentication and authorization decisions on both the client and the server end of the connection. This is usually client/server type traffic which is similar to HTTP traffic in that connectivity is typically multi-point to point.
  • Solutions to the "Type 2" problems generally rely on moving BeyondCorp-compatible compute resources locally or developing a solution tightly integrated with network access equipment to selectively forward "local" traffic without permanently opening network holes.
  • As for “Type 3,” it would be ideal to completely eliminate all privileged internal networks. However, the reality is that some privileged networking will likely always be required to maintain the network itself and also to provide emergency access during outages.
It should be noted that server-to-server traffic in secure production data center environments does not necessarily rely on BeyondCorp, although many systems are integrated regardless, due to the Service-Oriented Design benefits that BeyondCorp inherently provides. 

How do you prioritize gating?

Prioritization starts by identifying all the services that are currently accessible via internal IP-access alone and migrating the most critical services to BeyondCorp, while working to slowly ratchet down permissions via exception management processes. The criticality of a service may also depend on the number and type of its users, the sensitivity of the data it handles, and the security and privacy risks the service enables.

Migration logistics

Most services required integration testing with the BeyondCorp proxy. Service teams were encouraged to stand up "test" services which were used to test functionality behind the BeyondCorp proxy. Most services that performed their own access control enforcement were reconfigured to instead rely on BeyondCorp for all user/group authentication and authorization. Service teams have been encouraged to develop their own "fine-grained" discretionary access controls in the services by leveraging session data provided by the BeyondCorp proxy.

Lessons learnt

Allow coarse gating and exceptions

Inventory: It's easy to overlook the importance of keeping a good inventory of services, devices, owners and security exceptions. The journey to a BeyondCorp world should start by solving organizational challenges when managing and maintaining data quality in inventory systems. In short, knowing how a service works, who should access it, and what makes that acceptable are the central tenets of managing BeyondCorp. Fine-grained access control is severely complicated when this insight is missing.

Legacy protocols: Most large enterprises will inevitably need to support workflows and protocols which cannot be migrated to a BeyondCorp world (in any reasonable amount of time). Exception management and service inventory become crucial at this stage while stakeholders develop solutions.
Run highly reliable systems
The BeyondCorp initiative would not be sustainable at Google’s scale without the involvement of various Site Reliability Engineering (SRE) teams across the inventory systems, BeyondCorp infrastructure and client side solutions. The ability to successfully achieve widespread adoption of changes this large can be hampered by perceived (or in some cases, actual) reliability issues. Understanding the user workflows that might be impacted, working with key stakeholders and ensuring the transition is smooth and trouble-free for all users helps protect against backlash and avoids users finding undesirable workarounds. By applying our reliability engineering practices, those teams helped to ensure that the components of our implementation all have availability and latency targets, operational robustness, and so on, compatible with our business needs and intended user experiences.

Put employees in control as much as possible

Employees cover a broad range of job functions with varying requirements of technology and tools. In addition to communicating changes to our employees early, we provide them with self-service solutions for handling exceptions or addressing issues affecting their devices. By putting our employees in control, we help to ensure that security mechanisms do not get in their way, helping with the acceptance and scaling processes.

Summary

Throughout this series of blog posts, we set out to revisit and demystify BeyondCorp, Google’s internal implementation of a zero trust security model. The four posts had different focus areas - setting context, devices, tiered access and, finally, services (this post).

If you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers. Lastly, stay tuned for an upcoming BeyondCorp webinar on Cloud OnAir in a few months where you will be able to learn more and ask us questions. We hope that these blog posts, research papers, and webinars will help you on your journey to enable zero trust access.

Thank you to the editors of the BeyondCorp blog post series, Puneet Goel (Product Manager), Lior Tishbi (Program Manager), and Justin McWilliams (Engineering Manager).


The Linux kernel is responsible for enforcing much of Android’s security model, which is why we have put a lot of effort into hardening the Android Linux kernel against exploitation. In Android 9, we introduced support for Clang’s forward-edge Control-Flow Integrity (CFI) enforcement to protect the kernel from code reuse attacks that modify stored function pointers. This year, we have added backward-edge protection for return addresses using Clang’s Shadow Call Stack (SCS).
Google’s Pixel 3 and 3a phones have kernel SCS enabled in the Android 10 update, and Pixel 4 ships with this protection out of the box. We have made patches available to all supported versions of the Android kernel and also maintain a patch set against upstream Linux. This post explains how kernel SCS works, the benefits and trade-offs, how to enable the feature, and how to debug potential issues.

Return-oriented programming

As kernel memory protections increasingly make code injection more difficult, attackers commonly use control flow hijacking to exploit kernel bugs. Return-oriented programming (ROP) is a technique where the attacker gains control of the kernel stack to overwrite function return addresses and redirect execution to carefully selected parts of existing kernel code, known as ROP gadgets. While address space randomization and stack canaries can make this attack more challenging, return addresses stored on the stack remain vulnerable to many overwrite flaws. The general availability of tools for automatically generating this type of kernel exploit makes protecting against it increasingly important.

Shadow Call Stack

One method of protecting return addresses is to store them in a separately allocated shadow stack that’s not vulnerable to traditional buffer overflows. This can also help protect against arbitrary overwrite attacks.
Clang added the Shadow Call Stack instrumentation pass for arm64 in version 7. When enabled, each non-leaf function that pushes the return address to the stack will be instrumented with code that also saves the address to a shadow stack. A pointer to the current task’s shadow stack is always kept in the x18 register, which is reserved for this purpose. Here’s an illustrative sketch (not actual compiler output) of what the instrumentation looks like in a typical kernel function:
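/* Conceptual sketch of Clang SCS instrumentation on arm64; not actual
 * compiler output. x18 holds the current task's shadow stack pointer. */
static void do_something(void) { /* callee, making the caller non-leaf */ }

void example_function(void) {
    /* SCS prologue:    str x30, [x18], #8
     *   pushes the return address (x30/LR) onto the shadow stack.
     * Normal prologue: the return address is still also saved on the
     *   regular stack, but only for unwinding and debugging. */
    do_something();
    /* SCS epilogue:    ldr x30, [x18, #-8]!
     *   reloads the return address from the shadow stack before ret,
     *   so an overwritten value on the regular stack is never used. */
}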

SCS doesn’t require error handling as it uses the return address from the shadow stack unconditionally. Compatibility with stack unwinding for debugging purposes is maintained by keeping a copy of the return address in the normal stack, but this value is never used for control flow decisions.
Despite requiring a dedicated register, SCS has minimal performance overhead. The instrumentation itself consists of one load and one store instruction per function, which results in a performance impact that’s within noise in our benchmarking. Allocating a shadow stack for each thread does increase the kernel’s memory usage but as only return addresses are stored, the stack size defaults to 1kB. Therefore, the overhead is a fraction of the memory used for the already small regular kernel stacks.
SCS patches are available for Android kernels 4.14 and 4.19, and for upstream Linux. It can be enabled using the following configuration options:

CONFIG_SHADOW_CALL_STACK=y
# CONFIG_SHADOW_CALL_STACK_VMAP is not set
# CONFIG_DEBUG_STACK_USAGE is not set

By default, shadow stacks are not virtually allocated to minimize memory overhead, but CONFIG_SHADOW_CALL_STACK_VMAP can be enabled for better stack exhaustion protection. With CONFIG_DEBUG_STACK_USAGE, the kernel will also print out shadow stack usage in addition to normal stack usage which can be helpful when debugging issues.

Alternatives

Signing return addresses using ARMv8.3 Pointer Authentication (PAC) is an alternative to shadow stacks. PAC has similar security properties and comparable performance to SCS, but without the memory allocation overhead. Unfortunately, PAC requires hardware support, which means it cannot be used on existing devices, but it may be a viable option for future devices. For x86, Intel’s Control-flow Enforcement Technology (CET) extension will offer native shadow stack support, but it also requires compatible hardware.

Conclusion

We have improved Linux kernel code reuse attack protections on Pixel devices running Android 10. Pixel 3, 3a, and 4 kernels have both CFI and SCS enabled and we have made patches available to all Android OEMs.

The Chrome Security team values having multiple lines of defense. Web browsers are complex, and malicious web pages may try to find and exploit browser bugs to steal data. Additional lines of defense, like sandboxes, make it harder for attackers to access your computer, even if bugs in the browser are exploited. With Site Isolation, Chrome has gained a new line of defense that helps protect your accounts on the Web as well.

Site Isolation ensures that pages from different sites end up in different sandboxed processes in the browser. Chrome can thus limit the entire process to accessing data from only one site, making it harder for an attacker to steal cross-site data. We started isolating all sites for desktop users back in Chrome 67, and now we’re excited to enable it on Android for sites that users log into in Chrome 77. We've also strengthened Site Isolation on desktop to help defend against even fully compromised processes.
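(For readers who want to experiment on desktop: strict isolation of every site can also be forced manually with Chrome’s site-per-process command-line switch. The switch itself is real; the binary name and how you pass it vary by platform.)

# Force strict Site Isolation for all sites (desktop Chrome)
google-chrome --site-per-process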

Site Isolation helps defend against two types of threats. First, attackers may try to use advanced "side channel" attacks to leak sensitive data from a process through unexpected means. For example, Spectre attacks take advantage of CPU performance features to access data that should be off limits. With Site Isolation, it is harder for the attacker to get cross-site data into their process in the first place.

Second, even more powerful attackers may discover security bugs in the browser, allowing them to completely hijack the sandboxed process. On desktop platforms, Site Isolation can now catch these misbehaving processes and limit their access to cross-site data. We're working to bring this level of hijacked process protection to Android in the future as well.

Thanks to this extra line of defense, Chrome can now help keep your web accounts even more secure. We are still making improvements to get the full benefits of Site Isolation, but this change gives Chrome a solid foundation for protecting your data.

If you’d like to learn more, check out our technical write up on the Chromium blog.




Securing access to online accounts is critical for safeguarding private, financial, and other sensitive data online. Phishing - where an attacker tries to trick you into giving them your username and password - is one of the most common causes of data breaches. To protect user accounts, we’ve long made it a priority to offer users many convenient forms of 2-Step Verification (2SV), also known as two-factor authentication (2FA), in addition to Google’s automatic protections. These measures help to ensure that users are not relying solely on passwords for account security.

For users at higher risk (e.g., IT administrators, executives, politicians, activists) who need more effective protection against targeted attacks, security keys provide the strongest form of 2FA. To make this phishing-resistant security accessible to more people and businesses, we recently built this capability into Android phones, expanded the availability of Titan Security Keys to more regions (Canada, France, Japan, the UK), and extended Google’s Advanced Protection Program to the enterprise.

Starting tomorrow, you will have an additional option: Google’s new USB-C Titan Security Key, compatible with your Android, Chrome OS, macOS, and Windows devices.



USB-C Titan Security Key

We partnered with Yubico to manufacture the USB-C Titan Security Key. We have had a long-standing working and customer relationship with Yubico that began in 2012 with the collaborative effort to create the FIDO Universal 2nd Factor (U2F) standard, the first open standard to enable phishing-resistant authentication. This is the same security technology that we use at Google to protect access to internal applications and systems.

USB-C Titan Security Keys are built with a hardware secure element chip that includes firmware engineered by Google to verify the key’s integrity. This is the same secure element chip and firmware that we use in our existing USB-A/NFC and Bluetooth/NFC/USB Titan Security Key models manufactured in partnership with Feitian Technologies.

USB-C Titan Security Keys will be available tomorrow individually for $40 on the Google Store in the United States. USB-A/NFC and Bluetooth/NFC/USB Titan Security Keys will also become available individually in addition to the existing bundle. Bulk orders are available for enterprise organizations in select countries.


We highly recommend that all users at higher risk of targeted attacks get Titan Security Keys and enroll in the Advanced Protection Program (APP), which provides Google’s industry-leading security protections to defend against evolving methods that attackers use to gain access to your accounts and data. You can also use Titan Security Keys for any site where FIDO security keys are supported for 2FA, including your personal or work Google Account, 1Password, Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

Update (04/06/2020): Mixed image autoupgrading was originally scheduled for Chrome 81, but will be delayed until at least Chrome 84. Check the Chrome Platform Status entry for the latest information about when mixed images will be autoupgraded and blocked if they fail to load over https://. Sites with mixed images will continue to trigger the “Not Secure” warning.


Today we’re announcing that Chrome will gradually start ensuring that https:// pages can only load secure https:// subresources. In a series of steps outlined below, we’ll start blocking mixed content (insecure http:// subresources on https:// pages) by default. This change will improve user privacy and security on the web, and present a clearer browser security UX to users.
In the past several years, the web has made great progress in transitioning to HTTPS: Chrome users now spend over 90% of their browsing time on HTTPS on all major platforms. We’re now turning our attention to making sure that HTTPS configurations across the web are secure and up-to-date.
HTTPS pages commonly suffer from a problem called mixed content, where subresources on the page are loaded insecurely over http://. Browsers block many types of mixed content by default, like scripts and iframes, but images, audio, and video are still allowed to load, which threatens users’ privacy and security. For example, an attacker could tamper with a mixed image of a stock chart to mislead investors, or inject a tracking cookie into a mixed resource load. Loading mixed content also leads to a confusing browser security UX, where the page is presented as neither secure nor insecure but somewhere in between.
In a series of steps starting in Chrome 79, Chrome will gradually move to blocking all mixed content by default. To minimize breakage, we will autoupgrade mixed resources to https://, so sites will continue to work if their subresources are already available over https://. Users will be able to enable a setting to opt out of mixed content blocking on particular websites, and below we’ll describe the resources available to developers to help them find and fix mixed content.
Timeline
Instead of blocking all mixed content all at once, we’ll be rolling out this change in a series of steps.
  • In Chrome 79, releasing to stable channel in December 2019, we’ll introduce a new setting to unblock mixed content on specific sites. This setting will apply to mixed scripts, iframes, and other types of content that Chrome currently blocks by default. Users can toggle this setting by clicking the lock icon on any https:// page and clicking Site Settings. This will replace the shield icon that shows up at the right side of the omnibox for unblocking mixed content in previous versions of desktop Chrome.



Accessing Site settings, from which users will be able to unblock mixed content loads in Chrome 79.
  • In Chrome 80, mixed audio and video resources will be autoupgraded to https://, and Chrome will block them by default if they fail to load over https://. Chrome 80 will be released to early release channels in January 2020. Users can unblock affected audio and video resources with the setting described above.
  • Also in Chrome 80, mixed images will still be allowed to load, but they will cause Chrome to show a “Not Secure” chip in the omnibox. We anticipate that this is a clearer security UI for users and that it will motivate websites to migrate their images to HTTPS. Developers can use the upgrade-insecure-requests or block-all-mixed-content Content Security Policy directives to avoid this warning (see the example after this list).



Omnibox treatment for websites that load mixed images in Chrome 80. 

  • In Chrome 81, mixed images will be autoupgraded to https://, and Chrome will block them by default if they fail to load over https://. Chrome 81 will be released to early release channels in February 2020.
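As referenced above, a site can opt its subresource requests into automatic upgrading today with a standard Content Security Policy directive. The header and the equivalent meta tag are shown below; where you configure them depends on your server setup.

Content-Security-Policy: upgrade-insecure-requests

<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">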
Resources for developers
Developers should migrate their mixed content to https:// immediately to avoid warnings and breakage. Here are some resources:
  • Use Content Security Policy and Lighthouse’s mixed content audit to discover and fix mixed content on your site (a report-only policy sketch follows this list).
  • See this guide for general advice on migrating servers to HTTPS.
  • Check with your CDN, web host, or content management system to see if they have special tools for debugging mixed content. For example, Cloudflare offers a tool to rewrite mixed content to https://, and WordPress plugins are available as well.
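As a sketch of the Content Security Policy approach in the first bullet: serving a report-only policy makes browsers report, rather than block, any subresource that is not loaded over https://, letting you enumerate mixed content in production without breaking the page. The report endpoint URL below is illustrative.

Content-Security-Policy-Report-Only: default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri https://example.com/csp-reports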


[Cross-posted from the Chromium blog]

Update (04/06/2020): The removal of legacy TLS versions was originally scheduled for Chrome 81, but is being delayed until at least Chrome 84. Chrome will continue to show the “Not Secure” indicator for sites using TLS 1.0 or 1.1, and Chrome 81 Beta will show the full page interstitial warning for affected sites. Our hope is that this will help alert affected site owners ahead of the delayed removal. Check the Chrome Platform Status entry for the latest information about the removal of TLS 1.0 and 1.1 support.

Last October we announced our plans to remove support for TLS 1.0 and 1.1 in Chrome 81. In this post we’re announcing a pre-removal phase in which we’ll introduce a gentler warning UI, and previewing the UI that we’ll use to block TLS 1.0 and 1.1 in Chrome 81. Site administrators should immediately enable TLS 1.2 or later to avoid these UI treatments.
While legacy TLS usage has decreased, we still see over 0.5% of page loads using these deprecated versions. To ease the transition to the final removal of support and to reduce user surprise when outdated configurations stop working, Chrome will discontinue support in two steps: first, showing new security indicators for sites using these deprecated versions; and second, blocking connections to these sites with a full page warning.
Pre-removal warning
Starting January 13, 2020, for Chrome 79 and higher, we will show a “Not Secure” indicator for sites using TLS 1.0 or 1.1 to alert users to the outdated configuration:
The new security indicator and connection security information that will be shown to users who visit a site using TLS 1.0 or 1.1 starting in January 2020.

When a site uses TLS 1.0 or 1.1, Chrome will downgrade the security indicator and show a more detailed warning message inside Page Info. This change will not block users from visiting or using the page, but will alert them to the downgraded security of the connection.
Note that Chrome already shows warnings in DevTools to alert site owners that they are using a deprecated version of TLS.
Removal UI
In Chrome 81, which will be released to the Stable channel in March 2020, we will begin blocking connections to sites using TLS 1.0 or 1.1, showing a full page interstitial warning:
The full screen interstitial warning that will be shown to users who visit a site using TLS 1.0 or 1.1 starting in Chrome 81. Final warning subject to change.

Site administrators should immediately enable TLS 1.2 or later. Depending on server software (such as Apache or nginx), this may be a configuration change or a software update. Additionally, we encourage all sites to revisit their TLS configuration. In our original announcement, we outlined our current criteria for modern TLS.
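For illustration, the relevant directives for two common servers are shown below; exact syntax and TLS 1.3 availability depend on your server and TLS library versions, so treat these as a starting point rather than a drop-in configuration.

# nginx
ssl_protocols TLSv1.2 TLSv1.3;

# Apache httpd (mod_ssl)
SSLProtocol -all +TLSv1.2 +TLSv1.3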
Enterprise deployments can preview the final removal of TLS 1.0 and 1.1 by setting the SSLVersionMin policy to “tls1.2”. This will prevent clients from connecting over these protocol versions. For enterprise deployments that need more time, this same policy can be used to re-enable TLS 1.0 or TLS 1.1 and disable the warning UIs until January 2021.
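For example, on Linux this policy can be set with a managed policy file; the directory and policy name below are Chrome’s standard locations on that platform, while the file name itself is arbitrary.

/etc/opt/chrome/policies/managed/tls.json:
{
  "SSLVersionMin": "tls1.2"
}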