

[Updated on 12/5/16 with instructions for developers]
Developers: Read more about how to update your sites here.

To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labeled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.

Chrome currently indicates HTTP connections with a neutral indicator. This doesn’t reflect the true lack of security for HTTP connections. When you load a website over HTTP, someone else on the network can look at or modify the site before it gets to you.


A substantial portion of web traffic has transitioned to HTTPS so far, and HTTPS usage is consistently increasing. We recently hit a milestone with more than half of Chrome desktop page loads now served over HTTPS. In addition, since the time we released our HTTPS report in February, 12 more of the top 100 websites have changed their serving default from HTTP to HTTPS.


Studies show that users do not perceive the lack of a “secure” icon as a warning, but also that users become blind to warnings that occur too frequently. Our plan to label HTTP sites more clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria. Starting January 2017, Chrome 56 will label HTTP pages with password or credit card form fields as “not secure,” given their particularly sensitive nature.


In subsequent releases, we will continue to extend HTTP warnings, for example, by labeling HTTP pages as “not secure” in Incognito mode, where users may have higher expectations of privacy. Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.

We will publish updates to this plan as we approach future releases, but don’t wait to get started moving to HTTPS. HTTPS is easier and cheaper than ever before, and enables both the best performance the web offers and powerful new features that are too sensitive for HTTP. Check out our set-up guides to get started.


[Cross-posted from the Android Developers Blog]

Over the course of the summer, we previewed a variety of security enhancements in Android 7.0 Nougat: an increased focus on security with our vulnerability rewards program, a new Direct Boot mode, re-architected mediaserver and hardened media stack, apps that are protected from accidental regressions to cleartext traffic, an update to the way Android handles trusted certificate authorities, strict enforcement of verified boot with error correction, and updates to the Linux kernel to reduce the attack surface and increase memory protection. Phew!

Now that Nougat has begun to roll out, we wanted to recap these updates in a single overview and highlight a few new improvements.
Direct Boot and encryption

In previous versions of Android, users with encrypted devices would have to enter their PIN/pattern/password by default during the boot process to decrypt their storage area and finish booting. With Android 7.0 Nougat, we’ve updated the underlying encryption scheme and streamlined the boot process to speed up rebooting your phone. Now your phone’s main features, like the phone app and your alarm clock, are ready right away before you even type your PIN, so people can call you and your alarm clock can wake you up. We call this feature Direct Boot.

Under the hood, file-based encryption enables this improved user experience. With this new encryption scheme, the system storage area and each user profile storage area are encrypted separately. Unlike full-disk encryption, where all data was encrypted as a single unit, per-profile encryption enables the system to reboot normally into a functional state using just device keys. Essential apps can opt in to run in a limited state after reboot, and when you enter your lock screen credential, these apps then get access to your user data to provide full functionality.
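For developers, the opt-in is a small amount of code. Here is a minimal sketch, assuming an app that wants some state available before the user unlocks: the component is marked android:directBootAware="true" in the manifest and reads from device-protected storage. The class and preference names below are illustrative, not from the post.

import android.content.Context;
import android.content.SharedPreferences;

// Sketch of Direct Boot-aware storage (API 24+). The component must also be
// marked android:directBootAware="true" in its manifest entry; the class and
// preference names here are illustrative.
final class BootAwareStore {
    static SharedPreferences alarmPrefs(Context context) {
        // Device-protected storage is available before the user unlocks;
        // the default, credential-protected storage is not.
        Context deviceContext = context.createDeviceProtectedStorageContext();
        return deviceContext.getSharedPreferences("alarm_state", Context.MODE_PRIVATE);
    }
}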

File-based encryption better isolates and protects individual users and profiles on a device by encrypting data at a finer granularity. Each profile is encrypted using a unique key that can only be unlocked by your PIN or password, so that your data can only be decrypted by you.

Encryption support is getting stronger across the Android ecosystem as well. Starting with Marshmallow, all capable devices were required to support encryption. Many devices, like the Nexus 5X and 6P, also use unique keys that are accessible only with trusted hardware, such as ARM TrustZone. Now with 7.0 Nougat, all new capable Android devices must also have this kind of hardware support for key storage and provide brute-force protection while verifying your lock screen credential before these keys can be used. This way, all of your data can only be decrypted on that exact device and only by you.
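The platform’s own disk-encryption keys aren’t exposed to apps, but app developers can get the same hardware-backed, credential-gated property for their own keys through the Android Keystore. A minimal sketch, assuming API 23+ and an illustrative key alias:

import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: generate an AES key kept in hardware-backed storage where available,
// usable only after the user verifies their lock screen credential.
// The alias "my_data_key" is illustrative.
final class HardwareKeyExample {
    static SecretKey generate() throws Exception {
        KeyGenerator generator =
                KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
        generator.init(new KeyGenParameterSpec.Builder(
                        "my_data_key",
                        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                // Require the lock screen credential before the key can be used.
                .setUserAuthenticationRequired(true)
                .build());
        return generator.generateKey();
    }
}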


The media stack and platform hardening

In Android Nougat, we’ve both hardened and re-architected mediaserver, one of the main system services that processes untrusted input. First, by incorporating integer overflow sanitization, part of Clang’s UndefinedBehaviorSanitizer, we prevent an entire class of vulnerabilities, which comprise the majority of reported libstagefright bugs. As soon as an integer overflow is detected, we shut down the process so an attack is stopped. Second, we’ve modularized the media stack to put different components into individual sandboxes and tightened the privileges of each sandbox to the minimum required to perform its job. With this containment technique, a compromise in many parts of the stack grants the attacker significantly fewer permissions and a significantly reduced kernel attack surface.

In addition to hardening mediaserver, we’ve added a broad set of protections across the platform.
App security improvements

Android Nougat is the safest and easiest version of Android for application developers to use.
  • Apps that want to share data with other apps now must explicitly opt in by offering their files through a Content Provider, like FileProvider (see the sketch after this list). The application private directory (usually /data/data/) is now set to Linux permission 0700 for apps targeting API Level 24+.
  • To make it easier for apps to control access to their secure network traffic, user-installed certificate authorities and those installed through Device Admin APIs are no longer trusted by default for apps targeting API Level 24+. Additionally, all new Android devices must ship with the same trusted CA store.
  • With Network Security Config, developers can more easily configure network security policy through a declarative configuration file. This includes blocking cleartext traffic, configuring the set of trusted CAs and certificates, and setting up a separate debug configuration.
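As a hedged illustration of the first point above, sharing a private file through a content:// URI instead of a raw file path might look like the sketch below. The authority string is an assumption, and the provider must also be declared in the app’s manifest.

import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.support.v4.content.FileProvider;

import java.io.File;

// Sketch: expose a file from the app's private directory via FileProvider
// rather than a file:// path. The authority "com.example.fileprovider" is
// illustrative and must match the <provider> entry in the manifest.
final class ShareExample {
    static Intent shareIntent(Context context, File privateFile) {
        Uri uri = FileProvider.getUriForFile(
                context, "com.example.fileprovider", privateFile);
        return new Intent(Intent.ACTION_SEND)
                .setType("application/octet-stream")
                .putExtra(Intent.EXTRA_STREAM, uri)
                // Grant the receiving app temporary read access to the URI.
                .addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
    }
}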
We’ve also continued to refine app permissions and capabilities to protect you from potentially harmful apps.
  • To improve device privacy, we have further restricted and removed access to persistent device identifiers such as MAC addresses.
  • User interface overlays can no longer be displayed on top of permissions dialogs. This “clickjacking” technique was used by some apps to attempt to gain permissions improperly.
  • We’ve reduced the power of device admin applications: they can no longer change your lockscreen if you have a lockscreen set, and they are no longer notified of an impending disable via onDisableRequested(). These were tactics used by some ransomware to gain control of a device.
System Updates

Lastly, we've made significant enhancements to the OTA update system to keep your device up-to-date much more easily with the latest system software and security patches. We've made the install time for OTAs faster, and the OTA size smaller for security updates. You no longer have to wait for the optimizing apps step, which was one of the slowest parts of the update process, because the new JIT compiler has been optimized to make installs and updates lightning fast.

The update experience is even faster for new Android devices running Nougat with updated firmware. As on Chromebooks, updates are applied in the background while the device continues to run normally. These updates are written to a different system partition, and when you reboot, the device seamlessly switches to that new partition running the new system software version.

We’re constantly working to improve Android security and Android Nougat brings significant security improvements across all fronts. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at security@android.com.



For more than nine years, Safe Browsing has helped webmasters via Search Console with information about how to fix security issues with their sites. This includes relevant Help Center articles, example URLs to assist in diagnosing the presence of harmful content, and a process for webmasters to request reviews of their site after security issues are addressed. Over time, Safe Browsing has expanded its protection to cover additional threats to user safety such as Deceptive Sites and Unwanted Software.

To help webmasters be even more successful in resolving issues, we’re happy to announce that we’ve updated the Security Issues report in Search Console.


The updated information provides more specific explanations of six different security issues detected by Safe Browsing, including malware, deceptive pages, harmful downloads, and uncommon downloads. These explanations give webmasters more context and detail about what Safe Browsing found. We also offer tailored recommendations for each type of issue, including sample URLs that webmasters can check to identify the source of the issue, as well as specific remediation actions webmasters can take to resolve the issue.

We on the Safe Browsing team definitely recommend registering your site in Search Console even if it is not currently experiencing a security issue. We send notifications through Search Console so webmasters can address any issues that appear as quickly as possible.

Our goal is to help webmasters provide a safe and secure browsing experience for their users. We welcome any questions or feedback about the new features on the Google Webmaster Help Forum, where Top Contributors and Google employees are available to help.

For more information about Safe Browsing’s ongoing work to shine light on the state of web security and encourage safer web security practices, check out our summary of trends and findings on the Safe Browsing Transparency Report. If you’re interested in the tools Google provides for webmasters and developers dealing with hacked sites, this video provides a great overview.



In the past, we’ve posted about innovations in fuzzing, a software testing technique used to discover coding errors and security vulnerabilities. The topics have included AddressSanitizer, ClusterFuzz, SyzyASAN, ThreadSanitizer and others.

Today we'd like to talk about libFuzzer (part of the LLVM project), an engine for in-process, coverage-guided, white-box fuzzing:

  • By in-process, we mean that we don’t launch a new process for every test case, and that we mutate inputs directly in memory.
  • By coverage-guided, we mean that we measure code coverage for every input, and accumulate test cases that increase overall coverage.
  • By white-box, we mean that we use compile-time instrumentation of the source code.

LibFuzzer makes it possible to fuzz individual components of Chrome. This means you don’t need to generate an HTML page or network payload and launch the whole browser, which adds overhead and flakiness to testing. Instead, you can fuzz any function or internal API directly. Based on our experience, libFuzzer-based fuzzing is extremely efficient, more reliable, and usually thousands of times faster than traditional out-of-process fuzzing.

Our goal is to have fuzz testing for every component of Chrome where fuzzing is applicable, and we hope all Chromium developers and external security researchers will contribute to this effort.

How to write a fuzz target

With libFuzzer, you need to write only one function, which we call a target function or a fuzz target. It accepts a data buffer and length as input and then feeds it into the code we want to test. And... that’s it!

The fuzz targets are not specific to libFuzzer. Currently, we also run them with AFL, and we expect to use other fuzzing engines in the future.
Sample Fuzzer

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
 std::string buf;
 woff2::WOFF2StringOut out(&buf);
 out.SetMaxSize(30 * 1024 * 1024);
 woff2::ConvertWOFF2ToTTF(data, size, &out);
 return 0;
}

See also the build rule.
Sample Bug

==9896==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62e000022836 at pc 0x000000499c51 bp 0x7fffa0dc1450 sp 0x7fffa0dc0c00
WRITE of size 41994 at 0x62e000022836 thread T0
SCARINESS: 45 (multi-byte-write-heap-buffer-overflow)
   #0 0x499c50 in __asan_memcpy
   #1 0x4e6b50 in Read third_party/woff2/src/buffer.h:86:7
   #2 0x4e6b50 in ReconstructGlyf third_party/woff2/src/woff2_dec.cc:500
   #3 0x4e6b50 in ReconstructFont third_party/woff2/src/woff2_dec.cc:917
   #4 0x4e6b50 in woff2::ConvertWOFF2ToTTF(unsigned char const*, unsigned long, woff2::WOFF2Out*) third_party/woff2/src/woff2_dec.cc:1282
   #5 0x4dbfd6 in LLVMFuzzerTestOneInput testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc:15:3

Check out our documentation for additional information.

Integrating LibFuzzer with ClusterFuzz

ClusterFuzz is Chromium’s infrastructure for large scale fuzzing. It automates crash detection, report deduplication, test minimization, and other tasks. Once you commit a fuzz target into the Chromium codebase (examples), ClusterFuzz will automatically pick it up and fuzz it with libFuzzer and AFL. 

ClusterFuzz supports most of the libFuzzer features like dictionaries, seed corpus and custom options for different fuzzers. Check out our Efficient Fuzzer Guide to learn how to use them.

Besides the initial seed corpus, we store, minimize, and synchronize the corpora for every fuzzer and across all bots. This allows us to continuously increase code coverage over time and find interesting bugs along the way.

ClusterFuzz uses the following memory debugging tools with libFuzzer-based fuzzers:
  • AddressSanitizer (ASan): 500 GCE VMs
  • MemorySanitizer (MSan): 100 GCE VMs
  • UndefinedBehaviorSanitizer (UBSan): 100 GCE VMs

Sample Fuzzer Statistics


It’s important to track and analyze the performance of fuzzers, so we have a dashboard to track fuzzer statistics that is accessible to all Chromium developers.


Overall statistics for the last 30 days:
  • 120 fuzzers
  • 112 bugs filed
  • Aaaaaand…. 14,366,371,459,772 unique test inputs!

Analysis of the bugs found so far

Looking at the 324 bugs found so far, we can say that ASan and MSan have been very effective memory tools for finding security vulnerabilities. They give us comparable numbers of crashes, though ASan crashes are usually more severe than MSan ones. LSan (part of ASan) and UBSan have had a great impact on Stability, another of our four core principles.


Extending Chrome’s Vulnerability Reward Program

Under Chrome's Trusted Researcher Program, we invited submissions of fuzzers, ran them on ClusterFuzz, and automatically nominated the bugs they found for reward payments.

Today we're pleased to announce that the invite-only Trusted Researcher Program is being replaced with the Chrome Fuzzer Program, which encourages fuzzer submissions from everyone and also covers libFuzzer-based fuzzers! Full guidelines are listed on Chrome’s Vulnerability Reward Program page.



As part of Google’s ongoing effort to protect users from unwanted software, we have been zeroing in on the deceptive installation tactics and actors that play a role in unwanted software delivery. This software includes unwanted ad injectors that insert unintended ads into webpages and browser settings hijackers that change search settings without user consent.

Every week, Google Safe Browsing generates over 60 million warnings to help users avoid installing unwanted software, more than 3x the number of warnings we show for malware. Many of these warnings appear when users unwittingly download software bundles laden with several additional applications, a business model known as pay-per-install that earns up to $1.50 for each successful install. Recently, we finished the first in-depth investigation with NYU Tandon School of Engineering into multiple pay-per-install networks and the unwanted software families purchasing installs. The full report, which you can read here, will be presented next week at the USENIX Security Symposium.

Over a year-long period, we found that four of the largest pay-per-install networks routinely distributed unwanted ad injectors, browser settings hijackers, and scareware flagged by over 30 anti-virus engines. These bundles were deceptively promoted through fake software updates, phony content lockers, and spoofed brands, techniques openly discussed on underground forums as ways to trick users into unintentionally downloading software and accepting the installation terms. While not all software bundles lead to unwanted software, critically, it takes only one deceptive party in a chain of web advertisements, pay-per-install networks, and application developers for abuse to manifest.
Behind the scenes of unwanted software distribution
Software bundle installation dialogue. Accepting the express install option will cause eight other programs to be installed with no indication of each program’s functionality.

If you have ever encountered an installation dialog like the one above, then you are already familiar with the pay-per-install distribution model. Behind the scenes there are a few different players:
  • Advertisers: In pay-per-install lingo, advertisers are software developers, including unwanted software developers, paying for installs via bundling. In our example above, these advertisers include Plus-HD and Vuupc, among others. The cost per install ranges anywhere from $0.10 in South America to $1.50 in the United States. Unwanted software developers recoup this cost via ad injection, selling search traffic, or levying subscription fees. During our investigation, we identified 1,211 advertisers paying for installs.
  • Affiliate networks: Affiliate networks serve as middlemen between advertisers looking to buy installs and popular software packages willing to bundle additional applications in return for a fee. These affiliate networks provide the core technology for tracking successful installs and billing. Additionally, they provide tools that attempt to thwart Google Safe Browsing or anti-virus detection. We spotted at least 50 affiliate networks fueling this business.
  • Publishers: Finally, popular software applications re-package their binaries to include several advertiser offers. Publishers are then responsible for getting users to download and install their software through whatever means possible: download portals, organic page traffic, or oftentimes deceptive ads. Our study uncovered 2,518 publishers distributing through 191,372 webpages.
This decentralized model encourages advertisers to focus solely on monetizing users upon installation and publishers to maximize conversions, irrespective of the final user experience. It takes only one bad actor anywhere in the distribution chain for unwanted installs to manifest.


What gets bundled?

We monitored the offers bundled by four of the largest pay-per-install affiliate networks on a daily basis for over a year. In total, we collected 446K offers related to 843 unique software packages. The most commonly bundled software included unwanted ad injectors, browser settings hijackers, and scareware purporting to fix urgent issues with a victim’s machine for $30-40. One example we collected was an ad injector impersonating an anti-virus alert to scare users into paying to fix non-existent system issues.


Deceptive practices

Taken as a whole, we found that 59% of weekly offers bundled by pay-per-install affiliate networks were flagged by at least one anti-virus engine as potentially unwanted. In response, software bundles will first fingerprint a user’s machine prior to installation to detect the presence of “hostile” anti-virus engines. Furthermore, in response to protections provided by Google Safe Browsing, publishers have resorted to increasingly convoluted tactics to try to avoid detection, such as the now-defunct technique of password-protecting compressed binaries.


Paired with deceptive promotional tools like fake video codecs, software updates, and misrepresented brands, a multitude of deceptive behaviors is currently pervasive in software bundling.


Cleaning up the ecosystem

We are constantly improving Google Safe Browsing defenses and the Chrome Cleanup Tool to protect users from unwanted software installs. When it comes to our ads policy, we take quick action to block and remove advertisers who misrepresent downloads or distribute software that violates Google’s unwanted software policy.

Additionally, Google is pushing for real change from businesses involved in the pay-per-install market to address the deceptive practices of some participants. As part of this, Google recently hosted a Clean Software Summit bringing together members of the anti-virus industry, bundling platforms, and the Clean Software Alliance. Together, we laid the groundwork for an industry-wide initiative to provide users with clear choices when installing software and to block deceptive actors pushing unwanted installs. We continue to advocate on behalf of users to ensure they remain safe while downloading software online.



Earlier this year, we launched a new section of our Transparency Report dedicated to HTTPS encryption. This report shows how much traffic is encrypted for Google products and popular sites across the web. Today, we’re adding two Google products to the report: YouTube and Calendar. The traffic for both products is currently more than 90% encrypted via HTTPS.

Case study: YouTube
As we’ve implemented HTTPS across products over the years, we’ve worked through a wide variety of technical obstacles. Below are some of the challenges we faced during YouTube’s two-year road to HTTPS:

  • Lots of traffic! Our CDN, the Google Global Cache, serves a massive amount of video, and migrating it all to HTTPS is no small feat. Luckily, hardware acceleration for AES is widespread, so we were able to encrypt virtually all video serving without adding machines. (Yes, HTTPS is fast now.)
  • Lots of devices! You can watch YouTube videos on everything from flip phones to smart TVs. We A/B tested HTTPS on every device to ensure that users would not be negatively impacted. We found that HTTPS improved quality of experience on most clients: by ensuring content integrity, we virtually eliminated many types of streaming errors.
  • Lots of requests! Mixed content—any insecure request made in a secure context—poses a challenge for any large website or app. We get an alert when an insecure request is made from any of our clients, and we will eventually block all mixed content using Content Security Policy on the web, App Transport Security on iOS, and usesCleartextTraffic on Android (see the example header after this list). Ads on YouTube have used HTTPS since 2014.
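As an illustration of the web-side blocking mentioned in the last point, a single response header is enough to tell browsers to refuse all mixed content on a page; this shows the general CSP mechanism, not YouTube’s exact configuration:

Content-Security-Policy: block-all-mixed-content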

We're also proud to be using HTTP Strict Transport Security (HSTS) on youtube.com to cut down on HTTP to HTTPS redirects. This improves both security and latency for end users. Our HSTS lifetime is one year, and we hope to preload this soon in web browsers.

97% for YouTube is pretty good, but why isn't YouTube at 100%? In short, some devices do not fully support modern HTTPS. Over time, to keep YouTube users as safe as possible, we will gradually phase out insecure connections.


We know that any non-secure HTTP traffic could be vulnerable to attackers. All websites and apps should be protected with HTTPS — if you’re a developer that hasn’t yet migrated, get started today.



For many years, we’ve worked to increase the use of encryption between our users and Google. Today, the vast majority of these connections are encrypted, and our work continues on this effort.

To further protect users, we've taken another step to strengthen how we use encryption for data in transit by implementing HTTP Strict Transport Security—HSTS for short—on the www.google.com domain. HSTS prevents people from accidentally navigating to HTTP URLs by automatically converting insecure HTTP URLs into secure HTTPS URLs. Users might navigate to these HTTP URLs by manually typing a protocol-less or HTTP URL in the address bar, or by following HTTP links from other websites.

Preparing for launch

Ordinarily, implementing HSTS is a relatively basic process. However, due to Google's particular complexities, we needed to do some extra prep work that most other domains wouldn't have needed to do. For example, we had to address mixed content, bad HREFs, redirects to HTTP, and other issues like updating legacy services, any of which could cause problems for users as they try to access our core domain.

This process wasn’t without its pitfalls. Perhaps most memorably, we accidentally broke Google’s Santa Tracker just before Christmas last year (don’t worry — we fixed it before Santa and his reindeer made their trip).

Deployment and next steps

We’ve turned on HSTS for www.google.com, but some work remains on our deployment checklist.

In the immediate term, we’re focused on increasing the duration that the header is active (‘max-age’). We've initially set the header’s max-age to one day; the short duration helps mitigate the risk of any potential problems with this roll-out. By increasing the max-age, however, we reduce the likelihood that an initial request to www.google.com happens over HTTP. Over the next few months, we will ramp up the max-age of the header to at least one year.
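Concretely, HSTS is delivered as a response header. A one-day max-age is 86,400 seconds, so the initial policy looks like the line below, with the value eventually ramping toward a year (31,536,000 seconds):

Strict-Transport-Security: max-age=86400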

Encrypting data in transit helps keep our users and their data secure. We’re excited to be implementing HSTS and will continue to extend it to more domains and Google products in the coming months.



[Cross-posted from the Android Developers Blog]

Android relies heavily on the Linux kernel for enforcement of its security model. To better protect the kernel, we’ve enabled a number of mechanisms within Android. At a high level these protections are grouped into two categories—memory protections and attack surface reduction.
Memory Protections
One of the major security features provided by the kernel is memory protection for userspace processes in the form of address space separation. Unlike userspace processes, the kernel’s various tasks live within one address space and a vulnerability anywhere in the kernel can potentially impact unrelated portions of the system’s memory. Kernel memory protections are designed to maintain the integrity of the kernel in spite of vulnerabilities.
Mark Memory As Read-Only/No-Execute
This feature segments kernel memory into logical sections and sets restrictive page access permissions on each section. Code is marked as read only + execute. Data sections are marked as no-execute and further segmented into read-only and read-write sections. This feature is enabled with config option CONFIG_DEBUG_RODATA. It was put together by Kees Cook and is based on a subset of Grsecurity’s KERNEXEC feature by Brad Spengler and Qualcomm’s CONFIG_STRICT_MEMORY_RWX feature by Larry Bassel and Laura Abbott. CONFIG_DEBUG_RODATA landed in the upstream kernel for arm/arm64 and has been backported to Android’s 3.18+ arm/arm64 common kernel.
Restrict Kernel Access to User Space
This feature improves protection of the kernel by preventing it from directly accessing userspace memory. This can make a number of attacks more difficult because attackers have significantly less control over kernel memory that is executable, particularly with CONFIG_DEBUG_RODATA enabled. Similar features were already in existence, the earliest being Grsecurity’s UDEREF. This feature is enabled with config option CONFIG_CPU_SW_DOMAIN_PAN and was implemented by Russell King for ARMv7 and backported to Android’s 4.1 kernel by Kees Cook.
Improve Protection Against Stack Buffer Overflows
Much like its predecessor, stack-protector, stack-protector-strong protects against stack buffer overflows, but additionally provides coverage for more array types, as the original only protected character arrays. Stack-protector-strong was implemented by Han Shen and added to the gcc 4.9 compiler.

Attack Surface Reduction
Attack surface reduction attempts to expose fewer entry points to the kernel without breaking legitimate functionality. Reducing attack surface can include removing code, removing access to entry points, or selectively exposing features.
Remove Default Access to Debug Features
The kernel’s perf system provides infrastructure for performance measurement and can be used for analyzing both the kernel and userspace applications. Perf is a valuable tool for developers, but adds unnecessary attack surface for the vast majority of Android users. In Android Nougat, access to perf will be blocked by default. Developers may still access perf by enabling developer settings and using adb to set a property: “adb shell setprop security.perf_harden 0”.
The patchset for blocking access to perf may be broken down into kernel and userspace sections. The kernel patch is by Ben Hutchings and is derived from Grsecurity’s CONFIG_GRKERNSEC_PERF_HARDEN by Brad Spengler. The userspace changes were contributed by Daniel Micay. Thanks to Wish Wu and others for responsibly disclosing security vulnerabilities in perf.
Restrict App Access to IOCTL Commands
Much of the Android security model is described and enforced by SELinux. The ioctl() syscall represented a major gap in the granularity of that enforcement, so ioctl command whitelisting was added to SELinux as a means of providing per-command control over the ioctl syscall.
Most of the kernel vulnerabilities reported on Android occur in drivers and are reached using the ioctl syscall, for example CVE-2016-0820. Some ioctl commands are needed by third-party applications, but most are not, and access can be restricted without breaking legitimate functionality. In Android Nougat, only a small whitelist of socket ioctl commands is available to applications. For select devices, applications’ access to GPU ioctls has been similarly restricted.
Require SECCOMP-BPF
Seccomp provides an additional sandboxing mechanism allowing a process to restrict the syscalls and syscall arguments available using a configurable filter. Restricting the availability of syscalls can dramatically cut down on the exposed attack surface of the kernel. Since seccomp was first introduced on Nexus devices in Lollipop, its availability across the Android ecosystem has steadily improved. With Android Nougat, seccomp support is a requirement for all devices. On Android Nougat we are using seccomp on the mediaextractor and mediacodec processes as part of the media hardening effort.

Ongoing Efforts
There are other projects underway aimed at protecting the kernel:

  • The Kernel Self Protection Project is developing runtime and compiler defenses for the upstream kernel.
  • Further sandbox tightening and attack surface reduction with SELinux is ongoing in AOSP.
  • Minijail provides a convenient mechanism for applying many containment and sandboxing features offered by the kernel, including seccomp filters and namespaces.
  • Projects like kasan and kcov help fuzzers discover the root cause of crashes and intelligently construct test cases that increase code coverage—ultimately resulting in a more efficient bug hunting process.
Due to these efforts and others, we expect the security of the kernel to continue improving. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at security@android.com.


[Cross-posted from the Android Developers Blog]
In Android Nougat, we’ve changed how Android handles trusted certificate authorities (CAs) to provide safer defaults for secure app traffic. Most apps and users should not be affected by these changes or need to take any action. The changes include:
  • Safe and easy APIs to trust custom CAs.
  • Apps that target API Level 24 and above no longer trust user or admin-added CAs for secure connections, by default.
  • All devices running Android Nougat offer the same standardized set of system CAs—no device-specific customizations.
For more details on these changes and what to do if you’re affected by them, read on.

Safe and easy APIs

Apps have always been able to customize which certificate authorities they trust. However, we saw apps making mistakes due to the complexities of the Java TLS APIs. To address this, we improved the APIs for customizing trust.

User-added CAs

Protection of all application data is a key goal of the Android application sandbox. Android Nougat changes how applications interact with user- and admin-supplied CAs. By default, apps that target API level 24 will—by design—not honor such CAs unless the app explicitly opts in. This safe-by-default setting reduces application attack surface and encourages consistent handling of network and file-based application data.

Customizing trusted CAs

Customizing the CAs your app trusts on Android Nougat is easy using the Network Security Config. Trust can be specified across the whole app or only for connections to certain domains, as needed. Below are some examples for trusting a custom or user-added CA, in addition to the system CAs. For more examples and details, see the full documentation.

Trusting custom CAs for debugging

To allow your app to trust custom CAs only for local debugging, include something like this in your Network Security Config. The CAs will only be trusted while your app is marked as debuggable.
<network-security-config>
      <debug-overrides>
           <trust-anchors>
                <!-- Trust user added CAs while debuggable only -->
                <certificates src="user" />
           </trust-anchors>
      </debug-overrides>
 </network-security-config>

Trusting custom CAs for a domain

To allow your app to trust custom CAs for a specific domain, include something like this in your Network Security Config.
<network-security-config>  
      <domain-config>  
           <domain includeSubdomains="true">internal.example.com</domain>  
           <trust-anchors>  
                <!-- Only trust the CAs included with the app  
                     for connections to internal.example.com -->  
                <certificates src="@raw/cas" />  
           </trust-anchors>  
      </domain-config>  
 </network-security-config>

Trusting user-added CAs for some domains

To allow your app to trust user-added CAs for multiple domains, include something like this in your Network Security Config.
<network-security-config>  
      <domain-config>  
           <domain includeSubdomains="true">userCaDomain.com</domain>  
           <domain includeSubdomains="true">otherUserCaDomain.com</domain>  
           <trust-anchors>  
                  <!-- Trust preinstalled CAs -->  
                  <certificates src="system" />  
                  <!-- Additionally trust user added CAs -->  
                  <certificates src="user" />  
           </trust-anchors>  
      </domain-config>  
 </network-security-config>

Trusting user-added CAs for all domains except some

To allow your app to trust user-added CAs for all domains, except for those specified, include something like this in your Network Security Config.
<network-security-config>  
      <base-config>  
           <trust-anchors>  
                <!-- Trust preinstalled CAs -->  
                <certificates src="system" />  
                <!-- Additionally trust user added CAs -->  
                <certificates src="user" />  
           </trust-anchors>  
      </base-config>  
      <domain-config>  
           <domain includeSubdomains="true">sensitive.example.com</domain>  
           <trust-anchors>
                <!-- Only allow sensitive content to be exchanged
                     with the real server and not any user or
                     admin configured MiTMs -->
                <certificates src="system" />
           </trust-anchors>
      </domain-config>
 </network-security-config>

Trusting user-added CAs for all secure connections

To allow your app to trust user-added CAs for all secure connections, add this in your Network Security Config.
<network-security-config>  
      <base-config>  
            <trust-anchors>  
                <!-- Trust preinstalled CAs -->  
                <certificates src="system" />  
                <!-- Additionally trust user added CAs -->  
                <certificates src="user" />  
           </trust-anchors>  
      </base-config>  
 </network-security-config>

Standardized set of system-trusted CAs

To provide a more consistent and more secure experience across the Android ecosystem, beginning with Android Nougat, compatible devices trust only the standardized system CAs maintained in AOSP.

Previously, the set of preinstalled CAs bundled with the system could vary from device to device. This could lead to compatibility issues when some devices did not include CAs that apps needed for connections, as well as potential security issues if CAs that did not meet our security requirements were included on some devices.

What if I have a CA I believe should be included on Android?

First, be sure that your CA needs to be included in the system. The preinstalled CAs are only for CAs that meet our security requirements, because they affect the secure connections of most apps on the device. If you only need a CA for connecting to particular hosts, you should instead customize the apps and services that connect to those hosts, as described in the Customizing trusted CAs section above.

If you operate a CA that you believe should be included in Android, first complete the Mozilla CA Inclusion Process and then file a feature request against Android to have the CA added to the standardized set of system CAs.


Quantum computers are a fundamentally different sort of computer, one that takes advantage of aspects of quantum physics to solve certain sorts of problems dramatically faster than conventional computers can. While they will, no doubt, be of huge benefit in some areas of study, some of the problems they are effective at solving are exactly those we rely on to secure digital communications. Specifically, if large quantum computers can be built, then they may be able to break the asymmetric cryptographic primitives that are currently used in TLS, the security protocol behind HTTPS.

Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits. It's not even certain that large machines will ever be built, although Google, IBM, Microsoft, Intel and others are working on it. (Adiabatic quantum computers, like the D-Wave computer that Google operates with NASA, can have large numbers of quantum bits, but currently solve fundamentally different problems.)

However, a hypothetical, future quantum computer would be able to retrospectively decrypt any internet communication that was recorded today, and many types of information need to remain confidential for decades. Thus even the possibility of a future quantum computer is something that we should be thinking about today.
Experimenting with post-quantum cryptography in Chrome
The study of cryptographic primitives that remain secure even against quantum computers is called “post-quantum cryptography”. Today we're announcing an experiment in Chrome where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm on top of the existing one, we are able to experiment without affecting user security. The post-quantum algorithm might turn out to be breakable even with today's computers, in which case the elliptic-curve algorithm will still provide the best security that today’s technology can offer. Alternatively, if the post-quantum algorithm turns out to be secure then it'll protect the connection even against a future, quantum computer.
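To make the “on top of” construction concrete, here is a hedged sketch (not Chrome’s actual CECPQ1 implementation) of how two shared secrets can be combined so that the resulting key is recoverable only by an attacker who breaks both key exchanges:

import java.security.MessageDigest;

// Conceptual sketch: derive a session key from both a classical ECDH shared
// secret and a post-quantum (New Hope) shared secret. If either primitive
// remains unbroken, the combined key stays secret.
final class HybridKeyExample {
    static byte[] combine(byte[] ecdhSecret, byte[] newHopeSecret) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(ecdhSecret);    // classical contribution
        digest.update(newHopeSecret); // post-quantum contribution
        return digest.digest();
    }
}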

Our aims with this experiment are to highlight an area of research that Google believes to be important and to gain real-world experience with the larger data structures that post-quantum algorithms will likely require.

We're indebted to Erdem Alkim, Léo Ducas, Thomas Pöppelmann and Peter Schwabe, the researchers who developed “New Hope”, the post-quantum algorithm that we selected for this experiment. Their scheme looked to be the most promising post-quantum key-exchange when we investigated in December 2015. Their work builds upon earlier work by Bos, Costello, Naehrig and Stebila, and also on work by Lyubashevsky, Peikert and Regev.

We explicitly do not wish to make our selected post-quantum algorithm a de-facto standard. To this end we plan to discontinue this experiment within two years, hopefully by replacing it with something better. Since we selected New Hope, we've noted two promising papers in this space, which are welcome. Additionally, Google researchers, in collaboration with researchers from NXP, Microsoft, Centrum Wiskunde & Informatica and McMaster University, have just published another paper in this area. Practical research papers, such as these, are critical if cryptography is to have real-world impact.

This experiment is currently enabled in Chrome Canary and you can tell whether it's being used by opening the recently introduced Security Panel and looking for “CECPQ1”, for example on https://play.google.com/store. Not all Google domains will have it enabled and the experiment may appear and disappear a few times if any issues are found.

While it's still very early days for quantum computers, we're excited to begin preparing for them, and to help ensure our users' data will remain secure long into the future.