
Posted by Megan Ruthven, Software Engineer

[Cross-posted from the Android Developers Blog]

In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security-related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with our other security systems, to help determine whether an app is a PHA and thereby protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system.

This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working and prevent it from happening in the future.
Flagging DOI Apps

To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security checkups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of devices that downloaded the app in one day and were retained. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.

Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. For an app's retention rate, the Z-score is:

  Z = (x − N·p) / √(N·p·(1 − p))

  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability that a device downloading any app will be retained.

In this context, we call the Z-score of an app's retention rate its DOI score. The DOI score indicates that an app has a statistically significantly lower retention rate when the Z-score is much less than -3.7: if the null hypothesis were true, there would be far less than a 0.01% chance of the Z-score's magnitude being that large. Here, the null hypothesis is that the app's correlation with a lower retention rate is accidental and independent of what the app does.
This lets extreme apps (those with a low retention rate and a high number of downloads) percolate to the top of the DOI list. From there, we combine the DOI score with other information to decide whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs.
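Under the null hypothesis, the number of retained devices follows a binomial distribution, so the DOI score is a standard one-proportion Z-score. A minimal sketch with made-up numbers (the figures below are illustrative, not real data):

```python
import math

def doi_score(n_downloads: int, n_retained: int, p_retained: float) -> float:
    """Z-score of an app's retention rate under a binomial null hypothesis:
    Z = (x - N*p) / sqrt(N * p * (1 - p))."""
    expected = n_downloads * p_retained
    std_dev = math.sqrt(n_downloads * p_retained * (1 - p_retained))
    return (n_retained - expected) / std_dev

# Hypothetical app: 10,000 downloads, 9,400 devices retained, against an
# assumed baseline retention probability of 96%.
score = doi_score(10_000, 9_400, 0.96)
print(f"DOI score: {score:.1f}")  # well below the -3.7 flagging threshold
```

With these numbers the score is about -10, so the app would percolate to the top of the DOI list for further review.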

Difference between a regular and DOI app download on the same device.


Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three malware families because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.



Encryption is a foundational technology for the web. We’ve spent a lot of time working through the intricacies of making encrypted apps easy to use and in the process, realized that a generic, secure way to discover a recipient's public keys for addressing messages correctly is important. Not only would such a thing be beneficial across many applications, but nothing like this exists as a generic technology.

A solution would need to reliably scale to internet size while providing a way to establish secure communications through untrusted servers. It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more.

The result is Key Transparency, which we’re making available as an open-source prototype today.

Why Key Transparency is useful

Existing methods of protecting users against server compromise require users to manually verify recipients’ accounts in-person. This simply hasn’t worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won’t use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge.

One of our goals with Key Transparency was to simplify this process and build infrastructure that makes it usable by non-experts. The relationship between online personas and public keys should be automatically verifiable and publicly auditable. Users should be able to see all the keys that have been attached to an account, and any attempt to tamper with the record should be publicly visible. This also ensures that senders will always use the same keys that account owners are verifying.

Key Transparency is a general-use, transparent directory that makes it easy for developers to create systems of all kinds with independently auditable account data. It can be used in a variety of scenarios where data needs to be encrypted or authenticated. It can be used to make security features that are easy for people to understand while supporting important user needs like account recovery.
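The "independently auditable" property rests on Merkle-tree proofs, as in Certificate Transparency and CONIKS: a directory commits to all its entries with a single root hash, and anyone can check that a given entry is included. A minimal sketch of the idea, assuming CT-style domain-separated hashing; the entry format and function names here are illustrative, not Key Transparency's actual API:

```python
import hashlib

def h_leaf(data: bytes) -> bytes:
    # Domain-separated leaf hash (0x00 prefix), as in Certificate Transparency.
    return hashlib.sha256(b"\x00" + data).digest()

def h_node(left: bytes, right: bytes) -> bytes:
    # Domain-separated interior-node hash (0x01 prefix).
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves):
    """Root of a complete binary Merkle tree (leaf count a power of two)."""
    level = [h_leaf(l) for l in leaves]
    while len(level) > 1:
        level = [h_node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes from the leaf up to the root, with direction flags."""
    level = [h_leaf(l) for l in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1                          # sibling position at this level
        proof.append((level[sib], sib < index))  # True if sibling is on the left
        level = [h_node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    """Recompute the root from a leaf and its proof; compare to the commitment."""
    acc = h_leaf(leaf)
    for sibling, sibling_is_left in proof:
        acc = h_node(sibling, acc) if sibling_is_left else h_node(acc, sibling)
    return acc == root

# Hypothetical account->key entries in a tiny directory.
entries = [b"alice:pk1", b"bob:pk2", b"carol:pk3", b"dave:pk4"]
root = merkle_root(entries)
proof = inclusion_proof(entries, 1)              # prove bob's entry is present
print(verify_inclusion(b"bob:pk2", proof, root))   # True
print(verify_inclusion(b"bob:EVIL", proof, root))  # False: tampering is visible
```

Any substituted key fails verification against the published root, which is what makes tampering publicly visible.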

Looking ahead
It’s still very early days for Key Transparency. With this first open source release, we’re continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone.

We’d also like to thank our many collaborators during Key Transparency’s multi-year development, including the CONIKS team, Open Whisper Systems, and the security engineering teams at Yahoo! and Google.

Our goal is to evolve Key Transparency into an open-source, generic, scalable, and interoperable directory of public keys with an ecosystem of mutually auditing directories. We welcome your apps, input, and contributions to this new technology at KeyTransparency.org.


Last year we helped launch USENIX Enigma, a conference focused on bringing together security and privacy experts from academia, industry, and public service. After a successful first run, we’re supporting Enigma again this year and looking forward to more great talks and an ever-engaging hallway track!

Our speakers this year include practitioners from many tech industry leaders, researchers and professors at universities from around the world, and civil servants working at agencies in the U.S. and abroad. In addition to sessions focused specifically on software security, spam and abuse, and usability, the program will cover interdisciplinary topics, like the intersection of security and artificial intelligence, neuroscience, and society.

I’m also very proud to have some of my Google colleagues speaking at Enigma:

  • Sunny Consolvo will present results from a qualitative study of the privacy and security practices and challenges of survivors of intimate partner abuse. Sunny will also share how technology creators can better support the victims and survivors of such abuse.
  • Damian Menscher has been battling botnets for a decade at Google and has witnessed the evolution of Distributed Denial-of-Service (DDoS) scaling and attack ingenuity. Damian will describe his operation of a DDoS honeypot, and share specific things he learned while protecting KrebsOnSecurity.com.
  • Emily Schechter will be talking about driving HTTPS adoption on the web. She’ll go over some of the unexpected speed bumps major web sites have encountered, share how Chrome approaches feature changes that encourage HTTPS usage, and discuss what’s next to get to a default encrypted web.
As we did last year, my program co-chair, David Brumley, and I are developing a program full of short, thought-provoking presentations followed by lively discussion. Our program committee has worked closely with each speaker to help them craft the best version of their talk. Everyone is excited to share them, and there’s still time to register! I hope to see many of you in Oakland at USENIX Enigma later this month.

PS: Warning! Self-serving Google notice ahead… We’re hiring! We believe that most security researchers do what they do because they love what they do. What we offer is a place to do what you love—but in the open, on real-world problems, and without distraction. Please reach out to us if you’re interested.


We’re excited to announce the release of Project Wycheproof, a set of security tests that check cryptographic software libraries for known weaknesses. We’ve developed over 80 test cases, which have uncovered more than 40 security bugs (some tests or bugs are not open sourced today, as they are being fixed by vendors). For example, we found that we could recover the private key of widely-used DSA and ECDH implementations. We also provide ready-to-use tools to check Java Cryptography Architecture providers such as Bouncy Castle and the default providers in OpenJDK.

The main motivation for the project is to have an achievable goal. That’s why we’ve named it after Mount Wycheproof, the smallest mountain in the world. The smaller the mountain, the easier it is to climb!

In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long. Good implementation guidelines, however, are hard to come by: understanding how to implement cryptography securely requires digesting decades' worth of academic literature. We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means.

These observations have prompted us to develop Project Wycheproof, a collection of unit tests that detect known weaknesses or check for expected behaviors of some cryptographic algorithm. Our cryptographers have surveyed the literature and implemented most known attacks. As a result, Project Wycheproof provides tests for most cryptographic algorithms, including RSA, elliptic curve crypto, and authenticated encryption.

Our first set of tests is written in Java, because Java has a common cryptographic interface. This allowed us to test multiple providers with a single test suite. While this interface is somewhat low-level and should not be used directly, we still apply a "defense in depth" argument and expect the implementations to be as robust as possible. For example, we consider weak default values to be a significant security flaw. We are converting as many tests as possible into sets of test vectors to simplify porting them to other languages.
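Wycheproof's actual tests target the JCA in Java, but the test-vector style ports to any language. A minimal sketch of such a harness in Python, using HMAC-SHA-256 Test Case 1 from RFC 4231 (a published vector, quoted here from the RFC rather than from Project Wycheproof):

```python
import hashlib
import hmac

# Test Case 1 from RFC 4231: key = 0x0b repeated 20 times, data = "Hi There".
VECTORS = [
    {
        "key": bytes.fromhex("0b" * 20),
        "msg": b"Hi There",
        "tag": bytes.fromhex(
            "b0344c61d8db38535ca8afceaf0bf12b"
            "881dc200c9833da726e9376c2e32cff7"
        ),
    },
]

def run_vectors(vectors):
    """Run each vector against the implementation; return indices that fail."""
    failures = []
    for i, v in enumerate(vectors):
        got = hmac.new(v["key"], v["msg"], hashlib.sha256).digest()
        # Constant-time comparison, as any real implementation should use.
        if not hmac.compare_digest(got, v["tag"]):
            failures.append(i)
    return failures

print("failing vectors:", run_vectors(VECTORS))  # prints: failing vectors: []
```

A real suite would load hundreds of vectors, including deliberately malformed ones that a correct implementation must reject.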

We are committed to developing as many tests as possible, and external contributions are welcome (if you want to contribute, please read CONTRIBUTING before sending us pull requests), but Project Wycheproof is by no means complete. Passing the tests does not imply that a library is secure; it just means that it is not vulnerable to the attacks that Project Wycheproof tries to detect. Cryptographers constantly discover new weaknesses in cryptographic protocols. Nevertheless, with Project Wycheproof, developers and users can now check their libraries against a large number of known attacks without having to sift through hundreds of academic papers or become cryptographers themselves.

For more information about the tests and what you can do with them, please visit our homepage on GitHub.


[Cross-posted from the Google Testing Blog and the Google Open Source Blog]

We are happy to announce OSS-Fuzz, a new Beta program developed over the past few years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software.

Open source software is the backbone of the many apps, sites, services, and networked things that make up “the internet.” It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it.

Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.

In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and sometimes even logical bugs.
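Engines like AFL and libFuzzer are coverage-guided C/C++ tools, but the core idea of fuzzing can be sketched in a few lines: generate random inputs, feed them to the target, and treat anything other than a documented failure as a bug. The target below is a hypothetical record parser with a deliberately planted off-by-one bug; none of this is OSS-Fuzz code:

```python
import random

def parse_record(data: bytes):
    """Hypothetical target: parses b'<len><payload><checksum>' records.
    It contains a deliberate off-by-one bug for the fuzzer to find."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    if length > len(data) - 1:        # BUG: bound should be len(data) - 2
        raise ValueError("truncated payload")
    payload = data[1:1 + length]
    checksum = data[1 + length]       # IndexError when length == len(data) - 1
    return payload, checksum

def fuzz(target, iterations=10_000, max_len=64, seed=0):
    """Throw random byte strings at `target`; collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass                      # expected, documented failure mode
        except Exception as exc:      # anything else is a bug worth reporting
            crashes.append((data, exc))
    return crashes

found = fuzz(parse_record)
print(f"unexpected crashes: {len(found)}")
```

Real fuzzers improve on this blind loop with coverage feedback, input mutation, and sanitizer instrumentation so that memory errors surface as crashes rather than silent corruption.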

OSS-Fuzz’s goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz.
Early successes

Our initial trials with OSS-Fuzz have had good results. An example is the FreeType library, which is used on over a billion devices to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. Werner Lemberg, one of the FreeType developers, was an early adopter of OSS-Fuzz. Recently the FreeType fuzzer found a new heap buffer overflow only a few hours after the source change:
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa 
READ of size 2 at 0x615000000ffa thread T0
SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)
   #0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31
OSS-Fuzz automatically notified the maintainer, who fixed the bug; then OSS-Fuzz automatically confirmed the fix. All in one day! You can see the full list of fixed and disclosed bugs found by OSS-Fuzz so far.

Contributions and feedback are welcome

OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns through ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.

OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it needs to have a large user base and/or be critical to global IT infrastructure, a general heuristic that we are intentionally leaving open to interpretation at this early stage. See more details and instructions on how to apply here.

Once a project is signed up for OSS-Fuzz, it is automatically subject to the 90-day disclosure deadline for newly reported bugs in our tracker (see details here). This matches industry’s best practices and improves end-user security and stability by getting patches to users faster.

Help us ensure this program truly serves the open source community and the internet, which relies on this critical software: contribute and leave your feedback on GitHub.


[Cross-posted from the Android Developers Blog]

Encryption protects your data if your phone falls into someone else's hands. The new Google Pixel and Pixel XL are encrypted by default to offer strong data protection, while maintaining a great user experience with high I/O performance and long battery life. In addition to encryption, the Pixel phones debuted running the Android Nougat release, which has even more security improvements.

This blog post covers the encryption implementation on Google Pixel devices and how it improves the user experience, performance, and security of the device.
File-Based Encryption and the Direct Boot experience
One of the security features introduced in Android Nougat was file-based encryption. File-based encryption (FBE) means different files are encrypted with different keys that can be unlocked independently. FBE also separates data into device encrypted (DE) data and credential encrypted (CE) data.

Direct boot uses file-based encryption to allow a seamless user experience when a device reboots by combining the unlock and decrypt screen. For users, this means that applications like alarm clocks, accessibility settings, and phone calls are available immediately after boot.

Enhanced with TrustZone® security

Modern processors provide a means to execute code in a mode that remains secure even if the kernel is compromised. On ARM®-based processors this mode is known as TrustZone. Starting in Android Nougat, all disk encryption keys are stored encrypted with keys held by TrustZone software.

This secures encrypted data in two ways:

  • TrustZone enforces the Verified Boot process. If TrustZone detects that the operating system has been modified, it won't decrypt disk encryption keys; this helps to secure device encrypted (DE) data.
  • TrustZone enforces a waiting period between guesses at the user credential, which gets longer after a sequence of wrong guesses. With 1624 valid four-point patterns and TrustZone's ever-growing waiting period, trying all patterns would take more than four years. This improves security for all users, especially those who have a shorter and more easily guessed pattern, PIN, or password.
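The "more than four years" figure follows from even a modest escalating-delay schedule. The actual TrustZone timings are not public, so the numbers below are assumptions chosen only to show the shape of the defense:

```python
# Illustrative only: the real TrustZone throttling schedule is not public.
def waiting_period(failures: int) -> float:
    """Hypothetical delay (in seconds) before the next guess is allowed:
    none for the first 5 attempts, then 30 s doubling every 5 further
    failures, capped at one day."""
    if failures < 5:
        return 0.0
    return min(30.0 * 2 ** ((failures - 5) // 5), 86_400.0)

# Time to sweep every valid four-point pattern under this schedule.
total_seconds = sum(waiting_period(i) for i in range(1624))
print(f"exhausting all 1624 patterns: ~{total_seconds / 86_400 / 365:.1f} years")
```

With this assumed schedule the full sweep takes roughly 4.3 years; any schedule that grows toward day-long delays yields the same order of magnitude, which is why even short patterns gain meaningful protection.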

Encryption on Pixel phones

Protecting different folders with different keys required a distinct approach from full-disk encryption (FDE). The natural choice for Linux-based systems is the industry-standard eCryptfs. However, eCryptfs didn't meet our performance requirements. Fortunately, one of the eCryptfs creators, Michael Halcrow, worked with the ext4 maintainer, Ted Ts'o, to add encryption natively to ext4, and Android became the first consumer of this technology. ext4 encryption performance is similar to full-disk encryption, which is as performant as a software-only solution can be.


Additionally, Pixel phones have an inline hardware encryption engine, which gives them the ability to write encrypted data at line speed to the flash memory. To take advantage of this, we modified ext4 encryption to use this hardware by adding a key reference to the bio structure, within the ext4 driver before passing it to the block layer. (The bio structure is the basic container for block I/O in the Linux kernel.) We then modified the inline encryption block driver to pass this to the hardware. As with ext4 encryption, keys are managed by the Linux keyring. To see our implementation, take a look at the source code for the Pixel kernel.


While this specific implementation of file-based encryption using ext4 with inline encryption benefits Pixel users, FBE is available in AOSP and ready to use, along with the other features mentioned in this post.


We’ve previously made several announcements about Google Chrome's deprecation plans for SHA-1 certificates. This post provides an update on the final removal of support.

The SHA-1 cryptographic hash algorithm first showed signs of weakness over eleven years ago and recent research points to the imminent possibility of attacks that could directly impact the integrity of the Web PKI. To protect users from such attacks, Chrome will stop trusting certificates that use the SHA-1 algorithm, and visiting a site using such a certificate will result in an interstitial warning.
Release schedule
We are planning to remove support for SHA-1 certificates in Chrome 56, which will be released to the stable channel around the end of January 2017. The removal will follow the Chrome release process, moving from Dev to Beta to Stable; there won't be a date-based change in behaviour.

Website operators are urged to check for the use of SHA-1 certificates and immediately contact their CA for a SHA-256 based replacement if any are found.
SHA-1 use in private PKIs
Previous posts made a distinction between certificates which chain to a public CA and those which chain to a locally installed trust anchor, such as those of a private PKI within an enterprise. We recognise there might be rare cases where an enterprise wishes to make their own risk management decision to continue using SHA-1 certificates.

Starting with Chrome 54, we provide the EnableSha1ForLocalAnchors policy that allows certificates which chain to a locally installed trust anchor to be used after support has otherwise been removed from Chrome. Features which require a secure origin, such as geolocation, will continue to work; however, pages will be displayed as “neutral, lacking security”. Without this policy set, SHA-1 certificates that chain to locally installed roots will not be trusted starting with Chrome 57, which will be released to the stable channel in March 2017. Note that even without the policy set, SHA-1 client certificates will still be presented to websites requesting client authentication.

Since this policy is intended only to allow additional time to complete the migration away from SHA-1, it will eventually be removed in the first Chrome release after January 1st 2019.

As Chrome makes use of certificate validation libraries provided by the host OS when possible, this option will have no effect if the underlying cryptographic library disables support for SHA-1 certificates; at that point, they will be unconditionally blocked. We may also remove support before 2019 if there is a serious cryptographic break of SHA-1. Enterprises are encouraged to make every effort to stop using SHA-1 certificates as soon as possible and to consult with their security team before enabling the policy.


Since launching in 2007, the Safe Browsing team has been dedicated to our mission of protecting users from phishing, malware, and unwanted software on the web. Our coverage currently extends to more than two billion internet-connected devices, including Chrome users on Android. As part of our commitment to keep our users both protected and informed, we’ve recently launched several improvements to the way we share information.

Today, we’re happy to announce a new site for Safe Browsing that makes it easier for users to quickly report malicious sites, access our developer documentation, and find our policies. Our new site also serves as a central hub for our tools, including the Transparency Report, Search Console, and Safe Browsing Alerts for Network Administrators.

The new Safe Browsing website will be a platform for consolidated policy and help content. We’re excited to make this new, single source of information available to users, developers, and webmasters.



Since 2005, Safe Browsing has been protecting users from harm on the Internet, and has evolved over the years to adapt to the changing nature of threats and user harm.

Today, sites in violation of Google’s Malware, Unwanted Software, Phishing, and Social Engineering Policies show warnings until Google verifies that the site is no longer harmful. The verification can be triggered automatically, or at the request of the webmaster via the Search Console.

However, over time, we’ve observed that a small number of websites will cease harming users for long enough to have the warnings removed, and will then revert to harmful activity.

As a result of this gap in user protection, we have adjusted our policies to reduce risks borne by end-users. Starting today, Safe Browsing will begin to classify these types of sites as “Repeat Offenders.” With regard to Safe Browsing-related policies, Repeat Offenders are websites that repeatedly switch between compliant and policy-violating behavior for the purpose of having a successful review and having warnings removed. Please note that websites that are hacked will not be classified as Repeat Offenders; only sites that purposefully post harmful content will be subject to the policy.

Once Safe Browsing has determined that a site is a Repeat Offender, the webmaster will be unable to request additional reviews via the Search Console for 30 days, and warnings will continue to show to users. When a site is established as a Repeat Offender, the webmaster will be notified via email to their registered Search Console email address.

We continuously update our policies and practices to address evolving threats. This is yet another change to help protect users from harm online.


Security has always been critical to the web, but challenges involved in site migration have inhibited HTTPS adoption for several years. In the interest of a safer web for all, at Google we’ve worked alongside many others across the online ecosystem to better understand and address these challenges, resulting in real change. A web with ubiquitous HTTPS is not the distant future. It’s happening now, with secure browsing becoming standard for users of Chrome.

Today, we’re adding a new section to the HTTPS Report Card in our Transparency Report that includes data about how HTTPS usage has been increasing over time. More than half of pages loaded and two-thirds of total time spent by Chrome desktop users occur via HTTPS, and we expect these metrics to continue their strong upward trajectory.
Percentage of pages loaded over HTTPS in Chrome



As the remainder of the web transitions to HTTPS, we’ll continue working to ensure that migrating to HTTPS is a no-brainer, providing business benefit beyond increased security. HTTPS currently enables the best performance the web offers and powerful features that benefit site conversions, including both new features such as service workers for offline support and web push notifications, and existing features such as credit card autofill and the HTML5 geolocation API that are too powerful to be used over non-secure HTTP. As with all major site migrations, there are certain steps webmasters should take to ensure that search ranking transitions are smooth when moving to HTTPS. To help with this, we’ve posted two FAQs to help sites transition correctly, and will continue to improve our web fundamentals guidance.


We’ve seen many sites successfully transition with negligible effect on their search ranking and traffic. Brian Wood, Director of Marketing SEO at Wayfair, a large retail site, commented: “We were able to migrate Wayfair.com to HTTPS with no meaningful impact to Google rankings or Google organic search traffic. We are very pleased to say that all Wayfair sites are now fully HTTPS.” CNET, a large tech news site, had a similar experience: “We successfully completed our move of CNET.com to HTTPS last month,” said John Sherwood, Vice President of Engineering & Technology at CNET. “Since then, there has been no change in our Google rankings or Google organic search traffic.”


Webmasters that include ads on their sites also should carefully monitor ad performance and revenue during large site migrations. The portion of Google ad traffic served over HTTPS has increased dramatically over the past 3 years. All ads that come from any Google source always support HTTPS, including AdWords, AdSense, or DoubleClick Ad Exchange; ads sold directly, such as those through DoubleClick for Publishers, still need to be designed to be HTTPS-friendly. This means there will be no change to the Google-sourced ads that appear on a site after migrating to HTTPS. Many publishing partners have seen this in practice after a successful HTTPS transition. Jason Tollestrup, Director of Programmatic Advertising for the Washington Post, “saw no material impact to AdX revenue with the transition to SSL.”


As migrating to HTTPS becomes even easier, we’ll continue working towards a web that’s secure by default. Don’t hesitate to start planning your HTTPS migration today!