


We created our Vulnerability Rewards Program in 2010 because researchers should be rewarded for protecting our users. Their discoveries help keep our users, and the internet at large, as safe as possible.

The amounts we award vary, but our message to researchers does not: each reward is a sincere ‘thank you’.

As we have for 2014 and 2015, we’re again sharing a yearly wrap-up of the Vulnerability Rewards Program.


What was new?

In short — a lot. Here’s a quick rundown:

Previously by-invitation only, we opened up Chrome's Fuzzer Program to submissions from the public. The program allows researchers to run fuzzers at large scale, across thousands of cores on Google hardware, and receive reward payments automatically.


On the product side, we saw amazing contributions from Android researchers all over the world, less than a year after Android launched its VRP. We also expanded our overall VRP to include more products, including OnHub and Nest devices.


We increased our presence at events around the world, like Pwn2Own and PwnFest. The vulnerabilities responsibly disclosed at these events enabled us to quickly provide fixes to the ecosystem and keep customers safe. At both events, we were able to fix a vulnerability in Chrome within days of being notified of the issue.


Stories that stood out

As always, there was no shortage of inspiring, funny, and quirky anecdotes from the VRP in 2016.
  • We met Jasminder Pal Singh at Nullcon in India. Jasminder is a long-time contributor to the VRP, but security research is a side project for him. He spends most of his time growing Jasminder Web Services Point, the startup he operates with six other colleagues and friends. The team consists of two web developers, one graphic designer, one Android developer, one iOS developer, one Linux administrator, and a content manager/writer. Jasminder’s VRP rewards fund the startup. The number of reports we receive from researchers in India is growing, and we’re growing the VRP’s presence there with additional conference sponsorships, trainings, and more.

Jasminder (back right) and his team
  • Jon Sawyer worked with his colleague Sean Beaupre from Streamlined Mobile Solutions, and friend Ben Actis to submit three Android vulnerability reports. A resident of Clallam County, Washington, Jon donated their $8,000 reward to their local Special Olympics team, the Orcas. Jon told us the reward was particularly meaningful because his son, Benji, plays on the team. He said: “Special Olympics provides a sense of community, accomplishment, and free health services at meets. They do incredible things for these people, at no cost for the athletes or their parents. Our donation is going to supply them with new properly fitting uniforms, new equipment, cover some facility rental fees (bowling alley, gym, track, swimming pool) and most importantly help cover the biggest cost, transportation.”
  • VRP researchers sometimes attach videos that demonstrate the bug. While making a great proof-of-concept video is a skill in itself, our researchers raised it to another level this year. Check out this video Frans Rosén sent us. It’s perfectly synchronized to the background music! We hope this trend continues in 2017 ;-)

Researchers’ individual contributions, and our relationship with the community, have never been more important. A hearty thank you to everyone that contributed to the VRP in 2016 — we’re excited to work with you (and others!) in 2017 and beyond. 

*Josh Armour (VRP Program Manager), Andrew Whalley (Chrome VRP), and Quan To (Android VRP) contributed mightily to help lead these Google-wide efforts.


In support of our work to implement HTTPS across all of our products (https://www.google.com/transparencyreport/https/), we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third party. This has been a key element in enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.

As we look forward to the evolution of both the web and our own products it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (https://pki.goog/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.

The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.

We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority. This change will enable us to begin the process of migrating to our new, independent infrastructure.

Google Trust Services now operates the following Root Certificates:

Root         Public Key         Fingerprint (SHA1)                                           Valid Until
GTS Root R1  RSA 4096, SHA-384  e1:c9:50:e6:ef:22:f8:4c:56:45:72:8b:92:20:60:d7:d5:a7:a3:e8  Jun 22, 2036
GTS Root R2  RSA 4096, SHA-384  d2:73:96:2a:2a:5e:39:9f:73:3f:e1:c7:1e:64:3f:03:38:34:fc:4d  Jun 22, 2036
GTS Root R3  ECC 384, SHA-384   30:d4:24:6f:07:ff:db:91:89:8a:0b:e9:49:66:11:eb:8c:5e:46:e5  Jun 22, 2036
GTS Root R4  ECC 384, SHA-384   2a:1d:60:27:d9:4a:b1:0a:1c:4d:91:5c:cd:33:a0:cb:3e:2d:54:cb  Jun 22, 2036
GS Root R2   RSA 2048, SHA-1    75:e0:ab:b6:13:85:12:27:1c:04:f8:5f:dd:de:38:e4:b7:24:2e:fe  Dec 15, 2021
GS Root R4   ECC 256, SHA-256   69:69:56:2e:40:80:f4:24:a1:e7:19:9f:14:ba:f3:ee:58:ab:6a:bb  Jan 19, 2038
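To confirm that a certificate you hold is one of these roots, you can compute its SHA1 fingerprint and compare it against the table above. A minimal sketch using only Python's standard library (how you obtain the PEM text is up to you):

```python
import hashlib
import ssl

def sha1_fingerprint(pem_cert):
    """Colon-separated SHA1 fingerprint of a PEM-encoded certificate."""
    # Decode the base64 PEM body back to raw DER bytes.
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    digest = hashlib.sha1(der).hexdigest()
    # Format as aa:bb:cc:... to match the table above.
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

For GTS Root R1, the result should match the table's e1:c9:50:…:a3:e8 value.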


Because establishing an independently trusted Root Certificate Authority takes time, we have also secured the option to cross-sign our CAs using:


Root        Public Key         Fingerprint (SHA1)                                           Valid Until
GS Root R3  RSA 2048, SHA-256  d6:9b:56:11:48:f0:1c:77:c5:45:78:c1:09:26:df:5b:85:69:76:ad  Mar 18, 2029
GeoTrust    RSA 2048, SHA-1    de:28:f4:a4:ff:e5:b9:2f:a3:c5:03:d1:a3:49:a7:f9:96:2a:82:12  May 21, 2022


If you are building products that intend to connect to a Google property moving forward, you need to, at minimum, include the above Root Certificates. That said, even though we now operate our own roots, we may still choose to operate subordinate CAs under third-party operated roots.


For this reason, if you are developing code intended to connect to a Google property, we still recommend you include a wide set of trustworthy roots. Google maintains a sample PEM file at https://pki.goog/roots.pem, which is periodically updated to include the Google Trust Services owned and operated roots, as well as other roots that may be necessary, now or in the future, to communicate with and use Google products and services.
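As a concrete illustration, a client can trust this bundle by pointing its TLS stack at the downloaded file. A minimal Python sketch, assuming roots.pem has been fetched from https://pki.goog/roots.pem (the file name and host are illustrative):

```python
import socket
import ssl

def google_tls_context(ca_bundle="roots.pem"):
    """Build a client SSLContext that verifies servers against the bundle."""
    context = ssl.create_default_context(cafile=ca_bundle)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def fetch_tls_version(host, ca_bundle="roots.pem"):
    """Open a verified TLS connection to host:443 and report the version."""
    context = google_tls_context(ca_bundle)
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

Note that when `cafile` is given, Python does not also consult the system trust store, so the bundle must contain every root you expect to need.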

Posted by Rahul Mishra, Android Security Program Manager
[Cross-posted from the Android Developers Blog]

In April 2016, the Android Security team described how the Google Play App Security Improvement (ASI) program has helped developers fix security issues in 100,000 applications. Since then, we have detected and notified developers of 11 new security issues and provided developers with resources and guidance to update their apps. Because of this, over 90,000 developers have updated over 275,000 apps!
ASI now notifies developers of 26 potential security issues. To make this process more transparent, we introduced a new page where developers can find information about all these security issues in one place. This page includes links to help center articles containing instructions and additional support contacts. Developers can use this page as a resource to learn about new issues and keep track of all past issues.

Developers can also refer to our security best practices documents and security checklist, which are aimed at improving the understanding of general security concepts and providing examples that can help tackle app-specific issues.

How you can help:
For feedback or questions, please reach out to us through the Google Play Developer Help Center. To report potential security issues, email us at security+asi@android.com.

Posted by Megan Ruthven, Software Engineer

[Cross-posted from the Android Developers Blog]

In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.

But sometimes devices stop checking up with Verify apps. This may happen for a non-security-related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with other security systems, to help determine whether an app is a PHA, in order to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system.

This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working, and to prevent those failures from happening in the future.
Flagging DOI Apps

To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.

Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. The equation for the Z-score is below:

Z = (x − N·p) / √(N·p·(1 − p))

  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability of a device downloading any app will be retained.

In this context, we call the Z-score of an app's retention rate its DOI score. The DOI score indicates an app has a statistically significantly lower retention rate if the Z-score is much less than −3.7. This means that if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being that large. Here, the null hypothesis is that the app's correlation with a lower retention rate is accidental, independent of what the app does.
This allows for percolation of extreme apps (with low retention rate and high number of downloads) to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs of the app.
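The scorer described above reduces to a binomial Z-score over retention. A minimal sketch, with the −3.7 threshold quoted above (function and variable names are illustrative, not from Android's implementation):

```python
import math

def doi_score(n_downloads, n_retained, p_retain):
    """Z-score of an app's retention rate vs. the ecosystem-wide rate.

    n_downloads: N, devices that downloaded the app
    n_retained:  x, retained devices that downloaded the app
    p_retain:    p, probability a device downloading any app is retained
    """
    expected = n_downloads * p_retain
    stddev = math.sqrt(n_downloads * p_retain * (1.0 - p_retain))
    return (n_retained - expected) / stddev

def is_doi_flagged(n_downloads, n_retained, p_retain, threshold=-3.7):
    """Flag apps whose retention is statistically significantly below average."""
    return doi_score(n_downloads, n_retained, p_retain) < threshold
```

For example, an app downloaded by 10,000 devices of which only 9,000 are retained, against a 95% baseline, scores far below the threshold and would be flagged.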

Difference between a regular and DOI app download on the same device.


Results in the wild
Among others, the DOI score flagged many apps in three well known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps across these three families because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach gives us another perspective from which to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.



Encryption is a foundational technology for the web. We’ve spent a lot of time working through the intricacies of making encrypted apps easy to use and in the process, realized that a generic, secure way to discover a recipient's public keys for addressing messages correctly is important. Not only would such a thing be beneficial across many applications, but nothing like this exists as a generic technology.

A solution would need to reliably scale to internet size while providing a way to establish secure communications through untrusted servers. It became clear that if we combined insights from Certificate Transparency and CONIKS we could build a system with the properties we wanted and more.

The result is Key Transparency, which we’re making available as an open-source prototype today.

Why Key Transparency is useful

Existing methods of protecting users against server compromise require users to manually verify recipients’ accounts in-person. This simply hasn’t worked. The PGP web-of-trust for encrypted email is just one example: over 20 years after its invention, most people still can't or won’t use it, including its original author. Messaging apps, file sharing, and software updates also suffer from the same challenge.

One of our goals with Key Transparency was to simplify this process and build infrastructure that makes it usable by non-experts. The relationship between online personas and public keys should be automatically verifiable and publicly auditable. Users should be able to see all the keys that have been attached to an account, while any attempt to tamper with the record is made publicly visible. This also ensures that senders will always use the same keys that account owners are verifying.

Key Transparency is a general-use, transparent directory that makes it easy for developers to create systems of all kinds with independently auditable account data. It can be used in a variety of scenarios where data needs to be encrypted or authenticated. It can be used to make security features that are easy for people to understand while supporting important user needs like account recovery.
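The public auditability described here rests on Merkle-tree proofs of the kind used in Certificate Transparency: anyone can check that a specific account-to-key binding is committed to by a published tree root. A minimal inclusion-proof sketch, assuming RFC 6962-style leaf/node hashing (this is illustrative, not Key Transparency's actual wire format):

```python
import hashlib

def _hash(data):
    return hashlib.sha256(data).digest()

def leaf_hash(entry):
    # 0x00 prefix domain-separates leaves from interior nodes (RFC 6962).
    return _hash(b"\x00" + entry)

def node_hash(left, right):
    return _hash(b"\x01" + left + right)

def verify_inclusion(entry, index, proof, root):
    """Check that `entry` at leaf `index` is committed to by `root`.

    `proof` is the list of sibling hashes on the path from leaf to root.
    """
    h = leaf_hash(entry)
    for sibling in proof:
        if index % 2 == 1:
            h = node_hash(sibling, h)  # we are the right child
        else:
            h = node_hash(h, sibling)  # we are the left child
        index //= 2
    return h == root
```

Because the root is published and gossiped, a server that shows different keys to different users produces inconsistent roots, which auditors can detect.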

Looking ahead
It’s still very early days for Key Transparency. With this first open source release, we’re continuing a conversation with the crypto community and other industry leaders, soliciting feedback, and working toward creating a standard that can help advance security for everyone.

We’d also like to thank our many collaborators during Key Transparency’s multi-year development, including the CONIKS team, Open Whisper Systems, as well as the security engineering teams at Yahoo! and internally at Google.

Our goal is to evolve Key Transparency into an open-source, generic, scalable, and interoperable directory of public keys with an ecosystem of mutually auditing directories. We welcome your apps, input, and contributions to this new technology at KeyTransparency.org.


Last year we helped launch USENIX Enigma, a conference focused on bringing together security and privacy experts from academia, industry, and public service. After a successful first run, we’re supporting Enigma again this year and looking forward to more great talks and an ever-engaging hallway track!

Our speakers this year include practitioners from many tech industry leaders, researchers and professors at universities from around the world, and civil servants working at agencies in the U.S. and abroad. In addition to sessions focused specifically on software security, spam and abuse, and usability, the program will cover interdisciplinary topics, like the intersection of security and artificial intelligence, neuroscience, and society.

I’m also very proud to have some of my Google colleagues speaking at Enigma:

  • Sunny Consolvo will present results from a qualitative study of the privacy and security practices and challenges of survivors of intimate partner abuse. Sunny will also share how technology creators can better support the victims and survivors of such abuse.
  • Damian Menscher has been battling botnets for a decade at Google and has witnessed the evolution of Distributed Denial-of-Service (DDoS) scaling and attack ingenuity. Damian will describe his operation of a DDoS honeypot, and share specific things he learned while protecting KrebsOnSecurity.com.
  • Emily Schechter will be talking about driving HTTPS adoption on the web. She’ll go over some of the unexpected speed bumps major web sites have encountered, share how Chrome approaches feature changes that encourage HTTPS usage, and discuss what’s next to get to a default encrypted web.
As we did last year, my program co-chair, David Brumley, and I are developing a program full of short, thought-provoking presentations followed by lively discussion. Our program committee has worked closely with each speaker to help them craft the best version of their talk. Everyone is excited to share them, and there’s still time to register! I hope to see many of you in Oakland at USENIX Enigma later this month.

PS: Warning! Self-serving Google notice ahead… We’re hiring! We believe that most security researchers do what they do because they love what they do. What we offer is a place to do what you love—but in the open, on real-world problems, and without distraction. Please reach out to us if you’re interested.