


One of Google’s core security principles is to engage the community, to better protect our users and build relationships with security researchers. We had this principle in mind as we launched our Chromium and Google Web Vulnerability Reward Programs. We didn’t know what to expect, but in the three years since launch, we’ve rewarded (and fixed!) more than 2,000 security bug reports and also received recognition for setting leading standards for response time.

The collective creativity of the wider security community has surpassed all expectations, and their expertise has helped make Chrome even safer for hundreds of millions of users around the world. Today we’re delighted to announce we’ve now paid out in excess of $2,000,000 (USD) across Google’s security reward initiatives. Broken down, this total includes more than $1,000,000 (USD) for the Chromium VRP / Pwnium rewards, and in excess of $1,000,000 (USD) for the Google Web VRP rewards.

Today, the Chromium program is raising reward levels significantly. In a nutshell, bugs previously rewarded at the $1,000 level will now be considered for reward at up to $5,000. In many cases, this will be a 5x increase in reward level! We’ll issue higher rewards for bugs we believe present a more significant threat to user safety, and when the researcher provides an accurate analysis of exploitability and severity. We will continue to pay previously announced bonuses on top, such as those for providing a patch or finding an issue in a critical piece of open source software.

Interested Chromium researchers should familiarize themselves with our documentation on how to report a security bug well and how we determine higher reward eligibility.

These Chromium reward level increases follow on from similar increases under the Google Web program. With all these new levels, we’re excited to march towards new milestones and a more secure web.







[Cross-posted from the Official Google Blog]

Two of the biggest threats online are malicious software (known as malware) that can take control of your computer, and phishing scams that try to trick you into sharing passwords or other private information.

So in 2006 we started a Safe Browsing program to find and flag suspect websites. This means that when you are surfing the web, we can now warn you when a site is unsafe. We're currently flagging up to 10,000 sites a day—and because we share this technology with other browsers there are about 1 billion users we can help keep safe.

But we're always looking for new ways to protect users' security. So today we're launching a new section on our Transparency Report that will shed more light on the sources of malware and phishing attacks. You can now learn how many people see Safe Browsing warnings each week, where malicious sites are hosted around the world, how quickly websites become reinfected after their owners clean malware from their sites, and other tidbits we’ve surfaced.


Sharing this information also aligns well with our Transparency Report, which already gives information about government requests for user data, government requests to remove content, and current disruptions to our services.

To learn more, explore the new Safe Browsing information on this page. Webmasters and network administrators can find recommendations for dealing with malware infections, including resources like Google Webmaster Tools and Safe Browsing Alerts for Network Administrators.



[Update June 13: This post is available in Farsi on the Google Persian Blog.]

For almost three weeks, we have detected and disrupted multiple email-based phishing campaigns aimed at compromising the accounts owned by tens of thousands of Iranian users. These campaigns, which originate from within Iran, represent a significant jump in the overall volume of phishing activity in the region. The timing and targeting of the campaigns suggest that the attacks are politically motivated in connection with the Iranian presidential election on Friday.


Our Chrome browser previously helped detect what appears to be the same group using fraudulent SSL certificates to conduct attacks that targeted users within Iran. In this case, the phishing technique we detected is more routine: users receive an email containing a link to a web page that purports to provide a way to perform account maintenance. If users click the link, they see a fake Google sign-in page that steals their username and password.

Protecting our users’ accounts is one of our top priorities, so we notify targets of state-sponsored attacks and other suspicious activity, and we take other appropriate actions to limit the impact of these attacks on our users. Especially if you are in Iran, we encourage you to take extra steps to protect your account. Watching out for phishing, using a modern browser like Chrome and enabling 2-step verification can make you significantly more secure against these and many other types of attacks. Also, before typing your Google password, always verify that the URL in the address bar of your browser begins with https://accounts.google.com/. If the website's address does not match this text, please don’t enter your Google password.
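
To make that check concrete, here is a minimal sketch in Python (our own illustration; the function name is hypothetical and not part of any Google tooling) of why the full prefix, including the scheme and the exact hostname, matters:

    from urllib.parse import urlparse

    def looks_like_google_signin(url):
        # Compare the scheme and the full hostname, not just a string prefix:
        # a lookalike such as https://accounts.google.com.evil.example/ fails
        # this check even though it "begins with" similar-looking text.
        parsed = urlparse(url)
        return parsed.scheme == "https" and parsed.hostname == "accounts.google.com"

    print(looks_like_google_signin("https://accounts.google.com/ServiceLogin"))   # True
    print(looks_like_google_signin("https://accounts.google.com.evil.example/"))  # False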



Our vulnerability reward programs have been very successful in helping us fix more bugs and better protect our users, while also strengthening our relationships with security researchers. Since introducing our reward program for web properties in November 2010, we’ve received over 1,500 qualifying vulnerability reports that span across Google’s services, as well as software written by companies we have acquired. We’ve paid $828,000 to more than 250 individuals, some of whom have doubled their total by donating their rewards to charity. For example, one of our bug finders decided to support a school project in East Africa.

In recognition of the difficulty involved in finding bugs in our most critical applications, we’re once again rolling out updated rules and significant reward increases for another group of bug categories:
  • Cross-site scripting (XSS) bugs on https://accounts.google.com now receive a reward of $7,500 (previously $3,133.70). Rewards for XSS bugs in other highly sensitive services such as Gmail and Google Wallet have been bumped up to $5,000 (previously $1,337), with normal Google properties increasing to $3,133.70 (previously $500).
  • The top reward for significant authentication bypasses / information leaks is now $7,500 (previously $5,000).
As always, happy bug hunting! If you do find a security problem, please let us know.





We recently discovered that attackers are actively targeting a previously unknown and unpatched vulnerability in software belonging to another company. This isn’t an isolated incident -- on a semi-regular basis, Google security researchers uncover real-world exploitation of publicly unknown (“zero-day”) vulnerabilities. We always report these cases to the affected vendor immediately, and we work closely with them to drive the issue to resolution. Over the years, we’ve reported dozens of actively exploited zero-day vulnerabilities to affected vendors, including XML parsing vulnerabilities, universal cross-site scripting bugs, and targeted web application attacks.

Often, we find that zero-day vulnerabilities are used to target a limited subset of people. In many cases, this targeting actually makes the attack more serious than a broader attack, and more urgent to resolve quickly. Political activists are frequent targets, and the consequences of being compromised can have real safety implications in some parts of the world.

Our standing recommendation is that companies should fix critical vulnerabilities within 60 days -- or, if a fix is not possible, they should notify the public about the risk and offer workarounds. We encourage researchers to publish their findings if reported issues will take longer to patch. Based on our experience, however, we believe that more urgent action -- within 7 days -- is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.

Seven days is an aggressive timeline and may be too short for some vendors to update their products, but it should be enough time to publish advice about possible mitigations, such as temporarily disabling a service, restricting access, or contacting the vendor for more information. As a result, after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves. By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management.



Protecting the security and privacy of our users is one of our most important tasks at Google, which is why we utilize encryption on almost all connections made to Google.

This encryption needs to be updated at times to make it even stronger, so this year our SSL services will undergo a series of certificate upgrades—specifically, all of our SSL certificates will be upgraded to 2048-bit keys by the end of 2013. We will begin switching to the new 2048-bit certificates on August 1st, to ensure adequate time for a careful rollout before the end of the year. We’re also going to change the root certificate that signs all of our SSL certificates because it has a 1024-bit key.
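
For readers who want to see what a server is presenting today, here is a minimal sketch (in Python, using the standard ssl module plus the third-party cryptography package; both are our choice of tooling, not something this post prescribes) that reports the public key size of a server’s leaf certificate:

    import ssl
    from cryptography import x509

    def leaf_key_size(hostname, port=443):
        # Fetch the server's leaf certificate in PEM form and report the size,
        # in bits, of the public key it contains.
        pem = ssl.get_server_certificate((hostname, port))
        cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
        return cert.public_key().key_size

    print(leaf_key_size("www.google.com"))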

Most client software won’t have any problems with either of these changes, but we know that some configurations will require some extra steps to avoid complications. This is more often true of client software embedded in devices such as certain types of phones, printers, set-top boxes, gaming consoles, and cameras.

For a smooth upgrade, client software that makes SSL connections to Google (e.g. HTTPS) must:
  • Perform normal validation of the certificate chain;
  • Include a properly extensive set of root certificates. Our FAQ contains an example set that should be sufficient for connecting to Google. (Note: the contents of this list may change over time, so clients should have a way to update themselves as changes occur);
  • Support Subject Alternative Names (SANs).
Also, clients should support the Server Name Indication (SNI) extension because clients may need to make an extra API call to set the hostname on an SSL connection. Any client unsure about SNI support can be tested against https://googlemail.com—this URL should only validate if you are sending SNI.
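
To make those requirements concrete, here is a minimal sketch in Python (our choice of language; nothing here is Google-specific tooling) that performs normal chain validation, matches the hostname against the certificate’s Subject Alternative Names, and sends SNI, using the test URL mentioned above:

    import socket
    import ssl

    def connect_with_sni(hostname="googlemail.com", port=443):
        # The default context validates the certificate chain against the
        # system's root store and checks the hostname against the SANs.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            # server_hostname sends the SNI extension; per the note above,
            # https://googlemail.com should only validate when SNI is sent.
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()["subjectAltName"]

    print(connect_with_sni())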

On the flip side, here are some examples of improper validation practices that could prevent client software from connecting to Google over SSL after the upgrade:
  • Matching the leaf certificate exactly (e.g. by hashing it)
  • Matching any other certificate (e.g. Root or Intermediate signing certificate) exactly
  • Hard-coding the expected Root certificate, especially in firmware. This is sometimes done based on assumptions like the following:
    • The Root Certificate of our chain will not change on short notice.
    • Google will always use Thawte as its Root CA.
    • Google will always use Equifax as its Root CA.
    • Google will always use one of a small number of Root CAs.
    • The certificate will always contain exactly the expected hostname in the Common Name field and therefore clients do not need to worry about SANs.
    • The certificate will always contain exactly the expected hostname in a SAN and therefore clients don't need to worry about wildcards.
Any software that contains these improper validation practices should be changed. More detailed information can be found in this document, and you can also check out our FAQ if you have specific questions.



This January, Google and SyScan announced a secure coding competition open to students from all over the world. While Google’s Summer of Code and Code-in encourage students to contribute to open source projects, Hardcode was a call for students who wanted to showcase their skills both in software development and security. Given the scope of today’s online threats, we think it’s incredibly important to practice secure coding habits early on. Hundreds of students from 25 countries and across five continents signed up to receive updates and information about the competition, and over 100 teams participated.


During the preliminary online round, teams built applications on Google App Engine that were judged for both functionality and security. Five teams were then selected to participate in the final round at the SyScan 2013 security conference in Singapore, where they had to do the following: fix security bugs from the preliminary round, collaborate to develop an API standard to allow their applications to interoperate, implement the API, and finally, try to hack each other’s applications. To add to the challenge, many of the students balanced the competition with all of their school commitments.


We’re extremely impressed with the caliber of the contestants’ work. Everyone had a lot of fun, and we think these students have a bright future ahead of them. We are pleased to announce the final results of the 2013 Hardcode Competition:

1st Place: Team 0xC0DEBA5E
Vienna University of Technology, Austria (SGD $20,000)
Daniel Marth (http://proggen.org/)
Lukas Pfeifhofer (https://www.devlabs.pro/)
Benedikt Wedenik

2nd Place: Team Gridlock
Loyola School, Jamshedpur, India (SGD $15,000)
Aviral Dasgupta (http://www.aviraldg.com/)

3rd Place: Team CeciliaSec
University of California, Santa Barbara, California, USA (SGD $10,000)
Nathan Crandall
Dane Pitkin
Justin Rushing

Runner-up: Team AppDaptor
The Hong Kong Polytechnic University, Hong Kong (SGD $5,000)
Lau Chun Wai (http://www.cwlau.com/)

Runner-up: Team DesiCoders
Birla Institute of Technology & Science, Pilani, India (SGD $5,000)
Yash Agarwal
Vishesh Singhal (http://visheshsinghal.blogspot.com)

Honorable Mention: Team Saviors of Middle Earth (withdrew due to school commitments)
Walt Whitman High School, Maryland, USA
Wes Kendrick
Marc Rosen (https://github.com/maz)
William Zhang

A big congratulations to this very talented group of students!



If you use Chrome, you shouldn’t have to work hard to know what Chrome extensions you have installed and enabled. That’s why last December we announced that Chrome (version 25 and beyond) would disable silent extension installation by default. In addition to protecting users from unauthorized installations, these measures resulted in noticeable performance improvements in Chrome and improved user experience.

To further safeguard you while browsing the web, we recently added new measures to protect your computer. These measures will identify software that violates Chrome’s standard mechanisms for deploying extensions and flag such binaries as malware. Within a week, you will start seeing Safe Browsing malicious download warnings when attempting to download malware identified by these criteria.

This kind of malware commonly tries to get around silent installation blockers by misusing Chrome’s central management settings, which are intended to be used to configure instances of Chrome internally within an organization. Extensions installed this way are enabled by default and cannot be uninstalled or disabled by the user from within Chrome. Other variants include binaries that directly manipulate Chrome preferences in order to silently install and enable extensions bundled with them. Our recent measures expand our capabilities to detect and block these types of malware.

Application developers should adhere to Chrome’s standard mechanisms for extension installation, which include the Chrome Web Store, inline installation, and the other deployment options described in the extensions development documentation.




We launched Google Public DNS three years ago to help make the Internet faster and more secure. Today, we are taking a major step towards this security goal: we now fully support DNSSEC (Domain Name System Security Extensions) validation on our Google Public DNS resolvers. Previously, we accepted and forwarded DNSSEC-formatted messages but did not perform validation. With this new security feature, we can better protect people from DNS-based attacks and make DNS more secure overall by identifying and rejecting invalid responses from DNSSEC-protected domains.

DNS translates human-readable domain names into IP addresses so that they are accessible by computers. Despite its critical role in Internet applications, DNS has until now lacked security protection, and a significant portion of today’s Internet attacks target the name resolution process, attempting to return the IP addresses of malicious websites in response to DNS queries. Probably the most common DNS attack is DNS cache poisoning, which tries to “pollute” the cache of DNS resolvers (such as Google Public DNS or those provided by most ISPs) by injecting spoofed responses to upstream DNS queries.

To counter cache poisoning attacks, resolvers must be able to verify the authenticity of the response. DNSSEC solves the problem by authenticating DNS responses using digital signatures and public key cryptography. Each DNS zone maintains a set of private/public key pairs, and for each DNS record a unique digital signature is generated with the zone’s private key. The corresponding public key is then authenticated via a chain of trust by the keys of upper-level zones. DNSSEC effectively prevents response tampering because, in practice, signatures are almost impossible to forge without access to the private keys, and resolvers reject responses that lack correct signatures.
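
As an illustration, a client can request DNSSEC processing from Google Public DNS and check whether the resolver validated the answer; this is also how a client explicitly requested validation during the opt-in phase described in the update below. The sketch uses the third-party dnspython library (our choice, not something this post prescribes) to set the DO bit and inspect the AD (Authenticated Data) flag:

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    # Build a query with the DO (DNSSEC OK) bit set so the resolver returns
    # DNSSEC records and, if validation succeeds, sets the AD flag.
    query = dns.message.make_query("example.com", dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    if response.flags & dns.flags.AD:
        print("Resolver validated the response (AD flag set)")
    else:
        print("Response was not validated")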

DNSSEC is a critical step towards securing the Internet. By validating data origin and data integrity, DNSSEC complements other Internet security mechanisms, such as SSL. It is worth noting that although we have used web access in the examples above, DNS infrastructure is widely used in many other Internet applications, including email.

Currently Google Public DNS is serving more than 130 billion DNS queries on average (peaking at 150 billion) from more than 70 million unique IP addresses each day. However, only 7% of queries from the client side are DNSSEC-enabled (about 3% requesting validation and 4% requesting DNSSEC data but no validation) and about 1% of DNS responses from the name server side are signed. Overall, DNSSEC is still at an early stage and we hope that our support will help expedite its deployment.

Effective deployment of DNSSEC requires action from both DNS resolvers and authoritative name servers. Resolvers, especially those of ISPs and other public resolvers, need to start validating DNS responses. Meanwhile, domain owners have to sign their domains. Today, about 1/3 of top-level domains have been signed, but most second-level domains remain unsigned. We encourage all involved parties to push DNSSEC deployment and further protect Internet users from DNS-based network intrusions.

For more information about Google Public DNS, please visit: https://developers.google.com/speed/public-dns. In particular, more details about our DNSSEC support can be found in the FAQ and Security pages. Additionally, general specifications of the DNSSEC standard can be found in RFCs 4033, 4034, 4035, and 5155.

Update March 21: We've been listening to your questions and would like to clarify that validation is not yet enabled for non-DNSSEC aware clients. As a first step, we launched DNSSEC validation as an opt-in feature and will only perform validation if clients explicitly request it. We're going to work to minimize the impact of any DNSSEC misconfigurations that could cause connection breakages before we enable validation by default for all clients that have not explicitly opted out.

Update May 6: We've enabled DNSSEC validation by default. That means all clients are now protected and responses to all queries will be validated unless clients explicitly opt out.



We created a new Help for hacked sites informational series to help site owners of all skill levels understand how to recover a hacked site. The series includes over a dozen articles and 80+ minutes of informational videos—from the basics of what it means for a site to be hacked to diagnosing specific malware infection types.


“Help for hacked sites” overview: How and why a site is hacked

Over 25% of sites that are hacked may remain compromised
In StopBadware and Commtouch’s 2012 survey of more than 600 webmasters of hacked sites, 26% of site owners reported that their site was still compromised, while 2% had completely abandoned their site. We hope that by adding our educational resources to the great tools and information already available from the security community, more hacked sites can restore their unique content and make it safely available to users. The fact remains, however, that the recovery process requires fairly advanced system administrator skills and knowledge of source code. Without help from others—perhaps their hoster or a trusted expert—many site owners may still struggle to recover.

StopBadware and Commtouch’s 2012 survey results for “What action did you take/are you taking to fix the compromised site?”

Hackers’ tactics are difficult for site owners to detect
Cybercriminals employ various tricks to avoid detection, making recovery difficult for the average site owner. One technique is adding “hidden text” to a site’s pages so that users don’t see the damage but search engines still process the content. This is often the case for sites hacked with spam: hackers abuse a reputable site to help their own sites (commonly pharmaceutical or poker sites) rank in search results.

Both pages are the same, but the page on the right highlights the “hidden text”—in this case, white text on a white background. As explained in Step 5: Assess the damage (hacked with spam), hackers employ these types of tricks to avoid human detection.
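
As a rough illustration of how such markup can be spotted, here is a naive sketch in Python (our own heuristic, not part of the Help for hacked sites series) that flags inline styles setting white text, which is suspicious on a page with a white background; a real audit also needs to examine rendered styles and external CSS:

    import re
    import urllib.request

    # Naive heuristic: match tags whose inline style sets the text color to white.
    HIDDEN_TEXT = re.compile(
        r'<[^>]+style="[^"]*(?<![-\w])color\s*:\s*(?:#ffffff|#fff|white)\b[^"]*"',
        re.IGNORECASE,
    )

    def find_suspicious_markup(url):
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        return HIDDEN_TEXT.findall(html)

    # Example (hypothetical URL):
    # for tag in find_suspicious_markup("http://www.example.com/"):
    #     print(tag)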

In cases of sites hacked to distribute malware, Google provides verified site owners with a sample of infected URLs, often with their malware infection type, such as Server configuration (using the server’s configuration file to redirect users to malicious content). In Help for hacked sites, Lucas Ballard, a software engineer on our Safe Browsing team, explains how to locate and clean this malware infection type.


Lucas Ballard covers the malware infection type Server configuration.
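
One quick way a site owner can look for this class of infection, even without shell access, is to compare where a page ends up when fetched normally versus when fetched as if the visitor arrived from a search result, since these redirects are often conditional on the Referer or User-Agent. The sketch below is our own illustration (the page URL is hypothetical), using Python’s standard library:

    import urllib.request

    def final_url(url, headers):
        # urllib follows HTTP redirects by default; geturl() reports the final URL.
        request = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(request) as response:
            return response.geturl()

    page = "http://www.example.com/"  # hypothetical page to check

    plain = final_url(page, {"User-Agent": "Mozilla/5.0"})
    from_search = final_url(page, {
        "User-Agent": "Mozilla/5.0",
        "Referer": "https://www.google.com/search?q=example",
    })

    if plain != from_search:
        print("Visitors arriving from search are redirected to:", from_search)
    else:
        print("No conditional redirect observed")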

Reminder to keep your site secure
I realize that reminding you to keep your site secure is a bit like my mother yelling “don’t forget to bring a coat!” as I leave her sunny California residence. Like my mother, I can’t help myself. Please remember to:
  • Be vigilant about keeping software updated
  • Understand the security practices of all applications, plugins, third-party software, etc., before you install them on your server
  • Remove unnecessary or unused software
  • Enforce creation of strong passwords
  • Keep all devices used to log in to your web server secure (updated operating system and browser)
  • Make regular, automated backups