Iulia Ion, Software Engineer - Rob Reeder, Research Scientist - Sunny Consolvo, User Experience Researcher

Today, you can find more online security tips in a few seconds than you could use in a lifetime. While this collection of best practices is rich, it’s not always useful; it can be difficult to know which ones to prioritize, and why.

Questions like ‘Why do people make some security choices (and not others)?’ and ‘How effectively does the security community communicate its best practices?’ are at the heart of a new paper, “‘...no one can hack my mind’: Comparing Expert and Non-Expert Security Practices,” that we’ll present this week at the Symposium on Usable Privacy and Security.

This paper outlines the results of two surveys—one with 231 security experts, and another with 294 web-users who aren’t security experts—in which we asked both groups what they do to stay safe online. We wanted to compare and contrast responses from the two groups, and better understand differences and why they may exist.

Experts’ and non-experts’ top 5 security practices

Here are experts’ and non-experts’ top security practices, according to our study. We asked each participant to list 3 practices they follow to stay safe online.
Common ground: careful password management

Clearly, careful password management is a priority for both groups. But the two groups differ in their approaches.

Security experts rely heavily on password managers, services that store and protect all of a user’s passwords in one place. Experts reported using password managers, for at least some of their accounts, three times as often as non-experts.

As one expert said, “Password managers change the whole calculus because they make it possible to have both strong and unique passwords.”

On the other hand, only 24% of non-experts reported using password managers for at least some of their accounts, compared to 73% of experts. Our findings suggested this was due to a lack of education about the benefits of password managers and/or a perceived lack of trust in these programs. “I try to remember my passwords because no one can hack my mind,” one non-expert told us.


Key differences: software updates and antivirus software

Despite some overlap, experts’ and non-experts’ top answers were remarkably different.

35% of experts and only 2% of non-experts said that installing software updates was one of their top security practices. Experts recognize the benefits of updates—“Patch, patch, patch,” said one expert—while non-experts not only aren’t clear on their benefits, but are also concerned about the potential risks of software updates. As non-experts told us: “I don’t know if updating software is always safe. What [if] you download malicious software?” and “Automatic software updates are not safe in my opinion, since it can be abused to update malicious content.”

Meanwhile, 42% of non-experts vs. only 7% of experts said that running antivirus software was one of the top three things they do to stay safe online. Experts acknowledged the benefits of antivirus software, but expressed concern that it might give users a false sense of security, since it’s not a bulletproof solution.


Next Steps

In the immediate term, we encourage everyone to read the full research paper, borrow experts’ top practices, and also check out our tips for keeping your information safe on Google.

More broadly, our findings highlight fundamental misunderstandings about basic online security practices. Software updates, for example, are the seatbelts of online security; they make you safer, period. And yet, many non-experts not only overlook these as a best practice, but also mistakenly worry that software updates are a security risk.

No practice on either list—expert or non-expert—makes users less secure. But, there is clearly room to improve how security best practices are prioritized and communicated to the vast majority of (non-expert) users. We’re looking forward to tackling that challenge.

Google Legal
Tim Willis, Hacker Philanthropist, Chrome Security Team










  • Rules are dangerously broad and vague. The proposed rules are not feasible and would require Google to request thousands - maybe even tens of thousands - of export licenses. Since Google operates in many different countries, the controls could cover our communications about software vulnerabilities, including: emails, code review systems, bug tracking systems, instant messages - even some in-person conversations! BIS’ own FAQ states that information about a vulnerability, including its causes, wouldn’t be controlled, but we believe that it sometimes actually could be controlled information.
  • You should never need a license when you report a bug to get it fixed. There should be standing license exceptions for everyone when controlled information is reported back to manufacturers for the purposes of fixing a vulnerability. This would provide protection for security researchers that report vulnerabilities, exploits, or other controlled information to any manufacturer or their agent.
  • Global companies should be able to share information globally. If we have information about intrusion software, we should be able to share that with our engineers, no matter where they physically sit.
  • Clarity is crucial. We acknowledge that we have a team of lawyers here to help us out, but navigating these controls shouldn’t be that complex and confusing. If BIS is going to implement the proposed controls, we recommend providing a simple, visual flowchart for everyone to easily understand when they need a license.
  • These controls should be changed ASAP. The only way to fix the scope of the intrusion software controls is to do it at the annual meeting of Wassenaar Arrangement members in December 2015.

We’re committed to working with BIS to make sure that both white hat security researchers’ interests and Google users’ interests are front of mind. The proposed BIS rule for public comment is available here, and comments can also be sent directly to publiccomments@bis.doc.gov. If BIS publishes another proposed rule on intrusion software, we’ll make sure to come back and update this blog post with details.



Last year, we announced our increased focus on unwanted software (UwS), and published our unwanted software policy. This work was prompted by our users falling prey to UwS and the harm it was doing to their browsing experience. Since then, Google Safe Browsing’s ability to detect deceptive software has steadily improved.

In the coming weeks, these detection improvements will become more noticeable in Chrome: users will see more warnings (like the one below) about unwanted software than ever before.

We want to be really clear that Google Safe Browsing’s mandate remains unchanged: we’re exclusively focused on protecting users from malware, phishing, unwanted software, and similar harm. You won’t see Safe Browsing warnings for any other reasons.

Unwanted software is distributed on websites through a variety of channels, including ad injectors and ad networks that lack strict quality guidelines. In many cases, Safe Browsing within your browser is your last line of defense.

Google Safe Browsing has protected users from phishing and malware since 2006, and from unwanted software since 2014. We provide this protection across browsers (Chrome, Firefox, and Safari) and across platforms (Windows, Mac OS X, Linux, and Android). If you want to help us improve the defenses for everyone using a browser that integrates Safe Browsing, please consider checking the box that appears on all of our warning pages.
Safe Browsing’s focus is solely on protecting people and their data from badness. And nothing else.
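For developers who want to build on the same data, the Safe Browsing API exposes it programmatically. The snippet below is a rough illustration only: the endpoint shown is the v4 Lookup API, and the API key, client name, and URL are placeholders.

// Sketch: look up one URL against the Safe Browsing Lookup API (v4).
// The API key, client identifiers, and URL below are placeholders.
const API_KEY = process.env.SAFE_BROWSING_API_KEY;
const ENDPOINT = `https://safebrowsing.googleapis.com/v4/threatMatches:find?key=${API_KEY}`;

async function isFlagged(url: string): Promise<boolean> {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      client: { clientId: "example-client", clientVersion: "1.0.0" },
      threatInfo: {
        threatTypes: ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
        platformTypes: ["ANY_PLATFORM"],
        threatEntryTypes: ["URL"],
        threatEntries: [{ url }],
      },
    }),
  });
  const result = await response.json();
  // The API returns an empty object when the URL is not on any threat list.
  return Array.isArray(result.matches) && result.matches.length > 0;
}

isFlagged("http://example.com/").then((flagged) =>
  console.log(flagged ? "Flagged by Safe Browsing" : "No match"),
);

Browsers typically rely on the more efficient Update API, which matches URLs against locally cached hash prefixes, but the Lookup API is the simplest way to experiment.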





Since 2010, our security reward programs have helped make Google products safer for everyone. Last year, we paid more than $1.5 million to security researchers who found vulnerabilities in Chrome and other Google products.

Today, we're expanding our program to include researchers that will find, fix, and prevent vulnerabilities on Android, specifically. Here are some details about the new Android Security Rewards program:

  • For vulnerabilities affecting Nexus phones and tablets available for sale on Google Play (currently Nexus 6 and Nexus 9), we will pay for each step required to fix a security bug, including patches and tests. This makes Nexus the first major line of mobile devices to offer an ongoing vulnerability rewards program.
  • In addition to rewards for vulnerabilities, our program offers even larger rewards to security researchers that invest in tests and patches that will make the entire ecosystem stronger.
  • The largest rewards are available to researchers that demonstrate how to work around Android’s platform security features, like ASLR, NX, and the sandboxing that is designed to prevent exploitation and protect users.

Android will continue to participate in Google’s Patch Rewards Program which pays for contributions that improve the security of Android (and other open source projects). We’ve also sponsored mobile pwn2own for the last 2 years, and we plan to continue to support this and other competitions to find vulnerabilities in Android.

As we have often said, open security research is a key strength of the Android platform. The more security research that's focused on Android, the stronger it will become.

Happy hunting.













  • With a single guess, an attacker would have a 19.7% chance of guessing English-speaking users’ answers to the question "What is your favorite food?" (it was ‘pizza’, by the way) 
  • With ten guesses, an attacker would have a nearly 24% chance of guessing Arabic-speaking users’ answer to the question "What’s your first teacher’s name?"
  • With ten guesses, an attacker would have a 21% chance of guessing Spanish-speaking users’ answers to the question, "What is your father’s middle name?"
  • With ten guesses, an attacker would have a 39% chance of guessing Korean-speaking users’ answers to the question "What is your city of birth?" and a 43% chance of guessing their favorite food.

Many different users also had identical answers to secret questions that we’d normally expect to be highly secure, such as "What’s your phone number?" or "What’s your frequent flyer number?". We dug into this further and found that 37% of people intentionally provide false answers to their questions, thinking this will make them harder to guess. However, this ends up backfiring, because many people choose the same (false) answers, which actually increases the likelihood that an attacker can break in.
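The math behind these guessing numbers is straightforward: an attacker who knows how answers are distributed simply tries the most common ones first. Here is a small sketch, with a made-up answer distribution rather than our data:

// Sketch: an attacker's chance of success within k guesses is just the sum of
// the k most common answer frequencies. The distribution below is invented
// purely for illustration; it is not data from our study.
const answerShare: Record<string, number> = {
  pizza: 0.197,
  pasta: 0.050,
  sushi: 0.030,
  // ...the remaining answers share the rest of the probability mass
};

function guessSuccess(shares: Record<string, number>, k: number): number {
  return Object.values(shares)
    .sort((a, b) => b - a) // most common answers first
    .slice(0, k)
    .reduce((sum, p) => sum + p, 0);
}

console.log(guessSuccess(answerShare, 1)); // 0.197 with a single guess
console.log(guessSuccess(answerShare, 3)); // 0.277 with three guesses

This is also why shared false answers backfire: they concentrate even more probability mass on a handful of common answers.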

Difficult Answers Aren’t Usable

Surprise, surprise: it’s not easy to remember where your mother went to elementary school, or what your library card number is! Difficult secret questions and answers are often hard to use. Here are some specific findings:

  • 40% of our English-speaking US users couldn’t recall their secret question answers when they needed to. These same users, meanwhile, could recall reset codes sent to them via SMS text message more than 80% of the time and via email nearly 75% of the time.
  • Some of the potentially safest questions—"What is your library card number?" and "What is your frequent flyer number?"—have only 22% and 9% recall rates, respectively.
  • For English-speaking users in the US, the easier question, "What is your father’s middle name?", had a success rate of 76%, while the potentially safer question, "What is your first phone number?", had only a 55% success rate.

Why not just add more secret questions?


Of course, it’s harder to guess the right answer to two (or more) questions than to just one. However, adding questions comes at a price too: the chances that people can recover their accounts drop significantly. We did a subsequent analysis to illustrate this idea (Google never actually asks multiple security questions).

According to our data, the ‘easiest’ question and answer is "What city were you born in?"—users recall this answer more than 79% of the time. The second easiest example is "What is your father’s middle name?", remembered by users 74% of the time. If an attacker had ten guesses, they’d have a 6.9% and 14.6% chance of guessing correct answers for these questions, respectively.

But, when users have to answer both questions together, the gap between the security and usability of secret questions becomes even more stark. The probability that an attacker could get both answers in ten guesses is 1%, but users will recall both answers only 59% of the time. As a result, piling on more secret questions makes it more difficult for users to recover their accounts and is not a good solution.
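As a quick back-of-the-envelope check of those combined figures, treating the two questions as independent (an assumption; real answers are likely somewhat correlated):

// Rough check of the combined numbers, assuming the questions are independent.
const recallCity = 0.79;        // recall rate for "What city were you born in?"
const recallMiddleName = 0.74;  // recall rate for "What is your father's middle name?"
const guessCity = 0.069;        // attacker success within ten guesses
const guessMiddleName = 0.146;

console.log((recallCity * recallMiddleName).toFixed(2)); // ~0.58: users recall both answers
console.log((guessCity * guessMiddleName).toFixed(3));   // ~0.010: attacker guesses both

The inputs here are rounded, which is why the recall product lands just under the 59% quoted above.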

The Next Question: What To Do?

Secret questions have long been a staple of authentication and account recovery online. But, given these findings, it’s important for users and site owners to think twice about relying on them.

We strongly encourage Google users to make sure their Google account recovery information is current. You can do this quickly and easily with our Security Checkup. For years, we’ve only used security questions for account recovery as a last resort when SMS text or back-up email addresses don’t work, and we will never use them as stand-alone proof of account ownership.

In parallel, site owners should use other methods of authentication, such as backup codes sent via SMS text or secondary email addresses, to authenticate their users and help them regain access to their accounts. These methods are both safer and offer a better user experience.
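As one concrete example of the backup-code approach (a purely hypothetical sketch, not a description of any particular service), a site could generate a handful of single-use codes at enrollment, show them to the user once, and keep only salted hashes on the server:

import { randomInt, randomBytes, scryptSync } from "crypto";

// Hypothetical sketch of one-time backup codes: the plaintext codes are shown
// to the user once, and only salted hashes are stored server-side.
interface StoredCode {
  salt: string;
  hash: string;
}

function generateBackupCodes(count = 10): { plain: string[]; stored: StoredCode[] } {
  const plain: string[] = [];
  const stored: StoredCode[] = [];
  for (let i = 0; i < count; i++) {
    const code = String(randomInt(0, 100_000_000)).padStart(8, "0"); // 8-digit code
    const salt = randomBytes(16).toString("hex");
    const hash = scryptSync(code, salt, 32).toString("hex");
    plain.push(code);
    stored.push({ salt, hash });
  }
  return { plain, stored };
}

function redeemCode(typed: string, stored: StoredCode[]): boolean {
  // Compare against every unused stored code; invalidate a code once it matches.
  const index = stored.findIndex(
    (c) => scryptSync(typed, c.salt, 32).toString("hex") === c.hash,
  );
  if (index === -1) return false;
  stored.splice(index, 1); // single-use
  return true;
}

A production implementation would also rate-limit attempts and use a constant-time comparison, but the core idea is just hashed, single-use secrets.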




Today, we’re releasing the results of a study performed with the University of California, Berkeley and Santa Barbara that examines the ad injector ecosystem, in-depth, for the first time. We’ve summarized our key findings below, as well as Google’s broader efforts to protect users from unwanted software. The full report, which you can read here, will be presented later this month at the IEEE Symposium on Security & Privacy.

Ad injectors’ businesses are built on a tangled web of different players in the online advertising economy. This complexity has made it difficult for the industry to understand this issue and help fix it. We hope our findings raise broad awareness of this problem and enable the online advertising industry to work together and tackle it.

How big is the problem?
This is what users might see if their browsers were infected with ad injectors. None of the ads displayed would appear without an ad injector installed.

To pursue this research, we custom-built an ad injection “detector” for Google sites. This tool helped us identify tens of millions of instances of ad injection “in the wild” over the course of several months in 2014, the duration of our study.
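As a rough sketch of the core idea (not the actual tool we built; the allow-list and selectors below are hypothetical and greatly simplified), client-side detection boils down to flagging scripts and frames from origins the page never intended to load:

// Greatly simplified sketch of client-side ad-injection detection: report any
// script or iframe whose origin is not on the page's expected list.
// The allow-list here is hypothetical.
const EXPECTED_ORIGINS = new Set(["https://www.google.com", "https://www.gstatic.com"]);

function findUnexpectedSources(): string[] {
  const suspicious: string[] = [];
  document.querySelectorAll("script[src], iframe[src]").forEach((el) => {
    const src = el.getAttribute("src");
    if (!src) return;
    try {
      const origin = new URL(src, location.href).origin;
      if (!EXPECTED_ORIGINS.has(origin)) {
        suspicious.push(src); // candidate injected content, reported for later analysis
      }
    } catch {
      // ignore URLs that cannot be parsed
    }
  });
  return suspicious;
}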

More detail is below, but the main point is clear: deceptive ad injection is a significant problem on the web today. We found that 5.5% of unique IP addresses accessing Google sites—representing millions of users—were seeing some form of injected ads.

How ad injectors work
The ad injection ecosystem comprises a tangled web of different players. Here is a quick snapshot.
  • Software: It all starts with software that infects your browser. We discovered more than 50,000 browser extensions and more than 34,000 software applications that took control of users’ browsers and injected ads. Upwards of 30% of these packages were outright malicious: they simultaneously stole account credentials, hijacked search queries, and reported a user’s activity to third parties for tracking. In total, 5.1% of page views on Windows and 3.4% of page views on Mac showed tell-tale signs of ad injection software.
  • Distribution: Next, this software is distributed by a network of affiliates that work to drive as many installs as possible via tactics like marketing, bundling applications with popular downloads, outright malware distribution, and large social advertising campaigns. Affiliates are paid a commission whenever a user clicks on an injected ad. We found about 1,000 of these businesses, including Crossrider, Shopper Pro, and Netcrawl, that use at least one of these tactics.
  • Injection Libraries: Ad injectors source their ads from about 25 businesses that provide ‘injection libraries’. Superfish and Jollywallet are by far the most popular of these, appearing in 3.9% and 2.4% of Google views, respectively. These companies manage advertising relationships with a handful of ad networks and shopping programs and decide which ads to display to users. Whenever a user clicks on an ad or purchases a product, these companies make a profit, a fraction of which they share with affiliates.
  • Ads: The ad injection ecosystem profits from more than 3,000 victimized advertisers—including major retailers like Sears, Walmart, Target, and eBay—who unwittingly pay for traffic to their sites. Because advertisers are generally only able to measure the final click that drives traffic to their sites, they’re often unaware of the many preceding twists and turns, and don’t know they are receiving traffic via unwanted software and malware. Ads originate from ad networks that translate unwanted software installations into profit: 77% of all injected ads go through one of three ad networks—dealtime.com, pricegrabber.com, and bizrate.com. Publishers, meanwhile, aren’t being compensated for these ads.
Examples of injected ads ‘in the wild’

How Google fights deceptive ad injectors 
We pursued this research to raise awareness about the ad injection economy so that the broader ads ecosystem can better understand this complex issue and work together to tackle it.

Based on our findings, we took the following actions:
  • Keeping the Chrome Web Store clean: We removed from the Chrome Web Store 192 deceptive extensions that injected ads and affected 14 million users. These extensions violated the Web Store policy that extensions must have a narrow and easy-to-understand purpose. We’ve also deployed new safeguards in the Chrome Web Store to help protect users from deceptive ad injection extensions.
  • Protecting Chrome users: We improved protections in Chrome to flag unwanted software and display familiar red warnings when users are about to download deceptive software. These same protections are broadly available via the Safe Browsing API. We also provide a tool for users already affected by ad injectors and other unwanted software to clean up their Chrome browser.
  • Informing advertisers: We reached out to the advertisers affected by ad injection to alert each of them to the deceptive practices and ad networks involved. This reflects a broader set of Google Platforms program policies and the DoubleClick Ad Exchange (AdX) Seller Program Guidelines, which prohibit programs that overlay ad space on a given site without the permission of the site owner.

Most recently, we updated our AdWords policies to make it more difficult for advertisers to promote unwanted software on AdWords. It's still early, but we've already seen encouraging results since making the change: the number of 'Safe Browsing' warnings that users receive in Chrome after clicking AdWords ads has dropped by more than 95%. This suggests it's become much more difficult for users to download unwanted software, and for bad advertisers to promote it. Our blog post from March outlines various policies—for the Chrome Web Store, AdWords, Google Platforms program, and the DoubleClick Ad Exchange (AdX)—that combat unwanted ad injectors, across products.

We’re also constantly improving our Safe Browsing technology, which protects more than one billion Chrome, Safari, and Firefox users across the web from phishing, malware, and unwanted software. Today, Safe Browsing shows people more than 5 million warnings per day for all sorts of malicious sites and unwanted software, and discovers more than 50,000 malware sites and more than 90,000 phishing sites every month.

Considering the tangle of different businesses involved—knowingly, or unknowingly—in the ad injector ecosystem, progress will only be made if we raise our standards, together. We strongly encourage all members of the ads ecosystem to review their policies and practices so we can make real improvement on this issue.



Would you enter your email address and password on this page?
This looks like a fairly standard login page, but it’s not. It’s what we call a “phishing” page, a site run by people looking to receive and steal your password. If you type your password here, attackers could steal it and gain access to your Google Account—and you may not even know it. This is a common and dangerous trap: the most effective phishing attacks can succeed 45 percent of the time, nearly 2 percent of messages to Gmail are designed to trick people into giving up their passwords, and various services across the web send millions upon millions of phishing emails, every day.

To help keep your account safe, today we’re launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you’ve installed it, Password Alert will show you a warning if you type your Google password into a site that isn’t a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.

Here's how it works for consumer accounts. Once you’ve installed and initialized Password Alert, Chrome will remember a “scrambled” version of your Google Account password. It only remembers this information for security purposes and doesn’t share it with anyone. If you type your password into a site that isn't a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you’re at risk of being phished so you can update your password and protect yourself.
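Password Alert is open source, so the exact mechanism is in its code; the sketch below is only a simplified illustration of the general idea of keeping a salted, scrambled form of the password and comparing what you type against it, never the plaintext. The names and hashing choice here are ours, not the extension’s.

import { createHash, randomBytes } from "crypto";

// Simplified illustration only: store a salted hash ("scrambled" form) of the
// password at setup time, then hash whatever is typed and compare the hashes.
// The plaintext password itself is never stored.
const salt = randomBytes(16).toString("hex");

function scramble(password: string): string {
  return createHash("sha256").update(salt + password).digest("hex");
}

// Recorded once, when the user signs in to Google and initializes the extension.
const storedScramble = scramble("correct horse battery staple");

function typedGooglePassword(typed: string): boolean {
  return scramble(typed) === storedScramble;
}

// If typedGooglePassword(...) returns true while the user is on a page that is
// not a Google sign-in page, warn them that they may be getting phished.

The comparison happens locally against the stored hash, which is how the extension can warn you without ever keeping your actual password.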
Password Alert is also available to Google for Work customers, including Google Apps and Drive for Work. Your administrator can install Password Alert for everyone in the domains they manage, and receive alerts when Password Alert detects a possible problem. This can help spot malicious attackers trying to break into employee accounts and also reduce password reuse. Administrators can find more information in the Help Center.
We work to protect users from phishing attacks in a variety of ways. We’re constantly improving our Safe Browsing technology, which protects more than 1 billion people on Chrome, Safari and Firefox from phishing and other dangerous sites via bright, red warnings. We also offer tools like 2-Step Verification and Security Key that people can use to protect their Google Accounts and stay safe online. And of course, you can also take a Security Checkup at any time to make sure the safety and security information associated with your account is current. 

To get started with Password Alert, visit the Chrome Web Store or the FAQ.







We noticed that the attack was carried out in multiple phases. The first phase appeared to be a testing stage and was conducted from March 3rd to March 6th. The initial test target was 114.113.156.119:56789, and the number of requests was artificially limited. From March 4th to March 6th, the request limitations were removed.

The next phase was conducted between March 10th and 13th and at first targeted the following IP address: 203.90.242.126. Passive DNS places hosts under the sinajs.cn domain at this IP address. On March 13th, the attack was extended to include d1gztyvw1gvkdq.cloudfront.net. At first, requests were made over HTTP and then upgraded to use HTTPS. On March 14th, the attack started for real and targeted d3rkfw22xppori.cloudfront.net via both HTTP and HTTPS. Attacks against this specific host were carried out until March 17th.

On March 18th, the number of hosts under attack was increased to include the following: d117ucqx7my6vj.cloudfront.net, d14qqseh1jha6e.cloudfront.net, d18yee9du95yb4.cloudfront.net, d19r410x06nzy6.cloudfront.net, d1blw6ybvy6vm2.cloudfront.net. This is also the first time we found truncated injections, in which the Javascript is cut off and non-functional. At some point during this phase of the attack, the cloudfront hosts started serving 302 redirects to greatfire.org as well as other domains. Substitution of Javascript ceased completely on March 20th, but injections into HTML pages continued. Whereas Javascript replacement breaks the functionality of the original content, injection into HTML does not: the HTML is modified to include both a reference to the original content and the attack Javascript, as shown below:

<html>
<head>
<meta name="referrer" content="never"/>
<title> </title>
</head>
<body>
      <iframe src="http://pan.baidu.com/s/1i3[...]?t=Zmh4cXpXJApHIDFMcjZa" style="position:absolute; left:0; top:0; height:100%; width:100%; border:0px;" scrolling="yes"></iframe>
</body>
<script type="text/javascript">
[... regular attack Javascript ...]
</script>
</html>

In this technique, the web browser fetches the same HTML page twice, but due to the presence of the query parameter t, no injection happens on the second request. The attacked domains also changed and now consisted of: dyzem5oho3umy.cloudfront.net, d25wg9b8djob8m.cloudfront.net and d28d0hakfq6b4n.cloudfront.net. About 10 hours after this new phase started, we saw 302 redirects to a different domain served from the targeted servers.

The attack against the cloudfront hosts stopped on March 25th. Instead, resources hosted on github.com were now under attack. The first new target was github.com/greatfire/wiki/wiki/nyt/, and it was quickly followed by github.com/greatfire/ as well as github.com/greatfire/wiki/wiki/dw/.

On March 26th, a packed and obfuscated attack Javascript replaced the plain version and started targeting the following resources: github.com/greatfire/ and github.com/cn-nytimes/. Here we also observed some truncated injections. The attack against github.com seems to have stopped on April 7th, 2015, which marks the last time we saw injections during our measurement period.

From the beginning of March until the attacks stopped in April, we saw 19 unique Javascript replacement payloads as represented by their MD5 sum in the pie chart below.
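Counting distinct payloads this way is simple bookkeeping; here is a sketch of the idea, with the input array merely standing in for the injected script bodies we captured:

import { createHash } from "crypto";

// Sketch: group captured Javascript payloads by their MD5 sum to count unique
// variants. The input array stands in for the injected script bodies we saw.
function countUniquePayloads(payloads: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const body of payloads) {
    const md5 = createHash("md5").update(body).digest("hex");
    counts.set(md5, (counts.get(md5) ?? 0) + 1);
  }
  return counts; // 19 distinct keys would correspond to the 19 payloads above
}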

For the HTML injections, the payloads were unique due to the injected URL so we are not showing their respective MD5 sums. However, the injected Javascript was very similar to the payloads referenced above.

Our systems saw injected content on the following eight baidu.com domains and corresponding IP addresses:

  • cbjs.baidu.com (123.125.65.120)
  • eclick.baidu.com (123.125.115.164)
  • hm.baidu.com (61.135.185.140)
  • pos.baidu.com (115.239.210.141)
  • cpro.baidu.com (115.239.211.17)
  • bdimg.share.baidu.com (211.90.25.48)
  • pan.baidu.com (180.149.132.99)
  • wapbaike.baidu.com (123.125.114.15)

The sizes of the injected Javascript payloads ranged from 995 to 1325 bytes.

We hope this report helps to round out the overall facts known about this attack. It also demonstrates that collectively there is a lot of visibility into what happens on the web. At the HTTP level seen by Safe Browsing, we cannot confidently attribute this attack to anyone. However, it makes it clear that hiding such attacks from detailed analysis after the fact is difficult.

Had the entire web already moved to encrypted traffic via TLS, such an injection attack would not have been possible. This provides further motivation for transitioning the web to encrypted and integrity-protected communication. Unfortunately, defending against such an attack is not easy for website operators. In this case, the attack Javascript requests web resources sequentially, so slowing down responses might have helped reduce the overall attack traffic. Another hope is that the external visibility of this attack will serve as a deterrent in the future.




Since 2008 we’ve been working to make sure all of our services use strong HTTPS encryption by default. That means people using products like Search, Gmail, YouTube, and Drive will automatically have an encrypted connection to Google. In addition to providing a secure connection on our own products, we’ve been big proponents of the idea of “HTTPS Everywhere,” encouraging webmasters to prevent and fix security breaches on their sites, and using HTTPS as a signal in our search ranking algorithm.

This year, we’re working to bring this “HTTPS Everywhere” mission to our ads products as well, to support all of our advertiser and publisher partners. Here are some of the specific initiatives we’re working on:

  • We’ve moved all YouTube ads to HTTPS as of the end of 2014.
  • Search on Google.com is already encrypted for a vast majority of users and we are working towards encrypting search ads across our systems.
  • By June 30, 2015, the vast majority of mobile, video, and desktop display ads served to the Google Display Network, AdMob, and DoubleClick publishers will be encrypted.
  • Also by June 30, 2015, advertisers using any of our buying platforms, including AdWords and DoubleClick, will be able to serve HTTPS-encrypted display ads to all HTTPS-enabled inventory.

Of course we’re not alone in this goal. By encrypting ads, the advertising industry can help make the internet a little safer for all users. Recently, the Interactive Advertising Bureau (IAB) published a call to action to adopt HTTPS ads, and many industry players are also working to meet HTTPS requirements. We’re big supporters of these industry-wide efforts to make HTTPS everywhere a reality.

Our HTTPS Everywhere ads initiatives will join some of our other efforts to provide a great ads experience online for our users, like “Why this Ad?”, “Mute This Ad” and TrueView skippable ads. With these security changes to our ads systems, we’re one step closer to ensuring users everywhere are safe and secure every time they choose to watch a video, map out a trip in a new city, or open their favorite app.