
Webmaster level: advanced

Recently the Google Public DNS team, in collaboration with Akamai, reached an important milestone: Google Public DNS now propagates client location information to Akamai nameservers. This effort significantly improves the accuracy of approximately 30% of the location-sensitive DNS responses returned by Google Public DNS. In other words, client requests to Akamai hosted content can be routed to closer servers with lower latency and greater data transfer throughput. Overall, Google Public DNS resolvers serve 400 billion responses per day and more than 50% of them are location-sensitive.

DNS is often used by Content Distribution Networks (CDNs) such as Akamai to achieve location-based load balancing by constructing responses based on clients’ IP addresses. However, CDNs usually see the DNS resolver’s IP address instead of the actual client’s, and are therefore forced to assume that the resolver is close to the client. Unfortunately, that assumption does not always hold: many resolvers, especially those open to the Internet at large, cannot be deployed in every client’s local network.

To solve this issue, a group of DNS and content providers, including Google, proposed an approach to allow resolvers to forward the client’s subnet to CDN nameservers in an extension field in the DNS request. The subnet is a portion of the client’s IP address, truncated to preserve privacy. The approach is officially named edns-client-subnet or ECS.
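
To see ECS in action, you can attach the option to a query yourself. The sketch below uses the third-party dnspython library (an assumption on our part; any EDNS-capable client works) to ask Google Public DNS for a record while passing a hypothetical, truncated client subnet:

import dns.edns
import dns.message
import dns.query

# Hypothetical client subnet, truncated to a /24 to preserve privacy.
ecs_option = dns.edns.ECSOption('198.51.100.0', srclen=24)

# Build an EDNS query carrying the ECS option and send it to Google Public DNS.
query = dns.message.make_query('www.example.com', 'A', use_edns=0, options=[ecs_option])
response = dns.query.udp(query, '8.8.8.8', timeout=5)

for rrset in response.answer:
    print(rrset)

# An ECS-aware nameserver echoes the option back with the scope it applied.
for option in response.options:
    if isinstance(option, dns.edns.ECSOption):
        print('ECS scope in response:', option)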

This solution requires that both resolvers and CDNs adopt the new DNS extension. Google Public DNS resolvers automatically probe to discover ECS-aware nameservers and have observed the footprint of ECS support from CDNs expanding steadily over the past years. By now, more than 4000 nameservers from approximately 300 content providers support ECS. The Google-Akamai collaboration marks a significant milestone in our ongoing efforts to ensure DNS contributes to keeping the Internet fast. We encourage more CDNs to join us by supporting the ECS option.

For more information about Google Public DNS, please visit our website. For CDN operators, please also visit “A Faster Internet” for more technical details.


Webmaster Level: intermediate to advanced

App deep links are the new kid on the block in organic search, and they’re picking up speed faster than you can say “schema.org ViewAction”! For signed-in users, 15% of Google searches on Android now return deep links to apps through App Indexing. And over just the past quarter, we've seen the number of clicks on app deep links jump by 10x.

We’ve received a lot of feedback from developers since we opened up App Indexing back in June, and we’ve seen many implementations done right as well as others that were good learning experiences. We’d like to share with you four key steps to monitor app performance and drive user engagement:

1. Give your app developer access to Webmaster Tools

App indexing is a team effort between you (as a webmaster) and your app development team. We show information in Webmaster Tools that is key for your app developers to do their job well. Here’s what’s available right now:

  • Errors in indexed pages within apps
  • Weekly clicks and impressions from app deep links via Google Search
  • Stats on your sitemap (if that’s how you implemented the app deep links)
...and we plan to add a lot more in the coming months!

We’ve noticed that very few developers have access to Webmaster Tools. So if you want your app development team to get all of the information they need to fix app-related issues, it’s essential for them to have access to Webmaster Tools.

Any verified site owner can add a new user. Pick restricted or full permissions, depending on the level of access you’d like to give:

2. Understand how your app is doing in search results

How are users engaging with your app from search results? We’ve introduced two new ways for you to track performance for your app deep links:

  • We now send a weekly clicks and impressions update to the Message center in your Webmaster Tools account.
  • You can now track how much traffic app deep links drive to your app using referrer information - specifically, the referrer extra in the ACTION_VIEW intent. We're working to integrate this information with Google Analytics for even easier access. Learn how to track referrer information on our Developer site.

3. Make sure key app resources can be crawled

Blocked resources are one of the top reasons for the “content mismatch” errors you see in Webmaster Tools’ Crawl Errors report. We need access to all the resources necessary to render your app page. This allows us to assess whether your associated web page has the same content as your app page.

To help you find and fix these issues, we now show you the specific resources we can’t access that are critical for rendering your app page. If you see a content mismatch error for your app, look out for the list of blocked resources in “Step 5” of the details dialog:

4. Watch out for Android App errors

To help you identify errors when indexing your app, we’ll send you messages for all app errors we detect, and will also display most of them in the “Android apps” tab of the Crawl errors report.

In addition to the currently available “Content mismatch” and “Intent URI not supported” error alerts, we’re introducing three new error types:

  • APK not found: we can’t find the package corresponding to the app.
  • No first-click free: the link to your app does not lead directly to the content, but requires login to access.
  • Back button violation: after following the link to your app, the back button did not return to search results.

In our experience, most errors are caused by a general setting in your app (e.g. a blocked resource, or a region picker that pops up when the user tries to open the app from search). Taking care of that setting generally resolves the error for all affected URIs.

Good luck in the pursuit of appiness! As always, if you have questions, feel free to drop by our Webmaster help forum.

reCAPTCHA protects the websites you love from spam and abuse. So, when you go online—say, for some last-minute holiday shopping—you won't be competing with robots and abusive scripts to access sites. For years, we’ve prompted users to confirm they aren’t robots by asking them to read distorted text and type it into a box, like this:
But, we figured it would be easier to just directly ask our users whether or not they are robots—so, we did! We’ve begun rolling out a new API that radically simplifies the reCAPTCHA experience. We’re calling it the “No CAPTCHA reCAPTCHA” and this is how it looks:
On websites using this new API, a significant number of users will be able to securely and easily verify they’re human without actually having to solve a CAPTCHA. Instead, with just a single click, they’ll confirm they are not a robot.
A brief history of CAPTCHAs 

While the new reCAPTCHA API may sound simple, there is a high degree of sophistication behind that modest checkbox. CAPTCHAs have long relied on the inability of robots to solve distorted text. However, our research recently showed that today’s Artificial Intelligence technology can solve even the most difficult variant of distorted text at 99.8% accuracy. Thus distorted text, on its own, is no longer a dependable test.

To counter this, last year we developed an Advanced Risk Analysis backend for reCAPTCHA that actively considers a user’s entire engagement with the CAPTCHA—before, during, and after—to determine whether that user is a human. This enables us to rely less on typing distorted text and, in turn, offer a better experience for users.  We talked about this in our Valentine’s Day post earlier this year.

The new API is the next step in this steady evolution. Now, humans can just check the box and in most cases, they’re through the challenge.

Are you sure you’re not a robot?

However, CAPTCHAs aren't going away just yet. In cases when the risk analysis engine can't confidently predict whether a user is a human or an abusive agent, it will prompt a CAPTCHA to elicit more cues, increasing the number of security checkpoints to confirm the user is valid.
Making reCAPTCHAs mobile-friendly

This new API also lets us experiment with new types of challenges that are easier for us humans to use, particularly on mobile devices. In the example below, you can see a CAPTCHA based on a classic Computer Vision problem of image labeling. In this version of the CAPTCHA challenge, you’re asked to select all of the images that correspond with the clue. It's much easier to tap photos of cats or turkeys than to tediously type a line of distorted text on your phone.
Adopting the new API on your site

As more websites adopt the new API, more people will see "No CAPTCHA reCAPTCHAs".  Early adopters, like Snapchat, WordPress, Humble Bundle, and several others are already seeing great results with this new API. For example, in the last week, more than 60% of WordPress’ traffic and more than 80% of Humble Bundle’s traffic on reCAPTCHA encountered the No CAPTCHA experience—users got to these sites faster. To adopt the new reCAPTCHA for your website, visit our site to learn more.
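
Whichever experience your users get, your server still needs to verify the response token the widget adds to your form. Here's a minimal server-side sketch, assuming the third-party requests library (see the reCAPTCHA developer docs for the full API):

import requests

VERIFY_URL = 'https://www.google.com/recaptcha/api/siteverify'

def is_human(token, secret_key, remote_ip=None):
    """Verify the g-recaptcha-response token submitted with a form."""
    payload = {'secret': secret_key, 'response': token}
    if remote_ip:
        payload['remoteip'] = remote_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=10).json()
    return result.get('success', False)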

Humans, we'll continue our work to keep the Internet safe and easy to use. Abusive bots and scripts, it’ll only get worse—sorry we’re (still) not sorry.


Webmaster level: All

Have you ever tapped on a Google Search result on your mobile phone, only to find yourself looking at a page where the text was too small, the links were tiny, and you had to scroll sideways to see all the content? This usually happens when the website has not been optimized to be viewed on a mobile phone.

This can be a frustrating experience for our mobile searchers. Starting today, to make it easier for people to find the information that they’re looking for, we’re adding a “mobile-friendly” label to our mobile search results.
Mobile-friendly search result
This change will be rolling out globally over the next few weeks. A page is eligible for the “mobile-friendly” label if it meets the following criteria as detected by Googlebot:
  • Avoids software that is not common on mobile devices, like Flash
  • Uses text that is readable without zooming
  • Sizes content to the screen so users don't have to scroll horizontally or zoom
  • Places links far enough apart so that the correct one can be easily tapped
If you want to make sure that your page meets the mobile-friendly criteria:
  • Check your pages with the Mobile-Friendly Test
  • Read our documentation on how to create and improve mobile sites
  • See the Mobile Usability report in Webmaster Tools, which highlights major mobile usability issues across your entire site
The tools and documentation above are currently available in English. They will be available in additional languages within the next few weeks.

We see these labels as a first step in helping mobile users to have a better mobile web experience. We are also experimenting with using the mobile-friendly criteria as a ranking signal.

If you have any questions or want to help others make mobile-friendly sites, visit our Webmaster Help Forum. We hope to see many more mobile-friendly websites in the future. Let’s make the web better for all users!

Webmaster Level: intermediate

Mobile is growing at a fantastic pace - in usage, not just in screen size. To keep you informed of issues mobile users might be seeing across your website, we've added the Mobile Usability feature to Webmaster Tools.

The new feature shows mobile usability issues we’ve identified across your website, complete with graphs over time so that you see the progress that you've made.

A mobile-friendly site is one that you can easily read & use on a smartphone, by only having to scroll up or down. Swiping left/right to search for content, zooming to read text and use UI elements, or not being able to see the content at all make a site harder to use for users on mobile phones. To help, the Mobile Usability reports show the following issues: Flash content, missing viewport (a critical meta-tag for mobile pages), tiny fonts, fixed-width viewports, content not sized to viewport, and clickable links/buttons too close to each other.

We strongly recommend you take a look at these issues in Webmaster Tools, and think about how they might be resolved; sometimes it's just a matter of tweaking your site's template! More information on how to make a great mobile-friendly website can be found in our Web Fundamentals website (with more information to come soon).

If you have any questions, feel free to join us in our webmaster help forums (on your phone too)!

Webmaster level: All

We recently announced that our indexing system has been rendering web pages more like a typical modern browser, with CSS and JavaScript turned on. Today, we're updating one of our technical Webmaster Guidelines in light of this announcement.

For optimal rendering and indexing, our new guideline specifies that you should allow Googlebot access to the JavaScript, CSS, and image files that your pages use. Disallowing crawling of JavaScript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content, and can result in suboptimal rankings.

Updated advice for optimal indexing

Historically, Google indexing systems resembled old text-only browsers, such as Lynx, and that’s what our Webmaster Guidelines said. Now, with indexing based on page rendering, it's no longer accurate to see our indexing systems as a text-only browser. Instead, a more accurate approximation is a modern web browser. With that new perspective, keep the following in mind:

  • Just like modern browsers, our rendering engine might not support all of the technologies a page uses. Make sure your web design adheres to the principles of progressive enhancement as this helps our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported.
  • Pages that render quickly not only help users get to your content more easily, but also make indexing of those pages more efficient. We advise you to follow best practices for page performance optimization, such as eliminating unnecessary downloads and minifying, concatenating, and compressing your CSS and JavaScript files.
  • Make sure your server can handle the additional load for serving of JavaScript and CSS files to Googlebot.

Testing and troubleshooting

In conjunction with the launch of our rendering-based indexing, we also updated the Fetch and Render as Google feature in Webmaster Tools so webmasters could see how our systems render the page. With it, you'll be able to identify a number of indexing issues: improper robots.txt restrictions, redirects that Googlebot cannot follow, and more.

And, as always, if you have any comments or questions, please ask in our Webmaster Help forum.

Webmaster level: intermediate-advanced

Submitting sitemaps can be an important part of optimizing websites. Sitemaps enable search engines to discover all pages on a site and to download them quickly when they change. This blog post explains which fields in sitemaps are important, when to use XML sitemaps and RSS/Atom feeds, and how to optimize them for Google.

Sitemaps and feeds

Sitemaps can be in XML sitemap, RSS, or Atom formats. The important difference between these formats is that XML sitemaps describe the whole set of URLs within a site, while RSS/Atom feeds describe recent changes. This has important implications:

  • XML sitemaps are usually large; RSS/Atom feeds are small, containing only the most recent updates to your site.
  • XML sitemaps are downloaded less frequently than RSS/Atom feeds.

For optimal crawling, we recommend using both XML sitemaps and RSS/Atom feeds. XML sitemaps will give Google information about all of the pages on your site. RSS/Atom feeds will provide all updates on your site, helping Google to keep your content fresher in its index. Note that submitting sitemaps or feeds does not guarantee the indexing of those URLs.

Example of an XML sitemap:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
 <url>
   <loc>http://example.com/mypage</loc>
   <lastmod>2011-06-27T19:34:00+01:00</lastmod>
   <!-- optional additional tags -->
 </url>
 <url>
   ...
 </url>
</urlset>

Example of an RSS feed:

<?xml version="1.0" encoding="utf-8"?>
<rss>
 <channel>
   <!-- other tags -->
   <item>
     <!-- other tags -->
     <link>http://example.com/mypage</link>
     <pubDate>Mon, 27 Jun 2011 19:34:00 +0100</pubDate>
   </item>
   <item>
     ...
   </item>
 </channel>
</rss>

Example of an Atom feed:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
 <!-- other tags -->
 <entry>
   <link href="http://example.com/mypage" />
   <updated>2011-06-27T19:34:00+01:00</updated>
   <!-- other tags -->
 </entry>
 <entry>
   ...
 </entry>
</feed>

“Other tags” refers to tags that are optional or required by their respective standards. We recommend that you specify the required tags for Atom/RSS, as they will help your content appear on other properties that might use these feeds, in addition to Google Search.

Best practices

Important fields

XML sitemaps and RSS/Atom feeds are, at their core, lists of URLs with metadata attached to them. The two most important pieces of information for Google are the URL itself and its last modification time:

URLs

URLs in XML sitemaps and RSS/Atom feeds should adhere to the following guidelines:

  • Only include URLs that can be fetched by Googlebot. A common mistake is including URLs disallowed by robots.txt (which cannot be fetched by Googlebot) or URLs of pages that don't exist.
  • Only include canonical URLs. A common mistake is to include URLs of duplicate pages. This increases the load on your server without improving indexing.
Last modification time

Specify a last modification time for each URL in an XML sitemap and RSS/Atom feed. The last modification time should be the last time the content of the page changed meaningfully. If a change is meant to be visible in the search results, then the last modification time should be the time of this change.

  • XML sitemaps use <lastmod>
  • RSS uses <pubDate>
  • Atom uses <updated>

Be sure to set or update last modification time correctly:

  • Specify the time in the correct format: W3C Datetime for XML sitemaps, RFC3339 for Atom and RFC822 for RSS.
  • Only update modification time when the content changed meaningfully.
  • Don’t set the last modification time to the current time whenever the sitemap or feed is served.
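
As a quick illustration, here's how the example timestamp used in this post can be produced in each required format with the Python standard library (a minimal sketch):

from datetime import datetime, timezone, timedelta
from email.utils import format_datetime

last_modified = datetime(2011, 6, 27, 19, 34, 0,
                         tzinfo=timezone(timedelta(hours=1)))

# W3C Datetime / RFC 3339, for <lastmod> in XML sitemaps and <updated> in Atom
print(last_modified.isoformat())       # 2011-06-27T19:34:00+01:00

# RFC 822, for <pubDate> in RSS
print(format_datetime(last_modified))  # Mon, 27 Jun 2011 19:34:00 +0100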

XML sitemaps

XML sitemaps should contain URLs of all pages on your site. They are often large and update infrequently. Follow these guidelines:

  • For a single XML sitemap: update it at least once a day (if your site changes regularly) and ping Google after you update it.
  • For a set of XML sitemaps: maximize the number of URLs in each XML sitemap. The limit is 50,000 URLs or a maximum size of 10MB uncompressed, whichever is reached first. Ping Google for each XML sitemap (or once for the sitemap index, if you use one) every time it is updated. A common mistake is to put only a handful of URLs into each XML sitemap file, which usually makes it harder for Google to download all of these XML sitemaps in a reasonable time.
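
Pinging Google is a plain HTTP GET request to the sitemap ping endpoint with your sitemap (or sitemap index) URL as a parameter. A minimal sketch using only the Python standard library:

from urllib.parse import quote
from urllib.request import urlopen

def ping_google(sitemap_url):
    ping_url = 'http://www.google.com/ping?sitemap=' + quote(sitemap_url, safe='')
    with urlopen(ping_url) as response:
        return response.status  # 200 means the ping was received

print(ping_google('http://example.com/sitemap-index.xml'))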

RSS/Atom

RSS/Atom feeds should convey recent updates of your site. They are usually small and updated frequently. For these feeds, we recommend:

  • When a new page is added or an existing page meaningfully changed, add the URL and the modification time to the feed.
  • In order for Google to not miss updates, the RSS/Atom feed should have all updates in it since at least the last time Google downloaded it. The best way to achieve this is by using PubSubHubbub. The hub will propagate the content of your feed to all interested parties (RSS readers, search engines, etc.) in the fastest and most efficient way possible.
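
Notifying a hub that your feed has changed is a single POST request. Here's a minimal sketch assuming the requests library; the hub URL shown is Google's public hub and is only an example:

import requests

HUB_URL = 'https://pubsubhubbub.appspot.com/'

def notify_hub(feed_url):
    response = requests.post(HUB_URL, data={'hub.mode': 'publish',
                                            'hub.url': feed_url})
    # Hubs acknowledge an accepted publish ping with 204 No Content.
    return response.status_code == 204

print(notify_hub('http://example.com/feed.atom'))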

Generating both XML sitemaps and Atom/RSS feeds is a great way to optimize crawling of a site for Google and other search engines. The key information in these files is the canonical URL and the time of the last modification of pages within the website. Setting these properly, and notifying Google and other search engines through sitemaps pings and PubSubHubbub, will allow your website to be crawled optimally, and represented accordingly in search results.

If you have any questions, feel free to post them here, or to join other webmasters in the webmaster help forum section on sitemaps.

Webmaster Level: Beginner

“Hey, how do I get my business on the web?” Having worked at Google for nine years, if I had a penny for every time someone asked me that question… :) To answer, today we’re releasing a short video series (30 minutes total!), sharing the same advice we’d give to our friends and family. It’s the advice I’d give to my sister, Marnie, who owns a jewelry store, or my cousin, Scott, who works as a realtor. Video spoiler alert: You won’t need to make a website, but you definitely need a way for your local business to reach potential customers using their mobile phones, tablets, or desktop computers.

Video series to help local business owners of all technical levels to get their business found on the web. It focuses on the benefits of creating a Yelp business page, Facebook page, Google+ page, etc.

The great thing about video is that you can pause at any time and work at your own pace. Next time you hear the question: “How do I get my business on Google?”, please share the link and let's get more local businesses online!

Series: Build an online presence for your local business

Video #1: Introduction and hot topics (3:22)
Meet my sister, Marnie, who owns a jewelry store and my cousin, Scott, who works as a realtor. Follow them as we talk about the big changes in the last decade, such as making sure your business can reach customers at work, home, or on-the-go using their mobile phones.
Video #2: Determine your business’ value-add and online goal (4:08)
With the example of Scott, the realtor, you’ll learn about the marketing funnel, setting an online goal, and highlighting what makes your business special.
Video #3: Find potential customers (7:41)
Marnie and Scott figure out their customers’ most common journeys to reach their business. We'll use their examples to brainstorm how you can reach customers on review sites, through search engines, maps apps, and social and professional networking sites.
Video #4: Basic implementation and best practices (5:23)
The fundamentals and best practices to take your business from offline to online!
Video #5: Differentiate your business from the competition (5:09)
With Scott’s business as a realtor, see how to demonstrate that your local business is the best choice for customers by adding photos, videos, and getting reviews.
Video #6: Engage customers with a holistic online identity (4:51)
We'll end the series by showing how Scott makes sure his online presence sends a cohesive message to customers and answers all their common questions. :)

Webmaster level: advanced

Over the summer the Webmaster Tools team has been cooking up an update to the Webmaster Tools API. The new API is consistent with other Google APIs, makes it easier to authenticate for apps or web-services, and provides access to some of the main features of Webmaster Tools.

If you've used other Google APIs, getting started with the new Webmaster Tools API will be easy! We have examples for Python, Java, as well as OACurl (for fans of command lines).

This API allows you to:

  • list, add, or remove sites from your account (you can currently have up to 500 sites in your account)
  • list, add, or remove sitemaps for your websites
  • get warning, error, and indexed counts for individual sitemaps
  • get a time-series of all kinds of crawl errors for your site
  • list crawl error samples for specific types of errors
  • mark individual crawl errors as "fixed" (this doesn't change how they're processed, but can help simplify the UI for you)
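
For instance, listing the sites and sitemaps in an account takes only a few lines with the Python client library. Here's a minimal sketch; it assumes you've already completed the OAuth flow from our samples and have an authorized http object:

from googleapiclient.discovery import build

def show_sites_and_sitemaps(http):
    webmasters = build('webmasters', 'v3', http=http)

    # List the verified sites in the account.
    sites = webmasters.sites().list().execute()
    for site in sites.get('siteEntry', []):
        print(site['siteUrl'], site['permissionLevel'])

    # List the sitemaps submitted for one site.
    sitemaps = webmasters.sitemaps().list(siteUrl='http://www.example.com/').execute()
    for sitemap in sitemaps.get('sitemap', []):
        print(sitemap['path'], sitemap.get('lastSubmitted'))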

We'd love to see what you're building with our APIs! Feel free to link to your projects in the comments below. Should you have any questions about the usage of the API, feel free to post in our help forum as well.


Webmaster level: Beginner

Today, the new Webmaster Academy goes live in 22 languages! New or beginner webmasters speaking a multitude of languages can now learn the fundamentals of making a great site, providing an enjoyable user experience, and ranking well in search results. And if you think you’re already familiar with these topics, take the quizzes at the end of each module to prove it :).

So give Webmaster Academy a read in your preferred language and let us know in the comments or help forum what you think. We’ve gotten such great and helpful feedback after the English version launched this past March so we hope this straightforward and easy-to-read guide can be helpful (and fun!) to everyone.

Let’s get great sites and searchable content up and running around the world.

Webmaster level: All
Today you’ll see a new and improved sitelinks search box. When shown, it will make it easier for users to reach specific content on your site, directly through your own site-search pages.

What’s this search box and when does it appear for my site?

When users search for a company by name—for example, [Megadodo Publications] or [Dunder Mifflin]—they may actually be looking for something specific on that website. In the past, when our algorithms recognized this, they'd display a larger set of sitelinks and an additional search box below that search result, which let users do site: searches over the site straight from the results, for example [site:example.com hitchhiker guides].
This search box is now more prominent (above the sitelinks), supports Autocomplete, and—if you use the right markup—will send the user directly to your website's own search pages.

How can I mark up my site?

You need to have a working site-specific search engine for your site. If you already have one, you can let us know by marking up your homepage as a schema.org/WebSite entity with the potentialAction property of the schema.org/SearchAction markup. You can use JSON-LD, microdata, or RDFa to do this; check out the full implementation details on our developer site.
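
For example, a minimal JSON-LD sketch of that markup could look like this (using the same {searchTerm} placeholder convention as the update at the end of this post; see our developer site for the authoritative reference):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "http://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "http://www.example.com/search?q={searchTerm}",
    "query-input": "required name=searchTerm"
  }
}
</script>
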
If you implement the markup on your site, users will have the ability to jump directly from the sitelinks search box to your site’s search results page. If we don’t find any markup, we’ll show them a Google search results page for the corresponding site: query, as we’ve done until now.
As always, if you have questions, feel free to ask in our Webmaster Help forum.

Update (16:30h CET, September 12th): We're noticing an enthusiastic uptick in the markup implementation after the initial announcement last week! Here are the two main issues we've observed so far, and what you need to do to fix them:

  1. Make sure that when you replace the curly braces (and everything inside them) with a search term, the result is a valid URL on your site.
    For example: if your "target" value is "http://www.example.com/search?q={searchTerm}", ensure that "http://www.example.com/search?q=foo" and "http://www.example.com/search?q=bar" both lead to search result pages about "foo" and "bar".
  2. Make sure that the "query-input" field points to the same string that's inside the curly braces in the "target" field.
    For example: if your "target" value is "http://www.example.com/search?q={searchTerm}", you must use "searchTerm" as the "name" within "query-input".
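
A quick way to check both points is to substitute a couple of test terms into your "target" template and fetch the resulting URLs, for example with this minimal Python sketch (assuming the requests library):

import requests

target = 'http://www.example.com/search?q={searchTerm}'
query_input_name = 'searchTerm'  # must match the placeholder inside the curly braces

placeholder = '{' + query_input_name + '}'
assert placeholder in target, '"query-input" name does not match the "target" placeholder'

for term in ('foo', 'bar'):
    url = target.replace(placeholder, term)
    response = requests.get(url, timeout=10)
    # Expect a 200 response showing search results for the substituted term.
    print(url, response.status_code)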

Webmaster level: advanced

Everyone wants to use less bandwidth: hosts want lower bills, mobile users want to stay under their limits, and no one wants to wait for unnecessary bytes. The web is full of opportunities to save bandwidth: pages served without gzip, stylesheets and JavaScript served unminified, and unoptimized images, just to name a few.

So why isn't the web already optimized for bandwidth? If these savings are good for everyone then why haven't they been fixed yet? Mostly it's just been too much hassle. Web designers are encouraged to "save for web" when exporting their artwork, but they don't always remember.  JavaScript programmers don't like working with minified code because it makes debugging harder. You can set up a custom pipeline that makes sure each of these optimizations is applied to your site every time as part of your development or deployment process, but that's a lot of work.

An easy solution for web users is to use an optimizing proxy, like Chrome's. When users opt into this service their HTTP traffic goes via Google's proxy, which optimizes their page loads and cuts bandwidth usage by 50%.  While this is great for these users, it's limited to people using Chrome who turn the feature on and it can't optimize HTTPS traffic.

With Optimize for Bandwidth, the PageSpeed team is bringing this same technology to webmasters so that everyone can benefit: users of other browsers, secure sites, desktop users, and site owners who want to bring down their outbound traffic bills. Just install the PageSpeed module on your Apache or Nginx server [1], turn on Optimize for Bandwidth in your configuration, and PageSpeed will do the rest.

If you later decide you’re interested in PageSpeed’s more advanced optimizations, from cache extension and inlining to the more aggressive image lazy-loading and JavaScript deferral, it’s just a matter of enabling them in your PageSpeed configuration.

Learn more about installing PageSpeed or enabling Optimize for Bandwidth.





[1] If you're using a different web server, consider running PageSpeed on an Apache or Nginx proxy.  And it's all open source, with porting efforts underway for IIS, ATS, and others.

Webmaster level: All

This June, we introduced a weeklong social campaign called #NoHacked. The goals for #NoHacked are to bring awareness to hacking attacks and offer tips on how to keep your sites safe from hackers.

We held the campaign in 11 languages on multiple channels including Google+, Twitter and Weibo. About 1 million people viewed our tips and hundreds of users used the hashtag #NoHacked to spread awareness and to share their own tips. Check them out below!

Posts we shared during the campaign:


Some of the many tips shared by users across the globe:
  • Pablo Silvio Esquivel from Brazil recommends users not to use pirated software (source)
  • Rens Blom from the Netherlands suggests using different passwords for your accounts, changing them regularly, and using an extra layer of security such as two-step authentication (source)
  • Дмитрий Комягин from Russia says to regularly monitor traffic sources, search queries and landing pages, and to look out for spikes in traffic (source)
  • 工務店コンサルタント from Japan advises everyone to choose a good hosting company that's knowledgeable in hacking issues and to set email forwarding in Webmaster Tools (source)
  • Kamil Guzdek from Poland advocates changing the default table prefix in wp-config to a custom one when installing a new WordPress site to lower the risk of the database being hacked (source)

Hacking is still a surprisingly common issue around the world so we highly encourage all webmasters to follow these useful tips. Feel free to continue using the hashtag #NoHacked to share your own tips or experiences around hacking prevention and awareness. Thanks for supporting the #NoHacked campaign!

And in the unfortunate event that your site gets hacked, we’ll help you toward a speedy and thorough recovery.

Webmaster level: all

Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Google Drive, for example, automatically have a secure connection to Google.

Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites.

We want to go even further. At Google I/O a few months ago, we called for “HTTPS everywhere” on the web.

We’ve also seen more and more webmasters adopting HTTPS (also known as HTTP over TLS, or Transport Layer Security), on their website, which is encouraging.

For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We've seen positive results, so we're starting to use HTTPS as a ranking signal. For now it's only a very lightweight signal — affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content — while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.




In the coming weeks, we’ll publish detailed best practices (it's in our help center now) to make TLS adoption easier, and to avoid common mistakes. Here are some basic tips to get started:

  • Decide the kind of certificate you need: single, multi-domain, or wildcard certificate
  • Use 2048-bit key certificates
  • Use relative URLs for resources that reside on the same secure domain
  • Use protocol relative URLs for all other domains
  • Check out our Site move article for more guidelines on how to change your website’s address
  • Don’t block your HTTPS site from crawling using robots.txt
  • Allow indexing of your pages by search engines where possible. Avoid the noindex robots meta tag.

If your website is already serving on HTTPS, you can test its security level and configuration with the Qualys Lab tool. If you are concerned about TLS and your site’s performance, have a look at Is TLS fast yet?. And of course, if you have any questions or concerns, please feel free to post in our Webmaster Help Forums.
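
For a quick spot check from the command line, the Python standard library can tell you which protocol version a host negotiates and when its certificate expires (a minimal sketch; the Qualys tool above gives a far more complete assessment):

import socket
import ssl

host = 'www.example.com'
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print('Negotiated protocol:', tls.version())
        cert = tls.getpeercert()
        print('Certificate expires:', cert['notAfter'])
        print('Subject:', dict(pair[0] for pair in cert['subject']))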

We hope to see more websites using HTTPS in the future. Let’s all make the web more secure!

(Cross-posted on the Google News Blog)

Webmaster level: All

UPDATE: Great News -- The Publisher Center is now available in all countries where Google News has an edition.

If you're a news publisher, your website has probably evolved and changed over time -- just like your stories. But in the past, when you made changes to the structure of your site, we might not have discovered your new content. That meant a lost opportunity for your readers, and for you. Unless you regularly checked Webmaster Tools, you might not even have realized that your new content wasn’t showing up in Google News. To prevent this from happening, we are letting you make changes to our record of your news site using the just launched Google News Publisher Center.

With the Publisher Center, your potential readers can be more informed about the articles they’re clicking on and you benefit from better discovery and classification of your news content. After verifying ownership of your site using Google Webmaster Tools, you can use the Publisher Center to directly make the following changes:

  • Update your news site details, including changing your site name and labeling your publication with any relevant source labels (e.g., “Blog”, “Satire” or “Opinion”)
  • Update your section URLs when you change your site structure (e.g., when you add a new section such as http://example.com/2014commonwealthgames or http://example.com/elections2014)
  • Label your sections with a specific topic (e.g., “Technology” or “Politics”)

Whenever you make changes to your site, we’d recommend also checking our record of it in the Publisher Center and updating it if necessary.

Try it out, or learn more about how to get started.

At the moment the tool is only available to publishers in the U.S. but we plan to introduce it in other countries soon and add more features.  In the meantime, we’d love to hear from you about what works well and what doesn’t. Ultimately, our goal is to make this a platform where news publishers and Google News can work together to provide readers with the best, most diverse news on the web.

Webmaster level: intermediate-advanced

To crawl, or not to crawl, that is the robots.txt question.

Making and maintaining correct robots.txt files can sometimes be difficult. While most sites have it easy (tip: they often don't even need a robots.txt file!), finding the directives within a large robots.txt file that are or were blocking individual URLs can be quite tricky. To make that easier, we're now announcing an updated robots.txt testing tool in Webmaster Tools.

You can find the updated testing tool in Webmaster Tools within the Crawl section:

Here you'll see the current robots.txt file, and can test new URLs to see whether they're disallowed for crawling. To guide your way through complicated directives, the tool highlights the specific one that led to the final decision. You can make changes in the file and test those too; you'll just need to upload the new version of the file to your server afterwards to make the changes take effect. Our developers site has more about robots.txt directives and how the files are processed.
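
If you prefer to script such checks, the Python standard library ships a robots.txt parser that answers the same basic question, though it doesn't replicate every detail of Googlebot's parsing (for example, wildcard handling). A minimal sketch:

from urllib.robotparser import RobotFileParser

parser = RobotFileParser('http://example.com/robots.txt')
parser.read()

for url in ('http://example.com/',
            'http://example.com/assets/site.css',
            'http://example.com/scripts/app.js'):
    verdict = 'allowed' if parser.can_fetch('Googlebot', url) else 'blocked'
    print(url, verdict)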

Additionally, you'll be able to review older versions of your robots.txt file, and see when access issues block us from crawling. For example, if Googlebot sees a 500 server error for the robots.txt file, we'll generally pause further crawling of the website.

Since there may be some errors or warnings shown for your existing sites, we recommend double-checking their robots.txt files. You can also combine it with other parts of Webmaster Tools: for example, you might use the updated Fetch as Google tool to render important pages on your website. If any blocked URLs are reported, you can use this robots.txt tester to find the directive that's blocking them, and, of course, then improve that. A common problem we've seen comes from old robots.txt files that block CSS, JavaScript, or mobile content — fixing that is often trivial once you've seen it.

We hope this updated tool makes it easier for you to test & maintain the robots.txt file. Should you have any questions, or need help with crafting a good set of directives, feel free to drop by our webmaster's help forum!

Webmaster level: all
A common annoyance for web users is when websites require browser technologies that are not supported by their device. When users access such pages, they may see nothing but a blank space or miss out on a large portion of the page's contents.
Starting today in our English search results in the US, we will indicate to searchers when our algorithms detect pages that may not work on their devices. For example, Adobe Flash is not supported on iOS devices or on Android versions 4.1 and higher, and a page whose contents are mostly Flash may be noted like this:
This search label has been deprecated. 

Developing modern multi-device websites

Fortunately, making websites that work on all modern devices is not that hard: websites can use HTML5 since it is universally supported, sometimes exclusively, by all devices. To help webmasters build websites that work on all types of devices regardless of the type of content they wish to serve, we recently announced two resources:
  • Web Fundamentals: a curated source for modern best practices.
  • Web Starter Kit: a starter framework supporting the Web Fundamentals best practices out of the box.
By following the best practices described in Web Fundamentals you can build a responsive web design, which has long been Google's recommendation for search-friendly sites. Be sure not to block Googlebot from crawling any of the page assets (CSS, JavaScript, and images), whether via robots.txt or otherwise. Being able to access these external files fully helps our algorithms detect your site's responsive web design configuration and treat it appropriately. You can use the Fetch and render as Google feature in Webmaster Tools to test how our indexing algorithms see your site.
As always, if you need more help you can ask a question in our webmaster forum.

If you are targeting users in more than one country, chances are you've already heard about rel-alternate-hreflang. If you haven't: in short, this annotation enables Google and other search engines to serve the correct language or regional version of pages to searchers, which can lead to increased user satisfaction.

Making sure the deployed annotations are usable by search engines can be rather difficult, especially on sites with many pages, and site owners all around the world haven't been shy about telling us so. Today we're releasing a feature that should make debugging rel-alternate-hreflang annotations much easier.

The Language Targeting section in the International Targeting feature enables you to identify two of the most common issues with hreflang annotations:
  • Missing return links: annotations must be confirmed from the pages they are pointing to. If page A links to page B, page B must link back to page A, otherwise the annotations may not be interpreted correctly.
    For each error of this kind we report where and when we detected them, as well as where the return link is expected to be.
[Screenshot: missing return links error details]

  • Incorrect hreflang values: The value of the hreflang attribute must either be a language code in ISO 639-1 format such as "es", or a combination of language and country code such as "es-AR", where the country code is in ISO 3166-1 Alpha 2 format.
    In case our indexing systems detect language or country codes that are not in these formats, we provide example URLs to help you fix them.

[Screenshot: incorrect hreflang values error details]
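
You can also spot-check return links yourself before the report flags them. The sketch below (Python standard library plus the requests library, and assuming the annotations live in <link> tags in the HTML) fetches a page's declared alternates and verifies that each one links back:

from html.parser import HTMLParser
import requests

class HreflangCollector(HTMLParser):
    """Collect href values of <link rel="alternate" hreflang="..."> tags."""
    def __init__(self):
        super().__init__()
        self.alternates = set()

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if (tag == 'link' and attributes.get('rel') == 'alternate'
                and 'hreflang' in attributes and 'href' in attributes):
            self.alternates.add(attributes['href'])

def hreflang_alternates(url):
    collector = HreflangCollector()
    collector.feed(requests.get(url, timeout=10).text)
    return collector.alternates

def check_return_links(page_url):
    for alternate in hreflang_alternates(page_url):
        if page_url not in hreflang_alternates(alternate):
            print('Missing return link:', alternate, 'does not link back to', page_url)

check_return_links('http://example.com/en/page')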

Additionally, we've moved the geographic targeting setting to this part of Webmaster Tools, so that you can find all information relevant to international and multilingual targeting in the same place.

We hope you'll find this new feature useful and that it helps you identify issues with the rel-alternate-hreflang implementation on your site. If you have comments or questions about the feature, please post in our Webmaster Help Forum.

Posted by Gary Illyes, Webmaster Trends

Webmaster level: All

Do you have an Android app in addition to your website? You can now connect the two so that users searching from their smartphones and tablets can easily find and reach your app content.

App deep links in search results help your users find your content more easily and re-engage with your app after they’ve installed it. As a site owner, you can show your users the right content at the right time: by connecting pages of your website to the relevant parts of your app, you control when your users are directed to your app and when they go to your website.


Hundreds of apps have already implemented app indexing. This week at Google I/O, we’re announcing a set of new features that will make it even easier to set up deep links in your app, connect your site to your app, and keep track of performance and potential errors.

Getting started is easy

We’ve greatly simplified the process to get your app deep links indexed. If your app supports HTTP deep linking schemes, here’s what you need to do:

  1. Add deep link support to your app
  2. Connect your site and your app
  3. There is no step 3 (:

As we index your URLs, we’ll discover and index the app / site connections and may begin to surface app deep links in search results.

We can discover and index your app deep links on our own, but we recommend you publish the deep links explicitly. This is also the case if your app only supports a custom deep link scheme. You can publish them either in your sitemap or via link elements in the HTML of the corresponding web pages.

There’s one more thing: we’ve added a new feature in Webmaster Tools to help you debug any issues that might arise during app indexing. It will show you what type of errors we’ve detected for the app page-web page pairs, together with example app URIs so you can debug:



We’ll also give you detailed instructions on how to debug each issue, including a QR code for the app deep links, so you can easily open them on your phone or tablet. We’ll send you Webmaster Tools error notifications as well, so you can keep up to date.



Give app indexing a spin, and as always, if you need more help ask questions on the Webmaster help forum.

Webmaster level: intermediate-advanced
Few topics confuse and scare webmasters more than site moves. To help you avoid surprises, we've created an in-depth guide on how to handle site moves in a Googlebot-friendly way. So what, exactly, is a site move and how do you go about moving a site correctly?

Basics of site moves

A site move is, broadly, one of two types of content migrations:
  • Site moves without URL changes. Only the underlying infrastructure serving the website is changed without any visible changes to the URL structure. For example, you might move www.example.com to a different hosting provider while keeping the same URLs and site structure on www.example.com.
  • Site moves with URL changes. Here, the URLs on the website change in any number of ways:
    • The protocol: http://www.example.com to https://www.example.com
    • The domain name: example.com to example.net
    • The URL paths: http://example.com/page.php?id=1 to http://example.com/widget
We've seen cases where webmasters implemented site moves incorrectly, or missed out steps that would have greatly increased the chances of the site move completing successfully. To help webmasters design and implement site moves correctly, we've updated the site move guidelines in our Help Center. In parallel, we continue to improve our crawling and indexing systems to detect and handle site moves if you follow our guidelines.
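
One simple sanity check during a move with URL changes is to confirm that each old URL returns a 301 to its exact new equivalent. Here's a minimal sketch, assuming the requests library and a hypothetical URL mapping:

import requests

url_mapping = {
    'http://example.com/page.php?id=1': 'http://example.com/widget',
    'http://www.example.com/': 'https://www.example.com/',
}

for old_url, expected in url_mapping.items():
    response = requests.head(old_url, allow_redirects=False, timeout=10)
    location = response.headers.get('Location')
    if response.status_code == 301 and location == expected:
        print('OK:', old_url, '->', location)
    else:
        print('Check:', old_url, 'returned', response.status_code, location)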

Moving to responsive web design

A related and increasingly common question we're seeing is how a website can move from having separate mobile URLs or dynamic serving to using responsive web design. To help you implement this configuration change, please see this new page on our smartphones recommendations site.
And, as always, please ask on our Webmaster Help forums if you have more questions.

Webmaster level: all

Have you ever used Google Search on your smartphone and clicked on a promising-looking result, only to end up on the mobile site’s homepage, with no idea why the page you were hoping to see vanished? This is such a common annoyance that we’ve even seen comics about it. Usually this happens because the website is not properly set up to handle requests from smartphones and sends you to its smartphone homepage—we call this a “faulty redirect”.

We’d like to spare users the frustration of landing on irrelevant pages and help webmasters fix the faulty redirects. Starting today in our English search results in the US, whenever we detect that smartphone users are redirected to a homepage instead of the page they asked for, we may note it below the result. If you still wish to proceed to the page, you can click “Try anyway”:

This search label has been deprecated. 

And we’re providing advice and resources to help you direct your audience to the pages they want. Here’s a quick rundown:

1. Do a few searches on your own phone (or with a browser set up to act like a smartphone) and see how your site behaves. Simple but effective. :)
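
If you'd rather script that check, the sketch below (assuming the requests library and a sample iPhone User-Agent string) fetches a deep URL the way a smartphone would and flags requests that end up on the homepage:

import requests
from urllib.parse import urlparse

SMARTPHONE_UA = ('Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) '
                 'AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 '
                 'Mobile/11A465 Safari/9537.53')

def check_for_faulty_redirect(url):
    response = requests.get(url, headers={'User-Agent': SMARTPHONE_UA},
                            allow_redirects=True, timeout=10)
    landed_on_homepage = urlparse(response.url).path in ('', '/')
    asked_for_homepage = urlparse(url).path in ('', '/')
    if landed_on_homepage and not asked_for_homepage:
        print('Possible faulty redirect:', url, '->', response.url)
    else:
        print('OK:', url, '->', response.url)

check_for_faulty_redirect('http://www.example.com/some/deep/page')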

2. Check out Webmaster Tools—we’ll send you a message if we detect that any of your site’s pages are redirecting smartphone users to the homepage. We’ll also show you any faulty redirects we detect in the Smartphone Crawl Errors section of Webmaster Tools:


3. Investigate any faulty redirects and fix them. Here’s what you can do:
  • Use the example URLs we provide in Webmaster Tools as a starting point to debug exactly where the problem is with your server configuration.
  • Set up your server so that it redirects smartphone users to the equivalent URL on your smartphone site.
  • If a page on your site doesn’t have a smartphone equivalent, keep users on the desktop page, rather than redirecting them to the smartphone site’s homepage. Doing nothing is better than doing something wrong in this case.
  • Try using responsive web design, which serves the same content for desktop and smartphone users.
If you’d like to know more about building smartphone-friendly sites, read our full recommendations. And, as always, if you need more help you can ask a question in our webmaster forum.

Webmaster level: all

In April, we launched App Indexing in English globally so deep links to your mobile apps could appear in Google Search results on Android everywhere. Today, we’re adding the first publishers with content in other languages: Fairfax Domain, MercadoLibre, Letras.Mus.br, Vagalume, Idealo, L'Equipe, Player.fm, Upcoming, Au Feminin, Marmiton, and chip.de. In the U.S., we now also support some more apps -- Walmart, Tapatalk, and Fancy.

We’ve also translated our developer guidelines into eight additional languages: Chinese (Traditional), French, German, Italian, Japanese, Brazilian Portuguese, Russian, and Spanish.



If you’re interested in participating in App Indexing, and your content and implementation are ready, please let us know by filling out this form. As always, you can ask questions on the mobile section of our webmaster forum.

Finally, if you’re headed to Google I/O in June, be sure to check out the session on the “Future of Apps and Search”, where we’ll share some more updates on App Indexing.

Webmaster level: all

The Fetch as Google feature in Webmaster Tools provides webmasters with the results of Googlebot attempting to fetch their pages. The server headers and HTML shown are useful to diagnose technical problems and hacking side-effects, but sometimes make double-checking the response hard: Help! What do all of these codes mean? Is this really the same page as I see it in my browser? Where shall we have lunch? We can't help with that last one, but for the rest, we've recently expanded this tool to also show how Googlebot would be able to render the page.

Viewing the rendered page

In order to render the page, Googlebot will try to find all the external files involved, and fetch them as well. Those files frequently include images, CSS and JavaScript files, as well as other files that might be indirectly embedded through the CSS or JavaScript. These are then used to render a preview image that shows Googlebot's view of the page.

You can find the Fetch as Google feature in the Crawl section of Google Webmaster Tools. After submitting a URL with "Fetch and render," wait for it to be processed (this might take a moment for some pages). Once it's ready, just click on the response row to see the results.

Fetch as Google

Handling resources blocked by robots.txt

Googlebot follows the robots.txt directives for all files that it fetches. If you are disallowing crawling of some of these files (or if they are embedded from a third-party server that's disallowing Googlebot's crawling of them), we won't be able to show them to you in the rendered view. Similarly, if the server fails to respond or returns errors, then we won't be able to use those either (you can find similar issues in the Crawl Errors section of Webmaster Tools). If we run across either of these issues, we'll show them below the preview image.

We recommend making sure Googlebot can access any embedded resource that meaningfully contributes to your site's visible content, or to its layout. That will make Fetch as Google easier for you to use, and will make it possible for Googlebot to find and index that content as well. Some types of content – such as social media buttons, fonts or website-analytics scripts – tend not to meaningfully contribute to the visible content or layout, and can be left disallowed from crawling. For more information, please see our previous blog post on how Google is working to understand the web better.

We hope this update makes it easier for you to diagnose these kinds of issues, and to discover content that's accidentally blocked from crawling. If you have any comments or questions, let us know here or drop by the webmaster help forum.