
A few weeks ago we held another Webmaster Conference Lightning Talk, this time about Rich Results and Search Console. During the talk we hosted a live chat and many viewers asked questions. We tried to answer as many as we could, but our typing skills didn’t match the challenge… so we’re following up in this blog post on the questions that were posed.

If you missed the video, you can watch it below: it discusses how to get started with rich results and use Search Console to optimize your search appearance in Google Search.

Rich Results & Search Console FAQs

Will a site rank higher against competitors if it implements structured data?

Structured data by itself is not a generic ranking factor. However, it can help Google to understand what the page is about, which can make it easier for us to show it where relevant and make it eligible for additional search experiences.

Which structured data would you recommend to use in Ecommerce category pages?

You don't need to mark up the products on a category page; a Product should be marked up only when it is the primary element of a page.

How much content should be included in my structured data? Can there be too much?

There is no limit to how much structured data you can implement on your pages, but make sure you’re sticking to the general guidelines. For example, the markup should always be visible to users and representative of the main content of the page.

What exactly are FAQ clicks and impressions based on?

A Frequently Asked Question (FAQ) page contains a list of questions and answers pertaining to a particular topic. Properly marked up FAQ pages may be eligible to have a rich result on Search and an Action on the Google Assistant, which can help site owners reach the right users. These rich results include snippets with frequently asked questions, allowing users to expand and collapse answers to them. Every time such a result appears in Search results for a user it will be counted as an impression on Search Console, and if the user clicks to visit the website it will be counted as a click. Clicks to expand and collapse the search result will not be counted as clicks on Search Console as they do not lead the user to the website. You can check impressions and clicks on your FAQ rich results using the ‘Search appearance’ tab in the Search Performance report.
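As a minimal sketch of what such markup can look like, an FAQ page might declare one question-and-answer pair with FAQPage JSON-LD like this (the question and answer text are placeholders, and must match content that is visible to users on the page):

```html
<!-- Minimal FAQPage JSON-LD sketch; text values are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How do I track FAQ rich results?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Use the Search appearance tab in the Search Performance report."
    }
  }]
}
</script>
```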

Will Google show rich results for reviews made by the review host site?

Reviews must not be written or provided by the business or content provider. According to our review snippets guidelines: “Ratings must be sourced directly from users” - publishing reviews written by the business itself is against the guidelines and might trigger a Manual Action.

Why should we use schema types that are not used by Google?

Google supports a number of schema types, but other search engines can use different types to surface rich results, so you might want to implement them for those engines.

Why do rich results that previously appeared in Search sometimes disappear?

The Google algorithm tailors search results to create what it thinks is the best search experience for each user, depending on many variables, including search history, location, and device type. In some cases, it may determine that one feature is more appropriate than another, or even that a plain blue link is best. Check the Rich Results Status report: if you don’t see a drop in the number of valid items or a spike in errors, your implementation should be fine.

How can I verify my dynamically generated structured data?

The safest way to check your structured data implementation is to inspect the URL on Search Console. This will provide information about Google's indexed version of a specific page. You can also use the Rich Results Test public tool to get a verdict. If you don’t see the structured data through those tools, your markup is not valid.

How can I add structured data in WordPress?

There are a number of WordPress plugins available that can help with adding structured data. Also check your theme settings; it might support some types of markup as well.

With the deprecation of the Structured Data Testing Tool, will the Rich Results Test support structured data that is not supported by Google Search?

The Rich Results Test supports all structured data that triggers a rich result on Google Search, and as Google creates new experiences for more structured data types we’ll add support for them in this test. While we prepare to deprecate the Structured Data Testing Tool, we’ll be investigating how a generic tool can be supported outside of Google.

Stay Tuned!

If you missed our previous lightning talks, check the WMConf Lightning Talk playlist. Also make sure to subscribe to our YouTube channel for more videos to come! We definitely recommend joining the premieres on YouTube to participate in the live chat and Q&A session for each episode!

Posted by Daniel Waisberg, Search Advocate

Over the past year, we have been working to upgrade the infrastructure of the Search Console API, and we are happy to let you know that we’re almost there. You might have already noticed some minor changes, but in general, our goal was to make the migration as invisible as possible. This change will help Google to improve the performance of the API as demand grows.

Note that if you’re not querying the API yourself you don't need to take action. If you are querying the API for your own data or if you maintain a tool which uses that data (e.g. a WordPress plugin), read on. Below is a summary of the changes:

  1. Changes on the Google Cloud Platform dashboard: you’ll see a drop in the old API usage report and an increase in the new one.
  2. API key restriction changes: if you have previously set API key restrictions, you might need to change them.
  3. Discovery document changes: if you’re querying the API using a third-party API library or querying the Webmasters Discovery Document directly, you will need to update it by the end of the year.

Please note that other than these changes, the API is backward compatible and there are currently no changes in scope or functionality.

Google Cloud Platform (GCP) changes

In the Google Cloud Console dashboards, you will notice a traffic drop for the legacy API and an increase in calls to the new one. This is the same API, just under the new name.

Image: Search Console API changes in Google Cloud Console

You can monitor your API usage on the new Google Search Console API page.

API key restriction changes

As mentioned in the introduction, these instructions are important only if you query the data yourself or provide a tool that does that for your users.

To check whether you have an API restriction active on your API key, follow these steps on the credentials page and make sure the Search Console API is not restricted. If you have added an API restriction for your API keys, you will need to take action by August 31.

In order to allow your API calls to be migrated automatically to the new API infrastructure, you need to make sure the Google Search Console API is not restricted.

  • If your API restrictions are set to “Don’t restrict key” you’re all set.
  • If your API restrictions are set to “Restrict key”, the Search Console API should be checked as shown in the image below.
Image: Google Search Console API restrictions setting


Discovery document changes

If you’re querying the Search Console API using an external API library, or querying the Webmasters API discovery document directly, you will need to take action, as we’ll drop support for the Webmasters discovery document. Our current plan is to support it until December 31, 2020 - but we’ll provide more details and guidance in the coming months.
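If you use google-api-python-client, the migration mostly amounts to building the service from the new discovery name. A sketch, where the site URL and credentials are placeholders and the query-body helper is purely illustrative:

```python
from datetime import date, timedelta


def search_analytics_body(days: int = 28, row_limit: int = 100) -> dict:
    """Build an illustrative Search Analytics query body covering the
    last `days` days; dimensions and rowLimit are example defaults."""
    end = date.today()
    start = end - timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["query"],
        "rowLimit": row_limit,
    }


# With google-api-python-client installed and OAuth credentials in `creds`,
# the migration change is the service name passed to build():
#     legacy:  service = build("webmasters", "v3", credentials=creds)
#     new:     service = build("searchconsole", "v1", credentials=creds)
# and then, unchanged in both versions:
#     response = service.searchanalytics().query(
#         siteUrl="https://www.example.com/",  # placeholder property
#         body=search_analytics_body(),
#     ).execute()
```

The request body and response shape stay the same; only the service name in `build()` changes.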

If you have any questions, feel free to ask in the Webmasters community or the Google Webmasters Twitter handle.

Posted by Nati Yosephian, Search Console Software Engineer

Mobile-first indexing has been an ongoing effort of Google for several years. We've enabled mobile-first indexing for most currently crawled sites, and enabled it by default for all new sites. Our initial plan was to enable mobile-first indexing for all sites in Search in September 2020. We realize that in these uncertain times it's not always easy to focus on work as you otherwise would, so we've decided to extend the timeframe to the end of March 2021. At that time, we're planning on switching our indexing over to mobile-first indexing.

For the sites that are not yet ready for mobile-first indexing, we’ve already mentioned some issues blocking your sites in previous blog posts. Now that we’ve done more testing and evaluation, we have seen a few more issues that are worth mentioning to better prepare your sites.

Make sure Googlebot can see your content

In mobile-first indexing, we will only get your site's information from its mobile version, so make sure Googlebot can see the full content and all resources there. Here are some things to pay attention to:

Robots meta tags on mobile version

You should use the same robots meta tags on the mobile version as on the desktop version. If you use different ones on the mobile version (such as noindex or nofollow), Google may fail to index your page or follow its links when your site is enabled for mobile-first indexing.
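For example, if the desktop version of a page serves a permissive robots meta tag, the mobile version should serve the same one rather than an accidental noindex (the content values here are illustrative):

```html
<!-- Good: desktop and mobile versions serve the same directive -->
<meta name="robots" content="max-snippet:-1, max-image-preview:large">

<!-- Bad: only the mobile version carries this tag, so the page may drop
     out of the index once mobile-first indexing is enabled -->
<meta name="robots" content="noindex, nofollow">
```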

Lazy-loading on mobile version

Lazy-loading is more common on mobile than on desktop, especially for loading images and videos. We recommend following lazy-loading best practices. In particular, avoid lazy-loading your primary content based on user interactions (like swiping, clicking, or typing), because Googlebot won't trigger these user interactions.

For example, if your page has 10 primary images on the desktop version, and the mobile version only has 2 of them, with the other 8 images loaded from the server only when the user clicks the “+” button:

(Caption: desktop version with 10 thumbnails / mobile version with 2 thumbnails)

In this case, Googlebot won’t click the button to load the 8 images, so Google won't see those images. The result is that they won't be indexed or shown in Google Images. Follow Google's lazy-loading best practices, and lazy load content automatically based on its visibility in the viewport.
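One viewport-based approach is native lazy-loading with the `loading` attribute, which defers the request until the image nears the viewport and needs no user interaction (file name and alt text are placeholders):

```html
<!-- Loaded automatically as the image approaches the viewport - no click
     required, so Googlebot can discover all 10 images while rendering. -->
<img src="puppy-3.jpg" alt="A puppy sleeping on a blanket" loading="lazy">
```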

Be aware of what you block

Some resources have different URLs on the mobile version from those on the desktop version, sometimes they are served on different hosts. If you want Google to crawl your URLs, make sure you're not disallowing crawling of them with your robots.txt file.

For example, blocking the URLs of .css files will prevent Googlebot from rendering your pages correctly, which can harm the ranking of your pages in Search. Similarly, blocking the URLs of images will make these images disappear from Google Images.
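As an illustrative robots.txt fragment (all paths are hypothetical), watch for rules like these that would block resources the mobile pages need:

```
# Hypothetical robots.txt on the mobile host - paths are placeholders.
User-agent: Googlebot
# Bad: blocks stylesheets, so Googlebot can't render pages correctly
Disallow: /assets/css/
# Bad: makes these images disappear from Google Images
Disallow: /m/images/
```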

Make sure primary content is the same on desktop and mobile

If your mobile version has less content than your desktop version, you should consider updating your mobile version so that its primary content (the content you want to rank with, or the reason for users to come to your site) is equivalent. Only the content shown on the mobile version will be used for indexing and ranking in Search. If it’s your intention that the mobile version has less content than the desktop version, your site may lose some traffic when Google enables mobile-first indexing for your site, since Google won't be able to get the full information anymore.

Use the same clear and meaningful headings on your mobile version as on the desktop version. Missing meaningful headings may negatively affect your page's visibility in Search, because we might not be able to fully understand the page.

For example, if your desktop version has the following tag for the heading of the page:

<h1>Photos of cute puppies on a blanket</h1>

Your mobile version should also use the same heading tag with the same words for it, rather than using headings like:

<h1>Photos</h1> (not clear and meaningful)

<div>Photos of cute puppies on a blanket</div> (not using a heading tag)

Check your images and videos

Make sure the images and videos on your mobile version follow image best practices and video best practices. In particular, we recommend that you perform the following checks:

Image quality

Don’t use images that are too small or have a low resolution on the mobile version. Small or low-quality images might not be selected for inclusion in Google Images, or shown as favorably when indexed.

For example, suppose your page has 10 primary images on the desktop version, all of them normal, good-quality images. On the mobile version, a bad practice would be to use very small thumbnails for these images to make them all fit on the smaller screen:

(Caption: desktop version with normal thumbnails / mobile version tiny thumbnails)

In this case, these thumbnails may be considered “low quality” by Google because they are too small and low-resolution.

Alt attributes for images

As previously mentioned, remember that using less-meaningful alt attributes might negatively affect how your images are shown in Google Images.

For example, a good practice is like the following:

<img src="dogs.jpg" alt="A photo of cute puppies on a blanket"> (meaningful alt text)

While bad practices are like the following:

<img src="dogs.jpg" alt> (empty alt text)

<img src="dogs.jpg" alt="Photo"> (alt text not meaningful)

Different image URLs between desktop and mobile version

If your site uses different image URLs for the desktop and mobile version, you may see a temporary traffic loss from Google Images while your site transitions to mobile-first indexing. This is because the image URLs on the mobile version are new to the Google indexing system, and it takes some time for the new image URLs to be understood appropriately. To minimize a temporary traffic loss from search, review whether you can retain the image URLs used by desktop.

Video markup

If your desktop version uses schema.org's VideoObject structured data to describe videos, make sure the mobile version also includes the VideoObject, with equivalent information provided. Otherwise our video indexing systems may have trouble getting enough information about your videos, resulting in them not being shown as visibly in Search.
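A minimal VideoObject sketch (all values are placeholders) that should be present, with equivalent information, on both versions:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Cute puppies on a blanket",
  "description": "Puppies playing on a blanket.",
  "thumbnailUrl": "https://example.com/puppies-thumb.jpg",
  "uploadDate": "2020-07-01",
  "contentUrl": "https://example.com/puppies.mp4"
}
</script>
```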

Video and image placement

Make sure to position videos and images in an easy-to-find location on the mobile version of your pages. Poorly placed videos or images can hurt the user experience on mobile devices, which may lead Google to not show them as visibly in Search.

For example, assume you have a video embedded in your content in an easy to find location on desktop:

On mobile, you place an ad near the top of the page which takes up a large part of the screen. This can result in your video being pushed down the page, requiring users to scroll down a lot to find it:

(Caption: video on mobile is much less visible to users)

In this case, the page might not be deemed a useful video landing page by our algorithms, resulting in the video not being shown in Search.

You can find more information and more best practices in our developer guide for mobile-first indexing.

Mobile-first indexing has come a long way. It's great to see how the web has evolved from desktop to mobile, and how webmasters have helped to allow crawling and indexing to match how users interact with the web! We appreciate all your work over the years, which has helped to make this transition fairly smooth. We’ll continue to monitor and evaluate these changes carefully. If you have any questions, please drop by our Webmaster forums or our public events.

Today we are announcing that the Rich Results Test fully supports all Google Search rich result features - it’s out of beta 🥳. In addition, we are preparing to deprecate the Structured Data Testing Tool 👋 - it will still be available for the time being, but going forward we strongly recommend you use the Rich Results Test to test and validate your structured data.

Rich results are experiences on Google Search that go beyond the standard blue link. They’re powered by structured data and can include carousels, images, or other non-textual elements. Over the last couple years we’ve developed the Rich Results Test to help you test your structured data and preview your rich results.

Here are some reasons the new tool will serve you better:
  • It shows which Search feature enhancements are valid for the markup you are providing
  • It handles dynamically loaded structured data markup more effectively
  • It renders both mobile and desktop versions of a result
  • It is fully aligned with Search Console reports
You can use the Rich Results Test to test a code snippet or a URL to a page. The test returns the errors and warnings we detect on your page. Errors disqualify your page from showing up as a rich result; warnings might limit the appearance, but your page is still eligible to show up as a rich result. For example, if there was a warning for a missing image property, that page could still appear as a rich result, just without an image.

Here are some examples of what you’ll see when using the tool.

Image: valid structured data on Rich Results Test

Image: code explorer showing error on Rich Results Test

Image: search preview on Rich Results Test

Learn more about the Rich Results Test, and let us know if you have any feedback either through the Webmasters help community or Twitter.

Posted by Moshe Samet, Search Console Product Manager

Thanks to our users, we receive hundreds of spam reports every day. While many of the spam reports lead to manual actions, they represent a small fraction of the manual actions we issue. Most of the manual actions come from the work our internal teams regularly do to detect spam and improve search results. Today we're updating our Help Center articles to better reflect this approach: we use spam reports only to improve our spam detection algorithms.

Spam reports play a significant role: they help us understand where our automated spam detection systems may be missing coverage. Most of the time, it's much more impactful for us to fix an underlying issue with our automated detection systems than it is to take manual action on a single URL or site.

In theory, if our automated systems were perfect, we would catch all spam and not need reporting systems at all. The reality is that while our spam detection systems work well, there’s always room for improvement, and spam reporting is a crucial resource to help us with that. Spam reports in aggregate form help us analyze trends and patterns in spammy content to improve our algorithms.

Overall, one of the best approaches to keeping spam out of Search is to rely on high quality content created by the web community and our ability to surface it through ranking. You can learn more about our approach to improving Search and generating great results at our How Search Works site. Content owners and creators can also learn how to create high-quality content to be successful in Search through our Google Webmasters resources. Our spam detection systems work with our regular ranking systems, and spam reports help us continue to improve both so we very much appreciate them.

If you have any questions or comments, please let us know on Twitter.

Posted by Gary



Every search matters. That is why whenever you come to Google Search to find relevant and useful information, it is our ongoing commitment to make sure users receive the highest quality results possible.

Unfortunately, on the web there are some disruptive behaviors and content that we call "webspam" that can degrade the experience for people coming to find helpful information. We have a number of teams who work to prevent webspam from appearing in your search results, and it’s a constant challenge to stay ahead of the spammers. At the same time, we continue to engage with webmasters to ensure they’re following best practices and can find success on Search, making great content available on the open web.

Looking back at last year, here’s a snapshot of how we fought spam on Search in 2019, and how we supported the webmaster community.

Fighting Spam at Scale

With hundreds of billions of webpages in our index serving billions of queries every day, perhaps it’s not too surprising that there continue to be bad actors who try to manipulate search ranking. In fact, we observed that more than 25 billion pages we discover each day are spammy. That’s a lot of spam, and it goes to show the scale, persistence, and lengths to which spammers are willing to go. We’re very serious about making sure that your chance of encountering spammy pages in Search is as small as possible. Our efforts have helped ensure that more than 99% of visits from our results lead to spam-free experiences.

Updates from last year

In 2018, we reported that we had reduced user-generated spam by 80%, and we’re happy to confirm that this type of abuse did not grow in 2019. Link spam continued to be a popular form of spam, but our team was successful in containing its impact in 2019. More than 90% of link spam was caught by our systems, and techniques such as paid links or link exchange have been made less effective.

Hacked spam, while still a commonly observed challenge, has been more stable compared to previous years. We continued to work on solutions to better detect and notify affected webmasters and platforms and help them recover from hacked websites.

Spam Trends

One of our top priorities in 2019 was improving our spam fighting capabilities through machine learning systems. Our machine learning solutions, combined with our proven and time-tested manual enforcement capability, have been instrumental in identifying and preventing spammy results from being served to users.

In the last few years, we’ve observed an increase in spammy sites with auto-generated and scraped content with behaviors that annoy or harm searchers, such as fake buttons, overwhelming ads, suspicious redirects and malware. These websites are often deceptive and offer no real value to people. In 2019, we were able to reduce the impact on Search users from this type of spam by more than 60% compared to 2018.

As we improve our capability and efficiency in catching spam, we’re continuously investing in reducing broader types of harm, like scams and fraud. These sites trick people into thinking they’re visiting an official or authoritative site and in many cases, people can end up disclosing sensitive personal information, losing money, or infecting their devices with malware. We have been paying close attention to queries that are prone to scam and fraud and we’ve worked to stay ahead of spam tactics in those spaces to protect users.

Working with webmasters and developers for a better web

Much of the work we do to fight against spam is using automated systems to detect spammy behavior, but those systems aren’t perfect and can’t catch everything. As someone who uses Search, you can also help us fight spam and other issues by reporting spam on search, phishing or malware. We received nearly 230,000 reports of search spam in 2019, and we were able to take action on 82% of those reports we processed. We appreciate all the reports you sent to us and your help in keeping search results clean!

So what do we do when we get those reports or identify that something isn’t quite right? An important part of what we do is notifying webmasters when we detect something wrong with their website. In 2019, we generated more than 90 million messages to website owners to let them know about issues and problems that may affect their site’s appearance in Search results, as well as potential improvements that can be implemented. Of all messages, about 4.3 million were related to manual actions resulting from violations of our Webmaster Guidelines.

And we’re always looking for ways to better help site owners. There were many initiatives in 2019 aimed at improving communications, such as the new Search Console messages, Site Kit for WordPress sites or the Auto-DNS verification in the new Search Console. We hope that these initiatives have equipped webmasters with more convenient ways to get their sites verified and will continue to be helpful. We also hope this provides quicker access to news and that webmasters will be able to fix webspam issues or hack issues more effectively and efficiently.

While we deeply focused on cleaning up spam, we also didn’t forget to keep up with the evolution of the web, and rethought how we wanted to treat “nofollow” links. Originally introduced as a means to help fight comment spam and annotate sponsored links, the “nofollow” attribute has come a long way. But we’re not stopping there. We believe it’s time for it to evolve even more, just as our spam fighting capabilities have evolved. We introduced two new link attributes, rel="sponsored" and rel="ugc", that provide webmasters with additional ways to identify to Google Search the nature of particular links. Along with rel="nofollow", we began treating these as hints for us to incorporate for ranking purposes. We are very excited to see that these new rel attributes were well received and adopted by webmasters around the world!
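For example (the URLs are placeholders), the three attributes annotate different kinds of links:

```html
<!-- Hypothetical examples; URLs and anchor text are placeholders. -->
<a href="https://example.com/offer" rel="sponsored">Paid placement</a>
<a href="https://example.com/profile" rel="ugc">User comment link</a>
<a href="https://example.com/untrusted" rel="nofollow">Unendorsed link</a>
```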

Engaging with the community

As always, we’re grateful for all the opportunities we had last year to connect with webmasters around the world, helping them improve their presence in Search and hearing feedback. We delivered more than 150 online office hours, online events and offline events in many cities across the globe to a wide range of audiences, including SEOs, developers, online marketers and business owners. Among those events, we have been delighted by the momentum behind our Webmaster Conferences in 35 locations across 15 countries and 12 languages around the world, including the first Product Summit version in Mountain View. While we’re not currently able to host in-person events, we look forward to more of these events and virtual touchpoints in the future.

Webmasters continued to find solutions and tips on our Webmasters Help Community with more than 30,000 threads in 2019 in more than a dozen languages. On YouTube, we launched #AskGoogleWebmasters as well as series such as SEO mythbusting to ensure that your questions get answered and your uncertainties get clarified.

We know that our journey toward a better web is ongoing, and we would love to continue it with you in the year to come! So keep in touch on Twitter, YouTube, the blog, and the Help Community, or see us in person at one of our conferences near you!