

hotdog

lion king
...and infinitely more fun: webmasters and their pets incognito! Happy Halloween, everyone! If you've got costume photos that would pass the SafeSearch filter :), feel like sharing a gripe, or have a good story to tell, please join the chat!

Take care, and don't forget to brush your teeth.
 Yours scarily,
  The Webmaster Central Team


Our glasses-wearing, no vampire-teeth vampire (Ryan), zombie Mur, Holiday Fail (Tiffany Lane), Colbert Hipster (Dan Vanderkam), Rick Astley Cutts, Homeboy Ben D'Angelo, Me -- pinker & poofier, Investment Bank CEO Shyam Jayaraman (though you can't see the golden parachute in his backpack)



Chark as Juno, Wysz as Beah Burger (our co-worker), Adi and Matt Dougherty as yellow ninja, red ninja!


Heroes come in all shapes and sizes...

Powdered Toast Man, Mike Leotta

Adam Lasnik as, let me see if I get this right, a "secret service agent masquerading as a backstage tech" :)

What featured over 750 webmasters and a large number of Googlers from around the world, hundreds of questions, and over one hundred answers over the course of nearly two hours?  If you guessed "the Tricks and Treats webmaster event from earlier this month!" well, you're either absolutely brilliant, you read the title of this post, or both!

How did it go?
It was an exhilarating, exhausting, and educational event, if we may say so ourselves, even though there were a few snafus.  We're aware that the sound quality wasn't great for some folks, and we've also appreciated the quite-helpful constructive criticism in this feedback thread.  Last but not least, we are bummed to admit that someone (whose name starts with 'A' and ends with 'M') uncharacteristically forgot to hit the record button (really!), so there's unfortunately no audio recording to share :-(.

But on more positive notes, we're delighted that so many of you enjoyed our presentations (embedded below), our many answers, and even some of our bad jokes (mercifully not to be repeated).

What next?
Well, for starters, all of us Webmaster Central Googlers will be spending quite some time taking in your feedback.  Some of you have requested sessions exclusively covering particular (pre-announced) topics or tailored to specific experience levels, and we've also heard from many webmasters outside of the U.S. who would love online events in other languages and at more convenient times.  No promises, but you can bet we're eager to please!  Stay tuned to this blog (and, as a hint and hallo to our German-speaking webmasters, do make sure to follow our German webmaster blog ;-).

And finally, a big thank you!
A heartfelt thank you to my fellow Googlers, many of whom got up at the crack of dawn to reach the office early for the chat and the previous day's run-through, or stayed late at work in Europe.  But more importantly, major props to all of you (from New Delhi, New York, New Zealand and older places) who asked great questions and hung out with us online for up to two hours.  You webmasters are the reason we love coming to work each day, and we look forward to our next chat!

*  *  *

The presentations...
We had presentations from John, Jonathan, Maile, and Wysz.  Presentations from the first three are embedded below (Wysz didn't have a written presentation this time).


John's slides on "Frightening Webmastering Myths"


Jonathan's slides on "Using the Not Found errors report in Webmaster Tools"


Maile's slides on "Where We're Coming From"


Edited on Wednesday, October 29 at 6:00pm to update number of participants


No matter where in the world you are, you can vote right now on webmaster-oriented questions by registering for our free Webmaster chat ("Tricks and Treats"), which is scheduled for tomorrow at 9am PDT (5pm GMT).  Even better: you can suggest your own questions that you'd like Webmaster Central Googlers to answer.


We're using the new Google Moderator tool, so posting questions and voting on your favorites is fun and easy; you'll receive an e-mail with a link to the webmaster chat questions right after you register.  Click on the check mark next to questions you find particularly interesting and important. Click on the X next to questions that seem less relevant or useful.  From your votes, Google Moderator will surface the best questions, helping us spend more time in the chat on issues you really care about.

Feel free to review our post from yesterday for more details on this event.

See you there!


P.S. - Speaking of voting:  If you're an American citizen, we hope you're also participating in the upcoming presidential election! Our friends in Google Maps have even prepared a handy lookup tool to help you find your voting place -- check it out!




You know how some myths just won't die?  Well, do we have some great news for you!  A not-so-scary bunch of Gooooooooooooglers will be on hand to drive a stake through the most ghoulish webmastering myths and misconceptions in our live online "Tricks and Treats" chat this coming Wednesday.

That's right!  You'll be treated to some brief presentations and then have the chance to ask lots of questions of Googlers ranging from Matt Cutts in Mountain View to John Mueller in Zurich to Kaspar Szymanski in Dublin (and many more folks as well).


Here's what you'll need
  • About an hour of free time
  • A computer with audio capabilities that is connected to the Internet and has these additional specifications
    (We'll be broadcasting via the Internet tubes this time rather than over the phone lines)
  • A URL for the chat, which you can only get when you register for the event (don't worry -- it's fast and painless!)
  • Costumes: optional

What will our Tricks and Treats chat include?
  • INTRO:  A quick hello from some of your favorite Help Group Guides
  • PRESO:  A 15-minute presentation on "Frightening Myths and Misconceptions" by John Mueller
  • FAQs:  A return of our popular "Three for Three," in which we'll have three different Googlers tackling three different issues we've seen come up in the Group recently... in under three minutes each!
  • And lots of Q&A!  You'll have a chance to type questions during the entire session (actually, starting an hour prior!) using our hunky-dory new Google Moderator tool.  Ask, then vote!  With this tool and your insights, we expect the most interesting questions to quickly float to the top.

When and how can you join in?
  1. Mark the date on your calendar now:  Wednesday, October 22, at 9am PDT, noon EDT, and 5pm GMT
  2. Register right now for this event.  Please note that you'll need to click on the "register" link on the bottom left-hand side.
  3. Optionally post questions via Google Moderator one hour prior to the start of the event.  The link will be mailed to all registrants.
  4. Log in 5-10 minutes prior to the start of the chat, using the link e-mailed to you by WebEx (the service hosting the event).
  5. Interact!  During the event, you'll be able to chat (by typing) with your fellow attendees, and also post questions and vote on your favorite questions via Google Moderator.

We look forward to seeing you online!  In the meantime, if you have any questions, feel free to post a note in this thread of our friendly Webmaster Help Group.

Edited on October 21st at 12:15pm and 12:29pm PDT to add:
We've decided to open up the Google Moderator page early.  Everyone who registered for this event previously and everyone registering from this moment on will receive the link in e-mail.  Also, the event is scheduled for *5pm* GMT (correctly listed on the registration page and in the followup e-mails).

Since both tennis and table tennis are in the Olympics, perhaps you're wondering: if there's soccer, why not "table soccer"? Of course, we know table soccer by another name; and while foosball may not be an Olympic sport, we still cheered Nathan Johns and Jan Backes, two members of our Search Quality team, as they brought home the silver medal at the search engine foosball smackdown at SES San Jose.

"Smackdown" doesn't quite equate to "Olympics," but check out the intensity—you could hear a pin drop!

silver medalists at foosball

The gold medal (cup) went to the search engine down the road. :)

gold medalists at foosball
Yahoo's first-place winners Daniel Wong and Jake Rosenberg.

Just to be sure they weren't ringers, I quizzed Daniel and Jake, "How can you prevent a file from being crawled?" They correctly answered, "robots.txt."

Gold cup well deserved.



Last Thursday, many of you just couldn't get enough of us and joined our second live Webmaster Central chat, "JuneTune." It was an action-packed session with live presentations, questions and answers and chatting about cats and other important topics. Over the course of an hour and a half, we made four presentations, received over 600 questions and passed around close to 500 chat messages. It was great to see so many Googlers around the world involved: Adam, Bergy, Evan, Jessica, Maile, Matt (Cutts), Matt (Dougherty), Reid and Wysz in Mountain View; Jonathan and Susan in Kirkland; Alvar, Mariya, Pedro and Uli in Dublin; and me in Zürich. We had users from about as many places as Matt (Harding) has danced in: Alaska, Argentina, Arizona, Australia, Brazil, California, Canada, Chile, Colorado, Costa Rica, Denmark, Egypt, Florida, France, Germany, Greece, Hawaii, India, Indonesia, Ireland, Israel, Italy, Malaysia, Mexico, Minnesota, Missouri, Nebraska, New York, New Zealand, Ohio, Pennsylvania, Peru, Philippines, Poland, Portugal, Saudi Arabia, South Africa, Spain, Switzerland, The Netherlands, Russia, Ukraine, United Kingdom, Vietnam and a bunch from Seattle, Washington - thank you all for joining us!

To help make the most out of this session, we'd like to make the transcripts and presentations available to everyone. We're also working on filling in the blanks and have started to answer many of the unanswered questions online in the Webmaster Help Group. You'll find the full (and just slightly cleaned up) questions and answers there as well.

I presented an overview of factors involved in personalized search at Google:



Maile gave a nice presentation on case sensitivity in web search in general:



The audio part of these presentations is in the audio transcript below. It also includes Jonathan's coverage of reasons why ranking may change, Wysz's presentation of ways to get URLs removed from our index, as well as everything else that happened on the phone! Enjoy :)

Audio transcript (MP3)

We hope to continue to improve on making these events useful to you, so don't forget to send us your feedback. We'll be back!



All of us with Webmaster Central share one passion: a serious love for improving the Internet.

If you're organizing an event with an audience that would benefit from a discussion of building search-engine-friendly sites and making the most of Webmaster Central resources -- such as our Webmaster Tools, Help Center and Discussion Group -- please submit a speaker request. We'll work with our Corporate Communications team to see if we can add your event to our schedule.

With the intention of helping people make great content accessible on the web, we attended over 15 events this year -- including search conferences, business schools and marketing expos. We feel that we can be most helpful to:
  • Site owners/webmasters/bloggers who feature original, compelling content or tools, such as their...
    • Neighborhood store, restaurant, dentist office, etc.
    • Service or product (e.g. freelance photographer or online wizard for house decorating)
    • Passion, hobby, opinion (latest from the San Francisco music scene, perspectives on the upcoming election)
  • Web developers, web designers, SEOs/SEMs who build sites for others
Submitting a speaking request does not guarantee our attendance, but we'll definitely review each submission with our Communications team. Also, if we can't attend your event this time around, but we feel we could make a positive impact in the future, we'll keep the event on our radar.

Now I'd love to introduce two of our newest speakers who have been active in the Webmaster Help Group for some time: Michael Wyszomierski and Reid Yokoyama.

Hi, I'm Michael, but I go by "Wysz" in the Webmaster Help Group. When I'm not talking to webmasters or doing other search-related work, I like to tinker with my personal blog, take photos, and edit videos. Blogs, videos, podcasts, and other online media often come to my rescue when I'm searching for information online, so I'd love to talk to fellow content providers about how to make sure their sites can be understood by Google.
Hi, I'm Reid. I'm originally from St. Louis, Missouri, but have fallen in love with the weather, biking trails, and culture of the Bay Area. I studied to be a historian and even wrote a Senior honors thesis on Japanese American resettlement in San Francisco after WWII, but as an avid blogger, found myself increasingly interested in the transaction of knowledge and information through the Internet. I'm particularly passionate about helping small businesses build out high quality websites and helping them understand how Google's tools can help them in the process.

Many of us Webmaster Help Group guides have happily gotten to meet Group members at various functions around the world. Over sandwiches in San Jose. Salads in Stockholm. Sweets in Sydney. And it's been super!

But some of you are hard to find and so we haven't had the pleasure of chatting with you. Therefore, we've decided to visit you right through your beloved monitor screen.

On Friday, March 28 at 9am PDT / noon EDT / 16:00 GMT, we'll be having our first-ever all-group live chat, where you'll have a chance to hear and see us answer some of your most pressing questions. All that's required is a phone (we'll pay for the call), a sufficiently modern web browser, and an internet connection.

We'll be posting a "sticky note" with more details in the Random Chitchat section of the Group a day or two before this online meetup, and we're looking forward to chatting with you soon!

Talkatively yours,
Adam and the English Webmaster Help Guides

EDITED ON MARCH 26 TO ADD:
More information about the chat is now available on this page.


February is that time of the year: the Search Engine Strategies conference hits London! A few of us were there to meet webmasters and search engine representatives to talk about the latest trends and issues in the search engine world.

It was a three-day marathon full of interesting talks - and of course, we heard a lot of good questions in between the sessions! If you didn't get a chance to talk with us, fear not: we've pulled together some of the best questions we encountered. You can find a few of them below, and an additional set in our Webmaster Help Group. Please join the discussion!

Why should I upload a Sitemap to Google Webmaster Tools, if my site is crawled just fine?

All sites can benefit from submitting a Sitemap to Google Webmaster Tools. It can help us do a better job of crawling and understanding your site, especially if it has dynamic content or a complicated architecture.

In addition, you'll have access to more information about your site: for example, the number of pages from your Sitemaps that are indexed by Google, any errors Google found with your Sitemap, as well as warnings about potential problems. You can also submit specialized Sitemaps for certain types of content, including Video, Mobile, News and Code.
More information about the benefits of submitting a Sitemap to Google Webmaster Tools can be found here.
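For reference, here's what a bare-bones Sitemap file looks like; it's just an XML list of your URLs (the addresses and dates below are placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>http://www.example.com/</loc>
      <lastmod>2008-02-01</lastmod>
      <changefreq>weekly</changefreq>
    </url>
    <url>
      <loc>http://www.example.com/about.html</loc>
    </url>
  </urlset>

A common convention is to save it as sitemap.xml at the root of your site, then submit it through Webmaster Tools.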

How do you detect paid links? If I want to stay on the safe side, should I use the "nofollow" attribute on all links?

We blogged about our position on paid links and the use of nofollow a few months ago. You may also find it interesting to read this thread in our Help Group about appropriate uses of the nofollow attribute.
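As a quick illustration, marking a link with the nofollow attribute is a one-line change; here's a generic example (the URL is a placeholder):

  <!-- This link won't pass ranking credit -->
  <a href="http://www.example.com/" rel="nofollow">Our sponsor</a>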

How do I associate my site with a particular country/region using Google Webmaster Tools? Can I do this for a dynamic website?

The instructions in our Help Center explain that you can associate a country or region with an entire domain, individual subdomains or subdirectories. A quick tip: if, for instance, you are targeting the UK market, better ways of structuring your site would be example.co.uk, uk.example.com, or example.com/uk/. Google can geolocate all of those patterns.

If your domain name has no regional significance, such as www.example.com, you can still associate your website with a country or region. To do that you will need to verify the domain, or the subdomains and/or subdirectories one by one in your Webmaster Tools account and then associate each of them with a country/region. However, for the moment we don't support setting a geographical target for patterns that can't be verified such as, for example, www.example.com/?region=countrycode.

I have a news site and it is not entirely crawled. Why? Other crawlers had no problem crawling us...

First off, make sure that nothing prevents us from crawling your news site, such as the architecture of your site or your robots.txt file. Also, we suggest you sign up for Webmaster Tools and submit your content. We specifically have the News Sitemap protocol for sites offering this type of content. If you take advantage of this feature, we can give you more information on which URLs we had trouble with and why. It really rocks!
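To give you an idea, a News Sitemap is an ordinary Sitemap with extra news-specific tags. Here's a rough sketch with placeholder values (check the News Sitemap documentation for the exact set of required tags):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
    <url>
      <loc>http://www.example.com/business/article123.html</loc>
      <news:news>
        <news:publication_date>2008-02-28</news:publication_date>
        <news:title>Placeholder article title</news:title>
      </news:news>
    </url>
  </urlset>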

A quick note to conclude: the lively, international environment of SES is always incredible. I have had a lot of interesting conversations in English, as well as in Italian, French and Spanish. Fellow Googlers chatted with webmasters in English, Danish, Dutch, German and Hungarian. That's amazing - and a great opportunity to get to know each other better, in the language you speak! So next time you wonder how Google Universal Search works in English or you're concerned about Google News Search in German, don't hesitate; grab us for a chat or write to us!

Written by Juliane Stiller, Search Quality


Our German Webmaster Central Blog celebrates its first birthday and we'd like to raise our glasses to 57 published posts in the last year! We enjoy looking back at an exciting first year of blogging and communicating with webmasters. It's the growing webmaster community that made this blog a success. Thanks to our readers for providing feedback on our blog posts and posting in the German Webmaster Help group.

Over the past year, we published numerous articles specifically targeted for the German market - topics varying from affiliate programs to code snippets. We also translated many of the applicable English posts for the German blog. If you speak German (Hallo!) come check out the German Webmaster Blog and subscribe to our feed or email alert.

Hope to see you soon,
Juliane Stiller on behalf of the German Webmaster Communication Team



If you've got JavaScript skills and you'd like to implement such things as Google Gadgets or Maps on your site, bring your laptops and come hang out with us in Mountain View.

This Friday, my team (Google Developer Programs) is hosting a hackathon to get you started with our JavaScript APIs. There will be plenty of our engineers around to answer questions. We'll start with short introductions of the APIs and then break into groups for coding and camaraderie. There'll be food, and prizes too.

The featured JavaScript APIs include the Google Maps API and Google Gadgets.

When: Friday, February 29 - two sessions (you're welcome to attend both)
  • 2-5:30 PM
  • 6-10 PM
Where: The Googleplex
Building 40
1600 Amphitheatre Pkwy
Mountain View, CA 94043
Room: Seville Tech Talk, 2nd floor

See our map for parking locations and where to check in. (Soon, you too will be making maps like this! :)

Just say yes and RSVP!

And no worries if you're busy this Friday; future hackathons will feature other APIs and more languages. Check out the Developer Events Calendar for future listings. Hope to see you soon.

Last month, several of us with Webmaster Central hit the "good times" jackpot at PubCon Vegas 2007. We realize not all of you could join us, so instead of returning home with fuzzy dice for everyone, we've got souvenir conference notes.

Listening to the Q&A, I was pleased to hear the major search engines agreeing on best practices for many webmaster issues. In fact, the presentations in the duplicate content session were mostly, well, duplicate. When I wasn't sitting in on one of the many valuable sessions, I was chatting with webmasters either at the Google booth, or at Google's "Meet the Engineers" event. It was exciting to hear from so many different webmasters, and to help them with Google-related issues. Here are a few things that were on the minds of webmasters, along with our responses:

Site Verification Files and Meta Tags
Several webmasters asked, "Is it necessary to keep the verification meta tag or HTML file in place to remain a verified owner in Webmaster Tools?" The answer is yes: you should keep your verification file or meta tag live to maintain your status as a verified owner. These verification codes are used to control who has access to the owner-specific tools for your site in Webmaster Tools. To ensure that only current owners of a site are verified, we periodically re-check that the verification code is in place, and if it isn't, you'll lose verified status for that site. While we're on the topic:

Site Verification Best Practices
  • If you have multiple people working on your site with Webmaster Tools, it's a good idea to have each person verify the site with his or her own account, rather than using a shared login. That way, as people come and go, you can control the access appropriately by adding or removing verification files or meta tags for each account.
  • You may want to keep a list of these verification codes and which owner they are connected to, so you can easily control access later. If you lose track, you can always use the "Manage site verification" option in Webmaster Tools, which allows you to force all site owners to reverify their accounts.
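For context, here's roughly what the meta tag flavor of verification looks like; the exact tag, including the token, is generated for you on the verification page in Webmaster Tools (the token below is made up):

  <!-- Goes inside the <head> of your home page; the tag name and token
       are whatever Webmaster Tools generates for your account -->
  <meta name="google-site-verification" content="abc123placeholdertoken" />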
Subdomains vs. Subdirectories
What's the difference between using subdomains and subdirectories? When it comes to Google, there aren't major differences between the two, so when you're making that decision, do what works for you and your visitors. Following PubCon, our very own Matt Cutts outlined many of the key issues in a post on his personal blog. In addition to those considerations, if you use Webmaster Tools (which we hope you do!), keep in mind that you'll automatically be verified for deeper subdirectories of any sites you've verified, but subdomains need to be verified separately.

Underscores vs. Dashes
Webmasters asked about the difference between how Google interprets underscores and dashes in URLs. In general, we break words on punctuation, so if you use punctuation as separators, you're providing Google a useful signal for parsing your URLs. Currently, dashes in URLs are consistently treated as separators while underscores are not. Keep in mind our technology is constantly improving, so this distinction between underscores and dashes may decrease over time. Even without punctuation, there's a good chance we'll be able to figure out that bigleopard.html is about a "big leopard" and not a "bigle opard." While using separators is a good practice, it's likely unnecessary to place a high priority on changing your existing URLs just to convert underscores to dashes.

Keywords in URLs
We were also asked if it is useful to have relevant keywords in URLs. It's always a good idea to be descriptive across your site, with titles, ALT attributes, and yes, even URLs, as they can be useful signals for users and search engines. This can be especially true with image files, which otherwise may not have any text for a search engine to consider. Imagine you've taken a picture of your cat asleep on the sofa. Your digital camera will likely name it something like IMG_2937.jpg. Not exactly the most descriptive name. So unless your cat really looks like an IMG_2937, consider changing the filename to something more relevant, like adorable-kitten.jpg. And, if you have a post about your favorite cat names, it's much easier to guess that a URL ending in my-favorite-cat-names would be the relevant page, rather than a URL ending in postid=8652. For more information regarding issues with how Google understands your content, check out our new content analysis feature in Webmaster Tools, as well as our post on the URL suggestions feature of the new Google Toolbar.

Moving to a new IP address
We got a question about changing a site's IP address, and provided a few steps you can take as a webmaster to make sure things go smoothly. Here's what you can do:
  1. Change the TTL (Time To Live) value of your DNS configuration to something short, like five minutes (300 seconds). This will tell web browsers to re-check the IP address for your site every five minutes (see the zone-file sketch after these steps).
  2. Copy your content to the new hosting environment, and make sure it is live on the new IP address.
  3. Change your DNS settings so your hostname points to the new IP address.
  4. Check your logs to see when Googlebot starts crawling your site on the new IP address. To make sure it's really Googlebot who's visiting, you can verify Googlebot by following these instructions. You can then log into Webmaster Tools and monitor any crawl errors. Once Googlebot is happily crawling on the new IP address, you should be all set as far as Google is concerned.
  5. To make sure everyone got the message of your move, you may want to keep an eye out for visits to your old IP address before shutting it down.
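To picture steps 1 and 3, here's a sketch of the relevant records in a BIND-style zone file (names and addresses are hypothetical):

  ; Step 1, before the move: shorten the TTL so resolvers re-check frequently
  $TTL 300
  www   300   IN   A   203.0.113.10   ; old server

  ; Step 3: repoint the record at the new server, keeping the short TTL
  www   300   IN   A   203.0.113.20   ; new server

  ; After the move has settled, raise the TTL back to something like a day
  $TTL 86400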
Proxies
A few webmasters were concerned that proxy services are being indexed with copies of their content. While it's often possible to find duplicate copies of your content in our results if you look hard enough, the original source is most likely going to be ranked higher than a proxy copy. However, if you find this not to be the case, please drop us some URLs in the Webmaster Help Group. There are many Googlers including myself who monitor this group and escalate issues appropriately.

It was great talking with webmasters at the conference -- we hope those of you unable to join us found this post useful. If you want to continue to talk shop with me, other Googlers, and your fellow webmasters, join the follow-up conversation in the Webmaster Help Group.

Update: Additional PubCon notes from Jonathan Simon are available in our discussion group.



We're fortunate to meet many of you at conferences, where we can chat about web search and Webmaster Tools. We receive a lot of good feedback at these events: insight into the questions you're asking and issues you're facing. However, as several of our Webmaster Help Group friends have pointed out, not everyone can afford the time or expense of a conference; and many of you live in regions where webmaster-related conferences are rare.

So, we're bringing the conference to you.

We've posted notes in our Help Group from conferences we recently attended.
Next month, Jonathan and Wysz will post their notes from PubCon, while Bergy and I will cover SES Chicago.

If you can make it to one of these, we'd love to meet you face to face, but if you can't, we hope you find our jottings useful.



With apologies to Vic Mizzy, we've written a short verse to the tune of the "Addams Family" theme (please use your imagination):

We may be hobbyists or just geeky,
Building websites and acting cheeky,
Javascript redirects we won't make sneaky,
Our webmaster fam-i-ly!

Happy Halloween everyone! Feel free to join the discussion and share your Halloween stories and costumes.


Magnum P.I., Punk Rocker, Rubik's Cube, Mr. T., and Rainbow Brite
a.k.a. Several members of our Webmaster Tools team: Dennis Geels, Jonathan Simon, Sean Harding, Nish Thakkar, and Amanda Camp


Panda and Lolcat
Or just Evan Tang and Matt Cutts?


7 Indexing Engineers and 1 Burrito


Cheese Wysz, Internet Repairman, Community Chest, Internet Pirate (don't tell the RIAA)
Helpful members of the Webmaster Help Group: Wysz, MattD, Nathan Johns (nathanj), and Bergy


Count++
Webspam Engineer Shashi Thakur (in the same outfit he wore to Searchnomics)


Hawaiian Surfer Dude and Firefox
Members of Webmaster Central's communications team: Reid Yokoyama and Mariya Moeva


Napoleon Dynamite and Raiderfan
Shyam Jayaraman (speaking at SES Chicago, hopefully doing the dance) and me



As summer inches towards fall and in many places the temperature is still rising, you're probably thinking the best place to be right now is on the beach, by a pool or inside somewhere that's air-conditioned. These are all good choices, but next week there's somewhere else to be that's both hot and cool: the Search Engine Strategies conference in San Jose. In addition to the many tantalizing conference sessions covering diverse topics related to search, there will be refreshments, food, and of course, air-conditioning.
Googlers attending SES San Jose
Additionally, on Tuesday evening at our Mountain View 'plex we're hosting the “Google Dance” -- where conference attendees can eat, drink, play, dance, and talk about search. During the Google Dance be sure to attend the “Meet the Engineers” event where you’ll be able to meet and have a conversation with 25 or more engineers including Webmaster Central’s own Amanda Camp. Also, if you get a spare minute from merry-making, head over to the Webmaster Tools booth, where you’ll find Maile Ohye offering lots of good advice.

If you’re a night owl, you’ll probably also be interested in the unofficial late-night SES after-parties that you only know about if you talk to the right person. To stem the potential barrage of “where’s the party” questions, I'd like to make it clear that I unfortunately am not the right person. But if you happen to be someone who’s organizing a late night party, please consider inviting me. ;)

"Enough about the parties -- what about the conference?," you ask. As you would expect, Google will be well-represented at the conference. Here is a sampling of the Search-related sessions at which Googlers will be speaking:

Universal & Blended Search
Monday, August 20
11:00am-12:30pm
David Baile

Personalization, User Data & Search
Monday, August 20
2:00 - 3:30pm
Sep Kamvar

Searcher Behavior Research Update
Monday, August 20
4:00 - 5:30pm
Oliver Deighton

Are Paid Links Evil?
Tuesday, August 21
4:45 - 6:00pm
Matt Cutts

Keynote Conversation
Wednesday, August 22
9:00 - 9:45am
Marissa Mayer

Search APIs
Wednesday, August 22
10:30am - 12:00pm
Jon Diorio

SEO Through Blogs & Feeds
Wednesday, August 22
10:30am - 12:00pm
Rick Klau

Duplicate Content & Multiple Site Issues
Wednesday, August 22
1:30 - 2:45pm
Greg Grothaus

CSS, AJAX, Web 2.0 & Search Engines
Wednesday, August 22
3:15 - 4:30pm
Amanda Camp

Search Engine Q&A On Links
Wednesday, August 22
4:45 - 6:00pm
Shashi Thakur

Meet the Crawlers
Thursday, August 23
10:45am - 12:00pm
Evan Roseman

We will also have a large presence in the conference expo hall where members of the Webmaster Central Team like Susan Moskwa and I will be present at the Webmaster Tools booth to answer questions, listen to your thoughts and generally be there to chat about all things webmaster related. Bergy and Wysz, two more of us who tackle tough questions in the Webmaster Help Groups, will be offering assistance at the Google booth (live and in person, not via discussion thread).

If you're reading this and thinking, "I should go and grab the last frozen juice bar in the freezer," I suggest that you save that frozen juice bar for when you return from the conference and find that your brain's overheating from employing all the strategies you've learned and networking with all the people you've met.

Joking aside, we are psyched about the conference and hope to see you there. Save a cold beverage for me!



Held on June 27th, Searchnomics 2007 gave us (Greg Grothaus and Shashi Thakur) a chance to meet webmasters and answer some of their questions. As we're both engineers focused on improving search quality, the feedback was extremely valuable. Here's our take on the conference and a recap of some of what we talked about there.

Shashi: While I've worked at Google for over a year, this was my first time speaking at a conference. I spoke on the "Search Engine Friendly Design" panel. The exchanges were hugely valuable, helping me grasp some of the concerns of webmasters. Greg and I thought it would be valuable to share our responses to a few questions:

Does location of server matter? I use a .com domain but my content is for customers in the UK.

In our understanding of web content, Google considers both the IP address and the top-level domain (e.g. .com, .co.uk). Because we attempt to serve geographically relevant content, we factor in domains that have regional significance. For example, ".co.uk" domains are likely very relevant for user queries originating from the UK. In the absence of a significant top-level domain, we often use the web server's IP address as an added hint in our understanding of content.

I have many different sites. Can I cross-link between them?

Before you begin cross-linking sites, consider the user's perspective and whether the crosslinks provide value. If the sites are related in business -- e.g., an auto manual site linking to an auto parts retail site, then it could make sense -- the links are organic and useful. Cross-linking between dozens or hundreds of sites, however, probably doesn't provide value, and I would not recommend it.


Greg: Like Shashi, this was also my first opportunity to speak at a conference as a Googler. It was refreshing to hear feedback from the people who use the software we work every day to perfect. The session also underscored the argument that we're just at the beginning of search and have a long way to go. I spoke on the subject of Web 2.0 technologies. It was clear that many people are intimidated by the challenges of building a Web 2.0 site with respect to search engines. We understand these concerns. You should expect to see more feedback from us on this subject, both at conferences and through our blog.

Any special guidance for DHTML/AJAX/Flash documents?

It's important to make sure that content and navigation can be rendered/negotiated using only HTML. So long as the content and navigation are the same for search crawlers and end users, you're more than welcome to use advanced technologies such as Flash and/or Javascript to improve the user experience using a richer presentation. In "Best uses of Flash," we wrote in more detail about this, and are working on a post about AJAX technology.
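Here's a rough sketch of that principle: the page ships with plain HTML content and links that everyone (including crawlers) can see, and JavaScript only enhances what's already there. The enhanceGallery function below is hypothetical:

  <!-- Plain HTML content and navigation, visible to crawlers
       and to users without JavaScript -->
  <div id="gallery">
    <ul>
      <li><a href="/photos/spring.html">Spring photos</a></li>
      <li><a href="/photos/summer.html">Summer photos</a></li>
    </ul>
  </div>

  <script type="text/javascript">
    // With JavaScript available, upgrade the same markup into a richer
    // slideshow; the underlying content stays identical for everyone.
    enhanceGallery(document.getElementById("gallery")); // hypothetical helper
  </script>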


We're back from SES Milan!

...with a couple of clarifications

Ciao everybody! We just got back from Italy—great weather there, I must say! We attended SES in Milan on the 29th and 30th of May. The conference was a great opportunity to talk to many of you. We really had a good time and want to thank all the people who stopped by to simply say "hi" or to talk to us in more detail about search engine strategies. This gave us a chance to talk to many participants, including some of the biggest players in the Italian SEO and web marketing worlds. We discussed recent developments in the Italian internet market, SEO strategies and evangelizing.

A number of you have raised interesting questions, and we'd like to go through two of these in more detail.

This is a situation a webmaster might find himself/herself in: I optimized this site using some sneaky techniques that are not in accordance with Google's Webmaster Guidelines. I got away with it for a while, and it helped me rank in second position for certain keywords. Then, suddenly, I got an email from Google saying my site had been banned from the index because of those techniques (in these emails there is always an example of one of the infractions found). I have now cleaned up the site, and after a few days it was back in the index.
Why on earth doesn't my site rank in the second position anymore, even though I've already paid for the sneaky techniques we used?

OK, before answering let me ask you a couple of questions:

  • Didn't you optimize your site with those techniques in order to artificially boost the ranking?
  • Didn't you think those techniques would work, at least in the short term?

So, if there has been spamming going on, we encourage a site that has gotten an email from Google to take this notification seriously. Many people clean up their sites after receiving a notification from us. But we must also take into account that besides the shady SEO techniques used on a particular site (for instance hidden text, redirecting doorway pages, etc.), there are often off-site SEO techniques used, such as creating artificial link popularity, in order to gain a high position in Google's SERPs.

So, to make it straightforward: once those manipulations to make a site rank unnaturally high are removed, the site gains the position it merits based on its content and its natural link popularity. Note that of course the ranking of your site also depends on other sites related to the same topic, and these sites might have been optimized in accordance with our guidelines, which might affect the ranking of your site.

Note that a site does not carry a stain or any residual negative effect from a prior breach of our Webmaster Guidelines once it has been cleaned up.

That is why we first and foremost recommend working hard on the content made for the audience of your site, as content is a decisive factor in building natural link popularity. We all know how powerful a strong natural link popularity can be.

Search quality, content quality and your visitor's experience.

During our conversations about search-related issues, another topic that came up frequently was landing pages and writing for search engines, which are often related when we consider organic search results.

So, think of your visitors who have searched for something with Google and have found your page. Now, what kind of welcome are you offering? A good search experience consists of finding a page that contains enough information to satisfy your original query.

A common mistake in writing optimized content for search engines is to forget about the user and focus only on that particular query. One might say, that's how the user landed on my page!

At the end of the day, exaggerating this attitude might lead to creating pages made only to satisfy that query, but with no actual content on them. Such pages often rely on techniques such as mere repetition of keywords and duplicate content, and offer very little value overall. They might be in line with the keywords of the query, but for your visitor, they're useless. In other words, you have written pages solely for the search engine and forgotten about the user. As a result, your visitor will find a page apparently on topic but totally meaningless.

These “meaningless” pages, artificially made to generate search engine traffic, do not represent a good search experience. Even if they do not employ other discouraged techniques, such as hidden text and links, they are made solely for the purpose of ranking for particular keywords, or a set of keywords, and do not actually offer a satisfying search result in themselves.

A first step in identifying whether you are causing a bad search experience for your visitors is to check that the pages they find are actually useful: they should have topical content that satisfies the query through which they were found, and be meaningful and relevant overall. You might want to start with the pages that are most frequently found and then extend your check to your entire site. To sum up, as general advice: even if you want to make a page that is easily found via search engines, remember that the users are your audience, and a page optimized for the search engine does not necessarily meet the user's expectations in terms of quality and content. So if you find yourself writing content for a search engine, ask yourself what the value is for the user!

Last week, I participated in the duplicate content summit at SMX Advanced. I couldn't resist the opportunity to show how Buffy is applicable to the everyday search marketing world, but mostly I was there to get input from you on the duplicate content issues you face and to brainstorm how search engines can help.

A few months ago, Adam wrote a great post on dealing with duplicate content. The most important things to know about duplicate content are:
  • Google wants to serve up unique results and does a great job of picking a version of your content to show if your site includes duplication. If you don't want to worry about sorting through duplication on your site, you can let us worry about it instead.
  • Duplicate content doesn't cause your site to be penalized. If duplicate pages are detected, one version will be returned in the search results to ensure variety for searchers.
  • Duplicate content doesn't cause your site to be placed in the supplemental index. Duplication may indirectly influence this, however, if links to your pages are split among the various versions, causing lower per-page PageRank.
At the summit at SMX Advanced, we asked what duplicate content issues were most worrisome. Those in the audience were concerned about scraper sites, syndication, and internal duplication. We discussed lots of potential solutions to these issues and we'll definitely consider these options along with others as we continue to evolve our toolset. Here's the list of some of the potential solutions we discussed so that those of you who couldn't attend can get in on the conversation.

Specifying the preferred version of a URL in the site's Sitemap file
One thing we discussed was the possibility of specifying the preferred version of a URL in a Sitemap file, with the suggestion that if we encountered multiple URLs that point to the same content, we could consolidate links to that page and could index the preferred version.

Providing a method for indicating parameters that should be stripped from a URL during indexing
We discussed providing this either in an interface such as Webmaster Tools or in the site's robots.txt file. For instance, if a URL contains session IDs, the webmaster could indicate the variable for the session ID, which would help search engines index the clean version of the URL and consolidate links to it. The audience leaned towards an addition to robots.txt for this.

Providing a way to authenticate ownership of content
This would provide search engines with extra information to help ensure we index the original version of an article, rather than a scraped or syndicated version. Note that we do a pretty good job of this now, and not many people in the audience mentioned this as a primary issue. However, the audience was interested in a way of authenticating content as an extra protection. Some suggested using the page with the earliest date, but creation dates aren't always reliable. Someone also suggested allowing site owners to register content, although that could raise issues as well, as non-savvy site owners wouldn't know to register content and someone else could take the content and register it instead. We currently rely on a number of factors, such as the site's authority and the number of links to the page. If you syndicate content, we suggest that you ask the sites that are using your content to block their version with a robots.txt file as part of the syndication arrangement, to help ensure your version is served in results.

Making a duplicate content report available for site owners
There was great support for the idea of a duplicate content report that would list pages within a site that search engines see as duplicate, as well as pages that are seen as duplicates of pages on other sites. In addition, we discussed the possibility of adding an alert system to this report so site owners could be notified via email or RSS of new duplication issues (particularly external duplication).

Working with blogging software and content management systems to address duplicate content issues
Some duplicate content issues within a site are due to how the software powering the site structures URLs. For instance, a blog may have the same content on the home page, a permalink page, a category page, and an archive page. We are definitely open to talking with software makers about the best way to provide easy solutions for content creators.

In addition to discussing potential solutions to duplicate content issues, the audience had a few questions.

Q: If I nofollow a substantial number of my internal links to reduce duplicate content issues, will this raise a red flag with the search engines?
The number of nofollow links on a site won't raise any red flags, but that is probably not the best method of blocking the search engines from crawling duplicate pages, as other sites may link to those pages. A better method may be to block pages you don't want crawled with a robots.txt file.
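For example, if your duplicate pages all live under a print-friendly directory (the path here is made up), a couple of lines in robots.txt will keep crawlers away from them:

  # Keep compliant crawlers out of the duplicate print-friendly copies
  User-agent: *
  Disallow: /print/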

Q: Are the search engines continuing the Sitemaps alliance?
We launched sitemaps.org in November of last year and have continued to meet regularly since then. In April, we added the ability for you to let us know about your Sitemap in your robots.txt file. We plan to continue to work together on initiatives such as this to make the lives of webmasters easier.

Q: Many pages on my site primarily consist of graphs. Although the graphs are different on each page, how can I ensure that search engines don't see these pages as duplicate since they don't read images?
To ensure that search engines see these pages as unique, include unique text on each page (for instance, a different title, caption, and description for each graph) and include unique alt text for each image. (For instance, rather than using alt="graph", use something like alt="graph that shows Willow's evil trending over time".)
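Putting that advice into markup, each graph page might look something like this (all values are illustrative):

  <title>Willow's evil trending over time</title>
  ...
  <h1>Graph: Willow's evil, by season</h1>
  <img src="/graphs/willow-evil-trend.png"
       alt="Graph that shows Willow's evil trending over time" />
  <p>Each page gets its own caption describing what this graph shows.</p>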

Q: I've syndicated my content to many affiliates and now some of those sites are ranking for this content rather than my site. What can I do?
If you've freely distributed your content, you may need to enhance and expand the content on your site to make it unique.

Q: As a searcher, I want to see duplicates in search results. Can you add this as an option?
We've found that most searchers prefer not to have duplicate results. The audience member commented that she may not want to get information from one particular site and would like other choices; in that case, though, other sites will likely not have identical information and will therefore still show up in the results. Bear in mind that you can add the "&filter=0" parameter to the end of a Google web search URL to see additional results which might be similar.

I've brought all the issues and potential solutions that we discussed at the summit back to my team and others within Google, and we'll continue to work on providing the best search results and expanding our partnership with you, the webmaster. If you have additional thoughts, we'd love to hear about them!



Today is Google Developer Day! We're hosting events for developers in ten cities around the world, as you can read about from Matt Cutts and on our Google Blog. Jonathan Simon and Maile Ohye, whom you have seen on this blog, at conferences, and in our discussion forum, are currently hanging out at the event in San Jose.

I've been at the Beijing event, where I gave a keynote about "Plumbing the Web -- APIs and Infrastructures" for 600 Chinese web developers. I talked about a couple of my favorite topics, Sitemaps and Webmaster Tools, and some of the motivations behind them. Then I talked a bit about consumer APIs and some of our backend infrastructures to support our platform.

Check out the video of my keynote on YouTube or see some of the other videos from the events around the globe.