What’s wrong with Google Sitemaps
Last Friday the whole blogosphere seemed abuzz with the news that Google had unveiled its new Google Sitemaps service, a free inclusion service in which you publish an XML file listing your site's pages so that Google's spider gets a better sense of what to crawl on your site. This is good news, especially for dynamic sites that aren't getting fully indexed. I appreciate Google once again showing its thought leadership. Not only is Google giving webmasters a new way to relay their site structure to its spiders, but it's also sharing this new technology with the other search engines by releasing the protocol and code as open source.
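For readers who haven't yet looked at the protocol, a Sitemaps file is simply an XML list of URLs with optional metadata such as last-modified date, change frequency, and relative priority. Here's a minimal sketch (the URLs are made up, and the namespace is the one Google documented for the initial 0.84 release of the protocol, so check the official docs before copying it):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-06-03</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/products/black-is-back-tshirt</loc>
    <lastmod>2005-05-20</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Note that every entry is just a URL plus crawl hints; there is nothing in the schema that lets you say "this URL is the same page as that one."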
This all sounds wonderful, but there are two major problems with Google's approach.
- First, it doesn't solve the duplicate-pages problem that a great many dynamic sites have. Even the Google Store suffers from this (which I blogged about previously, but here's a more recent example of a Google Store product page being duplicated 5 times in Google's index). The Google Sitemaps protocol provides no way for webmasters to convey which pages are duplicates of other pages. A site that gets crawled incorrectly by Googlebot, because superfluous or non-essential parameters/flags appear in the URLs of links on its pages, will continue to get crawled incorrectly. An "Official Google Sitemaps Team Member" states that the sitemap XML file will merely augment the regular crawl, not replace existing pages in the index:
This program is a complement to, not a replacement of, the regular crawl. The benefit of Sitemaps is twofold:
— For links we already know about through our regular spidering, we plan to use the metadata you supply (e.g., lastmod date, changefreq, etc.) to improve how we crawl your site.
— For the links we don't know about, we plan to use the additional links you supply to increase our crawl coverage.

The high-level Google engineer who goes by GoogleGuy in the online forums explains Google Sitemaps this way:
Imagine if you have pages A, B, and C on your site. We find pages A and B through our normal web crawl of your links. Then you build a sitemap and list the pages B and C. Now there’s a chance (but not a promise) that we’ll crawl page C. We won’t drop page A just because you didn’t list it in your sitemap. And just because you listed a page that we didn’t know about doesn’t guarantee that we’ll crawl it. But if for some reason we didn’t see any links to C, or maybe we knew about page C but the url was rejected for having too many parameters or some other reason, now there’s a chance that we’ll crawl that page C.
So, the way I read GoogleGuy’s explanation, if pages A and C are essentially duplicates of each other, with A containing an additional superfluous parameter in its URL (like sortby=default or lang=english), then BOTH could end up in Google’s index. Thus, Google Sitemaps won’t reduce the amount of duplication in Google’s index; in fact, I believe it will increase it.
Duplicate pages on their own may not sound like a problem for webmasters so much as for Google itself, which has to dedicate additional resources to maintaining all this redundant content in its index. But duplication has serious implications for webmasters too, because it results in PageRank dilution: multiple versions of a page split up the "votes" (links) and the PageRank score that a single version of the page would otherwise aggregate.
- This brings me to the second, related problem with Google Sitemaps: it does nothing to alleviate the phenomenon of PageRank dilution. PageRank dilution results in lower PageRank, which in turn results in lower rankings. For example, the above-mentioned Google Store product page (the "Black is Back T-Shirt") is in Google's index 5 times instead of just once, so each of those 5 variations earns only a fraction of the total PageRank it could have earned if all the links pointed to a single "Black is Back T-Shirt" page. Google Sitemaps needs to provide a way to convey, or to sync up with, the site's hierarchical internal linking structure, so that it's clear which pages should get what share of the PageRank flowing into the site's home page. Since the primary holder of PageRank is the home page (that is, after all, the page most everyone links to), it's up to the site's internal linking hierarchy to pass the home page's PageRank along to the rest of the site. As such, a page that is 2 clicks away from the home page will receive a much larger share of that PageRank than a page that is 5 clicks away.
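To make the dilution effect concrete, here is a toy illustration of my own (a simplified power-iteration PageRank, not anything from Google's documentation). It compares two tiny link graphs: one where ten external pages all link to a single canonical product URL, and one where the same ten links are spread across five duplicate URLs.

```python
# Toy illustration of PageRank dilution using a simplified power iteration.
# Rank from pages with no outlinks is simply not redistributed here, which
# is a crude shortcut but fine for the comparison we care about.

DAMPING = 0.85

def pagerank(graph, iterations=50):
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outlinks in graph.items():
            for target in outlinks:
                new_rank[target] += DAMPING * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Case 1: ten external pages all link to one canonical product URL.
consolidated = {f"ext{i}": ["product"] for i in range(10)}
consolidated["product"] = []

# Case 2: the same ten links are split across five duplicate URLs
# (the same page reached with different superfluous parameters).
diluted = {f"ext{i}": [f"product?dup={i % 5}"] for i in range(10)}
for d in range(5):
    diluted[f"product?dup={d}"] = []

print(pagerank(consolidated)["product"])       # the canonical page collects all the link value
print(pagerank(diluted)["product?dup=0"])      # each duplicate collects only about a fifth of it
```

The exact numbers don't matter; what matters is that the single canonical URL ends up with roughly five times the score of any one of its duplicates.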
Here's how I suggest both of the above issues be rectified: by extending robots.txt with some additional directives (a rough sketch follows this list) that specify:
- which parameter in a dynamic URL is the “key field”
- which parameter is the product ID and which is the category ID (specifically for online catalogs)
- which parameters are superfluous or don't significantly vary the content displayed
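To be clear, no such directives exist in robots.txt today; the following is purely a hypothetical sketch of what this extension might look like for an online catalog, and the directive names are my own invention:

```
# Hypothetical robots.txt extensions -- not part of any current standard
User-agent: Googlebot

# The parameter that uniquely identifies the content (the "key field")
Key-Parameter: productid

# Catalog-specific hints
Product-Parameter: productid
Category-Parameter: categoryid

# Parameters that don't significantly vary the content and can be ignored
Ignore-Parameter: sortby
Ignore-Parameter: lang
Ignore-Parameter: sessionid
```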
Armed with this information, Googlebot would be able not only to eliminate duplicate pages but also to intelligently choose the most appropriate version to keep in its index, and then associate with that page the PageRank of ALL versions of the page. The days of session IDs killing a site's Google visibility would be over! Google admits in its Sitemaps FAQ that session IDs are still a problem even with the advent of Google Sitemaps:
Q: URLs on my site have session IDs in them. Do I need to remove them?
Yes. Including session IDs in URLs may result in incomplete and redundant crawling of your site.
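For what it's worth, collapsing these duplicates is not hard once a crawler has the parameter hints I'm proposing. Here is a rough Python sketch of the idea (the parameter names and URLs are hypothetical, and this is my illustration, not anything Google has announced):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Hints a crawler could read from the hypothetical robots.txt directives above.
IGNORE_PARAMS = {"sortby", "lang", "sessionid"}  # superfluous / session parameters

def canonicalize(url):
    """Strip superfluous parameters so duplicate URLs collapse to one canonical form."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() not in IGNORE_PARAMS]
    kept.sort()  # stable parameter order, so equivalent URLs compare equal
    return urlunparse(parts._replace(query=urlencode(kept)))

urls = [
    "http://www.example-store.com/item?productid=42&sortby=default",
    "http://www.example-store.com/item?productid=42&lang=english&sessionid=abc123",
    "http://www.example-store.com/item?sortby=price&productid=42",
]

# All three variants collapse to a single canonical URL, so their inbound
# links (and thus their PageRank) could be credited to one page.
print({canonicalize(u) for u in urls})
```

With every variant mapped to one URL, a crawler could index a single copy and credit it with the links pointing at any of the variants, which is exactly the consolidation the current Sitemaps protocol has no way to express.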
Remember, getting indexed only gets you to the party; it doesn't mean you're going to be popular at the party. Google Sitemaps may help you get more pages indexed, but if those pages all have a PageRank score of 0, then what was the point? It'll be like sitting along the wall the whole time with no one asking you to dance!
GravityStream, our SEO proxy technology (the concept of SEO proxies is explained in my article in Catalog Age last October), deals with PageRank dilution by distilling the URLs in links down to their lowest common denominator and replacing them on the proxy. We've found that even as Googlebot gets more aggressive about spidering dynamic sites with complex URLs and starts indexing one of our clients' sites more fully, our proxy still has a major leg up on the native site it's proxying. For example, our GravityStream proxy of PETsMART.com is #1 in Google for "best pet toys," yet the corresponding page on the native PETsMART.com site is nowhere in the first 10 pages of results even though it is indexed. Until Google extends Google Sitemaps to deal with PageRank dilution, I'd expect a GravityStream proxy to keep trumping a native site, even one using Google Sitemaps. That means that, despite Google Sitemaps, GravityStream still plays an important role for online retailers. Nonetheless, it's my sincere hope that Google takes my feedback on board and reworks the protocol!