
Google Webmaster Hangout Notes: 16th October 2015


In the Google Webmaster Hangout on 16th October, John Mueller discussed more about Google’s recent announcement on JavaScript-based websites, gave more detail on a few redirect queries, stated that 500 errors can affect ranking and reiterated that 302 redirects do pass PageRank.

Here are our notes with times and direct quotes from John where relevant. We’ve also included some further reading on crawling JavaScript/AJAX websites at the end.

 

Only disavow on the new domain when migrating

0:35: When you migrate a website to a new domain, you only need to disavow on the new domain (but disavow links pointing to both domains).
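
Side note: a disavow file is a plain text file uploaded for the property in Search Console, listing whole domains or individual URLs. As a purely illustrative sketch (the domains below are placeholders), a single file covering links that point at both the old and new domain might look like this:

# Disavow file uploaded for the new domain's Search Console property
# Links pointing at either the old or the new domain can be listed here
domain:spammy-directory.example
domain:paid-links.example
# Individual URLs can also be disavowed
http://low-quality-blog.example/old-domain-link-page.html
http://low-quality-blog.example/new-domain-link-page.html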

 

Crawl the site using a separate crawling tool to identify technical SEO issues

5:07: If you see Google crawling URLs you don’t recognize, you should look for issues which can be cleaned up. John advocates running a separate crawl of your website.
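
Side note: a dedicated crawling tool will do this far more thoroughly, but as a minimal sketch of the idea (Node 18+, TypeScript; the start URL is a placeholder), a script that follows internal links and logs every URL it finds might look like this:

// crawl.ts - naive breadth-first crawl of internal links (illustrative only)
const start = new URL("https://www.example.com/");
const seen = new Set<string>([start.href]);
const queue: string[] = [start.href];

async function crawl(limit = 50): Promise<void> {
  while (queue.length > 0 && seen.size <= limit) {
    const url = queue.shift()!;
    const res = await fetch(url, { redirect: "manual" });
    console.log(res.status, url);
    if (!res.headers.get("content-type")?.includes("text/html")) continue;
    const html = await res.text();
    // naive href extraction; a real crawler should parse the HTML properly
    for (const match of html.matchAll(/href="([^"#]+)"/g)) {
      try {
        const link = new URL(match[1], url);
        if (link.hostname === start.hostname && !seen.has(link.href)) {
          seen.add(link.href);
          queue.push(link.href);
        }
      } catch {
        // ignore hrefs that are not valid URLs
      }
    }
  }
}

crawl().catch(console.error);

Comparing the URLs a crawl like this finds against the URLs Google reports crawling is a quick way to spot orphaned, parameterised or otherwise unexpected URLs worth cleaning up.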

 

Lots of URLs redirecting to one unrelated URL may be treated as a soft 404

07:52: Too many URLs redirecting to a single unrelated URL (such as the homepage) may be treated as a soft 404 instead, as having lots of old content redirected to a page that is not a direct equivalent is confusing for users and Google can’t “equate” the pages to anything.

 

Google don’t use Google Analytics data for ranking

14:07: John stated that Google don’t use Google Analytics data for ranking.

 

CTR from search results doesn’t affect site-wide rankings

14:15: Google do look at click-throughs from search results to assess their algorithms, but this doesn’t impact rankings on a site or page level:

“We do sometimes use some information about clicks from search when it comes to analyzing algorithms so when we try to figure out which of these algorithms is working better, which ones are causing problems [and] which ones are causing improvements for the search results, that’s where we would look into that. But it’s not something that you would see on a per-site or per-page basis.”

 

URLs in a site migration are processed one-by-one, so redirect correctly

14:38: Google process every URL on a one-by-one basis as they go along, so if you change site architecture (URLs) it’s important to set up redirects properly and make sure they’re working correctly for every page. John explained:

“If you’re changing your site within your website then we do this on a per-URL basis kind of as we go along… some of those [changes] will be picked up faster, some of them take a lot longer to be crawled and essentially that’s kind of a flowing transition from one state to the next one…

“The important part here is of course that you do make sure to follow our guidelines for site restructuring [and] that you set up redirects properly so that we can understand that this old page is equivalent to this new page and that’s how we should treat them from our side.”
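
Side note: one practical way to check this is to request every old URL and confirm it returns a single 301 pointing at the expected new URL. A rough sketch (the URL mapping below is a placeholder):

// check-redirects.ts - verify old URLs 301 to their new equivalents (illustrative only)
const redirects: Record<string, string> = {
  "https://www.example.com/old-page": "https://www.example.com/new-page",
  "https://www.example.com/old-category/": "https://www.example.com/new-category/",
};

async function checkRedirects(): Promise<void> {
  for (const [oldUrl, expected] of Object.entries(redirects)) {
    const res = await fetch(oldUrl, { redirect: "manual" });
    const location = res.headers.get("location") ?? "(none)";
    const ok = res.status === 301 && new URL(location, oldUrl).href === new URL(expected).href;
    console.log(ok ? "OK  " : "FAIL", oldUrl, "->", res.status, location);
  }
}

checkRedirects().catch(console.error);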

 

Recommended page fetch time is less than one second

17:42: John recommends a page fetch time of less than one second, as sites that respond quicker will get a higher crawl rate. This won’t directly help you rank better but might help get you ranked sooner.
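
Side note: you can roughly approximate the fetch time by timing a plain request for the HTML (the URL below is a placeholder):

// fetch-time.ts - rough check of HTML download time (illustrative only)
async function timeFetch(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url);
  await res.text(); // include the body download, not just the headers
  const ms = Math.round(performance.now() - start);
  console.log(`${url} -> ${res.status} in ${ms}ms${ms > 1000 ? " (over the ~1s guideline)" : ""}`);
}

timeFetch("https://www.example.com/").catch(console.error);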

 

302 (temporary) redirects are treated as 301s (permanent) after a while

22:40: 302 (temporary) redirects will be ignored at first, as Google will assume you want the original URL to still be indexed/ranked. After a while they will treat it as a 301 instead and drop the original page in favour of the redirect target. John also reiterated that 302 redirects (as well as 301s) pass PageRank.
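
Side note: on the wire the only difference between the two is the status code; how Google treats them over time is what John describes above. A minimal Node/TypeScript sketch (paths are placeholders):

// redirects.ts - minimal server showing a permanent vs temporary redirect (illustrative only)
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url === "/old-page") {
    // permanent: the target should eventually replace this URL in the index
    res.writeHead(301, { Location: "/new-page" });
  } else if (req.url === "/sale") {
    // temporary: the original URL should normally stay indexed
    res.writeHead(302, { Location: "/seasonal-sale" });
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.write("<p>Hello</p>");
  }
  res.end();
}).listen(8080);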

 

Recommendations for out of stock and expired pages

28:16: How to treat out of stock or expired pages depends on the situation, but 301 redirecting to a relevant alternative product, or a category page with good alternatives, is fine. Otherwise a 410/404 is best.

Side note: find out more about handling expired content at our guide to Managing Churn and Change for Optimum Search Performance.
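
Side note: in practice this tends to be a per-product decision: redirect where a close alternative exists, return a 410 where it doesn't. A rough sketch of that logic (the product paths and mapping are placeholders):

// expired-products.ts - 301 to a relevant alternative if one exists, otherwise 410 (illustrative only)
import { createServer } from "node:http";

// placeholder mapping of retired product paths to close alternatives
const alternatives: Record<string, string | undefined> = {
  "/products/blue-widget-2014": "/products/blue-widget-2015", // direct replacement
  "/products/rare-one-off": undefined,                        // nothing comparable: return 410
};

createServer((req, res) => {
  const path = req.url ?? "/";
  if (path in alternatives) {
    const target = alternatives[path];
    if (target) {
      res.writeHead(301, { Location: target }); // relevant alternative or category page
    } else {
      res.writeHead(410); // permanently gone, no equivalent to redirect to
    }
    return res.end();
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<p>Product page</p>");
}).listen(8080);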

 

500 errors can impact ranking

31:50: While you won’t be penalized directly for returning too many 500 errors, they can still impact ranking. They might result in a lower crawl rate, as Google might take server errors to mean that they are crawling your website too quickly. If the site persistently sends 500 errors back to Google, then they will be treated like 404s and the URLs will be dropped from search results.

Also, don’t attempt to get pages that are returning 500 errors dropped from the index by using a meta noindex: Google won’t see the content of a 500 page, so won’t know to drop them. Persistent 500 errors will eventually get the URLs dropped anyway, with the same end result as a meta noindex, but you still risk having your site crawled at a lower rate, so this isn’t a good method either.

If you do want these URLs removed, it’s best to get to the root of why they are returning 500 errors and then noindex them in the normal way, or make them return a 404/410 instead.
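
Side note: once the underlying errors are fixed, both options are straightforward to serve. A minimal sketch (paths are placeholders) of a crawlable noindex versus a 410:

// retire-urls.ts - serve a crawlable noindex, or a 410, instead of a 500 (illustrative only)
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url === "/retired-but-kept-for-users") {
    // option 1: return 200 so Google can see the page, and noindex it in the normal way
    res.writeHead(200, { "Content-Type": "text/html", "X-Robots-Tag": "noindex" });
    return res.end('<html><head><meta name="robots" content="noindex"></head><body>...</body></html>');
  }
  if (req.url === "/retired-completely") {
    // option 2: tell Google the page is gone
    res.writeHead(410);
    return res.end();
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<p>Normal page</p>");
}).listen(8080);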

 

Removed URLs will disappear from results after one day (but they are still technically indexed)

39:08: Using the URL removal tool will mean the URLs should disappear from results within a day. However they are still technically indexed, and recrawled periodically, so they will appear again based on the latest crawl data when the removal expires*.

Later in the hangout, John clarified that these URLs are just stripped out of the search results at the last minute, and are still included in other calculations.

*Side note: Google recently changed the language on the URL removal tool to clarify that it only temporarily removes URLs from the index, which contradicts the previously-held popular belief that this tool could be used to permanently delete URLs from the index.

 

JavaScript-based sites should be fine now for SEO

45:22: Google’s recent announcement about JavaScript was essentially that they now consider JavaScript-based sites to be fine for SEO:

“If you have a normal website that uses JavaScript to create your content, you don’t need to use the AJAX crawling scheme anymore. We can pretty much crawl and process most types of JavaScript setup [and] most types of JavaScript-based sites and we can pick that up directly for our index.

“So that’s essentially what this announcement was: we’re not recommending AJAX crawling setup anymore. We’re still going to respect those URLs [and] we’ll still be able to crawl and index those URLs, but in the future we recommend that you either do pre-rendering directly, in the sense that users and search engines will see a pre-rendered page, which is also something that a number of third-party services do, or use JavaScript directly as you have it and in general we’ll be able to pick that up.”

John suggests pre-rendering as an alternative; however, in Google’s official announcement they said that websites shouldn’t pre-render pages only for Google, and advised caution when deciding how this will affect user experience:

“If you pre-render pages, make sure that the content served to Googlebot matches the user’s experience, both how it looks and how it interacts. Serving Googlebot different content than a normal user would see is considered cloaking, and would be against our Webmaster Guidelines.”

However, John also discussed some issues you might have with this approach and said there are still some things Google might not pick up yet. Fetch and Render in Search Console is the best way to check this.
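
Side note: “a normal website that uses JavaScript to create your content” covers something as simple as content fetched and injected into the DOM on load, which is the kind of setup the announcement says Google can now generally process. A purely illustrative client-side sketch (the endpoint and element ID are placeholders):

// app.ts - content generated client-side after the initial HTML loads (illustrative only)
// the initial HTML only needs an empty container such as <div id="content"></div>
async function render(): Promise<void> {
  const container = document.getElementById("content");
  if (!container) return;
  // placeholder endpoint returning e.g. { title: string, body: string }
  const data = await (await fetch("/api/page-content")).json();
  container.innerHTML = `<h1>${data.title}</h1><p>${data.body}</p>`;
}

document.addEventListener("DOMContentLoaded", () => {
  render().catch(console.error);
});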

 

Recommendations for hashbang URLs

52:20: In response to the question: “I want to index my page without the hashbang and not giving a HTML snapshot to search engines so after yesterday’s announcement should I work on server delivery from HTML snapshot or what?”

John stated:

“The first thing that needs to happen is that we need to have individual URLs so if you have the current URLs with the hashbang, then that’s something you might want to migrate to a cleaner, normal-looking URL structure using HTML5, pushState, for example as a way to navigate using JavaScript within a website. So that’s something that’s probably a fairly big step that needs to be done at some point.

“With regards to serving HTML snapshots or not, I think that’s something you probably want to look at in a second step. If you’re doing that at the same time then use the fetch and render tool in Search Console to see how Google is able to pick up the content without the HTML snapshot and also maybe double-check to see where else you need to have your pages visible. If you’re using a special social network that you need to have your content visible in, then double-check to see whether they can pick up the right content from those pages as well.

“If you need to use HTML snapshots or pre-rendering, for any of those pages (and) for any of those places where you want your content shown then Google will be able to use that as well.”

Side note: John seemed to be saying that if you have a hash-only URL you need to migrate to a friendly, indexable URL structure. We assume he meant URLs that use a plain hash fragment (with no exclamation mark), which is the starting point described in the original escaped fragment specification. For example:

https://www.example.com/ajax.html#key1=value1&key2=value2

In other guides, Google have said that URLs with a full hashbang (a hash followed by an exclamation mark) will now be indexed. For example:

https://www.example.com/ajax.html#!key1=value1&key2=value2

So we assume this is also considered a friendly format.

It’s just the hash-only (#) URLs that need to be changed if you are relying on the escaped fragment solution, which will eventually be deprecated.
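
Side note: the HTML5 pushState approach John mentions replaces the #! fragment with a real path that the server can also respond to directly. A minimal client-side sketch (the endpoint, element ID and paths are placeholders):

// navigation.ts - clean URLs via history.pushState instead of #! fragments (illustrative only)
async function loadPage(path: string, push = true): Promise<void> {
  // placeholder: fetch a content fragment for the requested path
  const html = await (await fetch(`/fragments${path}`)).text();
  document.getElementById("content")!.innerHTML = html;
  if (push) history.pushState({ path }, "", path); // URL becomes e.g. /products/blue-widget, no hashbang
}

// intercept internal link clicks and navigate without a full reload
document.addEventListener("click", (event) => {
  const link = (event.target as HTMLElement).closest("a");
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();
  loadPage(link.pathname).catch(console.error);
});

// keep back/forward buttons working
window.addEventListener("popstate", (event) => {
  const path = (event.state as { path?: string } | null)?.path ?? location.pathname;
  loadPage(path, false).catch(console.error);
});

For this to work for search engines, the server also needs to return the full page when one of those clean paths is requested directly.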

 

Further reading on JavaScript/AJAX and Google:

Google Search Console Help: Guide to AJAX crawling for webmasters and developers
How to Crawl AJAX Escaped Fragment Websites
Taming Parameters: Duplication, Design, Handling and Useful Tools
Angular JS and SEO

Tristan Pirouz

Marketing Strategist

Tristan is an SEO enthusiast, strategist, and the former Head of Marketing at Lumar.
