
Google Webmaster Hangout Notes: Tuesday 17th May


Notes from the Google Webmaster Hangout on 17th May 2016, in which John Mueller discusses a wide range of SEO topics including crawl rates, JavaScript-based links and pages, affiliate links, and disallowed scripts.

 

Crawl Rate Limited by Server Performance

Crawl rate sits somewhere between the minimum set of pages Google wants to update and the maximum number of pages it thinks can safely be crawled without impacting server performance. Any newly discovered pages can be crawled provided there is some remaining budget, but they might be queued up for the next day.

 

For Long Term Temporary URL Migrations, Use 301

For a temporary URL migration over a long period, such as 6 months, a 301 redirect is better than a 302, as it speeds up the move in each direction.
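At the HTTP level the difference is just the status code; a sketch with hypothetical URLs:

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/new-page/

HTTP/1.1 302 Found
Location: https://www.example.com/new-page/

The 301 tells Google the new URL should replace the old one in the index, while a 302 suggests the old URL should be kept.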

 

Don’t Use Noindex on Canonicalised Pages

Don't combine a noindex with a canonical tag pointing to a different page, as the two directives send Google conflicting signals.
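As an illustration, the conflicting combination to avoid would look like this (a sketch with hypothetical URLs):

<!-- On https://www.example.com/page-b: avoid this combination -->
<meta name="robots" content="noindex">
<link rel="canonical" href="https://www.example.com/page-a">

The canonical says "treat this page as page-a", while the noindex says "drop this page", so Google has to guess which signal to trust.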

 

High Volumes of Sitewide Links Confuse Google

John recommends against high volumes of sitewide navigation links, which make it harder for Google to understand the connections between pages.

 

JavaScript Links Are Delayed But Otherwise Equivalent to HTML Links

John says that JavaScript-based links need to be rendered first and will take slightly longer to pick up, maybe a day or so, but are otherwise treated as equivalent to static HTML links.
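For example, the static link below is seen on the initial crawl, while the JavaScript-inserted one is only discovered once the page has been rendered (a sketch with hypothetical URLs):

<!-- Static HTML link: picked up on the initial crawl -->
<a href="/category/shoes/">Shoes</a>

<script>
  // JavaScript-inserted link: only visible after rendering,
  // so it may be picked up a day or so later
  var link = document.createElement('a');
  link.href = '/category/boots/';
  link.textContent = 'Boots';
  document.body.appendChild(link);
</script>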

 

Google Accepts Canonical Tags Unless They Seem Wrong

Google will try to follow canonical directives by default, but will ignore a canonical tag if there are significant content differences between the pages, or if a lot of URLs canonicalise to the same page, and it looks like a mistake.
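A typical canonical tag sits in the <head> of the duplicate URL (a sketch with hypothetical URLs):

<!-- On https://www.example.com/product?sort=price -->
<link rel="canonical" href="https://www.example.com/product">

Google treats this as a strong hint rather than a command, which is why it can be overridden when the two pages clearly aren't equivalent.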

 

Disallow Overrides Parameter Removal

URL Parameter settings in Search Console are a hint for Google, which it validates periodically. A robots.txt disallow overrides the parameter settings, so it's better to use the parameter tool to consolidate duplicate pages rather than a disallow.
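For example, a rule like the one below stops Googlebot from crawling the parameterised URLs at all, so the URL Parameter settings never get applied and the duplicate signals can't be consolidated (a sketch assuming a hypothetical sessionid parameter):

# robots.txt: blocks crawling of the parameterised URLs entirely,
# overriding any URL Parameter settings in Search Console
User-agent: *
Disallow: /*sessionid=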

 

Content Loaded Via Onclick Won’t Be Indexed

Google won't see any content which is loaded via an onclick event, but it will find URLs inside the JavaScript code itself and try to crawl them. Content has to be loaded onto the page by default, without requiring an onclick, for Google to see it.
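A sketch of the difference, with hypothetical markup and URLs: the click-loaded reviews below would not be indexed, although the URL string inside the script may still be discovered and crawled.

<!-- Loaded by default: Google can see and index this -->
<div id="description">Full product description.</div>

<button onclick="loadReviews()">Show reviews</button>
<script>
  function loadReviews() {
    // Fetched only after a user click, so Google won't index the result,
    // though the URL string below may still be found and crawled
    fetch('/reviews/product-123.html')
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('description').insertAdjacentHTML('afterend', html);
      });
  }
</script>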

 

Mobile Sites Don’t Need to be Indexed

Separate mobile sites should canonicalise to the desktop pages, so you don't need to submit them to Google via a Sitemap, but it's still worth adding them to Search Console.
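The standard annotations for a separate mobile site look like this (a sketch, assuming a hypothetical m-dot subdomain):

<!-- On the desktop page, https://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page, https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page">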

 

Google Follows Secondary Links If the First Is Nofollowed

If you have two links on a page pointing to the same target, but one is nofollowed, Google will simply crawl via the followed link.
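For example (hypothetical URLs):

<!-- Both links point to the same page -->
<a href="/widgets/" rel="nofollow">Widgets</a>  <!-- nofollowed: not used for crawling -->
<a href="/widgets/">Widgets</a>                 <!-- followed: Google crawls via this link -->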

 

Google Recommends JavaScript Pages over Escaped Fragment

Google recommends against using the escaped fragment (_escaped_fragment_) solution to get JavaScript pages crawled; Google should be able to crawl the pages on normal, clean URLs.
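In practice that means serving content on clean URLs rather than hashbang (#!) URLs, for example by using the History API (a sketch with hypothetical URLs):

<script>
  // Instead of a hashbang URL like /products#!category=shoes,
  // update the address to a clean, crawlable URL the server can also render
  history.pushState({ category: 'shoes' }, '', '/products/shoes/');
</script>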

 

Affiliate Links are OK

Google doesn't dislike affiliate-monetised sites, provided they are good-quality sites with some original content.

 

Disallowed Scripts Don’t Generally Cause Penalties

Disallowed scripts which are flagged as errors are only an issue if they affect the display of content you want indexed; otherwise it's fine to leave them disallowed.
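If a disallowed script does affect content you want indexed, the fix is to let Googlebot fetch it; a sketch, assuming the scripts live under a hypothetical /assets/js/ path:

# robots.txt: allow Googlebot to fetch scripts needed to render indexable content
User-agent: Googlebot
Allow: /assets/js/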

Sam Marsden

SEO & Content Manager

Sam Marsden is Lumar's former SEO & Content Manager and currently Head of SEO at Busuu. Sam speaks regularly at marketing conferences, like SMX and BrightonSEO, and is a contributor to industry publications such as Search Engine Journal and State of Digital.
