
Google Webmaster Hangout Notes: June 30th 2017


Notes from the Google Webmaster Hangout on the 30th of June, 2017.

 

Quality is Measured at a Page and Site Level

Google looks at quality on a per page basis as much as possible, but they also look at the overall picture which is affected by the individual pages.

 

Noindexed Pages Don’t Impact Site Quality

Site quality is only measured on indexable pages. If the quality of pages cannot be improved, you can use a noindex on low quality pages.
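As a minimal sketch of that approach, a robots meta tag in the page's head keeps a low-quality page out of the index while still allowing it to be crawled:

```html
<!-- In the <head> of a low-quality page that cannot be improved: -->
<meta name="robots" content="noindex">
```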

 

Google Doesn’t Care About HTML to Text Ratio

HTML to text ratio doesn’t matter to Google, as long as the HTML page doesn’t exceed their size limit (~100mb), but excessive HTML can result in slow pages, which affects usability.

 

Use Canonical Tags to Test a New Website Before Migration

When you want to test a new website in parallel with the old one, you can launch the new site on a sub-domain and canonicalise the new pages to old pages.
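A sketch of what that looks like, using hypothetical domains (a test sub-domain and the live site):

```html
<!-- On each page of the test site, e.g. https://new.example.com/page,
     point the canonical at the equivalent live page so the test site
     is not indexed in place of the old one. -->
<link rel="canonical" href="https://www.example.com/page">
```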

 

It May Take Weeks to Recover Rankings After a Manual Action Penalty

It may take some time for rankings to recover after a manual action reconsideration request has been accepted. This can take days or weeks, particularly if the entire site was removed from Google’s index.

 

Content in Iframes May be Indexed on the Embedding Page

Pages embedded within an iframe on another page may be indexed as content on the embedding page, as that content is visible when the page is rendered. You can use the X-Frame-Options HTTP header to prevent browsers from embedding a page, and Google will respect this.
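For example, the server can send the standard X-Frame-Options header on its responses (how you configure this depends on your server):

```http
HTTP/1.1 200 OK
X-Frame-Options: DENY
```

Use the value SAMEORIGIN instead of DENY if you want to allow framing only from your own origin.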

 

Add a Header to Specify You Don’t Want Content Embedded in an iframe

You can add an X-Frame-Options header to your page’s HTTP response, which will let browsers know that you don’t want the content to be embeddable in an iframe.

 

Internal Search Pages Should Not Be Indexable

Google recommends you block internal search pages from being indexed, as they can greatly increase the number of pages indexed for a site and can be inefficient for crawling and indexing.
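A common way to do this is via robots.txt. A minimal sketch, assuming internal search results live under a /search path (the path is hypothetical; adjust it to your site):

```
# robots.txt
User-agent: *
Disallow: /search
```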

 

Google Follows 5 Steps in a Redirect Chain at a Time

Googlebot will follow up to five redirect steps at a time, and crawl the remaining steps at a later date to find where the redirect ends up, but Google recommends redirecting directly from the original URL to the final URL.
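A sketch of flattening a chain, using nginx and hypothetical paths: rather than chaining /old-page to /interim-page to /new-page, redirect the original URL straight to the final one.

```nginx
# Redirect the original URL directly to the final destination,
# avoiding intermediate redirect steps.
location = /old-page {
    return 301 /new-page;
}
```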

 

Keep Old Domains Redirecting Following a Domain Migration

John recommends retaining old domains following a domain migration and keeping the redirects active for as long as possible.

 

Google Recognises When a Different Site is Launched on an Old Domain

When a new site appears on a domain that was previously used by a different site or company, Google attempts to recognise it as a different site.

 

Links Within Primary Content Provide More Context but Less Weight than Sitewide Links

Googlebot differentiates boilerplate content in headers, sidebars and footers from primary content for indexing. Links within the primary content provide more context than sitewide links, but sitewide links pass more weight.

 

Structured Data Shouldn’t Differ Between HTML and Javascript

Google first crawls and indexes the raw HTML, and then the rendered HTML. If structured data differs between the two, it could be confusing for Google.
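One way to keep the two consistent is to serve the structured data in the raw HTML and leave it untouched by JavaScript. A sketch with made-up values:

```html
<!-- Keep this block in the raw HTML served to crawlers, rather than
     injecting or altering it with JavaScript, so the raw and rendered
     versions of the structured data match. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```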

 

Googlebot Crawls URLs in Javascript Code But Treats Them as Nofollow

Googlebot crawls URLs found in JavaScript code and automatically treats them as nofollow. If the JavaScript modifies the HTML to add links, Google will respect a nofollow attribute on those links.
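A sketch of the two cases, with hypothetical URLs:

```html
<!-- A bare URL inside JavaScript may be discovered and crawled,
     but is treated as nofollow automatically. -->
<script>
  var next = "https://www.example.com/discovered-in-js";
</script>

<!-- A link that JavaScript writes into the DOM can carry an explicit
     nofollow attribute, which Google respects once the page is rendered. -->
<script>
  document.body.insertAdjacentHTML(
    "beforeend",
    '<a href="https://www.example.com/page" rel="nofollow">Page</a>'
  );
</script>
```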

Sam Marsden

SEO & Content Manager

Sam Marsden is Lumar's former SEO & Content Manager and currently Head of SEO at Busuu. Sam speaks regularly at marketing conferences, like SMX and BrightonSEO, and is a contributor to industry publications such as Search Engine Journal and State of Digital.
