
Google Webmaster Hangout Notes: October 1st 2019


Notes from the Google Webmaster Hangout on the 1st of October 2019.
 

Links in Footnotes Are Seen Differently Than Links Included Within Text

Google views links in footnotes differently from links within the main content of a page. In-content links are surrounded by text that provides extra context, which makes it easier for Google to understand what the linked page is about. Footnote links, which typically lack descriptive anchor text, are treated as something separate from the content they support. John recommends placing links within the content itself, where users can easily find and use them.
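
As a rough, hypothetical illustration (the page and URLs below are made up), an in-content link carries descriptive anchor text and surrounding context, while a footnote-style link usually does not:

    <!-- In-content link: the anchor text and surrounding sentence give Google context -->
    <p>Read our <a href="/technical-seo-guide">technical SEO guide</a> for the full crawl setup.</p>

    <!-- Footnote-style link: a bare reference with no descriptive anchor text -->
    <p>[1] <a href="https://example.com/technical-seo-guide">https://example.com/technical-seo-guide</a></p>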


 

Meta Robots Max Snippet Also Applies to Featured Snippet Length

The meta robots max-snippet directive tells Google the maximum number of characters a text snippet should contain, and it applies wherever the snippet is shown, whether in a featured snippet or a standard result. One exception John mentioned is that it does not apply if you are using structured data to trigger a specific rich result type.
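
As a minimal sketch, the directive sits in a robots meta tag in the page head; the 160-character limit below is just an example value:

    <!-- Ask Google to limit any text snippet for this page to 160 characters -->
    <meta name="robots" content="max-snippet:160">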


 

Make Sure You Are Not Blocking Featured Snippet Results with the Meta Max Snippet Tag

If you want your page to appear as a featured snippet but set its max-snippet length to a value as low as one character, the page is unlikely to be shown as a featured snippet, because that limit leaves Google too little text to display.
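
For example, a value as restrictive as the one below would leave Google almost nothing to build a preview from, so the page would most likely only appear as a standard result:

    <!-- One character is not enough for a useful preview or a featured snippet -->
    <meta name="robots" content="max-snippet:1">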


 

Using the Data-nosnippet Tag Will Not Affect Rankings for Content

The data-nosnippet attribute lets you specify which parts of a page's content you don't want shown in a search result description, and it applies to every place results are shown. It will not affect how the content ranks, only what is shown within search result previews.
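
As a minimal sketch, the attribute is added to an HTML element (such as a span, div or section) wrapping the text you want to exclude; the wording below is purely illustrative:

    <p>
      Delivery usually takes two to three working days.
      <!-- The sentence below stays on the page but is excluded from search result snippets -->
      <span data-nosnippet>Prices shown on this page may be out of date.</span>
    </p>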


 

Translated Content Is Treated as a Completely New Piece of Content

Content that has been translated by a translator is treated as unique content rather than spun content, because the words themselves are different once translated from one language to another.


 

Google Has a List of ccTLDs That It Treats as Generic TLDs

To find out whether your website's ccTLD is treated as a generic top-level domain, John recommends opening the International Targeting settings in Google Search Console and checking whether you are able to set a target country, which is only possible for domains Google treats as generic.


 

Blocking Googlebot by IP Is the Best Way to Prevent Google From Crawling Your Site While Allowing Other Tools to Access It

If you want to block Googlebot from crawling a staging site while still allowing other crawling tools access, John recommends whitelisting the IP addresses of the users and tools that need to view the site and blocking everything else, including Googlebot, at the server level. This is more reliable than a noindex tag or robots.txt, because Google may crawl pages it finds on a site even if they have a noindex tag, or index pages without crawling them even if they are blocked in robots.txt.
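
One hedged way to implement this is at the web server level. The sketch below assumes an nginx server and uses placeholder hostnames and IP addresses, which you would replace with your own users' and tools' details:

    # Hypothetical nginx config for a staging host (hostname and IPs are placeholders)
    server {
        listen 80;
        server_name staging.example.com;   # assumed staging hostname

        # IPs of the users and tools that should keep access
        allow 203.0.113.10;    # e.g. office network
        allow 198.51.100.25;   # e.g. crawling tool
        deny  all;             # everyone else, including Googlebot, is refused

        root /var/www/staging;
    }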


 

Google Has a Separate User Agent For Crawling Sitemaps & For GSC Verification

Google has a separate user agent that fetches sitemap files, as well as one used for Google Search Console site verification. John recommends making sure you are not blocking either of these.


 


Ruth Everett

Technical SEO

Ruth Everett is a data & insights manager at Code First Girls, and a former technical SEO analyst at Lumar. You'll most often find her helping clients improve their technical SEO, writing about all things SEO, and watching videos of dogs.
