More or Less Every New Website is Rendered When Google Crawls it For the First Time
Nearly every new website goes through the two waves of indexing when Google sees it for the first time, meaning it isn't fully indexed until it has been rendered.
URL Removal Tool Hides Pages But Doesn’t Impact Crawling or Indexing
The URL Removal Tool only hides a page from the search results. It does not change how that page is crawled or indexed.
There Isn’t a Separate Index for Mobile and Desktop
Google has one main index which contains either the mobile or the desktop version of a site, and this is the version that will be shown in search results. However, if you have a separate mobile site, Google will always show this version to users on a mobile device.
Disallowed Pages With Backlinks Can be Indexed by Google
Pages blocked by robots.txt cannot be crawled by Googlebot. However, if a disallowed page has links pointing to it, Google can determine it is worth indexing despite not being able to crawl the page.
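The distinction can be sketched as follows (the path is hypothetical): a robots.txt disallow prevents crawling, but not indexing of the URL itself if links point to it:

```
# robots.txt — prevents crawling, but the URL can still be
# indexed if external links point to it
User-agent: *
Disallow: /blocked-page/
```

To keep such a page out of the index entirely, the usual approach is the opposite: remove the disallow so the page can be crawled, and add `<meta name="robots" content="noindex">` to the page itself.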
Google May Index Redirected URLs if Served in Sitemap Files
Redirects and sitemaps are both signals that Google uses to select preferred URLs. If you redirect to a destination URL but the source URL is still listed in a sitemap, this gives Google conflicting signals about which URL you want to be shown in search.
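As an illustration (URLs hypothetical), if /old-page/ redirects to /new-page/, the sitemap should list only the redirect destination so both signals agree:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List the redirect destination, not the redirecting source URL -->
  <url>
    <loc>https://www.example.com/new-page/</loc>
  </url>
</urlset>
```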
Internal Search Results Pages Should be Blocked Unless They Provide Unique Value
Internal search result pages should be blocked from crawling, because crawling them could overload the site’s server and they tend to be low quality. However, there may be instances where it makes sense to have these pages indexed if they provide unique value.
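A common pattern for this (the /search path is an assumption about the site’s URL structure) is a robots.txt rule blocking the search results section:

```
User-agent: *
# Block crawling of internal search result pages,
# e.g. https://www.example.com/search?q=widgets
Disallow: /search
```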
Ensure all Key Content is Available if You Are Streaming Content
If a site is streaming content progressively to a page, John would recommend ensuring all key content is available immediately, due to the method used to render content. Any additional content which is useful for users but not critical for indexing can then be streamed progressively.
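A minimal sketch of this approach (element names and the fragment URL are illustrative, not from the hangout): key content ships in the initial HTML, and non-critical content is fetched afterwards:

```html
<!-- Initial HTML: key content is present immediately for rendering and indexing -->
<main>
  <h1>Product name</h1>
  <p>Primary description that needs to be indexed.</p>
</main>

<!-- Supplementary content (e.g. related items) can be streamed in later -->
<section id="related-products"></section>
<script>
  // Progressively fetch non-critical content after the initial render
  fetch('/fragments/related-products')
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.getElementById('related-products').innerHTML = html;
    });
</script>
```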
Googlebot No Longer Needs to Convert Hashbang URLs into Escaped Fragments
Googlebot no longer converts hashbang URLs into escaped fragments, as it is able to render and index them directly rather than using the pre-rendered version specified with the escaped fragment. Therefore, John would recommend moving to something that’s URL-based rather than hash-based.
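For context, the old AJAX crawling scheme rewrote hashbang URLs into `_escaped_fragment_` URLs so servers could return a pre-rendered snapshot; the example URLs below are hypothetical:

```
# Old AJAX crawling scheme (no longer used by Googlebot):
https://www.example.com/#!/products/widgets
  → fetched as https://www.example.com/?_escaped_fragment_=/products/widgets

# Recommended: path-based URLs that can be crawled and indexed directly
https://www.example.com/products/widgets
```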
The Indexing Issue From Last Week Has Been Resolved
The indexing issue seen last week, which affected Google’s indexing of new content, has now been resolved. If sites are still experiencing issues with the indexing of their content, this won’t be related to the previous issue.