Noindex

A noindex directive, implemented via a robots meta tag or an X-Robots-Tag HTTP header, instructs search engines not to include a page in their index, preventing it from appearing in search results. Our Hangout Notes explain the use of this directive, along with further advice from Google and real-world examples.

Either Disallow Pages in Robots.txt or Noindex Not Both

August 23, 2019 Source

If a page is both noindexed and blocked in robots.txt, the noindex will never be seen, because Googlebot cannot crawl the page to find the directive. John recommends using one or the other.
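As a hypothetical illustration of the conflict, consider a page under /private/ that carries both directives. The robots.txt rule stops Googlebot from ever fetching the page, so the meta tag is never discovered:

```
# robots.txt — blocks crawling, so any on-page noindex is never seen
User-agent: *
Disallow: /private/
```

```html
<!-- /private/page.html — ineffective while crawling is blocked -->
<meta name="robots" content="noindex">
```

If the goal is to keep the page out of the index, drop the Disallow rule so the noindex can be crawled; if the goal is simply to prevent crawling, keep the Disallow and drop the noindex.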


Noindex Thin Pages That Provide Value to Users on Site But Not in Search

July 23, 2019 Source

Some pages on your site may have content too thin to be worth indexing and showing in search. If those pages are still useful to users navigating your website, you can noindex them rather than removing them.
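A minimal sketch of how such a page can be kept in the site's navigation while being excluded from search; the "noindex, follow" value asks search engines to skip indexing the page but still follow its links:

```html
<!-- e.g. a thin tag or filter page kept for on-site navigation only -->
<meta name="robots" content="noindex, follow">
```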


Google Will Use Other Canonicalization Factors If the Canonical Is Noindex

March 22, 2019 Source

A canonical pointing to a noindexed page sends Google conflicting signals. John suggested that, in this scenario, Google would rely on other canonicalization factors, such as internal links, to decide which page should be indexed.
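To make the conflict concrete, here is a hypothetical pair of pages sending contradictory signals: page-a nominates page-b as the canonical, but page-b asks not to be indexed at all.

```html
<!-- on https://example.com/page-a -->
<link rel="canonical" href="https://example.com/page-b">

<!-- on https://example.com/page-b -->
<meta name="robots" content="noindex">
```

In this situation, signals such as internal links pointing at page-a could lead Google to index page-a instead.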


Only Use Sitemap Files Temporarily for Serving Removed URLs to be Deindexed

November 16, 2018 Source

Sitemap files are a good temporary solution for getting Google to quickly crawl and deindex lists of removed URLs. However, remove these sitemaps once the URLs have been deindexed; don't keep serving them to Google indefinitely.
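A sketch of such a temporary sitemap, using hypothetical URLs; setting <lastmod> to the date the pages were removed helps prompt a recrawl:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/discontinued-product</loc>
    <lastmod>2018-11-01</lastmod>
  </url>
</urlset>
```

Once the URLs have dropped out of the index, take the sitemap down.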


Submit Subdirectories to URL Removal Tool to Get Around Individual URL Limits

November 16, 2018 Source

The URL Removal Tool limits how many individual URLs can be submitted for removal per day. To work around this, you can submit a subdirectory and have an entire section of content removed from Google's index at once.
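For example, rather than submitting thousands of individual product URLs, a single prefix request for a hypothetical retired section covers them all:

```
https://example.com/discontinued-range/
```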


Number of Noindexed Pages Has No Effect on Rankings or Site Quality

October 2, 2018 Source

Having a lot of noindexed pages doesn’t affect rankings or how Google perceives a site’s quality. For example, many sites need to noindex private content that requires a user to log in to access.
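For reference, one common way to noindex logged-in or account areas at scale is the X-Robots-Tag HTTP header rather than a meta tag in every template. A sketch for Apache, with a hypothetical path:

```apache
# Apache (mod_headers): send a noindex header for everything under /account/
<LocationMatch "^/account/">
  Header set X-Robots-Tag "noindex"
</LocationMatch>
```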


Many-to-one Redirects & Noindexed Pages Are Sometimes Treated as Soft 404s

October 2, 2018 Source

Noindexed pages and large numbers of pages redirecting to a single URL can both be treated as soft 404 errors by Google. Soft 404s don't affect the perceived quality of your website, but the affected pages will be crawled less frequently and won't be indexed at all.
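As an illustration of the many-to-one pattern, a hypothetical Apache rule that funnels every retired product URL to a single category page; Google may classify the redirected URLs as soft 404s because the destination doesn't correspond to the original content:

```apache
# Apache (mod_alias): many-to-one — all old product URLs 301 to one page
RedirectMatch 301 ^/old-products/ /products/
```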


Avoid Using Google Tag Manager to Implement Critical Tags Like Noindex

September 4, 2018 Source

John suggests that search engines other than Google may struggle to process GTM tags. Also, because the tags are powered by JavaScript, they are only rendered and applied to a page days or weeks after the initial HTML page is indexed.
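To illustrate the difference, here is a sketch contrasting a noindex delivered in the initial server HTML with one injected by JavaScript (representative of what a GTM custom HTML tag would do, not GTM's actual output):

```html
<!-- In the server-rendered HTML: seen on the very first crawl -->
<meta name="robots" content="noindex">

<!-- Injected via JavaScript: only visible after rendering, which can lag -->
<script>
  var meta = document.createElement('meta');
  meta.name = 'robots';
  meta.content = 'noindex';
  document.head.appendChild(meta);
</script>
```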


Google May Take Longer to Process Redirects For Noindexed Pages

July 27, 2018 Source

Google crawls noindexed pages less frequently. As a result, if a redirect is set up on a noindexed page, Google may take longer to discover and process it.


Related Topics

Crawling, Indexing, Crawl Budget, Crawl Errors, Crawl Rate, Disallow, Sitemaps, Last Modified, Nofollow, RSS, Canonicalization, Fetch and Render