A noindex directive is used to instruct search engines not to include a page in their index, preventing it from appearing in search results. It is applied via a robots meta tag or an X-Robots-Tag HTTP header (not a rel attribute). Our Hangout Notes explain the use of this directive, along with further advice from Google and real world examples.
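As a minimal sketch, the meta tag form is placed in the page's HTML head (the page itself is illustrative):

```html
<!-- In the <head> of the page; applies to all crawlers -->
<meta name="robots" content="noindex">
```

The equivalent HTTP header form, sent in the server's response, is `X-Robots-Tag: noindex`.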

Google Would View a Page Canonicalized to a Noindex URL as a Noindexed Page

December 10, 2019 Source

If a page has a canonical link pointing to a URL that is noindexed, the canonicalized page would also be considered noindex. This is because Google would treat the canonical like a redirect to a noindex page and therefore drop it from the index.
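A minimal sketch of this setup, with illustrative URLs:

```html
<!-- https://example.com/page-a — canonicalized to page B -->
<link rel="canonical" href="https://example.com/page-b">

<!-- https://example.com/page-b — the canonical target carries noindex -->
<meta name="robots" content="noindex">
```

In this scenario Google would treat page A like a redirect to a noindexed page, so neither URL would be indexed.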

There is No Risk of a Noindex Signal Being Transferred to the Target Canonical Page

December 10, 2019 Source

If a page is marked as noindex and also has a canonical link to an indexable page, there is no risk of the noindex signal being transferred to the target canonical page.

Using a Noindex X-Robots-Tag Header Will Not Prevent Google From Processing a File

October 18, 2019 Source

Including a noindex X-Robots-Tag HTTP header on a sitemap file will not affect how Google processes the file. You can also include this directive on other documents, such as CSS files, without affecting how Google uses them; it will simply prevent them from appearing in web search results.
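For non-HTML documents such as sitemaps or CSS files there is no head in which to place a meta tag, so the directive is sent as a response header. An illustrative server response:

```http
HTTP/1.1 200 OK
Content-Type: application/xml
X-Robots-Tag: noindex
```

Google still fetches and processes the file as normal; the header only keeps the file itself out of web search results.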

Either Disallow Pages in Robots.txt or Noindex Them, Not Both

August 23, 2019 Source

Noindexing a page and blocking it in robots.txt will mean the noindex will not be seen, as Googlebot won’t be able to crawl it. Instead, John recommends using one or the other.
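A sketch of the conflicting setup, with an illustrative path:

```
# robots.txt — blocks Googlebot from crawling anything under /private/
User-agent: *
Disallow: /private/
```

If a page under /private/ also contains `<meta name="robots" content="noindex">`, Googlebot never crawls the page, so the noindex directive is never seen, and the URL can still end up indexed (without its content) via external links.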

Noindex Thin Pages That Provide Value to Users on Site But Not in Search

July 23, 2019 Source

Some pages on your site may have thin content, so there is little value in having them indexed and shown in search. If they are still useful to users navigating your website, you can noindex them rather than removing them.

Google Will Use Other Canonicalization Factors If the Canonical Is Noindex

March 22, 2019 Source

Google would receive conflicting signals if a canonical points to a noindex page. John suggested that Google would rely on other canonicalization factors in this scenario to decide which page should be indexed, such as internal links.

Only Temporarily Use Sitemap Files to Serve Removed URLs for Deindexing

November 16, 2018 Source

Sitemap files are a good temporary solution for getting Google to crawl and deindex lists of removed URLs quickly. However, make sure these sitemaps aren’t being served to Google for too long.
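A sketch of such a temporary sitemap, listing removed URLs (now returning 404 or 410) with a recent lastmod to encourage recrawling; the URL and date are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Removed URL; lastmod signals the page changed recently -->
  <url>
    <loc>https://example.com/discontinued-product</loc>
    <lastmod>2018-11-01</lastmod>
  </url>
</urlset>
```

Once the listed URLs have dropped out of the index, the sitemap should be taken down.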

Submit Subdirectories to URL Removal Tool to Get Around Individual URL Limits

November 16, 2018 Source

The URL Removal Tool limits the number of individual URLs that can be submitted to be removed per day. To get around this, you can submit subdirectories to get entire sections of content removed from Google’s index.

Number of Noindexed Pages Has No Effect on Rankings or Site Quality

October 2, 2018 Source

Having a lot of noindexed pages doesn’t affect rankings or how Google perceives a site’s quality. For example, many sites need to noindex private content that requires a user to log in to access.
