Noindex, Nofollow & Disallow

Sam Marsden

On 17th September 2015 • 16 min read

The three words above might sound like SEO gobbledegook, but they’re words worth knowing, since understanding how to use them means you can order Googlebot around. Which is fun.

So let’s start with the basics. There are three main directives that control how search engines crawl and index the pages on your site:

  1. Noindex: tells search engines not to include your page(s) in search results.
  2. Disallow: tells them not to crawl your page(s).
  3. Nofollow: tells them not to follow the links on your page.

 

What is a Noindex Meta Tag?

A ‘noindex’ tag tells search engines not to include the page in search results.

The most common method of noindexing a page is to add a tag in the head section of the HTML, or in the response headers. To allow search engines to see this information, the page must not already be blocked (disallowed) in a robots.txt file. If the page is blocked via your robots.txt file, Google will never see the noindex tag and the page might still appear in search results.

To tell search engines not to index your page, simply add the following to the <head> section of your HTML:

<meta name="robots" content="noindex, follow">

The second part of the content tag here indicates that all the links on this page should be followed, which we’ll discuss below.

Alternatively, the noindex tag can be used in an X-Robots-Tag in the HTTP header:

X-Robots-Tag: noindex
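The X-Robots-Tag approach is especially useful for non-HTML files such as PDFs, which have no <head> to hold a meta tag. As a rough sketch, on an Apache server with mod_headers enabled you could add something like this to your .htaccess file to noindex every PDF (adapt the file pattern to your own setup):

<Files ~ "\.pdf$">
  # Send the noindex directive in the HTTP response for all matching files
  Header set X-Robots-Tag "noindex"
</Files>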

For more information see Google Developers’ post on Robots meta tag and X-Robots-Tag HTTP header specifications.
 

How Can I Use Noindex in a Robots.txt File?

A ‘noindex’ directive in your robots.txt file also tells search engines not to include pages in search results, and it can be a quicker and easier way to noindex lots of pages at once, provided you have access to your robots.txt file. For example, you could noindex any URLs in a specific folder.

Here’s an example of a noindex directive that could be placed in the robots.txt file:

Noindex: /robots-txt-noindexed-page/

However, Google advise against relying on this method: John Mueller has stated that ‘you shouldn’t rely on it’, and Google officially ended support for the noindex directive in robots.txt on 1st September 2019.
 

What is a Disallow Directive?

Disallowing a page means you’re telling search engines not to crawl it, which must be done in the robots.txt file of your site. It’s useful if you have lots of pages or files that are of no use to readers or search traffic, as it means search engines won’t waste time crawling those pages.


To add a disallow, simply add the following into your robots.txt file:

Disallow: /your-page-url/
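Bear in mind that in a live robots.txt file, disallow rules sit beneath a user-agent line that states which crawlers they apply to. A minimal sketch (the paths here are just placeholders) would look like this:

User-agent: *
Disallow: /your-page-url/
Disallow: /internal-search/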

If the page has external links or canonical tags pointing to it, it could still be indexed and ranked, so a disallow on its own isn’t always enough to keep a page out of search results (see how to combine a disallow with a noindex below).

A word of caution: by disallowing a page you’re effectively removing it from search engines’ view of your site.

Disallowed pages cannot pass PageRank to anywhere else – so any links on those pages are effectively useless from an SEO perspective – and disallowing pages that are supposed to be included can have disastrous results for your traffic, so be extra careful when writing disallow directives.
 

How Can I Combine Noindex and Disallow?

Noindex (page) + Disallow: a disallow can’t be combined with an on-page noindex, because the page is blocked from crawling, so search engines will never see the noindex tag and won’t know that the page should be left out of the index.

Noindex (robots.txt) + Disallow: This prevents pages appearing in the index, and also prevents the pages being crawled. However, remember that no PageRank can pass through this page.

To combine a disallow with a noindex, simply add both directives to your robots.txt file:

Disallow: /example-page-1/
Disallow: /example-page-2/
Noindex: /example-page-1/
Noindex: /example-page-2/
 

What is a Nofollow Tag?

A nofollow tag on a link tells search engines not to use a link to decide on the importance of the linked pages (PageRank) or discover more URLs within the same site.

Common uses for nofollows include links in comments and other content that you don’t control, paid links, embeds such as widgets or infographics, links in guest posts, or anything off-topic that you still want to link people to.

Historically, SEOs have also selectively nofollowed links to funnel internal PageRank towards more important pages.

Nofollow tags can be added in one of two places:

  1. The <head> of the page, via a robots meta tag, which applies nofollow to every link on the page.
  2. The link code itself, via a rel="nofollow" attribute, which applies only to that individual link (see the examples below).
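For illustration, the two placements look like this (the URL and anchor text are placeholders):

<meta name="robots" content="nofollow">
<a href="https://www.example.com/" rel="nofollow">Example link</a>

The meta tag version nofollows every link on the page, whereas the rel attribute only affects the individual link it’s added to.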

A nofollow won’t necessarily prevent the linked page from being crawled at all; it only stops it being crawled through that specific link. Our own tests, and others’, have shown that Google will not crawl a URL that it finds only in a nofollowed link.

Google state that if another site links to the same page without using a nofollow tag or the page appears in a Sitemap, the page might still appear in search results. Similarly, if it’s a URL that search engines already know about, adding a nofollow link won’t remove it from the index.

In September 2019, Google announced an update to their nofollow directive and introduced two new link attributes:

  1. rel="sponsored" for links that are adverts, sponsorships or other paid placements.
  2. rel="ugc" for links within user-generated content, such as comments and forum posts.
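As a quick illustration (the URLs are placeholders), the new attributes are added to individual links in the same way as nofollow, and can be combined where more than one applies:

<a href="https://www.example.com/partner-offer" rel="sponsored">Sponsored link</a>
<a href="https://www.example.com/user-site" rel="ugc nofollow">Link left in a comment</a>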

In addition, all links marked with nofollow, sponsored or ugc are now treated as hints about which links to consider in search and when crawling, rather than as a directive, which is how nofollow was treated previously. You can find out more in our post on this update, which also covers its impact along with expert insights.

What is Noindex Nofollow?

As mentioned above, adding a nofollow tag to a page won’t prevent it from being crawled completely. Therefore, to keep it out of search results, you’ll also need to noindex the page. Google will still be able to crawl the page, but it won’t appear in the index. Pages you will probably want to noindex include admin/login pages, internal search results and registration pages. To stop Google crawling the page completely, you should also disallow it (see the section on combining noindex and disallow above).
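Combined in a single robots meta tag in the <head>, that looks like this:

<meta name="robots" content="noindex, nofollow">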

 

Other directives: Canonical Tags, Pagination and Hreflang

There are other ways to tell Google and other search engines how to treat URLs:

  1. Canonical tags: point search engines at the preferred version of a page when several URLs have the same or very similar content.
  2. Pagination markup: indicates how pages in a paginated series relate to one another.
  3. Hreflang: tells search engines which language and regional audience each version of a page is aimed at.
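For illustration, canonical and hreflang hints both take the form of link tags in the <head> (the URLs are placeholders):

<link rel="canonical" href="https://www.example.com/preferred-version/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">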

 

How much time should you spend on reducing crawl budget?

You might hear a lot of talk on SEO forums about how important crawl efficiency and crawl budget are for SEO. While it’s common practice to disallow and noindex large groups of pages that have no benefit to search engines or readers (for example, back-end code that is only used to run the site, or some types of duplicate content), deciding whether to hide lots of individual pages is probably not the best use of your time and effort.

Google likes to index as many URLs as possible, so unless there is a specific reason to hide a page from search engines, it’s usually OK to leave the decision up to Google. In any case, even if you hide pages from search engines, Google will keep checking to see whether those URLs have changed. This is especially pertinent if there are links pointing to the page: even if Google has forgotten about the URL, it may rediscover it the next time it finds a link to it.
 

Testing using Search Console, DeepCrawl and Robotto

 

Test robots.txt using Search Console

The robots.txt Tester tool in Search Console (under Crawl) is a popular and largely effective way to check a new version of your file for any errors before it goes live, or to test a specific URL to see whether it’s blocked.


However, this tool doesn’t behave in exactly the same way as Googlebot; there are some subtle differences in how it handles conflicting Allow and Disallow rules of the same length.
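For example, take a hypothetical pair of rules of the same length that both match the same path:

User-agent: *
Allow: /page
Disallow: /page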

The robots.txt testing tool reports these as Allowed; however, John Mueller has said: ‘If the outcome is undefined, robots.txt evaluators may choose to either allow or disallow crawling. Because of that, it’s not recommended to rely on either outcome being used across the board.’

For more detail, read this Webmaster Central Help Forum discussion.
 

Find all non-indexable pages using DeepCrawl

Run a Universal crawl without any restrictions (but with the robots.txt conditions applied) to allow DeepCrawl to return all of your URLs and show you all indexable/non-indexable pages.

If you have URL parameters that have been blocked from Googlebot using Search Console, you can mimic this setup for your crawl using the Remove Parameters field under Advanced Settings > URL Rewriting.

You can then use the following reports to check that the site is set up as you’d expect on your first crawl, and then combine them with the built-in change logs on subsequent crawls.
 

Indexation > Noindex Pages

This report will show you all pages that contain a noindex tag in the meta information, HTTP header or robots.txt file.
 

Indexation > Disallowed Pages

This report contains all URLs that can’t be crawled because of a disallow rule in the robots.txt file. There are figures for both of these reports in the dashboard of your report.


Use the intuitive filtering within each report to check particular folders and spot patterns in URLs that you might otherwise miss.

 

Test a new robots.txt file using DeepCrawl

Use DeepCrawl’s Robots.txt Overwrite function in Advanced Settings to replace the live file with a custom one.


You can then use your test version instead of the live version next time you start a crawl.

The Added and Removed Disallowed URLs reports will then show exactly which URLs were affected by the changed robots.txt, making evaluation very simple.

For more information, read our guide to managing robots.txt changes with DeepCrawl.
 

Want More Like This?

We hope that you’ve found this post useful in learning more about noindex, nofollow and disallow to control the crawling and indexing of your site.

You can read more about these topics in our Technical SEO Library or if you want to learn how to conduct a technical SEO audit have a read of our guide.

Additionally, if you’re interested in keeping up with Google’s latest updates and best practice recommendations then why not loop yourself in to our emails?


Author

Sam Marsden

Sam Marsden is DeepCrawl's SEO & Content Manager. Sam speaks regularly at marketing conferences, like SMX and BrightonSEO, and is a contributor to industry publications such as Search Engine Journal and State of Digital.

 
