Crawl Budget

Every site is allocated a crawl budget, which determines how many pages and resources search engines can crawl. Our Hangout Notes cover recommendations for optimizing crawl budget, as well as insights from Google on how crawl budget is controlled.

Redirects Can Impact Crawl Budget Due to Added Time for URLs to be Fetched

August 9, 2019 Source

If a site has a lot of redirects, this can impact crawl budget: Google will detect that URLs are taking longer to fetch and will limit the number of simultaneous requests it makes to the website to avoid overloading the server.
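
To illustrate the extra fetch time redirects add, here is a minimal sketch (assuming the Python `requests` library and a hypothetical example URL) that follows a redirect chain and reports the hops and total time spent fetching a single piece of content:

```python
import requests

def audit_redirect_chain(url):
    """Follow a URL's redirect chain and report each hop and the total fetch time."""
    response = requests.get(url, allow_redirects=True, timeout=10)
    total_seconds = sum(r.elapsed.total_seconds() for r in response.history)
    total_seconds += response.elapsed.total_seconds()
    for hop in response.history:
        print(f"{hop.status_code} {hop.url} -> {hop.headers.get('Location')}")
    print(f"{response.status_code} {response.url}")
    print(f"Hops: {len(response.history)}, total fetch time: {total_seconds:.2f}s")

# Hypothetical URL: every extra hop adds to the time a crawler spends
# on one piece of content before it can move on to the next URL.
audit_redirect_chain("https://example.com/old-page")
```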


Excluded Pages in GSC Are Included in Overall Crawl Budget

August 9, 2019 Source

Pages listed as excluded in the GSC Index Coverage report still count towards overall crawl budget. However, if your site has crawl budget limitations, your important pages that are valid for indexing will be prioritised.


Crawl Budget Not Affected by Response Time of Third Party Tags

May 10, 2019 Source

For Google, crawl budget is determined by how many pages and resources it fetches from a website per day. If a page has a slow response time, Google may crawl the site less to avoid overloading the server, but this is not affected by any third-party tags on the page.


Putting Resources on a Separate Subdomain May Not Optimize Crawl Budget

May 3, 2019 Source

Google can still recognise when subdomains are served from the same server and will distribute crawl budget for the server as a whole, as it still has to process all of the requests. However, putting static resources on a CDN means crawling is balanced across the two sources independently.


Check Server Logs If More Pages Crawled Than Expected

May 1, 2019 Source

If Googlebot is crawling many more pages than it actually needs to on the site, John recommends checking the server logs to determine exactly which pages Googlebot is crawling. For example, it could be that JavaScript files with a session ID attached are being crawled, bloating the total number of crawled pages.
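
A minimal log-parsing sketch of the kind of check John describes, assuming a combined-format access log at a hypothetical path and a hypothetical `sessionid` query parameter; it counts Googlebot requests per path and flags URLs crawled with a session ID attached:

```python
import re
from collections import Counter
from urllib.parse import urlsplit, parse_qs

LOG_PATH = "access.log"  # hypothetical path to a combined-format access log
# Capture the requested URL and the user agent from each log line.
LINE_RE = re.compile(r'"(?:GET|POST) (?P<url>\S+) HTTP/[^"]+".*"(?P<agent>[^"]*)"$')

crawled = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match or "Googlebot" not in match.group("agent"):
            continue
        url = urlsplit(match.group("url"))
        params = parse_qs(url.query)
        # Flag requests carrying a session ID (parameter name is an assumption).
        label = url.path + (" [sessionid]" if "sessionid" in params else "")
        crawled[label] += 1

# The most frequently crawled paths reveal where Googlebot is spending its requests.
for path, count in crawled.most_common(20):
    print(f"{count:6d}  {path}")
```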


Crawl Budget Limitations May Delay JavaScript Rendering

December 21, 2018 Source

Sometimes the delay in Google’s JavaScript rendering is caused by crawl budget limitations. Google is actively working on reducing the gap between crawling pages and rendering them with JavaScript, but this will take some time, so they recommend dynamic rendering, hybrid rendering or server-side rendering for sites with a lot of content.
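
As a rough illustration of the dynamic rendering approach mentioned here, a sketch (assuming Flask and a hypothetical `prerender_page` helper) that serves pre-rendered HTML to Googlebot while regular visitors receive the client-side JavaScript app:

```python
from flask import Flask, request

app = Flask(__name__)

def prerender_page(path):
    """Hypothetical helper: return cached, pre-rendered HTML for the given path."""
    return f"<html><body><h1>Pre-rendered content for /{path}</h1></body></html>"

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    user_agent = request.headers.get("User-Agent", "")
    if "Googlebot" in user_agent:
        # Crawlers receive server-rendered HTML so indexing does not have
        # to wait for the JavaScript rendering queue.
        return prerender_page(path)
    # Regular visitors get the normal client-side rendered app shell.
    return app.send_static_file("index.html")
```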


Crawl Budget Updates Based on Changes Made to Site

December 14, 2018 Source

A site’s crawl budget changes a lot over time, as Google’s algorithms react quickly to changes made to a website. For example, if a new CMS is launched incorrectly, with no caching, and is slow, then Googlebot will likely slow down crawling over the next couple of days so that the server isn’t overloaded.


Use Log Files to Identify Crawl Budget Wastage & Issues With URL Structure

July 13, 2018 Source

When auditing eCommerce sites, John recommends first looking at which URLs are crawled by Googlebot, then identifying crawl budget wastage and, if necessary, changing the site’s URL structure to stop Googlebot crawling unwanted URLs with parameters, filters, etc.
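
A sketch along the same lines, assuming a list of URLs already extracted from the Googlebot requests in the logs (the example URLs are hypothetical); it groups crawled URLs by query parameter so faceted navigation and filter parameters that waste crawl budget stand out:

```python
from collections import Counter
from urllib.parse import urlsplit, parse_qs

def parameter_breakdown(crawled_urls):
    """Count how often each query parameter appears in Googlebot-crawled URLs."""
    counts = Counter()
    for url in crawled_urls:
        for param in parse_qs(urlsplit(url).query):
            counts[param] += 1
    return counts

# In practice this list would come from the Googlebot requests in the server logs.
urls = [
    "https://example.com/shoes?colour=red&sort=price",
    "https://example.com/shoes?colour=blue",
    "https://example.com/shoes?sessionid=abc123",
]
for param, count in parameter_breakdown(urls).most_common():
    print(f"{param}: {count} crawled URLs")
```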


Small to Medium-Sized Sites Don’t Have to Worry About Crawl Budget

April 6, 2018 Source

Sites with ‘a couple hundred thousand pages’ or fewer don’t need to worry about crawl budget; Google will be able to crawl them just fine.


Related Topics

Crawling, Indexing, Crawl Errors, Crawl Rate, Disallow, Sitemaps, Last Modified, Nofollow, Noindex, RSS, Canonicalization, Fetch and Render