Disallowed Pages With Bot Hits

What does this report contain?

Pages that are disallowed in robots.txt but were nevertheless crawled by search engine crawlers

How can this report be used?

These pages received traffic from a crawler even though they are disallowed in robots.txt. Either they were disallowed recently (in which case, confirm that blocking them is intended), or the crawler is ignoring the robots.txt directives.
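The check behind this report can be reproduced by cross-referencing crawler hits from access logs against the site's robots.txt rules. A minimal sketch using Python's standard-library robots.txt parser follows; the robots.txt content and the (path, user-agent) log entries are illustrative placeholders, not real data.

```python
# Sketch: find disallowed pages that bots still requested by testing
# each logged hit against robots.txt rules. The rules and log entries
# below are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Hypothetical (path, user-agent) pairs extracted from access logs.
bot_hits = [
    ("/private/report.html", "Googlebot"),
    ("/blog/post-1", "Googlebot"),
    ("/tmp/cache.txt", "Bingbot"),
]

# Keep only the hits that robots.txt disallows for that user agent.
disallowed_with_hits = [
    (path, agent)
    for path, agent in bot_hits
    if not parser.can_fetch(agent, path)
]

for path, agent in disallowed_with_hits:
    print(f"{agent} crawled disallowed path {path}")
```

In practice the robots.txt would be fetched from the live site and the hit list parsed from server log files, but the disallow check itself is the same.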


Crawl Budget, Log Files

API Report Code: