7 Incredibly Efficient DeepCrawl Features (No Spreadsheets Required)

Alyssa Ordu

On 16th November 2015 • 9 min read

I don’t like Excel. There, I said it.

Don’t get me wrong, Excel is really useful when the need arises. But, at the risk of sounding like an 80s businessman, time is money. As a freelancer, the longer admin tasks take, the less money I make.

So I don’t think I’m alone in wanting an SEO tool that:

  1. Lets me know when there’s a problem.
  2. Shows me where that problem is so that I can fix it.
  3. Saves me time (and therefore money) by showing me everything in an easy-to-use dashboard.

That’s why I use DeepCrawl to monitor how my sites are doing, and I believe any online marketer can do the same, even those who aren’t technically-minded. Here are seven of my favourite features.


1. No spreadsheets required...

In my time using DeepCrawl I’ve found duplicate pages, an issue with paginated blog posts, and invalid Open Graph tags.
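To give a flavour of what one of these checks involves, here’s my own rough Python sketch that fetches a page and flags missing Open Graph tags. It’s an illustration of the idea, not how DeepCrawl works under the hood, and the URL is a placeholder:

```python
# Rough sketch: flag missing Open Graph tags on a single page.
# Not DeepCrawl's implementation; the URL below is a placeholder.
import requests
from bs4 import BeautifulSoup

REQUIRED_OG = ["og:title", "og:description", "og:image", "og:url"]

def check_open_graph(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Collect the property names of every <meta property="..."> tag
    present = {tag["property"] for tag in soup.find_all("meta", property=True)}
    return [prop for prop in REQUIRED_OG if prop not in present]

missing = check_open_graph("https://example.com/post")
if missing:
    print("Missing Open Graph tags:", ", ".join(missing))
```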

For a full list of everything DeepCrawl can help you find and manage — from problems with canonical/hreflang tags to redirects and broken links — take a look at Fili Wiese’s guide to 40 DeepCrawl tweaks to make a website soar in Google Search on the State of Digital blog.

And, since DeepCrawl is based online, all the data is available in my browser. I can switch from working at home to the office, and between computers, and still see everything I need.

 

2. Schedule regular crawls (one less thing to remember)

There’s no need to start a crawl when you log in: as long as you’ve scheduled a crawl, your information will be waiting for you when you arrive.
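If it helps to see the concept as code, here’s a hypothetical sketch using Python’s third-party schedule library. The start_crawl() function is a stand-in for whatever kicks off your crawl; DeepCrawl does all of this for you in its UI:

```python
# Hypothetical sketch of a recurring crawl, using the `schedule` library.
# start_crawl() is a placeholder; DeepCrawl handles this in its own UI.
import time
import schedule

def start_crawl():
    print("Starting the weekly crawl...")

# Run every Monday at 6am, so fresh data is waiting when you log in.
schedule.every().monday.at("06:00").do(start_crawl)

while True:
    schedule.run_pending()
    time.sleep(60)
```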

 

3. Clear, easy-to-use results

Everything you need is right there in the dashboard when you log in: no downloading, no manual work, just an accurate picture of your site’s problems waiting for you.


The most important part for me is the issues list on the left-hand side: these are the problems I need to fix to improve the site’s set-up and bring it in line with SEO best practice.


4. Dig deeper to see where the issues are

Click on an issue to see every URL affected by that problem. For example, clicking my duplicate titles report shows me each URL that has a duplicate title somewhere on the site, plus a list of all the duplicates.
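Conceptually, a duplicate titles report boils down to grouping URLs by their title text. Here’s a minimal sketch of that idea, with the pages dict standing in for real crawl data:

```python
# Minimal sketch: group crawled URLs by <title> to find duplicates.
# The `pages` dict stands in for real crawl output.
from collections import defaultdict

pages = {
    "/fashion/ootd-1/": "Outfit of the Day",
    "/fashion/ootd-2/": "Outfit of the Day",
    "/about/": "About Me",
}

by_title = defaultdict(list)
for url, title in pages.items():
    by_title[title].append(url)

for title, urls in by_title.items():
    if len(urls) > 1:
        print(f"Duplicate title {title!r} on: {', '.join(urls)}")
```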


I can then click on a URL to see everything I need to know about it, including any other issues it’s experiencing.


Anything in red could be hurting that page’s performance, and therefore its Google rankings, so I can get it fixed straight away.

 

5. See which SEO issues are affecting traffic and engagement

With one extra click in my crawl setup, I can integrate search traffic data from Google Analytics with my list of SEO issues to see which problems I should prioritize (i.e. which ones are affecting my bottom line), which ones are affecting engagement, and what might be causing my rankings to suffer.

It could be a title that hasn’t been properly optimized, an H1 in the wrong place, or something seriously amiss like a broken page or a redirect: whatever it is, I can see everything I need in one place, along with time on site, visits and bounce rate, titles and descriptions, social tag information, word count, load time, and link information.

To add this information, I just select the right Google Analytics profile in Analytics Settings, and run my crawl as normal.

Once the crawl has run, everything I need will be under Universal > Analytics in my report.

The Missing in Organic Landing Pages report is a good example. On this site it surfaces quite a few category pages that don’t bring in any search traffic, so now that I have a list of them all I can start looking for patterns and quick wins.
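Under the hood, a report like this is essentially a set difference: pages that appear in the crawl but never as an organic landing page in Google Analytics. Here’s a rough sketch, with both sets as illustrative placeholders:

```python
# Rough sketch: crawled pages minus GA organic landing pages.
# Both sets are illustrative placeholders, not real exports.
crawled_urls = {
    "/category/dresses/",
    "/category/shoes/",
    "/fashion/ootd-1/",
}
ga_organic_landing_pages = {
    "/fashion/ootd-1/",
}

for url in sorted(crawled_urls - ga_organic_landing_pages):
    print("No organic search traffic:", url)
```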

I can click on each URL to see the set-up on that page. For example, I can immediately see that my category pages are not optimized: the page title is very short and not engaging at all, and there’s no description.
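If you wanted to flag this kind of problem in a crawl export yourself, the check is straightforward. In this sketch the 30-character threshold and the sample data are my own assumptions, not DeepCrawl rules:

```python
# Illustrative check for short titles and missing descriptions.
# The 30-character threshold and sample data are assumptions.
pages = [
    {"url": "/category/dresses/", "title": "Dresses", "description": ""},
    {"url": "/fashion/ootd-1/", "title": "Outfit of the Day: Spring Layers",
     "description": "What I wore this week, and where to buy it."},
]

for page in pages:
    if len(page["title"]) < 30:
        print(f"{page['url']}: title looks too short: {page['title']!r}")
    if not page["description"]:
        print(f"{page['url']}: missing meta description")
```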


Fixing this issue might not make the page rank very well immediately, but it’s a good place to start and flags a bigger issue with the category pages on this site.

I recently found this report really useful for identifying an issue with a site’s tag pages. Using the filter at the top of the page, I narrowed the report down to URLs containing /tag/ and proved that none of the site’s tag pages were adding any value (they aren’t used for any other functionality), so we should start again with new, optimized tags.

 

6. Filter, and then filter again

You can filter any report in much the same way as you would in Excel, using the ‘Matches’ and ‘Does not match’ search fields at the top of a report. This is great for narrowing a report down to exactly the pages you care about.

Here I’ve filtered my duplicate pages report to see just my outfit of the day posts in the fashion category. I can see a large number of duplicated posts, which is an easy issue to fix, especially when I can also see which pages they’re duplicated with.
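For anyone who thinks in regular expressions: the two fields behave like an include pattern and an exclude pattern. Here’s a small sketch with illustrative patterns and URLs:

```python
# Sketch of 'Matches' / 'Does not match' as include/exclude regexes.
# Patterns and URLs are illustrative.
import re

urls = [
    "/fashion/outfit-of-the-day-1/",
    "/fashion/outfit-of-the-day-2/",
    "/tag/outfit-of-the-day/",
    "/beauty/lipstick-review/",
]

matches = re.compile(r"outfit-of-the-day")
does_not_match = re.compile(r"/tag/")

filtered = [u for u in urls
            if matches.search(u) and not does_not_match.search(u)]
print(filtered)  # ['/fashion/outfit-of-the-day-1/', '/fashion/outfit-of-the-day-2/']
```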


7. Work from the cloud

Since all my DeepCrawl data lives in the cloud, I can switch offices with my faithful Chromebook as many times as I want in a day and still have access to everything I need, without juggling several different Excel spreadsheets across different computers.

 


Author

Alyssa Ordu

Alyssa is a keen traveller, cocktails & dad jokes enthusiast who does Marketing, in that order. A lover of outreach, she’s always open to opportunities to collaborate, or to exchange a pun or two.

 
