What is site architecture?
Site architecture is the foundation of the technical health of any website. Site performance optimisation is incredibly important; there’s no doubt about that. However, to reap the rewards of mobile usability enhancements, site speed improvements or better international targeting, you need to make sure your website is structured and organised correctly.
The concept of site architecture is made up of two key elements:
- Categorisation: The types of pages on your site.
- Linking: How the pages on your site are connected.
Site architecture is a combination of the different pages on a website, and the way they are linked to one another.
Analysing site architecture is vital for an SEO, because it fundamentally affects how search engines are able to crawl your website, and how users are able to navigate it in order to convert.
Your website lives or dies based on how well it is architected, designed and integrated. Every piece of content you publish, every article you write, every product you sell and every campaign you run is constrained, or set free, by your architecture.
Your primary navigation needs to reflect your users' need groups. Your taxonomies need to intuitively connect your content. Your internal linking needs to allow users and search engines to explore contextually. Your lateral navigation needs to expose secondary routes without overwhelming users with choice. And all of this needs to change, evolve and adapt as your site continues to grow and change.
Design and manage a well-structured site and make a million tiny tweaks to your website's content, structure and linking, every day, until you die. That's how you win.
To start you off with some key areas to be thinking about when exploring site architecture, Bastian Grimm of Peak Ace AG has put together a list of the most common site architecture errors that he has found when auditing websites:
One major topic is always cannibalisation, in its multiple facets.
- Inconsistency in internal linking, where signals vary widely within a single domain. Examples include inconsistent naming of internal anchor texts (e.g. the same anchor text is used for multiple URLs, or a single URL is linked with different or unspecific anchor texts). As anchor texts transmit relevancy, we often see improvements after implementing a consistent internal linking structure.
- When multiple URLs are targeting the same keywords. Besides the general duplicate content problem, what often happens is that none of the pages generate any top positions. Therefore, setting up a proper targeting/indexing strategy and monitoring it over time is essential.
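The anchor-text inconsistency described above can be surfaced with a small script. This is a minimal sketch, assuming you have already exported internal links as (anchor text, target URL) pairs from a crawl; the anchors and URLs shown are hypothetical.

```python
from collections import defaultdict

# Hypothetical (anchor_text, target_url) pairs collected from a site crawl.
internal_links = [
    ("blue widgets", "/widgets/blue/"),
    ("blue widgets", "/widgets/blue-sale/"),   # same anchor, different URL
    ("our widget range", "/widgets/"),
]

# Map each anchor text to the set of URLs it points at.
anchors_to_urls = defaultdict(set)
for anchor, url in internal_links:
    anchors_to_urls[anchor].add(url)

# Anchors pointing at more than one URL dilute relevancy signals.
ambiguous = {a: urls for a, urls in anchors_to_urls.items() if len(urls) > 1}
```

Any anchor that maps to more than one URL is a candidate for consolidation, so that one phrase consistently signals one destination.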
Efficiency is also very important - you need to eliminate waste as best as possible and optimise your domain in such a way that Google can process it quickly and efficiently. Wasting Google's resources is like throwing money out of the window (for them) - a situation Google doesn't like at all.
- Multiple links (from the same source URL) pointing to one and the same destination. From an SEO perspective this is unnecessary, and no additional link equity is passed on.
- Badly maintained sitemaps. Sitemaps that endeavour to serve everything are a waste of resources and should be rectified. They should only serve HTTP 200 indexable, non-canonicalised URLs. To ensure that your sitemaps are clean and are therefore encouraging Google to discover new pages, crawl them before uploading. If they are okay, submit them using GSC. The new GSC provides much better data with regards to correctly implemented sitemaps.
- The widespread lack of maintenance that still exists across all industries is shocking. Even now we see cases in which 302s are used instead of 301s for permanent URL redirects, and internal links that still point at redirects or even at broken pages.
- Poor management of sorting and filtering URLs. Based on log files, we often see that Google runs into these generated, duplicate URLs for no reason. Letting Google crawl all your sorting and filtering URLs is simply a massive waste of resources. There are common and (still) functioning workarounds to prevent these cases (e.g. the Post/Redirect/Get pattern), but using the nofollow attribute is not a solution here.
- It is vital to be as fast as possible for both Google and the user. As your crawl budget is more or less based on computing time and Google wants to process your domain quickly, fast-loading websites have a clear advantage.
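The sitemap-hygiene point above (serve only HTTP 200, indexable, non-canonicalised URLs) lends itself to an automated pre-upload check. Here's a minimal sketch, assuming you have already crawled the sitemap's URLs and recorded each one's status code and canonical tag; the URLs are hypothetical, and a real check would also exclude noindexed pages.

```python
# Hypothetical crawl results: URL -> (HTTP status, canonical URL found on the page).
crawl_results = {
    "https://example.com/a": (200, "https://example.com/a"),
    "https://example.com/b": (301, "https://example.com/b-new"),      # redirects
    "https://example.com/c": (200, "https://example.com/a"),          # canonicalised elsewhere
}

def sitemap_worthy(url, status, canonical):
    """Keep only URLs that return 200 and self-canonicalise."""
    return status == 200 and canonical == url

clean = [u for u, (s, c) in crawl_results.items() if sitemap_worthy(u, s, c)]
```

Only URLs that return 200 and point their canonical at themselves survive the filter; everything else would waste Google's crawl resources if it were submitted.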
Depending on the number of URLs a domain has, correctly setting up crawling directives is essential.
- As Google favours well-structured informational architecture, you need to understand the situation or problem you want to solve with your pages and then make an informed decision. However, directives like robots.txt and robots meta noindex are often misused or combined and therefore cannot work properly (e.g. you should never mix these two because the crawler can't process the noindex tag within the <head> if the page is blocked by the robots.txt).
- As we know, the canonical tag is more of a hint than a directive and it is often the case that Google ignores it. Therefore, over-reliance on canonical tags can lead to messing up the index. Rectifying such situations can take months and a lot of re-crawling by Google. Therefore, check and monitor your canonical tags frequently to see if they are functioning as desired.
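The robots.txt/noindex conflict described above can be caught programmatically. This is a minimal sketch using Python's standard-library robots.txt parser, with a hypothetical rule set: if a URL is disallowed in robots.txt, any noindex tag on it is invisible to the crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, parsed offline.
robots = RobotFileParser()
robots.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def conflicting_noindex(url, has_noindex):
    """True when a page carries noindex but is blocked by robots.txt,
    so the crawler can never see the noindex tag in the <head>."""
    return has_noindex and not robots.can_fetch("*", url)
```

Running a check like this over crawl output finds pages where the two directives are mixed and the noindex can never take effect.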
The last big field that is always among the top topics for site architecture is prioritisation.
- Surprisingly, we often see a lack of planning (or even no planning at all) that ultimately leads to "chaos" (e.g. the most relevant pages being only partially linked from the homepage). Special attention should be paid to internal prioritisation and the accessibility of important pages. A method of consistently monitoring your setup is also vital. If you are planning to change important navigational elements or you are just restructuring your template, you should always check the condition of your site both before and after the changes go live on a staging system.
- Another common example of incorrect prioritisation is when product pages get an equal amount of links to top category pages. As category pages are often much more valuable, you should analyse and monitor the proportion of links referring to these different URL types. If needed, you should think about changing your internal linking in favour of category pages.
Let’s take a closer look into some of the key considerations for search engines and users with regards to site architecture.
How site architecture impacts search engines
Search engines will do their best to crawl any websites they find without any guidance. However, we can make the search engines’ jobs much easier if we can organise our websites in an understandable way for their crawlers. Without the clear structuring of content, your most important pages could be completely overlooked by Google; is that a risk you want to take?
Site architecture is crucial for SEO because it impacts search engine crawlers in terms of crawling and indexing. Not only does site structure affect a search engine’s ability to navigate a website and find pages to add to its index, but it also helps in demonstrating the importance and topical relevance of different pages, which is key for ranking.
It's impossible to overstate the importance of a good site architecture for SEO. Search engines take a lot of signals from the way a site is structured and how information is categorised. With a good site structure, you can send strong semantic signals to search engines and also help your users find the right content quickly as they navigate through your site. It's especially important to look at aspects like click depth, content hierarchy, and proper labelling of links and sections. Following best practices for information architecture tends to result in an SEO-friendly website that often has an edge over competitors.
Internal linking is particularly important because it determines how deep a page is within a site’s architecture, which directly impacts Googlebot’s crawl rate.
Depth of content affects crawl rates.
-John Mueller, Google Webmaster Hangout
The way a site is internally linked also affects which pages get crawled and how frequently. If an important page is only linked to a couple of times, Google will crawl it less.
Internal linking affects crawl frequency.
-John Mueller, Google Webmaster Hangout
The structure and internal linking of a website will decide how often and how efficiently Google will be able to crawl it.
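Click depth, as discussed above, can be computed with a breadth-first search over your internal link graph. This is a minimal sketch with a hypothetical graph; in practice the graph would come from a crawl export.

```python
from collections import deque

# Hypothetical internal link graph: page -> pages it links to.
links = {
    "/": ["/category/", "/about/"],
    "/category/": ["/category/product-1/", "/category/product-2/"],
    "/about/": [],
    "/category/product-1/": [],
    "/category/product-2/": ["/deep-page/"],
    "/deep-page/": [],
}

def click_depths(graph, start="/"):
    """Breadth-first search from the homepage: depth = fewest clicks to reach a page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths
```

Pages with a large depth value sit far from the homepage and, per the quotes above, can expect less frequent crawling; pages missing from the result entirely are orphaned.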
It’s important to consolidate the pages on your website and monitor the links between them to keep search engines happy and able to continue crawling with ease. This means keeping on top of ‘dead-end’ pages.
Crawlability is a major function of site architecture. Broken links hurt Google's ability to index your website and recommend its content.
To keep Google happy (and provide a great user experience), periodically crawl your website for errors. Also, see issues exactly as Google does with the Index Coverage report in Google Search Console.
If you find any broken links or determine that you need to delete outdated content, create a redirect for each issue. Redirects can help Google understand things like whether you've just moved content to a different URL or deleted it completely.
If you use WordPress, there are plenty of plugins (like Yoast SEO Premium) that can help you do this without having to go into your website's backend.
We’ll go into more detail and share some best practice advice on utilising internal linking to help improve site architecture later on in this guide.
How site architecture impacts users
The quality of your site architecture doesn’t just impact how search engines can crawl and index your content, it also affects users. This is arguably a more pressing issue as user experience becomes more and more intrinsically tied to rankings, especially for Google.
A site’s structure will determine a user’s journey, which pages they are more likely to land on and are able to navigate to, and whether or not they can complete their primary goal for visiting the website. This will impact user engagement, which will ultimately determine whether or not the user has a positive or negative experience with the brand. That’s a big deal. To keep users happy, you need to consistently match their expectations by mapping pages on a site to their intent.
Focus on structuring your site so that users are able to find what they’re looking for as quickly and as often as possible.
At the foundational level, your architecture should be guided by understanding users and the way that they think about your offerings. The more your categories and site taxonomy match up with your users' mental maps, the more intuitive navigating the site will be. Some of those learnings can come through keyword research (helping to match the terminology that people are using). What's more instructive is gathering people from your target demographics and watching them perform different tasks on your site. Your user experience has to be the foundation of the architecture you choose.
Just like the other aspects of SEO today, focusing on improving a website’s site architecture for humans first and foremost will lead to a winning search strategy.
The most common pitfall of information architecture is piling on content, images, and links 'for SEO'. Your website is a window into how your organization runs. When a user feels that it is smart and sophisticated, they tend to stick around. When their next step doesn't seem clear and intuitive, decision fatigue sets in and abandonment rates rise.
Before investing in flashy features that don't satisfy user intent, ask yourself, “Does this create momentum?” If the effort to create and maintain a resource doesn’t move a user toward the next macro or micro-conversion, be curious and question how the investment can better serve your business and audience.
The difference between URL structure and site architecture
There can often be confusion in the SEO industry about the difference between site structure and URL structure, and the two terms are sometimes used interchangeably. It’s important to note that there are distinct differences between the two.
Site structure refers to the entire architecture of a website and how all of its different pages are connected, whereas URL structure refers to the content of an individual URL string, as well as how it is formatted.
URL structure should be treated as supplementary to a site’s architecture and hierarchy, because this is a useful signal for users as the words in a URL can convey meaning and context to them while browsing a website. However, site architecture focuses on internal linking and the findability of pages.
Click depth determines page importance more than URL structure.
-John Mueller, Google Webmaster Hangout
Google treats pages that are one click from the homepage as the most important pages on a site and gives them more weight in the search results. For search engines, this is a much more important signal than the structure of a URL.
Site architecture and URL structure are different concepts. URL structure should simply be used as a signal for conveying context to users.
What is information architecture?
Information architecture is a key concept to know about when exploring the topic of site architecture. It is concerned with the arrangement and structuring of pages on a website so that they’re easier to understand and navigate.
Information architecture is about organising the content on a site and orientating the user to better help the flow of their journey on a website.
Your site visitors are probably not going to spend all day reading your content, unless they are a lawyer – or it's your ever-so-proud mum. They are also likely to arrive somewhere other than the homepage. Every page is a potential start page, so you need an organised structure in place so that a visitor can understand where they are on the site and find what they are looking for to suit their intent.
Oddly, the biggest problem with site architecture is with homepages. I have seen and worked on so many that do not do the basics of explaining what the site is and what you can find there. Tell the visitor in plain language and images what you do and what they can find on this site. That will inform the new visitor and assure the returning visitor. Assume they don't know who you are or what you do, even if you are a household brand, as those visitors might not be your immediate audience, but they may be in the future.
To me, breadcrumbs are a key element of a site’s structure as they not only help the user quickly understand where they are within a site, but also help the search engines figure it all out, especially when those breadcrumbs include structured data.
It’s about making clear to the user exactly where they are on a website at all times, where the information they want is situated in relation to them, and how they can get there easily. We do the thinking by structuring a sound information architecture, so users don’t have to.
Successful information architecture is as intuitive as possible for visitors with a ‘Don’t Make Me Think’ mentality.
However, information architecture needs to be rooted in data. What pages convert the most visitors? Are they easy to find in your navigation? Do you make it easy for visitors to flow through your site to the next logical steps? Are you using the same terminology as your target market? (Social Natural Language Processing is very helpful for this.)
Information architecture complements SEO perfectly, because the key goal of both is findability. SEO ensures websites are findable in the search engines, and information architecture takes over once a user lands on a website to make sure the exact content they want is findable. The end result is meeting the user’s search intent and creating a positive user experience, which benefits everyone.
These are the four fundamental aspects that information architecture brings to the table, as explained by Shari Thurow:
- Categorization - All sites need a primary hierarchical structure, or a primary taxonomy, which ultimately becomes the primary navigation on the website. Without a primary hierarchical structure, users will not have a sense of beginning or ending when they try to find their desired content.
- Organization - Information architects are skilled in categorizing, classifying and organizing content according to user mental models.
- Prioritization - If the navigation system contains too many links and is too wordy, it will be difficult to scan, making desired content less findable. Likewise, if page content has too many embedded text links, then content becomes difficult to read, and the very information piece that a searcher desires becomes more difficult to find.
- Labeling - Text in the primary navigation, whether it is formatted as a graphic image or in CSS, certainly constitutes a navigation label. But other items on a web page are also navigation labels, such as headings and embedded text links. SEO professionals might not realize it, but these can have a positive or negative influence on navigation design and website usability.
When getting started with information architecture optimisation for SEO, aligning content structure and navigation is key for making sure both users and search engines can understand the most important pages and their relevance.
Having a strong information architecture can help in many ways by acting as a strong foundation to build on. In my opinion, the three most important elements of IA for SEO are: URL structure, navigation structure, and content positioning. If you are able to align all three in matching patterns, then it can help your SEO efforts greatly. This is because the search engines will be better able to assign keyword relevancy, content segmentation, and equitable link authority. Also, users will be able to easily navigate the site, discover the most important topics, and more efficiently utilize the site’s offering.
To demonstrate the importance of information architecture, here are just a few of the consequences of getting it wrong:
- Duplicate pages and categories
- Imbalanced distribution of topical authority
- Poor internal linking
- Confusing or incomplete navigation
- Negative user experience
To implement a well-optimised information architecture, some research is required, including mapping target keywords and intents to the right landing pages. Here are some tips on how to begin this process.
Here are my two cents (or pennies, your majesty) on where I find it most useful to spend my time thinking about information architecture, and the steps I follow:
1. Figuring out what steps the customer has in their buying journey that can be met with search.
2. Finding all the keywords that meet those intents and grouping them to pages based off my own instincts.
3. Looking at the SERPs (either programmatically or by hand) to figure out how to group keywords to pages based off what Google thinks are separate, independent queries.
4. Pulling 2 & 3 together into a coherent mapping of pages to keywords.
5. Sitting back, letting the dollars (or pounds, your majesty) stack up and never working again.
6. Going back to the mapping and re-optimising, because you almost never get it right first time.
After a couple of iterations you're often in a good place where your customers should be able to find the right pages on your website at the right points in the journey.
Defining the different page types on a site
Before digging into the details of categorisation and internal linking, it can be beneficial to take a step back and think about the bigger picture. Users arriving from organic search or other marketing channels will be much less likely to convert on your website if your business offering does not match their intent. That’s why it’s important to think about your business’ objectives first, so you can target your offering to the customers who would actually be interested in it.
Assessing business objectives
A successful site architecture needs to start with proper planning. This involves sitting down and really thinking about the key goals of your business to decide which categories and pages need to exist on your website. Make sure that the full extent of your core product or service offering is available to users through your site’s architecture.
Your business objectives and core service offering need to form the foundation of your site architecture.
Once the core business focuses have been set and agreed upon, build upon them with user insights. Even if an important page is included on a website, it won’t be truly findable unless it is described in words that your users understand and expect. This is where keyword research, keyword trend analysis and user behaviour analysis come in.
Combining business focuses with user insights will provide a strong foundation which will feed into everything else in your website’s structure, from the categories you include to the navigation types you use.
Mapping the purpose of pages
Every page on your website should have a purpose. The analysis of these different purposes and how many pages fulfil them is an important starting point when performing site architecture optimisation.
Here are some of the main purposes that a page will serve:
- Navigational: guiding users through the site towards other pages.
- Educational: articles and informational content that answer users’ questions.
- Transactional: product and conversion pages where users complete a goal.
Once you’ve grouped pages into core purposes, compare the ratios between these groupings to gain invaluable insights. For example, if you find that there is a far higher number of navigational pages than educational article pages or transactional product pages, then you know there is an issue that needs fixing.
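Comparing those ratios is straightforward once each page has a purpose label. A minimal sketch, with hypothetical labels assigned during a crawl review:

```python
from collections import Counter

# Hypothetical purpose label for every page found in a crawl.
page_purposes = [
    "navigational", "navigational", "navigational", "navigational",
    "transactional", "educational",
]

counts = Counter(page_purposes)
total = len(page_purposes)

# Share of the site that each purpose accounts for.
ratios = {purpose: count / total for purpose, count in counts.items()}
```

In this hypothetical crawl, two-thirds of pages are navigational against a single transactional page – exactly the kind of imbalance the comparison is meant to expose.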
The different purposes of your pages will feed into the ideal site architecture and the different journeys that will be mapped out for your users.
Prioritising which pages to include
The pages to prioritise within your site’s architecture should align with your business goals and the interests of your target audience, which can be identified through keyword research. You can also gain additional insights into the popularity and performance of your different pages by combining analytics and search data to help identify key pages which should be highlighted.
Utilise current engagement and performance metrics to highlight pages that are already successful at what they do to include within the site architecture.
The best way to get the most out of this data is to combine it all in one place with your crawl data. That way you can gather together all of the pages that exist and are crawlable on your site, as well as how many impressions they’re getting in Google Search Console and how much traffic they’re getting in Google Analytics, for example.
At DeepCrawl, we call this concept the ‘Search Universe’, and you can find out more about it in this article.
Organising and categorising pages
Now that you’ve figured out who your business is targeting and what it has to offer, you can start getting into the organisation and categorisation of the pages on your website. A great starting point is to analyse its structural hierarchy and taxonomy. There are a number of different methods you can use to do this in a way that will drive results.
Here are some of the different ways of organising your website:
- By date or time
- By alphabetical order
- By location
- By topic
- By target audience
- By task or process
- By attributes or facets
- By combinations of the above
Creating the best taxonomy and hierarchy
Taxonomy is all about classifying the different pages of a website to provide an overview of its product range and service offering in an understandable way. Your website’s taxonomy will define the navigation, categories and tagging that are needed to encompass these different content types.
To demonstrate how this form of structuring works in action, here are the four main types of website taxonomy:
1. Flat Taxonomy: Only includes top-level categories which are weighted equally.
2. Hierarchical Taxonomy: Pages are arranged in order of importance, and the further down you go within the structure, the more specific the topic.
A key consideration for a hierarchical taxonomy is to aim for a broad and shallow structure rather than narrow and deep. Keeping choice and options open for users is key for content findability. However, be sure to balance having an architecture that’s as flat as possible with the ability to create a contextual hierarchy of your content.
It is better to have a user who can survey a large landscape via a mega menu or via related content, than it is to have them go down tunnels.
3. Network Taxonomy: Organised into both hierarchical and associative categories. With this method you can add an additional contextual navigation alongside your main global navigation. This can be achieved with ‘recommended reading’, ‘most popular’, ‘most viewed’ and on-page links to relevant pages.
4. Facet Taxonomy: Pages can be assigned to multiple attributes, and users can choose which facets to explore that most suit them. For example, on a clothing website, ‘clothing’ would sit in the middle with ‘size’, ‘colour’, ‘style’, ‘brand’ and ‘price’ around it as facets.
Source: Marketing Land
One thing to be aware of when using a facet taxonomy is URL and content duplication. This type of taxonomy is more technically difficult to manage as products can be categorised within different facets which can create duplicate pages.
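The scale of that duplication problem is easy to underestimate, because facet combinations multiply. A minimal sketch with hypothetical facets shows how quickly the URL count grows:

```python
from itertools import product

# Hypothetical facets on a single clothing category.
facets = {
    "colour": ["red", "blue"],
    "size": ["s", "m", "l"],
    "brand": ["acme"],
}

# Every combination of facet values is a potentially crawlable URL.
combinations = list(product(*facets.values()))
# 2 colours x 3 sizes x 1 brand = 6 URLs for one category –
# and real sites multiply this across many more facets and categories.
```

Because the count is the product of the facet sizes, adding one more facet with five values would turn 6 URLs into 30, which is why faceted navigation needs deliberate crawl and indexing controls.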
While someone working for a business will know the ins and outs of its product and service offering and can make intelligent assumptions about how everything should be categorised, it’s very important to layer user insights into the taxonomy and the labelling of your website. The labels of a taxonomy must be clear and recognisable for users, and should include the terms they know and expect to see.
The organization of information extends past the bots, requiring an in-depth understanding of how users engage with a site.
Some seed questions to begin research include: What trends appear in search volume (by location, device)? What are common questions users have? Which pages get the most traffic? What are common user journeys? What are users’ traffic behaviors and flow? How do users leverage site features (e.g. internal site search)?
Analysing a site’s existing taxonomy against user behaviour and demand will define the pages and categories you need on your site.
One way to let users feed into your taxonomy labelling is with a method called card sorting. Commonly used by information architects, card sorting is a session where users are given index cards (either physical ones or on a computer) and a list of items, and are then tasked with placing the items into groups and labelling the groups. These are the two main types of card sorting tests:
- Open card sort test: Users are given free rein to categorise whichever way fits their mental model.
- Closed card sort test: Labels are already defined and users simply put objects into relevant lists. This test should be used to validate the results from the open card sort test.
For card sorting tests to work, make sure you don’t ask any leading questions or feed participants with any of your own terms for labels. It’s all about gaining insights into the user’s mental model and their own choice of words, not the SEO’s.
This method can provide invaluable information around what your customers and target personas would expect from your website for it to be understandable for them. A website that is truly understandable to its target audience is one that is much more likely to convert.
Once the top-level categorisation has been mapped out, then you can align the supporting content that should sit within these categories and underneath the core content. You can think of the pillars of your website’s offering as ‘buckets’ which can be filled with relevant content.
Targeting pages to queries and intents
While mapping out a taxonomy, these are the two main problems you’re likely to come across:
- The taxonomy is too extensive. Look out for multiple pages serving similar purposes or targeting the same keywords and intents.
- The taxonomy isn’t extensive enough. Cross-reference internal search data and keywords driving impressions and traffic against your pages to find gaps in your taxonomy and, therefore, opportunities for new landing pages.
It’s likely that you’ll either have too many pages targeting key search terms, resulting in cannibalisation, or you’ll be missing pages for key search terms.
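Both problems reduce to a comparison between the intents your pages currently serve and the intents users actually demand. A minimal sketch with hypothetical intent-to-page mappings:

```python
# Hypothetical intents currently served, with the pages that target each one.
pages_by_intent = {
    "apply for a credit card": ["/credit-cards/apply/", "/apply-credit-card/"],
    "compare credit cards": ["/credit-cards/compare/"],
}

# Hypothetical intents surfaced by internal search data and query reports.
demanded_intents = {
    "apply for a credit card",
    "compare credit cards",
    "credit card fees explained",
}

# More than one page per intent -> cannibalisation candidates.
cannibalised = {i: p for i, p in pages_by_intent.items() if len(p) > 1}

# Demanded intents with no page at all -> gaps in the taxonomy.
missing = demanded_intents - set(pages_by_intent)
```

The first check surfaces the "too extensive" problem (multiple pages competing for one intent) and the second surfaces the "not extensive enough" problem (demand with no landing page).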
One of the problems I find most often is losing the balance between a category’s size and its purpose. Sometimes website owners create too many similar and granular categories which results in dispersing the authority of a given website and making the real purpose of a given section hard to find. This is problematic from an SEO point of view and also may cause users to feel lost.
The opposite situation occurs when website owners combine broad categories together even if they deserve to be independent sections. They miss opportunities for creating strong thematic sections and also force users to struggle with finding what they really need.
Many times website owners also forget about intuitive information architecture that should go hand-in-hand with optimizing the whole structure.
Think about the goal of site architecture, which is always the same: reflecting logical dependencies between pages. As a result, search engines can better understand the website and users are happy that they can easily navigate between pages.
A website could either be targeting keywords with too many pages or too few. Analysis is required to see where improvements are needed or where opportunities lie.
One of the main considerations when targeting pages to queries and intents is to have only one URL available for each unique piece of content. Not only does this help combat the issue of content duplication, but it also consolidates link equity.
Mapping pages on a website to intent rather than individual keywords helps avoid cannibalisation. If you have one strong page that matches a user’s intent to apply for a credit card, for example, this can rank for multiple keywords with this same intent. A page that ranks for ‘apply for a credit card’ could also rank for ‘credit card with bad credit score’ and ‘interest free credit card.’ You shouldn’t spin out lots of thin doorway pages to match each of these keywords and try to shoehorn these into your site’s architecture.
To keep up to date with the different URLs on your website and see which articles have been written on your key topics over time, regular content auditing is a must. For more advice on how to keep on top of your website’s content, take a look at this guide to content auditing.
Utilising tag pages
Creating a hierarchy for your website can be useful for helping users and search engines better understand the topical relevance and importance of your pages. Due to the nature of category pages and how they sit within a site’s architecture, they are hierarchical. Tag pages, on the other hand, aren’t subject to the same constraints. By using tagging, you can split content evenly into topics and group them by relevance, no matter where they sit in the overall hierarchy.
Tag pages give you the flexibility to organise and group content together to suit the additional interests and browsing intents of your users, where your main navigation might be limited. They should be dictated by your taxonomy, with business focus and user insights feeding into the tags used.
Tagging works as a shortcut for grouping topically relevant pages without being limited by the hierarchical structure of a website.
Page tagging can help feed into the automatic population of ‘you may also like’ sections which lead users to other relevant content. This helps them consume more of your site’s content, increasing their time on site and pages per session once they’ve been drawn in by an article on a topic that they’re interested in.
It’s important to be smart with your use of tagging. When used correctly, it can serve as a secondary topical navigation through your website’s main architectural ‘bucket’ categories. When used incorrectly, tagging can be an SEO’s worst nightmare. Tag pages can be one of the main offenders which contribute to index bloat.
Think about what would happen for a large editorial site that adds a tag for every possible keyword or subtopic. Each of those tags becomes a landing page, and each of those landing pages can be served to users as thin content or waste search engines’ crawl budget.
Go back to your taxonomy mapping and any card sorting conclusions you came to if you surveyed user expectations for your site’s categorisation. Then group your tags by the topics that users expect, want and search for in their own terms.
Page tagging should associate individual pages with, and compile them on, a landing page that has its own URL, rather than appending a tag as a parameter to the individual page’s URL. Here’s an example of what a tag URL should look like:
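As an illustrative sketch (the domain and tag name below are hypothetical):

```
# A dedicated tag landing page with its own clean URL:
https://www.example.com/tag/site-architecture/

# Rather than a tag applied as a parameter on an individual page:
https://www.example.com/blog/some-article?tag=site-architecture
```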
This helps keep URLs clean and canonical, as URL parameters can get out of control very quickly, especially for large sites.
The best ways to internally link pages together
Now we’ve walked through how to gather together the pages on your website into understandable and relevant categories, let’s examine how to internally link these pages together in a way that positively impacts user journeys and search engine crawlers.
Internal linking is one of the most crucial elements of site architecture because it determines the structure of a site and directly impacts page depth. This means internal linking will decide which pages lie close to the surface of a website and are, therefore, more easily discoverable for users and search engines.
Internal links can also result in increased indexing of key content and positive user experiences when implemented correctly. However, when links break or aren’t used effectively, this can create roadblocks to a user’s journey, resulting in search engines not indexing your most important pages. It’s crucial to audit your internal linking structure regularly. To get started, take a look at this guide on handling broken and redirected links, and monitor your internal links using your crawling tool of choice.
Something to watch out for within your site architecture is orphaned pages. These are pages that exist on your website but have no internal links pointing to them. Even if they receive traffic, if they aren’t linked to internally then they risk being dropped from Google’s index regardless.
Orphaned pages may be noindexed.
-John Mueller, Google Webmaster Hangout
Orphaned pages can be a great source of opportunities and quick wins. Users may be linking to and engaging with these pages, but if they sit in isolation then the rest of your site won’t receive any benefits from them. Use a crawling tool to link up as many different data sources as possible, such as analytics data, Search Console data, backlink data and log file data, as this will broaden your scope for identifying orphaned pages.
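Once those data sources are combined, finding orphans is essentially a set difference: any URL known to analytics, sitemaps or log files that never appears as an internal link target is a candidate. A minimal sketch in Python (the URLs and data here are invented for illustration):

```python
# Hypothetical data: URLs known from analytics, sitemaps or log files,
# and URLs discovered as internal link targets during a crawl.
known_urls = {
    "/products/widgets/",
    "/blog/widget-guide/",
    "/blog/forgotten-post/",   # receives traffic but is never linked to
}
crawled_link_targets = {
    "/products/widgets/",
    "/blog/widget-guide/",
}

# Orphan candidates: known to other data sources, but never linked internally.
orphan_candidates = sorted(known_urls - crawled_link_targets)
print(orphan_candidates)  # → ['/blog/forgotten-post/']
```

The broader the set of data sources feeding `known_urls`, the more orphans this comparison will surface.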
Once you’ve found these pages, link to them wisely using some of the different methods covered in this section of the guide.
Due to the importance of internal links, they should be used with care and only when relevant. Internally linking to as many pages as possible just to flatten your architecture is widely advised against.
Limit the number of links on a page to a reasonable number (a few thousand at most).
-Google Webmaster Guidelines
Internal links can make or break a site’s architecture, so use them wisely.
Here are some internal linking tips that Cyrus Shepard has seen produce the best results firsthand:
There are commonly two site architecture techniques which I typically see deliver the biggest SEO impact, depending on the site and its existing structure.
The first is optimizing internal linking, which includes the site's navigation structure. For large sites, this can be particularly challenging. Determining which pages should be included in a large faceted navigation usually isn't too hard, but deciding what should be included in a sitewide navigation can throw many experienced SEOs for a loop. Getting it right—typically by examining user behavior and search data—can yield big gains.
For smaller sites—or sites that already take advantage of an optimized navigation and flattened architecture—hub pages represent a best case opportunity. Hub pages enjoy good internal linking, either naturally—as in the case of category pages—or through manually building internal links. The "art" of hub pages is turning them into something uniquely valuable to users, and using that value to spread topical relevance (and ultimately, traffic) throughout the site.
Every site is unique and will have its own architecture requirements, however, there are some overarching rules that will apply to most websites.
If you want your site to get as much interest and as many visitors as the Egyptian pyramids, you should have a site architecture like the pyramids. Here are some tips on how to achieve this:
- Don't put all links on the homepage.
- Use subfolders instead of subdomains. If you are running an e-commerce website, this URL architecture will work well. E.g. Homepage > Category Page > Sub-category Page > Product Page.
- Don't forget to use breadcrumbs. These improve user experience and Googlebot crawling.
- Users should be able to find what they’re searching for in just three clicks. No more.
Passing authority and context with links
Internal links pass a variety of different signals and information with them. That’s why they’re so powerful in SEO. An internal link will transfer importance, value, authority and context to the destination URL.
Linking related pages gives Google context.
-John Mueller, Google Webmaster Hangout
An internal link conveys both the importance and contextual meaning of its destination URL, depending on how it is used.
A page that is linked to from the homepage and other key pages on a website will cause Google to take notice and give that page more visibility in the search results. If you’re making a page more prominent and easily findable on your site with your internal linking, this has the knock-on effect of telling Google that that page is relevant and important for users.
Internal links are also the channels through which PageRank flows through a website. They determine which pages receive the benefits of link equity from external pages, and whether or not it reaches the most important pages.
Add internal links to pages with backlinks to pass authority.
-John Mueller, Google Webmaster Hangout
Structuring a website to communicate page importance through internal linking is one of the main signals which influences which pages Google chooses to canonicalise. To learn more about the factors that influence Google’s determination of page importance, take a look at these slides on conflicting website signals.
Internal links play an essential role for SEO, as they have to tell both the user and the search engine what the target page is about within the limited window of its anchor text. Navigational links must use descriptive keyword-focused anchor text to tell users and search engines exactly what kind of page they can expect to visit next.
One of our key concepts here at Portent is the Blank Sheet of Paper Test; that is, if you were to write something on a blank piece of paper, would someone know what you're talking about? This is crucial for site architecture and internal linking. A link's anchor text (or image alt text if you're linking with an image) is like the words on a road sign. Users and crawlers need to know what they're going to find if they follow the link! This is especially the case for navigational links. Don't forget to apply the Blank Sheet of Paper Test to your URLs as well!
Here are some tips on auditing your internal links and anchor text to make sure they’re driving as much value as possible for your business.
We all know that internal links give search engines and users context as they progress through a website, but I find that something often overlooked by SEOs is an auditing process for internal linking in general, and internal link anchor text specifically. Something I like to do on a semi-regular basis is export the report detailing all my unique internal links from DeepCrawl so I can analyse internal anchor text, the pages where they are placed and the target URLs for those links.
The purpose of this exercise is to identify gaps in internal linking – both in terms of clearly signposting the user journey and trying to improve rankings through diversifying the internal link anchor text. For example, you can look at your ranking report in GSC for a keyword at the top of page two then try and find mentions of that keyword in your internal anchor text report. If there aren’t any, a quick way of trying to bump that keyword up onto page one could be to tweak the anchor text of internal links pointing to the relevant landing page so they include that search term – as long as it makes sense for UX.
You can also set up a custom search engine at cse.google.com for your site and find out what pages Google interprets to be most relevant for a particular term. Assuming the landing page you want is at the top of the list for ‘relevancy results’, you can look at the secondary pages Google deems to be relevant to a particular query. You can then cross reference this with your internal link report to ensure there are relevant internal links from these pages to your primary landing page.
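The auditing process described above boils down to grouping anchor texts by target URL and checking whether a target keyword appears in any of them. A rough sketch, assuming an internal link export of (source, anchor, target) rows from a crawler (the data here is invented):

```python
from collections import defaultdict

# Hypothetical export of internal links: (source URL, anchor text, target URL).
internal_links = [
    ("/blog/a/", "credit cards", "/credit-cards/"),
    ("/blog/b/", "click here",   "/credit-cards/"),
    ("/blog/c/", "read more",    "/loans/"),
]

anchors_by_target = defaultdict(list)
for source, anchor, target in internal_links:
    anchors_by_target[target].append(anchor)

# Does any internal anchor pointing at the landing page mention the keyword
# we want it to rank for?
target = "/loans/"
keyword = "loans"
has_keyword = any(keyword in a.lower() for a in anchors_by_target[target])
print(has_keyword)  # → False: the only anchor is the generic 'read more'
```

A `False` here flags a gap where tweaking anchor text to include the search term could help, as long as it makes sense for UX.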
Analysing click depth and page distance
The depth of a page within a website refers to how far down it sits within the overall architecture. Page depth is defined by the number of levels within a site, and these levels are determined by the number of clicks needed to navigate between pages.
Level 1 is the homepage, and any pages linked to from it will be on level 2. Any pages linked to from pages on level 2 will be on level 3, and so on. Each click takes you a level deeper within a site’s architecture. Some define a homepage as being level 0, however, so bear this in mind when comparing advice on site levels between different sources.
As explained by Rand Fishkin during a Whiteboard Friday, you can reach as many as 1 million potential pages in three clicks if you include 100 unique links on each page.
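The arithmetic behind that claim is simple exponential growth: with a uniform number of unique links per page, the number of pages reachable at a given click depth is `links_per_page ** clicks`. A quick sketch:

```python
links_per_page = 100

for clicks in range(1, 4):
    reachable = links_per_page ** clicks
    print(f"{clicks} click(s): up to {reachable:,} pages")

# 1 click(s): up to 100 pages
# 2 click(s): up to 10,000 pages
# 3 click(s): up to 1,000,000 pages
```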
Here’s an example of a website that has a healthy number of pages within three clicks of the homepage relative to the total number of pages. The highest number of pages found on an individual level is at level 4, which takes three clicks to arrive at.
This is the section of the website’s crawl depth where your most important pages should sit, internally linked in a way that complements your website’s taxonomy and hierarchy.
On this other website, you can see that most of the primary pages are accessible within four clicks, but there are also some that appear at levels 9 and 14. The website owner would need to take a look at how important these pages are and whether or not they need to be moved higher up in the website’s crawl depth by adding internal links to them from a higher-level page.
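Crawl depth reports like these are produced with a breadth-first search from the homepage over the internal link graph: the homepage is level 1, and each click adds a level. A minimal sketch on an invented link graph:

```python
from collections import deque

# Hypothetical internal link graph: page -> pages it links to.
links = {
    "/": ["/category/", "/about/"],
    "/category/": ["/category/product-1/", "/category/product-2/"],
    "/about/": [],
    "/category/product-1/": ["/category/deep-page/"],
    "/category/product-2/": [],
    "/category/deep-page/": [],
}

def crawl_depths(start="/"):
    """Breadth-first search: level 1 is the homepage, each click adds a level."""
    depths = {start: 1}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:  # keep the shallowest level found
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

print(crawl_depths()["/category/deep-page/"])  # → 4 (three clicks from the homepage)
```

Pages whose computed level is unexpectedly high are the candidates for extra internal links from higher-level pages.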
You might think that all the key pages on your website need to be only three clicks away from a user at any given time. However, the UX community have largely disproved the ‘three-click rule,’ which is a belief that users will abandon a website if they can’t find the information they want within three clicks. While there’s no official rule on this, it is best practice to make sure key pages are included as high up in a site’s architecture as possible.
Place links to important pages higher in a site’s hierarchy.
-John Mueller, Google Webmaster Hangout
Similarly to users being impacted by click depth, search engines are affected by site depth. Important pages should be as near to the root, or homepage, as possible to make pages more findable and indicate page importance for search engines. This aids crawl efficiency, helps reduce crawl traps and gets pages crawled and indexed more quickly.
Link new content high up in a site’s architecture.
-John Mueller, Google Webmaster Hangout
Since each site has a crawl limit, making your content hard to find can hurt your organic exposure. According to “The Art of SEO” by Eric Enge, Stephan Spencer and Jessie Stricchiola, “For nearly every site with fewer than 10,000 pages, all content should be accessible through a maximum of four clicks from the home page and/or sitemap page.” While many people want to minimize their design and navigation, they often do so at the expense of a clear structure. By optimizing your site structure, you make it easier for crawlers to find, crawl and index your site. This can increase your site’s crawl budget and give your site more exposure in search.
The higher up a page sits in a site’s click depth or crawl depth, the more findable it will be for users and search engines.
The different navigation methods
One of the best methods for keeping important pages within easy reach of both users and search engines is with a clear navigation. Global navigation or mega menu links are weighted more heavily than other types of links, so they need to be used with great care. In this section of the guide we’ll discuss how to implement an effective navigation and the different methods you could use.
For users starting from a website’s homepage, the navigation will set out the most important pages on the website and help them to get there easily. For users landing on any other page, the navigation will still be able to orientate a user to the website’s overall content offering and keep them moving forward in their journey to achieve their intent.
A user should never have to go “back” in order to go forward. So make sure your navigation and categorical pages are available from every page, especially knowing that, for organic search, a user can enter your site and the journey at any level.
-Search Engine Watch
A website’s navigation acts as the anchor which keeps a user from being swept away by all of the different pages on a website and getting lost.
Here are some of the most common ways of structuring the navigation of a website, featuring examples by Caleb Cosper:
- Single-Bar Navigation
- Double-Bar Navigation
- Dropdown Navigation
- Double-Bar Dropdown Navigation
- Dropdown Navigation with Flyouts
- Mega Menus
- Responsive Subnav Menus
- Secondary Left Navigation
- Footer Navigation
Single-Bar Navigation
The single-bar navigation contains all of the navigation links within one row.
- The items are easily crawlable for search engines.
- The links are clear for users, with no risk of them getting buried.
- A minimal number of links means link equity isn’t spread too thin.
- There is a very limited number of links you can include.
- Only a small website taxonomy can be presented to users.
- Topic grouping is limited without dropdown options.
Double-Bar Navigation
The double-bar navigation features links on two separate rows. They can either be defined as the primary and secondary navigation, or can both be categorised as the main global navigation.
- There is more space to include links.
- Key pages are still clearly outlined to users.
- More of the website’s taxonomy can be presented than in the single-bar navigation.
- There is still a small number of items linked, so link equity isn’t diluted too much.
- Double-bar navigation design can appear cluttered.
- Topic grouping is limited without dropdown options.
Dropdown Navigation
The dropdown navigation contains lists of links which appear vertically underneath each main category when clicked on or hovered over.
- A dropdown navigation can include many more links than a single or double-bar navigation.
- Taxonomies and page groupings can be clearly displayed for users.
- Be aware that the ‘hover’ action doesn’t work for users on touchscreen devices.
- Link equity can be spread too thinly if too many links are included.
Double-Bar Dropdown Navigation
The double-bar dropdown navigation is similar to the standard dropdown navigation, but the dropdown menus appear on two different rows.
- More links and subcategory pages can be included.
- There is more space to map out the taxonomy and page groupings of a site.
- Unless the dropdown is coded properly, the top dropdown might not retract and could block the bottom dropdown.
- Link equity could be spread too thinly as this method could include twice as many links as the traditional dropdown navigation.
Dropdown Navigation with Flyouts
The flyout menu is a version of the dropdown navigation which works horizontally after the initial list of URLs has appeared. The submenus will ‘fly out’ when an item is clicked on or hovered over.
- The hierarchy of content is displayed clearly.
- More links to categories deeper in the architecture can be included.
- Links can be hidden from search engines if they aren’t coded in HTML.
- Users can find flyout navigations frustrating to use as the whole menu will often disappear if the mouse moves away from the list items.
- Users with tremors or impaired dexterity may not be able to use flyout navigation at all.
- The ‘hover’ action isn’t possible for users on touchscreen devices.
Mega Menus
A mega menu is a dropdown which shows a number of different subcategory lists all in one area when a top category is clicked on or hovered over.
- Users are able to get a better sense of your website’s range and content offering.
- Can include a large number of links to related pages.
- There is space to include images to further convey the context of a category to users.
- Users can be overwhelmed by too much choice.
- Mega menus don’t work well on mobile devices due to limited space on the screen.
- Mega menus should only be used if a website has enough useful categories or content to populate them.
- Search engines may not be able to crawl and index these links if they’re not served in the HTML.
- When used incorrectly, mega menus can flatten site architecture too much and reduce the ability to create a hierarchy of content.
Linking to every page from a site's homepage will stop Google from understanding a site’s architecture.
-John Mueller, Google Webmaster Hangout
Oh gosh, totally dependent on the site. But a few key principles:
1. Mega menus should really only link to top performing pages, not every page. Trim the cruft
2. Should be used for navigation, not promotion
3. Group links by topic
4. Can work well, especially for authority sites
— Cyrus (@CyrusShepard) November 9, 2018
Responsive Subnav Menus
Responsive subnav menus use toggle icons to allow a user to expand and collapse particular items vertically within the same list.
- Responsive subnav menus can work well on mobile devices.
- Having submenus listed in one place allows users to survey the whole landscape and taxonomy of the site.
- Users may not know that the menu items are expandable unless there is a clear icon or visual cue.
- With more subcategories displayed, the remaining categories will be pushed further down below the fold.
- Collapsing an expanded list isn’t always intuitive and can cause a negative user experience.
Secondary Left Navigation
A secondary left navigation works as a way of helping users navigate between related topics or pages within a set. The links are displayed vertically, with the option for expandable and collapsible subnav menus to be included as well.
- Works well on article or guide pages to split up long pieces of content into more navigable sections.
- Gives clear guidance to users on what page they’re on and which page is ‘next’ in the series.
- The links may not be crawled or indexed by search engines if they’re not served in the HTML.
- Including too many links in the navigation as well as having a large mega menu, for example, can spread link equity too thinly.
Footer Navigation
The footer navigation includes lists of links at the bottom of a page’s template. This section is usually used to highlight company information.
Add links to site footer that are useful for users not Google.
-John Mueller, Google Webmaster Hangout
- More space to include additional links to key pages.
- Provides users more options for discovering your pages.
- Users may not get to the bottom of a page to see these links, so links to important pages should also be included above the fold where they can be found easily.
- Footer links are devalued by Google.
- Including too many footer links can be seen as manipulative.
Whichever navigation methods you use, test them to confirm that search engines will be able to crawl your pages effectively and that users are able to navigate your site with ease. Use a crawling tool and an analytics tool to measure this, and use a CRO tool with click mapping or session recording functionality as well if you have access to one.
Test navigation models to reduce click depth and improve UX.
-John Mueller, Google Webmaster Hangout
How to utilise hub pages
Hub pages are commonly used as a way to answer users’ questions by providing a central resource for navigating between relevant and contextually linked content. Once you’ve done the research into who your users are and what they find engaging, you can create compelling hub pages that improve user experience by allowing users to explore content topically.
A hub page is based on one main page and supplementary pages. These supporting landing pages will be featured and linked to from the main hub page and will dive deeper into the main topic. It’s recommended to link back to the hub page from the supplementary pages to show their connection clearly to search engines and provide strong signals on the relation between the pages. This will demonstrate that the hub page is the most authoritative resource on that particular topic, which will make things more straightforward when it comes to rankings.
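The reciprocal linking described above can be audited mechanically: check that the hub links out to every supplementary page and that each supplementary page links back to the hub. A small sketch with invented URLs:

```python
# Hypothetical link graph: page -> set of pages it links to.
links = {
    "/guides/seo/": {"/guides/seo/architecture/", "/guides/seo/speed/"},
    "/guides/seo/architecture/": {"/guides/seo/"},
    "/guides/seo/speed/": set(),  # missing the link back to the hub
}

hub = "/guides/seo/"
spokes = ["/guides/seo/architecture/", "/guides/seo/speed/"]

# Supplementary pages that fail to link back to the hub page.
missing_back_links = [s for s in spokes if hub not in links.get(s, set())]
print(missing_back_links)  # → ['/guides/seo/speed/']
```

Any page flagged here is weakening the signal that the hub is the authoritative resource on the topic.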
If you have a variety of different articles on a similar topic, this may cause confusion when deciding which one to link to internally. This is where the hub page comes in: it provides a core page to link to internally, which also gives that page a better chance of ranking by consolidating your efforts in one place.
Hub pages also help to better distribute linking authority across topically relevant pages rather than just those that are newest, or however a website presents its content by default.
Blogs work like a conveyor belt, constantly giving the most internal link equity to the newest content and removing it from older articles.
Content hubs help users get off the never-ending conveyor belt of chronological articles and explore the content they want to see in their own time.
Here are some examples of how brands are utilising hub pages:
These are some of the key things to consider for creating a successful hub page:
- Avoid straying too far from the core topic with the content and links you’re including.
- Link to your hub page from the homepage or main navigation to make it more findable for users and search engines.
- Hub pages must make sense for both users and search engines.
- Ensure your hub page stays relevant by adding new, valuable content whenever possible so Google sees it as an up-to-date resource.
- Include an internal search box somewhere on your hub page to improve UX and provide invaluable data on what your customers are looking for. This is just one of the ways in which you can leverage internal site search data.
It’s clear that the websites that succeed are those that structure their pages to suit the expectations of users in a way that is understandable and crawlable for search engines. A successful site architecture focuses on matching user intent at all times, as well as increasing the findability of the most important pages on a website by keeping them high up in the overall site depth.
One of the most important things to be mindful of is that site architecture optimisation should be an evolving process. If you have mapped out the different page types that are needed on your website, categorised them with information architecture best practice and user insights in mind, and internally linked them together to maximise the findability of key pages, that’s brilliant.
However, your business, your customers and the search engines themselves will develop and change over time, and the site architecture implementation you have now may not work as well further down the road. You need to continually assess your business objectives, the content you need to both fulfil those objectives and engage your target audience, as well as how to structure those pages for smooth user journeys and efficient search engine crawling.
Access the Downloadable PDF Version of this Guide