In this guide we’ll introduce how search engines work, covering the processes of crawling and indexing as well as concepts such as crawl budget and PageRank.
Search engines work by crawling hundreds of billions of pages using their own web crawlers, commonly referred to as search engine bots or spiders. A search engine navigates the web by downloading web pages and following the links on those pages to discover newly available pages.
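The discovery loop described above can be sketched as a breadth-first traversal. This is a simplified illustration, not a production crawler: the in-memory `PAGES` dictionary is a hypothetical stand-in for fetching a page over HTTP and extracting its links.

```python
from collections import deque

# Hypothetical in-memory "web": URL -> outgoing links.
# A real crawler would fetch each page and parse its HTML for <a href> links.
PAGES = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/c"],
    "https://example.com/c": [],
}

def crawl(seed):
    """Breadth-first discovery: download a page, then queue its links."""
    seen = {seed}
    frontier = deque([seed])
    discovered = []
    while frontier:
        url = frontier.popleft()
        discovered.append(url)
        for link in PAGES.get(url, []):  # stand-in for fetch + link extraction
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return discovered

print(crawl("https://example.com/"))
```

In practice a crawler also respects politeness rules (robots.txt, rate limits) and prioritises which URLs to fetch next within its crawl budget.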
The Search Engine Index
Webpages that have been discovered by the search engine are added into a data structure called an index.
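A common data structure used for this purpose is an inverted index, which maps each term to the set of URLs containing it. The sketch below, with a hypothetical `DOCS` dictionary of page contents, shows the basic idea; a real search engine index stores far more signals per URL.

```python
from collections import defaultdict

# Hypothetical page contents keyed by URL.
DOCS = {
    "https://example.com/coffee": "how to brew great coffee",
    "https://example.com/tea": "how to brew tea",
}

def build_index(docs):
    """Map each term to the set of URLs whose text contains it."""
    index = defaultdict(set)
    for url, text in docs.items():
        for term in text.lower().split():
            index[term].add(url)
    return index

index = build_index(DOCS)
print(sorted(index["brew"]))    # both pages mention "brew"
print(sorted(index["coffee"]))  # only the coffee page
```

Looking up a query term is then a fast dictionary access instead of a scan over every stored page.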
The index includes all the discovered URLs along with a number of key relevance signals about the content of each URL, such as:
What Is the Aim of a Search Engine Algorithm?
The aim of a search engine algorithm is to present a relevant set of high-quality search results that will fulfil the user’s query as quickly as possible.
The user then selects a result from the list, and this action, along with their subsequent activity, feeds into future learning that can influence search engine rankings going forward.
What Happens When a Search Is Performed?
When a user enters a search query, all of the pages deemed relevant are identified from the index, and an algorithm is used to rank them into an ordered set of results.
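One ranking signal mentioned earlier is PageRank, which scores a page by how much "rank" flows to it from the pages linking to it. Below is a minimal power-iteration sketch of the classic algorithm on a toy link graph; real ranking combines hundreds of signals, and this is an illustration only.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute each page's rank across its outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the "teleport" term).
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
        rank = new
    return rank

# Toy link graph: both "about" and "blog" link to "home".
graph = {"home": ["about"], "about": ["home"], "blog": ["home"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "home" accumulates the most rank
```

The intuition: a link acts as a vote, and votes from highly ranked pages count for more, so the most linked-to page ("home" here) ends up on top.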
In addition to the search query, search engines use other relevant data to return results, including:
Why Might a Page Not Be Indexed?
There are a number of circumstances where a URL will not be indexed by a search engine. This may be due to:
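One common cause is that crawling of the URL is blocked by the site’s robots.txt file. You can check this yourself with Python’s standard `urllib.robotparser`; the robots.txt content below is a hypothetical example (normally it would be fetched from the site root).

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking everything under /private/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("*", "https://example.com/private/page"))  # False
print(parser.can_fetch("*", "https://example.com/public/page"))   # True
```

Note that robots.txt only blocks crawling; other mechanisms, such as a noindex meta tag, tell the search engine not to index a page it can crawl.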