URL Removal Tool in GSC is Fastest Way to Remove a Test Site From Search Results
While there are several ways to remove a staging site from organic search results, including blocking Googlebot from crawling it or returning 404 or 410 error codes, John recommends using the URL Removal tool in GSC to remove it quickly.
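As an illustration of the 410 alternative, here's a minimal sketch, assuming a Flask-based staging server (the framework and catch-all route are our assumption, not John's recommendation), that returns 410 Gone for every URL so Google drops the pages as they are recrawled:

```python
# Minimal sketch: serve 410 Gone for every URL on a staging host so
# Google removes the pages over time. Illustrative only; the URL
# Removal tool in GSC remains the fastest option.
from flask import Flask

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def gone(path):
    # 410 signals the content has been permanently removed, which
    # Google typically acts on slightly faster than a 404.
    return "This staging site has been removed.", 410

if __name__ == "__main__":
    app.run()
```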
Speed Metrics Important for UX Are Different to Metrics Important for Crawling & Indexing
While there is some overlap, the speed metrics that are important for UX are largely distinct from those used for crawling and indexing. For the latter, Google needs to be able to request the HTML pages as quickly as possible, so server response time is particularly important.
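As a rough illustration of the crawling-side metric, here is a hedged Python sketch, using the requests library (our choice of tooling), that times how quickly the raw HTML of a page is returned, separate from rendering metrics like those that matter for UX:

```python
# Rough sketch of the server response time Google cares about for
# crawling: how fast the raw HTML comes back, independent of rendering.
import requests

def html_response_time(url: str) -> float:
    """Return the time in seconds to fetch the raw HTML of a page."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # requests' elapsed covers the interval from sending the request
    # until the response headers have been parsed.
    return response.elapsed.total_seconds()

if __name__ == "__main__":
    print(f"{html_response_time('https://example.com/'):.3f}s")
```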
Disrupted Index Coverage Report Data in April Due to Indexing Bug
Google’s indexing issues occurred from 9th-25th April and data in the Index Coverage report could not be shown during this time. John hopes to write a post clarifying how Google handles these types of situations internally.
Google Being as Transparent as Possible About Visible Indexing Bugs
Google is being as transparent as possible about its recent bugs because they have had a visible impact on indexing. It is important for John and the team to keep everyone updated on these issues so that webmasters don't take actions that could be unnecessary or detrimental to their websites.
Google Doesn’t Differentiate Between UGC & Other Content
Google doesn't differentiate between UGC and the rest of the content on a website, so it is important to control how low-quality UGC is handled.
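One common way to exercise that control is to stop user-submitted links from passing signals. Here's a hedged sketch, assuming BeautifulSoup is available, that adds rel="nofollow" to every link in a block of user-submitted HTML before it is published:

```python
# Sketch of one approach to controlling low-quality UGC: add
# rel="nofollow" to user-submitted links so the page doesn't endorse
# them. BeautifulSoup is an assumption; any HTML sanitiser would work.
from bs4 import BeautifulSoup

def nofollow_user_links(html: str) -> str:
    """Add rel="nofollow" to every link in user-submitted HTML."""
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a"):
        link["rel"] = "nofollow"
    return str(soup)

print(nofollow_user_links('<p>Check <a href="https://spam.example">this</a>!</p>'))
```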
Indexing API Currently Limited to Jobs & Live Video Content in Limited Countries
The Indexing API currently supports only job posting and live video content, and has been rolled out in a limited number of countries.
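For sites that do publish the supported content types, here is a minimal sketch of notifying Google about an updated URL via the Indexing API; the service account key path and the URL are placeholders:

```python
# Minimal sketch of pinging the Indexing API about an updated job
# posting URL, assuming a service account JSON key with the API enabled.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

SCOPES = ["https://www.googleapis.com/auth/indexing"]
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
session = AuthorizedSession(credentials)

# URL_UPDATED tells Google the page is new or changed;
# URL_DELETED would tell it the page has been removed.
response = session.post(
    ENDPOINT,
    json={"url": "https://example.com/jobs/123", "type": "URL_UPDATED"},
)
print(response.status_code, response.json())
```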
Sites Should No Longer be Impacted by Google Deindexing Bug
Google's deindexing issue was fully fixed on 16th April. If pages still aren't being indexed, this is being caused by a different issue.
Pages with Internally Duplicated Content Are Indexed Separately but Folded Together in Search
Google indexes pages containing duplicated blocks of text separately, but works out which of those pages is most relevant for each query and shows just one of them in the search results.
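To find pages that share duplicated blocks of text, one rough approach is to hash each paragraph across a set of URLs and report any hash that appears on more than one page; the sketch below, using requests and BeautifulSoup (our assumption), illustrates the idea:

```python
# Rough sketch of spotting internally duplicated text blocks: hash each
# paragraph on a set of pages and report hashes shared across URLs.
import hashlib
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

def duplicated_blocks(urls):
    """Map each repeated paragraph hash to the URLs that contain it."""
    seen = defaultdict(set)
    for url in urls:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        for p in soup.find_all("p"):
            text = p.get_text(" ", strip=True)
            if len(text) > 80:  # ignore short boilerplate snippets
                digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
                seen[digest].add(url)
    return {h: pages for h, pages in seen.items() if len(pages) > 1}
```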
Google Can Index Pages Blocked by Robots.txt
Google can index pages blocked in robots.txt if they have internal links pointing to them. In a scenario like this, Google will likely use a title from some of the internal links pointing to the page, but the page will rarely be shown in search because Google has very little information about it.
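You can check whether a URL is disallowed for Googlebot using Python's standard library robots.txt parser; the key point is that a Disallow rule only stops crawling, not indexing, which is why such pages can still be indexed via internal links. The URLs below are illustrative:

```python
# Quick check of whether a URL is blocked by robots.txt. Disallow only
# prevents crawling; a blocked URL can still be indexed from links.
import urllib.robotparser

parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

blocked = not parser.can_fetch("Googlebot", "https://example.com/private/page")
print("Blocked from crawling:", blocked)
```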