The announcement came alongside significant changes to the Search Console Help "Guide to AJAX crawling for webmasters and developers", and Google says the new process could also help other search engines crawl AJAX sites.
What does this mean for SEOs?
Although Google’s ability to crawl AJAX websites is a significant development, it doesn’t come without limitations, as outlined below.
John Mueller did admit that it can be very difficult to find the cause of an issue at the moment, so if you really can’t track down the problem, he suggests pre-rendering as a temporary solution until Googlebot is able to pick the content up properly.
Make sure that Google can crawl all of your CSS and JS files
This now applies to both desktop and mobile pages. The mobile requirement exists so that Google can assess whether your page is mobile-friendly.
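As a quick illustration, here is a sketch of the kind of robots.txt rules that keep script and style assets crawlable (the /assets/ paths are hypothetical placeholders for your own site structure):

```
User-agent: *
# Allow crawlers to fetch the assets needed to render the page
Allow: /assets/js/
Allow: /assets/css/

# A common mistake is blanket-blocking script or style directories,
# e.g. "Disallow: /js/" — this prevents Googlebot from rendering the page
```

You can check the effect of your rules with the robots.txt Tester and the Fetch and Render tool in Search Console.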
Fetch and Render views might not be consistent if JS/CSS is disallowed
The Fetch and Render tool shows you two different renders: one for Googlebot, which uses the Googlebot user agent, and one for users, which uses a browser user agent. If JS/CSS is disallowed for Googlebot, Google may not be able to render the content consistently across the two views.
Google’s official advice seems to be against pre-rendering for Google only
This is seemingly due to concerns that the pre-rendered content might differ from the version seen by users. However, John Mueller stated in a recent Google Webmaster Hangout that pre-rendering is OK and that Google can use it.
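If you do decide to pre-render, serving the same snapshot to every user agent avoids the mismatch Google is concerned about. Here is a minimal Node.js sketch of the idea, assuming a hypothetical getPrerenderedSnapshot() helper that returns cached HTML generated by a headless browser at publish time:

```js
const express = require('express');
// Hypothetical helper: returns cached, pre-rendered HTML for a URL path,
// or null if no snapshot exists.
const { getPrerenderedSnapshot } = require('./prerender');

const app = express();

app.get('*', (req, res) => {
  // Serve the same pre-rendered snapshot to every user agent,
  // so Googlebot and users see identical content.
  const html = getPrerenderedSnapshot(req.path);
  if (html) {
    res.type('html').send(html);
  } else {
    // Fall back to the normal AJAX-driven page if no snapshot exists.
    res.sendFile(`${__dirname}/public/index.html`);
  }
});

app.listen(3000);
```

The key design point is that nothing branches on the user agent: users and crawlers receive identical markup, which is exactly what Google’s guidance is trying to ensure.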
- Google Search Console Help: Guide to AJAX crawling for webmasters and developers
- How to Crawl AJAX Escaped Fragment Websites
- Taming Parameters: Duplication, Design, Handling and Useful Tools
- Angular JS and SEO