JavaScript SEO 

Nowadays more than 97% of all sites use some kind of JavaScript. This means that Google's regular crawler can't see what happens in those areas, because JavaScript is executed on the client side: the changes happen within the browser, not on the server. So Google misses all of that, which is why it is essential to learn JavaScript SEO.

The crawler that Google built ages ago has of course been updated frequently, but it is still not capable of rendering a website. That means that if Google hadn't done something about it, they'd be missing out on all the changes happening in the frontend, on the client side.

So, one of the things Google had to do was build a crawler capable of executing and rendering client-side code, mainly JavaScript. The goal was for Google to understand what a user would be presented with in a modern web browser – Google wanted to see that as well while crawling your website.

So essentially, in the past when you looked at the HTML markup, you saw what the crawler saw.

Now it is entirely different. If you look at a website which is using a client-side JS framework, you only see some very cryptic markup, not the real content itself. If you render the site, on the other hand, the content is injected dynamically. That is what Google was concerned about – that they would eventually miss important content on the web.
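
To illustrate, here is a minimal, hypothetical sketch of what the raw HTML source of a client-side rendered page can look like (file names and content are made up). The real content only appears after the bundled JavaScript runs in the browser:

    <!-- What the classic text-based crawl fetches: an almost empty app shell -->
    <!DOCTYPE html>
    <html>
      <head>
        <title>Red running shoes</title>
      </head>
      <body>
        <div id="root"></div> <!-- no content in the markup itself -->
        <script src="/static/app.bundle.js"></script>
      </body>
    </html>

    // Simplified idea of what app.bundle.js does: the content is injected client-side
    document.getElementById('root').innerHTML =
      '<h1>Red running shoes</h1><p>In stock and ready to ship.</p>';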

So let's have a look at the process that is happening right now: Currently, the old classic crawl is still going on. Based on that, there is an instant first wave of indexing using the classic crawl data. As more resources become available, Google renders that same website and adds the data taken from the rendering process to the information it collected initially. So, in a nutshell, they still do the regular, old-fashioned text-based crawl, and then on top of it they run the JS rendering to see what's going on there as well, in case there is something "hidden" that they might have missed with the initial crawl.

Client-side JS means extra work for Google. The process has multiple steps, and this second wave is slower, which ultimately leads to delayed indexing. The optimal scenario is that the main content and all critical links are directly available in the HTML source. rel=canonical, rel=amphtml, etc. should be in the markup as well, so that Google picks them up straight away. JS can and should further enhance a page's functionality, but not replace it.
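
As a rough sketch of that optimal scenario, the important signals and content sit directly in the HTML source, and JavaScript only enhances the page (the URLs and file names below are placeholders):

    <head>
      <link rel="canonical" href="https://www.example.com/red-running-shoes/">
      <link rel="amphtml" href="https://www.example.com/red-running-shoes/amp/">
    </head>
    <body>
      <!-- Main content and critical links are already in the markup -->
      <h1>Red running shoes</h1>
      <a href="/category/running/">All running shoes</a>
      <!-- JavaScript only adds extra functionality on top -->
      <script src="/static/enhancements.js" defer></script>
    </body>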

Also, it's important to understand that Google right now is using a very old version of Chrome, Chrome 41, to render your site. This version was released back in March 2015, so it is ancient in browser terms.

If you compare the features of Chrome 41 with Chrome 66, you will see significant differences. Even if you debug in your current browser and everything works well, there may still be differences because Google keeps using a much older version. So if you work on JavaScript SEO, it would be wise to run an old version of Chrome with its Developer Console on your local machine to understand what is going on there.
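
One practical consequence: Chrome 41 predates several newer browser APIs (it does not ship the fetch API, for example), so code that relies on them without a fallback can fail silently during rendering. A minimal defensive sketch, written in old-style ES5 so that an old browser can parse it:

    // Feature-detect a newer API before relying on it, and fall back to an older one.
    function loadContent(url, onDone) {
      if (window.fetch) {
        fetch(url)
          .then(function (response) { return response.text(); })
          .then(onDone);
      } else {
        // XMLHttpRequest also works in old Chrome versions
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = function () { onDone(xhr.responseText); };
        xhr.send();
      }
    }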

Also, Google has a Rich Results Testing Tool that shows the computed DOM. If you combine that with the regular markup, you can take something like diffchecker.com and compare markup vs. computed DOM to see what the major differences are. On the left-hand side you have the HTML source, on the right-hand side you have the computed DOM from the rich results tool. Now you can easily spot differences and start debugging and understanding what's wrong and what isn't.
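
A quick way to get both sides of that comparison (a sketch; copy() is a Chrome DevTools console utility, not standard JavaScript, and the URL is a placeholder):

    // In the DevTools console of the fully loaded page:
    // copies the computed DOM to the clipboard so you can paste it into diffchecker.com
    copy(document.documentElement.outerHTML);

    // The raw HTML source for the other side of the diff can be taken from
    // view-source:https://www.example.com/ or e.g. curl https://www.example.com/ > source.html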

There is also great research by Elephate about JS frameworks and their compatibility with SEO. If you work with JavaScript frameworks, be sure not to miss it.

Our key findings from dealing with JavaScript SEO

  • Google algorithms try to detect if a resource is necessary from a rendering point of view. If not, it probably won’t be fetched by Googlebot.
  • Although the exact timeout is not specified, it’s said that Google can’t wait longer than 5 seconds for a script to execute.
  • If your site is slow, Google might have issues related to rendering your content – this can also slow down the crawling process.
  • Use Google's Mobile-Friendly Test – it can show you the rendered DOM plus the errors that Google encountered while rendering your page.
  • Don’t use the cache for JS sites. What you see when clicking on the cache is how YOUR browser interprets the HTML “collected” by Googlebot. It’s totally unrelated to how Google rendered your page.
  • Generally, you should make sure that any internal and external resources required for rendering are not blocked for the Googlebot.
  • Remember that Googlebot is not a real user, so you can assume that it doesn't click and doesn't fill in forms. Re-visit “onclick” events, e.g. “show more” CTAs, menus, etc. (see the sketch after this list).
  • When you want to use canonical tags, make sure they are placed in plain HTML/X-robots tags. Canonical tags injected by JavaScript are considered less reliable, and the chances are that Google will ignore them.
  • JavaScript SEO is a moving target so keep in mind that things change on a daily basis.
  • There is a great guide to JS SEO from ELEPHATE, you should familiarize yourself with it.
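
To make the “onclick” point above more concrete, here is a hedged sketch: if important content only appears after a click handler runs, Googlebot will most likely never see it, so ship it in the HTML and let JavaScript merely toggle its visibility (the IDs and copy below are made up):

    <!-- The full text is in the markup; the button only toggles visibility -->
    <div id="description">
      <p>Short teaser paragraph that is always visible.</p>
      <div id="more" hidden>
        <p>The rest of the description, present in the source for crawlers.</p>
      </div>
      <button id="show-more">Show more</button>
    </div>
    <script>
      document.getElementById('show-more').addEventListener('click', function () {
        document.getElementById('more').hidden = false;
      });
    </script>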

Suggested Article

  1. Ultimate Guide To JavaScript SEO
