Website Performance Optimization

Introduction to Page Speed 

Google has been pushing for fast-loading sites for a while now. From my perspective, though, website performance optimization is mostly a user experience topic. Granted, it helps with SEO, delivering better, faster and more efficient crawling and indexing. But you should build a fast-loading site because you care about your users first and foremost and you don’t want them to wait.

According to a Nielsen report, 47% of people expect a website to load within two seconds, and 40% will leave a website if it does not load fully within three seconds.

There is not much room for error, really; 100 ms can make a big difference. Amazon, for example, found that every 100 ms of improved load time was worth roughly 1% in revenue. These are really impressive numbers.

There are some numbers available in Google Search Console (GSC), specifically "time spent downloading a page", which measures the time to complete an HTTP request. It's an average across files such as CSS, JS and others; thus the number is deeply flawed and can only really be used to spot trends.

Tools to check website page loading speed

In the last few years, Google has been heavily promoting its PageSpeed Insights tool. You plug in a given URL, and it gives you a very rough overview of where you stand. The tool returns a score from 0 to 100 and makes some recommendations that you might or might not follow, depending on what it finds on your site. These recommendations are only partially actionable. Plus, it’s often unclear what the ROI of an improvement from, say, 87 to 88 would actually be. So yes, it’s a starting point – but that’s about it.

Another solution is WebPageTest.org, and it actually has everything you need to start performance optimization work. It gives you lots of different metrics, information about compression and caching, and specific recommendations on how to optimize your images. It also has a super-detailed waterfall diagram where you can see how the page loads and how the resources depend on each other, and it visualizes details such as DNS lookups, blocking periods and much, much more. There is even a filmstrip view – a video where you can watch your site build up and spot rendering issues straight away. And it’s actually free.

The new kid in town is called Lighthouse. It is a Google tool built into the Chrome browser, with a specific focus on auditing mobile performance. That makes sense, as page speed is still a ranking factor. Google also recently announced that by mid-2018 the mobile version of your site would be the one used for performance measurement and scoring. So where your score used to be based on your desktop site, this is changing now: Google will look at your mobile performance and score and rank you accordingly.

Page speed is a ranking factor, so keep in mind that it is mostly about how fast everyone else is: am I slower than the rest? If so, there is a problem.

Also, please keep in mind that website performance optimization and measurement is not a one-off project; you must do it continuously over time. Ideally, you want to compare yourself against your competitors to understand what’s going on in the industry and to see how you perform against them. On an enterprise level, there are tools like SpeedCurve, which can help significantly with that.

If you want to get started with a free tool, there is also the Performance Dashboard from sitespeed.io, which can essentially do the same thing. It is really powerful to just see trendlines over a week or a month and to see how your competitors are doing at the same time.

Website Performance Optimization Basics

So, for a start, let me walk you through some of the most common issues that you'll encounter when optimizing performance for a given website. It obviously depends on the type of site, its setup, all the dependencies, the system used, the backend, etc. – but you’ll find lots of commonalities if you do a couple of performance audits.

Let’s start with HTTP requests. What happens when you access a website in your browser? Lots of different components are downloaded in the background – images, CSS files, JS and others. The sheer number of them has increased significantly over the years as websites have become more complex. On average, there are around 300 KB of JS code on every single page and up to 6 different CSS files per URL, which is really a lot. The easiest way to fix this is to think about how to reduce the sheer number of these requests by merging, deleting, etc.

You rarely need 6 different CSS files; one, or maybe two, will usually do.

One of the common concepts to apply here is minification, which means shrinking down CSS and JS files, for example by shortening variable names and removing unnecessary whitespace and line breaks.
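
As a rough illustration (the class name is made up), this is what a minifier typically does to a small piece of CSS – the rules stay the same, only the bytes shrink:

```css
/* Before minification: readable, but larger than it needs to be */
.header-navigation {
    background-color: #ffffff;
    margin-top: 0px;
}

/* After minification: the same rule, far fewer bytes */
.header-navigation{background-color:#fff;margin-top:0}
```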

You have to decide: do you really need to request a file while the page is being accessed, or can you do it in the background? Think about asynchronous requests and whether you could use them to benefit your performance. When you request a JS file synchronously, for example, it blocks rendering and stops the browser from painting anything on your screen. You can instead tell the browser to process the JS in the background (what we call asynchronous) and not wait for it to come back. That is the concept Google uses for Analytics: fire the request for the JS, but don’t make the page wait for it.
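
Here is a minimal sketch with a placeholder script URL: the async attribute tells the browser to keep parsing and rendering while the file downloads in the background.

```html
<!-- Blocking: the browser pauses parsing until this file is downloaded and executed -->
<script src="/js/tracking.js"></script>

<!-- Asynchronous: download in the background, execute when ready, don't hold up the render -->
<script async src="/js/tracking.js"></script>
```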

Remember:

  • Firstly, reduce the number of requests and try to make the remaining ones as small as possible.
  • Then think about which ones you can make asynchronous to prevent them from blocking the render.

Images make up roughly 60% of all web traffic. The problem is that images are often not optimized. If you have ever taken an image straight out of Photoshop and put it on the web, you know what I am talking about. These pictures carry unnecessary metadata, are usually not properly compressed and are sometimes even saved in the wrong file type. The general rule for images is to put them on a proper diet. There are good tools like TinyPNG and TinyJPG, where you can optimize your images for free and shave off all that unnecessary file size.

However, JPG, GIF and PNG are not the most modern formats. Years ago Google started WebP, the idea being to build one format that does everything at once. It is very efficient and smaller than the others while keeping all the features of the previously mentioned formats. It generally works well but is still not widely supported; at the moment only Chrome can display WebP. The workaround is to use something like an image CDN, e.g. Cloudinary. Cloudinary creates modern file types like WebP and JPEG XR for you in the background, puts them on their servers, and when someone accesses your site it checks whether the device (in this case Chrome) supports WebP and then dynamically serves the new format. If it is Microsoft’s Edge browser, it serves JPEG XR instead.
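
If you prefer to handle this yourself rather than through an image CDN, here is a sketch with placeholder file names: the standard <picture> element lets the browser pick the first format it supports and fall back to JPEG otherwise.

```html
<picture>
  <!-- Served to browsers that understand WebP (e.g. Chrome) -->
  <source srcset="/img/product.webp" type="image/webp">
  <!-- Fallback for every other browser -->
  <img src="/img/product.jpg" alt="Product photo">
</picture>
```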

Image optimization and, in particular, the modern formats are great and really helpful if you have an image-heavy website, such as photo galleries or product listing pages with lots of images. Image optimization can make a massive difference, and savings of up to 80% are easily achievable.

But make absolutely sure to get the basics right. Investigate what types of images you are using and whether they are compressed properly, but also have a look at the new and modern formats.

More Website Performance Optimization

Another thing we need to talk about is the backend or server side of things – essentially your infrastructure. Performance optimization also heavily depends on which systems you are running, how they are interconnected, etc. For less technical marketers it’s particularly important to understand that optimization should not only be done in the frontend, as covered before, but also in the backend. So you need to get your IT team involved as well and make sure they have performance optimization topics on their radar.

Some questions to get you started and things you need to understand:

  • What webservers are you using, and are they the best fit for what you want to achieve? For example, Apache is totally different from nginx.
  • What do your database and its structure look like, and how well do they handle the queries they have to answer?
  • Are you running a MySQL database and is that using any type of caching?
  • Are you running fully on HTTPS? Then you should enable OCSP stapling.
  • Are you sure that you are leveraging browser caching properly? Have you even considered edge caching?

There are so many different things that you can do that really make a difference. But they mostly depend on the setup, which is always somewhat specific to the individual.

If you have a lot of images and other static files, consider using a CDN or at least a heavily optimized asset server. An asset server should generally be cookie-less and optimized for delivering static files as fast as possible – so nginx could be a good starting point. Generally, a CDN like Cloudflare is a very good way to offload the static files that currently sit on your application server. Serving static files from the application server is usually far slower than serving them from a CDN. This also helps with international visitors, as CDNs distribute content from many different points around the world, so the latency to your users goes down significantly.

This ties in quite nicely with something called Time to First Byte, or TTFB for short. It measures the responsiveness of a web server: the time from the browser sending a request until the first byte of the response comes back. Google says it is not used in search rankings per se, but it is still important. If the latency is very high, nothing happens, and people will simply leave because your site doesn’t respond. There is no chance they will stick around forever – so TTFB clearly does matter.

One really cool free tool is available at keycdn.com. Their global TTFB testing tool checks the performance of your site from 14 different locations around the world at a glance, measuring DNS time, TTFB and handshakes. Looking at average numbers: below 200 milliseconds is good, 500 ms is already too long, and 1 second or more is bad.

Optimizing TTFB is something that you, as an SEO, can’t do yourself. It’s usually infrastructure-related or server-related, so go and talk to the IT team if your numbers are bad.
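
If you want to grab a rough number yourself before that conversation, here is a small sketch using the browser's Navigation Timing API (values are in milliseconds):

```html
<script>
  // Rough TTFB as experienced by this particular browser: the time from
  // sending the request until the first byte of the response arrives.
  window.addEventListener('load', function () {
    var t = performance.timing;
    console.log('TTFB: ' + (t.responseStart - t.requestStart) + ' ms');
  });
</script>
```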

You also have to be sure that caching works well. Make sure that you send proper caching headers and that you are leveraging the fact that, on repeat visits, most images, CSS, JS etc. can be served from the browser’s local cache.

A good approach is to go to WebPageTest.org and see how well and efficiently your caching rules are set up. If any images are not cached for at least a couple of days, this should be changed sooner rather than later.
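
Another quick spot check, sketched here with a placeholder asset path: the fetch API lets you read the caching headers an asset actually comes back with.

```js
// Log the Cache-Control header of a static asset on your own domain.
// Something like "public, max-age=604800" means the browser may reuse the
// file for a week; "no-cache" or a missing header means it gets
// re-requested far more often than necessary.
fetch('/assets/logo.png').then(function (response) {
  console.log(response.headers.get('cache-control'));
});
```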

Another powerful concept for making sites faster is pre-fetching and pre-rendering. This is especially true if you depend on third-party requests or content. Do you request data from a CDN or a subdomain, for example? You could pre-fetch the DNS lookup for that third-party host. In the background, this ensures that the IP address of that host has already been resolved, so when the first request goes out it is faster because the browser does not have to wait for the DNS lookup.

If you go one step further and you are running on HTTPS, you could not only pre-fetch but also pre-connect, which takes care of the certificate handshake and the validation process as well. The idea is to be aware of where the data comes from and to do all kinds of necessary work in the background to make things way faster when you are requesting data for the first time.
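
A minimal sketch with a placeholder host name: dns-prefetch resolves the IP address early, while preconnect additionally opens the connection and completes the TLS handshake.

```html
<!-- Resolve the DNS for a third-party host before it is actually needed -->
<link rel="dns-prefetch" href="//cdn.example.com">

<!-- Go one step further on HTTPS: open the connection and finish the handshake too -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
```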

HTTPS & HTTP/2

Of course, we also need to talk about HTTPS when covering the performance of websites. Not because it directly has anything to do with performance optimization, but because browsers require it in order to serve pages over the new HTTP/2 protocol. Or, the other way around: if you’re on HTTPS there is hardly any reason not to take advantage of HTTP/2.

HTTPS is something that Google has heavily advocated in the past. The latest statistics from the SEMrush Sensor say that 60% or more of the top-3 results for high-volume keyword queries have already moved over to HTTPS.

Obviously, it depends on the industry – finance is certainly different from publishing – but still, HTTPS is something Google has been very vocal about. This went as far as them saying that it is a ranking factor, or at least that they give a tiny boost to pages that have moved over to HTTPS. Google officially stating that something is indeed a ranking factor has only rarely happened in the past.

The other big thing that happened was when Google changed the way Chrome handles non-secure sites. First, Google started flagging form fields on HTTP URLs that collected sensitive data, like personal user data, credit card data, etc. Now Google is changing Chrome’s behavior again: it will start to flag every single HTTP URL as not secure. You really don’t want to spend time explaining to your users why Google thinks your site is not secure – that is clearly going to be a conversion killer. In the worst case, Google might reflect this in its search results as well; or maybe, in the future, only HTTPS results will be shown at all?

The good thing is that HTTPS is relatively easy to implement. The migration work involved in the protocol change is also relatively straightforward from an SEO perspective; if you do it right and follow best practice, you should not expect any significant loss.

A really important point: when you switch to HTTPS, you must also make sure that you roll out HTTP/2 straight away – because, by design, HTTPS is slower than plain HTTP. The main reasons are the certificate validation and the handshake required to establish a secure connection, which happen at the very beginning. This adds a couple of milliseconds by design, and there is nothing you can do about it. Instead, you should brief your IT team properly and make sure they implement HTTP/2 right from the start. For the IT people it really isn’t much work: the server does everything, and there is a default fallback to HTTP/1.1 anyway. So it is mostly a matter of switching it on – and it makes a lot of sense, since the protocol is just so much faster!

A couple of things have changed regarding best practice for performance optimization under HTTP/2. The major difference is that HTTP/2 works with streams rather than single requests. With HTTP/1.1 we requested the different CSS files, JS etc. and they all came back one by one. With HTTP/2 the server opens a stream, and this stream can handle multiple files with different priorities all at once. HTTP/2 also introduces new features like server push. There are a couple of things going on in the background that you have to be aware of to fully benefit from the new protocol.

There are some very old techniques, like CSS sprites, where it is questionable whether they still make sense – definitely don’t waste time building them. You should also get rid of domain sharding (multiple subdomains used to allow more parallel requests at once), as this doesn’t really help with HTTP/2.

Bear in mind that Googlebot still crawls using the old HTTP/1.1 protocol. There are lots of discussions about when that will change, but no release date has been confirmed yet. With the built-in fallback from HTTP/2 to HTTP/1.1, though, it doesn’t really matter.

If you have not yet switched over to HTTPS, make sure you do, and introduce HTTP/2 straight away. If you are already on HTTPS, double-check whether HTTP/2 is running in the background.

Measurements and website performance optimization that go way beyond just the basics

One of the great things that happened in the last year is that we moved from a single metric, like the score from Google’s PageSpeed Insights, to something more complex but also more reflective of how page loading feels to a real user. The idea is to translate the different stages of the user experience into multiple performance metrics.

The Google Chrome team made lots of efforts to introduce metrics that make a difference and reflect a user’s journey.

  • They introduced Time to First Paint, which is the point when the browser starts to render something – the first pixels on the screen.
  • Time to First Contentful Paint marks the point when the browser renders the first bit of content from the DOM, like text, an image etc.
  • The First Meaningful Paint (the hero element) is the paint after which the biggest above-the-fold layout change has happened, and your most important element is visible.

When you go to YouTube, you want to watch a video. The video would be the hero element; you do not really care about how fast the comments are loading or what's up next in the sidebar.

The easiest way to visualize this is in Chrome Developer Tools: open the Performance panel and record a profile. There you can see all the paint timings that were recorded and browse them as standalone screenshots. You can scroll through the timeline and see how your site looks at every single millisecond – really cool.

If you want to scale this up a bit and get real-world data, you can use the PerformanceObserver API in Chrome. It works as an extension to your regular Google Analytics setup: you integrate the Google Analytics snippet as usual and, with a couple of lines of additional JS code, register a PerformanceObserver.

The PerformanceObserver itself keeps track of the previously mentioned paint timings and stores them as custom metrics in your Google Analytics.

You can break it down by URL and see the average paint timing metrics for every single URL. Right now this only works in Chrome, but it still gives a very good perspective on how fast your site is in the real world, based on real users. You can even do it with Google Tag Manager; just keep in mind that the snippet for the PerformanceObserver has to go directly into the markup, because GTM doesn’t support ES6 syntax. After that, you can combine the data with your other reports, too, which gives you a nice overview of things.
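
Here is a minimal sketch of what those couple of additional lines can look like, assuming the classic analytics.js snippet is already on the page (the event category is just an example):

```html
<script>
  if ('PerformanceObserver' in window) {
    // Report First Paint and First Contentful Paint to Google Analytics
    new PerformanceObserver(function (list) {
      list.getEntries().forEach(function (entry) {
        ga('send', 'event', {
          eventCategory: 'Performance Metrics',
          eventAction: entry.name, // 'first-paint' or 'first-contentful-paint'
          eventValue: Math.round(entry.startTime),
          nonInteraction: true
        });
      });
    }).observe({ entryTypes: ['paint'] });
  }
</script>
```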

Recently Google introduced a new metric called First Input Delay (FID), which measures the time from when a user first interacts with your site (i.e. when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to respond to that interaction. We are moving towards measuring things that affect the real world behavior of how users interact with your site.

The tracking for FID works the same way as with the paint timings: you use Google Analytics and extend the snippet accordingly so that the JS can record it.
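
Sketched under the same assumptions (analytics.js already loaded, browser support for the 'first-input' entry type), the recording could look roughly like this:

```html
<script>
  // First Input Delay: the gap between the user's first interaction and the
  // moment the browser was able to start handling it.
  new PerformanceObserver(function (list) {
    var entry = list.getEntries()[0];
    if (entry) {
      ga('send', 'event', {
        eventCategory: 'Performance Metrics',
        eventAction: 'first-input-delay',
        eventValue: Math.round(entry.processingStart - entry.startTime),
        nonInteraction: true
      });
    }
  }).observe({ type: 'first-input', buffered: true });
</script>
```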

All of this is helpful if you want to optimize your critical rendering path – and you absolutely should, because it’s a super powerful strategy. The concept of the critical rendering path (CRP) is that the initial, above-the-fold view is critical, while the below-the-fold content is not. The above- and below-the-fold areas obviously vary in size across devices and resolutions; the CRP on desktop is way bigger than on a mobile phone.

So, let me briefly walk you through how to optimize your CRP and make it super-fast:

Simply speaking, when the browser requests a website, it needs to build a CSS Object Model (CSSOM), a "map" of stylesheet information that gets attached to the elements and tags found in the markup of the webpage.

From a rendering perspective, the browser first takes the HTML and builds the DOM, then takes the CSS and builds the previously mentioned CSS Object Model. The next step is to combine the DOM and CSSOM into the so-called render tree. Only then can the browser display the web page. If external CSS files are involved, the browser needs to wait until they have been downloaded – which can take some time, depending on how well they are optimized.

If you want to speed up this process, you can use a free tool on GitHub called Critical. It renders your site at different resolutions – you could take the top-5 resolutions from your Analytics – and then you build two CSS files: one that holds the CSS required for the critical view, and another for everything below the fold.

From an implementation point of view, you inline the critical CSS directly into the markup, and you load the non-critical CSS asynchronously with a rel="preload" directive, which prevents it from blocking the browser, and apply it once it has finished loading.

You can do this because the non-critical CSS is not required when the site starts to render; it is only needed once people start to scroll down, so it can be applied after loading has finished. This is by far the fastest way to optimize the critical rendering path without inlining all of the CSS, which would make it very hard to maintain.
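
Putting that together, here is a sketch with placeholder file names: the critical rules are inlined in the <head>, the rest is preloaded and switched on once it arrives.

```html
<head>
  <style>
    /* Critical, above-the-fold CSS, e.g. generated by a tool like Critical */
    body { margin: 0; font-family: sans-serif; }
  </style>

  <!-- Non-critical CSS: preloaded without blocking, applied once loaded -->
  <link rel="preload" href="/css/below-the-fold.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/below-the-fold.css"></noscript>
</head>
```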

