
18.04.2022

Increasing Performance in React/Next.js apps for Google Lighthouse


Performance matters for every website. Fast responses, optimized assets, and quick loading are things users take for granted. Your designers can do incredible work and make your website sweet as candy, but if it loads slowly, believe me, users will notice that first and leave with an unpleasant aftertaste.

Lighthouse Performance

Google is well aware of users' preferences, so performance is one of the most important signals it uses to decide which search results page your site appears on. To measure it, Google has its own Lighthouse tool. At the moment the current version is v8, although some Google documentation already mentions the ninth version. Lighthouse scores several categories – performance, accessibility, SEO, best practices, PWA – but performance is where the most contentious issues arise.

Performance itself is determined by several sub-metrics: TBT (Total Blocking Time), LCP (Largest Contentful Paint), CLS (Cumulative Layout Shift), TTI (Time to Interactive), FCP (First Contentful Paint), and Speed Index. I'm listing them in order of impact on the score. We will barely touch on Speed Index, since it is a difficult metric for a programmer to understand and influence.

Lighthouse has two testing modes: desktop and mobile. Performance scores will always be lower on mobile. Let's be honest, we search for information on our phones more and more often, so you can't just ignore that fact. Your application should run fast enough on any device.

There is another metric we have not mentioned yet, but it is extremely important: Time To First Byte (TTFB). TTFB measures the time between an HTTP request and the first byte of the server's response. Back-end specialists have more opportunities to influence it. However, front-enders can at least avoid requesting unnecessary data, which is especially relevant if you are using GraphQL.
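As a sketch of what "avoid requesting unnecessary data" means in GraphQL – the schema and field names below are entirely hypothetical:

```javascript
// Hypothetical product query: the page only renders a title and a price,
// yet this query also pulls descriptions, reviews, and related items.
const overFetchingQuery = `
  query Products {
    products {
      id
      title
      price
      description
      reviews { author text rating }
      relatedProducts { id title price }
    }
  }
`;

// Trimmed query: the server does less work and the response is smaller,
// so both TTFB and parse time go down.
const trimmedQuery = `
  query Products {
    products {
      id
      title
      price
    }
  }
`;

// Either string can be sent with a plain fetch:
// fetch('/api/graphql', { method: 'POST', body: JSON.stringify({ query: trimmedQuery }) })
```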

TOTAL BLOCKING TIME <300ms for mobile devices

This is the time between the first rendering of site content and the moment your application becomes fully interactive. And, no… your application does not become interactive as soon as you see its first elements. There is a short period when the main thread is blocked and does not respond to any user input. How long it stays blocked is entirely up to you. Each time the main thread encounters a task that takes over 50 milliseconds, that task is considered long and the thread is considered blocked. Fortunately, TBT only counts the time beyond 50 ms. So if you have a long task that takes 53 ms, it contributes only 3 ms to TBT. But if there are several tasks like this, the extra execution time adds up.
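The arithmetic from the paragraph above can be sketched as a small helper (the 50 ms threshold is the one Lighthouse uses):

```javascript
// Total Blocking Time: only the portion of each task beyond 50 ms counts.
const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .map((d) => Math.max(0, d - LONG_TASK_THRESHOLD_MS))
    .reduce((sum, blocking) => sum + blocking, 0);
}

// A single 53 ms task contributes only 3 ms...
console.log(totalBlockingTime([53])); // 3
// ...and several long tasks add up: (70-50) + (53-50) + 0 = 23 ms.
console.log(totalBlockingTime([70, 53, 40])); // 23
```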

TIME TO INTERACTIVE <5s

This is the time between the first render of content and the start of the so-called "quiet window" – 5 seconds during which there are no long tasks to execute. Server rendering can make a page look interactive while it actually isn't, because the main thread is blocked: you see a button, but nothing happens when you click it. This infuriates users, and Google accordingly marks your site as slow. That can lead to unexpected drops in ranking, so don't be surprised if you don't end up on the first results page.

How to optimize TTI and TBT

As you can see, these indicators are interconnected. Therefore, you can optimize the application for both at once. 

  • Reduce the impact of libraries that are loaded and executed during that time – Google Analytics, for example. People often forget to remove it when the business no longer needs it.
  • All blocks that are not top priority for the user and are not on the first screen can be lazy-loaded and imported dynamically.
  • Minimize main-thread work. If you have a lot of different animations, you should not spread them across a large number of layers. Add the will-change CSS property to the elements you want to animate: the browser will know in advance that they will be animated and will find them more easily, which speeds up loading. Don't forget to debounce your event handlers. Changing element styles inside event handlers also loads the main thread very heavily – try not to do it.
  • Decrease JavaScript execution time so that it takes less than 2 seconds. This is easy to do by avoiding unnecessary re-renders. Keep your event handlers as simple as possible: for example, instead of one handler with an if-else inside, use separate handlers for the different conditions. Split your code and try to shrink the bundle size.
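As a minimal sketch of the debounce advice above – a hand-rolled helper, though in a real project you might take debounce from lodash:

```javascript
// Debounce: collapse a burst of calls into a single call that runs
// only after `delayMs` of silence, keeping work off the main thread.
function debounce(fn, delayMs) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId);
    timerId = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Usage sketch: a handler that would otherwise fire on every event.
let calls = 0;
const onResize = debounce(() => { calls += 1; }, 200);
onResize();
onResize();
onResize();
// The wrapped function has not run yet – it fires once, 200 ms after the last call.
console.log(calls); // 0
```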

First Contentful Paint <3s

The first thing a user sees on a page while it loads. According to Lighthouse, this can be any image, a non-white <canvas>, or SVG elements on the page; importantly, anything inside an iframe does not count. It also includes text, and therefore fonts, so make sure your text gets displayed: add the property font-display: swap; to your @font-face. This is the fastest way to resolve the issue.
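A minimal @font-face sketch (the font name and file path are placeholders):

```css
/* font-display: swap shows text immediately in a fallback font,
   then swaps in the web font once it has loaded. */
@font-face {
  font-family: "MyFont";
  src: url("/fonts/my-font.woff2") format("woff2");
  font-display: swap;
}
```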

Largest Contentful Paint <2.5s

This is the largest element in the viewport. Usually, people are used to thinking that this is an image because often the biggest element on the first screen is the cover image or image slider, but this is not always the case. For example, here on our site, this is the h1 heading.

LCP can be:

  • <img> elements, including the Next.js <Image>
  • A block element with text. Text means not only characters but also any inline elements, for example emojis.
  • <image> elements inside an SVG – they count if they are large enough; it is better not to do this at all, but it is an option.
  • A CSS background-image specified via url("...")

 

How to improve the LCP performance

  • Preload the main image
  • Optimize all your images (TinyPNG, SVGOMG, etc.)
  • Optimize fonts
  • Optimize CSS
  • Optimize Javascript on the client. 
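Preloading the hero image can be sketched with a plain <link> tag in the document head (the image path is a placeholder):

```html
<!-- Tells the browser to start fetching the LCP image immediately,
     before it discovers the <img> tag during parsing. -->
<link rel="preload" as="image" href="/images/hero.webp" />
```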

Cumulative Layout Shift <0.25

Have you ever visited a site, gone to click a button or a product, and suddenly another element appeared at the top – so you ended up clicking a completely different block, or even an advertising banner? Users do not like this, and accordingly, neither does Google. Layout jumps like this increase the CLS value, which is bad for your performance score.

How to avoid and reduce the CLS:

  • Reserve space and set a fixed height for blocks that will be loaded dynamically. Or use a skeleton to improve your UX even more.
  • Detect which device the user came from so you can immediately show them the right blocks at the right size. You can do this natively with cookies, or use ready-made libraries.
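Reserving space for a lazily loaded block can be as simple as a fixed-ratio box (the class name is made up):

```css
/* The container keeps its final dimensions even while the content
   (an image, an ad slot, a widget) is still loading, so nothing shifts. */
.banner-slot {
  width: 100%;
  aspect-ratio: 16 / 9; /* or an explicit min-height: 250px; */
}
```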

Let’s hack it?

Documentation to help

The easiest way is to read the Next.js documentation carefully – its developers update it periodically. There you can find out that:

  • By adding the priority property to the Next.js <Image/> you can improve the LCP value
  • All links in Next.js have prefetch={true} by default, so Next tries to preload data for the linked pages. That takes time and resources. Set prefetch={false} at least on links that users rarely click, such as the privacy policy.
  • You can change the script loading behavior of the <Script/> component yourself using the strategy property. It can take the values beforeInteractive (bot detectors, cookie consent), afterInteractive (tag managers, Google Analytics), and lazyOnload (social media widgets, support chats).
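The three tips above look roughly like this in a page component (the src values and URLs are placeholders):

```jsx
import Image from "next/image";
import Link from "next/link";
import Script from "next/script";

export default function Home() {
  return (
    <>
      {/* priority preloads the hero image, helping LCP */}
      <Image src="/hero.webp" alt="Hero" width={1200} height={600} priority />

      {/* rarely clicked link: skip prefetching its page data */}
      <Link href="/privacy-policy" prefetch={false}>
        Privacy policy
      </Link>

      {/* third-party widget: load only after everything else */}
      <Script src="https://example.com/chat-widget.js" strategy="lazyOnload" />
    </>
  );
}
```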

 

Bundle master

To make it easier to keep track of the bundle size, there are several tools:

  • Import Cost is a VS Code extension that shows how much each imported module weighs.
  • Package Phobia / Bundlephobia are services that show how big a library is, which dependencies it pulls in, and which lighter alternatives you can use.
  • Webpack Bundle Analyzer is a webpack plugin that visualizes your bundle, showing its structure. A tutorial on how to use it can be found here.
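In a Next.js project, Webpack Bundle Analyzer is usually wired in through the @next/bundle-analyzer wrapper; a typical next.config.js sketch (your existing options go inside):

```javascript
// next.config.js
const withBundleAnalyzer = require("@next/bundle-analyzer")({
  enabled: process.env.ANALYZE === "true",
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config
});
```

Then run the build with the flag set to open the report: `ANALYZE=true next build`.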

 

Shallow Routing

Next.js has an interesting feature called shallow routing. It is one of the options of router.push and router.replace; the default is false. When set to true, the whole page is not reloaded when the URL changes: getServerSideProps does not run and no new data is fetched. Be careful – the rerender may not be noticeable in local development, but it will definitely show up in production. If you're using filters based on URL query parameters, it's better to watch for URL changes and reload data in useEffect only for the block that needs it.
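A sketch of that pattern – the route, query parameter, and API endpoint are made up:

```jsx
import { useRouter } from "next/router";
import { useEffect, useState } from "react";

export default function Catalog() {
  const router = useRouter();
  const [products, setProducts] = useState([]);

  // Update the URL without re-running getServerSideProps.
  const applyFilter = (color) =>
    router.push(`/catalog?color=${color}`, undefined, { shallow: true });

  // Refetch only the filtered block when the query changes.
  useEffect(() => {
    if (!router.isReady) return;
    fetch(`/api/products?color=${router.query.color ?? "all"}`)
      .then((res) => res.json())
      .then(setProducts);
  }, [router.isReady, router.query.color]);

  return <button onClick={() => applyFilter("red")}>Red only</button>;
}
```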

 

SWR – what is and where

Use SWR to load data on the client instead of a bare useEffect. It was created by the team behind Next.js. It automatically caches and revalidates data, and can also re-request data from the server at regular intervals.
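A minimal useSWR sketch (the endpoint and response shape are placeholders):

```jsx
import useSWR from "swr";

const fetcher = (url) => fetch(url).then((res) => res.json());

function Prices() {
  // Cached, revalidated automatically, and re-fetched every 30 seconds.
  const { data, error } = useSWR("/api/prices", fetcher, {
    refreshInterval: 30000,
  });

  if (error) return <span>Failed to load</span>;
  if (!data) return <span>Loading…</span>;
  return <span>{data.total}</span>;
}
```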

Mobile Detect

There are times when you need to render different components depending on the window width. If you do this with conditional rendering and hooks like useWindowSize rather than CSS, the first paint may differ from the final result. In that case CLS will be huge, and the user will see a very jumpy page at first. This can be avoided if you determine the user's device – and therefore the expected sizes – on the Next.js server. The mobile-detect package helps a lot here and can bring CLS down to zero.
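A sketch of server-side detection with mobile-detect; MobileSlider and DesktopGrid are hypothetical components:

```jsx
import MobileDetect from "mobile-detect";

export async function getServerSideProps({ req }) {
  const md = new MobileDetect(req.headers["user-agent"]);
  return { props: { isMobile: Boolean(md.mobile()) } };
}

// The correct layout is rendered on the server, so the first paint
// already matches the device and nothing shifts afterwards.
export default function Page({ isMobile }) {
  return isMobile ? <MobileSlider /> : <DesktopGrid />;
}
```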

The most cheating and effective hack

To significantly improve the FCP and LCP values, and with them the rest of the metrics, you need to replace the NextScript import in the custom document _document.js/.tsx. Replace it with this file, which was posted by the incredible David Zhao. This hack saved me when I had been struggling with huge FCP and LCP values for days, and it added about 30 points to my Lighthouse performance score.

But we will not mindlessly copy-paste, so let me briefly explain how it works. Remember how we usually load scripts in plain HTML, CSS, and JS? We add them either in the <head>, or at the end of the <body>. In the latter case we already have some kind of layout and the elements are mostly displayed, while interactivity arrives with the JS. So we can afford to load scripts asynchronously after the layout.

However, when we use frameworks like Next.js, JS creates the entire page, which means a script will always be in the head. That script in turn loads the scripts of your libraries and of Next.js itself. We saw above that the loading of some scripts can be changed with the Script component, but this will not work for internal scripts and node modules.

By default, NextScript adds scripts to the page with the async attribute. With async, the script is downloaded in parallel while the HTML is parsed, but as soon as the download finishes, parsing stops and waits until the script has executed.

If we use the defer attribute instead, HTML is parsed first while the scripts download in parallel, and they execute only after parsing is complete – we never have to pause parsing for them. Since most of the scripts are really not needed for the first render, this makes page loading much faster.
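The difference can be sketched in plain HTML (the script names are placeholders):

```html
<!-- async: downloads in parallel, but executes as soon as it arrives,
     pausing the HTML parser at an unpredictable moment. -->
<script async src="/analytics.js"></script>

<!-- defer: downloads in parallel and executes only after parsing
     is done, in document order – parsing is never blocked. -->
<script defer src="/app.js"></script>
```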