
Best Practices for Optimizing JavaScript

Byte for byte, no resource affects page speed more than JavaScript. JavaScript affects network performance, CPU processing time, memory usage, and overall user experience. Inefficient scripts can slow down your website, making it less responsive and more frustrating for your users.

Image: Freepik

To ensure your website runs smoothly, it is crucial to optimize your JavaScript. This guide will walk you through essential techniques for reducing the negative impact of JavaScript on page performance, focusing on two areas: the initial load, and the cost of JavaScript execution itself.

Reducing loading impact

JavaScript is, by default, parser blocking. That means that when the browser finds a JavaScript resource, it needs to stop parsing the HTML until it has downloaded, parsed, compiled, and executed that JavaScript. Only after all of that is done can it continue to look through the rest of the HTML and start to request other resources and get on its way to displaying the page.

This means that JavaScript creates a massive bottleneck in your initial page load performance. There are a few things we can do to help minimize that.

Wherever possible, don’t use JavaScript

The single best thing you can do with JavaScript: avoid using it whenever possible.

As the web gets more and more powerful, features like CSS animations, HTML attributes for lazy loading and more make a lot of legacy JavaScript solutions unnecessary.
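For example, lazy loading images once required a scroll-listener or IntersectionObserver script; today a single HTML attribute does the job (a minimal sketch; the image path and dimensions are illustrative):

```html
<!-- No JavaScript needed: the browser defers the request
     until the image approaches the viewport -->
<img src="hero.jpg" loading="lazy" alt="Product photo" width="600" height="400">
```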

While they initially resisted it, we’ve also seen major frameworks come around to the idea that generating markup on the server is a much better approach for performance than relying on client-side JavaScript to generate everything.

Reducing your dependence on JavaScript for the load of your page not only reduces the amount of JavaScript the browser must download, parse, compile, and execute, but it also lets the browser take advantage of its own internal optimizations to get maximum performance.

Make sure JavaScript is minified and compressed

To keep the network cost of your JavaScript down, make sure that all JavaScript has been properly minified and compressed.

Minifying JavaScript involves removing all unnecessary characters (white space, comments, etc.) from the code without changing its actual functionality, and it can, and should, be done by an automated build tool.
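As an illustration, here is what a small function might look like before and after minification. The minified form is representative of what a tool like Terser produces; exact output varies by tool and settings.

```javascript
// Original, readable source
function calculateTotal(price, taxRate) {
    // Apply tax to the base price
    return price + price * taxRate;
}

// Roughly what a minifier emits: whitespace and comments stripped,
// identifiers shortened, behavior unchanged
function c(t,a){return t+t*a}
```

Both versions produce identical results; only the bytes sent over the network change.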

Applying proper compression to your already minified files provides even greater reduction in file size and network costs. There are two primary compression methods:

  • Gzip – Has been around for a while and is the most widely supported compression method.
  • Brotli – A newer compression algorithm that can provide even greater gains than Gzip. It is now supported by all major browsers and can be enabled on most modern web servers.

Compression is one of those things that can be applied at the server or content delivery network (CDN) level. It should be enabled on all text-based resources, not just JavaScript.
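On nginx, for example, enabling compression is a few directives. This is a sketch: the `brotli` directives assume the separate ngx_brotli module is installed, and the MIME-type list is illustrative rather than exhaustive.

```nginx
# Assumes the ngx_brotli module is installed for the brotli_* directives
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;

brotli on;
brotli_types text/css application/javascript application/json image/svg+xml;
```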

Make JavaScript non-blocking with async and defer

As mentioned earlier, by default JavaScript is parser-blocking – not only does it block the display of the page, it even blocks the browser from parsing the HTML at all.

To change this default behavior, we can use the async and defer attributes. These attributes allow the browser to continue parsing the HTML while the JavaScript file is being fetched, improving page load performance.

async Attribute

When a script is loaded with the async attribute, it is downloaded in parallel with the HTML parsing. Once the script is downloaded, it is executed immediately. This means it is still possible for the execution of the script to block HTML parsing if the script arrives quickly enough.

<script src="script.js" async></script>

defer Attribute

When a script is loaded with the defer attribute, not only is it downloaded in parallel with HTML parsing, but the actual execution of that script is paused until after the HTML has been completely parsed. This guarantees that the DOM is fully built before the script runs, allowing the browser to quickly display the page.

<script src="script.js" defer></script>

Improving JavaScript execution

Getting JavaScript out of the way of the initial page load is a great start, but we also need to focus on ensuring the actual execution of that JavaScript is as quick as possible.

The new Interaction to Next Paint metric is helping us to see just how costly that slow JavaScript can be. While JavaScript is executing, it is blocking the main thread of the browser, making it impossible for the browser to respond to any user interaction until that execution is complete.

We do have a few key things to keep in mind to help minimize this impact.

Avoid layout thrashing

Every time there is a change to the DOM or any CSS properties, the browser must re-evaluate the layout and visual styling of the page. This process involves two main actions:

  • Reflow – A reflow occurs when something related to the layout of the page (height, width, positioning, etc) changes, causing the browser to determine the new positions and sizes of elements.
  • Repaint – A repaint occurs when something changes that affects the appearance (colors, visibility, etc) but not the actual layout of the page, causing the browser to redraw any affected pixels.

Layout thrashing occurs when JavaScript repeatedly reads from and writes to the DOM, causing a series of reflows and repaints. This can be incredibly costly from a performance perspective.

There are a few things we should try to do to avoid layout thrashing.

Batch DOM read and write operations

Instead of alternating between reading and writing to the DOM, group your DOM read operations together, then follow with your DOM write operations.

In the example below, we’re alternating between writing to the DOM and reading from it, all within a loop, causing multiple reflows and repaints for each item.

// Bad example
let items = document.querySelectorAll('.item');
items.forEach(item => {
    item.style.margin = '10px'; // Write
    let width = item.offsetWidth; // Read
    item.style.padding = '5px'; // Write
});

An improved version might look like the example below. While we’re looping through the items twice, we’re avoiding multiple reflows and repaints, making the process much cheaper.

let items = document.querySelectorAll('.item');
let widths = [];

// Batch read operations
items.forEach(item => {
    widths.push(item.offsetWidth); // Read
});

// Batch write operations
items.forEach((item, index) => {
    item.style.margin = '10px'; // Write
    item.style.padding = '5px'; // Write
    // Use the previously read width if needed
    let newWidth = widths[index] + 20; // Example calculation
    item.style.width = newWidth + 'px'; // Write
});

Minimize the use of layout-triggering properties

Certain properties and methods—like offsetTop, scrollX, getComputedStyle(), and more—require the browser to recalculate the style and layout to make sure it’s returning an accurate value.

Wherever possible, we want to avoid using these properties or batch them together as we discussed above.

You can find a list of layout triggering properties in Paul Irish’s gist.

Use document fragments

When making multiple updates to the DOM, using a document fragment can significantly improve performance. A document fragment is a lightweight container that allows you to perform DOM manipulations off-screen. Once all the changes are made, you can append the fragment to the DOM in a single operation, minimizing reflows and repaints.

In the example below, we’re appending new elements directly to the DOM within a loop, causing multiple reflows and repaints:

// Bad example
let list = document.getElementById('list');
for (let i = 0; i < 100; i++) {
    let newItem = document.createElement('li');
    newItem.textContent = `Item ${i}`;
    list.appendChild(newItem); // Causes reflow each time
}

The improved example below uses document fragments to first create the new elements within a single fragment, and then append that fragment to the DOM all at once, causing just a single reflow.

let list = document.getElementById('list');
let fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
    let newItem = document.createElement('li');
    newItem.textContent = `Item ${i}`;
    fragment.appendChild(newItem); // No reflow yet
}
list.appendChild(fragment); // Single reflow

Yield to the main thread

Long-running JavaScript results in long tasks, which block the main thread from responding to user interaction and other critical work.

If we have JavaScript chunks that are long-running, we need to try to break those up so that the browser has room to breathe and respond to any pending interactions.

The scheduler.yield method can be used to “yield” control back to the browser, letting it run any important tasks that may be stacked up (page rendering, user input, etc).

scheduler.yield is very new, so support is pretty limited, which means we want to provide a fallback. In cases where scheduler.yield is not supported, we can fall back to a setTimeout.

function breakItUp () {
  if ('scheduler' in window && 'yield' in scheduler) {
    return scheduler.yield();
  }

  // Fall back to setTimeout:
  return new Promise(resolve => {
    setTimeout(resolve, 0);
  });
}

The helper method above can be used to break up long running tasks, improving our interaction times.

async function superLongTask() {
  // do some stuff
  await breakItUp();
  // do more stuff
}
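A more concrete use is yielding periodically inside a loop over a large dataset. The sketch below inlines the same fallback logic as the helper above so it runs standalone; `handleItem` and the batch size of 50 are illustrative choices, not fixed rules.

```javascript
// Same fallback idea as breakItUp() above, inlined so this runs standalone
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && 'yield' in scheduler) {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Process a large list without holding the main thread for long:
// yield after every batch so pending input and rendering can run
async function processItems(items, handleItem, batchSize = 50) {
  for (let i = 0; i < items.length; i++) {
    handleItem(items[i]);
    if ((i + 1) % batchSize === 0) {
      await yieldToMain();
    }
  }
}
```

Each `await` gives the browser a chance to paint and handle input between batches, keeping interactions responsive even while the full list is processed.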
