When it comes to long JavaScript tasks, how long is too long?
The general consensus within the web performance community is that any JavaScript task that takes more than 50ms to execute can affect a user's experience. When the browser's main thread is blocked for more than 50ms, a user starts to notice that their clicks are delayed and that scrolling the page has become janky and unresponsive. Batteries drain faster. People rage click or go elsewhere.
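If you want to see these long tasks for yourself, a minimal sketch along these lines (assuming a browser that supports the Long Tasks API) will log each one to the console; the API only reports tasks of 50ms or more:

```js
// A minimal sketch: in browsers that support the Long Tasks API, log every
// main-thread task that crossed the 50ms threshold.
if ('PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Long task entries are only reported for tasks of 50ms or more.
      console.log(`Long task: ${Math.round(entry.duration)}ms`, entry.attribution);
    }
  }).observe({ entryTypes: ['longtask'] });
}
```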
No one plans to make a page or web app that sucks the life out of their users' devices, so it's super important to monitor the effect your JS is having. (Yes... I'm looking at you, front-end JS libraries and third-party ads!)
We've recently added a bunch of new JavaScript CPU metrics that help you understand if your scripts are blocking the main thread and getting in the way of a super smooth experience for your users:
Getting visibility into the impact that known third parties have on the user experience has long been a focus in our community. There are some great tools out there – like 3rdParty.io from Nic Jansma and Request Map from Simon Hearne – which give us important insight into the complexity involved in tracking third-party content.
When we released our re-imagined Third Party Dashboard last year, we were excited to be providing site owners with another great tool for managing the unmanageable. Among other things, we took an approach that included:
This provided even more insight into the different ways JavaScript could be causing real headaches for users.
We received a lot of feedback from our customers, who loved the new third-party functionality but REALLY wanted to see similar functionality for their "first party" content as well. We heard this message loud and clear, and today we're happy to announce a few changes to our Synthetic monitoring tool that address this need while preserving the functionality you already know and love.
A while back, our friends at Shopify published this great case study, showing how they optimized one of their newer themes from the ground up – and how they worked to keep it fast. Inspired by that post, I wanted to dig a bit deeper into a few of the best practices they mentioned, which fall loosely into these three buckets:
Keep reading to learn how you can apply these best practices to your own site and give your pages a speed boost.
Ten years ago, the network was the biggest problem when it came to making websites fast. Today, CPU is the main concern. That's because networks got faster while JavaScript moved in the other direction, growing 3x in size over the last six years. This growth matters because JavaScript consumes more CPU than all other browser activities combined. While the CPU is busy running JavaScript (or anything else), the browser can't respond to user input, creating the sensation of a slow, jittery, or broken page, AKA "jank".
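To make that concrete, here's a deliberately bad, hypothetical example: a single synchronous loop that pegs the main thread. While it runs, nothing else on the page can happen:

```js
// A hypothetical long task: a synchronous loop that pegs the main thread.
// While it runs, clicks, scrolls, and paints all have to wait.
function blockMainThread(ms) {
  const start = performance.now();
  while (performance.now() - start < ms) {
    // Busy-wait: the CPU is maxed out and the page is unresponsive ("janky").
  }
}

// Anything over ~50ms is long enough for users to notice the delay.
blockMainThread(200);
```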
To help focus our attention on CPU, several new performance metrics have been defined and evangelized over the last year or three. In this post I'm going to focus on these:
Here's a figure to help visualize these metrics.
As organizations work to improve performance for users around the world on slower networks and devices, the focus on JavaScript continues to grow. LUX's new JavaScript dashboards help to identify the problems and solutions for creating a fast, joyous user experience.
LUX is SpeedCurve's real user monitoring product. We launched it two years ago with four dashboards: Live, Users, Performance, and Design. Today we've added two more LUX dashboards: JavaScript and JS Errors. These new dashboards let you see the impact JavaScript has on your site and on your users with new metrics, including First CPU Idle and First Input Delay, and new features, such as correlation charts that show you how CPU time correlates with bounce rate.
Loading scripts asynchronously is critical for getting pages to render more quickly. We care about rendering because that's what users see; if rendering is slow, users have a negative experience. But it's not just about what users see: how the site feels is also important. That's why we focus so much on CPU time. If the CPU is blocked, then browsers are delayed responding to user interactions like scrolling and clicking on links. In other words, the page feels janky. And what consumes the most CPU in browsers? You guessed it: JavaScript!
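As a rough sketch, one common pattern is to inject non-critical scripts dynamically, which makes them async by default; the function name and URL here are placeholders, and the `async` or `defer` attributes on a regular script tag achieve much the same thing:

```js
// A rough sketch: dynamically injected scripts are async by default, so they
// don't block HTML parsing or rendering. The function name and URL below are
// placeholders, not part of any real page.
function loadScriptAsync(src) {
  const script = document.createElement('script');
  script.src = src;
  script.async = true; // explicit, though injected scripts are async by default
  document.head.appendChild(script);
}

loadScriptAsync('/js/analytics.js'); // e.g. a third-party tag or non-critical bundle
```

If a script needs the DOM or has to run in order with other scripts, `defer` is usually the better fit, since deferred scripts execute in document order after parsing finishes.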
Over the winter holiday we added a bunch of new metrics to LUX:
We're excited to announce the availability of the First Input Delay metric as part of LUX, SpeedCurve's RUM product.
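LUX gathers First Input Delay for you, but if you're curious what the raw measurement looks like, a minimal sketch using the Event Timing API's `first-input` entry (in browsers that support it) is roughly:

```js
// A minimal sketch of the raw measurement, using the Event Timing API's
// 'first-input' entry in browsers that support it.
new PerformanceObserver((list) => {
  const [entry] = list.getEntries();
  if (entry) {
    // Time between the user's first interaction and the moment the browser
    // was free to start running the event handler.
    const fid = entry.processingStart - entry.startTime;
    console.log(`First Input Delay: ${Math.round(fid)}ms`);
  }
}).observe({ type: 'first-input', buffered: true });
```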
It's exciting to work at SpeedCurve and push the envelope on performance monitoring to better measure the user's experience. We believe that when it comes to web performance, it's important to measure what the user sees and experiences when they interact with your site. A big part of our focus on metrics has been around rendering, including comparing TTI to FMP, Hero Rendering, and critical blocking resources.
The main bottleneck when it comes to rendering is the browser main thread getting blocked. This is why we launched CPU charts for synthetic testing over a year ago. Back then it wasn't possible to gather CPU information using real user monitoring (RUM), but the Long Tasks API changes that. Starting today, you can track how CPU impacts your users with SpeedCurve's RUM product.
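As a simplified illustration (not LUX's actual implementation, and the beacon endpoint is a placeholder), the Long Tasks API lets you accumulate how long the main thread was blocked and report it alongside the rest of your RUM data:

```js
// A simplified illustration (not LUX's actual implementation): accumulate how
// long the main thread was blocked by long tasks, then beacon it at the end of
// the page's life. '/rum-beacon' is a placeholder endpoint.
let blockedMs = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    blockedMs += entry.duration;
  }
}).observe({ entryTypes: ['longtask'] });

addEventListener('pagehide', () => {
  navigator.sendBeacon('/rum-beacon', JSON.stringify({ cpuBlockedMs: Math.round(blockedMs) }));
});
```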
Often when monitoring and debugging site performance, we focus on network activity and individual resources, but what about the CPU? As more and more sites switch to large JavaScript frameworks and manipulate the page using JavaScript, the time this code takes to execute and the CPU available to run it can become the performance bottleneck instead.
For any of SpeedCurve's Chrome-based tests, including emulated devices, we capture the Chrome DevTools timeline. From the browser main thread usage in the timeline, we extract how busy the CPU is and what it's spending time on. Is it busy executing JavaScript functions, or busy laying out elements and painting pixels?
We also measure CPU usage up to different key events in the rendering of the page. SpeedCurve's focus is on the user experience and getting content in front of people as fast as possible, so we show you what the CPU is doing up until the page starts to render. This reflects CPU usage during the browser's critical rendering path and can highlight various issues. If there's lots of CPU idle time, then you're not delivering your resources efficiently. You want to get the CPU busy nice and early rendering the page, rather than sitting idle waiting for slow resources.
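Our synthetic pie charts come from the Chrome DevTools timeline, but as a rough sketch you can approximate the same idea in the browser by comparing long-task time before First Contentful Paint with the total time it took to start rendering:

```js
// A rough approximation of the same idea in the browser: how much of the time
// before First Contentful Paint was the main thread busy with long tasks, and
// how much was idle? Register the observer as early as possible so you don't
// miss tasks.
const longTasks = [];
new PerformanceObserver((list) => longTasks.push(...list.getEntries()))
  .observe({ entryTypes: ['longtask'] });

addEventListener('load', () => {
  const fcp = performance.getEntriesByName('first-contentful-paint')[0];
  if (!fcp) return;
  const busy = longTasks
    .filter((task) => task.startTime < fcp.startTime)
    .reduce((total, task) => total + task.duration, 0);
  console.log(
    `Render started at ${Math.round(fcp.startTime)}ms: ` +
    `~${Math.round(busy)}ms blocked by long tasks, the rest idle or in shorter tasks`
  );
});
```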
In the test below, the first pie chart shows that the CPU is spending a lot of time on layout up to the Start Render event, which is quite a different picture from the Fully Loaded CPU usage.