Here are some common questions I’m asked when I talk with people about performance:
Today, I’m very excited to announce the release of a new project that helps answer those questions – and more!
Page Speed Benchmarks is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries – from retail to media.
With Page Speed Benchmarks, you can do things like:
If you're already a fan of tools like the HTTP Archive, I think you'll love how you can use Page Speed Benchmarks to complement the insights you're already getting. Keep reading to find out how we set up these benchmarks, and how you can mine our test data – even if you're not a SpeedCurve user – for your own performance research.
For the past two years, the performance.now() conference has been the most valuable performance event I've attended. So valuable, in fact, that I've made some of the talks the cornerstone of this list of performance resolutions for 2020. I'd love to know how many – if any – of these are on your list. As always, your feedback is very welcome!
I confess, I’m not a statistician. While I pride myself on the 'A' I received in my college statistics class, admittedly it was graded on a pretty steep curve. That said, I’ve been looking at performance data for many years, and I’ve found myself on both sides of the debate about whether sampling performance data is inherently a good or a bad idea.
When it comes to real user monitoring (RUM), I’m convinced that the benefit of collecting ALL THE THINGS by default doesn’t always justify the marginal cost of collection, computation, storage, and so on.
Like any experiment, how you sample RUM data – as well as how much data you sample – depends on the questions you're trying to answer. While certainly not an exhaustive list, here are some questions you might ask when considering a sampled approach to real user monitoring...
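To make the idea concrete, here's a minimal sketch of what client-side sampling might look like. The `/rum-collect` endpoint, the payload shape, and the 10% rate are all illustrative assumptions, not how any particular RUM product actually works:

```typescript
// Hypothetical sketch: sample roughly 10% of page views on the client.
// The endpoint, payload shape, and rate are illustrative assumptions.
const SAMPLE_RATE = 0.1;

function maybeSendRumBeacon(): void {
  // Decide once per page view whether this view is in the sample.
  if (Math.random() >= SAMPLE_RATE) return;

  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  navigator.sendBeacon(
    "/rum-collect", // placeholder endpoint
    JSON.stringify({
      url: location.href,
      ttfb: nav.responseStart, // time to first byte
      domComplete: nav.domComplete,
      sampleRate: SAMPLE_RATE, // record the rate so counts can be re-weighted later
    })
  );
}

addEventListener("load", maybeSendRumBeacon);
```

Recording the sample rate alongside each beacon is what lets you scale counts back up to population estimates when you analyze the data later.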
I’ve joined SpeedCurve! I’m thrilled to share this news and have never been more excited about a career change than I am today. I’ve known this cast of characters for a while and am humbled that they have brought me onto the team. As Tammy put it when she joined, if this crew invited you to work with them, “what would you say?”
Tammy, Steve and Cliff at Velocity Conference circa 2015
As a veteran in the performance industry, I’ve spent a large part of my career helping to build performance culture. I’ve been in countless rooms and discussions defending the case for performance and helping to educate cross-functional teams about the impact of performance on the user experience and ultimately the health of the business.
My journey has taken me to both sides – as a product leader focused on building tools and solutions for customers, and as a practitioner focused on creating a culture of performance for one of the world’s largest brands.
Here at SpeedCurve, the past few months have found us obsessing over how to define and measure user happiness. We've also been scrutinizing JS performance, particularly as it applies to third parties. And as always, we're constantly working to improve your experience with our tools. See below for exciting updates on all these fronts.
As always, we love hearing from you, so please send your feedback and suggestions our way!
A while back, our friends at Shopify published this great case study, showing how they optimized one of their newer themes from the ground up – and how they worked to keep it fast. Inspired by that post, I wanted to dig a bit deeper into a few of the best practices they mentioned, which fall loosely into these three buckets:
Keep reading to learn how you can apply these best practices to your own site and give your pages a speed boost.
Raise your hand if you've ever poured countless hours into making a fast website, only to have it slowly degrade over time. New features, tweaks, and Super Important Tracking Snippets all pile up and slow things down. At some point you'll be given permission to "focus on performance", and after many more hours, the website will be fast again. But a few months later, things start to slow again. The cycle repeats.
What if there was a way that you could prevent performance from degrading in the first place? Some sort of performance gateway that only allows changes to production code if they meet performance requirements? I think it's time we talked about having performance regressions break the build.
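As a sketch of what that gateway might look like, here's a small CI check that compares synthetic test results against a set of budgets and exits non-zero on a regression. The `metrics.json` format and the budget values are assumptions for illustration – plug in whatever your testing tool actually produces:

```typescript
// break-build-on-budgets.ts – minimal sketch of a performance gateway.
// Assumes a metrics.json produced by your synthetic test run (hypothetical format).
import { readFileSync } from "fs";

interface Metrics {
  startRender: number; // ms
  jsBytes: number; // transferred JavaScript bytes
}

const budgets: Metrics = { startRender: 1500, jsBytes: 300_000 };
const metrics: Metrics = JSON.parse(readFileSync("metrics.json", "utf8"));

const failures = (Object.keys(budgets) as (keyof Metrics)[]).filter(
  (key) => metrics[key] > budgets[key]
);

if (failures.length > 0) {
  console.error(`Performance budget exceeded: ${failures.join(", ")}`);
  process.exit(1); // a non-zero exit fails the CI step, breaking the build
}
console.log("All performance budgets met.");
```

Run as the last step of your CI pipeline, a check like this turns a performance regression into a failed build rather than a slow surprise in production.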
Our third party metrics and dashboard have had an exciting revamp. With new metrics like blocking CPU, you can now see exactly who is really to blame for a crappy user experience. We've also given you the ability to monitor individual third parties over time and create performance budgets for them.
Or is it really you, and not me? We now automatically group all the requests in our third party waterfall chart, letting you easily identify all the third party services used on your website.
For each third party, you get the number of requests and size for each content type. There's also a first party comparison you can toggle on/off to see what proportion of your requests come from first party vs third party.
Ten years ago, the network was the biggest problem when it came to making websites fast. Today, CPU is the main concern. This happened because networks got faster while JavaScript moved in the other direction, growing 3x in size over the last six years. This growth matters because JavaScript consumes more CPU than all other browser activities combined. While JavaScript and other activities block the CPU, the browser can't respond to user input, creating the sensation of a slow, jittery, or broken page – AKA "jank".
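If you want to see this blocking for yourself, the Long Tasks API surfaces main-thread blocks longer than 50 ms. Here's a minimal sketch (Chromium-only at the time of writing):

```typescript
// Sketch: log long tasks – main-thread blocks over 50 ms – as a proxy for jank.
try {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // While each of these tasks ran, the page couldn't respond to input.
      console.log(
        `Long task: ${entry.duration.toFixed(0)} ms at ${entry.startTime.toFixed(0)} ms`
      );
    }
  });
  // buffered: true replays tasks that fired before the observer was created.
  observer.observe({ type: "longtask", buffered: true });
} catch {
  // Long Tasks API not supported in this browser.
}
```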
To help focus our attention on CPU, several new performance metrics have been defined and evangelized over the last year or three. In this post I'm going to focus on these:
Here's a figure to help visualize these metrics.
As organizations work to improve performance for users around the world on slower networks and devices, the focus on JavaScript continues to grow. LUX's new JavaScript dashboards help you identify both the problems and the solutions for creating a fast, joyous user experience.
LUX is SpeedCurve's real user monitoring product. We launched it two years ago with four dashboards: Live, Users, Performance, and Design. Today we've added two more LUX dashboards: JavaScript and JS Errors. These new dashboards let you see the impact JavaScript has on your site and on your users with new metrics, including First CPU Idle and First Input Delay, and new features, such as correlation charts that show you how CPU time correlates with bounce rate.
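As a rough illustration of how a metric like First Input Delay can be captured in the browser, here's a minimal sketch using the Event Timing API – not LUX's actual implementation:

```typescript
// Sketch: capture First Input Delay (FID) via the Event Timing API.
new PerformanceObserver((list, observer) => {
  const [entry] = list.getEntries() as PerformanceEventTiming[];
  if (entry) {
    // FID = how long the browser was too busy to start handling
    // the user's first interaction.
    const fid = entry.processingStart - entry.startTime;
    console.log(`First Input Delay: ${fid.toFixed(1)} ms`);
    observer.disconnect(); // the first input only happens once
  }
}).observe({ type: "first-input", buffered: true });
```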