ICYMI: Some of our most exciting product updates of 2024!

Every year feels like a big year here at SpeedCurve, and 2024 was no exception. Here's a recap of product highlights designed to make your performance monitoring even better and easier!

Our biggest achievements this year have centred on making it easier for you to:

  • Gather more meaningful real user monitoring (RUM) data
  • Get actionable insights from Core Web Vitals
  • Simplify your synthetic testing
  • Get expert performance coaching when and how you need it

Keep reading to learn more...

Continue reading...

A Holiday Wish: Core Web Vitals in Safari

Did you know that key performance metrics – like Core Web Vitals – aren't supported in Safari? If that's news to you, you're not alone! Here's why that is... and what we and the rest of the web performance community are doing to fix it.

Bluesky post from Nicole Sullivan aka Stubbornella asking the community if they want CWV for Safari.

Somebody pinch me. Seeing this post and the resulting thread gives me great hope.

Nicole Sullivan (aka Stubbornella, WebKit Engineering Manager at Apple, and OG web performance evangelist) isn't making promises or dangling a carrot. Nonetheless, her post is evidence of a willingness to publicly discuss a topic that's been debated in our community for years. It has drawn some great responses from many community leaders, which will hopefully shape a strong use case for future WebKit support for Core Web Vitals.

(If you're new to performance, Core Web Vitals is a set of three metrics – Largest Contentful Paint, Cumulative Layout Shift, and Interaction to Next Paint – that are intended to measure the rendering speed, visual stability, and responsiveness of web pages.)

In this post, I'm going to highlight some of the discussion around the topic of Core Web Vitals and Safari, which was a major theme coming out of the recent web performance marathon in Amsterdam that included WebPerf Days, performance.sync(), and the main event, performance.now().

Continue reading...

NEW: Vitals dashboard updates and filter improvements

Our development team recently emerged from an offsite with two wonderful improvements to SpeedCurve. The team tackled a project to unify our filtering, and then they over-delivered with a re-Vital-ized dashboard that I'm finding to be one of the most useful views in the product.

Take a look at the recent updates – and a big thank you to our amazing team for putting so much love into SpeedCurve!

Continue reading...

How to provide better attribution for your RUM metrics

Here's a detailed walkthrough showing how to make more meaningful and intuitive attributions for your RUM metrics – which makes it much easier for you to zero in on your performance issues.

Real user monitoring (RUM) has always been incredibly important for any organization focused on performance. RUM – also known as field testing – captures performance metrics as real users browse your website and helps you understand how actual users experience your site. But it’s only in the last few years that RUM data has started to become more actionable, allowing you to diagnose what is making your pages slower or less usable for your visitors.

Making newer RUM metrics – such as Core Web Vitals – more actionable has been a significant priority for standards bodies. A big part of this shift has been better attribution, so we can tell what's actually going on when RUM metrics change.

Core Web Vitals metrics – like Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) – all have some level of attribution associated with them, which helps you identify what exactly is triggering the metric. The Long Animation Frames (LoAF) API is all about attribution, helping you zero in on which scripts are causing issues.

Having this attribution available, particularly when paired with meaningful subparts, can help us to quickly identify which specific components we should prioritize in our optimization work.

We can help make this attribution even more valuable by ensuring that key components in our page have meaningful, semantic attributes attached to them.
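To give a flavor of what that looks like in practice, here's a minimal sketch using the open-source web-vitals library's attribution build. Attribution property names have changed between library versions, so treat the specifics as indicative rather than definitive; the example selectors are assumptions.

```ts
// Minimal sketch: surfacing LCP and INP attribution with the web-vitals
// library's attribution build. Property names vary between library versions.
import { onLCP, onINP } from 'web-vitals/attribution';

onLCP(({ value, attribution }) => {
  // attribution.element is a CSS selector for the LCP element. A meaningful
  // id or class (e.g. '#product-hero') is far easier to act on than an
  // auto-generated selector chain.
  console.log('LCP', value, 'element:', attribution.element);
});

onINP(({ value, attribution }) => {
  // Recent versions expose the interaction target as a selector string
  // (interactionTarget); older versions called it eventTarget.
  console.log('INP', value, 'target:', attribution.interactionTarget);
});
```

Giving key components stable, descriptive ids or classes is what turns those selectors from noise into something you can immediately map back to a part of your page.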

Continue reading...

NEW: Paper cuts update!

Paper cut: (literal) A wound caused by a piece of paper or any thin, sharp material that can slice through skin. (figurative) A trivial-seeming problem that causes a surprising amount of pain.

We all love big showy features, and this year we've released our share of those. But sometimes it's the small stuff that can make a big difference. We recently took a look at our backlog of smaller requests from our customers – which we labelled "paper cuts" – and decided to dedicate time to tackle them.

Are they all glamorous changes? Maybe not, though some are pretty exciting.

Are they worthy of a press release? Ha! We don't even know how to issue a press release.

Will they make your day better and put a smile on your face? We sure hope so.

In total, our wonderful development team tackled more than 30 paper cuts! These include:

  • Exciting new chart types for Core Web Vitals and User Happiness
  • Filter RUM data by region
  • Create a set of tests for one or multiple sites or custom URLs
  • Test directly from your site settings when saving changes
  • Usability improvements
  • Better messaging for test failures
  • And more!

Keep scrolling for an overview of some of the highlights. 

Continue reading...

NEW: Improving how we collect RUM data

We've made improvements to how we collect RUM data. Most SpeedCurve users won't see significant changes to Core Web Vitals or other metrics, but for a small number of users some metrics may increase.

This post covers:

  • What the changes are
  • How the changes can affect Core Web Vitals and other metrics
  • Why we are making the changes now

Continue reading...

NEW: RUM attribution and subparts for Interaction to Next Paint!

Now it's even easier to find and fix Interaction to Next Paint issues and improve your Core Web Vitals.

Heatmap table showing INP selector performance sorted by page views

Our newest release continues our theme of making your RUM data even more actionable. In addition to advanced settings, navigation types, and page attributes, we've just released more diagnostic detail for the latest flavor in Core Web Vitals: Interaction to Next Paint (INP).

This post covers:

  • Element attribution for INP 
  • A breakdown of where time is spent within INP, leveraging subparts
  • How to use this information to find and fix INP issues
  • A look ahead at RUM diagnostics at SpeedCurve

Continue reading...

A Complete Guide to Web Performance Budgets

It's easier to make a fast website than it is to keep a website fast. If you've invested countless hours in speeding up your site, but you're not using performance budgets to prevent regressions, you could be at risk of wasting all your efforts.

In this post we'll cover how to:

  • Use performance budgets to fight regressions
  • Understand the difference between performance budgets and performance goals
  • Identify which metrics to track
  • Validate your metrics to make sure they're measuring what you think they are – and to see how they correlate with your user experience and business metrics
  • Determine what your budget thresholds should be
  • Focus on the pages that matter most
  • Get buy-in from different stakeholders in your organization
  • Integrate with your CI/CD process
  • Synthesize your synthetic and real user monitoring data
  • Maintain your budgets

The bottom of this post also contains a collection of case studies from companies that are using performance budgets to stay fast.
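To give a rough sense of what a budget check can look like in a CI pipeline, here's a purely illustrative sketch – the metric names, thresholds, and the source of the measured values are all assumptions, not a specific SpeedCurve integration.

```ts
// Purely illustrative: compare measured metrics against agreed budgets and
// fail the build on a regression. Metric names and thresholds are assumptions.
interface Budget {
  metric: string;
  thresholdMs: number;
}

const budgets: Budget[] = [
  { metric: 'largest-contentful-paint', thresholdMs: 2500 },
  { metric: 'start-render', thresholdMs: 1500 },
];

function checkBudgets(measured: Record<string, number>): string[] {
  const failures: string[] = [];
  for (const { metric, thresholdMs } of budgets) {
    const value = measured[metric];
    if (value !== undefined && value > thresholdMs) {
      failures.push(`${metric}: ${value}ms exceeds budget of ${thresholdMs}ms`);
    }
  }
  return failures;
}

// In a CI step, a non-empty failure list would block the deploy.
const failures = checkBudgets({ 'largest-contentful-paint': 3100, 'start-render': 1200 });
if (failures.length > 0) {
  throw new Error(`Performance budget exceeded:\n${failures.join('\n')}`);
}
```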

Let's get started!

Continue reading...

Hello INP! Here's everything you need to know about the newest Core Web Vital

After years of development and testing, Google has added Interaction to Next Paint (INP) to its trifecta of Core Web Vitals – the performance metrics that are a key ingredient in its search ranking algorithm. INP replaces First Input Delay (FID) as the Vitals responsiveness metric.

Not sure what INP means or why it matters? No worries – that's what this post is for. :)

  • What is INP?
  • Why has it replaced First Input Delay?
  • How does INP correlate with user behaviour metrics, such as conversion rate?
  • What you need to know about INP on mobile devices
  • How to debug and optimize INP

And at the bottom of this post, we'll wrap things up with some inspiring case studies from companies that have found that improving INP has improved sales, pageviews, and bounce rates.

Let's dive in!

Continue reading...

Debugging Interaction to Next Paint (INP)

Not surprisingly, most of the conversations I've had with SpeedCurve users over the last few months have focused on improving INP.

INP measures how responsive a page is to visitor interactions. It measures the elapsed time between a tap, a click, or a keypress and the browser next painting to the screen.

Definition of INP

INP breaks down into three subparts:

  • Input Delay – How long the interaction handler has to wait before executing
  • Processing Time – How long the interaction handler takes to execute
  • Presentation Delay – How long it takes the browser to execute any work it needs to paint updates triggered by the interaction handler

Pages can have multiple interactions, so the INP for a given page view is generally the worst (highest) interaction time. RUM products and other tools, such as Google Search Console and Chrome's UX Report (CrUX), then typically report INP at the 75th percentile across page views.
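As a rough illustration of how those subparts can be derived in the browser, here's a sketch using the Event Timing API. The 40ms duration threshold below is an arbitrary example, not a recommendation.

```ts
// Sketch: watch interactions with the Event Timing API and break each one
// into the three INP subparts described above.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    const inputDelay = entry.processingStart - entry.startTime;
    const processingTime = entry.processingEnd - entry.processingStart;
    const presentationDelay =
      entry.startTime + entry.duration - entry.processingEnd;
    console.log(entry.name, { inputDelay, processingTime, presentationDelay });
  }
});
// durationThreshold: only report interactions longer than 40ms (example value).
observer.observe({ type: 'event', durationThreshold: 40, buffered: true });
```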

Like all Core Web Vitals, INP has a set of thresholds:

INP thresholds for Good, Needs Improvement, and Poor

Many sites tend to be in the Needs Improvement or Poor categories. My experience over the last few months is that getting to Good is achievable, but it's not always easy.

In this post I'm going to walk through:

  • How I help people identify the causes of poor INP times
  • Examples of some of the most common issues
  • Approaches I've used to help sites improve their INP

Continue reading...

How to find (and fix!) INP interactions on your pages

Andy Davies – fellow SpeedCurver and web performance consultant extraordinaire – recently shared an impressive Interaction to Next Paint (INP) success:

Andy has promised us a more in-depth post on debugging Interaction to Next Paint. While he's working on that, I'll try not to steal his thunder while I share a tip that may help you identify element(s) causing INP issues for your pages.

Continue reading...

Mobile INP performance: The elephant in the room

Earlier this year, when Google announced that Interaction to Next Paint (INP) will replace First Input Delay (FID) as the responsiveness metric in Core Web Vitals in *gulp* March of 2024, we had a lot to say about it. (TLDR: FID doesn't correlate with real user behavior, so we don't endorse it as a meaningful metric.)

Our stance hasn't changed much since then. For the most part, everyone agrees the transition from FID to INP is a good thing. INP certainly seems to be capturing interaction issues that we see in the field.

However, after several months of discussing the impending change and getting a better look at INP issues in the wild, it's hard to ignore the fact that mobile stands out as the biggest INP offender by a wide margin. This doesn't get talked about as much as it should, so in this post we'll explore:

  • The gap between "good" INP for desktop vs mobile
  • Working theories as to why mobile INP is so much poorer than desktop INP
  • Correlating INP with user behavior and business metrics (like conversion rate)
  • How you can track and improve INP for your pages

Continue reading...

Does Interaction to Next Paint actually correlate to user behavior?

Earlier this year, Google announced that Interaction to Next Paint (INP) is no longer an experimental metric. INP will replace First Input Delay (FID) as a Core Web Vital in March of 2024.

Now that INP has arrived to dethrone FID as the responsiveness metric in Core Web Vitals, we've turned our eye to scrutinizing its effectiveness. In this post, we'll look at real-world data and attempt to answer: What correlation – if any – does INP have with actual user behavior and business metrics?

Continue reading...

10 things I love about SpeedCurve (that I think you'll love, too)

This month, SpeedCurve enters double digits with our tenth birthday. We're officially in our tweens! (Cue the mood swings?)

I joined the team in early 2017, and I'm blown away at how quickly the years have flown by. Every day, I marvel at my great luck in getting to work alongside an amazing team to build amazing tools to help amazing people like you!

In the spirit of celebration, I thought it would be fun to round up my ten favourite things to do in SpeedCurve (that I think you'll like, too). Keep scrolling to learn how to:

  1. Fight regressions and stay fast
  2. See the impact of performance on your business
  3. Benchmark your site against your competitors
  4. Track third parties to make sure they're not quietly hurting performance
  5. Make sure you're tracking the best metrics for your pages
  6. Get a prioritized list of performance recommendations
  7. Bookmark and compare synthetic tests and RUM sessions so you can quickly find and fix performance issues
  8. Run A/B tests so you see how code changes affect your performance and user engagement metrics
  9. Get customized weekly reports
  10. Motivate your team with a wall-mounted monitor showing your favourite charts 

Continue reading...

Demystifying Cumulative Layout Shift with CLS Windows

As we all know, naming things is hard.

Google's Core Web Vitals are an attempt to help folks new to web performance focus on three key metrics. Not all of these metrics are easy to understand based on their names alone:

  • Largest Contentful Paint (LCP) – When the largest visual element on the page renders
  • First Input Delay (FID) – How long the browser takes to start responding to the first user interaction (FID will be replaced by Interaction to Next Paint in March 2024)
  • Cumulative Layout Shift (CLS) – How visually stable a page is

Any time a new metric is introduced, it puts the burden on the rest of us to first unpack all the acronyms, and then explore and digest what concepts the words might refer to. This gets even trickier if the acronym stays the same, but the logic and algorithm behind the acronym changes.

In this post, we will dive deeper into Cumulative Layout Shift (CLS) and how it has quietly evolved over the years. Because CLS has been around for a while, you may already have some idea of what it represents. Before we go any further, I have a simple question for you:

How do you think Cumulative Layout Shift is measured? 

Hold your answer in your head as we explore the depths of CLS. I'm curious whether your assumptions are correct, so there's a poll at the bottom of this post that I'd love you to answer.
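For reference, here's a rough sketch of the session-window logic behind the current definition of CLS – an illustration only, not SpeedCurve's implementation – in case you'd like to check your answer against it.

```ts
// Sketch of CLS's session-window logic: layout shifts are grouped into windows
// that end after a 1-second gap or 5 seconds total, and CLS is the largest
// window total observed.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let cls = 0;
let windowValue = 0;
let windowStart = 0;
let lastShiftTime = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    if (entry.hadRecentInput) continue; // shifts right after input are excluded
    const startsNewWindow =
      windowValue === 0 ||
      entry.startTime - lastShiftTime >= 1000 ||
      entry.startTime - windowStart >= 5000;
    if (startsNewWindow) {
      windowValue = 0;
      windowStart = entry.startTime;
    }
    windowValue += entry.value;
    lastShiftTime = entry.startTime;
    cls = Math.max(cls, windowValue);
  }
}).observe({ type: 'layout-shift', buffered: true });
```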

Continue reading...

Exploring performance and conversion rates just got easier

Demonstrating the impact of performance on your users – and on your business – is one of the best ways to get your company to care about the speed of your site.

Tracking goal-based metrics like conversion rate alongside performance data can give you richer and more compelling insights into how the performance of your site affects your users. This concept is not new by any means. In 2010, the Performance and Reliability team I was fortunate enough to lead at Walmartlabs shared our findings around the impact of front-end times on conversion rates. (This study and a number of other case studies tracked over the years can be found at WPOstats.)

Setting up conversion tracking in SpeedCurve RUM is fairly simple and definitely worthwhile. This post covers:

  • What is a conversion?
  • How to track conversions in SpeedCurve
  • Using conversion data with performance data for maximum benefit
  • Conversion tracking and user privacy
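As a taste of how simple the setup can be, here's a hypothetical sketch that tags a conversion as custom data via SpeedCurve RUM's LUX.addData API. The dimension names and where you call it from are assumptions about your site.

```ts
// Hypothetical sketch: flag a conversion on the order-confirmation page so it
// can be correlated with RUM performance data. The 'converted' and 'cartValue'
// names, and the presence of the global LUX object (lux.js), are assumptions.
declare const LUX: {
  addData: (name: string, value: string | number | boolean) => void;
};

function onOrderConfirmed(orderTotal: number): void {
  // Mark this page view as a conversion, plus any extra dimensions you care about.
  LUX.addData('converted', true);
  LUX.addData('cartValue', orderTotal);
}
```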

Continue reading...

What is page bloat? And how is it hurting your business, your search rank, and your users?

For more than ten years, I've been writing about page bloat, its impact on site speed, and ultimately how it affects your users and your business. You might think that this topic would be played out by now, but every year I learn new things – beyond the overarching fact that pages keep getting bigger and more complex, as you can see in this chart, using data from the HTTP Archive.

In this post, we'll cover:

  • How much pages have grown over the past year
  • How page bloat hurts your business and – at the heart of everything – your users
  • How page bloat affects Google's Core Web Vitals (and therefore SEO)
  • If it's possible to have large pages that still deliver a good user experience
  • Page size targets
  • How to track page size and complexity
  • How to fight regressions

Continue reading...

Farewell FID... and hello Interaction to Next Paint!

Today at Google I/O 2023, it was announced that Interaction to Next Paint (INP) is no longer an experimental metric. INP will replace First Input Delay (FID) as a Core Web Vital in March of 2024. 

It's been three years since the Core Web Vitals initiative was kicked off in May 2020. In that time, we've seen people's interest in performance dramatically increase, especially in the world of SEO. It's been hugely helpful to have a simple set of three metrics – focused on loading, interactivity, and responsiveness – that everyone can understand and focus on.

During this time, SpeedCurve has stayed objective when looking at the CWV metrics. When it comes to new performance metrics, it's easy to jump on hype-fuelled bandwagons. While we definitely get excited about emerging metrics, we also approach each new metric with an analytical eye. For example, back in November 2020, we took a closer look at one of the Core Web Vitals, First Input Delay, and found that it was sort of 'meh' overall when it came to meaningfully correlating with actual user behavior.

Now that INP has arrived to dethrone FID as the responsiveness metric for Core Web Vitals, we've turned our eye to scrutinizing its effectiveness.

In this post, we'll take a closer look and attempt to answer:

  • What is Interaction to Next Paint?
  • How does INP compare to FID?
  • What is a 'good' INP result?
  • Will there be differences between INP collected in RUM vs. Chrome User Experience Report (CrUX)?
  • What correlation does INP have with real user behavior?
  • When should you start caring about INP?
  • How can you see INP for your own site in SpeedCurve?

Onward!

Continue reading...

NEW! Lighthouse 10, Core Web Vitals updates, and Interaction to Next Paint

There is a lot of excitement in the world of web performance these days, and April has been no exception! At SpeedCurve, we've been focused on staying on top of the items that affect you the most.

Here is a look at what's new in SpeedCurve:

  • Support for Lighthouse 10, including metric scoring changes as well as audits
  • Updated RUM Core Web Vitals, including the much-anticipated addition of Interaction to Next Paint (INP)

All of this community-driven work is having a big impact on our collective goal of making performance accessible to everyone.

Read on to learn more about these exciting changes! 

Continue reading...

Why you need to know your site's performance plateau (and how to find it)

"I made my pages faster, but my business and user engagement metrics didn't change. WHY???"

"How do I know how fast my pages should be?"

"How can I demonstrate the business value of performance to people in my organization?"

If you've ever asked yourself any of these questions, then you could find the answers in identifying and understanding the performance plateau for your site.

What is the "performance plateau"?

The performance plateau is the point at which changes to your website’s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you’ve bottomed out in terms of business and user engagement metrics.

In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.
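As a rough sketch of how you might look for the plateau in your own data, you could bucket RUM sessions by a rendering metric and compute the conversion rate per bucket, then eyeball where the curve flattens. The data shape below is an assumption about your export, not a SpeedCurve API.

```ts
// Illustrative sketch: bucket sessions by LCP and compute conversion rate per
// bucket. The Session shape and 500ms bucket size are assumptions.
interface Session {
  lcpMs: number;
  converted: boolean;
}

function conversionRateByLcpBucket(sessions: Session[], bucketMs = 500) {
  const buckets = new Map<number, { total: number; converted: number }>();
  for (const s of sessions) {
    const key = Math.floor(s.lcpMs / bucketMs) * bucketMs;
    const b = buckets.get(key) ?? { total: 0, converted: 0 };
    b.total += 1;
    if (s.converted) b.converted += 1;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([lcpBucketMs, { total, converted }]) => ({
      lcpBucketMs,
      conversionRate: converted / total,
    }));
}
```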

The concept of the performance plateau isn't new. I first encountered it more than ten years ago, when I was looking at data for a number of sites and noticed that not only was there a correlation between performance metrics and business/engagement metrics, but there was also a noticeable plateau in almost every correlation chart I looked at.

A few months ago someone asked me if I'd done any recent investigation into the performance plateau, to see if the concept still holds true. When I realized how much time had passed since my initial research, I thought it would be fun to take a fresh look.

In this post, I'll show how to use your own data to find the plateau for your site, and then what to do with your new insights.

Continue reading...
