I’m beyond excited to announce that I’m joining the SpeedCurve team this year! I’ll still be doing some consulting work, but I’ll be taking on fewer clients so I can focus on helping to make an already amazing performance tool even better, working alongside some of my favorite people in the performance community.
Andy Davies – fellow SpeedCurver and web performance consultant extraordinaire – recently shared an impressive Interaction to Next Paint (INP) success:
Andy has promised us a more in-depth post on debugging Interaction to Next Paint. While he's working on that, I'll try not to steal his thunder as I share a tip that may help you identify the element(s) causing INP issues on your pages.
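In the meantime, here's a rough sketch of one way to see which elements are behind slow interactions, using the browser's Event Timing API. To be clear, this is my own illustration rather than Andy's forthcoming technique, and it assumes a Chromium-based browser with Event Timing support (and reasonably recent DOM typings if you compile it as TypeScript). The 200ms threshold simply mirrors the "needs improvement" boundary for INP.

```typescript
// Rough sketch (not Andy's forthcoming technique): use the Event Timing API to
// log slow interactions along with the elements they targeted.
// Assumes a Chromium-based browser and recent DOM typings (for interactionId
// and durationThreshold).
const slowInteractionObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    // Only entries with an interactionId count toward INP.
    if (!entry.interactionId) continue;

    const target = entry.target as Element | null;
    const label = target
      ? `${target.tagName.toLowerCase()}${target.id ? '#' + target.id : ''}`
      : '(target not retained)';

    console.log(`${entry.name} took ${Math.round(entry.duration)}ms on ${label}`);
  }
});

// durationThreshold filters out fast interactions; 200ms mirrors the
// "needs improvement" boundary for INP. buffered:true picks up interactions
// that happened before the observer was registered.
slowInteractionObserver.observe({ type: 'event', buffered: true, durationThreshold: 200 });
```

Drop it in early in the page lifecycle, interact with the page, and watch which elements keep showing up.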
Every year feels like a big year here at SpeedCurve, and 2023 was no exception.
Among other things, we turned ten! Ten years is a lot of time to reflect, and over this past year our team has been thinking a lot about not just the "what" and "how" of web performance, but also the "why". Why should we – and you – care about delivering a fast, delightful experience to all your users? This "why" informs all the choices we make about the "what" and "how" of our tools.
Looking back over the past year, if I were to pick a word that defined our goals, that word would be "easier". It's no secret that the past couple of years have been challenging for the tech community. In the current landscape of smaller teams, aggressive goals, and an ever-increasing tech stack, how can we make it easier for you to create impact?
Our biggest achievements this year have centred on making it easier for you to:
Keep reading to learn more...
As highlighted in our December product update, we've been making a lot of improvements in the area of CI/CD. In addition to the new Deployments dashboards and Notes updates, we've launched a new GitHub integration. Our purpose in building this integration is to lower the barrier to getting web performance feedback on your code changes, directly in the environment you're working in.
Follow along below for an example of how you can use this integration in practice to fight web performance regressions and keep your pages fast.
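The integration handles all of the wiring for you, but if you're curious about the underlying idea – letting SpeedCurve know a deploy happened so it can kick off a round of testing and mark the change on your charts – here's a rough sketch of doing the same thing from a CI step by calling the SpeedCurve deploys API directly. The endpoint and parameter names reflect my reading of the API docs, and the SPEEDCURVE_API_KEY environment variable is just a placeholder, so check the official documentation before borrowing any of this.

```typescript
// Rough illustration only: notify SpeedCurve of a deploy from a CI step so it
// can trigger a round of synthetic tests. Endpoint and parameter names are
// based on my reading of the SpeedCurve API docs -- verify before relying on
// this. (The GitHub integration described above does this wiring for you.)
// Requires Node 18+ for the global fetch.

const API_KEY = process.env.SPEEDCURVE_API_KEY; // placeholder env var name
if (!API_KEY) throw new Error('SPEEDCURVE_API_KEY is not set');

async function notifyDeploy(note: string, detail: string): Promise<void> {
  const response = await fetch('https://api.speedcurve.com/v1/deploys', {
    method: 'POST',
    headers: {
      // SpeedCurve uses HTTP Basic auth with the API key as the username.
      Authorization: 'Basic ' + Buffer.from(`${API_KEY}:x`).toString('base64'),
    },
    body: new URLSearchParams({ note, detail }),
  });
  if (!response.ok) {
    throw new Error(`Deploy notification failed with status ${response.status}`);
  }
}

// Example: call this after a successful production deploy in your pipeline.
notifyDeploy(`Deploy ${process.env.GITHUB_SHA ?? 'local'}`, 'Triggered from CI').catch((err) => {
  console.error(err);
  process.exit(1);
});
```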
I love LEGO. My kids love LEGO, too, which means that every year I find myself spending a fair bit of time on the LEGO website during the holidays. So I thought it would be fun to spend some time poking around behind the scenes and give the site a holiday performance audit. Keep reading to find out what I learned, and some lessons you may want to apply to your own pages.
Holy cow, it's been a busy few months! SpeedCurve turned ten, we attended (and gave talks at) performance.now(), Firefox added support for Largest Contentful Paint (LCP), and oh yeah... we just shipped a ton of stuff! (We wanted to wait until the dust settled around Black Friday/Cyber Monday for all of our friends in retail, which led to a pretty monumental release.)
So get comfy and check out our holiday updates.
Earlier this year, when Google announced that Interaction to Next Paint (INP) would replace First Input Delay (FID) as the responsiveness metric in Core Web Vitals in *gulp* March of 2024, we had a lot to say about it. (TL;DR: FID doesn't correlate with real user behavior, so we don't endorse it as a meaningful metric.)
Our stance hasn't changed much since then. Most people agree that the transition from FID to INP is a good thing. INP certainly seems to be capturing interaction issues that we see in the field.
However, after several months of discussing the impending change and getting a better look at INP issues in the wild, it's hard to ignore the fact that mobile stands out as the biggest INP offender by a wide margin. This doesn't get talked about as much as it should, so in this post we'll explore:
Earlier this year, Google announced that Interaction to Next Paint (INP) is no longer an experimental metric. INP will replace First Input Delay (FID) as a Core Web Vital in March of 2024.
Now that INP has arrived to dethrone FID as the responsiveness metric in Core Web Vitals, we've turned our eye to scrutinizing its effectiveness. In this post, we'll look at real-world data and attempt to answer: What correlation – if any – does INP have with actual user behavior and business metrics?
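To run that kind of analysis, you first need INP (and the rest of the Core Web Vitals) flowing in from real users alongside your business data. As a rough illustration – not how SpeedCurve's RUM agent works under the hood – here's a minimal sketch using the open-source web-vitals library to beacon metrics to a hypothetical /rum-metrics endpoint, where they can later be joined against bounce and conversion data.

```typescript
// Minimal sketch: capture INP (plus LCP and CLS) from real users with the
// open-source `web-vitals` library and beacon the values to your own endpoint.
// The /rum-metrics URL is hypothetical; swap in whatever your analytics
// pipeline expects.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // 'INP' | 'LCP' | 'CLS'
    value: metric.value,   // milliseconds for INP/LCP, unitless score for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,         // unique per page load, handy for deduplication
    page: location.pathname,
  });

  // sendBeacon survives the page being unloaded, which is when INP is often
  // reported; fall back to a keepalive fetch if it's unavailable or refuses.
  if (!navigator.sendBeacon('/rum-metrics', body)) {
    fetch('/rum-metrics', { method: 'POST', body, keepalive: true });
  }
}

onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
```

Once those beacons land next to session outcomes (bounced or not, converted or not), the correlation question becomes a fairly straightforward query.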
This month, SpeedCurve enters double digits with our tenth birthday. We're officially in our tweens! (Cue the mood swings?)
I joined the team in early 2017, and I'm blown away by how quickly the years have flown by. Every day, I marvel at my great luck in getting to work alongside an amazing team to build amazing tools to help amazing people like you!
In the spirit of celebration, I thought it would be fun to round up my ten favourite things to do in SpeedCurve (that I think you'll like, too). Keep scrolling to learn how to:
As we all know, naming things is hard.
Google's Core Web Vitals are an attempt to help folks new to web performance focus on three key metrics. Not all of these metrics are easy to understand based on their names alone:
Any time a new metric is introduced, it puts the burden on the rest of us to first unpack all the acronyms, and then explore and digest what concepts the words might refer to. This gets even trickier when the acronym stays the same but the logic and algorithm behind it change.
In this post, we will dive deeper into Cumulative Layout Shift (CLS) and how it has quietly evolved over the years. Because CLS has been around for a while, you may already have some idea of what it represents. Before we go any further, I have a simple question for you:
How do you think Cumulative Layout Shift is measured?
Hold your answer in your head as we explore the depths of CLS. I'm curious whether your assumptions are correct, and there's a poll at the bottom of this post that I'd love you to answer.
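If you'd like to peek at the raw ingredients before you answer, here's a small sketch that watches layout-shift entries in the browser and keeps a naive running total. Summing every shift was the original "cumulative" definition; how those individual shifts are aggregated today (Chrome now reports the worst "session window" of shifts rather than a simple total) is exactly the quiet evolution this post digs into.

```typescript
// Sketch: watch the raw layout-shift entries that feed into CLS and keep a
// naive running total. Note that the naive sum is the *original* definition;
// current CLS groups shifts into session windows and reports the worst window.
let runningTotal = 0;

const layoutShiftObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // LayoutShift entries expose `value` and `hadRecentInput`; DOM typings may
    // lag behind the spec, hence the cast.
    const shift = entry as PerformanceEntry & { value: number; hadRecentInput: boolean };

    // Shifts that happen right after user input don't count toward CLS.
    if (shift.hadRecentInput) continue;

    runningTotal += shift.value;
    console.log(`layout shift of ${shift.value.toFixed(4)} (naive total: ${runningTotal.toFixed(4)})`);
  }
});

layoutShiftObserver.observe({ type: 'layout-shift', buffered: true });
```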