Experimentation tools that use asynchronous scripts – such as Google Optimize, Adobe Target, and Visual Website Optimizer – recommend using an anti-flicker snippet to hide the page until they've finished executing. But this practice comes with some performance measurement pitfalls.
In this post, we'll look at how anti-flicker snippets work, their impact on Web Vitals, and how to measure the delay they add to your visitors' experience.
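To ground the discussion, here's a simplified sketch of the pattern these snippets follow. The class name, timeout value, and the "experiments applied" hook below are illustrative, not any vendor's actual snippet:

```typescript
// Simplified sketch of the anti-flicker pattern. An inline <style> rule
// such as `.async-hide { opacity: 0 !important }` does the actual hiding;
// the class name and timeout below are illustrative defaults.
const HIDE_CLASS = 'async-hide';
const TIMEOUT_MS = 4000; // vendors typically default to a few seconds

// 1. Hide the page immediately, before any content can paint.
document.documentElement.classList.add(HIDE_CLASS);

function reveal(): void {
  document.documentElement.classList.remove(HIDE_CLASS);
}

// 2. Safety timeout: if the experimentation script is slow or blocked,
//    the page is revealed anyway rather than staying blank forever.
const timer = window.setTimeout(reveal, TIMEOUT_MS);

// 3. A hypothetical hook the experimentation tool would call once its
//    variations have been applied, revealing the page early.
function onExperimentsApplied(): void {
  window.clearTimeout(timer);
  reveal();
}
```

Note that hiding with `opacity: 0` doesn't stop the browser from painting, which is exactly why paint-based metrics can fire while the user is still looking at a blank page.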
Page Speed Benchmarks is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries – from retail to media – over the past year. This dashboard is publicly available (meaning you don't need a SpeedCurve account to explore it) and is a treasure trove of meaningful data that you can use for your own research.
The dashboard allows you to easily filter by region, industry, mobile/desktop, fast/slow, and key web performance metrics, including Google's Core Web Vitals. (Scroll down to the bottom of this post for more testing details.)
At the time of writing this post, these were the home pages with the fastest Start Render times in key industries:
As you can see, I've included Largest Contentful Paint alongside Start Render in this chart, for reasons I explain below.
(See our more recent page growth post: What is page bloat? And how is it hurting your business, search rank, and users?)
I've been writing about page size and complexity for years. If you've been working in the performance space for a while and you hear me start to talk about page growth, I'd forgive you if you started running away. ;)
But pages keep getting bigger and more complex year over year – and this increasing size and complexity is not fully mitigated by faster devices and networks, or by our hard-working browsers. Clearly we need to keep talking about it. We need to understand how ever-growing pages work against us. And we need to have strategies in place to understand and manage our pages.
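If you want a quick, rough sense of where your own page's bytes are going, the browser's Resource Timing API can tally transfer sizes by resource type. A minimal sketch (not how SpeedCurve measures page weight):

```typescript
// Tally transferred bytes on the current page, grouped by resource type.
// Note: transferSize is 0 for cross-origin resources that don't send a
// Timing-Allow-Origin header, so treat this as a lower bound.
const totals = new Map<string, number>();

const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
for (const entry of resources) {
  const type = entry.initiatorType || 'other'; // e.g. 'img', 'script', 'link'
  totals.set(type, (totals.get(type) ?? 0) + entry.transferSize);
}

for (const [type, bytes] of totals) {
  console.log(`${type}: ${(bytes / 1024).toFixed(1)} KB transferred`);
}
```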
In this post, we'll cover:
Shortly before the end of the year, we snuck in a couple of last-minute gifts for 2021. It was a great year for SpeedCurve with a lot of renewed focus on RUM. We couldn't think of a better way to finish out the year than to launch the new Live and Page Views dashboards. Let's take a look!
This is the most important question we ask ourselves regularly here at SpeedCurve:
Does this bring happiness?
"This" can mean anything from design and features to our own internal processes. We apply this litmus test to everything we do, and I believe it's the secret of our success. It's why we have such an incredible team, such wonderful customers, and such a heartfelt commitment to building tools that make your life easier... and ultimately make your users' lives easier, too!
2021 has been a full year. We celebrated our eighth birthday – which I think makes us about eighty-eight in tech years? We've also grown our team, expanded our customer base, completed a full-scale site redesign, launched our in-house consulting practice, and refined our tools and processes. Here are some of the highlights...
Chances are, you're here because of Google's update to its search algorithm, which affects both desktop and mobile, and which includes Core Web Vitals as a ranking factor. You may also be here because you've heard about the newest candidates for addition to Core Web Vitals, which were just announced at Chrome Dev Summit.
A few things are clear:
If you're new to Core Web Vitals, it's a Google initiative that was launched in early 2020. Core Web Vitals is (currently) a set of three metrics – Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift – that are intended to measure the loading, interactivity, and visual stability of a page.
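For reference, all three metrics can be observed in the browser with the PerformanceObserver API. The sketch below is deliberately simplified; production code (such as Google's web-vitals library) handles edge cases like backgrounded tabs and CLS session windows:

```typescript
// Simplified Core Web Vitals observers, one per metric.

// Largest Contentful Paint: the latest entry is the current candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate (ms):', entries[entries.length - 1].startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Cumulative Layout Shift: sum shift scores not triggered by user input.
interface LayoutShift extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShift[]) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });

// First Input Delay: the first interaction's queuing delay before its
// event handlers could start running.
new PerformanceObserver((list) => {
  const first = list.getEntries()[0] as PerformanceEventTiming;
  console.log('FID (ms):', first.processingStart - first.startTime);
}).observe({ type: 'first-input', buffered: true });
```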
When Google talks, people listen. I talk with a lot of companies, and I can attest that, since Web Vitals were announced, they've shot to the top of many people's lists of things to care about. But Google's prioritization of page speed in search ranking isn't new, even for mobile. As far back as 2013, Google announced that pages that load slowly on mobile devices would be penalized in mobile search.
Keep reading to find out:
One of the things I love about SpeedCurve is our commitment to writing help documents that actually help. Every time we release a new feature, we make sure to give you an accompanying support doc – often written by the same team member who led the feature development. Luckily, we have great writers on our team, so our docs are exceptionally clear, concise, and easy to follow (if I do say so myself).
We just celebrated our eighth birthday – hooray! Eight years of building new features means eight years worth of support docs. That's a lot of docs! Earlier this year, we realized that we had well over a hundred articles in our support centre. Inevitably, some duplication had crept in and some dead wood had accumulated. So we decided to give our docs a complete overhaul. That meant editing, organizing, purging – we gave them the full KonMari treatment.
Our brand-new Support Hub is live! Here's a quick overview of what you can expect to find, including new goodies like our "Web Performance 101" guides, as well as recipes for completing common tasks and our improved and expanded API documentation.
If you want to understand how people actually experience your site, you need to monitor real users. The data we get from real user monitoring (RUM) is extremely useful when trying to get a grasp on performance. Not only does it serve as the source of truth for your most important budgets and KPIs, it also helps you understand that performance is a broad distribution that encompasses many different cohorts of users.
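At its core, RUM is conceptually simple: a small script in the page collects timings from each real visit and beacons them home. A minimal sketch of the idea (not SpeedCurve's actual beacon; the `/rum` endpoint is hypothetical):

```typescript
// Minimal RUM beacon sketch. Collects a few Navigation Timing metrics
// from a real visit and sends them to a (hypothetical) /rum endpoint.
window.addEventListener('load', () => {
  // Wait a tick so loadEventEnd has been recorded.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    if (!nav) return;
    const payload = JSON.stringify({
      page: location.pathname,
      ttfb: nav.responseStart, // time to first byte
      domContentLoaded: nav.domContentLoadedEventEnd,
      load: nav.loadEventEnd,
    });
    // sendBeacon survives page unload more reliably than fetch/XHR.
    navigator.sendBeacon('/rum', payload);
  }, 0);
});
```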
While real user monitoring gives us unparalleled insight into the user experience, the biggest challenge with RUM data is that there's so much of it. Navigating all this data has typically meant peeling back one layer of information at a time, and it often proves difficult to identify the root cause when we see a change:
"What happened here?"
"Did the last release cause a drop in performance?"
"How can I drill down from here to see what's going on?"
"Is the issue confined to a specific region? Browser? Page?"
Today we're excited to release a new capability – your RUM Sessions dashboard – which allows you to drill into a dataset and explore the sessions that occurred within a given time span.
One of the huge benefits of tracking web performance over time is the ability to see trends and compare metrics. Last year we added new functionality that makes it easy for you to bookmark and compare different synthetic tests in your test history. We recently added some additional enhancements to make comparing tests even easier.
With the 'Compare' feature, you can generate side-by-side comparisons that let you not only spot regressions, but easily identify what caused them:
Along the way, we've also made it much more intuitive for you to drill down into your detailed synthetic test results. Let's take a look...
Following Google's announcement of Lighthouse 8 this past month, we've updated our test agents. We've received a lot of questions about what changed and how it affects your performance metrics, so here's a summary.