Why Optimizing Your Lighthouse Score Is Not Enough For A Fast Website
We’ve all had that moment. You’re optimizing the performance of some website, scrutinizing every millisecond it takes for the current page to load. You’ve fired up Google Lighthouse from Chrome’s DevTools because everyone and their uncle uses it to evaluate performance.
After running your 151st report and completing all of the recommended improvements, you experience nirvana: a perfect 100% performance score!
Time to pat yourself on the back for a job well done. Maybe you can use this to get that pay raise you’ve been wanting! Except, don’t — at least not using Google Lighthouse as your sole proof. I know a perfect score produces all kinds of good feelings. That’s what we’re aiming for, after all!
Google Lighthouse is merely one tool in a complete performance toolkit. What it’s not is a complete picture of how your website performs in the real world. Sure, we can glean plenty of insights about a site’s performance and even spot issues that ought to be addressed to speed things up. But again, it’s an incomplete picture.
What Google Lighthouse Is Great At
I hear other developers boasting about perfect Lighthouse scores and see the screenshots published all over socials. Hey, I just did that myself in the introduction of this article!
Open DevTools, click the Lighthouse tab, and generate the report! There are also many ways we can configure Lighthouse to measure performance in simulated situations, such as throttling the internet connection speed or generating separate reports for mobile and desktop. It’s a very powerful tool for something that comes baked into a free browser. It’s also baked right into Google’s PageSpeed Insights tool!
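And if you prefer automation over clicking through DevTools, the same reports can be generated from Node. Here’s a minimal sketch using the `lighthouse` and `chrome-launcher` npm packages — the URL and output filename are placeholders:

```js
// Run a performance-only Lighthouse audit against a headless Chrome instance.
import fs from 'node:fs';
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://example.com', {
  port: chrome.port,               // attach to the Chrome we just launched
  output: 'html',                  // 'json' and 'csv' are also supported
  onlyCategories: ['performance'], // skip SEO, accessibility, etc.
});

fs.writeFileSync('report.html', result.report);
console.log('Performance score:', result.lhr.categories.performance.score * 100);

await chrome.kill();
```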
And it’s fast. Run a report in Lighthouse, and you’ll get something back in about 10-15 seconds. Try running reports with other tools, and you’ll find yourself refilling your coffee, hitting the bathroom, and maybe checking your email (in varying order) while waiting for the results. There’s a good reason for that, but all I want to call out is that Google Lighthouse is lightning fast as far as performance reporting goes.
To recap: Lighthouse is great at many things!
- It’s convenient to access,
- It provides a good deal of configuration for different levels of troubleshooting,
- And it spits out reports in record time.
And what about that bright and lovely animated green score — who doesn’t love that?!
OK, that’s the rosy side of Lighthouse reports. It’s only fair to highlight its limitations as well. This isn’t to dissuade you or anyone else from using Lighthouse, but more of a heads-up that your score may not perfectly reflect reality — or even match the scores you’d get in other tools, including Google’s own PageSpeed Insights.
It Doesn’t Match “Real” Users
Not all data is created equal in capital-W Web Performance. It’s important to know this because data represents the assumptions that reporting tools make when evaluating performance metrics.
The data Lighthouse relies on for its reporting is called simulated data. You might already have a solid guess at what that means: it’s synthetic data. Now, before kicking simulated data in the knees for not being “real” data, know that it’s the reason Lighthouse is super fast.
You know how there’s a setting to “throttle” the internet connection speed? That simulates different conditions that either slow down or speed up the connection, something you configure directly in Lighthouse. By default, Lighthouse collects data on a fast connection, but we can configure it to something slower to gain insights on slow page loads. But beware! Lighthouse doesn’t actually load the page over that slower connection — it loads it on the fast one and then estimates how quickly the page would have loaded under the throttled conditions.
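Those knobs are exposed in Lighthouse’s configuration. As a rough sketch — the values below mirror Lighthouse’s default mobile “Slow 4G” simulation, and `chrome.port` assumes a Chrome instance launched as in the earlier example:

```js
// Simulated throttling settings passed to the Lighthouse Node API.
// 'simulate' loads on a fast connection and *estimates* slow-network timings;
// 'devtools' applies real throttling instead (slower to run, more realistic).
const flags = {
  port: chrome.port,            // Chrome instance from chrome-launcher
  throttlingMethod: 'simulate',
  throttling: {
    rttMs: 150,                 // simulated round-trip time
    throughputKbps: 1638.4,     // ~1.6 Mbps downlink
    cpuSlowdownMultiplier: 4,   // pretend the CPU is 4x slower
  },
};

const result = await lighthouse('https://example.com', flags);
```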
DebugBear founder Matt Zeunert outlines how data runs in a simulated throttling environment, explaining how Lighthouse uses “optimistic” and “pessimistic” averages for making conclusions:
“[Simulated throttling] reduces variability between tests. But if there’s a single slow render-blocking request that shares an origin with several fast responses, then Lighthouse will underestimate page load time.
Lighthouse averages optimistic and pessimistic estimates when it’s unsure exactly which nodes block rendering. In practice, metrics may be closer to either one of these, depending on which dependency graph is more correct.”
And again, the environment is a configuration, not reality. It’s unlikely that your throttled conditions match the connection speeds of an average real user on the website, as they may have a faster network connection or run on a slower CPU. What Lighthouse provides is more like “on-demand” testing that’s immediately available.
That makes simulated data great for running tests quickly and under certain artificially sweetened conditions. However, it sacrifices accuracy by making assumptions about the connection speeds of site visitors and averages things in a way that divorces it from reality.
While simulated throttling is the default in Lighthouse, it also supports more realistic throttling methods. Running those tests will take more time but give you more accurate data. The easiest way to run Lighthouse with more realistic settings is using an online tool like the DebugBear website speed test or WebPageTest.
It Doesn’t Impact Core Web Vitals Scores
These Core Web Vitals everyone talks about are Google’s standard metrics for measuring performance. They go beyond simple “Your page loaded in X seconds” reports by looking at a slew of more pertinent details that are diagnostic of how the page loads, resources that might be blocking other resources, slow user interactions, and how much the page shifts around from loading resources and content. Zeunert has another great post here on TechRuum that discusses each metric in detail.
The main point here is that the simulated data Lighthouse produces may (and often does) differ from performance metrics from other tools. I spent a good deal of time explaining this in another article. The gist of it is that Lighthouse scores do not impact Core Web Vitals data. The reason for that is Core Web Vitals relies on data about real users pulled from the Chrome User Experience (CrUX) Report. While CrUX data may be limited by how recently the data was pulled, it is a more accurate reflection of user behaviors and browsing conditions than the simulated data in Lighthouse.
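That field data is queryable, too. Here’s a sketch against the public CrUX API — `YOUR_API_KEY` and the origin are placeholders:

```js
// Query real-user Core Web Vitals for an origin from the CrUX API.
const response = await fetch(
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin: 'https://example.com',
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    }),
  }
);

const { record } = await response.json();
// p75 values aggregated across real Chrome users over a trailing 28-day window
console.log(record.metrics.largest_contentful_paint.percentiles.p75);
```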
The ultimate point I’m getting at is that Lighthouse is simply ineffective at measuring Core Web Vitals performance metrics. Here’s how I explain it in that article:
“[Synthetic] data is fundamentally limited by the fact that it only looks at *a single experience in a pre-defined environment*. This environment often doesn’t even match the average real user on the website, who may have a faster network connection or a slower CPU.”
I emphasized the important part. In real life, users are likely to have more than one experience on a particular page. It’s not as though you navigate to a site, let it load, sit there, and then close the page; you’re more likely to do something on that page. And for a Core Web Vital metric that looks for slow paint in response to user input — namely, Interaction to Next Paint (INP) — there’s no way for Lighthouse to measure that at all!
It’s the same deal for a metric like Cumulative Layout Shift (CLS) that measures the “visual stability” of a page layout because layout shifts often happen lower on the page after a user has scrolled down. If Lighthouse relied on CrUX data (which it doesn’t), then it would be able to make assumptions based on real users who interact with the page and can experience CLS. Instead, Lighthouse waits patiently for the full page load and never interacts with parts of the page, thus having no way of knowing anything about CLS.
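For contrast, here’s roughly what measuring layout shifts in the field looks like with the Performance API. This is a simplified running total rather than CLS’s full session-window calculation, but it captures what a lab run never can — shifts that happen after load, while a real user scrolls:

```js
// Observe layout shifts as they happen in a real user's session.
let clsTotal = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts right after user input are excluded from CLS by definition.
    if (!entry.hadRecentInput) {
      clsTotal += entry.value;
      console.log('Layout shift:', entry.value, '→ running total:', clsTotal);
    }
  }
});

// `buffered: true` also reports shifts that happened before the observer ran.
observer.observe({ type: 'layout-shift', buffered: true });
```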
But It’s Still a “Good Start”
That’s what I want you to walk away with at the end of the day. Lighthouse is incredibly good at producing reports quickly, thanks to the simulated data it uses. In that sense, I’d say that Lighthouse is a handy “gut check” and maybe even a first step to identifying opportunities to optimize performance.
But a complete picture, it’s not. For that, what we’d want is a tool that leans on real user data. Tools that integrate CrUX data are pretty good there. But again, that data covers a rolling 28-day window — it’s updated daily, yet each snapshot aggregates roughly the previous month of traffic — so it may not reflect the most recent user behaviors and interactions, although it is possible to query historical records for larger sample sizes.
Even better is using a tool that monitors users in real-time.
I’ve written about using the Performance API in JavaScript to evaluate custom and Core Web Vitals metrics, so it’s possible to roll your own. But there are plenty of existing services out there that do this for you, complete with visualizations, historical records, and true real user monitoring (often abbreviated as RUM). What services? Well, DebugBear is a great place to start. I cited Matt Zeunert earlier, and DebugBear is his product.
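If you do roll your own, Google’s open-source `web-vitals` library wraps the Performance API for you. A minimal sketch — the `/analytics` endpoint is a placeholder for wherever you collect measurements:

```js
import { onCLS, onINP, onLCP } from 'web-vitals';

// Report each finalized metric back to a collection endpoint.
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,    // 'CLS', 'INP', or 'LCP'
    value: metric.value,
    id: metric.id,        // unique per page load, for deduplication
  });

  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```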
So, if what you want is a complete picture of your site’s performance, go ahead and start with Lighthouse. But don’t stop there because you’re only seeing part of the picture. You’ll want to augment your findings and diagnose performance with real-user monitoring for the most complete, accurate picture.