When a measure becomes a target, it ceases to be a good measure.<p>Which is to say: so they improved their Lighthouse score, grats, but did they improve or degrade their user experience? Where's the data on <i>that</i>?<p>This is of course Google's "good intentions" gone awry. They originally created Lighthouse to better inform developers, so that developers could build better user experiences.<p>But Google is now [ab]using Lighthouse as a <i>target</i> for Google Search page-position preference. Given that, developers in turn code toward Lighthouse (even into its bugs/quirks) because they <i>need</i> the higher ranking, even if it ultimately hurts the UX.
The LCP metric is particularly brittle. It's concerning that Google is linking it to search ranking, thereby ensuring everyone caters to it.<p>In our case, our hero image (formerly the LCP element Lighthouse picked up) is an animated image illustrating our product. It starts animating very quickly for most of our audience and finishes loading in the background as it continues animating.<p>However, the Lighthouse LCP timestamp is not the time at which the image starts animating; instead, it's the time the animated image <i>completely finishes</i> loading. So even though the animation starts almost right away and doesn't stutter, our LCP was several seconds or more.<p>We "solved" it by making the animation's bounding box slightly smaller and some text boxes on the page slightly larger, so the LCP was tied to a text box loading instead.
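To see which element and timestamp LCP actually picks, you can watch `largest-contentful-paint` entries in the browser. Below is a minimal pure-logic sketch of how the reported value is resolved; the entry shape and the sample candidates are simplified assumptions for illustration, not the full Performance API surface.

```typescript
// Sketch of how an LCP entry's timestamp is resolved, assuming a simplified
// shape for LargestContentfulPaint entries. In a browser you would collect
// real entries via:
//   new PerformanceObserver(list => { /* inspect list.getEntries() */ })
//     .observe({ type: "largest-contentful-paint", buffered: true });

interface LcpEntry {
  element: string;    // description of the candidate element
  size: number;       // painted area in the viewport (px^2)
  renderTime: number; // 0 when unavailable (e.g. cross-origin image without Timing-Allow-Origin)
  loadTime: number;
}

// The reported timestamp falls back to loadTime when renderTime is 0.
function lcpTimestamp(entry: LcpEntry): number {
  return entry.renderTime || entry.loadTime;
}

// The LCP element is the largest candidate seen so far.
function currentLcp(entries: LcpEntry[]): LcpEntry {
  return entries.reduce((a, b) => (b.size > a.size ? b : a));
}

// Hypothetical candidates: a headline paints early, then a large animated
// hero image finishes loading late and takes over as the LCP element.
const entries: LcpEntry[] = [
  { element: "h1", size: 20_000, renderTime: 300, loadTime: 0 },
  { element: "img.hero", size: 180_000, renderTime: 0, loadTime: 4200 },
];

const lcp = currentLcp(entries);
console.log(lcp.element, lcpTimestamp(lcp)); // img.hero 4200
```

This mirrors the situation described above: the hero image's timestamp is its load-completion time (4200 ms here), even if its first frame was on screen far earlier.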
This is a different thing, but it actually is possible for larger JPEGs to be faster (smaller file size) than smaller JPEGs.<p>Back when 4x "retina" displays first hit the market, someone observed that you could take a 4x-sized image, crank up the JPEG compression (from, say, 80% quality to 40%), and the resulting JPEG artifacts would visually disappear. The file size of the larger, more heavily compressed image is frequently smaller than that of the smaller, less-compressed one. And the render quality of "squishing" a 4x image down to 1x on a lower-resolution display likewise looks fine.<p>The images in the article in question are ironically broken, of course, but anyway, I've been serving images this way for years now.
<a href="https://alidark.com/responsive-retina-image-mobile/" rel="nofollow">https://alidark.com/responsive-retina-image-mobile/</a>
There seems to be a misconception that the Lighthouse score is linked to the page experience search rank update. Higher Lighthouse scores won't give you better SEO. Only better results from field data (albeit only Chrome field data) will impact your search results [1].<p>Lighthouse is only intended to be a guide (name checks out) for developers to identify potential opportunities to improve real-user performance. Core Web Vitals is how Google has decided to align lab and field data in a more unified way. Historically, this has been pretty difficult, particularly with interactivity measurements. For example, Total Blocking Time (TBT) is a lab proxy metric for First Input Delay (FID) — they don't measure the same thing. The team at Google has frequently communicated that the only true way to know is from measuring on real users [2].<p>While the metrics aren't perfect, they are taking in feedback to adjust how metrics are measured and weighted, such as with the windowed CLS update [3]. I for one have found the tools and browser support for measuring performance to have improved significantly, even in the last few years. Kudos to the Web Perf community, who I'm sure would appreciate any feedback.<p>[1]: <a href="https://support.google.com/webmasters/thread/104436075/core-web-vitals-page-experience-faqs-updated-march-2021" rel="nofollow">https://support.google.com/webmasters/thread/104436075/core-...</a><p>[2]: <a href="https://web.dev/vitals-tools/#crux" rel="nofollow">https://web.dev/vitals-tools/#crux</a><p>[3]: <a href="https://blog.webpagetest.org/posts/understanding-the-new-cumulative-layout-shift/" rel="nofollow">https://blog.webpagetest.org/posts/understanding-the-new-cum...</a>
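Since it's field data (CrUX) rather than the Lighthouse score that feeds the page experience signal, the number worth checking is the 75th-percentile value for each Core Web Vital. Here's a hedged sketch of reading the p75 LCP from a CrUX-API-style response; the response shape is an assumption based on the public `queryRecord` docs, and the sample payload is made up for illustration. The 2500 ms / 4000 ms thresholds are Google's published "good" / "needs improvement" boundaries for LCP.

```typescript
// Hedged sketch: pull the p75 LCP out of a CrUX API-style response and
// classify it against Google's published LCP thresholds.
// The shape below is assumed from the queryRecord docs; field names may drift.

interface CruxResponse {
  record: {
    key: { origin?: string; url?: string };
    metrics: {
      [metric: string]: { percentiles: { p75: number | string } };
    };
  };
}

function lcpAssessment(res: CruxResponse): "good" | "needs improvement" | "poor" {
  const p75 = Number(res.record.metrics["largest_contentful_paint"].percentiles.p75);
  if (p75 <= 2500) return "good";           // "good" at or under 2.5 s at p75
  if (p75 <= 4000) return "needs improvement";
  return "poor";
}

// Made-up sample payload for illustration only.
const sample: CruxResponse = {
  record: {
    key: { origin: "https://example.com" },
    metrics: {
      largest_contentful_paint: { percentiles: { p75: 3100 } },
    },
  },
};

console.log(lcpAssessment(sample)); // "needs improvement"
```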
KPIs/Goodhart's law in a nutshell... In theory Lighthouse is a nice resource for finding ways to optimize your site, but since it's tied to page ranking, we're encouraged/incentivized to eschew best practices and use Lighthouse practices instead.
I've posted here numerous times on how you can make your site worse and increase your score. It's frustrating that Google is pushing such a broken set of metrics for SEO. It's easy to trick but often in ways that make the user experience worse.
A good question is why Google uses only <i>one tile</i> to measure LCP for sites where a map takes up 50% of the viewport.<p>Is it just a coincidence, or did they design it that way to avoid ruining LCP metrics for large sites that use Google Maps extensively (Airbnb, rent.com, etc.)?
It's interesting to me that the final "here's the new improved layout" screenshot is 2000px wide. If you need such a large window just to avoid a cramped layout in the name of some metric, then I think you've failed.<p>The page itself also really fights me when I move around the map, due to how it manages state and the URL stack. No doubt my browser isn't the same as a normal user's, but it strikes me they're chasing the wrong goals here.
I wouldn't be surprised if this is because it reduced the Cumulative Layout Shift (CLS).<p>Having larger images defined reduces the likelihood you'll have layout shifts while the page is painting, which greatly impacts your overall score.<p><a href="https://web.dev/vitals/" rel="nofollow">https://web.dev/vitals/</a>
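The CLS arithmetic explains why reserving a larger box up front helps: per web.dev/cls, each layout shift scores impact fraction times distance fraction, and the shifts are summed (the newer windowed variant takes the worst session window, but the per-shift formula is the same). A small sketch, with hypothetical numbers:

```typescript
// Sketch of the CLS arithmetic from web.dev/cls: each layout shift scores
// impactFraction * distanceFraction; the (pre-windowing) CLS sums them.

interface Shift {
  impactFraction: number;   // share of the viewport touched by unstable elements
  distanceFraction: number; // largest move distance / viewport's larger dimension
}

const shiftScore = (s: Shift): number => s.impactFraction * s.distanceFraction;

const cls = (shifts: Shift[]): number =>
  shifts.reduce((sum, s) => sum + shiftScore(s), 0);

// Hypothetical: an image with no reserved dimensions pops in and pushes
// the content below it down by a quarter of the viewport.
const withoutDimensions: Shift[] = [{ impactFraction: 0.75, distanceFraction: 0.25 }];
// Reserving the (larger) box up front means nothing shifts at all.
const withDimensions: Shift[] = [];

console.log(cls(withoutDimensions)); // 0.1875 -- above the 0.1 "good" threshold
console.log(cls(withDimensions));    // 0
```

So an over-sized but stable placeholder can score better than a tighter layout that shifts once while painting, which is exactly the incentive being discussed here.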
Clicking one marker on the map triggers almost 100 network requests. Seems crazy: why push the images for every property when only the first is needed? The rest could be lazy-loaded when the user interacts with the slideshow.<p>I'm also not sure why sorting reloads the data, since it appears to sort the matched results that are already on the page.
LCP is garbage. If you have top navigation and left navigation (think Jira, Gmail, etc.), one of those chrome elements can end up as your LCP even though none of your actual content has loaded. Good for your LCP score, bad if you want to use it to actually measure performance.
Wait, this is improving your "score" but not actually improving the user's perceived load time, is it?<p>It's just gaming the score?<p>An odd thing to brag about, or am I missing something?