We started seeing reports about it in GSC in early July, when over a single day all our scores turned to crap with no explanation.

We are in the yellow, but the biggest culprits for blocking time are... Google Tag Manager, Google Ads (and Google Analytics where we still have it). So yeah, thanks Google, can't wait to lose out on SEO because of your own products. And also, thanks for releasing this without the proper analysis tooling. (https://web.dev/debug-performance-in-the-field/#inp: this is not tooling, this is an undue burden on developers. Making people bundle an extra "light" library (https://www.npmjs.com/package/web-vitals) with their clients, forcing them to build their own analytics servers to understand what web-vitals complains about... or is often wrong about.)
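For reference, here's roughly what that "build it yourself" pipeline looks like on the client, a minimal sketch using the web-vitals attribution build ("/analytics/vitals" is a hypothetical endpoint you have to host yourself):

```ts
// Minimal self-hosted INP reporting, assuming web-vitals v3+.
// "/analytics/vitals" is a hypothetical endpoint you must run yourself.
import { onINP } from 'web-vitals/attribution';

onINP((metric) => {
  const body = JSON.stringify({
    name: metric.name,     // "INP"
    value: metric.value,   // interaction latency in ms
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    // attribution identifies the element/event behind the slow interaction,
    // which is what you need to separate your code from tag-manager scripts
    attribution: metric.attribution,
  });
  // sendBeacon survives page unload, which is when INP is often finalized
  navigator.sendBeacon('/analytics/vitals', body);
});
```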
This may be controversial, but I think this has the potential to be a brilliant metric because it measures a part of web UX that's often neglected. It's time-consuming to make every single interaction display some sort of loading message, but it really helps make the site feel responsive.

As long as they avoid the pattern of adding a global loading spinner that covers the whole screen. That's just the worst possible loading screen. I suppose it would still pass this metric.

Also, I'm not sure I totally understand the metric. I think it's simply the time until the next frame is rendered after an interaction, which should easily be under 200ms unless you're:

1. doing some insane amount of client-side computation, or

2. talking over the network to a far-away service, or your API call is slow or massive,

and both of these are mitigated by having any loading indication (see the sketch below), so I don't understand how this metric will be difficult to fix.
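A minimal sketch of that "show feedback first, compute later" pattern; `runHeavyWork` and the element IDs are hypothetical:

```ts
// Hypothetical heavy function and element IDs, for illustration only.
declare function runHeavyWork(): void;

const button = document.getElementById('submit')!;
const status = document.getElementById('status')!;

button.addEventListener('click', () => {
  // 1. Give immediate visual feedback inside the event handler.
  status.textContent = 'Loading…';

  // 2. Yield so the browser can present the next frame (which is what
  //    INP measures), then run the expensive work afterwards.
  requestAnimationFrame(() => {
    setTimeout(() => {
      runHeavyWork();
      status.textContent = 'Done';
    }, 0);
  });
});
```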
I'm lacking lots of context here, obviously, but:
What good is a sophisticated metric when the pages they index are mostly blogspam, SO clones, etc.? I'm not interested in the "most responsive" SO clone. Seems out of touch with what Google search is struggling with these days.
The real metric: INP with ad blocking enabled.

Example: NYTimes.com on Mobile Safari with AdGuard: 18 seconds.

Google is being really disingenuous with its so-called metrics. A stroke of the pen could bring INP down to 200ms across the top 500 sites.
At first glance, 200ms is a pretty high INP latency for a "good" rating. As a comparison, I believe 200ms is about the round-trip time of an average HTTPS request. I'd expect most local UI interactions to be much faster than that.
I've generally had no gripes about this or Web Vitals in general, except for one thing: group population [0]. It's unfair to create a blast radius across a small or medium-sized business's website simply because not enough data exists to determine the true extent of the user-experience impact.

The most recent example I observed was a website with a heavy, interactive location-finder experience that lived on a single page. Fine, penalize that page; there's a chance users won't navigate there anyway. However, because a (very minimal, practically irrelevant amount of) similar content from that page was present on 18 other pages, those pages were grouped together and the impact was huge.

The reality of the web today makes this pretty dire in my mind. Many businesses run websites that are generally fast, but they have to engage third-party services because they don't have the means to build their own map, event scheduler, or form experience. The punishment doesn't fit the crime.

[0]: https://www.searchenginejournal.com/grouped-core-web-vitals-scoring/407899/#close
INP feels like a pretty problematic way to compare sites, because INP is going to be way lower on a site that doesn't do client-side rendering, even though client-side rendering makes interacting with a site faster!
Has anyone been able to demonstrate, to their own satisfaction, that improving Web Vitals scores actually improves search placement? We send web-vitals field data to our own analytics servers to track the P75, but Google changes its algorithm so much that we can't quite prove our various LCP/CLS/FID/INP changes are actually making any difference.
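For context, the server side of that P75 tracking is tiny; a minimal sketch, assuming the beaconed values for one page group have already been collected into an array:

```ts
// Compute the 75th percentile the way CrUX-style scoring does:
// sort the observed values and take the value at the 75% rank.
function p75(values: number[]): number {
  if (values.length === 0) throw new Error('no samples');
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil(sorted.length * 0.75) - 1;
  return sorted[index];
}

// Example: INP samples in ms collected from the beacon endpoint.
const inpSamples = [40, 56, 80, 120, 180, 240, 900];
console.log(p75(inpSamples)); // 240 → "needs improvement" (over 200ms)
```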
Meanwhile "Engineering Leader" at Chrome argues that 2.4s to First Contentful Paint is fast: <a href="https://twitter.com/addyosmani/status/1678117107597471745?s=20" rel="nofollow noreferrer">https://twitter.com/addyosmani/status/1678117107597471745?s=...</a><p>Google's one (of many) heads has no idea what another (of many) heads says or does.
To me it sounds like this will help the pattern of showing a skeleton screen while data loads: https://www.smashingmagazine.com/2020/04/skeleton-screens-react/
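A minimal sketch of that pattern in React, loosely in the spirit of the linked article; `useData` and the component names are hypothetical:

```tsx
import React from 'react';

// Hypothetical data hook; stands in for whatever fetching you do.
declare function useData(): { items: string[] | null };

// Gray placeholder boxes shown while data loads, so the layout appears
// (and stays responsive) immediately instead of waiting on the network.
function SkeletonRow() {
  return <div style={{ height: 16, margin: 8, background: '#ddd', borderRadius: 4 }} />;
}

export function ItemList() {
  const { items } = useData();
  if (items === null) {
    // Render the skeleton instantly; the real content swaps in later.
    return (
      <>
        <SkeletonRow />
        <SkeletonRow />
        <SkeletonRow />
      </>
    );
  }
  return <ul>{items.map((item) => <li key={item}>{item}</li>)}</ul>;
}
```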
Starts strong:

> Chrome usage data shows that 90% of a user's time on a page is spent after it loads

Clearly impressive, breakthrough research going on at Google.