Another optimization in the same vein: make sure the first KB contains the <meta charset> element, to avoid false-starts in the HTML parser. In fact, the spec actually defines it as an _error_ for the charset to come after the first 1KB!<p>It's mentioned in passing in the Lighthouse docs for the charset audit[1], but here's a great breakdown by hsivonen[2].<p>Of course, putting it in the headers is even better. Point is, try not to have too much cruft before the charset.<p>[1] <a href="https://web.dev/charset/" rel="nofollow">https://web.dev/charset/</a><p>[2] <a href="https://github.com/GoogleChrome/lighthouse/issues/10023#issuecomment-575129051" rel="nofollow">https://github.com/GoogleChrome/lighthouse/issues/10023#issu...</a>
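If you control the server, declaring the charset in the Content-Type header is a one-liner. A minimal sketch with Python's stdlib server (port and body are just placeholders):<p><pre><code># Charset declared in the header: the HTML parser never has to scan
# the body looking for a <meta charset> at all.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = "<!doctype html><p>charset came from the header: é".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Handler).serve_forever()
</code></pre>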
That's only doable with a text-centric website. I'm currently finishing a photography section for my personal website, and the gallery pages are several hundred kBs in size, while a single photo page is almost 1MB (provided you load it on a 28" screen; the browser will load smaller variants on smaller screens). Most of that weight is the thumbnails (and the almost-full-size photo in the latter case). The only JS I have is 6 lines (before minifying) on single photo pages, which let you go to the next or previous photo with the keyboard arrows. I don't use any CSS frameworks, just a single hand-written CSS file. I don't have any tracking, analytics, social media features or anything of this sort.<p>So if even a personal site, built with no deadlines and no pressure from management to include analytics etc., can't do it because it wants to display a bunch of photos, then I don't think we can expect "web-scale" websites to achieve it.
I wonder if anyone has studied the impact of latency on user behavior while considering the impact of user expectations from their typical connection speed. Whenever I see an article about page speed optimization, the assumption is that a user will give up if a page takes too long to load, and that everyone gives up after X seconds. Usually X is about 7s, based on a Nielsen article from years and years ago.<p>The thing is though, a user who frequently uses satellite Internet or a 2G mobile connection will learn that pages take a while to download over that connection, and they will adjust their expectations accordingly. Satellite Internet users <i>aren't</i> giving up on pages every 7s. They're waiting, because they know the page will load slowly.<p>I suspect most users wait for a page if most pages are slow. So long as your website is no slower than average, you're probably not losing many visitors.<p>Obviously that's not to say you shouldn't make your website as fast as you can. You should. Everyone will appreciate the effort. But don't assume that knocking a few seconds off the TTI will actually impact your conversion metrics. It <i>probably</i> won't (but only a proper study can prove it either way).
The relevant QUIC draft recommends a similar window[0], so HTTP/3 looks like it will behave the same.<p>[0] <a href="https://datatracker.ietf.org/doc/id/draft-ietf-quic-recovery-26.html#section-b.1-2.2" rel="nofollow">https://datatracker.ietf.org/doc/id/draft-ietf-quic-recovery...</a>
Love how this person's blog itself consists of a single HTTP request per page load. No extra CSS, images, scripts, or anything! This blogger cares about web perf!
I'm getting 62 bytes over the network and 31.4 kilobytes uncompressed. This page has more content than most pages I visit that are megabytes in size. I wish there were an incentive to go back to smaller web pages in general.
This is mostly nonsense; you can easily check for yourself.<p>Load up the OP's page with the Chrome dev tools network tab open.<p>Connection start: 90ms (60ms of which is the SSL handshake)
Request/Response: 30ms request / 30ms response<p>So the whole post is yak shaving over not splitting the 30ms response portion of a request whose setup already takes 5x that (150+ms).<p>Sure, it's a bit faster, but your users will not notice the difference between a 14kb page and a 15kb page over HTTPS (which you hopefully have on).
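You can reproduce this breakdown outside dev tools too. A rough sketch (Python stdlib; the host is a placeholder) that times the TCP connect, the TLS handshake, and time-to-first-byte separately:<p><pre><code>import socket, ssl, time

host = "example.com"
t0 = time.perf_counter()
raw = socket.create_connection((host, 443))
t1 = time.perf_counter()          # TCP connect done
tls = ssl.create_default_context().wrap_socket(raw, server_hostname=host)
t2 = time.perf_counter()          # TLS handshake done
tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
tls.recv(1)                       # block until the first response byte
t3 = time.perf_counter()

print(f"TCP connect:   {(t1 - t0) * 1000:.0f} ms")
print(f"TLS handshake: {(t2 - t1) * 1000:.0f} ms")
print(f"first byte:    {(t3 - t2) * 1000:.0f} ms")
tls.close()
</code></pre>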
> Most web servers TCP slow start algorithm starts by sending 10 TCP packets.<p>Of course, big names that run CDNs have fancy custom stuff that puts data directly into mmapped regions for the network card to use, but generally it's the OS handling TCP connections, which means the web server, a user-space process, only interacts with the TCP connection through a streaming API. It can't issue individual TCP packets or ACK packets or anything. Raw socket access requires superuser privileges on Unix.<p>Outside of that, it's a great article. Didn't know about this particular trick :).
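To illustrate the point, here's what the user-space view looks like in Python: a byte stream with no visible segment boundaries, while hand-crafting packets needs a raw socket that the kernel reserves for privileged processes:<p><pre><code>import socket

# The stream API: the OS decides how these bytes map onto TCP segments.
s = socket.create_connection(("example.com", 80))
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
chunk = s.recv(4096)   # some bytes arrive; packet boundaries are invisible
s.close()

# Crafting individual TCP packets would require a raw socket instead:
try:
    socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
except PermissionError:
    print("raw sockets need root / CAP_NET_RAW")
</code></pre>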
A typical HTTPS RSA certificate is about 3.9kb. ECDSA certificates (with the ECDSA intermediate cross-signed by an RSA root) come in around 2.9kb. So this 14kb of HTML response should leave some room for the certificates too.
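If you want to check your own server, the stdlib can at least show the leaf certificate's size (the full chain the server sends during the handshake is bigger; the hostname is a placeholder):<p><pre><code>import socket, ssl

host = "example.com"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der = tls.getpeercert(binary_form=True)  # leaf cert only, DER-encoded
        print(f"leaf certificate: {len(der)} bytes")
</code></pre>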
> There is an idea that the 14kb rule no longer holds true when using HTTP/2. I've read all I can about this without boring myself to death — but I haven't seen any evidence that servers using HTTP/2 have stopped using TCP slow start beginning with 10 packets.<p>HTTP/3 formally replaces TCP with QUIC.[0] Google have been using QUIC in production for quite a while (since 2013!) and it’s enabled by default in every browser except Safari[1] so it’s understandable how there could be some confusion here.<p>[0] <a href="https://datatracker.ietf.org/doc/html/rfc9114" rel="nofollow">https://datatracker.ietf.org/doc/html/rfc9114</a><p>[1] <a href="https://caniuse.com/http3" rel="nofollow">https://caniuse.com/http3</a>
I like the idea mentioned in the article of increasing the number of packets sent in the slow start - as far as I know you could just crank that from the server side TCP stack to something much larger, right?
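On Linux, yes, though it's a per-route attribute rather than a sysctl. A sketch of the idea (needs root; it re-applies the default route with a larger initcwnd, so treat it as illustrative rather than production-ready):<p><pre><code>import subprocess

# Read the current default route, then re-apply it with initcwnd bumped.
route = subprocess.run(["ip", "route", "show", "default"],
                       capture_output=True, text=True).stdout.strip()
print("current:", route)
subprocess.run(["ip", "route", "change", *route.split(), "initcwnd", "20"],
               check=True)
</code></pre>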
Shameless self-promotion: the homepage of plaintextsports.com is 5.2kb today [1], an in-progress WNBA game (4th quarter) is 11.2kb [2], and an extra inning MLB game is 8.8kb [3]. I wasn't aware of this size threshold, and I'm not at this level of optimization, but I'm always pleased to find more evidence of my playful claim that it's the "fastest website in the history of the internet".<p>[1]: <a href="https://plaintextsports.com/all/2022-08-24/" rel="nofollow">https://plaintextsports.com/all/2022-08-24/</a><p>[2]: <a href="https://plaintextsports.com/wnba/2022-08-24/conn-dal" rel="nofollow">https://plaintextsports.com/wnba/2022-08-24/conn-dal</a><p>[3]: <a href="https://plaintextsports.com/mlb/2022-08-24/laa-tb" rel="nofollow">https://plaintextsports.com/mlb/2022-08-24/laa-tb</a>
This is a really great breakdown of the TCP slow start algorithm. I always try to keep sites as lean as possible, but this was an aspect of TCP I wasn't familiar with, and I'll definitely keep it in mind in the future.
Some sites break the TCP standard by sending the whole contents of the landing page without waiting for the first ACK, even if it's more than 10 packets.
Shameless plug:<p><a href="https://blog.cloudflare.com/when-the-window-is-not-fully-open-your-tcp-stack-is-doing-more-than-you-think/" rel="nofollow">https://blog.cloudflare.com/when-the-window-is-not-fully-ope...</a><p>I recently wrote a piece about exactly this mechanism (though looked at from the receiver side).<p>Basically, on Linux you can work around this initcwnd limit (if you have to, for whatever reason) by tuning buffer sizes, initcwnd obviously, and rcv_ssthresh.
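For the per-socket version of the receiver-side tuning, a client can ask for a bigger receive buffer before connecting. A sketch (the kernel typically doubles the requested value and still auto-tunes via net.ipv4.tcp_rmem):<p><pre><code>import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Must be set before connect() to influence the advertised window scaling.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MiB
s.connect(("example.com", 80))
print("effective rcvbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
</code></pre>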
Is there any reason why we can't set the TCP slow start initial window to 100 packets or higher?<p>I could easily see 95% of the internet being a 150KB page on first load.
"b" in kb means bits. While the uppercase letter "B" denotes bytes. There is almost an order of magnitude difference between these two units! Usually 1B = 8b.
Tangentially related: <a href="https://js13kgames.com" rel="nofollow">https://js13kgames.com</a> is currently going on -- the challenge is to build an HTML/CSS/JS game which compresses down to 13,312 bytes or less.
So cute to see this in the era of JavaScript frameworks.<p>I remember this being a thing 10 years ago, and Google keeping theirs at 14kb.<p>Today, if not server-side rendered, you'll need the React lib or equivalent to load your site, and boy, that's a little over 14kb.
Headline restates the sizes in a way that makes them somewhere between ambiguous and wrong. The article gets the unit correct (B for bytes) while the headline swapped in b for B, which is generally bits.
It's true but not realistic as soon as the website is more than a blog or a static content site. The goal of good web building is to deliver the most value to consumers and the business, removing what's not needed, and doing whatever you do add in a performance-oriented way. Bloat will always creep back in, so you must clean again and again.<p>I operate e-commerce websites, and we went through multiple iterations with two goals in mind: performance (speed / core vitals / SEO) and developer productivity (I'm basically the only tech guy, and I'm also the CEO managing 10 people). Our current e-commerce stack runs 99% on Phoenix LiveView. As our market is only in one country (the Philippines), we optimize for round-trip time by hosting as close as possible (no decent hosting company in the country, so we host in HK at Datapacket on a beefy dedicated box). The site loads in less than a second, and navigation is nearly immediate thanks to LiveView.<p>We removed most JS trackers and built our own server-side tracking pipeline in Elixir that sends unified data wherever it's needed (it took us like 2 days to build). Since that move, Google loves us and we are the top-ranking website for our niche in the country on all our key products.
One key thing also: our target market is wealthy, so they enjoy fast connections. That helps when deciding what to optimize for.
Performance is not absolute. It's relative to your product, your market and your location.
I read it. This is new to me, and I guess one would need to remove a lot of tags and tracking tools. I assume it only counts per server, though: what happens with the data loaded from CDNs?
Or just go absolute foolishly extreme and have your website under 1kB total ;)<p><a href="https://1kb.club" rel="nofollow">https://1kb.club</a>
So the hot takeaway here is that CSS must absolutely be embedded in the HTML head, so that the browser doesn't need two requests before it can start rendering the page. Also, if the page is using TLS (and it most likely is), this all falls apart somewhat, because the initial handshake will take at least one extra round trip and eat into the speed win.
Aside from the raw size, there are more optimizations that can be done while still having a modern-ish visual result<p>(Shameless self plug: <a href="https://anonyfox.com/blog/need-for-speed/" rel="nofollow">https://anonyfox.com/blog/need-for-speed/</a> )
This is very cool. In the age of bloated JS frameworks and bulky desktop sites loaded on mobile devices, it's refreshing to see someone putting in the effort to make pages fit within the initial TCP congestion window.<p>However, page size is only half the story.<p>Look at the screenshots below:<p>#1 - This page (9KB) - 110ms - <a href="https://i.imgur.com/qeT2Az0.jpg" rel="nofollow">https://i.imgur.com/qeT2Az0.jpg</a><p>#2 - Another page, 29KB in size - 42ms - <a href="https://i.imgur.com/tWsLGr1.jpg" rel="nofollow">https://i.imgur.com/tWsLGr1.jpg</a><p>Both on the same network (Internet) and the same computer.<p>The 1st (this article) is served by Netlify and AWS (static hosting).<p>The 2nd is an ecommerce store on Dukaan (an ecommerce platform for India) that I'm affiliated with.
This made me check my own site[0]; the page itself is tiny (3kb). It's the images that get me, and they're SVGs. Gotta be something wrong there too, an SVG shouldn't be 75kb.<p>edit: never mind, the SVG is 13kb, don't know what I was misreading there.<p>[0] - www.reciped.io
<a href="https://sumi.news" rel="nofollow">https://sumi.news</a> HTML is ~14KB when transferred with compression. The CSS is ~30KB. I could probably slash that in half if I optimized.
Honestly, "should" is a bit of a click-baity exaggeration. In this day and age where internet speeds are faster and more stable than ever, these kinds of tips should be at the very bottom of your optimisations checklist. I don't care if your website takes 3 seconds to load or 5, what I do care about is that once the website has loaded, my inputs respond as quickly as possible. Reddit for example is total garbage when it comes to responsiveness, clicking on a post literally freezes the page for 1+ seconds on a fairly capable PC.
Great piece. Took me back to figuring out TCP at 2400 bps back in the dial-up era. The bit on satellites made me wonder if there's room for storage servers/relays in space.
Unrelated, but I found this post very easy to read. Something about the colors and font choices worked well for my brain, which has recently been struggling to parse most long-form content.
I think the 14KB 'rule' is less relevant these days, but it's a good mnemonic for "put the most critical data at the start of the page". Even if the page has to be large, browsers stream it and can start processing before it is fully received.<p><a href="https://www.tunetheweb.com/blog/critical-resources-and-the-first-14kb/" rel="nofollow">https://www.tunetheweb.com/blog/critical-resources-and-the-f...</a>
Some kind of actual measurements/tests would be nice, like putting up a 14kb page and a 15kb+ page and measuring them to demonstrate that the apparent speed difference really exists.
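A crude version of that experiment is easy to script. A sketch (Python stdlib; the two URLs are placeholders for your own test pages, and each urlopen call opens a fresh connection, which is exactly what you want when measuring slow start):<p><pre><code>import statistics, time, urllib.request

def median_load_ms(url, n=20):
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        urllib.request.urlopen(url).read()   # fresh connection each time
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)

for url in ("https://example.com/14kb.html", "https://example.com/15kb.html"):
    print(url, f"{median_load_ms(url):.0f} ms median")
</code></pre>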
<p><pre><code> Once you lose the autoplaying videos, the popups, the cookies, the cookie consent banners, the social network buttons, the tracking scripts, javascript and css frameworks, and all the other junk nobody likes — you're probably there.
</code></pre>
Wouldn't videos and images (and I guess CSS/JS files as well?) be loaded separately, as part of other responses?
The section on satellite internet should probably be updated to clarify that LEO satellite networks like Starlink orbit at 1/100 the distance of GEO satellites, so the parts of the latency calculation involving uplink and downlink become much less important. (The rest of the numbers in the latency calculation still apply.)
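The back-of-envelope numbers make the difference obvious; propagation delay is just distance over the speed of light (altitudes below are approximate):<p><pre><code>C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

for name, altitude_km in (("GEO", 35_786), ("LEO / Starlink", 550)):
    leg_ms = altitude_km / C_KM_PER_S * 1000   # one hop, ground to satellite
    # request up+down plus response up+down = 4 legs through the satellite
    print(f"{name}: {leg_ms:.1f} ms per leg, ~{4 * leg_ms:.0f} ms satellite RTT")
</code></pre>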
Interesting find. On the other hand, it's not a one-size-fits-all metric. If the interaction with your website is mainly reading text, then sure, it's a valid take. Otherwise you should really just forget about it and focus on other best practices for providing a good early user experience.
Nice article! However, some numbers are a bit off:<p>The IPv4 overhead is normally 20 bytes but can reach 60 bytes with many options. For TCP, it's between 20 and 60 bytes as well.<p>I just ran a quick tcpdump on Linux, and curl's TCP connection uses 32-byte TCP headers (12 bytes of options).
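For what it's worth, those measured sizes are exactly where the 14kb budget comes from. Using the header sizes above:<p><pre><code>MTU = 1500         # typical Ethernet MTU, bytes
IP_HEADER = 20     # IPv4, no options (the common case)
TCP_HEADER = 32    # 20 bytes + 12 bytes of options, as seen in tcpdump
INITCWND = 10      # RFC 6928 initial congestion window, in packets

print(INITCWND * (MTU - IP_HEADER - TCP_HEADER))  # 14480 bytes, i.e. ~14kb
</code></pre>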
This is all about minimizing round trips; I think a saner solution is to have content served from many edge locations nearer the user, so the impact of a round trip is much smaller. Consumers and business users definitely want jazzy websites, and 14kb can't do much.
What does this mean, if anything, for API calls between services in a microservices context?<p>Should we worry about this specific size threshold when making calls between services on Kubernetes, or is the Kubernetes ecosystem smart enough to avoid this slow start problem?
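Slow start mostly bites on new connections, so the usual answer inside a cluster is connection reuse (keep-alive / pooling), which most HTTP clients and service meshes already do. A stdlib sketch (the service hostname is a placeholder); note that Linux re-enters slow start on idle connections unless net.ipv4.tcp_slow_start_after_idle is set to 0:<p><pre><code>import http.client

# One TCP connection, many requests: slow start is paid once, not per call.
conn = http.client.HTTPConnection("orders-service.internal", 8080)
for order_id in (1, 2, 3):
    conn.request("GET", f"/orders/{order_id}")
    resp = conn.getresponse()
    resp.read()   # drain the body so the connection can be reused
conn.close()
</code></pre>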
I find my own site [0] to be quite fast, since most of it is pretty aggressively cached and it's hand-written. Mobile is slightly broken, but still.<p>[0] <a href="https://kortlepel.com" rel="nofollow">https://kortlepel.com</a>
content-length: 31367 for this page, so it looks like the author couldn't do it either. How can we mere mortals?<p>A more realistic target: <a href="https://512kb.club/" rel="nofollow">https://512kb.club/</a>
Theathletic.com routinely takes 2-5 seconds to load a page on my mobile phone connected to a 600 Mbps down line (when it doesn't 500).<p>I still use it.<p>So, sure, this is awesome, but it might not be something worth optimizing for if you want to make money.
My site is not under 14KB (15.17KB of HTML, to be precise), but it loads pretty damn fast and I'm proud of it.<p><a href="https://tsk.bearblog.dev/" rel="nofollow">https://tsk.bearblog.dev/</a>
Neat, I was wondering why <a href="https://hackernews.onlineornot.com" rel="nofollow">https://hackernews.onlineornot.com</a> loads so fast (<14kb)
So 14kb for one file (index.html)? Meaning we should make sure the supporting assets (CSS, JS, images) are non-blocking, so at least people can see the content first?
I wonder... am I good if the static render is < 14kb, but I load React etc. and hydrate for progressive addition of interactivity?<p>Probably for a blog, if the readable content is in that static render, it would be a reasonable experience. A couple of seconds later the cool interactive parts come to life.<p>===<p>PageSpeed Insights scores:<p>Performance: 100%
First Contentful Paint: 0.9 s
Time to Interactive: 0.9 s
Speed Index: 0.9 s
Total Blocking Time: 0 ms
Largest Contentful Paint: 0.9 s
Cumulative Layout Shift: 0<p>Meh. Not bad.<p>Ok it's very good. Perfect. When you click around, it doesn't seem like a traditional client/server app, but like a SPA. Without being one!
Is this post a joke? Can anyone with networking expertise explain that packets can have different sizes and can be modified by different routers along the path from source to destination? The topic seems kind of absurd to me. Instead of reading the Ethernet standard and teaching materials about this OSI layer, everyone starts to debate the thing.
Of course, with HTTP/3 we'll get TCP out of the equation, as it uses UDP/QUIC underneath. If you pick the right CDN/hosting/web server, you can benefit from that right now. It's supported on essentially all relevant phones, browsers, etc. So why wait? If you care about performance, upgrading your infrastructure is probably the first thing you should do, not the last.<p>Mostly, the only effects of bigger download sizes are a higher chance of things going wrong on flaky networks (e.g. on mobile) and a slight delay while the application initializes. On the first visit. On the second visit you can have all the assets cached, and it matters much less. At that point the only thing slowing you down is what the application does.<p>It's 2022. An article arguing how to get the most out of obsolete versions of HTTP (<3) and TCP seems a bit redundant, as you shouldn't be using either if you can avoid it. Also, anything using fewer bytes than the Commodore 64 I had 40 years ago had in memory is interesting, but also a bit of an exercise in being overly frugal. You should reflect on the value of your time and effort. There's a notion of diminishing returns that are very expensive. Such is the nature of optimization. Sometimes a millisecond is priceless. But mostly it's irrelevant. People have 5G and 4G phones these days, capable of downloading many orders of magnitude more data per second than that.<p>Download size is probably the wrong thing to focus on for most applications. I get it, engineers are deeply passionate about optimization and minimalism. And I appreciate what can be done with very little CSS and HTML from a technical point of view. But it's just not relevant to me on my work projects. I'd rather spend time on adding value than obsessing over saving a few kilobytes here and there. People pay for one thing and barely even notice the other.<p>I ship a quite bloated SPA to customers. It comes in around 10MB and takes a couple of seconds to get going. It's fine; it's not a problem. I could double that size and nothing would happen. Nobody would complain. We sell it for lots of money because of what it does, not because of how large or small it is. The value of halving that size is close to $0 for us. The price of doubling it is also close to that. The price of sacrificing features, on the other hand, would be a massive loss of value. Our sales team would object. And yes, we use Google Cloud's load balancer and their CDN, which do all the right things. If it mattered, I might find slightly faster options. But it just doesn't.<p>And 10MB is actually not that bad on a decent network, and it's nothing compared to what our application does after it loads in any case. Which would be lots of JSON API traffic, downloading map tiles and images, etc. In short, if you are not on a good network, using our app is going to suck. The initial download size would be the least of your problems in that case. And if you are on a decent network, the app loads quickly and feels very responsive. Our app simply requires a decent network. And decent networks are a widely available commodity.
There's much more to a fast website than ttfb/fmp on a single page load with a cold cache. The fact this kilobyte fetishism on HN is still so rife in 2022 is ridiculous.<p>Edit: I wrote this after reading some comments. The article is interesting and not an attack on its author.