As someone with recent experience using a relatively slow Android phone, it can be absolutely brutal to load some web pages, even ones that only appear to be serving text and images (and a load of trackers/ads, presumably). The network is never the bottleneck here.<p>This problem is compounded by several factors. One is that older/slower phones cannot always use fully-featured browsers such as Firefox for mobile. The app takes too many resources on its own before even opening up a website. That means turning to a pared-down browser like Firefox Focus, which is OK except for not being able to have extensions. That means no uBlock Origin, which of course makes the web an even worse experience.<p>Another issue is that some sites will complain if you are not using a "standard" browser, and the site will become unusable for that reason alone.<p>In these situations, companies frequently try to force an app down your throat instead. And who knows how much space that will take up on a space-limited device or how poorly it will run.<p>Many companies/sites used to have simplified versions to account for slower devices/connections, but in my experience these are being phased out and are harder to find. I imagine it's much harder to serve ads and operate a full tracking network to/from every social media company without all the JavaScript bloat.
Dan's point about being aware of the different levels of inequality in the world is something I strongly agree with, but that should also include the middle-income countries, especially in Latin America and Southeast Asia. Think of a user with a data plan capped at single-digit GBs per month, and a RAM/CPU profile resembling a decade-old US flagship. That's good enough to use Discourse at all, but the experience will probably be on the unpleasantly slow side. I believe it's primarily this category of user that accounts for Dan's observation that incremental improvements in CPU/RAM/disk measurably improve engagement.<p>As for users with the lowest-end devices like the Itel P32, Dan's chart seems to prove that no amount of incremental optimization would benefit them. The only thing that might is a wholesale different client architecture that sacrifices features and polish to provide the slimmest code possible - that is, an alternate "lite/basic" mode. Unfortunately, this style of approach has rarely proved successful: the empathy problem returns in a different guise, as US-based developers often make the wrong decisions about which features/polish are essential to keep and which should be discarded for performance.
I like how most people blame bosses or scary big companies. No developers appear willing to admit that there is a large cohort of not-that-great web programmers who don't know much (and appear to not WANT to know much) about efficiency. They're just as much to blame for the sad state of web software as the big boss or corporate overlord who forced someone to make bad software.
I only recently moved from a 6-year old LG flagship phone to a shiny new Galaxy, and the performance difference is staggering. It shouldn't be - that was a very high-end phone at release, it's not <i>that</i> old, and it still works like new. I know it's not just my phone, because the Galaxy S9s I use to test code have the same struggles.<p>I would like to have seen Amazon in the tests. IME Amazon's website is among the absolute worst of the worst on mobile devices more than ~4 years old. Amazon was the only site I accessed regularly that bordered on unusable, even with relatively recent high-end mobile hardware.
Related: too much of technology today doesn't pay attention to, or even care about, the less technologically adept, either.<p>Smartphones, in my opinion, are a major example of this. I can't tell you the number of people I've met who barely know how to use their devices, or don't know at all. It's all black magic to them.<p>The largest problem is the over-dependence on "gesture navigation", which is invisible and thus non-existent to them. Sure, they might figure out the gesture bar on an iPhone, but they have no conception of the notification/control center.<p>It's not that these people are dumb, either; many of them could probably run circles around me in other fields. But when it comes to tech, it's not for a lack of trying, it's a lack of an intuitive interface.
This article is basically unreadable for me (48 y/o, on desktop). In the dev tools I added the following to the body to make it readable:<p><pre><code> font-size: 18px;
line-height: 1.5em;
max-width: 38rem;
</code></pre>
Now look how readable (and beautiful) it is. I read a lot of Dan Luu's posts, and each time I have to do this sort of thing to make it readable.<p>Seriously, techies, it's an extra <i>64 Bytes</i> to make your page more readable.
As a data point, YouTube is <i>unusable</i> on a Raspberry Pi 3. This happened within the last year; prior to that, you could "watch" videos at about 10-15 FPS, which is enough, for instance, to follow repair videos in a shop setting (ask me how I know). When the Raspberry Pi Model B - the first one released - came out, you could play 1080p video from storage, watch YouTube, play <i>games</i>.<p>I'm not sure what YouTube is doing (or everyone else, for that matter).<p>If we're serious about this climate crisis/change business, someone needs to cast a very hard look at Google and Meta for these sorts of shenanigans. Eating CPU cycles for profit (ad-tech would be my off-the-cuff guess for why YouTube sucks on these low-power devices) should be loudly derided in the media, and people should use more efficient services, even if the overall UX is worse.
That Discourse guy is a classic example of someone designing their product for the world they wished existed instead of the world we actually live in. Devices with Qualcomm SoCs exist in billions, and will keep existing and keep being manufactured and sold for the foreseeable future. No amount of whining will change that. Get over it and optimize for them. People who use these devices won't care about your whining, they'll just consider you an incompetent software developer because your software crashes.
I'm normally a fan of Dan Luu's posts, but I felt this one missed the mark. The LCP/CPU table is a good one, but from there the article turns into a bit of armchair psychology. From some random comments by Discourse's founder, readers are asked to build up an idea of what attitudes software engineers supposedly have. Even Knuth gets dragged into the mud based on comments he made about single- vs multi-core performance and about the Itanium (which is a long-standing point of academic contention).<p>This article just felt too soft, too couched in internet fights, to really stand up.
Every company stopped caring, especially the companies who were at the forefront of standards and good web design practices, like Google and Apple.<p>Google recently retired their HTML Gmail version; mind you, it still worked on a 2008 Android phone with 256MB of RAM and an old Firefox version, and it was simply fast... Of course the new JS-bloated version doesn't - it just kills the browser. That's an extreme example, yet low-budget phones have 2GB of RAM, and you simply cannot browse the web with these and expect reasonable performance anymore.<p>The mobile web sucks, and it's done on purpose: to push people to use "native" apps, which makes things easier when it comes to data collection and ad display for companies such as Apple and Google.
Using <a href="https://www.mcmaster.com/" rel="nofollow">https://www.mcmaster.com/</a> makes me wish I were a hardware engineer. Makes every other e-commerce site feel like garbage. If amazon were this fast, I’d be broke within days. Why haven’t other sites figured this out?
Where "users with slow devices" equals "anyone trying to keep hardware running more than a few years", it seems. It's enforced obsolescence.<p>I've said for a long time, devs should be forced to take a survey of their users' hardware, and then themselves use the slowest common system, say, the 5th-percentile, one day a week. If they don't care about efficiency now, maybe they will when it's sufficiently painful.
<i>> Surely, for example, multiple processors are no help to TeX</i><p>But TeX was designed to run on a single CPU core, so no surprise there. I wonder what TeX could have become if all Knuth had at the time was a multicore machine, with cores managing maybe 0.1 MIPS each (or even less). What would the world look like in a counterfactual where Intel and its buddies, starting in the 1970s, boosted not frequency and instructions per second per core, but the number of cores?<p>My take: we'd have switched to functional-style programming in the 1980s, with immutable data, and created tools to describe multistage pipelines, with each stage issuing tasks into a queue while cores concurrently pick tasks from it. TeX would probably have a simplified and extra-fast parser that could cut input into chunks and feed them to a full-blown, slow parser as the first stage of a pipeline, and then these pipelines would somehow converge into an output stream. TeX would probably prefer more lexical scoping, to reduce interaction between chunks, or maybe it would need some kind of barrier where pipelines all stop and wait for propagation of things like `\it` from its occurrence to the end.<p>This counterfactual world seems much more exciting to me than the real one, though maybe I wouldn't be excited if I lived there.
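A minimal sketch of that pipeline shape, in TypeScript for brevity - async workers stand in for CPU cores here, and the chunking rule is invented for illustration:<p><pre><code>// A toy multistage pipeline: a cheap splitter cuts input into chunks,
// several "cores" (concurrent async workers) run the expensive stage,
// and a final barrier reassembles results in input order.
type Chunk = { index: number; text: string };

async function expensiveParse(chunk: Chunk): Promise&lt;string&gt; {
  return chunk.text.toUpperCase(); // stand-in for the full-blown slow parser
}

async function pipeline(input: string, cores: number): Promise&lt;string&gt; {
  // Stage 1: simplified, extra-fast "parser" that only finds chunk boundaries.
  const queue: Chunk[] = input.split("\n\n").map((text, index) => ({ index, text }));
  const results: string[] = new Array(queue.length);

  // Stage 2: workers concurrently pick tasks from the shared queue.
  const worker = async () => {
    for (let task = queue.shift(); task; task = queue.shift()) {
      results[task.index] = await expensiveParse(task);
    }
  };
  await Promise.all(Array.from({ length: cores }, () => worker()));

  // Barrier: all chunks done; converge into one output stream.
  return results.join("\n\n");
}
</code></pre>
In the counterfactual, the expensive stage would be TeX's full parser, and the barrier is where state like `\it` propagates across chunk boundaries.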
There's also a huge tendency to design for fast, high quality connectivity. Try using any Google product on airplane wifi. Even just chat loads in minutes-to-never and frequently keels over dead, forcing an outrageously expensive reload. Docs? Good luck.<p>I wish software engineers cared to test in less than ideal conditions. Low speeds, intermittent connectivity, and packet loss are real.
I often use a ThinkPad X220 (which still works for a lot of my usage, and I'm not too concerned about it being stolen or damaged), and the JS web is terrible to use on it. This has mostly resulted in a preference for native software (non-Electron), which generally works perfectly fine, and about as well as on my "more modern" computer.
If one cares about the accessibility of a website to people with much slower devices, particularly those living in less developed parts of the world, I guess there are more considerations:<p>- using clearer English with simple sentence structures should make the content more accessible to people who don't read English with the fluency of an educated American<p>- reducing the number of requests required to load a page, as latency may be high (and latency to the nearest e.g. Cloudflare edge node may still be high)
It's a shame that NodeBB was not included in the list of forums tested.<p>We worked really hard to optimize our forum load times, and it handily beats the pants off most of what we've tested against.<p>But that's not much of a brag; the bar is quite low.<p>Dan goes on to lambast (rightfully so) Atwood for deriding Qualcomm and assuming slow phones don't exist.<p>Well, let's chat, and talk to someone whose team really does dogfood their products on slower devices...
These sites can and should be much better. Yes. Definitely.<p>At the same time, while a 10s load time is long and unpleasant, it doesn't seem catastrophic yet.<p>The more vital question to me is what the experience is like after the page is loaded. I'm sure a number of these sites have similarly terrible architecture and ads bogging down the experience. But I also expect that some of those which took a while to load are pretty snappy and fast after loading.<p>Native apps probably have plenty of truly user-insulting payloads they too chug through as they load, and no shortage of poor architectural decisions. On the web it's much, much easier to see all the bad; it's a view-source away. And there is seemingly less discipline on the web: more terrible and terribly inefficient cases of companies with too many people throwing whatever the heck into Google Tag Manager, or other similar offenses.<p>The latest server-side React stuff seems like it has a lot of help to offer, but there are still a lot of questions about rehydration of the page. I also lament seeing us shift away from the thick-client world; so much power has been given to users by the web 9.9 times out of 10 just being some RESTful services we can hack on. In all, I think there's a deficiency in broad architectural patterns for how the thick client should manage its data, and a real issue with ahead-of-time bundles versus just-in-time, load-behind code loading that we have failed to make much headway on in the past decade, and this lack is where the real wins are.
>Many pages actually remove the parts of the page you scrolled past as you scroll<p>There is a special place in hell for every web developer who does that.
The user isn't the only one affected by this.<p>The difference between a 2MB and a 150KB CSS file can be a lot of bandwidth: at a million page views a month, that ~1.85MB difference is roughly 1.85TB of extra transfer.<p>The difference between a bad and a good framework can be a lot of CPU power and RAM.<p>Companies pay for this. But I guess most have no clue that these costs can be reduced.<p>And some companies just don't care as long as money is coming in.
If you don't have a good phone and a high speed connection, you don't have any money to spend on either the sites products or the products of their advertisers.<p>When looked at from that angle, bloat is a feature.<p>It's not reasonable to have an expectation of quality when it comes to the web.
I feel like there's a good point made by the Discourse CEO about Qualcomm (and competitors): the product decision to segment their CPU line by drastic differences in single-threaded CPU performance is a highly anti-consumer one.<p>In contrast, AMD and Intel use the same (or same-ish) CPU architecture across their lineup in a given generation. The absolute cheapest laptop I could find used a Pentium 6805, which still has a Geekbench 6 score well over 1000, in a laptop that's cheaper than most budget smartphones.<p>Meanwhile, Qualcomm and MediaTek will sell you SoCs that don't even have half of that performance as a latest-gen "midrange" part.
It’s not just slow devices; it’s also any time you have any kind of weak connectivity.<p>Every OS now has tools to let you simulate shitty network performance, so it’s inexcusable that so many sites and even native apps fail so badly anytime you have anything less than a megabit connection or more than 50ms of latency :-/
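For instance, with Puppeteer (assuming a recent version's emulateNetworkConditions API and its built-in presets), a CI job can time a page load under emulated Slow 3G:<p><pre><code>import puppeteer, { PredefinedNetworkConditions } from "puppeteer";

// Time a page's load event under emulated Slow 3G conditions.
async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.emulateNetworkConditions(PredefinedNetworkConditions["Slow 3G"]);

  const start = Date.now();
  await page.goto("https://example.com", { waitUntil: "load" });
  console.log(`load event fired after ${Date.now() - start} ms`);
  await browser.close();
}

main();
</code></pre>
Running something like this against a latency/packet-loss profile on every release would catch most of these failures before users do.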
> Something I've observed over time, as programming has become more prestigious and more lucrative, is that people have tended to come from wealthier backgrounds and have less exposure to people with different income levels. An example we've discussed before, is at a well-known, prestigious, startup that has a very left-leaning employee base, where everyone got rich, on a discussion about the covid stimulus checks, in a slack discussion, a well meaning progressive employee said that it was pointless because people would just use their stimulus checks to buy stock. This person had, apparently, never talked to any middle-class (let alone poor) person about where their money goes or looked at the data on who owns equity. And that's just looking at American wealth. When we look at world-wide wealth, the general level of understanding is much lower. People seem to really underestimate the dynamic range in wealth and income across the world.<p>Perhaps the falling salaries for programming in the US could be a good thing in that regard. So many people get into this career because they want to make it big, which seems to drive down the quality of the talent pool.
Relating to the aside about opportunities in different countries: the comparison of potential programming career prospects between a poor American and a middle-class Pole feels reasonable for someone born around the same time as the OP (early ’80s, I guess), but I suspect it has since shifted in Poland's favour.<p>I think the relative disadvantages of a poor American compared to their wealthier peers have increased, as there's more competition (the degree is seen as more desirable by motivated wealthy parents), and the poor student likely won't even have a non-phone computer at home where all their wealthier peers probably will. Possibly they could work around the competitiveness of computer science by going via some less well-trodden path (e.g. mathematics or physics), except that university admission isn't by major. They may also be disadvantaged by later classism in hiring. Meanwhile, a middle-class Pole will have access to a computer and, provided they live sufficiently near one of the big cities, access to technical schools which can give them a head start on programming skills (and on competitive programming, which is useful for passing the current kind of programming interview questions). To get the kind of good outcome described in the OP, they then need to get hired somewhere like Google in Zurich (somewhat similar difficulty to the US, except the earlier stages were easier - in the sense of being more probable - for the hypothetical Pole) and progress from there (maybe impeded by initially not being at the headquarters, and by fewer other employment opportunities for career advancement by changing jobs). Class will be less of a problem, as the hypothetical middle-class Pole isn't so different in wealth from other middle-class Europeans, and you get much less strong class-selection than when (e.g.) Americans are hiring Americans.
When websites pack in too many high-res images, videos, and complex scripts, it’s like they’re trying to cram that overstuffed suitcase into a tiny space. Your device is struggling, man. It’s like it’s running a marathon with a backpack full of bricks.<p>So, what happens? Your device slows down to a crawl, pages take forever to load, and sometimes, it just gives up and crashes. It’s like being stuck in traffic when you’re already late for work. And let’s not even talk about the data usage. It’s like your phone’s eating through your data plan like it’s an all-you-can-eat buffet.<p>Now, if you’re on the latest and greatest tech, you might not notice much. But for folks with older devices or slower connections, it’s a real pain. It’s like everyone else is zooming by on a high-speed train while you’re chugging along on a steam engine.<p>So, what can we do? Well, we can start by being mindful of what we put on our websites. Keep it lean, mean, and clean, folks. Your users will thank you, and their devices will too. And hey, maybe we’ll all get where we’re going a little faster.
> Another example is Wordpress (old) vs. newer, trendier, blogging platforms like Medium and Substack. Wordpress (old) is 17.5x / 10x faster (LCP* / CPU) than Medium and 5x / 7x faster (LCP* / CPU) than Substack on our M3 Max ...<p>It's a persistent complaint among readers of SlateStarCodex (a blog which made a high-profile move to Substack from an old WordPress site). Substack attributes the sluggishness to the owner's special request to show all comments by default, but the old WordPress blog loaded all comments by default too and was fine even on older devices.<p><a href="https://www.reddit.com/r/slatestarcodex/comments/16xsr8w/substack_makes_my_computer_crawl_to_a_halt/" rel="nofollow">https://www.reddit.com/r/slatestarcodex/comments/16xsr8w/sub...</a><p><a href="https://www.reddit.com/r/slatestarcodex/comments/1b9p55g/anyone_else_finding_the_website_astralcodexten_a/" rel="nofollow">https://www.reddit.com/r/slatestarcodex/comments/1b9p55g/any...</a>
He mentions Substack, which is maybe the most egregious example of bloat I regularly encounter. I cannot open Scott Alexander's blog on my phone because it slows to a crawl.<p>But the Substack devs are <i>aware of this</i>. They know it's a problem (<a href="https://old.reddit.com/r/slatestarcodex/comments/16xsr8w/substack_makes_my_computer_crawl_to_a_halt/k34jjmn/" rel="nofollow">https://old.reddit.com/r/slatestarcodex/comments/16xsr8w/sub...</a>):<p>>I'm much more of a backend person, so take this with somewhat of a grain of salt, but I believe the issue is with how we're using react. It's not necessarily the amount of content, but something about the number of components we use does not play nicely with rendering content at ACX scale.<p>>As for why it takes up CPU after rendering, my understanding is that since each of the components is monitoring state changes to figure out how to re-render, it continues to eat up CPU.<p>They know - but they do nothing to fix it. As if rendering all those comments were simply an impossibility.
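The failure mode that dev describes has a textbook mitigation; whether it applies to Substack's actual codebase is speculation, but in miniature it looks like this (a TSX sketch, names illustrative):<p><pre><code>import React, { memo } from "react";

type CommentProps = { author: string; body: string };

// A leaf component that re-renders only when its own props change,
// instead of on every state change anywhere in the tree.
const Comment = memo(function Comment({ author, body }: CommentProps) {
  return (
    &lt;div&gt;
      &lt;b&gt;{author}&lt;/b&gt;
      &lt;div&gt;{body}&lt;/div&gt;
    &lt;/div&gt;
  );
});

// With memoized leaves, rendering thousands of ACX comments costs once at
// mount, not again on every unrelated store update.
export function Thread({ comments }: { comments: CommentProps[] }) {
  return &lt;&gt;{comments.map((c, i) => &lt;Comment key={i} {...c} /&gt;)}&lt;/&gt;;
}
</code></pre>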
Nobody cares about people with older devices. We've shifted to a mode where companies tell their customers what they have to do, and if they don't fit the mold they are dropped. It's more profitable that way - you scale only revenue and don't have to worry about accessibility or customer service or any edge cases. That's what big tech has gotten for us.
Some years ago I tested real-world web sites, and it turned out only about 30% of the JavaScript they load was actually invoked by the user's browser (even for sites optimized with Closure Compiler, which has some dead code elimination):<p><a href="https://github.com/avodonosov/pocl">https://github.com/avodonosov/pocl</a><p>The unused JavaScript code can be removed (and loaded on demand). Although I am not sure how valuable that would be for the world. It only saves network traffic, parsing time, and some browser memory for compiled code. And JS traffic on the Internet is negligible compared to, say, video and images. Will the user experience be significantly better if the browser is saved from the unnecessary JS parsing? I don't know of a good way to measure that.
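Loading on demand is cheap to express these days: bundlers split dynamic import() targets into separate files automatically. A sketch, with a hypothetical heavy module:<p><pre><code>// Ship rarely-used features outside the main bundle and fetch them on
// first use. "./heavy-editor" is a hypothetical module name.
const button = document.querySelector&lt;HTMLButtonElement&gt;("#edit")!;

button.addEventListener("click", async () => {
  const { openEditor } = await import("./heavy-editor");
  openEditor();
});
</code></pre>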
I'm glad people remember what the WW in WWW means. :)<p>It makes me very sad to see that Reddit's new design is so heavy it can't even be accessed by part of the world. It's like parts of the internet are closing their doors just so they can have more sliding effects that nobody wants.<p>Or maybe I'm just a weird one who prefers my browser to do a full load when I click a link.<p>By the way, there was a time everyone kept talking about "responsive" web design and, having used only low-end smartphones and tablets, I kept finding it weird that there was such focus on the design being responsive for mobile devices when those same devices were so extremely slow to respond to touch in the first place. Of course I know that's not what they meant, but it still felt weird.
That exchange with Jeff Atwood makes me somewhat angry. It's one thing to be annoyed at a hardware vendor (justified or not), quite another to take it out on the users of said hardware.<p>And while I appreciate that engineers can often afford to be blunter than people in other disciplines, I also think that a founder of two successful companies should have a bit more restraint when posting. Writing "fuck company X" (unless we're talking about a grossly unethical company, maybe) just seems like very immature behaviour to me.
The most interesting part of this is the comments about software shifting from a normal career to a prestige target for wealthy families, and that this demographic shift has massive consequences for technology design and services.
I think it would be useful to separate data and code here. What if you kept the code the same and downgraded the assets, so the overall package is smaller and easier to process and execute? Or maybe tweaked the renderer so the same code and data render quicker, trading slightly worse image quality for fewer CPU cycles? Basically I'm envisioning something like a game, where the same data+code can support multiple performance targets (except in this case with different CDN hookups to get the assets out, rather than everyone getting the bloated download).
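A sketch of the client side of that idea, using the Chromium-only Network Information API to pick an asset tier (the CDN layout is hypothetical):<p><pre><code>// Pick an asset tier the way a game picks a quality preset.
// navigator.connection is Chromium-only, so its absence falls back to "high".
type Tier = "low" | "high";

function assetTier(): Tier {
  const conn = (navigator as any).connection as
    { effectiveType?: string; saveData?: boolean } | undefined;
  if (!conn) return "high";
  if (conn.saveData) return "low";
  return conn.effectiveType === "2g" || conn.effectiveType === "slow-2g"
    ? "low"
    : "high";
}

const img = document.createElement("img");
img.src = `https://cdn.example.com/${assetTier()}/hero.jpg`; // hypothetical CDN paths
document.body.append(img);
</code></pre>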
It's interesting research, but at the end of the day, the websites are there to make money. Well, looking at the table, maybe the author's own isn't, but the rest are. And so I think the businesses don't optimize more because there isn't much more money to be made that way. Instead, the same effort is better spent elsewhere: marketing, having software that's quickly adaptable and easy to get interchangeable developers for. So they are optimized, just not for speed on low-end devices. Different goals.
Dan, I respect you and I feel your pain, but...<p>> Another common attitude on display above is the idea that users who aren't wealthy don't matter.<p>If you want to make money, then this is the correct attitude. You need to target the users who have the means to be on the bleeding edge. It may not be "fair" or "equitable" or whatever, but catering to the masses is a suicide mission unless you have a lot of cash/time to burn.<p>This post reminds me of the standard Stallman quip "if everyone used the GPL, then our problems would be solved"
As someone who makes bloated sites, I can only say that management doesn't give a fuck about bloat as long as features are checked off in due time. So please don't blame me.
I really wish he had compared an M3 Mac to a 6-year-old Intel chip, and not some random processor I've never seen or experienced and that I'm not even sure is available in the USA.
This is one of the reasons I've started building <a href="https://formpress.org" rel="nofollow">https://formpress.org</a>. Seeing the bloat in many form builder apps/services, I decided there is a need for a lightweight and open source alternative.<p>How do we achieve lightweightness? Currently our only sin is the inclusion of jQuery, which is just to have some cross-browser way of interacting with the DOM; then we hand-craft the required JS code based on the features used in the form builder. We then ship a lightweight runtime whose whole purpose is to load the necessary JS code pieces to have a functional form that is lightning fast. P.S.: we haven't gone the last mile in optimizations, but we definitely will. Even in its current state, it is the most lightweight form builder out there.<p>It is open source, MIT licensed, built on a modern stack (React, Node.js, Kubernetes, and Google Cloud), and we also host a freemium version.<p>I think there will be an ever-increasing need and market for lightweight products, as modern IT means a lot of products coming together, so each one should minimize its overhead.<p>Give our product a go and let us know what you think!
A problem that started for me in Feb 2024 is probably unrelated to the topic, but close enough that I'm posting in the hope someone has an idea of what is happening.<p>I am running a relatively new Lenovo Legion (~18 months old) with 64GB of RAM, running Windows 11. About 6 weeks ago I began getting the BSOD every time I streamed a live hockey game (I watch maybe 3 games a week from Oct to Jun via Comcast streaming or 'alternative' streams).<p>The crashes happened multiple times every game. After maybe 10 games of this, I began closing and reopening the browser during every game break. I've experienced zero crashes since doing that.<p>When the crashes started I was using Chrome, but I still experienced BSOD crashes when I switched and tested Firefox and Brave. Just very odd for this to start happening suddenly without any changes to my machine that I could pinpoint - no BIOS or Nvidia upgrade that I can recall.
I would still add that users running out of monthly mobile data volume are a big issue, likely bigger than slow phones. They can't load most websites at 64 kbit/s, because the sites are multiple megabytes large, often without good reason.<p>For example, when Musk took over Twitter, he actually fixed this issue for some time; I tested it. But now they have regressed again. The website will simply not show your timeline on a slow connection; it shows an error message instead. Why would a slow connection result in an error message?!<p>A simple solution, which e.g. Facebook (though apparently not Threads) and Google use, is to load the text content first and the (large) images later. But many websites instead don't load anything and just time out, probably because of overly large dependencies like heavy JavaScript libraries and the like.
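The text-first pattern is only a few lines of client code; native loading="lazy" on images covers most of it, and an IntersectionObserver version gives full control:<p><pre><code>// Ship the text immediately; fetch each image only as it nears the viewport.
// Markup parks the real URL in a data attribute: &lt;img data-src="..."&gt;
const io = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!;
    io.unobserve(img);
  }
}, { rootMargin: "200px" }); // start fetching a bit before it's visible

document.querySelectorAll&lt;HTMLImageElement&gt;("img[data-src]")
  .forEach((img) => io.observe(img));
</code></pre>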
I believe HTML, CSS and JS needs an overhaul. There’ll be a point where maintaining backwards compatibility will result in more harm than benefit. Make a new opt-in version of the three which are brutally simplified. Deprecate the old HTML/CSS/JS, to be EOL’d in 2100.
I was expecting this to go one level deeper and point out that bloated sites that are critical, like: banking, medical, government -- can lead to problems paying bills or getting timely information (especially in the case of medical situations that aren't quite emergencies but close to it).
Mind you, it's not only low-end or old phones that have slow CPUs.<p>Both the $999 Librem 5 and the $1999 Liberty Phone (latest models) have an i.MX8M, which means they have processing power similar to the $50 phones the article is talking about.<p>I tried to log into Pastebin today. The Cloudflare check took several minutes.
It also impacts users with fast devices.<p>When I load a bloated website on an iPhone 15 Pro Max over a Unifi AP 7 Pro access point connected to a 1.2Gb WAN, it’s still a slow bloated website.<p>If you build websites, do as much as you possibly can on the server.<p>As an industry, how can we get more people to understand this?
I think bloat could be prevented if it were noticed the moment it is introduced.<p>After an application has grown bloated, it's difficult to go back and un-bloat it.<p>Bloat is often introduced accidentally, without need, and goes unnoticed just because developers test on modern and powerful devices.<p>If a developer's regular test matrix included a device with the minimal hardware power that was known to run the product smoothly in the past, the dev could immediately notice newly introduced bloat and remove it.<p>Bloat regression testing, in other words.<p>I call this "ecological development".<p>We should all do this. No need to aim for devices that already have trouble running your app/website; but take a device that works today and test that you do not degrade with respect to it.
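The cheapest version of this doesn't even need a device: a CI gate that fails when the shipped bundle outgrows the budget the reference device was known to handle. A sketch - the path and budget are project-specific assumptions:<p><pre><code>// CI gate: fail the build if the bundle outgrows the size that the
// known-good reference device handled smoothly.
import { statSync } from "node:fs";

const BUDGET_BYTES = 150 * 1024; // last size that ran well on the test device
const size = statSync("dist/app.js").size;

if (size > BUDGET_BYTES) {
  console.error(`bundle is ${size} B, budget is ${BUDGET_BYTES} B: bloat regression`);
  process.exit(1);
}
console.log(`bundle OK: ${size} / ${BUDGET_BYTES} B`);
</code></pre>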
We really need to see some lighter-weight CSS tools combined with HTMX for a lot of general use.<p>That, or better crafting of web applications. It feels painful when I see payloads in excess of 4MB of compressed JS. It makes me really want to embrace WASM frameworks like Yew or Leptos. Things are crazy, and nobody seems to notice or care.<p>I run a relatively dated phone (Pixel 4a) with text set to max size. So many things are broken or unusable.
The web is a communication medium: having bad delivery is going to impact the efficacy of the message. I've worked as both a developer and a designer, and as a developer I've certainly had to push back against content-focused people requesting things they didn't realize were, frankly, bananas. Tech isn't their job, so it was my job to surface those problems before they arose. However, as a designer, I've also had to push back against developers that refused to acknowledge that technical purity is a means to an end, not an end in itself. Something looking the same in lynx and firefox isn't a useful goal in any situation I've encountered, and the only people that think a gopher resource has better UX than a modern webpage stare at code editors all day long.<p>No matter who it is, when people visualize how to solve a problem, they see how their area of concern contributes more clearly than others'. It's easy to visualize how our contributions will help solve a problem, and also hard to look past how doing something else will negatively impact your tasks. In reality, this medium requires a nuanced balance of considerations that depend on what you need to communicate, why, and to whom. Being useful on a team requires knowing when to interject with your professional expertise, but also know when it's more important to trust other professionals to do their jobs.
I travel a lot and experience a wide range of internet connection speeds and latencies. Hotel Wi-Fi can be horrible.<p>The web is clearly not designed for or tested on slow connections. UIs feel unresponsive and broken because no one thought that an action might take seconds to load.<p>Even back home in Germany, we have really unreliable mobile internet. I designed the interactive bits of All About Berlin for people on the U-Bahn, not just office workers on M3 Macbooks with fiber internet.
This is why I'm excited for WebAssembly. Writing an efficient, high-performance, multi-threaded GUI in Rust or Go would be awesome.<p>Just waiting on it to be practically usable.
"Highly gamed" === it is better if users with slow devices see a white screen for 30 seconds than an indication that something is happening, because... reasons?
>Just as an aside, something I've found funny for a long time is that I get quite a bit of hate mail about the styling on this page (and a similar volume of appreciation mail)<p>Yes! I've definitely <i>felt</i> like this while using his website. Of course, today I just fixed it with<p><pre><code> main {
   max-width: 720px;
   margin: 0 auto;
 }
</code></pre>
but tbh, I don't want to install an extension to customise the CSS on this one site...
I've always wondered why people remove parts of the page when they're scrolled out of view. Don't you think the browser would already optimize for that? And even if it's not stored in the DOM, it's still being held in JavaScript memory. It's frustrating when people try to reimplement optimizations that the browser already does better.
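The browser-native way to get the intended optimization without destroying DOM is CSS content-visibility. A sketch with an illustrative selector (in a real site this would be two lines of stylesheet rather than JS):<p><pre><code>// Keep offscreen sections in the DOM but let the engine skip their
// layout and paint work, instead of removing nodes by hand.
document.querySelectorAll&lt;HTMLElement&gt;(".comment").forEach((el) => {
  el.style.setProperty("content-visibility", "auto");
  // Reserve approximate space so the scrollbar doesn't jump around.
  el.style.setProperty("contain-intrinsic-size", "auto 120px");
});
</code></pre>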
YouTube is one of the slowest websites I have ever used.<p>It takes several seconds to load, even with moderate hardware and fast internet connections.
Some might be interested in pre-compressing their sites:<p><pre><code> https://gitlab.com/edneville/gzip-disk
</code></pre>
It doesn't stop client CPU burn, but it might help get data to the client device without on-the-fly compression a bit quicker, which in my experience is helpful from the server side too.
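The same precompression idea as a build step, using only Node's built-in zlib (paths illustrative), so the server can serve the .gz/.br siblings as-is:<p><pre><code>// Build step: write .gz and .br siblings next to each static asset so the
// server never compresses on the fly.
import { readFileSync, writeFileSync, readdirSync } from "node:fs";
import { gzipSync, brotliCompressSync, constants } from "node:zlib";

for (const name of readdirSync("public")) {
  if (!/\.(html|css|js|svg)$/.test(name)) continue;
  const data = readFileSync(`public/${name}`);
  writeFileSync(`public/${name}.gz`, gzipSync(data, { level: 9 }));
  writeFileSync(`public/${name}.br`, brotliCompressSync(data, {
    params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
  }));
}
</code></pre>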
Compare with one of my projects, [1]<p>It is a minimal, though modern-looking web chat.
The HTML, CSS and JS together is 5024 bytes.
The Rust backend source is 2801 bytes.
It does not pull in anything from anywhere.<p>[1] <a href="https://github.com/coolcoder613eb/minchat">https://github.com/coolcoder613eb/minchat</a>
I think web bloat started with pretty URLs: they provide nothing on top of traditional URLs, yet every request has to parse them unnecessarily. It's such a waste at a huge scale, especially for slow languages, plus the expensive regex processing.
Also related: Performance Inequality Gap 2024 <a href="https://infrequently.org/2024/01/performance-inequality-gap-2024/" rel="nofollow">https://infrequently.org/2024/01/performance-inequality-gap-...</a>
>There are two attitudes on display here which I see in a lot of software folks. First, that CPU speed is infinite and one shouldn't worry about CPU optimization. And second, that gigantic speedups from hardware should be expected and the only reason hardware engineers wouldn't achieve them is due to spectacular incompetence, so the slow software should be blamed on hardware engineers, not software engineers.<p>Not just the quote but the whole piece. I am glad Dan brought this out and that it's getting enough attention to be upvoted. (Although most comments are focusing on server rendering vs. client-side rendering; sigh.)
A lot of what the article said would have been downvoted to oblivion on HN. A multi-billion-dollar company CTO once commented on HN: why should I know anything about CPUs or foundries as long as they give performance improvements every few years?<p>Not only Jeff Atwood: there are plenty of other software developers, from programming language authors to backend and frontend framework authors with hundreds of thousands of followers, who continue to pump out views like Jeff's on social media, without an actual understanding of hardware or of the business of selling IP or physical goods.<p>Hardware engineers have to battle with physics, and yet get zero appreciation. Most of the appreciation you see <i>now</i> around tech circles is completely "new". For a long time no one had heard of TSMC. ASML wasn't even known until Intel lost its leading node. Zero understanding of CPU design or even basic development cycles - how it takes years just to get a new CPU out. And somehow people hate Qualcomm because they didn't innovate - a company that spends the highest percentage of revenue on R&D in the tech industry.
I've had this same experience with low-bandwidth situations while traveling: more than a few times I've cursed Apple for not making iOS engineers test with 3G or even 2G connections.
The entire "Premature optimization is the root of all evil" notion should be considered harmful. That one idea has completely destroyed the end user experience.
My own recent experience with this: I run a small SaaS web app, and about a year ago I decided to partner with an advertising company to help with growth.<p>Part of the plan was that they would remake our static homepage in WordPress, because it would be easier for them to manage and easier to add a blog, which was part of the new plan. I know WordPress is slow and, I would say, unnecessary, but I said yes because I did not want to micromanage them.<p>A year later we parted ways, and I was left with a WP site where the page load was abysmal (3-5 seconds) and carried about 10MB of BS. There was something called "Oxy" or "Oxy builder" which would add tons of styles, JS, and clutter to the markup, attempting a kind of SPA page-load style but failing horribly.<p>So now I've migrated the site to Jekyll, got rid of all the BS, and it's fast again. And it's also possible for me to really improve it again.<p>So for my businesses I'm not touching WP ever again, and that will be a huge bloat reduction in itself.
Browsers should only display documents, not apps.<p>That's what operating systems are for.<p>Just give native apps what made the web popular in the first place:<p>• Ability to instantly launch any app just by typing its "name"<p>• No need to download or install anything<p>• Ability to revisit any part of an app just by copy/pasting some text and sharing it with anyone.<p>All that is what appears and matters to users in the end.<p>--<p>But I suppose people who would disagree with this really want:<p>• The ability to snoop and track people across apps (via shit like third-party cookies etc)
I have a modern i7, 64GB of RAM, an RTX 3090, a 7GB/s NVMe SSD, and a 1Gbps internet connection. I can run pretty much any game maxed out in 4K at 100 fps, download 100GB files in a few minutes, do all sorts of tasks and workloads, and calculate the 20-billionth digit of pi in a microsecond. What I can't do, however, is use Twitter - or Windows, or any shopping website - without stutters and hitches.<p>Nice work, web developers!
> As sites have optimized for LCP, it's not uncommon to have a large paint (update) that's completely useless to the user, with the actual content of the page appearing well after the LCP<p>Aahh yes, the "I've loaded in my 38 different loading-shimmer boxes, now kindly wait another 30 seconds while each of them loads more".<p>Can we go back to "your page is loaded when <i>everything</i> finishes loading", and away from these unhelpful micro-metrics web devs use to lie to themselves and users about the performance of their sites?
The web is a pile of horse shit; why is this even news? The best part is how all the SJW Apple/Tesla/cloud/smart-tech yuppies don't care that the 99% of the world who can't afford to buy a new machine every year get an experience worse in every way than dial-up, as they force every formerly paper transaction onto the web. Just opening Firefox with a blank home page can take deciseconds or even minutes. Even opening a new blank tab is unresponsive and lags up the UI - on anything but mid-to-high-range <i>desktop</i> hardware.<p>How does this even have 200 upvotes? I can't count more than 1 or 2 websites that don't have infinite bloat for useless nonsense: the cookie popup, social media whatever, 10 meme frameworks and 100 JS libs injected into the page. HNers just read "bad stuff bad", respond "yup" like zombies, and continue doing bad stuff.
Nobody, nobody, nobody cares about old hardware, performance, users, etc.
If anyone cared, React wouldn't be a success. The last time I tried to use the React website on an old phone, it was slow as hell.<p>Let's Encrypt is dropping support for Android 7 this year. Android 7 will be blocked from 19% of the web: <a href="https://letsencrypt.org/2023/07/10/cross-sign-expiration" rel="nofollow">https://letsencrypt.org/2023/07/10/cross-sign-expiration</a>
The option is to install Firefox, which ships its own certificate store.<p>Users with old hardware are poor people. Nobody wants poor people around, not even using their website.<p>"Fuck the user" is what we heard from a PO when we tried to defend users; imagine if we tried to defend poor users.
What I've noticed more and more is me using alternative front-ends, or deliberately changing my user agent to some old browser on sites that still have a legacy version.
> While reviews note that you can run PUBG and other 3D games with decent performance on a Tecno Spark 8C, this doesn't mean that the device is fast enough to read posts on modern text-centric social media platforms or modern text-centric web forums. While 40fps is achievable in PUBG, we can easily see less than 0.4fps when scrolling on these sites.<p>Remember this the next time marketing asks the frontend team to implement that new tracking script and everyone assumes that users won't even be able to tell the difference.
Since 2000, I've observed the internet shift from free sharing of information to aggressive monetization of every piece of knowledge. So I suspect that is the culprit. If you use the mobile web on the latest iPhone, you'll find it's unusable without an ad blocker.
Missing text styling impacts all users. The text is hardly legible. You really don't need much styling (bloat) to get a good result, as demonstrated on <a href="http://bettermotherfuckingwebsite.com" rel="nofollow">http://bettermotherfuckingwebsite.com</a>
Re: WordPress - with which theme? Benchmarked on the default theme they give away free, like "2024" or whatever?<p>Obviously a good coder optimizes their own theme to get a 100% score on Lighthouse.