To borrow a reference from Alex Crompton, building software is both "math hard" and "bodybuilding hard". Coding itself is then recursively math hard and bodybuilding hard.<p>Math hard is when something is really hard mentally to achieve/produce/figure out, but once it works it's not a problem anymore and you can explain it to kids (like the Pythagorean theorem). Bodybuilding hard is when there is a clear and simple process to achieve a thing, but it's really hard to stick to it, because not doing so is way easier.<p>In coding, for example, it's boring to refactor and optimize code rather than dumping everything in one place and writing shitty functions.<p>In actual software development/management (one level "above"), it seems it's really, really hard for the team/company/PM/CEO to fight bloat. It's hard not to go for building new stuff and adding features. If you're not working to "add new features", there is nothing to talk about, nothing to waste meetings on, and no reason to add administrative and marketing staff early on. Mainly because there is no definable primary feature, no actual purpose for the software, no really good coders, and in 99% of cases no actual product and/or underlying business.
There is an easy explanation for this: developers optimize their code when the product owner tells them to spend time on it. The product owner will tell them to optimize when the product owner notices that the software is getting too slow.
When new hardware is released, slowdowns that would be visible on older hardware won't be easily noticeable, so programmers will work on other features rather than optimizing code. Only once the code becomes too slow to run on current hardware do they optimize, and only until it becomes usable again.
Part of the problem is that software engineering time is more expensive than just buying better hardware or letting the end user deal with the slow software.<p>So businesses just buy better hardware rather than spending time optimizing, and end users usually want the cheapest product.<p>There are only a few organizations, like Sublime HQ, who focus on fast software. But they are producing niche products, catering to other software developers who appreciate the fast nature of their software and have the money to spend on it. I personally bought Sublime Merge, and my IT friends who make a lot of money are perplexed by my decision to spend 100 on an amazing development tool.<p>I also sometimes dream with my programming buddy about creating a company that builds fast software like Sublime HQ, but it is really hard to sell this kind of software.
<i>> We carefully monitor startup performance using an automated test that runs for almost every change to the code. This test was created very early in the project, when Google Chrome did almost nothing, and we have always followed a very simple rule:<p>This test can never get any slower.</i><p>That's actually very similar to how Safari was built.<p><i>After Steve showed the Safari icon, he clicked to the next slide. It had a single word: Why? Steve felt the need to say why Apple had made its own browser, and his explanation led with speed. Some may have thought that touting Safari performance was just marketing, a retrospective cherry-picking of one browser attribute that just happened to turn out well. I knew better. I had been part of the team that had received the speed mandate months earlier, and I had participated in the actions he now described which ensured the speed of our browser.</i> [1]<p>[1] Kocienda, Ken. Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs
> If a feature is going to cost start-up time, I would rather not have it. Is it that we don't care about start-up time? Or is it that we don't have the choice?<p>We still care about start-up time, but there are diminishing returns to this sort of optimization. Moreover, the developer teams have to balance these factors with a variety of other product features and with optimizations for developer productivity.<p>Gains in hardware performance might be wasted by slow software, but "fast software" can certainly reap the benefits. By "fast software", I mean computation-heavy workloads like ML training, image processing, etc. The article touches on this.<p>> But doing the same thing should never become slower. Starting up an application should never take longer than it used to.<p>I would not consider starting up Xcode in 2008 and starting up Xcode in 2020 to be "doing the same thing". Through many years of updates, this application has become very different. I personally don't use Xcode, but I would hope that Xcode 2020 has more features than Xcode 2008.<p>Now, whether Xcode 2020 actually NEEDS all those new features is a different question, especially if they come at a cost to start-up time. As individual users, it may feel like we have no choice in these decisions.<p>However, the optimist in me wants to think that we DO have a choice. We can still "vote with our behavior". If Xcode is too slow, users will stop using it and prefer alternatives that provide the performance that we want. However, if users continue to use Xcode in spite of slow start-up time, then perhaps there's something more important that's keeping users around.
"Software is getting slower more rapidly than hardware is becoming faster"<p><a href="https://en.wikipedia.org/wiki/Wirth%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Wirth%27s_law</a>
Personally, I couldn't give a shit about startup time. I start most of my apps when I boot my machine and rarely if ever close any of them or turn my machine off.<p>I care about how fast said applications do their job, that they remain responsive, and that they don't suffer from instability.<p>Maybe I am just an old-school desktop user. My machine is a 9900K + 32GB RAM, which is definitely fast but not absurd. With this amount of RAM I never feel the need to close anything, other than to tidy up my list of open tabs so I can see the icons without scrolling again.
I’m not sure if this qualifies as an “unpopular opinion”, but I get a predictable amount of flak whenever I bring it up: I think the assumption that a company should outfit its developers with “the best” machines is problematic. If my company could give me a brand new 13” MacBook Pro from a couple years ago, I’d take it over the 16” 2020 model I have. To many, the hardware a company gets its engineers is some sort of inviolable measure of how much they love you, and a company that doesn’t buy you the beefy model of the latest laptop is clearly a horrible place to work. But personally I tend to use moderately old hardware and then upgrade every once in a while to the latest model, and every time I’m always interested to see how some software can go from being utterly unusable to “reasonable”. I read articles about how Discord made their app’s scrolling 60 FPS and I go “yeah, right” because my iPhone SE gets nowhere near that…but on my iPhone 12 mini it kind of actually does. Trying to run any sort of Swift Playgrounds or previewing on my Early 2015 MacBook Pro is an exercise in frustration…but throw two extra cores at it and give the process a couple more gigabytes of RAM and it’s usable.<p>It’s clear that all of these things never get tested beyond the high-powered machine that the developer used. And I don’t see the trend of developers being completely out of touch with what devices their users are using going away anytime soon. But maybe we should start considering “experience correction” tools as part of a regular testing methodology. A friend of mine once lamented that the web would be a much better place if opening the Chrome developer tools automatically enabled the network throttler, and I think I’d have to agree with him.
The author says that he switched to Chrome and never looked back because, when it came out, it booted up in under a second.<p>> Firefox developers famously attempted to fix the five second startup time in 2010 via bug #627591[6]. They fixed many issues but not enough to warrant a second try.<p>It would be a great point if Chrome still started up in under one second. As a single data point, on my machine there isn't much difference between them anymore. Plus, I typically (re)launch the browser less than once per day, so the startup time difference isn't a deciding issue for me.
This seems like a case of the Jevons Paradox, where the "resource" is CPU cycles: <a href="https://en.wikipedia.org/wiki/Jevons_paradox" rel="nofollow">https://en.wikipedia.org/wiki/Jevons_paradox</a>
I'm under the impression that you can run pretty much anything in WebAssembly, which is like a whole tiramisu of abstraction above the hardware. Not sure how many more cycles that adds, but I bet it's quite a few.<p>Is it possible that we're inventing all this encapsulation because we think we can escape an ugly truth at a higher layer? Is it possible that we're wrong about that?
In many circumstances, it’s better to optimize for developer time over compute time. I’m glad that I can choose from zillions of (slower than necessary) web/electron apps instead of a smaller number of “optimized” apps written by “real engineers”.
I don't get the scorn against the 'faster than 98% of PC laptops' line. The linked article doesn't further the point either. Pretty easy to guess they are comparing to 98% of laptops <i>sold</i> - how many units of Alienware or RTX 2060 laptops are actually sold?
New shiny libraries, frameworks, and platforms draw in developers. Those new things are typically not well polished or optimized. By the time they get optimized, developers move on to newer and shinier things.<p>Java, PHP, .NET, and Javascript were all slow until they got fast (although with Javascript, things got really complicated and slow again).<p>I say this as a developer that uses all those now-fast things and avoids the shiny and new for these reasons (and more, including lack of tooling).
> But doing the same thing should never become slower<p>But none of the described software <i>is</i> doing the same thing it was before. This author is saying "well, you know, it's still a photo editor, same thing" and "well, you know, it's still a web browser, same thing". But those are <i>gross</i> oversimplifications of how a major piece of software evolves over the course of a <i>decade</i>.
Why am I not surprised that several of the top comments here immediately turn around and blame their bosses? As though the problem doesn't also exist in FOSS software!<p>Yeah, let's just say it is all management's fault while we develop with an eye towards leaving for greener pastures in 2 years. Ooh! Is that a new shiny framework?<p>Disgusting.
> Chrome also had these. But when I saw it boot nearly instantly instead of the five seconds it took to Firefox, I immediately switched and never looked back.<p>It really was incredible how fast Chrome seemed when it first came out; and not just in terms of startup speed; rendering speed was superb compared to the competition. Now Firefox seems much closer to Chrome in terms of rendering speed, and for certain web-pages (some company web-pages with huge tables at least) Firefox renders them noticeably faster.
About 5 years ago, I had this Windows 95 PC laying around and started it up to see if it still worked. To my surprise, that old hardware felt very fast for many tasks.<p>That was the moment I realized that we are doing something wrong today, and that performance should always be evaluated.
> Is it that we don't care about start-up time?<p>Not particularly. If it takes 13s to load Photoshop once a day, then I can access all these great features, that's alright with me.
Some arguments below blaming software bloat on nefarious action by big tech and platform owners. Maybe, but there's a simpler explanation: your computer's speed is a public good, shared by the hosted applications.<p>When you run multiple applications, if one of them is bloated and slow, it takes up processor time and memory. This is then not available to the other applications. So your whole phone/laptop slows down. You don't blame the app. You blame the phone/laptop, and think about buying a new one. As a result, there is little incentive for app developers to create lean, speedy applications. This is a standard public goods problem.<p>Apple may perhaps be deliberately slowing their computers down with software upgrades, but they probably don't need to. They can just wait for app developers to do it for them.<p>This argument also challenges the article's optimism that slow apps create a market opening for faster apps. In particular, while faster startup is something users notice (and blame on the app), memory bloat is not. So developers are likely to (over-)optimise for startup speed, at the cost of being memory-hungry, which in time slows the whole computer down.
I think hardware architects might hate hearing this, but their work enables lower-skill engineers to deliver more functionality in less time. This wasn't quite mentioned in the article. Having faster hardware means you can run more automated security checks and more functional checks in the same time it would take to run fewer on less powerful hardware. The performance of your program can also survive some useless checks you've thrown in, or some slow algorithm like a bubble sort. This slows down the end program but makes it much easier to write in the first place. We want to imagine that the extra processing power we're building hardware for will be used to do awesome things like machine learning or PQC. But in reality it just lets some lazy idiot like me finish their project faster, which isn't a bad thing! Computer technology is expanding like crazy and it's almost impossible to keep up with it. Easing the transition from idea to code is worth wasting a bunch of cycles imo.
If companies like Apple didn't try to make things as incompatible as possible, maybe software engineers wouldn't have to keep adding new layers of abstraction and things would run a bit faster. There's basically a never ending war between platform developers trying to create lock-in and app developers trying to escape with new cross-platform technologies.
> It is normal that doing more takes more resources. Modern raytracers require more processing power because they generate better images. Likewise, compilers of 2020 have awesome static code analyzers which they did not have in 2009.<p>> But doing the same thing should never become slower. Starting up an application should never take longer than it used to. If a feature is going to cost start-up time, I would rather not have it.<p>No feature, no matter how great, could ever justify a slowdown to start-up time? That seems like an absurd position to take. Also, if you read the link to the “law” about software taking up more compute resources over time, you’ll find that the origin is Intel’s Andy Grove complaining that Microsoft wasn’t taking advantage of newer, more powerful chips.<p>Also the title is just kind of rude and dismissive to software engineers - it’s not true that no one at all cares about performance.
With the modern cloud computing paradigm, it's a lot easier to produce code and let it scale horizontally than to write optimised code, thus speeding up the delivery cycle.
“For every cycle a hardware engineer saves, a software engineer will add two instructions”<p>To save dev time, people are no longer building native. And that bloat keeps adding up, masquerading as abstraction
The SuperHuman webmail client claims to have a 100ms rule. <a href="https://www.acquired.fm/episodes/superhuman" rel="nofollow">https://www.acquired.fm/episodes/superhuman</a><p>It'd be nice to work for an org like that.<p>My entire career, any performance tweaking has had to be skunkworks, and has never been rewarded.<p>At my very first tech dev job, my coworker and I sped up a novel workflow, as in brand-new technology (think OCR for blueprints), by about 4x. That made our new service practical. We were so thrilled. Boss man was unimpressed: "You should have thought of that before." I'm still angry about it. Clearly.<p>Ok. Wait. That's not entirely correct. A recent team did claim to have targets for P90 and P99 latency. So we measured. But it wasn't systematic, nor global. None of the other web services (teams) bothered, so our effort didn't actually matter. And even then my team wasn't willing to sacrifice our sacred cows to consistently attain the P90/P99 targets.
I remember spending two days hunting a bug in my Turbo Pascal code when I upgraded from a 286 machine (12 MHz) to a Pentium 66.<p>It was a while loop with one extra zero at the end.<p>The loop with the bug ran in two seconds on the 286. Easy to notice.<p>On the Pentium it ran in the blink of an eye, so there was nothing to notice.<p>This way of benchmarking performance, with human wall-clock time, makes performance problems add up: they add up faster, the faster your development machine is.<p>I guess a better benchmark would be the number of cycles run; but if the software runs fast enough, it's hard to justify spending more time on it unless there's a business need to make it faster (like being billed by the millisecond, by instruction run as in a blockchain, or when it's part of the core layer of your product and performance matters to capture market share).
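The effect above is easy to reproduce in any language. Here's a toy sketch in Python (the loop counts are made up): ten times the work that still finishes "in the blink of an eye" on a fast machine gives wall-clock eyeballing nothing to notice.

```python
import time

def busy_loop(n):
    # Stand-in for the Turbo Pascal while loop; returns the final counter.
    i = 0
    while i < n:
        i += 1
    return i

# One extra zero means ten times the work, but on a modern machine both
# versions finish almost instantly, so the bug hides in wall-clock terms.
for n in (1_000_000, 10_000_000):
    start = time.perf_counter()
    busy_loop(n)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"n={n:>10,}: {elapsed_ms:.1f} ms")
```

On the 286, the same 10x difference was the gap between instant and a two-second stall.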
It is interesting that the examples given for startup times are browsers, when that is one of the few applications where I don't care that much about startup time, because I have the browser always running anyway.
I write native code, in Swift; using the built-in APIs. I also will require iOS 13 or greater in new apps (probably 14, for the ones I think may take many months before release).<p>I try to use fairly modern techniques and patterns in my development (but I am not yet satisfied that SwiftUI is up to the task for my more complex work).<p>It is my hope that this means that my apps will be as performant as possible.<p>I may get an M1 Mini, as a test device, sooner or later. There’s no hurry.<p>I am still planning on waiting for the M2 processors before upgrading most of my systems. By then, I think that most apps will have sorted themselves out.
I don't see how this is tied only to the new M1 CPU.<p>All systems and OSes suffer the same fate eventually. That's why you don't use your machine from the early 2000s with modern apps.<p>To me, his statement seems to show some envy.
I'm not sure what the author tries to say, maybe that's the entire point. The article does lay out some observations that I agree with and are interesting.<p>I take away from it, that there are opportunities in the fact that big-bloat will continue, and that the opportunities are in building fast and light software that performs exceptionally on what is considered low-range hardware of today, and incredibly on high-end hardware.<p>What is your takeaway?
> But when I saw it boot nearly instantly instead of the five seconds it took to Firefox, I immediately switched and never looked back.
Chrome didn't beat Firefox just because it was faster. It got popular because it was promoted on Google's home page and in Google products that already had millions of users.
Maybe Apple could use their walled garden to tax developers based on the amount of resources their app uses. You pay $100 to put your app in the store; $1,000 to use more than 10% of CPU performance; $10,000 to have an app bundle over 10MB; and so on -- charging ridiculous amounts for access to high-performance, battery-draining features. For dealing with app developers with deep pockets like Google, you could charge the developer something egregious like $10 per install. At some point the screws are tight enough that you just build your app to behave better.<p>It would be absolutely dastardly -- but it could actually have the effect of forcing app developers to care about performance :) They won't care unless they have to. Web developers have to care about performance because of SEO (and click metrics...) but it only goes so far because it's not forced.<p>I disagree completely with the walled garden model of iOS (and the direction of macOS) but, if it nonetheless exists, I'd like to see them do something hardcore like this. The performance and battery gains would be out of this world.
People want more behavior and more data. More data needs better algorithms for processing, and it's very easy to fall off a performance cliff with a bad algorithm that doesn't scale well. Add multiple layers and it's not always obvious what the total complexity is.
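A toy illustration of that cliff in Python: the same deduplication task with a list lookup versus a set lookup. The helper names and input size are mine, just for illustration; the point is that at small n both look fine, and only at scale does the quadratic one fall off the cliff.

```python
import time

def dedupe_list(items):
    # O(n^2): each membership test scans the whole `seen` list.
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedupe_set(items):
    # O(n): hash-set membership test is (amortized) constant time.
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Both produce the same result; only the scaling behavior differs.
data = list(range(10_000))
for fn in (dedupe_list, dedupe_set):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```

Stack a few layers of libraries on top of each other and a hidden `dedupe_list` somewhere in the middle is exactly the kind of thing nobody notices until the data grows.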
Browsers seem to be one of the few apps where they compete on speed. I remember Firefox was peeled out from the Mozilla suite, and I liked it because it was much faster and lighter.<p>Similar to the quote about Chrome in the article, speed came up quite a bit when Safari was developed years before Chrome.<p>> The number one goal for developing Safari was to create the fastest web browser on Mac OS X. When we were evaluating technologies over a year ago, KHTML and KJS stood out. [1]<p>> Don Melton: That's just perfect for me. In another life, I was a QA engineer. I run the tests, and if I detected we were slower, I closed the tree. That was the beginning of my policy, that policy that I pulled out of my hiney. Boy, talk about making yourself popular on your team doing that! [2] (on his no regressions policy)<p>[1] <a href="https://marc.info/?l=kfm-devel&m=104197092318639&w=2" rel="nofollow">https://marc.info/?l=kfm-devel&m=104197092318639&w=2</a><p>[2] <a href="https://www.imore.com/debug-111-don-melton-blink-servo-and-more" rel="nofollow">https://www.imore.com/debug-111-don-melton-blink-servo-and-m...</a>
The test that can never get slower: I'm curious how they do it. In my experience, tracking performance regressions is a much bigger challenge than solving performance issues.
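For what it's worth, the core of such a gate can be fairly small. Here's a sketch in Python where the baseline, the 5% tolerance, and the measured command are all my own assumptions for illustration, not anything Chrome's actual harness does:

```python
import statistics
import subprocess
import sys
import time

# Hypothetical numbers: a recorded baseline and a small noise allowance.
BASELINE_MS = 2000.0
TOLERANCE = 1.05  # fail only if we're more than 5% over baseline

def measure_startup(cmd, runs=5):
    # Median wall-clock milliseconds for the command to start and exit;
    # the median damps one-off noise spikes better than the mean.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

if __name__ == "__main__":
    # Using the Python interpreter itself as a stand-in "application".
    median_ms = measure_startup([sys.executable, "-c", "pass"])
    if median_ms > BASELINE_MS * TOLERANCE:
        sys.exit(f"startup regression: {median_ms:.0f} ms > "
                 f"{BASELINE_MS:.0f} ms baseline")
    print(f"ok: {median_ms:.0f} ms")
```

The hard part, as you say, is everything around this: keeping the baseline meaningful across noisy CI machines, thermal throttling, and warm caches makes "never slower" much easier to state than to enforce.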
Uninitiated, outsider, novice, etc. question in relation to the topic: is there a reason banking websites are a bloated nightmare to navigate, regardless of my device's capabilities?<p>Chase is the worst. So slow I've missed credit card payments over it.
I’d love a new Mac mini but 16GB of RAM just doesn’t cut it any more.<p>If it was 32GB I’d buy.<p>But a fast CPU is only marginally useful when my machine grinds from memory swapping.