
How might software development have unfolded if CPU speeds were 20x slower?

118 points · by EvanWard97 · about 1 year ago
I was pondering how internet latency seems to be just barely sufficient for a decent fast-paced online multiplayer gaming experience. If human cognition were say, 20x faster relative to the speed of light, we'd be limited to playing many games only with players from the same city. More significantly, single-threaded compute performance relative to human cognition would effectively be limited to the equivalent of 300 MHz (6 GHz / 20), which I suspect makes it a challenge to run even barebones versions of many modern games.

This led me to wondering how software development would have progressed if CPU clock speeds were effectively 20x slower.

Might the overall greater pressure for performance have kept us writing lower-level code with more bugs while shipping less features? Or could it actually be that having all the free compute to throw around has comparatively gotten us into trouble, because we've been able to just rapidly prototype and eschew more formal methods and professionalization?
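A back-of-the-envelope sketch of the numbers in the question: the 6 GHz ceiling and the 20x factor come from the post itself, while the 50 ms "playable" latency budget and the fibre propagation speed are illustrative assumptions, not anything the poster stated.

```python
# Rough arithmetic behind the question's figures.
CLOCK_GHZ = 6.0      # rough ceiling for today's single-thread clocks (from the post)
SLOWDOWN = 20        # hypothetical factor from the question

effective_clock_mhz = CLOCK_GHZ * 1000 / SLOWDOWN
print(f"Effective single-thread clock: {effective_clock_mhz:.0f} MHz")  # 300 MHz

# Latency side: with 20x faster cognition, a real round trip feels 20x longer,
# so a 50 ms budget in "felt" time leaves only 2.5 ms of real round-trip time.
PLAYABLE_FELT_RTT_MS = 50
real_rtt_budget_ms = PLAYABLE_FELT_RTT_MS / SLOWDOWN

# Light in fibre covers roughly 200 km per millisecond, so that budget caps
# the one-way distance at a few hundred kilometres, i.e. roughly the same city or region.
KM_PER_MS_IN_FIBRE = 200
one_way_km = KM_PER_MS_IN_FIBRE * real_rtt_budget_ms / 2
print(f"Real RTT budget: {real_rtt_budget_ms} ms, max one-way distance ~{one_way_km:.0f} km")
```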

37 comments

red_admiral · about 1 year ago
I feel like every time CPU speeds double, someone comes up with a Web UI framework that has twice as much indirection. With 20x slower compute, we might not have UIs that fire off an event and maybe trigger an asynchronous network request every time you type a character in a box, for example.

Windows 95 could do a decently responsive desktop UI on an 80386. Coding was a lot less elegant in one way - C code that returns a HWND and all that - but with the number of levels of indirection and abstraction these days, we've made some things easier at the cost of making other things more obfuscated.
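As a small illustration of the per-keystroke pattern the comment describes, here is a sketch, not anyone's actual framework code; search_backend and the timings are hypothetical. The debounced variant shows the kind of frugality a 20x smaller CPU budget would push toward.

```python
import asyncio

async def search_backend(query: str) -> None:
    # Stand-in for the network request a UI might fire for a text box.
    await asyncio.sleep(0.05)
    print(f"request sent for {query!r}")

async def naive_typing(text: str) -> None:
    # One request per character typed; cheap when compute and bandwidth feel free.
    for i in range(1, len(text) + 1):
        await search_backend(text[:i])

async def debounced_typing(text: str, quiet_ms: int = 200) -> None:
    # Only send once the user pauses; each new keystroke cancels the pending request.
    pending = None
    for i in range(1, len(text) + 1):
        if pending is not None:
            pending.cancel()

        async def fire(query: str = text[:i]) -> None:
            await asyncio.sleep(quiet_ms / 1000)
            await search_backend(query)

        pending = asyncio.create_task(fire())
        await asyncio.sleep(0.03)  # simulated typing speed
    if pending is not None:
        await pending              # only the final keystroke's request survives

asyncio.run(naive_typing("hwnd"))      # 4 requests
asyncio.run(debounced_typing("hwnd"))  # 1 request
```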
moshegramovsky · about 1 year ago
I write C++ for high-performance Windows desktop applications that are used on a wide variety of form factors. This means that I still optimize a lot of things, such as what happens when a user edits a property in an edit box. How can that edit be minimized? How do I make sure that commands operate in less than a second? How can we hide latency when a long execution time can't be avoided? 99% of the time, optimizations are about doing less, not doing something faster or with lower-level code. You'll never write faster code than code that doesn't run.

I think the GPU would do a lot more work in most applications than it does today. If a process needs to be super fast, when applicable, I write a compute shader. I've written ridiculous compute shaders that do ridiculous things. They are stupidly fast. One time I reduced something from a 15 minute execution time to running hundreds of times per second. And I didn't even do that good of a job with the shader code.
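The "doing less" idea lends itself to a tiny sketch. The Model class and its fields below are invented for illustration, not taken from the commenter's codebase; the point is simply that an edit invalidates only what it touches and nothing is recomputed until it is actually needed.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    width: float = 1.0
    height: float = 1.0
    _area: float | None = field(default=None, repr=False)  # cached derived value

    def set_width(self, value: float) -> None:
        if value == self.width:   # cheapest optimisation of all: skip no-op edits
            return
        self.width = value
        self._area = None         # invalidate only what this edit touched

    def area(self) -> float:
        if self._area is None:    # recompute lazily, once, when actually needed
            self._area = self.width * self.height
        return self._area

m = Model()
m.set_width(2.0)
m.set_width(2.0)      # no-op edit: nothing invalidated, nothing recomputed
print(m.area())       # the expensive work happens here, not on every edit
```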
JaumeGreen · about 1 year ago
> Might the overall greater pressure for performance have kept us writing lower-level code with more bugs while shipping less features?

Are you living in the same world as the rest of us? Nowadays programs are shipped with plenty of bugs, mostly because patching them afterwards is "cheap". In the old days that wasn't as cheap.

So having lower-powered computers would have made us write programs with fewer features, but also fewer bugs. Formal coding would be up, and instead of moving fast and breaking things, most serious businesses would be writing Coq or Idris tests for their programs.

Bootcamps also wouldn't be a thing, unless they were at least a couple of years long. We'd need people knowing about complexity, big O, defensive programming, and plenty of other things.

And plenty of things we take for granted would be far away. Starting with LLMs and maybe even most forms of autocomplete and automatic tooling.
bruce511 · about 1 year ago
One way to answer this question is to look at the software produced when clock speeds were 20x slower.

The limitations, and features, we had then are a minimum starting point.

So I'm thinking around the era of a 486 100 MHz machine. We'd have at least that (think the multi-player Doom and Quake era as a starting point.)

We had Windows, preemptive multi-threading, networks, internet, large hard drives, pretty much the bare bones of today.

Of course CPU-intensive things would be constrained. Voice recognition. CGI. But we'd have a lot more cores, and likely more multi-threaded approaches to programming in general.
tuyiown · about 1 year ago
Since people brought up the "a few decades earlier was like that" response:

Old software on older hardware was «responsive» because the libraries it used came with far fewer built-in capabilities (nice UI relayout, nice font rendering, internationalization, UI scaling). Also, less code means less memory, and rotating-disk swap meant huge slowdowns when hit, so being memory-hungry was just not an option.

The people who remember fast software are the people who could afford to renew their computer every year or so at top-20%-bracket prices, and who don't realize that the mere inconvenient sluggishness of a 6-7 year old computer today was just impossible to imagine back then.

For the «let's imagine the current day from that past» exercise, I would say we would be mostly in the same place, without AI, with much less abundance of custom software, and more investment in using and building properly designed software stacks. E.g., we would have a proper few UI libraries atop the web/DOM and not the utter mess of today, and many more native apps. Android might not have prevailed as it has; it relied a lot on cheap CPU improvements for its success.

Safe languages like Rust would still have emerged, but the roadblock of compiler performance would have slowed things down a bit; interest, though, would have emerged even faster and stronger.
eterm · about 1 year ago
I'm not sure I understand the premise, because CPU speeds were 20x slower. Just go back a decade or two.

They weren't some halcyon days of bug-free software back then, quite the opposite.
farseer · about 1 year ago
More C/C++ based business apps that run locally. Cloud would be less relevant. No large browser engines, which means a lot less JS and of course no Electron :)
crubier · about 1 year ago
Everything would be exactly the same, except 8.64 years later.

Moore's law says that CPU speeds double every 2 years. 2 years * log2(20) = 8.64 years, so we'd just be 8.64 years late, that's it; literally no reason for anything to be any different apart from that.

95% of comments seem to completely overlook this fact and go into deep explanations about how everything would be different. It's pretty surprising that even a pretty sciencey community like Hacker News still doesn't get exponentials.
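The comment's arithmetic, spelled out as a quick check; the 2-year doubling period is the commenter's Moore's-law assumption.

```python
import math

DOUBLING_PERIOD_YEARS = 2       # the commenter's Moore's-law assumption
SLOWDOWN = 20

doublings_needed = math.log2(SLOWDOWN)                     # ~4.32 doublings
years_behind = DOUBLING_PERIOD_YEARS * doublings_needed    # ~8.64 years
print(f"{doublings_needed:.2f} doublings -> ~{years_behind:.2f} years behind")
```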
Am4TIfIsER0ppos · about 1 year ago
I'd probably be out of a job because we wouldn't be doing this crap in software.

You wouldn't have people wasting CPU cycles on pointless animation. You'd have people thinking about how long it takes to follow a pointer. You'd have people seriously thinking about whether the Spectre and Meltdown and subsequent bugs really need to be worked around when it costs you 50% of the meager performance you still have.

I might ask if everything else is 20x slower too. GPU speeds, memory bandwidth, network bandwidth.
ggm · about 1 year ago
We'd still be using triple-DES to protect data, arguing that the NIST time to break it was still far off. And hash functions would be like the CRC32 in TCP, not the modern stuff.

CISC computers which did more in parallel per instruction would be common because they existed for concrete reasons: the settling time for things in a discrete logic system was high, and you needed to try and do as much as possible inside that time. (That's a stretch argument. They were what they were, but I do think the DEC 5-operand instruction model in part reflected "god, what can we do while we're here" attitudes.) We'd probably have a lot more Cray-1-like parallelism where a high-frequency clock drove simple logic to do things in parallel over matrices, so I guess that's GPU cards.
Taniwha · about 1 year ago
Only 20x? I started my career programming on a mainframe system with a 1 MHz memory cycle time (think of this as its 'clock speed') - it had 3 megabytes of memory and supported 40 timeshare users (on terminals) and batch streams. At one point we upgraded by adding 1.5 MB; it cost $1.25M.

Compared to a modern CPU it was maybe 5000x slower; the early Vax systems that Unix ran on were maybe 6 times faster.

People certainly wrote smaller programs. We'd just stopped using cards, and carrying more than a box around (1000) was a chore. You spent more time thinking about bugs (compiling was a lot slower, and jobs went in a queue - you were sharing the machine with others).

But we still got our work done, with more thinking and waiting.
mnw21cam · about 1 year ago
One thing to consider is that the resolution and colour space of your computer's display also depend on available clock speed, so if you reduce that by a factor of 20, you'll also have to reduce the number of pixels in your display by the same factor. So, we'll have worse displays as well as worse compute.

As with all else - just look back to computers about 20 years ago, and that'll give you a good idea of what it'd be like. I guess the *main* difference is that we might have still been able to miniaturise the transistors in a chip as well as we do now, so you'd still have multi-core computers, which they didn't really do very often 20 years ago.
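To put rough numbers on the display point, here is a small sketch; the 4K/60 reference resolution and the choice to keep the refresh rate fixed are illustrative assumptions, not anything from the comment.

```python
# If pixel throughput scales with clock speed, dividing it by 20 shrinks the display a lot.
REF_W, REF_H, REFRESH_HZ = 3840, 2160, 60      # assume a 4K / 60 Hz display today
pixels_per_second = REF_W * REF_H * REFRESH_HZ

budget = pixels_per_second / 20                # 20x less pixel throughput
pixels_per_frame = budget / REFRESH_HZ         # keep 60 Hz, shrink the frame instead

# Keep the 16:9 aspect ratio and see what resolution fits in that budget.
scale = (pixels_per_frame / (REF_W * REF_H)) ** 0.5
print(f"~{int(REF_W * scale)} x {int(REF_H * scale)} at {REFRESH_HZ} Hz")  # roughly 480p-class
```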
r0ckarong · about 1 year ago
HN answer:

To stick with your analogy: there would be more optimization, and the rate of releasing stuff would be slower because it would have to be tested. That's it. Remember cartridge-based console games? How many patches or day-one updates did you have to install there? How many times would they crash or soft-lock themselves? People tested more and optimized more because there were constraints.

Today we have plenty of resources and thus you can be wasteful. Managers trade speed over waste. If you can make it work unoptimized, ship a 150 GB installer and an 80 GB day-one patch, do it NOW. Money today, not when you're done making it "better" for the user.

Sci-Fi answer: we wouldn't be playing the same type of games. Why would we have to rely on something like our representation of graphics? If cognition were 20x faster and more powerful, we probably wouldn't need abstractions but would have found a way to dump data into the cognition stream more directly.

I think the idea that 20x faster cognition would just mean "could watch a movie at 480fps" is too limited. More like you could play 24 movies per second and still understand what's going on.
mikewarot · about 1 year ago
I think there are plenty of ways to make far better use of the hardware we currently enjoy. If you don't focus on web-based stuff, but go with just what's possible in a Win32 environment, for example... it was all there in the late 1990s: VB6, Delphi, Excel, etc.

We've had quite a ride from 8-bit machines with toggle switches and not even a boot ROM, nor floating point, to systems that can do 50 trillion 32-bit floating point operations per second, for the same price[1].

Remember that Lisp, a high-level language, was invented in 1960, and ran on machines even slower than the first Altair.

The era of "free money" is over, as is the era of ever more compute. It's time to make better use of the silicon, to get one last slice of the pie.

[1] The Altair was $500 assembled in 1975, which is $2900 today. I'm not sure how best to invest $2900 to get the most compute today. My best guess is an NVidia RTX 4080.
koliber · about 1 year ago
No Electron apps.
throwitaway222 · about 1 year ago
A lot more chess games online instead.

Probably higher IQ, as the IQ-lowering social media we use would barely work.
AirMax98 · about 1 year ago
Look no further than developers on Ethereum, who are still doing shit like voluntarily writing assembly for basic software to account for compute constraints. I can say from some brief experience, it’s a reality that I’m glad we don’t all occupy.
pjmlp · about 1 year ago
Easy: we would enjoy the software practices that were common with compiled languages until the mid-2010s, when people started using scripting languages for application development instead of OS scripting activities, with Zope, Django, Rails and friends, ending up in monstrosities like Electron, despite the failure of Active Desktop and XUL.
divbzero · about 1 year ago
That’s not a hypothetical, is it? Given Moore’s Law, just look back a decade or so and you’ll get a sense of what software development was like when CPU speeds were 20x slower. And if you take it even further, looking back six decades or so, you’ll see things like the Story of Mel that would never happen in software development today.
j45 · about 1 year ago
Maybe software would have been more efficient at doing the same things, and software developers would still begin with an understanding of hardware and what's happening at a lower level (assembly) before sending it instructions in an interpreted language.

The sharding of the developer has made things more inefficient in some ways.
TrevorFSmith · about 1 year ago
At any time there are platforms with 20x more or less speed or space than the average, from tiny embedded processors through PCs and on up to clusters and mainframes. So, to see what a 20x slower computing platform would be like, you can look at small, power-limited device development.
fungiblecog · about 1 year ago
I think a better question would be “how fast would our software be if it was programmed by people who didn’t waste all that cpu power on frameworks, terrible algorithms, and layer after layer after layer of cruft”
mkl95 · about 1 year ago
AAA games would still look like Quake. The web would be much more static.
mamcx · about 1 year ago
I think it matters less what the speed itself is than how fast we got there.

If major hardware wins arrived once a decade instead of every year, things would be much better, because there would be pressure to make them so.
amelius · about 1 year ago
We wouldn't have AI.
RetroTechie · about 1 year ago
The problem is one of mentality, imho.

See e.g. the countless HN posts "hey look! I've used X to do Y" showing off some cool concept.

The proper thing would be to take it as just that: a concept. Play with it, mod it, test varieties.

Like it? Then take the *essential* functionality, and implement it in a resource-efficient manner using appropriate programming language(s). And take a looong, hard look at "is this necessary?" before forcing it onto everyone's PCs/mobile devices.

But what happens in practice? The proof-of-concept gets modded, extended, integrated as-is into other projects, resource frugality be damned. GHz CPUs & gobs of RAM crunch through it anyway, right? And before you know it, Y built on top of X is a staple building brick that 1001 other projects sit on top of. Rinse & repeat.

A factor of 20 is 'nothing'. And certainly not *the* issue here. Just look at what was already possible (and done!) when 300 MHz CPUs were state-of-the-art.

Wirth's law very much applies.
mbfg · about 1 year ago
Didn't we have just that? I'm sure there are tons of history that can fill in any gaps you may have in what it was like.
fifteen1506 · about 1 year ago
Not to be a jerk, but it's a question of allocation of resources, which is basically what capitalism does. It was used because it existed.

If there is a prolonged economic slowdown (not a crash, please!), then resources will be allocated to optimizing CPU cycles, and all the hype-based developments will have fewer resources allocated to them.

For some of us it can be an imperative to fight for efficiency, but we shouldn't treat it as an all-or-nothing approach. Know its advantages and disadvantages and work within that knowledge framework.
rullopat · about 1 year ago
We wouldn't have software that does the same things, at the same speed, but with 10000x faster hardware
Ratiofarmings · about 1 year ago
We're definitely prioritizing features, and just more applications and use cases, over optimization. If CPUs were 20x slower, we'd probably still see quite a few of the things that are possible right now, but with a lot more well-optimized custom solutions rather than bloated frameworks.

And in some cases, multi-threading would be the only way to do things. Right now, single-threaded file copy, decompression or draw-calls are largely a thing because it's way easier to do and there is no need to change it outside professional applications.

Also, some things might actually be better than they are right now. Having to wait for pointless animations to finish before a UI element becomes usable should not be a thing. If there was no CPU performance for this kind of nonsense, they wouldn't be there.

Please don't mix up clock speeds with performance. An Athlon™ 5350 from 2014 is >20x slower single-threaded than a Core i9-14900K, yet it's 2 GHz vs. 5.8 GHz. Architecture, cache and memory speed matter A LOT.
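A quick way to see the clock-versus-architecture split the comment is making; the >20x single-thread gap is the commenter's figure, and the clock speeds are the ones quoted above.

```python
athlon_clock_ghz, i9_clock_ghz = 2.0, 5.8   # Athlon 5350 vs. Core i9-14900K, per the comment
single_thread_gap = 20                      # ">20x slower single-threaded", per the comment

clock_ratio = i9_clock_ghz / athlon_clock_ghz          # ~2.9x comes from clock speed alone
arch_cache_memory = single_thread_gap / clock_ratio    # ~6.9x comes from everything else
print(f"Clock explains ~{clock_ratio:.1f}x; architecture/cache/memory the remaining ~{arch_cache_memory:.1f}x")
```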
oliwarner · about 1 year ago
> internet latency seems to be just barely sufficient

What? I played Quake 1-3 and TFC over 56k with 300 ms latency, on a CPU *at least* 20x slower than modern CPUs. Tribes 2 with 63 other players. Arguably *more fun* than the prescriptive matchmaking in games these days.

Games are a product of their environment. You don't let a pesky thing like lag stop people having fun.
PeterZaitsev · about 1 year ago
This would be a modern version of Steampunk :)
ChrisArchitect · about 1 year ago
Ask HN:
lulznews · about 1 year ago
Same as now. Driven by incompetent management.
egberts1 · about 1 year ago
One word: DELIBERATELY.
sydbarrett74 · about 1 year ago
One thing that slows down our machines is all the trackers that run in the background as we browse the web. Surveillance capitalism FTW! /s
andsoitis · about 1 year ago
> If human cognition were say, 20x faster relative to the speed of light

What would that even mean, being 20x faster than the speed of light? What does it imply?