科技回声

A tech news platform built with Next.js, providing global tech news and discussion.


The RISC Deprogrammer

84 points by g0xA52A2A, over 2 years ago

18 comments

ithkuil, over 2 years ago

> Back in the 1980s, most of the major CPUs in the world were big-endian, while Intel bucked the trend being little-endian. The reason is that some engineer made a simple optimization back when the 8008 processor...

The article started out talking about the VAX and how it was the gold standard everybody competed against.

The VAX is little-endian.

Little-endian is not a hack. It's a natural way to represent numbers. It's just that most languages on Earth write words left to right while writing numbers right to left.
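The byte-order point above can be made concrete. As an added illustration (not part of the original thread), Python's `struct` module shows the two layouts of the same 32-bit value:

```python
import struct

value = 0x12345678

# Little-endian: least-significant byte first (VAX, x86, the 8008's heirs)
little = struct.pack("<I", value)
# Big-endian: most-significant byte first (68k, SPARC, network byte order)
big = struct.pack(">I", value)

assert little == b"\x78\x56\x34\x12"
assert big == b"\x12\x34\x56\x78"
```

Read least-significant-byte-first, the little-endian layout is exactly the "numbers written right to left" convention the comment describes.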
noobermin, over 2 years ago

So, this is interesting so far and I'm bookmarking it for the depth of the author's historical knowledge, but saying "horizontal microcode" was the main difference that "no one talks about"... I mean, I was told this was the very difference that makes RISC distinct from x86 and friends (what I now see were VAX-like archs): the simpler transistor logic without the crazy micro-programs, and the pipelining. I thought this was common knowledge. Is there some other context people think of when they talk about RISC that I'm unaware of?
klelatti, over 2 years ago

The author seems to have been quite wound up by claims that "my high-end desktop / server (insert ARM / RISC-V to taste) is better than your x86 because 'RISC'".

Fair enough. It's an 80s debate, really. ISA is probably, by a long margin, not the most important factor in these comparisons.

But he says it makes no difference at all, without evidence.

And he ignores that there is a whole world of simpler (especially in-order) cores where ISA probably does matter a lot.
KingOfCoders, over 2 years ago

"In contrast, 16-bit processors could only address 64 kilobytes of memory, and weren't really practical for real computing."

?
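Some context on the 64-kilobyte claim being questioned: a 16-bit offset does address only 2^16 bytes, but 16-bit chips such as the 8086 reached a full megabyte through segment:offset addressing. A small sketch (my illustration, not from the article) of the 8086 real-mode address calculation:

```python
def real_mode_address(segment: int, offset: int) -> int:
    """8086 real mode: 20-bit physical address = segment * 16 + offset."""
    return ((segment << 4) + offset) & 0xFFFFF

# A single 16-bit offset covers only 64 KiB...
assert 2**16 == 65_536
# ...but segment:offset together reach the top of the 1 MiB address space.
assert real_mode_address(0xF000, 0xFFFF) == 0xFFFFF
```

So "could only address 64 kilobytes" holds per segment, not per machine, which is presumably the objection here.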
atan2, over 2 years ago

I cannot believe I lived to read the expression "anti-VAX" in this context! :)
moomin, over 2 years ago

I don't doubt the author knows a lot about this, but the case being made constantly includes things that even a cursory reading highlights as nonsense. Like RISC requiring a high-level language compiler and operating system, while the dominant "RISC" chip on the planet originally had an operating system written entirely in assembly. And the technical distinction between "real" and "not real" CPUs quietly ignores the fact that the 80s and 90s were completely dominated by "not real" computers.
snvzz, over 2 years ago

The article (like most RISC hit pieces) neglects the implicit value of simplicity.

Complexity needs to be justified, and the article does a very poor job there.
Taniwha, over 2 years ago

Completely ignores the reason why RISC architectures took over: the L1 I-cache moved on-chip, and suddenly the main reason for complex instruction encodings went away.
IshKebab, over 2 years ago

So x86 chips are only inefficient because they're fast? Or because Intel only makes laptop and desktop chips?

So how did they fail sooo badly at breaking into the mobile CPU market? Their Android phones were notoriously slow and inefficient.

Also, isn't one of the reasons the M1 is so fast that it has so many instruction decoders, which is much easier because of the ISA?

The author clearly knows a lot of history, but it wasn't an especially convincing argument. Especially the idiotic ranting about what makes something a "real" computer.
childintime, over 2 years ago

Strongly opinionated with a real message; I loved it.

Through the RISC story we pay a cultural debt we owe to RISC. It is storytelling, about a time long gone, and the tale is mythical in nature. In opposition to the myth, as the article states, RISC by itself is no longer an ideal worth pursuing.

This is relevant to the other Big Myth of our tech times, the Unix Story, and by extension to Linux. UNIX is mythical, having birthed OS and file abstractions, as well as C. It was a big-idea event. But its design is antithetical to what a common user today needs: owning many devices and installing software that can't be trusted, at all, yet needs to be cooperative.

When Unix was born, many users had to share the same machine, and resources were scarce to the point that there was an urgent need to share them between users. Unix created the system administrator concept and glorified him. But today Unix botches the ideals it was once born of, the ideals of software modularity and reusability. Package managers are a thing, yet people seem blind to the fact that they actually bubble up from hell. Many PMs have come already, and none will ever cure the disease.

Despite this, the younger generations see Unix through rosy glasses, as the pinnacle of software design, kinda like a Statue of Liberty, instead of the destruction of creative forces it actually results in. I posit Linux's contribution to the world is actually negative now. We don't articulate the challenges ahead; we're just procrastinating on Linux. It's the only game in town. But the money is still flowing, servers are still a thing, and so the myth is still alive.

The Unix Myth has become a toxic lie, and as collateral Linus has become a playmate for the tech titans. I'm waiting for him to come out and do the right thing, for it is evil for the Myth to continue to govern today's reality.
cestith, over 2 years ago

32-bit versions of OS/2 and multiple versions of Unix ran on the 80386 and 80486 long before Windows NT ever ran on most desktops. Client PCs were mostly Windows 95/98/ME until the XP era. Servers and some professional workstations were NT 3.1, 3.51, and 4.0, then Windows 2000. Few business desktops and home computers ran NT/2000 at all.
mpweiher, over 2 years ago

This "debunking" is itself mostly plausible-sounding bunk.

It gets a lot of details simply wrong. For example, the 68030 wasn't "around 100000 transistors", it was 273000 [1]. The 80386 was very similar at 275000 [2]. By comparison, the ARM1 was around 25000 transistors [3], and yet delivered comparable or better performance. That's a factor of 10! So RISC wasn't just a slight re-allocation of available resources, it was a massive leap.

Furthermore, the problem with the complex addressing modes in CISC machines wasn't just a matter of a tradeoff vs. other things this machinery could be used for; the problem was that compilers weren't using these addressing modes at all. And since the vast majority of software was written in high-level languages, and thus via compilers, the chip area and instruction space dedicated to those complex instructions was simply wasted. And one of the reasons that compilers used sequences of simple instructions instead of one complex instruction was that even on CISCs, the sequence of simple instructions was often faster than the single complex instruction.

Calling the seminal book by Turing award winners Patterson and Hennessy "horrible" without any discernible justification is... well, it's an opinion, and everybody is entitled to their opinion, I guess. However, when claiming that "Everything you know about RISC is wrong", you might want to actually provide some evidence for your opinions...

Or this one: "These 32-bit Unix systems from the early 1980s still lagged behind DEC's VAX in performance." What "early 1980s" 32-bit Unix systems were these? The Mac came out in 1984, and it had the 16-bit 68000 CPU. The 68020 was only launched in 1984; I doubt many 32-bit designs based on it made it out the door in the "early 1980s". The first 32-bit Sun, the 68020-based Sun-3, was launched in September of 1985, so the second half of the 1980s; I don't think that qualifies as "early". And of course the Sun-3 was faster than the VAX 11. The VAX 8600 and later were introduced around the same time as the Sun-3.

Or "it's the thing that nobody talks about: horizontal microcode". Hmm... actually, everybody talked about the RISC CPUs *not having microcode*, at least at the time. So I guess it's technically true that "nobody" talked about horizontal microcode...

He seems to completely miss one of the major simplifying benefits of a load/store architecture: simplified page fault handling. When you have a complex instruction with possibly multiple references to memory, each of those references can cause a fault, so you need complex logic to back out of and restart those instructions at different stages. With a load/store architecture, the instruction that faults is a load. Or a store. And that's all it does.

It also isn't true that it was the Pentium and OoO that beat the competing RISCs. Intel was already doing that earlier, with the 386 and 486. What allowed Intel to beat superior architectures was that Intel was always at least one fab generation ahead. And being one fab generation ahead meant that they had more transistors to play with (Moore's Law) and those transistors were faster/used less power (Dennard scaling). Their money generated an advantage that sustained the money that sustained the advantage.

As stated above, the 386 had 10x the transistors of the ARM1. It also ran at a significantly faster clock speed (16-25 MHz vs. 8 MHz), with comparable performance. But comparable performance was more than good enough when you had the entire software ecosystem behind you, efficiency be damned. Advantage Wintel.

Now that Dennard scaling has been dead and buried for a while, Moore's Law is slowing, and Intel is no longer one fab generation ahead, x86 is behind ARM, and not by a little either. Superior architecture can finally show its superiority in general-purpose computing and not just in extremely power-sensitive applications. (Well, part of the reason is that power consumption has a way of dominating even general-purpose computing.)

That doesn't mean that everything he writes is wrong. It certainly is true that a complex OoO Pentium and a complex OoO PowerPC were very similar, and only a small percentage of the overall logic was decode.

But I don't think his overall conclusion is warranted, and with so much of what he writes being simply wrong, the rest that is more hand-wavy doesn't convince. Just because instruction decode is not a big part doesn't mean it can't be important for performance. For example, it is claimed that one of the reasons the M1 is comparatively faster than x86 designs is that it has one more instruction decode unit. And the reason for that is not so much that it takes so much less space, but that the units can operate independently, whereas with a variable-length instruction stream you need all sorts of interconnects between the decode units, and these interconnects add significant complexity and latency.

Right now, RISC, in the form of ARM in general and Apple's MX CPUs in particular, is eating x86's lunch, and no, it's not a coincidence.

I just returned my Intel Macbook to my former employer, and good riddance. My M1 is sooooo much better in just about every respect that it's not even funny.

[1] https://en.wikipedia.org/wiki/Motorola_68030
[2] https://en.wikipedia.org/wiki/I386
[3] https://www.righto.com/2015/12/reverse-engineering-arm1-ancestor-of.html
erwan577, over 2 years ago

One of the ideas I take from the piece is that CPU design success is intimately tied to the software ecosystem of the day, and Memory Management Units were a big thing for C-language multitasking.

I wonder if Rust or similar could make the MMU's transistor and energy budget redundant.

Disclaimer: I am a 68k fan.
nuc1e0n, over 2 years ago

So my takeaway from this article is this: RISC largely displaced CISC, except in legacy situations, as you could get better throughput for the same number of transistors by moving work into the compiler. In turn, out-of-order execution largely displaced RISC as you could get better throughput for the same number of transistors by moving more work into the compiler.

How else might processor topology design dogma be hindering the performance we could get by having better compilers? This is especially important now that the transistor budget isn't nearly so flexible.
pencilguin, over 2 years ago

The article is almost completely right, aside from missing that the VAX was little-endian.

But if the 68k was really a 16-bit design, then the Z-80 was really a 4-bit chip, because that was the size of its ALU. What matters, *really*, is the register size, and how much work you can do in one instruction. Federico Faggin ("fajjeen", btw) recognized that the Z-80 did not need its 8-bit result in the next clock cycle anyway, so it took two 4-bit cycles, and nobody was the wiser.
programmer_dude, over 2 years ago

The article does clear up a few things. It could have been a little less acerbic, though.
djmips, over 2 years ago

This article is a polemic. Don't take it personally. Enjoy! Also, it's clear they are well versed in the topic. You may not agree, but it is great food for thought.
signa11, over 2 years ago

[dupe] https://news.ycombinator.com/item?id=33332202