
I want a good parallel computer

233 points · by raphlinus · about 2 months ago

31 comments

deviantbit · about 2 months ago

"I believe there are two main things holding it back."

He really science'd the heck out of that one. I'm getting tired of seeing opinions dressed up as insight, especially when they're this detached from how real systems actually work.

I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with. There's a reason it didn't survive.

What amazes me more is the comment section, full of people waxing nostalgic for architectures they clearly never had to ship stable software on. They forget why we moved on. Modern systems are built with constraints like memory protection, isolation, and stability in mind. You can't just "flatten address spaces" and ignore the consequences. That's how you end up with security holes, random crashes, and broken multi-tasking. There's a whole generation of engineers who don't seem to realize why we architected things this way in the first place.

I will take how things are today over how things used to be in a heartbeat. I really believe I need to spend two weeks requiring students to write code on an Amiga, with all the programs running at the same time. If any one of them crashes, they all fail my course. A newfound appreciation may flourish.
grg0 · about 2 months ago

The issue is that programming a discrete GPU feels like programming a printer over a COM port, just with higher bandwidths. It's an entirely moronic programming model to be using in 2025.

- You need to compile shader source/bytecode at runtime; you can't just "run" a program.

- On NUMA/discrete, the GPU cannot just manipulate the data structures the CPU already has; you have to copy the whole thing over. And you had better design an algorithm that does not require immediate synchronization between the two.

- You need to synchronize data access between CPU-GPU and GPU workloads.

- You need to deal with bad and confusing APIs because there is no standardization of the underlying hardware.

- You need to deal with a combinatorial turd explosion of configurations. HW vendors want to protect their turd, so drivers and specs are behind fairly tight gates. OS vendors also want to protect their turd and refuse to adopt even a standard software API. And then the tooling also sucks.

What I would like is a CPU with a highly parallel array of "worker cores" all addressing the same memory and speaking the same goddamn language that the CPU does. But maybe that is an inherently crappy architecture for reasons that are beyond my basic hardware knowledge.
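A minimal sketch of what those first three bullets look like in practice, using plain OpenCL as a stand-in API (the kernel, buffer size, and setup below are illustrative assumptions, not anything from the comment, and error handling is elided):

```cpp
// Minimal OpenCL round trip: runtime-compile a kernel, copy data to the
// device, launch, synchronize, and copy the result back.
#include <CL/cl.h>
#include <vector>
#include <cstdio>

int main() {
    const char* src =
        "__kernel void add_one(__global float* buf) {"
        "    size_t i = get_global_id(0);"
        "    buf[i] += 1.0f;"
        "}";

    cl_platform_id platform; clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device; clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, nullptr, nullptr);

    // 1. Compile the shader/kernel source at runtime; you can't just "run" it.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "add_one", nullptr);

    // 2. Copy the data the CPU already has into a separate device allocation.
    std::vector<float> host(1 << 20, 41.0f);
    size_t bytes = host.size() * sizeof(float);
    cl_mem dev = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, nullptr, nullptr);
    clEnqueueWriteBuffer(q, dev, CL_TRUE, 0, bytes, host.data(), 0, nullptr, nullptr);

    // 3. Launch, then explicitly synchronize and copy the result back.
    clSetKernelArg(k, 0, sizeof(cl_mem), &dev);
    size_t global = host.size();
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dev, CL_TRUE, 0, bytes, host.data(), 0, nullptr, nullptr);
    clFinish(q);

    std::printf("%f\n", host[0]);  // prints 42.0
    return 0;
}
```

None of this boilerplate exists when the "worker cores" share the CPU's address space and toolchain, which is the point of the comment's last paragraph.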
IshKebab · about 2 months ago

Having worked for a company that made a "hundreds of small CPUs on a single chip" product, I can tell you now that they're all going to fail, because the programming model is too weird and nobody will write software for them.

Whatever comes next will be a GPU with extra capabilities, not a totally new architecture. Probably an Nvidia GPU.
armchairhacker · about 2 months ago

> The GPU in your computer is about 10 to 100 times more powerful than the CPU, depending on workload. For real-time graphics rendering and machine learning, you are enjoying that power, and doing those workloads on a CPU is not viable. Why aren't we exploiting that power for other workloads? What prevents a GPU from being a more general purpose computer?

What other workloads would benefit from a GPU?

Computers are so fast that in practice, many tasks don't need more performance. If a program that runs those tasks is slow, it's because that program's code is particularly bad, and the solution of making the code less bad is simpler than rewriting it for the GPU.

For example, GUIs have been imperceptibly reactive to user input for over 20 years. If an app's GUI feels sluggish, the problem is that the app's actions and rendering aren't on separate coroutines, or the action's coroutine is blocking (maybe it needs to be on a separate thread). But the rendering part of the GUI doesn't need to be on a GPU (any more than it is today; I admit I don't know much about rendering), because responsive GUIs exist today, some even written in scripting languages.

In some cases, parallelizing a task intrinsically makes it slower, because the sequential operations required to handle coordination mean there are more forced-sequential operations in total. In other cases, a program spawns 1000+ threads but they only run on 8-16 processors, so the program would be faster if it spawned fewer threads, because it would still use all processors.

I do think GPU programming should be made much simpler, so this work is probably useful, but mainly to ease the implementation of tasks that already use the GPU: real-time graphics and machine learning.
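A minimal sketch of the alternative to the 1000-thread case above: size the worker pool to the available hardware parallelism and let workers pull tasks (the task count and the per-task "work" below are made up for illustration):

```cpp
// Size the worker pool to the hardware instead of spawning one thread per task.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const unsigned tasks = 1000;  // logical units of work, not threads
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::atomic<unsigned> next{0};
    std::atomic<unsigned long long> checksum{0};
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            // Each worker pulls task indices until none are left.
            for (unsigned i = next.fetch_add(1); i < tasks; i = next.fetch_add(1)) {
                checksum.fetch_add(i);  // stand-in for real work
            }
        });
    }
    for (auto& t : pool) t.join();

    std::printf("%u workers handled %u tasks (checksum %llu)\n",
                workers, tasks, (unsigned long long)checksum.load());
    return 0;
}
```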
morphle · about 2 months ago

I haven't yet read the full blog post, but so far my response is: you can have this good parallel computer. See my previous HN comments over the past months on building an M4 Mac mini supercomputer.

For example, reverse engineering the Apple M3 Ultra GPU and Neural Engine instruction sets, the IOMMU, and the page tables that prevent you from programming all processor cores in the chip (146 cores to over ten thousand, depending on how you delineate what a core is), and making your own abstract-syntax-tree-to-assembly compiler for these undocumented cores, will unleash at least 50 trillion operations per second. I still have to benchmark this chip and make the roofline graphs for the M4 to be sure; it might be more.

https://en.wikipedia.org/wiki/Roofline_model
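For reference, the roofline model linked above caps attainable throughput at min(peak compute, memory bandwidth × arithmetic intensity). A minimal sketch with placeholder numbers (these are not measured M3/M4 figures):

```cpp
// Roofline model: attainable GFLOP/s = min(peak compute, bandwidth * intensity).
#include <algorithm>
#include <cstdio>

double roofline_gflops(double peak_gflops, double bandwidth_gbs, double flops_per_byte) {
    return std::min(peak_gflops, bandwidth_gbs * flops_per_byte);
}

int main() {
    const double peak = 14000.0;  // placeholder peak compute, GFLOP/s
    const double bw   = 800.0;    // placeholder memory bandwidth, GB/s
    for (double ai : {0.25, 1.0, 4.0, 16.0, 64.0}) {  // FLOPs per byte moved
        std::printf("intensity %6.2f -> %8.1f GFLOP/s (%s-bound)\n",
                    ai, roofline_gflops(peak, bw, ai),
                    bw * ai < peak ? "memory" : "compute");
    }
    return 0;
}
```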
dekhn · about 2 months ago

There are many intertwined issues here. One of the reasons we can't have a good parallel computer is that you need to get a large number of people to adopt your device for development purposes, and they need to have a large community of people who can run their code. Great projects die all the time because a slightly worse, but more ubiquitous, technology prevents the flowering of new approaches. There are economies of scale that feed back into ever-improving iterations of existing systems.

Simply porting existing successful codes from CPU to GPU can be a major undertaking, and if there aren't any experts who can write something that drives immediate sales, a project can die on the vine.

See for example https://en.wikipedia.org/wiki/Cray_MTA. When I was first asked to try this machine, it was pitched as "run a million threads; the system will context switch between threads when they block on memory and run them when the memory is ready". It never really made it on its own as a supercomputer, but lots of the ideas made it to GPUs.

AMD and others have explored the idea of moving the GPU closer to the CPU by placing it directly onto the same memory crossbar. Instead of the GPU connecting to the PCI Express controller, it gets dropped into a socket just like a CPU.

I've found the best strategy is to target my development at what high-end consumers will be buying in 2 years. This is similar to many games, which launch with terrible performance on the fastest commercially available card, then run great 2 years later when the next gen of cards arrives ("Can it run Crysis?").
Animats · about 2 months ago

Interesting article.

Other than as an exercise, it's not clear why someone would write a massively parallel *2D* renderer that needs a GPU. Modern GPUs are overkill for 2D. Now, 3D renderers, we need all the help we can get.

In this context, a "renderer" is something that takes in meshes, textures, materials, transforms, and objects, and generates images. It's not an entire game development engine, such as Unreal, Unity, or Bevy. Those have several more upper levels above the renderer. Game engines know what all the objects are and what they are doing. Renderers don't.

Vulkan, incidentally, is a level below the renderer. Vulkan is a cross-hardware API for asking a GPU to do all the things a GPU can do. WGPU for Rust, incidentally, is a wrapper that extends that concept to cross-platform (Mac, Android, browsers, etc.).

While it seems you could write a general 3D renderer that works in a wide variety of situations, that does not work well in practice. I wish Rust had one. I've tried Rend3 (abandoned), and looked at Renderling (in progress), Orbit (abandoned), and Three.rs (abandoned). They all scale up badly as scene complexity increases.

There's a friction point in design here. The renderer needs more info to work efficiently than it needs to just draw in a dumb way. Modern GPUs are good enough that a dumb renderer works pretty well, until the scene complexity hits some limit. Beyond that point, problems such as lighting requiring O(lights * objects) time start to dominate. The CPU driving the GPU maxes out while the GPU is at maybe 40% utilization. The operations that can easily be parallelized have been. Now it gets hard.

In Rust 3D land, everybody seems to write My First Renderer, hit this wall, and quit.

The big game engines (Unreal, etc.) handle this by using the scene graph info of the game to guide the rendering process. This is visually effective, very complicated, prone to bugs, and takes a huge engine dev team to make work.

Nobody has a good solution to this yet. What does the renderer need to know from its caller? A first step I'm looking at is something where, for each light, the caller provides a lambda which can iterate through the objects in range of the light. That way, the renderer can get some info from the caller's spatial data structures. May or may not be a good idea. Too early to tell.

[1] https://github.com/linebender/vello/
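A rough sketch of that last idea, with hypothetical types and names (not an actual API from Vello, Rend3, or any engine): the caller owns the spatial index and hands the renderer a callback that enumerates only the objects within a light's range, so lighting no longer has to touch O(lights * objects) pairs.

```cpp
// Hypothetical renderer-side interface: the caller supplies, per light, a way
// to visit only the objects its own spatial index says are within range.
#include <functional>
#include <vector>
#include <cstdio>

struct Light  { float x, y, z, radius; };
struct Object { int id; float x, y, z; };

// Callback type: invoke `visit` once per object in range of `light`.
using ObjectsInRange =
    std::function<void(const Light& light,
                       const std::function<void(const Object&)>& visit)>;

void shade_lights(const std::vector<Light>& lights, const ObjectsInRange& query) {
    for (const Light& l : lights) {
        query(l, [&](const Object& o) {
            // Accumulate this light's contribution for object o only.
            std::printf("light (r=%.1f) touches object %d\n", l.radius, o.id);
        });
    }
}

int main() {
    std::vector<Object> scene  = {{1, 0, 0, 0}, {2, 100, 0, 0}};
    std::vector<Light>  lights = {{0, 0, 0, 5.0f}};

    // Caller side: a real engine would consult its BVH or grid here; this
    // linear scan just stands in for that spatial query.
    ObjectsInRange query = [&](const Light& l,
                               const std::function<void(const Object&)>& visit) {
        for (const Object& o : scene) {
            float dx = o.x - l.x, dy = o.y - l.y, dz = o.z - l.z;
            if (dx * dx + dy * dy + dz * dz <= l.radius * l.radius) visit(o);
        }
    };

    shade_lights(lights, query);
    return 0;
}
```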
ip26 · about 2 months ago

> I believe there are two main things holding it back. One is an impoverished execution model, which makes certain tasks difficult or impossible to do efficiently; GPUs … struggle when the workload is dynamic

This sacrifice is a purposeful cornerstone of what allows GPUs to be so high-throughput in the first place.
bee_rider · about 2 months ago

It is odd that he talks about Larrabee so much but doesn't mention the Xeon Phis. (Or is it Xeons Phi?)

> As a general trend, CPU designs are diverging into those optimizing single-core performance (performance cores) and those optimizing power efficiency (efficiency cores), with cores of both types commonly present on the same chip. As E-cores become more prevalent, algorithms designed to exploit parallelism at scale may start winning, incentivizing provision of even larger numbers of increasingly efficient cores, even if underpowered for single-threaded tasks.

I've always been slightly annoyed by the concept of E-cores, because they are so close to what I want, but not quite there… I want, like, throughput cores. Let's take E-cores, give them their AVX-512 back, and give them higher-throughput memory. Maybe try to pull the Phi trick of less OoO capability but more threads per core. Eventually the goal should be to come up with an AVX unit so big it kills iGPUs, haha.
Retr0id · about 2 months ago

Something that frustrates me a little is that my system (Apple Silicon) has unified memory, which in theory should negate the need to shuffle data between CPU and GPU. But, if I understand correctly, the GPU programming APIs at my disposal all require me to pretend the memory is *not* unified, which makes sense because they want to be portable across different hardware configurations. But it would make my life a lot easier if I could just target the hardware I have and ignore compatibility concerns.
svmhdvn · about 2 months ago

I've always admired the work that the team behind https://www.greenarraychips.com/ does, and the GA144 chip seems like a great parallel computing innovation.
api · about 2 months ago

I implemented some evolutionary computation stuff on the Cell BE in college. It was a really interesting machine and could be very fast for its time, but it was somewhat painful to program.

The main cores were PPC and the Cell cores were… a weird proprietary architecture. You had to write kernels for them as in GPGPU, so in that sense it was similar. You couldn't use them seamlessly or have mixed workloads easily.

Larrabee and Xeon Phi are closer to what I'd want.

I've always wondered about many-many-core CPUs too. How many tiny ARM32 cores could you put on a big modern 5nm die? Give each one local RAM and connect them with an on-die network fabric. That'd be an interesting machine for certain kinds of workloads. It'd be like a 1990s or 2000s era supercomputer on a chip, but with much faster clocks, RAM, and network.
scroot · about 2 months ago

When this topic comes up, I always think of uFork [1]. They are even working on an FPGA prototype.

[1] https://ufork.org/
throwawayabcdef · about 2 months ago

The AIE arrays on Versal and Ryzen with XDNA are a big grid of cores (400 in an 8 x 50 array) that you program with streaming work graphs.

https://docs.amd.com/r/en-US/am009-versal-ai-engine/Overview

Each AIE tile can stream 64 Gbps in and out and perform 1024-bit SIMD operations. Each shares memory with its neighbors, and the streams can be interconnected in various ways.
andrewstuart · about 2 months ago

The AMD Strix Halo APU is a CPU with a very powerful integrated GPU.

It's faster at AI than an Nvidia RTX 4090 on large models, because 96 GB of the 128 GB can be allocated to the GPU memory space. This means it doesn't have the same swapping/memory thrashing that a discrete GPU experiences when processing large models.

16 CPU cores and 40 GPU compute units sounds pretty parallel to me.

Doesn't that fit the bill?
Quis_sum · about 2 months ago

Clearly the author never worked with a CM2 (I did, though). The CM2 was more like a co-processor which had to be controlled by a (for that age) rather beefy Sun workstation/server. The program itself ran on the workstation, which then sent the data-parallel instructions to the CM2. The CM2 was an extreme form of SIMD design (that is why it was called data parallel). You worked with a large rectangular array (I cannot recall up to how many dimensions) whose size had to be a multiple of the number of physical processors (in your partition). All cells typically performed exactly the same operation. If you wanted to perform an operation on a subset, you had to "mask" the other cells (which were essentially idling during that time).

That is hardly what the author describes.
sitkack · about 2 months ago

This essay needs more work.

Are you arguing for a better software abstraction, a different hardware abstraction, or both? Lots of esoteric machines are name-dropped, but it isn't clear how that helps your argument.

Why not link to Vello? https://github.com/linebender/vello

I think a stronger essay would, at the end, give the reader a clear view of what Good means, and of how to decide whether one machine is closer to Good than another and why.

SIMD machines can be turned into MIMD machines. Even hardware problems still need a software solution. The hardware is there to offer the right affordances for the kinds of software you want to write.

Lots of words that are in the eye of the beholder. We need a checklist, or that Good parallel computer won't be built.
SergeAx · about 1 month ago

The thing is that most of our everyday software will not benefit from parallelism. What we really have a use for is concurrency, which is a totally different beast.
Wumpnot · about 2 months ago

I had hoped the GPU API would go away and the entire thing would become fully programmable, but so far we just keep using these shitty APIs and horrible shader languages.

Personally I would like to use the same language I write the application in to write the rendering code (C++), preferably with shared memory, not some separate memory system that takes forever to transfer anything. Something along the lines of the new AMD 360 Max chips, but with graphics written in explicit C++.
muziq · about 2 months ago

I was always fascinated by the prospects of the 1024-core Epiphany-V from Parallella: https://parallella.org/2016/10/05/epiphany-v-a-1024-core-64-bit-risc-processor/ But it seems whatever the DARPA connection was has led to it not being for scruffs like me, and it is likely powering god knows what military systems.
mikewarot · about 2 months ago

Any computing model that tries to parallelize von Neumann machines, that is, machines with program counters or address spaces, just isn't going to scale.
nickpsecurity · about 2 months ago

There are designs like Tilera and Phalanx that have tons of cores. Then, NUMA machines used to have 128-256 sockets in one machine with coherent memory. The SGI machines let you program them like one machine. Languages like Chapel were designed to make parallel programming easier.

Making more things like that, at the lowest possible unit prices, could help a lot.
amelius · about 2 months ago

Isn't the ONNX standard already going in the direction of programming a GPU using a computation graph? Could it be made more general?
0xbadcafebee · about 2 months ago

If we had distributed operating systems and SSI kernels, your computer could use the idle cycles of other computers [that aren't on battery power]. People talk about a grid of solar houses, but we could've had personal/professional grid computing like 15 years ago. Nobody wanted to invest in it, I guess because chips kept getting faster.
nromiun · about 2 months ago

What about unified memory? I know these APUs are slower than traditional GPUs, but it still seems like the simpler programming model would be worth it.

The biggest problem is that most APUs don't even support full unified memory (system SVM in OpenCL). From my research, only the Apple M series, some Qualcomm Adreno GPUs, and AMD APUs support it.
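A minimal sketch of how that support can be checked from the host side with the standard OpenCL 2.x device query (device selection and output format here are illustrative; on pre-2.0 devices the query simply fails and all capabilities read as "no"):

```cpp
// Query which shared-virtual-memory (SVM) levels an OpenCL device reports.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform; clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device; clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, nullptr);

    cl_device_svm_capabilities caps = 0;
    clGetDeviceInfo(device, CL_DEVICE_SVM_CAPABILITIES, sizeof(caps), &caps, nullptr);

    std::printf("coarse-grain buffer SVM: %s\n", (caps & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER) ? "yes" : "no");
    std::printf("fine-grain buffer SVM:   %s\n", (caps & CL_DEVICE_SVM_FINE_GRAIN_BUFFER) ? "yes" : "no");
    std::printf("fine-grain system SVM:   %s\n", (caps & CL_DEVICE_SVM_FINE_GRAIN_SYSTEM) ? "yes" : "no");
    std::printf("SVM atomics:             %s\n", (caps & CL_DEVICE_SVM_ATOMICS) ? "yes" : "no");
    return 0;
}
```

"System SVM" in the comment corresponds to the fine-grain system capability: the device can dereference ordinary host pointers without any special allocation or mapping.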
joshu · about 2 months ago

Huh. The Blelloch mentioned in the Thinking Machines section taught my parallel algorithms class in 1994 or so.
eternityforest · about 2 months ago

I wonder if CDN server applications could use something like this, if every core had a hardware TCP/TLS stack and there was a built-in IP router to balance the load, or something like that.
casey2 · about 2 months ago

I think Tim was right. It's 2025, Nvidia just released their 50 series, but I don't see any cards, let alone GPUs.
dragontamer · about 2 months ago

There's a lot here that seems to misunderstand GPUs and SIMD.

Note that raytracing is a very dynamic problem, where the GPU isn't sure whether a ray hits geometry or misses. When it hits, the ray needs to bounce, possibly multiple times.

Various implementations of raytracing, recursion, dynamic parallelism, or whatever: it's all there.

Now the software / compilers aren't ready (outside of specialized situations like Microsoft's DirectX Raytracing, which compiles down to a very intriguing threading model). But what was accomplished with DirectX can be done in other situations.

-------

The Connection Machine is before my time, but there's no way I'd consider that '80s hardware comparable to AVX2, let alone a modern GPU. The Connection Machine was a 1-bit computer, for crying out loud, just 4096 of them in parallel.

Xeon Phi (70-core Intel Atoms) is slower and weaker than 192-core modern EPYC chips.

-------

Today's machines are better. A lot better than the past machines. I cannot believe any serious programmer would complain about the level of parallelism we have today and wax poetic about historic and archaic computers.
pikuseru · about 2 months ago
No mention of the Transputer :(
Ericson2314 · about 2 months ago

Agreed with the premise here.

I have never done GPU programming or graphics, but what feels frustrating looking in from the outside is that the designs and constraints seem so arbitrary. They don't feel like they come from actual hardware constraints/problems. It just looks like pure path dependency going all the way back to the fixed-function days, with tons of accidental complexity and half-finished generalizations ever since.