科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Ask HN: Is anyone working on neuromorphic CPU architectures for AI?

4 points | by hliyan | over 1 year ago
Right now, we're emulating neural networks (which in nature, are largely asynchronous) on clock-signal-driven von Neumann computers solving matrix math. I'm not a machine learning expert, but I wonder whether the power consumption needs of this approach will become a barrier to faster, more efficient large language models. After all, the human brain seems to operate over 80 billion neurons on approximately 20 watts. Are there organisations, companies or labs working on neuromorphic CPU architectures that are better suited to run LLMs than general computing loads?

2 comments

mikewarot | over 1 year ago
I've been pursuing a path that is decidedly edgy... and might work out great, or might be a miserable failure... the BitGrid[1]. It's dead nuts simple... a Cartesian grid of 4-bit-input, 4-bit-output LUTs, latched and clocked in 2 phases (like the colors on a checkerboard) to prevent race conditions. It's a Turing-complete architecture that doesn't have the routing issues of an FPGA because there's no routing hardware in the way. But it is also nuts because there's no routing fabric to get data rapidly across the chip.

If you can unlearn the aversion to latency that we've all had since the days of TTL and the IMSAI, you realize that you could clock an array of 16 billion cells *slowly* at 1 MHz to save power, and still get a million tokens/sec.

It's all a question of programming. (Which is where I'm stuck right now, analysis paralysis.)

[1] https://github.com/mikewarot/Bitgrid
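The two-phase checkerboard clocking described above can be sketched in a few lines. This is my own toy illustration of the idea, not code from the BitGrid repo: each cell is a 16-entry LUT mapping a 4-bit input nibble (one bit from each neighbor) to a 4-bit output nibble (one bit toward each neighbor), and only cells of one checkerboard color update per phase, so an updating cell reads only values latched in the previous phase. Grid size and the random LUT contents are arbitrary placeholders.

```python
import random

W, H = 8, 8  # toy grid size; the comment above imagines billions of cells

# Each cell: a 16-entry lookup table, 4-bit input nibble -> 4-bit output nibble.
luts = [[random.randrange(16) for _ in range(16)] for _ in range(W * H)]
# Per-cell latched output nibble: bit i faces neighbor i in order N, E, S, W.
state = [0] * (W * H)

def neighbor_inputs(x, y):
    """Gather one bit from each of the four neighbors (torus wrap) into a nibble."""
    nibble = 0
    # (dx, dy, bit): take the neighbor's output bit that faces back toward (x, y).
    for i, (dx, dy, bit) in enumerate([(0, -1, 2), (1, 0, 3), (0, 1, 0), (-1, 0, 1)]):
        nx, ny = (x + dx) % W, (y + dy) % H
        nibble |= ((state[ny * W + nx] >> bit) & 1) << i
    return nibble

def step(phase):
    """Update only cells whose checkerboard color matches `phase` (0 or 1)."""
    for y in range(H):
        for x in range(W):
            if (x + y) % 2 == phase:
                state[y * W + x] = luts[y * W + x][neighbor_inputs(x, y)]

for _ in range(4):  # one full clock cycle = two phases
    step(0)
    step(1)
```

The race-freedom argument falls out of the coloring: every neighbor of a phase-0 cell is a phase-1 cell, so during `step(0)` all reads hit state that was latched in the prior phase and is guaranteed not to change mid-update.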
p1esk | over 1 year ago
Yes. Try googling “neuromorphic ai accelerators” and you’ll see a bunch of them.