Why your brain is 3 milion more times efficient than GPT-4

158 points by sebastianvoelkl, 11 months ago

27 comments

cynusx, 11 months ago
The comparison doesn't really hold.

He is comparing energy spent during inference in humans with energy spent during training in LLMs.

Humans spend their lifetimes training their brain, so one would have to sum up the total training time if you are going to compare it to the training time of LLMs.

At age 30 the total energy use of the brain sums up to about 5,000 Wh, which is 1,440 times more efficient.

But at age 30 we haven't learned good representations for most of the stuff on the internet, so one could argue that given the knowledge learned, LLMs outperform the brain on energy consumption.

That said, LLMs have it easier as they are already learning from an abstract layer (language) that already has a lot of good representations, while humans first have to learn to parse this through imagery.

Half the human brain is dedicated to processing imagery, so one could argue the human brain only spent 2,500 Wh on equivalent tasks, which makes it 3,000x more efficient.

Liked the article though; didn't know about HNSWs.

Edit: made some quick comparisons for inference.

Assuming a human spends 20 minutes answering in a well-thought-out fashion:

Human watt-hours: 0.00646

GPT-4 watt-hours (OpenAI data): 0.833

That makes our brains still 128x more energy efficient, but people spend a lot more time to generate the answer.

Edit: the numbers are off by 1,000 as I used calories instead of kilocalories to calculate brain energy expenditure.

Corrected: human brains are 1.44x more efficient during training and 0.128x (or 8x less efficient) during inference.

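For anyone who wants to check the arithmetic, here is a minimal sketch of the corrected comparison above. All figures are the commenter's assumptions (a ~20 W brain, a 20-minute human answer, 0.833 Wh per GPT-4 answer), not verified measurements.

    # Back-of-the-envelope version of the parent comment's corrected numbers.
    # Assumptions (the commenter's, not measured data): the brain draws ~20 W
    # continuously, a considered human answer takes 20 minutes, and one GPT-4
    # answer costs about 0.833 Wh.
    BRAIN_POWER_W = 20.0

    # "Training": total brain energy over 30 years, in watt-hours
    hours_in_30_years = 30 * 365 * 24
    brain_training_wh = BRAIN_POWER_W * hours_in_30_years   # ~5.3 million Wh

    # Inference: one 20-minute human answer vs one GPT-4 answer
    brain_answer_wh = BRAIN_POWER_W * (20 / 60)              # ~6.7 Wh
    gpt4_answer_wh = 0.833

    print(f"brain, 30 years of training: {brain_training_wh:,.0f} Wh")
    print(f"brain, one 20-minute answer: {brain_answer_wh:.2f} Wh")
    print(f"brain vs GPT-4 per answer:   {brain_answer_wh / gpt4_answer_wh:.1f}x")
    # Roughly 8x: under these assumptions the brain spends about 8x more energy
    # per answer, matching the comment's corrected inference figure.
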
assimpleaspossi, 11 months ago
I don't care.

I've come to the conclusion that GPT and Gemini and all the others are nothing but conversational search engines. They can give me ideas or point me in the right direction, but so do regular search engines.

I like the conversation ability but, in the end, I cannot trust their results and still have to research further to decide for myself if their results are valid.

kvdveer, 11 months ago
I feel the author is comparing an abstract representation of the brain to a mechanical representation of a computer. This is not a fair or useful comparison.

If a computer does not understand words, neither does your brain. While electromagnetic charge in the brain does not at all correspond with electromagnetic charge in a GPU, they do share an abstraction level, unlike words vs bits.

madsbuch, 11 months ago
There is an immensely strong dogma that, to my best knowledge, is not founded in any science or philosophy:

    First we must lay down certain axioms (smart word for the common sense/ground
    rules we all agree upon and accept as true). One of such would be the fact that
    currently computers do not really understand words. ...

The author is at least honest about his assumptions, which I can appreciate. Most other people just have it as a latent assumption.

For articles like this to be interesting, this cannot be accepted as an axiom. Its justification is what's interesting.

lukan, 11 months ago
I was expecting a trivial calculation comparing the energy demand of LLMs with the energy demand of the brain, with lots of blabla around it.

But it seems more like a good general introduction to the field, aimed at beginners. I'm not sure it gets everything right, and the author clearly states he is not an expert and would like corrections where he is wrong, but it seems worth checking out if one is interested in understanding a bit of the magic behind it.

mordae, 11 months ago
That's a whole lot of hand waving. Also, field effect transistors deal with potential, not current. Current consumption stems mostly from charging and discharging parasitic capacitance. Also, computers do not really process individual bits. They operate on whole words. Pun intended.

proneb1rd, 11 months ago
Call me lazy, but I couldn't get through the wall of text to learn what on earth a vector database is. Way too much effort is spent talking about binary and how ASCII works and whatnot; such basics that it feels like the article is for someone with zero knowledge about computers.

mihaic, 11 months ago
Genuinely curious who upvoted this and why. The title is clickbait, the writing is long and rambling, and it seems to me like the author doesn't have a profound understanding of the concepts either, all just to recommend Qdrant as a vector database.

kingsleyopara, 11 months ago
What often gets overlooked in these discussions is how much of the human brain is hardwired as a consequence of millions of years of evolution. Approximately 85% of human genes are used to encode the structure of the brain [0]. I find this particularly impressive when I consider how complex the rest of the body is. To relate this to LLMs, I'm tempted to think this is more like pre-training rather than straightforward model design.

[0] https://www.nature.com/articles/tp2015153

tromp, 11 months ago
> run on the equivalent of 24 Watts of power per hour. In comparison GPT-4 hardware requires SWATHES of data-centre space and an estimated 7.5 MW per hour.

"Power per hour" makes no sense, since power is already energy (in joules) per unit of time (seconds).

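To make the unit point concrete, here is a tiny sketch using the figures as quoted above (24 W for the brain, 7.5 MW for GPT-4 hardware; both are the article's claims, not verified numbers). Power is a rate; multiplying it by a duration gives energy.

    # Power (watts) is already energy per unit time, so "watts per hour" is ill-formed.
    # Energy = power * time. The figures below are the article's claims as quoted above.
    brain_power_w = 24.0      # claimed brain power draw
    gpt4_power_w = 7.5e6      # claimed GPT-4 data-centre power draw

    hours = 1.0
    print(f"brain energy over 1 h: {brain_power_w * hours:.0f} Wh")         # 24 Wh
    print(f"GPT-4 energy over 1 h: {gpt4_power_w * hours / 1e6:.1f} MWh")   # 7.5 MWh
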
lll-o-lll, 11 months ago
Maybe, but I bet GPT-4 can spell million.

tonyoconnell, 11 months ago
The performance issues with pgvector were fixed when they switched to HNSW. It's now 30x faster. It's wonderful to be able to store vectors with Postgres row-level security; for example, if someone uploads a document you can create a policy so that it appears only to them in a vector search.

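A minimal sketch of the pattern described above, assuming pgvector's HNSW index and standard Postgres row-level security; the table, column, and connection names are made up for illustration and are not taken from the article.

    # Sketch: pgvector + Postgres row-level security, so a similarity search
    # only ever returns rows owned by the querying role. Names are illustrative.
    import psycopg2

    conn = psycopg2.connect("dbname=app")   # assumed connection string
    cur = conn.cursor()

    cur.execute("""
        CREATE EXTENSION IF NOT EXISTS vector;
        CREATE TABLE IF NOT EXISTS documents (
            id        bigserial PRIMARY KEY,
            owner     text NOT NULL DEFAULT current_user,
            embedding vector(3)               -- tiny dimension just for the example
        );
        -- The HNSW index mentioned in the comment (pgvector 0.5+)
        CREATE INDEX IF NOT EXISTS documents_embedding_idx
            ON documents USING hnsw (embedding vector_l2_ops);

        ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
        CREATE POLICY owner_only ON documents
            USING (owner = current_user);     -- each role sees only its own rows
    """)
    conn.commit()

    # Nearest-neighbour search; RLS silently filters out other users' documents.
    cur.execute(
        "SELECT id FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.1, 0.2, 0.3]",),
    )
    print(cur.fetchall())

Note that row-level security does not apply to the table owner or superusers unless FORCE ROW LEVEL SECURITY is set, so the policy only takes effect for ordinary application roles.
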
Reason077, 11 months ago
I guess this explains why the machines in The Matrix went to so much effort to create the Matrix and "farm" humans for their brain energy.

It's just so much more efficient than running their AI control software on silicon-based hardware!

joehogans, 11 months ago
Neuromorphic chips represent the future because they mimic the brain's neural architecture, leading to significantly higher energy efficiency and parallel processing capabilities. These chips excel in pattern recognition and adaptive learning, making them ideal for complex AI tasks. Their potential to drastically reduce power consumption while enhancing computational performance makes them a pivotal advancement in hardware technology.

cjk2, 11 months ago
I think GPT-4 is way more than 3 million times more efficient than my brain. All it does is a lot of multiplication and adding and my brain is crap at that.

SubiculumCode, 11 months ago
I kept waiting for the 'milion' in the headline to be part of the explanation somehow.

I guess it was a misspelling rather than an allusion to the Roman stone pillar used for distance measurement: https://en.m.wikipedia.org/wiki/Milion

shinycode, 11 months ago
If someday AGI happens and can exist on its own, wouldn't that prove that intelligence is a base requirement for intelligence to happen in the first place? AGI can't happen on its own; it needs our intelligence first to help it structure itself.

EncomLab, 11 months ago
It's always going to be difficult to compare a carbon-based, ion-mediated, indirectly connected, reconfigurable network of neurons to a silicon-based, voltage-mediated, directly connected, fixed network of transistors.

The analogy works, but not very far.

southernplaces7, 11 months ago
Some of the comparisons here in the comments between LLMs and the human brain go into the territory of deep navel-gazing and abstract justification. To use a phrase mentioned below, by Sagan: "You can make an apple pie from scratch, but you'd have to invent the universe first." Sure, at the deepest level this may be somewhat true, but the apple pie would still just be an apple pie, and not a condensed version of all that the universe contains.

The same applies to LLMs in a way. If you calculate their capabilities to some arbitrary extreme of back-end inputs and ability, based on the humans building them and all that they can do, you can arrive at a whole range of results for how capable and energy-efficient they are, but it wouldn't change the fact that the human brain, as its own device, does enormously more with much less energy than any LLM currently in existence. Our evolutionary path to that ability is secondary to it, since it's not a direct part of the brain's material resources in any given context.

The contortions by some to claim equivalence between human brains and LLMs are absurd when the very blatantly obvious reality is that our brains are absurdly more powerful. They're also, of course, capable of self-directed, self-aware cognition, which by now nobody in their rational mind should be ascribing to any LLM.

cainxinth, 11 months ago
Bicycles are much more efficient than trucks, but try using one to move a sofa…

richrichie, 11 months ago
> Computers do not understand words, they operate on binary language, which is just 1s and 0s, so numbers.

That's a bit like saying human brains do not understand words. They operate on calcium and sodium ion transport.

TheDong, 11 months ago
The vector DB comparison is written so much like an advertisement that I cannot possibly take it seriously.

> Shared slack channel if problems arise? There you go. You wanna learn more? Sure, here are the resources. Workshops? Possible.

> wins by far [...] most importantly community plus the company values.

Like, talking about "you can pay the company for workshops" and "company values" just makes it feel so much like an unsubtle paid-for ad that I can't take it seriously.

All the actual details around the vector DB (for example, a single actual performance number, or a clear description of the size of the dataset or problem) are missing, making this feel like a very hand-wavy comparison, and the final conclusion is so strong, and worded in such a strange way, that it feels disingenuous.

I have no way to know whether this post is actually genuine and not a piece of stealth advertising, but it hits so many alarm bells in my head that I can't help but ignore its conclusions about every database.

redka, 11 months ago
Seems like the title here on HN is bait, testing for people not reading the article, and most of you failed. I came here to see what people have to say about his vector DB comparisons.

chx, 11 months ago
They are not comparable. There's a prevalent metaphor which imagines the brain as a digital computer. However, this is a metaphor and not actual fact. While we have some good ideas about how the brain works at higher levels (recommended reading: Incognito: The Secret Lives of the Brain by David Eagleman), we do not really have any idea about the lower levels. As the essay I link below mentions, for example, when attending a concert our brain changes so that later it can remember it, but two brains attending the same concert will not change the same way. This makes modelling the brain really damn tricky.

This complete lack of understanding is also why it's completely laughable to think we can do AGI any time soon. Or perhaps ever? The reason for the AI winter cycle is the framing of it, this insane chase of AGI when it's not even defined properly. Instead, we should set out tasks to solve: we didn't make a better horse when we made cars and locomotives. No one complains that these do not provide us with milk to ferment into kumis. The goal was to move faster, not a better horse.

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

xqcgrek2, 11 months ago
The caloric need of a monkey typing, or a cat, is much lower than even a human's.

But it doesn't mean the results are good.

asah, 11 months ago
FTFY: ONLY 3 million times.

At the current pace of development, AI will catch up in a decade or less.

mati365, 11 months ago
Mine is not.