Yet we also see that hyperscale cloud emissions targets have been reversed due to AI investment, that datacenter growth is hitting grid capacity limits in many regions, and that peaker plants and other non-renewable grid resources are being deployed more to handle this specific growth from AI. By qualifying the claim to "ChatGPT", the author can perhaps make it stick, but I don't believe the larger argument holds for AI as a whole, or once you convert the electricity use to emissions.<p>I'm personally on the side that the ROI will probably work out in the long run, but not by minimizing the potential impact; the focus should stay on how we can make this technology (currently in its infancy) more efficient.
This is a great article for discussion. However, articles like this must link to references. It is one thing to assert, another to prove. I do agree that heating/cooling, car and transport use, and diet play massive roles in climate change and should not be subsumed by other debates.<p>The flip side to the author's argument is that LLMs are not only used by home users doing 20 searches a day. Governments and mega-corporations are chewing through GPU hours on god-knows-what. New nuclear and other power facilities are being proposed to power their use; this is not insignificant. Schneider Electric predicts 93 GW of power demand from AI workloads by 2028. <a href="https://www.powerelectronicsnews.com/schneider-electric-predicts-substantial-energy-consumption-for-ai-workloads-globally/" rel="nofollow">https://www.powerelectronicsnews.com/schneider-electric-pred...</a>
One miss in this post is that the author tries to make their point by comparing energy consumption of LLMs to arbitrary points of reference. We should be comparing them to their relevant parallels.<p>Comparing a ChatGPT query to an hour long Zoom call isn't useful. The call might take up ~1700 mL of water, but that is still wildly more efficient than what we used to do prior - travel/commute to meet in person. The "10x a Google search" point is relevant because for many of the use cases mentioned in this post and others like it (e.g. "try asking factual questions!"), you could just as easily get that with 1 Google search and skimming the results.<p>I have found use for LLMs in software development, but I'd be lying if I said I couldn't live without it. Almost every use case of an LLM has a simple alternative - often just employing critical thinking or learning a new skill.<p>It feels like this post is a long way of saying "yes, there are negative impacts, but I value my time more".
The section on training feels weak, and that's what the discussion is mainly about.<p>Many companies are now trying to train models as big as GPT-4, and OpenAI is training models (o1 and o3) that may well be much larger than GPT-4. Framing training as a one-time cost doesn't seem accurate: it doesn't look like the big companies will stop training new models any time soon; they'll keep doing it. So one model might only be used for half a year, and many models may never end up used at all. This might stop at some point, but that's hypothetical.
~90% of the plastic debris in the ocean comes from ten rivers [0]. eight are in China/SEA. millions and billions of single-use items are sitting in warehouses and on store shelves wrapped in plastic. even before the plastic is discarded, the factories these items are produced in dump metric tons of waste into the oceans/soil with little repercussion.<p>point is, all of our "personal lifestyle decisions" - not eating meat, not mining bitcoin, not using chatgpt, not driving cars - are a drop in the bucket compared to standard-practice overseas manufacturing.<p>us privileged folks could "just boycott", "buy renewable", "vote with your wallet", etc, but sales will move to a less developed area and the pollution will continue. this is not to say that the environment isn't important - it's critically important. it's just to say that until corporations are forced to do things the right way, it's ludicrous to point fingers at each other and worry that what we do day-to-day is destroying the planet.<p>[0] <a href="https://pubs.acs.org/doi/10.1021/acs.est.7b02368" rel="nofollow">https://pubs.acs.org/doi/10.1021/acs.est.7b02368</a>
The absolute best thing I've read on this subject is this article here: <a href="https://about.bnef.com/blog/liebreich-generative-ai-the-power-and-the-glory/" rel="nofollow">https://about.bnef.com/blog/liebreich-generative-ai-the-powe...</a><p>It talks at great length about data center trends relating to generative AI, from the perspective of someone who has been deeply involved in researching power usage and sustainability for two decades.<p>I made my own notes on that piece here (for if you don't have a half hour to spend reading the original): <a href="https://simonwillison.net/2025/Jan/12/generative-ai-the-power-and-the-glory/" rel="nofollow">https://simonwillison.net/2025/Jan/12/generative-ai-the-powe...</a>
Such a stupid post (I know people on HN don't like absolute descriptors like that; sorry for that).<p>Obviously LLMs like ChatGPT don't use the most energy when answering your question; they churn through insane amounts of water and energy when being trained, so much so that big tech companies do not disclose those amounts and try to obscure them as much as possible.<p>You aren't destroying the environment by using it RIGHT NOW, but you are telling the company that owns the LLM you use "there is interest in this product", en masse. With these interest indicators they will plan for the future, and plan for even more environmental destruction.
Hey all I wrote this post. To clear up a few points:<p>I meant this post to tell individuals that worrying about the emissions they personally cause using ChatGPT is silly, not that AI more broadly isn't using a lot of energy.<p>I can't really factor in how demand for ChatGPT is affecting the future of AI. If you don't want to use ChatGPT because you're worried about creating more demand, that's more legit, but worry about the emissions associated with individual searches right now on their own is a silly distraction.<p>One criticism is that I didn't talk about training enough. I included a section on training in the emissions and water sections, but if there's more you think I should address or change I'm all ears. Please either share them in the comments on the post or here.<p>I saw someone assumed I'm an e/acc. I'm very much not and am pretty worried about risks from advanced AI. Had hoped the link to an 80,000 Hours article might've been a clue there.<p>Someone else assumed I work for Microsoft. I actually exclusively use Claude but wanted to write this for a general audience and way fewer people know about Claude. I used ChatGPT for some research here that I could link people to just to show what it can do.
The title does not match the content.<p>A more appropriate title is "Emissions caused by chatgpt use are not significant in comparison to everything else."<p>But, given that title, it becomes somewhat obvious that the article itself doesn't need to exist.
Where in the world are you getting the numbers for how much energy video streaming uses? I am quite sure that, just as with LLMs, most of the energy goes into the initial encoding of the video, and nowadays any rational service encodes videos to several bitrates to avoid JIT transcoding.<p>Networking can't take that much energy, unless perhaps we are talking about purely wireless networking with cell towers?
If using ChatGPT somehow saves you from making one trip to the doctor in your car, that can offset an entire year's worth of ChatGPT usage in terms of CO2 impact.
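A quick back-of-envelope version of that claim. All figures below are my own assumptions, not from the article: the EPA's ~404 g CO2/mile average for a gasoline car, a commonly cited ~3 g CO2 per ChatGPT query, one query per day, and a 20-mile round trip:

```python
# Back-of-envelope: one avoided 20-mile round trip to the doctor vs. a
# year of daily ChatGPT queries. All figures are assumptions.
CAR_G_CO2_PER_MILE = 404   # assumed: EPA average for a gasoline car
QUERY_G_CO2 = 3            # assumed: commonly cited per-query estimate
TRIP_MILES = 20            # assumed round-trip distance

trip_kg = CAR_G_CO2_PER_MILE * TRIP_MILES / 1000   # ~8.1 kg CO2
year_kg = QUERY_G_CO2 * 365 / 1000                 # ~1.1 kg CO2

print(f"avoided trip: {trip_kg:.2f} kg, year of queries: {year_kg:.2f} kg")
```

Under these assumptions the single avoided trip outweighs a year of daily queries several times over; even a much higher per-query estimate leaves the trip dominant.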
To me the "ChatGPT is destroying the environment" card always felt somewhat like bad-faith arguing from the anti-AI crowd, trying to find any excuse for being against AI. Like, the same people who complained about "using AI is destroying the environment" seemed to have no issue with boarding a plane which would emit a bunch of CO2 so that they could have a vacation in Europe or the like. Selective environmentalism.
Sort of off-topic, but it does make one think about usage of compute (and the backing energy / resources required for that)...<p>i.e. it doesn't seem too much of an exaggeration to say that we might be getting closer and closer to a situation where LLMs (or any other ML inference) are being run so much, for so many different reasons / requests, that the usage does become significant in the future.<p>Similarly, going into detail on what the compute is being used for: i.e. no doubt there are situations currently going on where Person A uses an LLM to expand something like "make a long detailed report about our sales figures", which produces a 20-page report and delivers it to Person B. Person B then says "I haven't time to read all this, LLM please summarise it for me".<p>So you'd basically have LLM inference compute being used as a very inefficient method of data/request transfer, with the sender expanding a short amount of information to deliver to the recipient, and then said recipient using an LLM on the other side to reduce it back again to something more manageable.
Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse?: <a href="https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech" rel="nofollow">https://www.theguardian.com/technology/2024/sep/15/data-cent...</a>
what does "water used by data center" even mean? Does it consume the water somehow? What does it turn into? Steam? So uploading a 1GB file boils away nearly 1 liter of water? Or is it turned into bits somehow in some kind of mass to energy conversion? I sorta doubt that. Also this means data centers would have cooling towers like some power stations. Are we talking about the cooling towers of power stations?<p>I think at least that graph is complete non-sense. I will try and have chatGPT explain it to me.
"personal carbon footprint" is a term invented by BP and is the single hack that derailed the environment discussion by making people personally responsible and removing the actual polluters from the discussion.
You are kidding right?<p>OpenAI -- for now -- is planning to build <i>5 gigawatt</i> data centers (yes, plural) to continue its quest towards AGI. -- see HN archives. Meanwhile they are also looking into <i>private nuclear power</i> for the same purpose.<p>Any serious competitor will likely need to do the same.<p>So there's a rational choice and tradeoff to make:<p>Net zero or AGI.<p>We can't have both.
> and it's completely clear to me that one side is getting it entirely wrong and spreading misleading ideas<p>What a great way to start an article. I read it as: "I am not open to listening to your arguments, and in fact if you disagree with me, I will assume that you are a moron".<p>It reminds me of people saying "planes are not the problem: actually, compared to driving a car, a plane uses less energy per person per km". Except that as soon as you take a passenger in your car, the car is better (why did you assume that the plane was full and the car almost empty?). And you don't remotely drive as far with your car as you fly with a plane. Obviously planes are worse than cars. If you need to imagine people commuting by car to the other side of the continent to prove your point, maybe the point is not valid?<p>The fact is that the footprint of IT is increasing every year. And quite obviously, LLMs use more energy than "traditional" searches. Any new technology that makes us use more energy is bad for the environment.<p>In case you don't understand how bad the situation is: we have largely missed the goal of keeping global warming to 1.5C (thinking that we could reach it is absurd at this point). To keep to 2C, we need to reduce global emissions by 5% every year. That's a Covid crisis every year. Let's be honest, it probably won't happen. So we'll go higher than 2C, fine. At the other end of the spectrum, 4C means that a big stripe around the equator (where <i>billions</i> of people live) will become unlivable for human beings (similar to being on Mars: you need equipment just to survive outside). I guess I don't need to argue how bad that would be, and we are currently headed there. ChatGPT is part of that trend, as a new technology that makes us increase our emissions instead of doing the opposite.
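To put the "5% every year" figure in perspective, here is the simple compounding arithmetic (my own sketch, using nothing beyond the stated rate):

```python
# Cutting global emissions by 5% every year means that after n years
# emissions are 0.95**n of today's level.
for years in (1, 5, 10, 20):
    remaining = 0.95 ** years
    print(f"after {years:2d} years: {remaining:.1%} of current emissions")
```

Ten years of sustained 5% cuts gets emissions down to ~60% of today's level, and twenty years to ~36%, which is why a "Covid crisis every year" is a fair description of the required effort.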
> It is extremely bad to distract the climate movement with debates about inconsequential levels of emission<p>This. So we should focus on optimizing transport, heating, energy and food.
A very nice article. But the Google search and LLM energy estimates are outdated. More recent work puts both at ~10x less.<p><a href="https://engineeringprompts.substack.com/p/does-chatgpt-use-10x-more-energy?triedRedirect=true" rel="nofollow">https://engineeringprompts.substack.com/p/does-chatgpt-use-1...</a>
It's discomforting to me when people compare resource usage of ChatGPT, a computer, to the resource usage of a human being.<p>I've seen charts like this before that compare resource usage of people to corporations, implying corporations are the bigger problem. The implication here seems to be the opposite, and that tone feels just a little eugenicist.
Things like heating and cars depend heavily on usage, which I guess is based on the US average.<p>US houses are HUGE, and even here in Europe m2 per person doubled in the last decades.<p>- we don't have a housing problem, we have a surface inflation problem.<p>- heating is directly correlated to the volume to heat. Heating 100 m2/person with solar and batteries or a heat pump (built from coal-smelted Chinese steel, extracted resources, logistics…) isn't necessarily more carbon- or water-efficient than 20 m2/person with gas.<p>Bonus point: the resident will have to think twice before filling their property with garbage consumerism.<p>Ps: my GF and I live in an 80 m2 house; the previous family there was 2 adults and 3 children! I think the space is wayyy enough for us, but people visiting regularly remark "it's so tiny/small!"
Oof. This article misses some important details.<p>Training is not a "one time cost". Training gets you a model that will likely need to be updated (or at least fine-tuned) on newer data. And before GPT4, there was a series of prior, less effective models (likely swept under the rug in press releases) made by the same folks that helped them step forward, but didn't achieve their end goals. And all of this to say nothing of the arms race by the major players all scrambling to outdo each other.<p>It also needs to compare this to the efficiency modern search engines run at. A single traditional query is far less expensive than a single LLM query.
The use of ChatGPT doesn't replace the others; it comes on top of them.<p>MS is missing its CO2 targets because of AI, not because of burgers.<p>The whole argument is "it's not bad because other things are worse".<p>We are racing towards the abyss, but don't worry, AI only accelerates us a little more.
This used to be the stick they used to beat bitcoin with. I guess it's a good stick because you can hit any technology with it and you can conveniently forget all the terrible uses to which electricity is put.
The part on training is misleading and full of shit.<p>Training is not a "one-time cost". There is an implied never-ending need for training: LLMs are useless (for one of their main purposes) if the models get stale.<p>I can use Musk's own argument on this one. Each model is a plane, fully built, that LLM researchers made into a disposable asset destined to be replaced by a newly built plane after the next training run. Just incredibly stupid and inefficient.<p>I know what you're thinking right now: fine-tuning, etc. That is the "reusable" analogy, is it not? But fine-tuning is far, far from reusability (the major players don't even care about it that much). It's not even at the "hopper" stage.<p>_Stop training new shit, and the argument becomes valid. How about that?_<p>---<p>I am sure the more radical environmentalists know that LLMs can be eco-friendly. The point is: they don't believe it will go that way, so they fight it. I can't blame them; this has happened before.<p>_This monster was made by environment promises that were not met_. If they're not met again, the monster will grow and there's nothing anyone can do about it. I've been more moderate than this article on several occasions and still got attacked for it. If not LLMs, it will target something else. Again, can't blame them.
In my country there's a lot of institutional hype about green algorithms. I find the whole idea quite irrelevant (for the reasons explained in this post), but of course it's a way to get funding for those of us who work in AI/NLP (we don't have much money for GPUs and can't do things like training big LLMs, so it's easy to pitch everything we do as "green" and then get funding because that's considered strategic and yadda yadda).<p>It's funny, but sad, how no one calls out the bullshit because we would be sabotaging ourselves.
A lot of conversations regarding the environment feel so frustrating because they are either qualitative, use aggregate high-level data, or are "we'll be dead in 50 years" (lol, my personal favorite).<p>Why not start capturing waste/energy data for all human-made items, like nutritional data on food? It won't add much overhead or stifle economies as people fear.<p>That way, when I log in to use any online service, or when I buy/drive a car, or when I buy an item, I can see exactly how much energy was consumed and how much waste I produced.
The major players in AI are collectively burning 1-2 gigawatts, day and night, on research and development of the next generation of LLMs. This is as much as my city of a million people. The impact is real, and focusing on inference cost per query kind of misses the point. Every person who uses these tools contributes to the demand and bears some of the responsibility. Similar to how I have responsibility for the carbon emissions of a flight, even if the plane would have flown without me.<p>I'm saying this as someone who finds LLMs helpful, and uses them without feeling particularly guilty about it. But we should be honest about the costs.
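The city comparison checks out on a napkin. Assuming ~12 MWh per-capita annual electricity use (roughly the US all-sectors figure; most countries are lower, so this is an assumption, not a measurement):

```python
# Sanity check on "1-2 GW is as much as a city of a million people".
# Assumed: ~12 MWh per-capita annual electricity use (US-like figure).
population = 1_000_000
per_capita_kwh = 12_000
hours_per_year = 8760

avg_power_gw = population * per_capita_kwh / hours_per_year / 1e6  # kW -> GW
print(f"average draw of the city: {avg_power_gw:.2f} GW")
```

That lands at ~1.4 GW of average draw, squarely inside the 1-2 GW range claimed for LLM R&D.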
Anyone who says that LLMs are terrible for the environment will never be swayed from that belief.<p>It is like a new shibboleth for idiocy.<p>When someone says it just reply with, “I see” and move on with your life.
"Other bad things exist" does not mean this thing isn't bad. Or absolutely could be that all these other things are huge energy wasters AND chat gpt is an energy waster.<p>We have to stop thinking about problems so linearly -- it's not "solve only the worst one first", because we'll forever find reasons to not try and solve that one, and we'll throw up our hands.<p>Like, we're well aware animal agriculture is a huge environmental impact. But getting everyone to go vegetarian before we start thinking about any other emissions source is a recipe for inaction. We're going to have to make progress, little by little, on all of these things.
Let's compare everything presented there with just one day of the Russian war in Ukraine, and we can forget about all CO2 hesitations in our normal lives.
Perhaps off topic, but what exactly does the "one-way European flight" mean in the context of avoiding CO2 emissions? I.e., what is the choice or scenario here?
I published this as a comment as well, but it's probably worth noting that the ChatGPT water/power numbers cited (the ones most widely cited in these discussions) come from an April 2023 paper (Li et al, arXiv:2304.03271) that estimates water/power usage based off of GPT-3 (175B dense model) numbers published in OpenAI's original <i>GPT-3 2021 paper</i>. From Section 3.3.2 Inference:<p>> As a representative usage scenario for an LLM, we consider a conversation task, which typically includes a CPU-intensive prompt phase that processes the user’s input (a.k.a., prompt) and a memory-intensive token phase that produces outputs [37]. More specifically, we consider a medium-sized request, each with approximately ≤800 words of input and 150 – 300 words of output [37]. The official estimate shows that GPT-3 consumes an order of 0.4 kWh electricity to generate 100 pages of content (e.g., roughly 0.004 kWh per page) [18]. Thus, we consider 0.004 kWh as the per-request server energy consumption for our conversation task. The PUE, WUE, and EWIF are the same as those used for estimating the training water consumption.<p>There is a slightly newer paper (Oct 2023) that directly measured power usage on a Llama 65B (on V100/A100 hardware) and showed a 14X better efficiency. [2] Ethan Mollick linked to it recently and got me curious, since I've recently been running my own inference (performance) testing and it'd be easy enough to just calculate power usage. My results [3] on the latest stable vLLM from last week, on a standard H100 node with Llama 3.3 70B FP8, showed almost a 10X better token/joule than the 2023 V100/A100 testing, which seems about right to me. This is without fancy look-ahead, speculative decode, or prefix caching taken into account; just raw token generation.
This is 120X more efficient than the commonly cited "ChatGPT" numbers and 250X more efficient than the Llama-3-70B numbers cited in the latest version (v4, 2025-01-15) of that same paper.<p>For those interested in a full analysis/table with all the citations (including my full testing results), see this o1 chat that calculated the relative efficiency differences and made a nice results table for me: <a href="https://chatgpt.com/share/678b55bb-336c-8012-97cc-b94f70919daa" rel="nofollow">https://chatgpt.com/share/678b55bb-336c-8012-97cc-b94f70919d...</a><p>(It's worth pointing out that that used 45s of TTC, which is a point that is not lost on me!)<p>[1] <a href="https://arxiv.org/abs/2304.03271" rel="nofollow">https://arxiv.org/abs/2304.03271</a><p>[2] <a href="https://arxiv.org/abs/2310.03003" rel="nofollow">https://arxiv.org/abs/2310.03003</a><p>[3] <a href="https://gist.github.com/lhl/bf81a9c7dfc4244c974335e1605dcf22" rel="nofollow">https://gist.github.com/lhl/bf81a9c7dfc4244c974335e1605dcf22</a>
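For anyone who wants to reproduce the rough per-token arithmetic behind those ratios, here is a sketch; the tokens-per-page figure is my own assumption, not something stated in the papers:

```python
# Convert the cited GPT-3 estimate (0.004 kWh per ~page of output) into
# joules per token, then apply the ~120X overall improvement described
# above. TOKENS_PER_PAGE is an assumption, not from the papers.
KWH_PER_PAGE = 0.004
J_PER_KWH = 3.6e6
TOKENS_PER_PAGE = 400          # assumed: a "page" of LLM output

old_j_per_token = KWH_PER_PAGE * J_PER_KWH / TOKENS_PER_PAGE   # ~36 J/token
new_j_per_token = old_j_per_token / 120                        # ~0.3 J/token

print(f"2023 GPT-3 estimate: {old_j_per_token:.1f} J/token")
print(f"with ~120X improvement: {new_j_per_token:.2f} J/token")
```

Changing the assumed page size shifts the absolute J/token numbers but not the 120X ratio, which is the point of the comparison.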