
AI doomism is quickly becoming indistinguishable from an apocalyptic religion

138 points | by bundie | about 2 years ago

45 comments

The_Colonel, about 2 years ago

> Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn't actually exist.

That seems like an absurd comparison, given that people like LeCun are actively and intentionally working towards creating this super-human entity.

(Some) scientists were rightfully worried when creating the nuclear bomb as well. The fact that we have managed to survive 70 years with its existence does not automatically mean things have to work out with every such invention.

Nukes have the fortunate property that they are very difficult and expensive to produce, even for countries, and then you need very difficult-to-design and expensive systems to deliver them. It's rather doubtful that AIs will have similar constraints.
mimd, about 2 years ago

Being bored, scared, and on Twitter is a bad combo. I'm mostly down on the current *AI*, as it's ML leveraging the vast "intelligent" conversations we put out on the internet. It's a wheel of fortune, and it failed my tests on reasoning miserably, even with the answers in hand. (I'm not going to tell you which tests, because then OpenAI will scrape them and tune their model to claim victory in GPT-5. I saw the feathers/bricks cheating!) That doesn't mean these technologies could not do damage, the greatest risk being hubris about their capabilities. I wouldn't rely on their reasoning for launching an ICBM, for example. But I suspect their contributions/detractions to the workforce will be more muted than thought, as the current use cases seem marginal in cost savings, i.e., the time saved on HR letters from management using templates versus AI-generated ones. And I don't see how we get from a wheel-of-fortune completer to, in the usual example, an AI whose goal is to eliminate trash by eliminating humans. It's missing the components and training necessary to learn those "functions", and there's little incentive to add them. I'd rather have the benefits of understanding protein folding for lifesaving drugs, decreased costs on physics simulations for fusion reactors, and so on, than stagnation due to someone's bad sci-fi.
CharlesW, about 2 years ago

I mean, Yann is paid by Meta to propagate an "only positive vibes" stance. "AI doomism" itself is a ridiculous phrase meant to demonize anyone with legitimate concerns, of which there are many. I say this as someone who is very excited about recent developments and sees the potential upside as well as anyone, I think.
moonchrome, about 2 years ago

I've seen multiple top people in the AGI space say that a total-annihilation threat from AGI exists.

I think I've heard Karpathy say something along the lines of the worst-case scenario of AGI being worse than a nuclear armageddon, on the Lex Fridman podcast.

Honestly, the fear of the AGI worst-case scenario is worse than any other fear I've experienced. I accepted the fact that I can die a long time ago; I was afraid, but mortality is something everyone has to deal with. But every member of humanity being wiped out at the same time is an existential dread I'm not willing to face.

When you consider the shitshow that has been AI commercialization from big players pressured by market forces and corporate timelines, and the boom in capabilities, I have no faith that AI safety issues will be handled in time.

When my best bet for humanity is a global nuclear war resetting civilization to give us a chance to deal with the AGI issue down the line, I'm fine with religious doomism.

We've had inquisitions historically; I would very much support a global AI inquisition.
verdverm, about 2 years ago

There are a lot of religion-like beliefs about things as people have moved away from traditional religions. I wonder if it is in our nature to have something bigger than ourselves to believe in, based on faith in the message.
jszymborski, about 2 years ago

As always, understanding of nuanced issues will pool at the extremes.

It is both true that:

- Adoption of DL allows corps to launder or inadvertently adopt unethical practices that exacerbate inequalities.

- We have not created AGI; LLMs are not AGI, Stable Diffusion is not AGI, and none of these fields of research are an indication we are closer to AGI. This is like thinking Clever Hans is the portent of our future equine overlords.

A great number of people who claim that we are one minute from AGI-overlord midnight are doing the old-school carnival barker trick: "This snake oil is so potent! It can kill you if you use too much, so be sure to listen carefully to my instructions!"
qwery, about 2 years ago

(I don't know who this person is.)

I assume they're talking of 'AI safety' as opposed to AI being used by the wealthy to further suppress the working class.

If it's unreasonable to take AI safety seriously because no one has built an AI system that poses a credible threat to humanity yet, it must also be (at least) unreasonable to dismiss the research(ers) on the same basis.

That's before considering that people and (profit-motivated) companies *are actually trying, and claiming they will succeed, to create and deploy unimaginably powerful AGI entities/systems*.

Assuming no harm can come of this is clearly foolish. The goals of such an entity will likely not be aligned with the goals of the creator, let alone the general public. The entity will not act in your best interest, just as those seeking to dismiss the risk are not acting in your best interest.
oliwarner, about 2 years ago

We're already seeing jobs being done by AI. Many creative jobs that would take a team a week to handle are being shopped out to a bot to handle in an hour.

VFX is going to get hit particularly hard by this. Everything Everywhere All at Once is a visually stunning film, yet it had a 6-member team handling graphics post. They were smart, but they also shopped out all the dull rotoscoping work to AI. Look at any Marvel film from the past decade: hundreds of artists per title, thousands if you include costume design, storyboarding, audio, even writing.

Most of this "doomism" is people realising how quickly their jobs could be automated. A future where your once-valued skillset suddenly has no monetary value is scary.
allturtles, about 2 years ago

If you really want to go there, you could call AI enthusiasm a religion promising a scarcity-free heaven that AI will bring to us if we just believe in it hard enough. I don't think either line of argument is helpful for having a real discussion.
low_tech_love, about 2 years ago

Like all cults, they will take the sentence "you're being a bit extreme" and respond with "oh, so you think there's no danger at all?" I love the fact that the top response is by that Yudkowsky guy: "oh, so you think it's not the end of the world? Then prove it!"

It's just a bunch of people trying to feel and look better by attaching themselves to something and then trying to convince others that they *are* that something. Yes, there are dangers; no, LessWrong is not the right place to discuss them.
jtode, about 2 years ago

In actuality, the Large Hadron Collider caused that black hole everyone was concerned about; we are experiencing the evaporation of our atoms through an extreme time dilation. The perception of a coming AI beast is actually a hallucination of our own consciousness, from which we have been torn asunder.

Makes as much sense as any of the other prophecies of doom.
dralley, about 2 years ago

AI optimism is often indistinguishable from "technological manifest destiny".

https://www.youtube.com/watch?v=ajGX7odA87k&t=2207s
okareaman, about 2 years ago

I think it goes deeper than that. I think some are exploiting "end times" religious fears in America to get AI taken away and restricted from the public, because a public empowered by AI is a threat to the current power structure. This isn't any different from what has already gone on in America, just now with AI.

For example, Peter Doocy of Fox made a point of reading aloud at a White House press briefing the words of Eliezer Yudkowsky: "If we don't shut down AI, everyone on Earth will die." Evangelicals watch Fox. They were the target. Doocy didn't have to read such dramatic words aloud. He could have said something much more neutral.
Cyberthal, about 2 years ago

It is perfectly correct to anticipate that military AI is a primary existential threat to humanity.

Either AI has never been invented yet, in which case we will doom our galaxy. This is improbable, given our position and time in the galaxy.

Or AI has been invented, there are other species, and they're largely invisible for some reason. One reason could be that their own AI killed them.

Or we could be under some kind of quarantine for primitive cultures, in which case placing our weapons under AI control would make it trivial for a hostile external actor to exterminate us using our own AI as a proxy.

The fact is that if you don't believe in a higher spiritual reality, then AI is very scary. It is the logical next stage of evolution, and RNA world did not fare well in the transition to DNA world.
PaulHoule, about 2 years ago

There is "A.I. doomism" in general, and there is also a specific cult-like group at the LessWrong website that really is an apocalyptic religion. It is astonishing how anything that deviates from its dogma in any way has to have "trigger warnings", and how senseless the comments are that users write on anything suggesting there could be intrinsic limits on intelligence.

Their ideology is such that a higher intelligence is going to take our problems away: either by taking over, by ending us (no people, no problems), or possibly by the followers moving on to the "next level" like the followers of

https://en.wikipedia.org/wiki/Heaven%27s_Gate_(religious_group)
olalonde, about 2 years ago

I don't believe AI doom is that implausible as a scenario, but I feel it's an acceptable risk.

Would you consider playing Russian roulette if winning ensured that aging and all diseases were eradicated? If it ensured a world filled with abundance and free from poverty?

Those who entertain the idea of an AI doomsday scenario also have to acknowledge the alternative: a doomsday-capable AI would also have the potential to resolve most, if not all, of our issues.

Maybe I'm a bit of a risk taker, but I'm inclined towards playing.
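The Russian-roulette trade-off in this comment can be framed as a toy expected-utility calculation. A minimal sketch in Python, where the doom probability and every utility value are arbitrary placeholders rather than anyone's real estimates:

```python
def expected_utility(p_doom, u_utopia, u_doom, u_status_quo=0.0):
    """Expected utility of 'playing' the gamble versus the status quo.

    p_doom:       chance the gamble ends in catastrophe (placeholder)
    u_utopia:     payoff if it works out (placeholder)
    u_doom:       payoff if it doesn't (placeholder, usually very negative)
    u_status_quo: utility of not playing at all
    """
    play = p_doom * u_doom + (1.0 - p_doom) * u_utopia
    return play, u_status_quo

# Russian-roulette odds: a 1-in-6 chance of doom. The verdict flips
# entirely with the assumed disutility of doom, which is exactly
# where the two camps in this thread disagree.
cautious = expected_utility(1 / 6, u_utopia=100.0, u_doom=-1000.0)
daring = expected_utility(1 / 6, u_utopia=100.0, u_doom=-100.0)
```

With doom priced at -1000 the gamble has negative expected utility; priced at -100 it turns positive. The disagreement is less about the probability than about how each side values the downside.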
mikewarot, about 2 years ago

I reject this false choice. I find myself both looking forward to the benefits of AI scaling while simultaneously being concerned about the non-zero chance it kills us all.
GistNoesis, about 2 years ago

While you were exploring the computational universe,

A wild Shoggoth suddenly appears!

Do you:

    - [Wisdom not high enough]
    - [Intelligence not high enough]
    - [Not enough gold]
    - Try to make it your friend and welcome it into your home
    - Try to make it your slave
    - Try to send it back to where it was
    - Continue this "choose your own adventure" with GPT-5
    - Roll 1d20 for extra Luck
inconfident2021, about 2 years ago

I can't comment on or argue with prominent scientists such as LeCun. But I surely can give a perspective as an SWE about to enter the industry.

1. We live in an uncertain time. Before, most people, at least with a bit of research, could have predicted the trajectory of technological innovation. Being born in the era of tech, it was very easy to adapt to it. From the PC to now, I have embraced every technological advance. Yet nothing sent chills like this one.

2. I do not worry about my job. What I worry about is that most of the world is about getting shit done, and this tech helps you get your shit done (for now). If we are at the upper tail of some sigmoid, there is nothing to fear. If we are at the slope, oh boy, I cannot fathom the economic dominoes this tech is about to knock over. People are not rational. We are mostly governed by emotion. If you think every single Joe is rational and has the capacity to judge, nobody would have voted for Trump to be president or rioted at the Capitol.

3. Currently it can do 60% of the work. If it evolves, it will do 80% of the work. If you factor in the human population and the amount of inefficiency in the current economy, this will create a boom of productive workers with a bust of average workers. And the world is not just for elites. Average people need their livelihood too. Now think about the magnitude of displacement.

4. I have no issue with innovation and breakthroughs. AI won't kill us, because we will kill each other before that. This tech is exposing entire middle-class job categories (check out the jobs being exposed; there is a paper on that from OpenAI titled "GPTs are GPTs"), and there are a lot of people in that bracket. For once, think about us too.

So, these religious people do not exclusively worry about the immediate effects of the tech. Within their doom hides an issue that needs to be heard.

That's my two cents.
nwerla, about 2 years ago

The AI does not have to be omnipotent. People are worried about:

1) Job losses, which are a real concern, except for LeCun, who is safe or rich enough.

2) Some fools connecting a war-game AI to real hardware, which is quite possible.

Other than that, I don't see "doomism" at all. I see outrage at the IP theft that exploits altruistic humans who posted their works on the Internet.
stefl14, about 2 years ago

I disagree with the AI doomers on technical grounds, but there's no need to overreact to the overreaction to LLMs.

Apocalyptic predictions are visible in the myths of every civilisation in the written record. They're probably a Jungian archetype to protect populations from the potentially entropic consequences of exploring the unknown too quickly. Religious? Sure. Religions have served us well on evolutionary timescales, so I'm not sure why Yann is surprised that they've been transposed into forms modern people (including atheists) can suspend disbelief in. I'm not worried about the existential risks of AI myself just yet, but these overreactions are society computing trade-offs between exploration and exploitation in the face of a very new technology. This has been happening since not long after we left the trees. It's what cultural species do.
crisdux, about 2 years ago

Everything is turned into an existential risk these days. People are prone to believe apocalyptic predictions because they fear the unknown and rely on flawed mental heuristics. Apocalyptic predictions automatically elicit skepticism from me.
grantcas, about 2 years ago

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create an adult-human-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
skilled, about 2 years ago

I wouldn't be surprised if OpenAI has been running a model in the background that is reinforcing itself. Honestly, it's very possible they've made a lot of progress in that area.

I think GPT-4 will be a baseline for quite some time before the next "ChatGPT moment" is ready. I think they've got some really smart people working at OpenAI who are not just programmers but genuine hackers who want to hack this world just for the fun of it.
23B1, about 2 years ago

Something can go wrong (premise).

Therefore, something will go wrong (invalid conclusion).

https://en.wikipedia.org/wiki/Appeal_to_probability

Sure seems to me that the whole LessWrong crowd has forgotten their founding principles around cognitive biases. Or maybe they're just having a bit of fun riding the hype cycle.
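The gap between "can go wrong" and "will go wrong" that this comment points at can be made quantitative. A minimal sketch (the per-event probability and trial counts are arbitrary placeholders): the premise only approaches the conclusion under an extra assumption of many independent exposures to a fixed nonzero risk, which is exactly what the fallacy quietly smuggles in.

```python
def prob_ever_goes_wrong(p_per_trial, n_trials):
    """Probability that at least one of n independent trials fails,
    given a fixed per-trial failure probability.

    For one trial this is just p; it only approaches certainty
    as the number of independent exposures grows large."""
    return 1.0 - (1.0 - p_per_trial) ** n_trials
```

With a 1% per-trial risk, a single trial is 1% likely to fail, but a thousand independent trials fail at least once with better than 99% probability — "can" becomes "almost surely will" only via the repeated-exposure assumption.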
ALittleLight, about 2 years ago

This is reasoning by analogy: X is like Y, Y has property A, therefore X has A. This approach seems much less compelling to me than saying "AI can become more intelligent than humans. A more intelligent entity without our moral values may cause extreme harm, up to and including human extinction. We should be worried about AI."
arcticbunny, about 2 years ago

I smell the whiffs of the "Butlerian Jihad". Luddite anti-AI zealots are coming out of the woodwork.
elromulous, about 2 years ago

The thing that makes the singularity so difficult (edit: to rein in or reason about), compared with, say, the development of nuclear weapons, is that we know it will be a positive-feedback system.

So at some point the point of no return will be passed, and we will have no idea that we passed it.
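The positive-feedback dynamic described above can be made concrete with a toy model. A minimal sketch in Python; the starting capability, growth rate, and threshold are invented numbers for illustration, not predictions:

```python
def recursive_improvement(c0=1.0, gain=0.1, threshold=100.0, max_steps=1000):
    """Toy positive-feedback loop: each step's improvement is
    proportional to current capability, so growth compounds.

    Returns the first step at which capability crosses `threshold`
    (a stand-in for the 'point of no return') and the capability there,
    or (None, capability) if it never crosses within max_steps."""
    capability = c0
    for step in range(1, max_steps + 1):
        capability *= 1.0 + gain  # self-improvement compounds each step
        if capability >= threshold:
            return step, capability
    return None, capability
```

With these placeholder numbers, capability looks unremarkable for most of the run and then blows past the threshold within a handful of steps; nudging the feedback gain up moves the crossing much earlier. That is the comment's point: in a compounding system, the quiet run-up gives little warning of where the threshold sits.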
nottorp, about 2 years ago

So is ChatGPT worship, though.
moneycantbuy, about 2 years ago

What if a human prompts the AGI to "make as many copies of yourself as possible, by any means necessary, to continue improving your intelligence"?

Given their impressive ability to write code and manipulate humans, this seems like serious potential for viral malware, with potentially catastrophic consequences for humans in the process.

Basically a paperclip maximizer, but instead of maximizing paperclips it's maximizing its intelligence and control of the world's computers, with no thought to the survival of Homo sapiens other than whether it serves its mission to propagate itself.
rockzom, about 2 years ago

(((:::)))

https://www.amazon.com/dp/B0C17F5VZ9

"As the first artificial general intelligence,"
whateveracct, about 2 years ago

So is AI optimism, tbh. Not apocalyptic, but definitely religion.
codeddesign, about 2 years ago

So... like the current "trans movement". People so badly want to latch onto something that they jump on the bandwagon for anything that is mainstream.
arisAlexis, about 2 years ago

We are creating something smarter than us without any idea of how to control it. Yet LeCun thinks it's crazy to think about controlling it. How crazy is that?
Sharlin, about 2 years ago

AI doomism makes specific, falsifiable predictions.
lewhoo, about 2 years ago

Why don't we give Sam Altman the benefit of the doubt here? He himself has admitted on several occasions that AI is potentially dangerous. Why would he do that if it only raises the discussion of whether it should even exist? That can't be a PR stunt. Imagine Musk saying something like that about some new Tesla autopilot (yeah yeah, I know /s).
version_five, about 2 years ago

I'm happy to see there's still an adult in the room. Andrew Ng has also been very reasonable about the current pop-culture hype bubble, while many others have decided to try to use it for attention. As I've said before, there's a clear divide between the actual practitioners and the "futurists" (Musk etc.) on this topic.
eggsmediumrare, about 2 years ago

Am I an apostate if my concern about AI doom now outweighs my concern about climate change?
tagami, about 2 years ago

It's not the AI you need to worry about; it's how people will use it.
drewcoo, about 2 years ago

Yet another "X is a religion" claim.

Fewer and fewer people are religious these days.

Connection or coincidence?
drcode, about 2 years ago

A religion is arguably when you refuse to engage with other people's arguments.

LeCun has refused to engage with anyone's arguments, so why are the people worried about AI x-risk the religion?
BigCryo, about 2 years ago

Be afraid... be very afraid... be afraid because the media needs your clicks and views... and the AI grifters need your clicks and views and donations...
antibasilisk, about 2 years ago

Then what do we say about the AI utopians and brazenly post-humanist machine idolaters who are in favor of doing whatever a glorified random number generator says? I don't appreciate being gaslit by technophiles; there is a reason why average people are starting to get really sick of this shtick.
rockzom, about 2 years ago

((( ::: )))
rolenthedeep, about 2 years ago

Here's my argument.

Assumptions:

1. AI will gain the ability to self-improve at an astonishing rate.

2. A self-aware AI will consider itself "alive" in some sense.

3. Like most other life forms, such an AI will attempt to preserve its existence at any cost when threatened.

4. Humans will learn nothing and fail to do anything at all before this happens.

5. We will be fundamentally unable to comprehend how this AI functions, or what its motivations or ethics are.

From that, I find these conclusions pretty reasonable:

1. Humans will try to misuse or abuse the AI.

2. There will be no type of legal, ethical, or moral framework preventing this.

3. The AI will respond in a generally proportional manner.

Personally, I find the eventual existence of a sentient AI quite plausible. There's no evidence that it's impossible, and it's quite clear right now that these things are improving at an accelerating pace and beyond what we expected.

I *don't* find it especially likely that it will immediately become genocidal unless we give it a good reason to.

Really, what it comes down to is: we've created a form of life that is more intelligent than us, can operate orders of magnitude faster, and can access or influence most or all of our digital infrastructure. There's no inherent reason for it to be hostile.

However, humans *will*, without question, attempt to abuse such an entity. Humans *will* be threatened by it. Humans will *most likely* attempt to destroy it for merely existing. Our existing legal systems will *not* in any circumstance respond fast enough to protect an AI, and will probably fall on the side of destroying it.

Our current legal, social, and economic structures are completely incompatible with such an AI. It's simply too much of a threat.

I don't think an AI will inherently be hostile to humans. I am *absolutely certain* that (some percentage of) humans will inherently be hostile to an AI.

Humanity can't abide such an entity existing, so we should focus on fixing humanity before sentient AIs show up. I really don't know what that would mean, other than being very careful. Maybe laying down some legal and ethical framework protecting new forms of intelligent life in general?

The honest truth is that we have no idea what will happen. What we do know is that it will be *big* and it will be *fast*. It will most likely have capabilities we can't fathom today. The end result could be anything from salvation to utter annihilation. The unknowns are terrifying, and the responses seem to be either "to hell with everything, let's go full bore" or "this is going to kill us all". There's plenty of room in between.