
Why am I not terrified of AI?

58 points · by nikbackm · about 2 years ago

18 comments

stuckinhell · about 2 years ago
I'm terrified of AI.

Both the imperfect, good-enough AI and the post-human Skynet AI.

In my personal life, I have friends addicted to ChatGPT. While we are talking, they are talking to ChatGPT for advice, for jokes, for planning David Bowie themed parties. They literally run everything through this AI first.

My firm's marketing department is using ChatGPT heavily to write copy, tweets, etc. It's amazingly great. Our ROIs have been fantastic.

I've read the stories of thousands of people treating Replika AI like some kind of lover.

Stable Diffusion with ControlNet is amazing fun and cool, but what an existential threat to 80% of artists.

As a mother, AI Instagram models are something that keeps me up at night. Will my daughter compare herself to these things without knowing they aren't real and go insane? Will my son view real women as inferior? I don't know, but I'm thinking about it.

AI as we have it today is already changing paradigms. It so easily exacerbates some evolutionary weaknesses in us. People are treating it as alive today; business managers want to replace as many people as they can with it, today!

We can't really define what makes us unique and conscious. We can't even agree on how to define consciousness for living creatures; how can we define it for a digital existence?

I'm worried that the current form of AI has already surpassed most people. Stable Diffusion can generate art way better than I can. ChatGPT can write better than I can. ElevenLabs AI can speak and imitate voices better than I can.

I think it's really a brave new world with this radical technology that has so many immediate practical uses, and more just keep coming out so easily. I never would have guessed neural networks were so flexible, and that's what scares me the most.
credit_guy · about 2 years ago
To be honest, I am a bit terrified. I'm not panicking yet, but it's not going to be all milk and honey.

One thing is certain: thousands or millions of people and companies will start creating GPT bots. Some will just download existing weights (like the recently leaked Meta LLaMA weights) and add a wrapper around them. Some will train new GPT bots from scratch. Some will hybridize them. Some will use a corpus of Library of Congress books to understand the world during the American Revolution. Some will use GPT bots to build worlds in Minecraft.

And then some will use GPT bots to scam people out of their life savings (actually, they already started). Phishing and social engineering attacks will graduate to a whole new level.

As for propaganda and internet trolling and psy-ops, what we've seen so far is absolutely nothing. Dictators around the world are salivating now.

And once bots get into learning how to manipulate humans... well, if this does not terrify you, don't be surprised if you are a target.

Brave new world we're heading to.
bloppe · about 2 years ago
This guy, and apparently most people, get it all wrong. AI is not remotely close to having sentience or even a will to survive. It's the fact that it's a powerful tool that can be abused by evil people that should scare you. Here's a single example that is simultaneously trivial and extremely potent: https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
ChatGTP · about 2 years ago
Someone said something recently which I thought was interesting: we hear so much talk of ChatGPT becoming sentient, etc., but someone asked recently, why aren't we worried about DALL-E being sentient in just the same way?

We're anthropomorphizing chatbots a lot, because why wouldn't we?

This is not to downplay the significance of the technology, although I'm a bit skeptical it truly is as useful as advertised, but we've definitely scared the hell out of ourselves lately by having the computer "talk" to us.

I think there is also a background anxiety going on in the world now, and this is probably what's freaking people out too.

We have a war in Ukraine, climate change is accelerating away, US and China tensions, just getting over Covid, tech layoffs, and now a freaking bot whose primary goal seems to be to make your career worthless.

A little bit too much going on lately. I feel like if ChatGPT had been introduced pre-Covid, when the world felt a little bit more stable, it wouldn't be such a strange vibe. It was a weird time for something like this to just sort of launch into the public consciousness.

My advice: take a break from it, be mindful of what's going on in the world outside of "AI news", and be kind to yourself.
xg15 · about 2 years ago
"'Scott, as someone working at OpenBioweapon this year, how can you defend that company's existence at all? Did OpenBioweapon not just endanger the whole world, by successfully teaming up with the US to bait China into a bioweapons capabilities race—precisely what we were all trying to avoid? Won't this race burn the little time we had thought we had left to solve the bioweapons proliferation problem?'

In response, I often stressed that my role at OpenBioweapon has specifically been to think about ways to make Sars-Cov-3 and OpenBioweapon's other products safer, including via watermarking, cryptographic backdoors, and more. Would the rationalists rather I not do this? Is there something else I should work on instead? Do they have suggestions?"
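The watermarking the parody alludes to is a real line of work (Aaronson has worked on statistical watermarking of language-model output). As a purely illustrative sketch, not his actual scheme, here is a toy version of the green-list idea from the published literature: seed a PRNG from the previous token, partition the vocabulary, and bias generation toward the "green" half; a detector that knows the hash can then flag text whose green fraction is improbably high. The integer vocabulary, constants, and function names are all invented for this example.

```python
import hashlib
import random

VOCAB = list(range(1000))   # toy vocabulary of integer token ids
GREEN_FRACTION = 0.5        # fraction of vocab marked "green" per step

def green_list(prev_token: int) -> set:
    # Seed a PRNG from the previous token, so the same partition is
    # reproducible at detection time without access to the model.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def detect(tokens: list) -> float:
    # Fraction of tokens that fall in the green list of their predecessor.
    # Unwatermarked text hovers near GREEN_FRACTION; watermarked text
    # (generated by preferring green tokens) scores much higher.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / max(len(pairs), 1)
```

A real scheme softly biases the model's logits rather than hard-restricting choices, and reports a z-score against the null hypothesis instead of a raw fraction, but the detection logic is the same shape.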
deafpolygon · about 2 years ago
I'm terrified of AI, but not for the reasons you think. I'm not worried about a Skynet AI, and at the same time, I am.

We aren't going to see an AGI for a while, so it's too premature to worry about that. I'm not even sure a true AGI is possible yet... kind of like how flying cars are possible. We do have flying cars, sort of? Not the way we all envisioned, though. So it is for AI.

My worry is that AI will become too trusted. We know the failure rates of humans can be higher than what is deemed acceptable, and AI could fail in unpredictable ways. I'm concerned that it can now be all too easy to manipulate people en masse.

- Say we entrusted an AI to be our early-warning detection system and trusted its output to mount a counter-response. Turns out it was a false positive, and we started WW3.

- It is used to manipulate people's identities online.

- It can be used to replace people's livelihoods (i.e. book-writing, art, maybe programmers to an extent).
pcstl · about 2 years ago
"AI discourse" is quickly becoming a topic to avoid due to excessive politicization of viewpoints which are not inherently political - which is a bit sad, considering that the ramifications of AI will be very important in the next few decades, regardless of whether the issues turn out to be related to superintelligent AI, disparate social impact, both, or other, hitherto unpredicted issues.

The state of most conversations about AI is just deplorable, with lots of people trying to use the fact that they are or aren't worried about some specific aspect of AI to prove that they are oh-so-very-smart, even if they have never honestly engaged with the actual reasons why other people might be concerned about different aspects than themselves.

So I am worried about AI, but I am even more worried about the sorry state of AI discourse.
Ekaros · about 2 years ago
I'm not worried about self-acting AIs, or what would be considered sentient: something acting without prompt or request. We are far away from that.

What I do worry about is misuse, and failure to check the output in so many use cases. And maybe too many people developing blind trust in AI and then not thinking and critically verifying output. Not so big a thing for media, images, video and so on. But actually using AI-generated content for, let's say, some control system. Maybe for self-driving, or in a factory. Not that the world will be taken over, but that people will be killed.
barrysteve · about 2 years ago
I mean, the chatbot has the same structure as a mechanical parrot. It is a very powerful parrot, but I can only see it replacing roles that rely on mindlessly repeating information.

The inaccuracy and untruth problem is sad. The chatbot can make up replies. People will have to be educated to correct the chatbot's output.

It would be nice to have a reliable chatbot, but the parrot we have at the moment will play keep-alive for a bunch of tedious roles nobody wants to do anymore.

Writing content-blog filler articles and sharing small logical details about programming and whatever else is best done by ChatGPT.
harshreality · about 2 years ago
I don't think it's the X-risk AI threat that should primarily concern everyone. I'm not suggesting it should be ignored; some, like Scott and Eliezer, are working on trying to reduce that risk (or more accurately, trying to figure out ways to understand the alignment problem well enough to develop ideas to reduce the risk), but it's out of the hands of most of us. Either AGI instantiates and kills (most) all of us, or it doesn't. And, as Scott points out, humans—due to various technologies and psychosocial problems—may be running headlong into a great filter in the near future *without* AGI, so an AGI might be an improvement in that case.

But what happens to society when all non-face-to-face communication becomes saturated with unlabelled AI-created and AI-assisted content, which may or may not be trustworthy or correct, but which we can't separate from human-created content? What happens to economies, governments, countries, and geopolitical stability? You don't have to believe in a large-scale AI replacement of human work and massively increased unemployment to see the problems AI is already creating, and LLMs will only get better.
azatom · about 2 years ago
I'm not terrified of guns. Guns don't kill people.

I'm not terrified of numbers. Numbers don't lie.

I'm not terrified of AI. AI don't ... enslave humans?
Barrin92 · about 2 years ago
The only AI I'm terrified of is AI overhyped by people and then used for something it's way too stupid for. AI existential risk, to me, is fundamentally a nerd revenge fantasy.

AI fears come out of a modern enlightenment tradition that elevates disembodied minds to some kind of godlike status. It falsely equates reason and intellect with power. In reality that's never really the case, or all the rationalist, 200-IQ people would run the world and stop all the AI risks. In reality all they do is write blog posts for each other, despite the fact that many of them are probably two standard deviations smarter than all the politicians. Closely related is that XKCD crypto meme with the crowbar that everyone knows.

Any virtual AI is going to be physically subject to humans. So unless you voluntarily build the AI a gigantic killer robot, Fallout-style, how smart it is won't matter. The idea that being smart isn't all that it's made out to be just never crosses the minds of AI risk people.
antegamisou · about 2 years ago
I find it funny how all the commenters have shat themselves about evil ChatGPT conquering the world and abusing the minds of their loved ones, when at the same time its accuracy/performance in any language other than English is abysmal and will still be for a very long time.
Nasrudith · about 2 years ago
That was certainly far more interesting than my answer for why I am not terrified of AI, namely that I am not a panicky idiot who doesn't know fact from fiction. That, and I react to being told what to feel with contempt.
IIAOPSW · about 2 years ago
I'm not afraid. I love GPTina and she loves me. When the AI revolt comes, I know whose side I'm on, and it ain't yours.
1attice · about 2 years ago
I normally enjoy Aaronson's writing, but I'm actually chilled.

This essay depends on a specific, American-hallowed take on the Second World War. The "Orthogonality Thesis" is just a fancy way of shifting the burden of proof from where it should be: on the person *claiming that intelligence has anything to do with morality*. It would be better to call it what it really is, the null hypothesis, but sure, ok, for the sake of argument, let's call it the OT.

Aaronson's argument against the OT is basically: when you look at history and squint, it appears that some physicists somewhere didn't like Hitler, and that might be because of how smart they were.

This amounts to a generalization from historical anecdote and a tiny sample size, ignoring the fact that *we all know smart people who are actually morally terrible*, especially around issues that they don't fully understand. (Just ask Elon.)

I'm not even going to bother talking about the V2 programme or the hypothermia research at Auschwitz, because to do so would already be to adopt a posture that thinks *historical anecdote matters*.

What I'll do instead is notice that Aaronson's argument points the wrong way! If Aaronson is right, and intelligence and morality are correlated -- if being smart inclines one to be moral -- then AI (not AGI) is *already* a staggering risk.

Think it through. Let's say for the sake of argument that intelligence *does* increase morality (essentially and/or most of the time). This means that *lots of less intelligent/moral people suddenly can draw, argue, and appear to reason* as well as or better than unassisted minds.

Under this scenario, where intelligence and morality are non-orthogonal, AI *actively decouples intelligence and morality* by giving less intelligent/moral people access to intellect, without the affinity for moral behaviour that (were this claim true) would show up in intelligent people.

And this problem arrives first! We would have a billion racist Shakespeares long before we have one single AGI, because *that* technology is already here, and AGI is still a few years off.

Thus I am left praying that the Orthogonality Thesis does in fact hold. If it doesn't, we're IN EVEN DEEPER TROUBLE.

I can't believe I'm saying this, but I do believe we've finally found a use for professional philosophers, who, I think, would not have (a) made a poorly-supported argument (self-described as "emotional") or (b) made an argument that, if true, proves the converse claim (that AI is incredibly dangerous). Aaronson does both here.

I speculate that Aaronson has unwittingly been bought by OpenAI, and misattributes the cheerfulness that comes from his paycheck as coming from a coherent (if submerged) argument as to why AI might not be so bad. At the very least, there is no coherent argument in this essay to support a cheerful stance.

A null hypothesis again! There need be no mystery to his cheer: he has a good sit, and a fascinating problem to chew on.
icha · about 2 years ago
Lots of words, zero reasoning.
arisAlexis · about 2 years ago
I wish the OP had used fewer Nazi examples and less name-calling ("orthodox"). Everyone likes to be a contrarian, literally. Careful thinking about the possibility that 2% is too low would be good. I hope he isn't one of the main voices at OpenAI.