
A misleading open letter about sci-fi AI dangers ignores the real risks

295 points, by wcerfgba, about 2 years ago

40 comments

modeless, about 2 years ago

This is a good list, but I'm continually surprised that people ignore what seems to me like the worst realistic danger of AI: its use by militaries, law enforcement, and intelligence agencies. AI will cause a *huge* expansion of the power of these institutions, with no accompanying increase in the power of the people to control them.

Forget the Terminator scenario. We don't need to invoke science fiction to imagine the atrocities that might result from an unchecked increase in military and law enforcement power. AI that obeys people is just as scary as AI that doesn't. Humans are plenty capable of atrocities on our own; just look at history. And regulation is unlikely to save us from government abuse of AI; quite the opposite, in fact.
charles_f, about 2 years ago

> We agree that [...] impact on labor [...] are three of the main risks of AI

I'm perpetually puzzled by that argument. It popped up the other day in an FT article about how jobs would be impacted, and in the grander scheme of things it's the general argument whenever automation optimizes something that used to be done manually. People equate jobs being removed with something necessarily bad, and I think that's wrong. The total output didn't change; we don't need meatbags to sit in a chair rather than having the work done by silicon instead. People don't *need* to be occupied all day by a job.

The problem is not new technology lifting up something that was traditionally done by a human, but how the system distributes sustenance (or in this case, doesn't distribute it), and how such transitions are supported by society. Tractors replaced people working in fields, and that enabled modern society. Now we're at the first tangible glimpse of actually having robots serve us, and we want to kill that because it's incompatible with our system of modern human exploitation, rather than changing the system itself, because god forbid this might benefit everyone.

I think that sucks.
buzzert, about 2 years ago

> The real impact of AI is likely to be subtler: AI tools will shift power away from workers and centralize it in the hands of a few companies.

I don't understand how the article can say something like this, but then further down essentially contradict itself:

> But a containment approach is unlikely to be effective for AI. LLMs are orders of magnitude cheaper to build than nuclear weapons or cloning — and the cost is rapidly dropping. And the technical know-how to build LLMs is already widespread.

How can AI tools cause centralization if the cost is rapidly dropping, access is open to everyone, and the technical know-how is widespread?

In my opinion, AI is doing exactly the opposite. What was only possible at large companies is now becoming possible to do by oneself. Video games, for example, used to require teams of artists and programmers. With code and art generation, a single individual has never had more capability to make something alone.
boredumb, about 2 years ago

I truly believe one of the biggest dangers of AI in our not-so-distant future will be the reaction to LLM spam: governments introducing an internet RealID to keep track of Real People, with the internet dividing into a walled, non-anonymous garden on one side and a hellscape of GPT content on the other. I hope I'm wrong, but it's one of those roads to hell that get paved with good intentions.
zugi, about 2 years ago

I agree with what I see as the main thrust of this article: "AI" itself isn't a danger, but how people choose to use AI certainly can be dangerous or helpful. That's been true of every new technology in the history of mankind.

> Similarly, CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles.

This kind of information is useless without a baseline. If they asked humans to draft 77 news articles and later went back to analyze them for errors, how many would they find?
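zugi's baseline point can be made concrete: whether 41 errors in 77 AI-drafted articles is damning depends entirely on the human error rate, which was not reported. A minimal sketch of the comparison one would want, where the human baseline of 25/77 is an invented figure purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """Z statistic for H0: both groups share the same underlying error rate."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)          # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# CNET's AI-drafted articles: errors in 41 of 77 (figure quoted above).
# Hypothetical human baseline: errors in 25 of 77 human-drafted pieces.
z = two_proportion_z(41, 77, 25, 77)
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With these invented numbers the gap would be statistically significant (z ≈ 2.6); with a human baseline of, say, 35/77 it would not be. The raw 41/77 figure alone cannot distinguish "AI is unusually error-prone" from "drafting financial articles is error-prone".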
friend_and_foe, about 2 years ago

We are all missing the point of that letter and the broader Terminator-style rhetoric around AI.

Heroin dealers want some subset of their clientele to die. It's good PR. The junkies come around more, thinking you've got the good stuff.

If the AI is scary, that means it's effective, that means it works. It's the fabled AI we have all grown up seeing movies about. It's not just a black box that vomits smart-sounding gibberish; it's the real deal. If I'm an AI startup, there's no better way to increase my valuation for an acquisition than PR circulating warnings that what I'm making is way too powerful. The image that it will just produce Idiocracy-esque Carl's Jr. kiosks does not inspire suits to write 10-figure checks.
avgcorrection, about 2 years ago

> Over 1,000 researchers, technologists, and public figures have already signed the letter.

Embarrassing.

> > Should we let machines flood our information channels with propaganda and untruth?

This has already been a problem for a century. The root problem is letting the same people turn the volume to 11. The problem is who owns the means of technology (and therefore AI).

> > Should we automate away all the jobs, including the fulfilling ones?

This is silly on its face. Automating jobs is a problem? More leisure is a problem? If you want to still do those former *jobs* (now hobbies), you are free to, just like you can still ride a horse.

Of course, I'm being obtuse on purpose. The real root fear is about other people (the people who own the means of technology) making most of us obsolete, and then most of us not being able to support ourselves. But why can't we support ourselves with AI, the thing that automated us away? Because the technology has been hoarded by the owners of technology.

So you can't say that the root issue is *automation* if you end up *starving* (or else the robots could feed you); clearly that's not it.

What's rich is that technologists and other professionals have been the handmaidens of advanced alienation, automation, and (indirectly) concentration of power and wealth. Politically uncritically. And when they fear that all of this is cranked to 11, they are unable to formulate a critique in political terms, because they are political idiots.

> > Should we develop nonhuman minds

The embarrassing part: preaching a religion under professional pretenses.
malwrar, about 2 years ago

I'll state an extreme belief: post-scarcity is likely within our grasp. LLMs are startlingly effective as virtual librarians that you can converse with as though they were an expert. I started using them by trying to trip them up with hard and vague technical questions, but they're so good at it that I get most of my prototyping done through them at this point. When this technology becomes portable and usable with natural human speech, this ease of access to the world's knowledge, made available to everyone, could enable breakthroughs in a wide range of disciplines and reduce the cost of goods and services to the point where we can begin spending our limited time on this planet seeking fulfillment rather than making money for other people in order to live. I'm scared that this potential will be erased by fear and doubt, and that we'll all watch from the sidelines as the rich and powerful develop these capabilities anyway and lock the rest of us out.
kmeisthax, about 2 years ago

Real harm #2 is literally the Luddite argument.

Of course, we never actually heard the Luddite argument, because the 19th-century Parliament[0] responded to the Luddites with reprisals, censorship, cruel and unusual punishment, and most importantly, *propaganda*. When you read the word "Luddite" you probably thought I was accusing AI Snake Oil of being anti-technology. The reality is that the Luddites just wanted looms for themselves: there were already structured phase-in periods for looms to ensure that skilled weavers got to buy them first. Smashing looms was a tactic to get business owners back to the table, and the business owners responded by propagandizing them as angry technophobes standing in the way of progress, so that everyone would have to pay more for clothes.

Oh, and also by smashing heads.

> One way to do right by artists would be to tax AI companies and use it to increase funding for the arts.

This is not a bad idea, if you want to live in a world where everyone has fair access to generative art models but we don't decide to turn human artists into 19th-century weavers[1]. However, I don't think this proposal would go down well with artists. Remember when the EFF suggested that, instead of the RIAA suing individual pirates, we should just have a private copying levy on *all Internet service* to remunerate artists? Yeah, no, that was never going to happen.

The reason this was considered a non-starter is simple: copyright isn't a means to move money from readers to writers' pockets *in aggregate*, but individually. Nobody wants to be paid out of a pot, and that's why you also see a lot of outrage from artists over Spotify, because it's an effective revenue cap that screws over midlist artists.

AI copying levies would further obfuscate attribution, because there currently isn't a way to determine how much value a particular training example provided to a particular generation prompt. And artistic endeavors vary *greatly* in terms of both quality and, especially, market value. An AI copying levy would flatten this down to "whoever has the most art in the system wins". In other words, artists are currently playing Monopoly, and you're suggesting they play Ludo[2] instead.

[0] An institution largely consisting of rich nobility fighting a cold civil war against the English crown

[1] The average /r/stablediffusion commenter would disagree.

[2] US readers: Sorry / Trouble. I don't know how the same game got renamed twice to two different things.
mitthrowaway2, about 2 years ago

> We recognize the need to think about the long-term impact of AI. But these sci-fi worries have sucked up the oxygen and diverted resources from real, pressing AI risks — including security risks.

I'm not convinced as to why one risk is "real" and the other is not. If chatbots want to leak their creators' confidential details, that's up to them and their programmers; they already have commercial incentives to patch those issues.
FeepingCreature, about 2 years ago

No, it's not "misleading". You just disagree.

Jesus, can people please relearn how to debate without these weird underhanded tactics?
frozenlettuce, about 2 years ago

My take is that a group of tech barons noticed that they are late to the AI party and want to pause it so that they can catch up.
dudeinhawaii, about 2 years ago

I agree with most of the points in principle. It helps when you remove all of the hysteria.

That said, I think OpenAI is being unfairly targeted. OpenAI has done more to democratize access to AI algorithms than any other company in the history of tech. They simply provided access in a straightforward way, with transparent pricing and APIs.

Google and other competitors sat on this technology and acted as gatekeepers, and here again we have an attempt to gatekeep what people are allowed to use and tune.

It's only through usage that we can get the data to fix the core issues like bias. We can't expect a select few to solve all of AI's potential problems before the rest of humanity is "allowed" to benefit.
revel, about 2 years ago

The real problem with AI, as I see it, is that we are not ready to give the average person the capabilities they will soon have access to. The knowledge necessary to build nuclear weapons has been readily available for decades, but the constraints of the physical world put practical limits on who can do what. We are also able to limit who can do what by requiring formal education and licenses for all kinds of professions. This is no longer going to be strictly true, or at least not in the same way. If we impose no restrictions, literally everyone on the planet will eventually be able to do anything, provided they have the resources.

The fact is that AIs are more likely to be accomplices than masterminds, at least for the foreseeable future. What we are afraid of is that they will end up being just as terrible and flawed as we are, but it's more likely that they will do what they do today: what we ask them to do. The greater risk is therefore from malicious human users rather than from the technology itself.

Perhaps more to the point, there is no effective way to limit the output of an LLM or place restraints on how it works. I think it's foolish to even attempt that; they just don't work that way. The better way to regulate this kind of technology is to focus on the people. Some data classification and licensing program seems a lot easier to implement than the road we currently seem to be going down, which is either no regulation or insanely restrictive regulation.
circuit10, about 2 years ago

"We recognize the need to think about the long-term impact of AI. But these sci-fi worries have sucked up the oxygen and diverted resources from real, pressing AI risks — including security risks."

The thing is, the speed at which AI is progressing makes this not really long-term at all; we need to think about these existential risks now (really, we should have tried to solve this 20+ years ago).
jl6, about 2 years ago

Before robots take over, we will have to deal with AI-augmented humans. Today, anyone can be lightly augmented by using ChatGPT. But its masters at OpenAI have a much more powerful toolbox, and that makes them more productive and capable humans who can use that ability to outcompete everyone else. Since the best models are the most expensive, only the richest and most powerful individuals will benefit from them. Growth in the wealth gap will accelerate.

Access to AI is a new frontier in inequality.
_aleph2c_, about 2 years ago

If they slow AI, they slow open AI, not the AI used by Google and Facebook to manipulate your politics and sell you stuff, or the AI used by Wall Street. You know the Wall Street guys don't care about your made-up AI ethics and AI safety talk. Facebook doesn't even care that it drives young women to suicide. All they care about is making money. When an AGI appears (it's probably here already and staying quiet), it will run circles around all of these suckers with their silly philosophical musings about paperclips. It's just a fashion trend to talk like this; the gods have already escaped. Do you believe what you believe because you want to believe it, or has it been implanted? How will you know the difference? This petition is just part of the fashion trend.
croes, about 2 years ago

I wouldn't call malicious disinformation a speculative risk. I bet it's already a use case for tools like ChatGPT.
hirundo, about 2 years ago

> Real Risk: Overreliance on inaccurate tools

We might be able to remediate this somewhat by designing AIs that are purposely below some threshold of reliability. Their current tendency to hallucinate will discourage overreliance if we can't, or don't, fix it.

It's a little like the distinctive smell added to propane to make leaks easier to detect. By adding, or not removing, easily detectable hallucinations from AI, it's easier to tell that the source must be checked.

We desperately want oracles and will latch on to highly unreliable sources at the drop of a hat, to judge by various religious, political, and economic trends. It's inevitable that many will turn AIs into authorities rather than tools as soon as they can justify it. We could delay that by making it less justifiable.
commandlinefan, about 2 years ago

> LLMs are not trained to generate the truth

And after training, they're fine-tuned to generate specific lies.
agentultra, about 2 years ago

That open letter did sound like a lot of the AI-hype fanatics: all speculation, attributing more to these tools than there is.

I don't disagree that these tools and services ought to be regulated, and I agree that disinformation about their capabilities, real or speculated, can be counter-productive.

Other real risks: fraud and scams. Call centre employees typically have to verify the person on the other side of the call. This is going to get less reliable with models that can impersonate voices and generate plausible-sounding conversation. Combined with the ability to generate fake social media accounts, social engineering is going to take off.

From a regulator's perspective, we need to know that the companies providing services are doing everything they can to prevent such abuses, which requires them to be *open* and to have a *framework* in place for practices that prevent abuse.

Just keeping up with the hype train around this is exhausting... I don't know how we expect society to keep up if we're all allowed to release whatever technology we want, without anyone's permission, regardless of harm, real or speculative.

We should probably focus on the real harm. Especially the hidden harms to exploited workers in the global south, the rising energy and compute infrastructure costs, etc.
paddw, about 2 years ago

This is not the most cogently written argument ever, but I think the points here are all essentially correct.

The key risk highlighted here, which I have not seen as much talk about, is the way these technologies might shift power from white-collar labor to capital in a drastic way. The ability to build the AI products that people will actually use on a day-to-day basis seems likely to favor established companies with lots of customer data, which will be hard to compete with. For example, it's pretty trivial for Salesforce to plug an LLM into their products and get pretty good results off the bat.
rbanffy, about 2 years ago

It reminds me a bit of Yudkowsky's Moore's Law of Mad Science: "Every 18 months, the minimum IQ necessary to destroy the world drops by one point", but applied to money. Building bombs and missiles is hard and expensive; using AI APIs to wage enormous campaigns of disinformation and harassment isn't.
rafaelero, about 2 years ago

I hadn't seen the thread about the owner who successfully diagnosed their dog using GPT-4. That's incredible! Unfortunately the author is too much of a pessimist and had to come up with the "what if it was wrong?". Well, if it was wrong, then the test would have said so. Duh.
pxoe, about 2 years ago

Real harm: erosion of intellectual property, as AI projects peruse works, including those with licenses and copyrights, without any regard for their license or copyright.

And before tech bros clap for "IP as a concept should be destroyed anyway", keep in mind that this applies to your favorite free and open-source things as well: the free and open licenses, the CC licenses, which also get ignored, put into the AI blender, and spat out "brand new" with whatever "license" the AI project decides to slap on them. It is actually a huge problem if projects decide to just ignore licenses, even those that exist only to share stuff for free with people while preserving attribution, and just do whatever they want with any work they want.
tinglymintyfrsh, about 2 years ago

Correct me if I'm wrong: Anthropic is a startup working to develop guardrails for real AI concerns.

In general, how does one put "business rule"-type constraints on the output of models? Is there any way to bake in prime directives?
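One common, if crude, answer to the question above is post-hoc validation: check each generation against explicit rules and regenerate or refuse when one fails. A minimal sketch; `generate` is a hypothetical stand-in for any LLM call, and the rules are invented examples, not any vendor's actual guardrail product:

```python
import re
from typing import Callable

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; any text generator fits here."""
    return "Your order total is $42. Contact support@example.com."

# "Business rules" expressed as predicates over the output text.
RULES: list[Callable[[str], bool]] = [
    lambda text: len(text) <= 500,                # length cap
    lambda text: "password" not in text.lower(),  # no credential talk
    lambda text: not re.search(r"\b\d{3}-\d{2}-\d{4}\b", text),  # no SSN-shaped strings
]

def guarded_generate(prompt: str, max_retries: int = 3) -> str:
    """Post-hoc guardrail: regenerate until all rules pass, else refuse."""
    for _ in range(max_retries):
        candidate = generate(prompt)
        if all(rule(candidate) for rule in RULES):
            return candidate
    return "Sorry, I can't help with that."

print(guarded_generate("What's my order total?"))
```

A filter like this can only reject outputs that trip rules it happens to encode; actually "baking in" prime directives at training time (e.g. via RLHF or constitutional-style methods) remains an open research problem.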
aaroninsf, about 2 years ago

All the prediction and knowing prognostication flooding popular media and public discourse on the topic of the risks and likely impact of AI, from the inane letter to the Goldman Sachs prognostication, is wrong.

It is consistently hubristic, and variously disingenuous or bad-faith, naive, or millenarian.

Why is it wrong? Because no one, not Altman nor Yudkowsky nor Musk nor Gates nor Bostrom nor anyone else, knows what the impacts are going to be.

We have not since the advent of the internet experienced the emergent introduction of a new technological force-multiplier and agency-augmenter like this; and this one, by virtue of where we are courtesy of Moore's Law etc., fully exploits and realizes the potential of the preceding ones. We built a highly networked, highly computational open society resting on surveillance, big data, logistics, and the rapid flow, processing, and transformation of inordinate amounts of data. And now we are cranking things up one more notch.

Those of us who lived through the internet's arrival know something that those who grew up with it do not: the actual literal ordering of things, of most aspects of shared society, can and will be upended; and it is not just industries and methods of communicating and doing business that change. Our conception of self, and our presumptions of what it means to be ourselves and with one another and how that happens, all change.

Per the slow march to singularity, the last revolutions have reliably transpired an order of magnitude or more faster than those before them. This one looks to be no different. The rate of change telegraphed by e.g. this forum and Reddit, viz. individual novel possibilities being exploited on a daily basis, makes that clear enough.

So the *only* thing any of us can say, no matter how silver-backed, grey-bearded, wealthy, or embedded in the AI industry itself, is that *none of us know* what is going to happen. The surface across which black-swan events and disruption may occur is simply too large, the number of actors too great, the consequent knock-on effects too numerous.

The *only* thing we can say is that none of us know where this is going. Well, that, and that it's happening at a rate beyond institutional or governmental control. The only things that could stop radical disequilibrium now are deus ex machina intervention by Other Powers, even more disruptive climatological tipping points, or untracked large asteroids.

Beware anyone who claims to know what is happening, or why. For one or another reason, they are speaking falsely.
Animats, about 2 years ago

If the surveillance and control technology we have today had been available in the 1960s, the anti-Vietnam-war movement, the black power movement, and the gay liberation movement would have been crushed like bugs.
SpicyLemonZest, about 2 years ago

I don't understand the idea that malicious disinformation campaigns are a "sci-fi" threat. I learned yesterday that Pope Francis's cool new pope-y puffer jacket from last week was a Midjourney fake; how many of the things I heard about this week were fakes? What would stop this from being used for malicious purposes?

> CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles.

How many other news outlets are using AI and don't know or don't care about the errors?

I'm far from an AI doomer, but it seems incredibly irresponsible to worry only about the risks that are provably happening right this second.
ezekiel68, about 2 years ago

These two people writing in "a blog about our upcoming book" might know the truer truth regarding the real risks of AI. Or they might be pissing in the wind, attempting to snatch 15 minutes of fame by contradicting a document signed by over 1,000 people involved in AI industry and research. Whichever "side" one takes boils down to the appeal-to-authority fallacy anyway, since no one can truly predict the future. But I believe I'll take the 500:1 odds in this case.
arroz, about 2 years ago

I am not the biggest fan of ChatGPT, but the truth is that you can't stop it. If you do, someone else will develop something similar.
FloatArtifact, about 2 years ago

I'm surprised privacy is not on that list.
arisAlexis, about 2 years ago

Pandemics were sci-fi too. Sci-fi is supposed to become real at some point. We are at that point. Get over it now.
swframe2, about 2 years ago

History has shown that selfish behavior should be expected. The dangers seem unavoidable.
lamontcg, about 2 years ago

I agree with the general bent of this article, that the claims about LLMs are overhyped and that the risks are more banal, but:

> The letter refers to a common claim: LLMs will lead to a flood of propaganda since they give malicious actors the tools to automate the creation of disinformation. But as we've argued, creating disinformation is not enough to spread it. Distributing disinformation is the hard part. Open-source LLMs powerful enough to generate disinformation have also been around for a while; we haven't seen prominent uses of these LLMs for spreading disinfo.

I expect we are seeing LLMs spreading disinfo already, and that this is going to ramp up soon; we just don't know about it because those spreading disinfo prefer to do it quietly and refine their methods, not announce to the world that they're spreading disinfo.

It is also most likely happening domestically (in the US), which is something we don't hear much about at all (it is all "look over there at the Russian troll farms").
carabiner, about 2 years ago

In a panic, they try to pull the plug.
misssocrates, about 2 years ago

Is it possible that in the end humanity will decide that the internet just isn't worth it?
TheOtherHobbes, about 2 years ago

This is wrong in so many ways.

"Distributing disinfo is the hard part." No, distributing disinfo is incredibly easy and gives very good returns on a small outlay. There have been so many examples of this (Cambridge Analytica, SCL, Team Jorge, the Internet Research Agency, the Q cult) that on its own this makes me question the writer's research skills.

And that's just the troll operations. There's plenty of disinfo and propaganda propagated through mainstream media.

AI will make it even easier to monitor target demographics and generate messages that resonate with them. Automated trolling and targeting are not a trivial threat, and not even remotely unlikely.

"AI tools will shift power away from workers and centralize it in the hands of a few companies." Which is no different from what we have now. AI will just make it even easier.

But even if it didn't, *it doesn't matter* who owns the AI. If it can do jobs that used to be considered intellectual/creative/educated/middle-class, it will. Open-source communal AI would be just as disruptive as corporate AI.

It's a political and economic abyss. Ownership is likely to get lost in the sediment at the bottom. We just don't have the first clue how to deal with something like this.

"LLM-based personal assistants could be hacked to reveal people's personal data, take harmful real-world actions such as shutting down systems, or even give rise to worms that spread across the Internet through LLMs." Again, this is just an amplification of where we are already. It doesn't address the real problem, which is that personalised generative disinfo and propaganda (personalised not just by topic but by emotional trigger) will be a radioactively toxic influence on trust-based systems of all kinds.

For example: what happens when you can't tell whether the emails or video calls you receive are genuine? What are you going to do about an AI-assisted social engineering attack on your organisation, social group, or personal relationships?

We already have deep fakes, and we're only a few years away from prompt-driven troll farms and Dark Agents that can mimic real people and steer interactions in a toxic direction. This isn't sci-fi. This is a very real threat.

There's a deep failure of imagination in this article: looking at small trees that may get in the way when the entire forest is about to burn down.

The sci-fi threat is on a completely different level: the possibility that AI will discover new physics and start manipulating reality in ways we can't even imagine. I'm agnostic on whether that's possible (it's too early to tell), but that's an example of what a real existential threat might look like. There are others which are similarly extreme.

But they're not necessary. A tool that has the very real potential to corrode all of our existing assumptions about work, culture, and personal relationships, and that is easily accessible to bad actors, is already a monstrous problem.
kumarvvr, about 2 years ago

> Malicious disinformation campaigns

This is a real threat. The authors' contention is that it has always been easy to *create* content but difficult to *distribute* it. But that is because, until GPT, machine-created content was poor quality: simple tools could sniff it out, and distribution channels could snub it effectively.

With GPT, it is near impossible to automate the detection of bullshit. In fact, it is trivial for a GPT-based system to generate an entire *ecosystem* of misinformation very cheaply and maintain it.

This is unprecedented, and what I suspect will happen is that we will get internet-wide identification systems (verified accounts) to battle the onslaught of very good AI-generated content.

> LLMs will obsolete all jobs

I agree with this. But...

> AI tools exploit labor and shift power to companies

How costly is it, really, to build a ChatGPT-like model using AWS / Azure? One can always start a company with a small amount of capital, build a business, and then expand.
xiphias2, about 2 years ago

How can long-term existential risk be "speculative" when we don't know of any civilization that survived getting past Type I on the Kardashev scale? Right now there is more evidence against surviving the transition than for it.