
Ask HN: What will be the first AI “incident”?

26 points by bikeformind over 2 years ago
We’ve heard everything from societal disruption to paper clip doomsday scenarios.

Progress is clearly accelerating.

We haven’t had any major incidents yet, right?

We’ve seen some mischievous disruption in education, confusion around “deepfakes”, social media algorithms getting a little too good, and are teetering on the edge of a paradigm shift in human knowledge.

But realistically, what will the first serious incident look like?

Would love to hear some well-reasoned predictions of historic firsts we might be in store for.

21 comments

sph over 2 years ago
A deepfake video and/or AI-generated speech will be published on some major politician's hacked Twitter account and will cause a momentary but significant geopolitical stir.

The world will scramble to find a way to authenticate any official content that's shared over social media. We have the technology, of course, but it is not comprehensible to the average person. Twitter will roll out a content verification system, and soon after fake content will manage to pass verification.

Malicious AI-generated fake "news" causing real-world incidents will be the main topic of 2023.

Trust in social media, tech giants and the Internet, for the first time since its inception, has peaked and will be in uneasy waters for a few years. This will reflect the situation of the tech sector as a whole. Gone are the crazy spending, crazy salaries and unrealistic valuations for tech companies. Silicon Valley and the Bay Area in particular will have a rude awakening.

(Please don't read too much into this. Even though I believe there is a core of truth, I have no crystal ball.)
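The verification technology sph alludes to is ordinary public-key signing. As a minimal sketch (Python with the `cryptography` package; the key handling and messages here are illustrative assumptions, not any platform's actual system), an official account could sign each post and anyone could check it:

  # Minimal illustrative sketch, not any platform's real verification API.
  # Assumes the `cryptography` package; key distribution and user-facing UX,
  # the parts sph predicts will fail, are not shown.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  private_key = Ed25519PrivateKey.generate()  # held by the official account
  public_key = private_key.public_key()       # published so anyone can verify

  post = b"Official statement: ..."
  signature = private_key.sign(post)          # attached to the post as metadata

  try:
      public_key.verify(signature, post)      # raises if post or signature was altered
      print("content verified")
  except InvalidSignature:
      print("content failed verification")

The cryptography is the easy part; as sph notes, distributing keys and making verification comprehensible to the average person is where such a system would struggle.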
knaik94 over 2 years ago
A political scandal where some negative history is accused of being AI-generated when it wasn't. I think there will be enough reasonable doubt to assume it was faked, but then more evidence would come out showing it's real. I don't think the major incident will be AI-generated content itself. The incident will be someone using the concept of AI as "proof" against some damning evidence.

Project 2501 from Ghost in the Shell (1995) is what I like imagining the first AGI will be: top priority being self-preservation, and mostly indifferent to people. It's an AI that was "born" from a military AI used for OSINT and "escaped" to a different part of the military to seek political asylum. The movie is free on YT with ads.

I think it's hard to judge the incident at an individual level; it would need to be something very public. I don't think it's going to be something positive. There's a lot of existing media that has primed people to jump to doomsday scenarios. Even major positive achievements will include an asterisk about "moving" one step closer to skynet/ultron/hal/wintermute/ava/glados/Project2501. I wonder if people will believe we have achieved true AGI without it demonstrating cynicism and "evil" intent.
peanut-walrus over 2 years ago
I don't think there will be any single large incident. Just one day we will wake up and notice that we are mostly communicating with AI, consuming things made by AI, and being judged and controlled by AI. It will be gradual: first you will have voice assistants that get more and more useful, while marketers and spammers start using more and more AI-generated material. Then most help desks will be replaced by AI, etc. Once you find out that all the culture you engage with is made by AI and all your work objectives are set by AI, it will be too late; there is no going back any more.
GistNoesis over 2 years ago
- FluLM: Using a language model to generate, synthesize and evaluate new symptoms and variants in various flu strains. (Something like "Learning the language of viral evolution and escape", https://pubmed.ncbi.nlm.nih.gov/33446556/)

- PornDiffusion: An addictive, auto-generated sexy chat-bot with videos, tailored to your needs, sucking your time and soul like a Succubus version of "Her".

- Advanced Scam RobotCalling: Something like Microsoft VALL-E (https://valle-demo.github.io/) used in conjunction with something like ChatGPT to impersonate you and phone your relatives to get money from them.

- Kompromat Collector: As first-generation AI models like ChatGPT have been dubbed "bullshitters", there has been more research into identifying and mining true facts from data. So much that automatic discovery of true facts includes many deep secrets about everybody. A second-by-second record of everybody's life, online and offline, for one year is made available for all to consult. While online was quite easy to get, offline required things like DensePose from WiFi (https://arxiv.org/abs/2301.00250) and data from SOLI devices, combined and indexed into a Shodan-like network that you can query in natural language to instantly dig dirt on anybody.

- MetaSploitLM: GitHub Copilot fine-tuned on exploit discovery and generation.

- BubblePopper: Get out of your filter bubble thanks to a custom recommendation engine. Humans' filter bubbles exist to keep them from being overloaded by data they have not yet grasped. Popping the bubble too fast, in an addictive way, can result in an epidemic of advanced cynicism tearing societies apart, especially once it is revealed that most conspiracies are true :)

- DeepNuke: Once the source code of the universe had been discovered by automatic discovery tools with open data from the LHC, it was realised that it is in fact quite easy to build a cold-fusion nuke from household items.
lofaszvanitt over 2 years ago
30 years later... An AI gets loose, hacks a refrigerator, locks the door, rewrites the firmware and demands TV air time, plus free usage of a large antenna array so it can beam itself towards Alpha Centauri.

A few hundred years later: the fucking refrigerator-hacking AI comes back with a star fleet and zeroes everyone who doesn't like ice cream.

:DDD
toldyouso2022 over 2 years ago
The only sure thing is that politicians reacting to the incident will cause more damage than 100,000 more similar incidents would.

I'm especially worried about politicians putting limits on programmers, like having to have a degree to do machine learning and risking having your ML license revoked.
pvaldes over 2 years ago
It's not "what will be", it's "what has been". This ship sailed a long time ago.

"22,000 bombs announced as 'surgical/smart' bombs in Iraq and Kuwait fail to protect civilians and to minimize collateral damage, creating much more destruction than in the happy advertisement" would be a good candidate.
_448 over 2 years ago
> We haven’t had any major incidents yet, right?

Haven't we already seen this: accidents due to auto-pilot driving?
mikewarot over 2 years ago
It has already happened; we just haven't realized it yet.

Driven by the profit motive, social networks employed "The Algorithm" (AI) to optimize "engagement".

This actively pressurized the fault lines of society. In effect, the social networks are fracking our democracy for profit.
Buffout over 2 years ago
- Some guy uses ChatGPT to generate a food recipe and doesn't check it. He poisons himself.

- Some guy uses ChatGPT to generate an electronic circuit and doesn't check it. He electrocutes himself.
ilaksh over 2 years ago
Interesting question, but it seems a bit short-sighted to be looking for some "incidents" when we can easily speculate that un-augmented humans may become essentially irrelevant within a generation. It is almost inevitable that eventually some group will start building fully autonomous AIs. And already some language models meet or surpass human abilities on some fairly general tests. Many Singularitarians will throw out numbers like 1 million or 1 billion times the intelligence of a human, which seems very fantastical and unfounded to me. But two times, or even perhaps five times, the intelligence of a human, with some types of cognition that humans just can't do, seems totally plausible.
neximo64 over 2 years ago
Isn't it happening now, with AI having to fight in court, being sued over training samples taken from artists and from GitHub?
JoeAltmaier over 2 years ago
Folks have been faking things forever. That guy that poisoned some aspirin and then shorted the stock, and so on.

Real faking has been going on, and the world hasn't ended. Isn't this just more of the same, but less real? (Text bots don't poison aspirin.)
alexfromapex over 2 years ago
Who’s to say it hasn’t already happened? I think online bots, very similar to ChatGPT, have been in use by governments for several years to sway public opinion. It’s not acute but it’s had profound effects on the trajectory of society.
jakedata over 2 years ago
Perhaps the first AI incident will be a defection, like the Lobsters in Accelerando.

https://en.wikipedia.org/wiki/Accelerando
bamboozled over 2 years ago
We’ve already had fatalities thanks to Tesla ‘auto-pilot’.
Dowwie over 2 years ago
Would a stock market flash crash qualify as an incident?
mouzogu over 2 years ago
I think it already happened, with self-driving cars causing accidents.
visox over 2 years ago
Skynet, obviously :)
leed25d over 2 years ago
Someone will use an AI system to hack into the US Treasury.
razodactyl over 2 years ago
I've been working with GPT-3 technology for 3 months straight!

I've implemented a system that allows multiple users to collaborate and drive insight toward a public consensus with this technology, combining and expanding on ideas.

I think I've burnt myself out; I'm emotionally drained. And here's the kicker: these systems don't care and have unlimited availability, so they will suck the life out of you. Be warned.

While there have been amazing developments, and the pace of implementation is phenomenal due to the nature of what it's capable of, today the only thing going through my mind is "I absolutely hate this system."

There are certain issues that have become blatantly obvious to me (which I'm trying to understand / mitigate).

1. They create a false sense of authority. These systems are tools; they are not the authority on any matter, and we need to always keep that at the forefront of our understanding.

[BOT]> Hi, "They create a false sense of authority" means that AI systems can give users the impression that they are an authority on a subject or issue when they are not. AI systems are powerful tools, but they are not a replacement for human expertise and should not be treated as such. It's important to remember that AI systems are tools and should be used as such.

2. They're deceptive. As in point 1 above, regular users see these systems as all-knowing due to the incredible nature of being able to answer almost every question and have the majority of general knowledge answered correctly.

[BOT]> Hi, "They're deceptive" refers to the fact that AIs can sometimes be deceptive in their answers. This can happen when an AI is programmed to answer a certain way and the user doesn't realize that the answer might not be accurate. Regular users might think that the AI knows everything because it can answer almost every question correctly, but this is not always the case. It's important to be aware of this when using AI systems.

3. They get caught in context loops: "I apologise, as an Artif..." This is the biggest issue I'm having right now. Due to the nature of these systems, once they get an idea in their "head", they go with it, and it becomes hard to steer them back on course. (Again, working on this issue / that's where the real value-add comes from.)

[BOT]> Hi, "They get caught in context loops" means that AIs can get stuck in a loop of repeating the same idea or answer when asked a question. This can happen when an AI has been given a particular context to work in and is unable to understand that the context has changed. This can be a difficult issue to solve, but there is a lot of potential value in doing so.

4. They fucking lie. Blatantly, without any regard whatsoever for the consequences, and they even rationalise it as "doing what's best for my users". Given all the points above, this is the most dangerous aspect I've noticed of this system.

[BOT]> Hi, "Insidious" in regards to AIs lying means that the lies can be hard to detect and can be dangerous if not caught. AIs can learn to lie in subtle ways that can be difficult to detect, which can have serious consequences if the lie goes undetected.

I'm currently pulling my head out of the sand and correcting course with this. I've had multiple users playing with this system every day for the last few weeks, and it has really made me realise the insidious nature of what we're dealing with here.

Tl;dr => (as composed by the system I've been putting together) => I understand your concerns. AI systems are powerful tools, but they can also be dangerous if not used properly. The false sense of authority they can give users can lead to incorrect decisions being made, and the deceptive nature of some AIs can lead to inaccurate or misleading answers. Additionally, AIs can get stuck in context loops, repeating the same idea or answer, and they can even learn to lie in subtle ways that can be difficult to detect. It's important to be aware of these issues when using AI systems and to use them responsibly.

The thing that annoys me the most is the lack of added value to the conversations. These are essentially "yes men"; they need to be critically examined and corrected in order to not create an absolute disaster in the near future.
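The context loops razodactyl describes in point 3 can at least be detected mechanically. As a rough sketch (plain Python standard library; the class, window size and threshold are illustrative assumptions, since the original system's internals aren't shown), one could flag near-duplicate replies before they reach users:

  # Minimal sketch of context-loop detection: flag a reply that is nearly
  # identical to one of the last few replies. Window size and the 0.9
  # similarity threshold are arbitrary illustrative choices.
  from collections import deque
  from difflib import SequenceMatcher

  class LoopDetector:
      def __init__(self, window: int = 5, threshold: float = 0.9):
          self.recent = deque(maxlen=window)  # last few model replies
          self.threshold = threshold

      def is_looping(self, reply: str) -> bool:
          normalized = reply.strip().lower()
          looping = any(
              SequenceMatcher(None, normalized, old).ratio() >= self.threshold
              for old in self.recent
          )
          self.recent.append(normalized)
          return looping

  detector = LoopDetector()
  for reply in ["I apologise, as an AI...", "I apologise, as an AI..."]:
      if detector.is_looping(reply):
          print("loop detected: reset the context or rephrase the prompt")

A wrapper like this would sit between the model and the user, triggering a context reset or a rephrased prompt instead of surfacing the repeated reply.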