
Ask HN: Is AI threat overblown?

48 points by trapped over 7 years ago
Elon Musk, Putin, Mark Z: are these guys just overblowing AI? Current developments in AI are nowhere close to causing WW-III. Why are these leaders frightening people with claims that AI can cause WW-III or ruin the world? On top of that, the media goes into a frenzy over any single statement or tweet by these leaders.

I have never seen Andrew Ng or Andrej Karpathy making such claims.

State-of-the-art AI can only do very specialized things in limited scope, e.g. ASR, NLP, image recognition, game play, etc.

What am I missing?

Sources:
https://www.cnbc.com/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html
https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html
http://money.cnn.com/2017/07/25/technology/elon-musk-mark-zuckerberg-ai-artificial-intelligence/index.html
https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

38 comments

bridge_roo over 7 years ago
There's an AI researcher named Robert Miles [1] whose videos I really enjoy. He brought up a good point about this issue in one of his videos a little while back. To use it here: Elon Musk, Putin, and Mark Zuckerberg all have something in common: none of them are AI specialists or researchers. Another way to think of it is this: don't ask your dentist about your heart disease.

Miles does point out very real issues/questions in AI safety – that's what most of his content is focused on. His point, which is a good one to make, is that the sort of fear mongering spread by non-AI specialists draws attention away from these very real issues that need to be addressed.

[1] His channel can be found here: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg He's also done a few videos for Computerphile.
payne92 over 7 years ago
I do not think the threat is overblown, but like many tech estimates, we're overestimating the short term and underestimating the long term.

The more present threat is "AI-lite": we're hacking ourselves collectively, more and more, with not entirely positive consequences.

We're increasingly addicted to our devices, and our system rewards those that further the addiction (er, "engagement"). We've provided ways for small groups of people (down to individuals) to influence and manipulate tastes, preferences, moods, feelings, choices, actions, and beliefs, overtly or subtly, at great scale. Case in point: should Mark Z want to quietly influence a US election... he could do it.

This isn't "AI" in the self-aware/AGI sense, but there's an incredible amount of leverage looming over the human population, and that leverage is growing. And when machines start manipulating things instead of humans, how will we know?
cs702 over 7 years ago
In my humble opinion, NO ONE knows if the threat is imminent, far-fetched, or imaginary.

NO ONE. Not Musk, not Zuckerberg, not Putin. (Putin!!??)

What we DO know is that we don't have artificial general intelligence (AGI) today, and that achieving it will likely require new insights and breakthroughs -- that is, it will require knowledge that we don't possess today.

By definition, new insights and breakthroughs are unpredictable and don't necessarily yield to anyone's predictions, timelines, or budgets. Maybe it will happen in your lifetime; maybe not.

That said, it should be evident to everyone here that AI/ML software is going to expand to control more and more of the world over the coming years and decades; THEREFORE, it makes a lot of sense to start worrying about the maybe-real-maybe-not AI threat -- and prepare for it -- now instead of down the road.
sien over 7 years ago
Andrew Ng has a good quote:

"Fearing a rise of killer robots is like worrying about overpopulation on Mars"

https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/

But he might be wrong...

(And he'd admit it)
rwallace over 7 years ago
Yes. None of these fantasies bears any resemblance whatsoever to anything in real AI research. What's going on is that the predisposition of the human brain to believe in shadowy figures and apocalyptic futures is as pronounced as it ever was, but belief in Fenrir, Fair Folk and flying saucers is unfashionable among today's intellectuals, so they look for something else to glom onto.

There was a time when I'd never have thought I'd say this, but I actually think it would be better if people just went back to openly letting their demons be of the admittedly supernatural variety, because that sort of belief is relatively harmless. When people start projecting their demons onto real-world phenomena, they start making policy decisions on that basis, and that could very well turn out to be the final step in the Great Filter. Technological progress is slowing. The peak is approaching. The easily accessed fossil fuel deposits are gone. There will be no second industrial revolution. If we fail to make adequate progress before we hit this peak, it will be the all-time one.
the8472 over 7 years ago
The practical AI field is obviously growing *today*; more money is put into it every year. It's only a question of when you want to start your safety research and how many resources to allocate to it. You don't need to allocate billions of dollars to friendly-AI research this year or the next, because anything approaching AGI is at least decades away (and might be a "fusion is only 30 years away" situation).

You could also compare this to climate change. The effect and eventual risk of greenhouse gases has been known for more than half a century. But initially it was mostly a theoretical concern, and later, even when it was realized to be a real problem, the effects still seemed far away in the future. But *people still did basic research*, even decades ago. Nobody poured billions of dollars into sustainable businesses, but not doing business is not the same as not doing research.
DennisP over 7 years ago
Near-term: yes. Long-term: no.<p>Most of the controversy consists of people who look at the near term talking past people who look at the long term, and vice versa.
dmitrybrant over 7 years ago
In the case of Putin, are you seriously asking "Why are these leaders frightening people..."?

But seriously, people whom we might call "visionaries" like Musk, Zuckerberg, and let's throw Ray Kurzweil in there, often get their ideas by extrapolating the current state of technology into its logical next phase. (They also like to be grossly aggressive on deadlines, to motivate their employees to be innovative and efficient.)

Unfortunately a simple extrapolation doesn't always produce an idea that is attainable in practice. We will not have human-level AI anytime soon. We're still many years away from driverless cars. An AI that cares about the politics of nation-states (to which we can confidently hand over the nuclear codes) is much farther away than that. But none of that actually matters, because a single tweet from these leaders can cause a flurry of activity and interest that can lead to an unexpected product idea. So, while it's ethically dubious, I see this as being a mostly positive thing.
iRobbery over 7 years ago
The practical state of AI is image classification, so if you tie that to a weapon and program it to fire at 'its will', yes. Though I'd still not call that intelligence, so from that point of view, no. And even in the first case, it's how they say: guns don't kill people, people kill people with guns? I'd say that goes for AI too.
hnaparst over 7 years ago
I own a Tesla Model X. I am not trying to be inflammatory, but my grandmother drives better than this car, and my grandmother is dead. Going down a street with a row of bushes, the car will slow down at every bush, and then speed up again. Musk is worried about AI, but his cars cannot even process bushes better than my dead grandmother.
maxxxxx over 7 years ago
I don't see AI itself as a threat necessarily, but AI and its input data concentrated in the hands of a few will be dangerous. Soon companies like Google and Facebook will pretty much know at any time what a large part of the world population is doing and thinking. There is a lot of potential for abuse there.
childintime over 7 years ago
> State of the art AI can only do very specialized things in limited scope e.g ASR, NLP, Image recognition, game play etc.
> What am I missing?

What you are missing is that much of the enterprise world is gameplay, and that "AI" is beginning to show superhuman performance in this area. Soon programs will be "playing" at being a business, acting as equals to business owners. This AI employs us as its sensors, just like businessmen already do.

This means that in the next few years, you may get hired by a computer program. A program is more reliable and predictable, and will even be preferred by a lot of employees.

It may start as a broker, making money to sustain itself. It'll be totally profit driven and it'll demonstrate a pure form of ruthless capitalism, sacrificing nature and us if it is in its interest, as it has no sense of good or evil. It'll learn like an alien would from our reactions: without understanding or comprehension. To us it is ignorant and ruthless.

This is exactly what Musk is saying. I find it strange Musk did not exemplify his views in this way, as it obviously is what he is seeing. In contrast, Zuckerberg is not working on dangerous AI, no gameplay AI, so what he calls AI seemingly is a lot more innocent, more focused (like tooling), which explains his relative mildness on the issue. He sees regular engineering with exciting possibilities, as a menu for _him_ to make the choices.

Musk sees AI wedding money, and wielding its power, driven by the capitalist forces already at play, and magnifying them, spiraling out of control, even of its creator. His AI is a financial animal, and it does not need intelligence to wield power. Business people are not more intelligent than other humans -- Musk knows it. It is like a game, not more than that. AI just knows how to win it, from them, and it'll, inevitably, succeed.

--

AI will probably be what we deserve. It may, in the end, derail evil, by embodying it without the usual compulsion, so it may unwillingly recognize "good" and choose to reward it, as an emergent effect.
AndrewKemendo over 7 years ago
A little bit of history/context around this.

The genesis for most of this public-facing, high-profile threat warning came right after Musk read the Nick Bostrom book Global Catastrophic Risks in 2011 [1]. That seems to have been the catalyst for being publicly vocal about concerns. That accelerated into the OpenAI issue after Bostrom published Superintelligence.

For years before that, the most outspoken chorus of concerned people were non-technical AI folks from the Oxford Future of Humanity Institute and what is now called MIRI, previously the Singularity Institute, with E. Yudkowsky as their loudest founding member. Their big focus had been on Bayesian reasoning and the search for so-called "Friendly AI." If you read most of what Musk puts out, it mirrors strongly what the MIRI folks have been putting out for years.

Almost across the board you'll never find anything specific about how these doomsday scenarios will happen. They all just say something to the effect of: well, the AI gets human-level, then becomes either indifferent or hostile to humans, and poof, everything is a paperclip/gray goo.

The language being used now is totally histrionic compared to where we, the practitioners of Machine Learning/AI/whatever you want to call it, know the state of things to be. That's why you see LeCun/Hinton/Ng/Goertzel etc... saying: no, really folks, nothing to be worried about for the foreseeable future.

In reality there are real existential issues, and there are real challenges to making sure that AI systems that are less than human-level don't turn into malware. But those aren't anywhere near immediate concerns - if ever.

So the short answer is, we're nowhere near close to you needing to worry about it.

Is it a good philosophical debate? Sure! However it's like arguing the concern about nuclear weapons proliferation with Newton.

[1] https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom/dp/0199606501/
berberous over 7 years ago
I think it's helpful to break this down:

1) Is AGI possible?

2) If it's possible and it occurs, could it be a serious threat?

3) When will AGI occur?

In my view, the answer to 1 and 2 is an obvious yes. As to 3, that's inherently unknowable, but that's where I think the experts like Ng are correct that the threat *today* (and for the foreseeable future) is overblown. But that's sort of what everyone said about NK's nuclear ambitions 30 years ago, which is why it's important to consider the implications early, before it's too late to change course.
whack over 7 years ago
The danger with AI is that it grows in power exponentially, especially since highly advanced AI can start improving itself without human intervention. When people think of exponential curves, they think of rapid progress, but that's only half the story. Any exponential curve starts off looking like a flat line, before suddenly taking off like a rocket ship.

Without the benefit of hindsight, we can't tell how far away we are from that rocket-ship liftoff. We've had decades of minor progress in the past, but that's normal for any exponential curve. Are we going to have many more decades/centuries to go before we get to the breakout moment? Or is it just 10-20 years away? We have no idea. All we know is that once we get to that point, AI-IQ is going to grow exponentially faster than natural human IQ.

That said, I really don't think that censoring AI research is going to work. Pandora's box has been opened, and if we don't do it, someone else will. All this talk about hard-coding Asimov's laws into AIs is idiotic as well. We have no clue how to build AGI right now, and until we do, discussing specific tactics like the above is utterly pointless. They also presuppose a human ability to shackle and mold super-intelligent beings, without making any mistakes or overlooking unintended consequences, which is nothing more than a pipe dream.

Realistically, there's only one thing we can do. Embrace bioengineering. Embrace GATTACA-style genetic selection. Embrace cybernetic augmentation. Do everything we can to grow our IQ beyond its natural limits. If our minds don't keep up with technological progress, we will inevitably find ourselves left behind.
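The flat-then-vertical shape of an exponential curve is easy to check numerically. A minimal sketch in Python (the 20%-per-year self-improvement rate is arbitrary, chosen only to show the shape):

    import math

    # Steady linear progress vs. compounding self-improvement.
    # Both start at 1; only the shape of the two curves matters here.
    for year in range(0, 51, 10):
        linear = 1 + year                   # one unit of progress per year
        compounding = math.exp(0.2 * year)  # hypothetical 20%/year compounding
        print(f"year {year:2d}: linear {linear:5.1f}   exponential {compounding:9.1f}")

For the first decade the exponential curve actually trails the linear one; by year 50 it is several hundred times larger. That is the "flat line, then rocket ship" point in the comment above.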
guscost over 7 years ago
Yes, in my opinion. Consider the amount of destruction caused by machined metal and chemicals in the 20th century. Now consider how much more destruction (or progress) is possible just by adding "naive" computer technology to those things.

In our experience, technology only reaches its constructive and/or destructive potential when humans use it. There's no rule saying this must always be the case, but when we ignore our experience it's easy to get caught up in fantasy, and right now the hand-wringing about "what happens when the computers wake up" is a silly distraction. There are plenty of threats posed by computer technology already, often from its integration with hardware, but also from information processing on its own. I don't mean to be pessimistic or spin another variety of doomsday story, but I am suggesting that we talk about *present reality* more often than all of this Terminator nonsense!

> Why are these leaders frightening people with claims that AI can cause WW-III or ruin the world?

Probably because they run companies that benefit from this idea being shared.
yeukhonover 7 years ago
I think the threat is not AI, is what computer program is telling us. Even as simple as writing a test, how many times have we found ourselves writing a test that is giving false positive? That&#x27;s not AI, but we arenmisled because we trust what the program said (&quot;it didn&#x27;t crash!&quot;). Now apply that to GPS. How many times have we heard someone ended up in a lake or some swamp? I dislike Waze because the path it recommends is often worse than Google Map&#x27;s. If I know how to get to my destination I don&#x27;t need GPS. We believe GPS always knows the best optimized route because some smart engineers spent entire life working on map technology, but in reality that may not always be the case.<p>I am more afraid we are accustomed to trusting technology. So many just go on the computer and look for answers on the Internet. Students go on Wolframalpha and trust the output. We have forgotten we need our brain to function. Fake news? Bombarded by ads? This is pre-AGI and we are already sufferring.
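The false-positive test mentioned above is worth making concrete. A minimal sketch with made-up function names: the assertion sits inside a loop over an empty result, so it never executes, and the suite reports success while verifying nothing.

    def find_route(start, end):
        return None  # stub standing in for a buggy routing implementation

    def test_find_route():
        routes = find_route("home", "office") or []
        for route in routes:           # empty list: the loop body never runs
            assert route.duration > 0  # so this assertion is never evaluated

    test_find_route()
    print("test passed")  # it didn't crash, but it proved nothing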
grizzles over 7 years ago
Yep. I play with deep learning pretty much every day and I'm way more scared that we don't invent AI. In medicine alone, there is just an incredible opportunity to improve the human condition.

A consequence of humanity establishing itself as the apex predator on this planet is that other humans are the real threat to our world. If there is one thing humanity has demonstrated throughout history, it's an incredible penchant for destroying itself. The difference this time is it might be possible to wipe out the species.

This is why the U.S. govt and world in general are probably not concerned enough about protecting the lives of Ivanka, Donald Jr, Eric, Tiffany, Barron, etc. Because if a foreign power killed them, or a terrorist pretending to be a foreign power, that would probably be enough to get Trump to show the world what a big man he is and unleash a nuke that could kill tens of millions. Ironically, Trump would probably be pleased if he read this. That doesn't make it any less true.
rschneid over 7 years ago
The very specialized things that ASR, NLP, and image recognition can currently accomplish are very nearly sufficient for the creation of lots of autonomous and devastating weaponry. WW-III is a somewhat arbitrary yardstick, but sufficient technology undoubtedly exists today to execute a false-flag hacking debacle that results in serious armed conflict.

The worry shouldn't be generalized AI attempting to exterminate humans like The Matrix, but the drastically decreasing dollar cost of causing violent damage to society, as facilitated by technology, ANNs, and AI. An individual's martial power and our species' technological advancement have a direct relationship, and I don't see technological advancement slowing down. What's coming up next isn't a singular technology revelation that stabilizes humanity for many years, but an ever-increasing frequency of chaotic events. Technology is beginning to change the economics of violence at all scales.
hackermailman over 7 years ago
You're missing the open letter sent a few years ago begging countries not to develop autonomous weapons, but they're doing it anyway, of course: https://futureoflife.org/open-letter-autonomous-weapons/
ilaksh over 7 years ago
Multiple points to make: 1) AGI is closer than you think, 2) long-term perspective, 3) they are not just classifying it as a threat, and most do not want to halt AI research.

1. Many people who _are_ in the AI field have stated that most if not all of the pieces for AGI are probably there. We cannot say for sure that this will happen in the next X years, but there is enough evidence that it is a possibility in X years. I believe that X is less than 5 years. I think the likely way we will get there is by creating artificial virtual animals that have high-bandwidth sensory and motor outputs and advanced neural networks, and that develop diverse skills gradually in varied environments like young animals. Obviously, until we actually see those types of systems performing generally, that is speculation. One of the common beliefs of myself and other 'AGI-believers' is in exponential growth of technology. That means that even though it may seem far away now, it could still be completed in a few years, since exponential growth is much faster than linear.

2. Looking at the evolution of life, we have a progression of things like single-celled animals, multi-celled animals, reptiles, mammals, apes, humans. This occurred over millions of years. On that type of time scale, whether you believe we will achieve some type of general intelligence in 5 years or even 500, it is a relatively short time. Even in terms of just human history, those with my type of worldview believe this will develop relatively soon. This will be a new type of life (or tool). A higher and much more capable paradigm. Whether they care enough to have disputes with us or not, humans will only be relevant in the larger scheme so far as they can interface with these things.

3. What most of these people are saying is not "Oh no, AI is dangerous, better stop". Generally, people who understand this well enough realize this is sort of a force of nature or evolution that cannot be stopped. What we can try to do, however, is guide the development to be more beneficial for us (at least at the beginning stages). We have to take it seriously because there are enough signs that we have the components to build it that we don't _know_ that it won't happen soon, and the consequences of an unfriendly or out-of-control AI are too serious.

So the idea is: try to come up with some rules to handle this, which is what governments are supposed to do. And also try to actively pursue friendly practical AI before someone who is less aware comes up with something we can't control.
throwawayAI over 7 years ago
All current use cases of AI are still very narrow and very expensive to create. There is still a very long way to go from "godlike pattern recognition" to "abstract logical reasoning". All current impressive use cases of AI simply brute-forced all possibilities beforehand, reducing the search space by pattern recognition. Unless we start to see some early signs of "abstract logical reasoning", there is no point in fear-mongering. No one knows whether we will get there in 5 years or 50 years.

Reason for throwaway: I heard an opinion that Elon missed the boat on the current form of narrow AI, and by fear-mongering he tries to curb other players (e.g. Waymo) before his companies have time to catch up. I don't have any evidence to back it up, but it makes a lot of sense when I think about it.
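"Brute force with the search space reduced by pattern recognition" can be sketched in a few lines. Below, a toy negamax search over tic-tac-toe: run exhaustively, then cut off at depth 2 by a hand-written heuristic standing in for the learned evaluators systems like AlphaGo use. Everything here (the heuristic, the depth limit) is illustrative, not any particular system's method.

    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def heuristic(board, player):
        # Crude "pattern recognition": count lines still open for each side.
        opp = "O" if player == "X" else "X"
        open_for = lambda p: sum(all(board[i] in (p, ".") for i in line)
                                 for line in WIN_LINES)
        return open_for(player) - open_for(opp)

    nodes = 0

    def search(board, player, depth_limit=None, depth=0):
        # Negamax; when depth_limit is set, the heuristic replaces deeper search.
        global nodes
        nodes += 1
        w = winner(board)
        if w is not None:
            return 100 if w == player else -100
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if not moves:
            return 0  # draw
        if depth_limit is not None and depth >= depth_limit:
            return heuristic(board, player)
        opp = "O" if player == "X" else "X"
        best = -1000
        for m in moves:
            board[m] = player
            best = max(best, -search(board, opp, depth_limit, depth + 1))
            board[m] = "."
        return best

    board = ["."] * 9
    nodes = 0; search(board, "X");                full = nodes
    nodes = 0; search(board, "X", depth_limit=2); cut = nodes
    print(f"exhaustive: {full} nodes, heuristic cutoff: {cut} nodes")

The exhaustive run visits on the order of half a million positions; the cutoff run visits under a hundred. The work has moved from search into the evaluation, which is the sense in which the possibilities are "brute-forced beforehand".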
hprotagonist over 7 years ago
I rate any risk of AGI as very low. Axiomatically, I don't believe in strong AI, so that's my bias.

The risks of increasing automation on the workforce and economy are real, but we also don't know where the new jobs will inevitably be needed. See O'Reilly's essay here: https://medium.com/the-wtf-economy/do-more-what-amazon-teaches-us-about-ai-and-the-jobless-future-8051b19a66af

To the extent that AI is the next incarnation of angst about what the eschaton will entail, I remain confident that our future perils and trials and travails will be both utterly familiar and totally unpredicted by pundits now, and that it will be neither a utopia nor a dystopia; always both together.
wisty over 7 years ago
I feel like the main danger isn't AI doing something unintended, but AI working as it's designed to.

Imagine law enforcement with strong AI. Maybe it's OK in the US, but how about China? Or North Korea?

How about military applications?

AI is an extremely powerful tool, and it's one that can be deliberately misused.
Tycho over 7 years ago
If AGI is possible, then whoever invents it will surely recognize both the power and the danger. Since there's no reason to believe that current academic/corporate/government/military AI research is even barking up the right tree, I can imagine a situation where someone invents AI in their basement, but keeps it locked up, exploiting it for personal gain. Then, since it is possible, probably others will independently discover it in their independent basements. When one day an AI is finally made public, or "escapes", we might see a sudden mass emergence of separate AIs. What happens then is anybody's guess, but going by biological standards they might fight it out for control of available resources.
timothyh2ster over 7 years ago
It should come as no surprise that we can build machines that can harm us, even destroy us. One of the reasons AI was developed was to research what intelligence is. The point being: we do not understand intelligence, so how is it that we will create a super intelligence that will conquer us? This is just an old heaven fantasy: one day we will be in a world that is just like this one, only it will not contain the bad parts, because super smarts will not allow them. That is just nonsense, and so are the fears and beliefs surrounding AI.
Afforess over 7 years ago
I'm surprised no one has mentioned Nick Bostrom's book, "Superintelligence", which directly covers this topic. The thinkers you cite, Elon Musk, Mark Zuckerberg (and possibly Putin), have derived much of their current fears/hopes about AI from Bostrom's seminal work.

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
eranation over 7 years ago
Is the AGI threat overblown? I believe so. The threat from a drone with the ability to profile an enemy and shoot them without human intervention? That one is real. Why? We don't have AGI, and so far we are not getting significantly closer to it in spite of the hype. But an autonomous tank, drone, or even watchtower has been technologically possible for quite a while. An army of drones that can shoot without calling home is the imminent threat. Not SkyNet. IMHO.
lotsoflumens over 7 years ago
You're missing the entire fields of robotics and automatic control, which are not new fields and have never made claims of human-level intelligence. These are the fields that have been making steady progress for over 50 years.

The result has been increasingly effective weapons technology that is now being outfitted with even more effective software.

It doesn't take a "rocket scientist" to see the endgame.
naveen99 over 7 years ago
I think the threat with AI is unethical research from dictatorships. Dictators have access to expendable humans. Expendable humans are a source of training data. Once there is enough computing power to record all human input and output from birth, the technical part is already solved. Imagine what Stalin, Mao, or Hitler would have done if deep learning had been around back then.
RealityNow over 7 years ago
Obviously AI could be used to kill people (see the last episode of Black Mirror season 3), but what can we possibly do about it? Tell people not to research AI? Good luck with that.

I hope some of that $600b in defense spending is being used to counter any sort of AI killer-robot threat. But I do think the threat is overblown. AI is pretty damn underdeveloped right now.
SirLJ over 7 years ago
For sure, no one knows and no one can predict the future, but I think real AI will emerge from the military and will probably inherit some of the "human DNA", so to speak, and we all know what happens when a more intelligent/technically advanced race meets somebody who is significantly behind...
tyingq over 7 years ago
There are lots of potential bad outcomes short of AI taking over the planet.

It could, for example, enable a very deeply intrusive "thought police" establishment. At the moment the signal-to-noise ratio at least somewhat limits that. And it doesn't require full-on "strong AI" to fix that.
emilsedgh over 7 years ago
I think the threat is absolutely real, but not in a Skynet-like scenario.

It's that we're all gonna become jobless. This started a few decades ago, but with the ML advancements it's gonna reach new heights.

Universal Basic Income, robot taxes, etc. have been thrown around. Let's see if they get anywhere.
yread over 7 years ago
I think it's vastly overblown. For AI to be scary we would have to connect it to some real outputs. If somebody makes a general AI and lets it win at go or tic-tac-toe against everyone, so what? If it's going to govern our FB feed or optimize some logistics, that's great! If we let AI decide whether we should go to war, that's a problem, but that's not gonna happen for quite a while.

If you want to be scared of a technology, worry about CRISPR instead. It's very easy to do, and lots of people have the basic knowledge of how to do it. It's only a question of time until a terrorist picks it up. It's easy to buy viruses with safeguards against spreading built in. With CRISPR it's possible (OK, not easy, but possible) to remove the safeguards and change the immune-system signature. BAM, a new epidemic.
bambax over 7 years ago
If AI is more intelligent than humans, how is it bad?

Previous (and still existing) threats to humanity (for example, the atomic bomb) threaten to destroy humanity, or indeed the whole world, and replace it with nothing. That's bad.

But if AI is anything its opponents claim, it will eventually be better at thinking than we are, with, probably, a much lighter ecological footprint and fewer impulses like fighting wars, meaning it will be able to last longer.

Should we not encourage that, even if it means we can suffer from it? What is the point of humanity anyway, if not the pursuit of knowledge?
esaym over 7 years ago
I've said it before, but there is not an algorithm that can make algorithms. The best argument against that I have heard is "Of course not, but someday maybe!"
ryanx435 over 7 years ago
No. Imagine a group of beings that are smarter than us, never die (so they don't have to start with zero knowledge every generation), and have completely alien goals and motivations.

Also remember that the future is infinite, and power seems to snowball.

Now look at what humans have done to the following less intelligent beings: dogs, cats, cows, chickens, the dodo bird, rats, the Galapagos tortoise, the American buffalo, and many others.

Also look at what humanity did to the Neanderthals, perhaps the closest type of being in terms of intelligence that we are aware of.

There is very little positive outcome of AI to outweigh the potential negatives to the human race, given the reality of the timeline we are looking at.