
Warnings of AI doom gave way to primal fear of primates posting

37 points by morisy, about 4 years ago

8 comments

qPM9l3XJrF, about 4 years ago
The people concerned with AI doom stopped talking to journalists because journalists typically didn't report very well on the issue.

> ...something I'm very concerned about is the use of AI for autonomous weapons. This is another area where we fight against media stereotypes. So when the media talk about autonomous weapons, they invariably have a picture of a Terminator. Always. And I tell journalists, I'm not going to talk to you if you put a picture of a Terminator in the article. And they always say, well, I don't have any control over that, that's a different part of the newspaper, but it always happens anyway.

https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/
rob74, about 4 years ago
I don't think it's an either/or situation - it's us (the social networks) creating the AI ("the algorithm") that, in the pursuit of greater "user engagement" (= making more money), exploits and amplifies our most basic primate instincts (fear, hatred, etc.) and may lead to our doom if unchecked...
hfjgktnrnf, about 4 years ago
Some of the people who usually warn about AI doom, like the rationality community, were screaming in January 2020 that the coronavirus was getting out of control and a pandemic was inevitable.

They were accused of fear-mongering at the time, and told that they should worry about the flu, not about a virus that had only killed a few hundred people.
ed-209, about 4 years ago
The existence of a problem requiring censorship is never critically examined. The need for AI mediation between humans is never questioned. The author himself is captive to this most insidious meme without the slightest awareness.
audunw, about 4 years ago
I think any analysis of the dangers of AI needs to consider the principles of evolution. Human intelligence is a product of humans evolving in a natural world, where individuals were selected for their ability to compete for resources in that world. This has produced many characteristics, one of which is violence, which has sometimes been necessary to secure those resources.

AI robots will probably not be intentionally violent against humans. AIs are also evolving, but in an artificial world. In this world, AIs compete for humans' favor. If they do well, if we're happy with them, we grant them computing power and replicate them. This selects for very specialized AIs with very deterministic behavior. Nobody wants an AI with unpredictable behavior.

The AIs that survive are the ones most adapted to serving humans. The danger is not that the AI itself harms humans, but that humans want to harm or exploit other humans through AI. There could be a danger in AIs developed by the military, but I'm not too worried, because they'll most likely be extremely special-purpose with multiple fail-safes. Nobody wants to develop an AI that could kill the ones developing/using it. I'm most worried about AIs developed for economic exploitation. It's what we're motivated to work on, it's the area where the most development is being done, so it's probably where we'll first see advanced AIs causing problems. Arguably we already have (algorithms used on social media platforms promoting disinformation).

The thought that AIs will somehow gain some kind of general intelligence and conclude that the logical thing to do is to eliminate humans is a fantasy. We don't select for AIs with general intelligence, if there even is such a thing. Most likely we are overestimating our own intelligence; it's probably not as "general" as we like to think. We don't generally kill because it's the logical thing to do, but because of emotional reactions which are a product of our evolution.

The example of the paperclip maximizer is really dumb. Such an AI would not be selected for general intelligence, and there's no reason to think general intelligence will occur accidentally. Even if it somehow gained this magical general intelligence, the decision of whether to murder humans to secure metal resources, or to work with them, is probably absolutely undecidable. Even the most intelligent AI imaginable could not consider all the factors. The default would be no action. An AI would not have emotions, produced through natural evolution, that it could use as a heuristic to decide what to do here. Not a problem for us humans: we have a built-in drive to consider killing someone outside our group, even if there's no rational argument to do it.
Animats, about 4 years ago
There's been a shift from freedom of speech to successful demands for censorship everywhere, and it only took about two years. That's scary.

The real power behind this was not social media. It was television. No need to rehash the history of Fox News and Trump. The key point is that news detached from real-world facts became the major input for a sizable fraction of the population. Fox discovered that there's a huge market for telling people what they want to hear, notably to the exclusion of contrary views.

Supply-side propaganda has been around for centuries. Now we have market demand for propaganda, one that pulls media into being even more radical. That's new. Eventually even Fox felt they'd gone too far. Then they started losing viewers to Breitbart and OAN. It really is demand pull, not supply push.

Social media lets you listen to the people you want to listen to, so it amplifies this phenomenon. But it didn't create it.
SyzygistSix, about 4 years ago
"If you had told a lay observer a decade ago that there would be a crisis over Facebook, Twitter, Instagram, and other social media platforms banning the president of the United States for inciting a violent riot against Congress that included a barechested, behorned man dressed as the 'QAnon Shaman,' you would likely be accused of writing bad science fiction."

It wasn't popular, but neither was it unheard of, for people to recognize social media as antisocial a decade ago. Plenty of people would not be surprised a bit by that headline. People didn't just decide not to join Facebook and Twitter in a void by themselves; there was plenty of media (and lots of fiction, written over decades) warning about social media ten years ago.

And the fear of robots was always more about their human controllers.

AI is its own issue, although the idea that we are being manipulated by a perverse AI has crossed my mind, as an entertaining but not realistic one.
brutusborn, about 4 years ago
This was needlessly verbose; if they want subscribers like me, they are going to need to simplify.