"This rendering—the unknowable future that eats the present—may resonate more with an anxiety endemic to capitalist societies; as we will see, it is a characteristic nightmare of the capital-accumulating class. Capital always has one foot in the future, and even packages and exchanges “futures” as a financial instrument. A time bubble that erases the future would mean a collapsing asset price bubble in the present. For capitalism’s reality, it turns out, is stranger even than science fiction. Radical challenges to the system can change conditions in the present by, in a manner of speaking, altering the future."<p>I had this exact thought the other day in another context.<p>Lately I've been trying to puzzle out why there's been this outbreak of seemingly absurd and ridiculous nail biting over artificial intelligence in and around Silicon Valley circles. Rationally it makes little sense.<p>If you don't know, I am referring to this kind of thing: <a href="http://blog.samaltman.com/machine-intelligence-part-2" rel="nofollow">http://blog.samaltman.com/machine-intelligence-part-2</a><p>We have no evidence that "Hollywood AI" is nigh, no evidence it will "explode" and become super-human in a short period of time (and some very good counter-arguments against this scenario), and no evidence it would be intrinsically more dangerous than we are to each other. The whole fear mongering topic seems rooted in a tower of speculations that becomes increasingly precarious as you ascend.<p>I wrote a maybe 3/4 of the way baked blog post on it here: <a href="http://adamierymenko.com/on-the-imminence-and-danger-of-ai/" rel="nofollow">http://adamierymenko.com/on-the-imminence-and-danger-of-ai/</a><p>That blog post addresses some of the issues such as whether AI can or will "explode," but to me it felt like I was still struggling with the ultimate question of what really lies behind all this. Then maybe yesterday or the day before I realized that these fears might be rooted in the fear of <i>disruption</i>.<p>Consider Francis Fukuyama's very similar -- and perhaps equally shaky -- fear-mongering about transhumanism.<p><a href="http://reason.com/archives/2004/08/25/transhumanism-the-most-dangero" rel="nofollow">http://reason.com/archives/2004/08/25/transhumanism-the-most...</a><p>So transhumanism, which is basically the nebulous idea that we should attempt to radically <i>improve ourselves</i>, is what Fukuyama thinks is the most <i>dangerous</i> idea to <i>future human welfare</i>? Really? I can think of a few concerns, but how is this more dangerous than other much more obvious candidates like religious fundamentalism, totalitarian nationalism, or certain varieties of misanthropic nihilism? You know, ideas already drenched in blood that seem to have a disturbing ability to recur throughout history?<p>Fukuyama is also well known as the author of "The End of History," which is basically a court intellectual feel-good tome assuring today's leaders that the world has achieved a steady state and nothing much is going to change. (It's since become a laughingstock, as it should have been on the basis of its absurd title.)<p>Perhaps what scares certain people so much about AI is its potential to upset the world order. Human systems of control and authority are largely based on the systematic exploitation of human cognitive biases and fallacies. Even if an AI weren't explosively super-human, it might still operate in ways that are <i>non</i>-human. 
In so doing, it might simply not be vulnerable to the same techniques of persuasion. How, exactly, does one rule aliens?<p>Maybe the fear isn't so much that AI is going to kill us all (especially since it would probably be symbiotic with us), but that it would be a loose cannon on deck.<p>At the same time, even a non-sentient but very versatile and powerful AI -- a programmable "philosophical zombie," if you will -- could render entire industries obsolete overnight. As the article says, capitalist economies can cope with some amount of so-called creative destruction, but too much is bad news. What happens if/when some kind of AI can do >50% of the job of lawyers, doctors, politicians, journalists, non-fiction writers, bankers/financiers, etc.? You'd have wave upon wave of bankruptcies, both personal and corporate.<p>A genuinely deep and wide breakthrough in AI could be hyperdeflationary (I sketch the arithmetic at the end of this comment). So might real "transhumanism," for that matter, by radically increasing the effectiveness of labor, among other reasons.<p>I do know this: the reason you constantly hear financial types harping on about their terror of <i>inflation</i> is that their real fear is the opposite.<p>Interesting food for thought, don't you think? I'm not sure I share all of this article's sentiments, but I agree with the basic sense that present economic systems demand conformity and conservatism at some level and fear large disruptive changes.
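<p>To put rough numbers on that hyperdeflation point (my own back-of-envelope illustration, not anything from the article): take the textbook equation of exchange, MV = PQ, where M is the money supply, V its velocity, P the price level, and Q real output. If a broad AI breakthrough doubled real output Q while M and V stayed roughly flat, the price level P would have to halve for the identity to keep holding: MV = (P/2)(2Q). This is directional hand-waving only, since central banks would adjust M in response, but it suggests why a radical increase in the effectiveness of labor pushes toward deflation rather than inflation.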