
The Future Bubble

29 points by dreamweapon about 10 years ago

3 comments

tjradcliffe about 10 years ago
The claim the author leads off with is not plausible: capitalism has seen enormous changes in social relations since its earliest days.

The author is a sociologist, not an economic historian, but they should still be aware that economic historians (even ones on the Left) tend to place the emergence of capitalism sometime in the 1600s, and break it up into multiple phases, each of which is embodied by significantly different social relations.

Once upon various times anti-capitalists proclaimed that unionization, emancipation, universal suffrage (giving the vote to all men rather than just men of property), women's suffrage, labour parties and social democracy would all overthrow the capitalist order *precisely because* these were seen as radical departures from existing social relations upon which capitalism was presumed to depend.

Instead, capitalism has proven an enormously resilient mode of economic organization that can be instantiated in societies with extremely different social relations.

So the claim that the modern worker in a mature capitalist society like Sweden or Canada or Germany (to pick a few at random) stands in the same social relationship to their employer as did workers in 17th century Amsterdam or 18th century London or 19th century New York or 20th century Shanghai is extremely implausible.

To further claim that social relations in the future must remain as they are today for the preservation of the capitalist mode of economic organization is equally implausible.
Comment #9260718 not loaded
shopinterest about 10 years ago
Loved the article - VC investing is indeed an approach where you want the future to be 'X', so in the present you want to do 'Y' at 'Z' cost.

In essence, we are doing exactly what this describes, and every time investors or founders take cash off the table on a round, they are indeed taking future 'gain' money into the present, without knowing exactly whether the future will turn out as described and anticipated.
api about 10 years ago
"This rendering—the unknowable future that eats the present—may resonate more with an anxiety endemic to capitalist societies; as we will see, it is a characteristic nightmare of the capital-accumulating class. Capital always has one foot in the future, and even packages and exchanges 'futures' as a financial instrument. A time bubble that erases the future would mean a collapsing asset price bubble in the present. For capitalism's reality, it turns out, is stranger even than science fiction. Radical challenges to the system can change conditions in the present by, in a manner of speaking, altering the future."

I had this exact thought the other day in another context.

Lately I've been trying to puzzle out why there's been this outbreak of seemingly absurd and ridiculous nail-biting over artificial intelligence in and around Silicon Valley circles. Rationally it makes little sense.

If you don't know, I am referring to this kind of thing: http://blog.samaltman.com/machine-intelligence-part-2

We have no evidence that "Hollywood AI" is nigh, no evidence it will "explode" and become super-human in a short period of time (and some very good counter-arguments against this scenario), and no evidence it would be intrinsically more dangerous than we are to each other. The whole fear-mongering topic seems rooted in a tower of speculations that becomes increasingly precarious as you ascend.

I wrote a maybe 3/4-of-the-way-baked blog post on it here: http://adamierymenko.com/on-the-imminence-and-danger-of-ai/

That blog post addresses some of the issues, such as whether AI can or will "explode," but to me it felt like I was still struggling with the ultimate question of what really lies behind all this.

Then maybe yesterday or the day before I realized that these fears might be rooted in the fear of *disruption*.

Consider Francis Fukuyama's very similar -- and perhaps equally shaky -- fear-mongering about transhumanism.

http://reason.com/archives/2004/08/25/transhumanism-the-most-dangero

So transhumanism, which is basically the nebulous idea that we should attempt to radically *improve ourselves*, is what Fukuyama thinks is the most *dangerous* idea to *future human welfare*? Really? I can think of a few concerns, but how is this more dangerous than other, much more obvious candidates like religious fundamentalism, totalitarian nationalism, or certain varieties of misanthropic nihilism? You know, ideas already drenched in blood that seem to have a disturbing ability to recur throughout history?

Fukuyama is also well known as the author of "The End of History," which is basically a court-intellectual feel-good tome assuring today's leaders that the world has achieved a steady state and nothing much is going to change. (It's since become a laughingstock, as it should have been on the basis of its absurd title.)

Perhaps what scares certain people so much about AI is its potential to upset the world order. Human systems of control and authority are largely based on the systematic exploitation of human cognitive biases and fallacies. Even if an AI weren't explosively super-human, it might still operate in ways that are *non*-human. In so doing it might simply not be vulnerable to the same techniques of persuasion.

How exactly does one rule aliens?

Maybe the fear isn't so much that AI is going to kill us all (especially since it would probably be symbiotic with us), but that it'd be a loose cannon on the deck.

At the same time, even a non-sentient but very versatile and powerful AI -- a programmable "philosophical zombie," if you will -- could obsolete entire industries overnight. As the article says, capitalist economies can cope with some amount of so-called creative destruction, but too much is bad news. What happens if/when some kind of AI can do >50% of the job of lawyers, doctors, politicians, journalists, non-fiction writers, bankers/financiers, etc.? You'd have wave upon wave of bankruptcies, both personal and corporate.

A real deep and wide breakthrough in AI could be hyperdeflationary. So might real "transhumanism," for that matter, by radically increasing the effectiveness of labor, among other reasons.

I do know this: the reason you constantly hear financial types harp on about their terror of *inflation* is because their real fear is the opposite.

Interesting food for thought, don't you think? I'm not sure I share all this article's sentiments, but I agree with the basic sense that present economic systems demand conformity and conservatism at some level and fear large disruptive changes.
Comment #9260608 not loaded
Comment #9260615 not loaded
Comment #9260528 not loaded
Comment #9260619 not loaded