<i>Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.</i><p>In a limited way, this has already happened. Imagine talking to a human of average intelligence, given the best education we can provide, but with no knowledge of or access to computers and the internet. S/he would be amazed by your seeming omniscience, even if the only site you had access to was Wikipedia.
<i>From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century.</i><p>We are already seeing this happen because of computers. Computers allow us to violate economic laws and collapse the financial system; they have already been the cause of more than one financial crash.<p>I have read words to the effect of, "AIG didn't factor in the probability of these things happening, because they were so remote." Isn't that the same as "happen in 'a million years'"?<p>Computers are letting us do things today that, only decades ago, we thought would take a million years -- or never happen at all. Many thought we'd never get to the moon. We probably wouldn't have without computers.
<i>The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.</i><p>Yes, well, single-core clock speeds are a dead end now, and it's not at all clear how to take advantage of multiple cores in the same way. Why are we still excited about this?
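To make the point concrete, here is a minimal sketch (Python, with an illustrative 4-worker pool and made-up workload sizes) of why multicore gains aren't automatic: the serial loop gets no faster no matter how many cores you add; you only see a speedup if you explicitly restructure the work into independent pieces, and many programs don't split this cleanly.<p><pre><code>
# Minimal sketch: serial vs. explicitly parallel CPU-bound work.
# The workload and worker count are illustrative assumptions only.
from multiprocessing import Pool

def busy_work(n):
    # Stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8

    # Serial: extra cores do not help this loop at all.
    serial = [busy_work(n) for n in chunks]

    # Parallel: we only benefit because the work happens to split
    # into independent pieces -- not every problem does.
    with Pool(processes=4) as pool:
        parallel = pool.map(busy_work, chunks)

    assert serial == parallel
</code></pre>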
What if we are culturally programmed to refuse the singularity?<p>The key point in the article is that constructing a higher intelligence will enable us to get to the next level faster, by using our newfound capacities. But everything we see around us suggests this is not the case. Whatever means we have of improving ourselves, they're woefully underused. We actually do have ways to make intellectual work better. Not science fiction, just a trip to the pharmacy. They work (there have been several articles on HN about Provigil and Ritalin), and they can be made safe -- they mostly are already; all we need are proper guidelines. But what do we do? We talk about outlawing them. And they are definitely not publicly acknowledged.<p>And before you say that this is not the same: how exactly is it not? Can you imagine a way of improving a human being that would be socially acceptable? Not curing handicaps, but improving for the sake of improving. Just imagine the public reaction if a company offered implantable headphones. Cutting into a human for no medical purpose is social suicide.<p>Of course this can't stop progress forever. But it can slow it down a good deal.
Now I see what Steve Yegge was talking about in his sci-fi story...<p><a href="http://steve-yegge.blogspot.com/2009/05/programmers-view-of-universe-part-3.html" rel="nofollow">http://steve-yegge.blogspot.com/2009/05/programmers-view-of-...</a>
The original anti-singularity book: <a href="http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach" rel="nofollow">http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach</a>