Oh come on guys. It's much more <i>fun</i> to assume that he's right! This is why you know his name at all: it's a fascinating idea.<p>This is why I believe the singularity will happen in my lifetime. I honestly, genuinely, have no idea whether it really will, but believing that it will makes for excellent ideas, great conversations and a whole lot of excitement (and Judgment Days) in general. I <i>want</i> to believe it. This might be difficult for a certain type of rationalist (which many HN'ers are) to appreciate, but if you really want to believe something very much, the step to actually believing it is very small. Even if it's a conscious process.<p>In general though, no need to squeeze those butt cheeks so much; this is not the stock market. Fun predictions are fun!
Ray Kurzweil is the wrong person to publicly personify transhumanism (I'm not opportunistically jumping on the bandwagon, I've been saying this for a long time now). I realized this when I read about his idea to resurrect his dead father by feeding some left-over data into an AI tasked with imitating his father. While I do think it's unethical to enslave a sentient AI algorithm to this charade to begin with, the larger problem is that he apparently believes this will bring a dead person back to life. That's a very shallow idea of crossing over to a new substrate; <i>clearly</i> a lot more is required to port over a person.<p>While it's good to have a quasi-official spokesperson who can convey some of the concepts to a broader audience, things also got simplified in a way that seriously distorts them. It's not about arbitrary singular points during our development, it's about the breathtaking directions these developments are destined to go in. That's a much harder concept to sell than, say, a magical tipping point where history ends, but I'd say it's well worthwhile.<p>The standard singularity talk doesn't reflect what's really happening to us - nevertheless, it's important to open people up to the prospect of continued technological development. It sounds so simple, but this is really not an obvious path to people who believe the future will mainly contain ever-lighter iPads and will otherwise be the same: there will be a couple of points in history where technology reaches the potential to fundamentally change who we are, what it means to be alive, and what our goals are as a civilization.<p>Technology will make it possible to alter some very fundamental realities humans have had to live with for a long time. The impact of this is enormous beyond words.
We get to change things in a very big way, including some options that would be hugely unpopular even among the tech-savvy people on HN - like getting rid of material scarcity, for example, or choosing radical life extension.<p>It's clear that this will be upsetting to a lot of people, and suddenly we need a marketing person like Kurzweil. Personally, I'd prefer that Eliezer Yudkowsky take that role, but he's probably not a good choice for "lay people".
It's worth pointing out that this is a commentary on Kurzweil's particular view of the technological singularity and how it might happen. Picking holes in his proposed theory of mind for AIs, or his conflating biological evolution with technological "progress", doesn't <i>necessarily</i> invalidate the general idea of exponentially accelerating technology.<p>Kurzweil is just one advocate. In fact, the idea [1] originated with Von Neumann in the 50s and was popularised by Vernor Vinge in the 90s, and it's worth reading Vinge's paper "The Coming Technological Singularity: How to Survive in the Post-Human Era" [2]. Bruce Sterling gave an interesting (and sceptical) talk [3] on the subject a while back.<p>For what it's worth, I'm personally not convinced that we'll ever see a technological singularity, because I don't think our culture generates technology that has the required attributes. Siri and Google Now don't need to be self-aware in order to do what they do, and there is no apparent commercial pressure to make them so. And since people build technology for human purposes, the presence of people who sleep, get bored, etc. will act as a drag on any tendency for runaway acceleration.<p>[1] <a href="http://en.wikipedia.org/wiki/Technological_singularity#History_of_the_idea" rel="nofollow">http://en.wikipedia.org/wiki/Technological_singularity#Histo...</a><p>[2] <a href="http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html" rel="nofollow">http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.htm...</a><p>[3] <a href="http://longnow.org/seminars/02004/jun/11/the-singularity-your-future-as-a-black-hole/" rel="nofollow">http://longnow.org/seminars/02004/jun/11/the-singularity-you...</a>
See, I'm seriously not convinced about the whole singularity thing. This article sort of touches on why I can't wrap my head around it, but it basically comes down to this: if I "upload" my "consciousness" -- how is it, well, <i>me</i>? Now, I'm fully aware that's a massive can of philosophical worms, but I simply can't reconcile it, personally.<p>I see the future hyperintelligent AI as something more like the Minds from Iain M. Banks's "Culture" novels, but hey, maybe I just can't comprehend it with my puny brain :)
Whether Kurzweil is right or wrong, and by how much, is completely irrelevant.<p>What is relevant - and this is the only relevant thing about the entire discussion - is:<p>1) When will we reach the ability to take a snapshot of an entire organism with a nervous system, at a resolution sufficient to satisfy all expected simulation parameters?<p>2) When will sufficient computing and storage capacity be available to run a simulation off the data captured in step #1?<p>3) Will the simulation performed in step #2 match the organism as it exists in real life closely enough to declare the simulation a success?<p>4) When will we be able to perform steps #1, #2 and #3 for a human-sized organism?<p>By the nature of such an experiment, a negative result does not indicate the absence of a positive in the future. And certainly, an inference from a negative outcome at any step cannot be valid without performing the steps - that's just guesswork.<p>It's also worth noting that estimates of timelines for this are inherently nearly impossible, because one cannot exclude the possibility of radical advances in scanning, computing and storage hardware.<p>Or, if you prefer a TL;DR explanation for geeks: unless you can write and run a program, it's nearly impossible to debug it or make statements about how useful it'll be.
Still, the performance of computers is increasing exponentially. The arguments in this article seemed a bit weak.<p><a href="http://top500.org/statistics/perfdevel/" rel="nofollow">http://top500.org/statistics/perfdevel/</a>
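To put a rough number on that exponential trend, here's a minimal sketch of the compounding arithmetic. The ~14-month doubling period and the ~17.6 PFLOPS baseline (Titan, 2012) are my own assumptions for illustration, not figures taken from the linked Top500 page.

```python
# Hypothetical projection of peak supercomputer performance under a
# constant doubling time. Both the doubling period and the baseline
# are assumptions, not measured values.

def projected_flops(baseline_flops, years, doubling_months=14):
    """Performance after `years`, assuming one doubling every
    `doubling_months` months of uninterrupted exponential growth."""
    doublings = (years * 12) / doubling_months
    return baseline_flops * 2 ** doublings

baseline = 17.6e15  # ~17.6 PFLOPS, roughly Titan-class (assumed)
print(f"Projected in 10 years: {projected_flops(baseline, 10):.3e} FLOPS")
```

Of course, the whole disagreement is about whether that doubling time holds indefinitely; the arithmetic only shows how dramatic the outcome is if it does.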
We've got quite a few years to go... but I firmly believe that we will actually beat his prediction. Artificial intelligence surpassing that of humans is an inevitability. If we accept that the 'mind' is the sum of the parts of the 'brain,' then we can safely say that a simulation of a brain should produce a mind. Besides this approach, we could independently create a strong AI based on other methods. Just think of the myriad AI systems we have seen in the past few decades that people 20 or 30 years earlier would've laughed off as ridiculous - Deep Blue, Watson, and the Google car.
I find that many of the arguments the article lists as 'far-fetched' are actually not far from where we are today.<p>If you really think about it, how far are we from there being 'no distinction... between human and machine or between physical and virtual'?<p>I specifically think of the 'physical and virtual' part. I'm currently having a virtual 'conversation' regarding this topic. None of you who may respond are with me; we're doing this all virtually, and I doubt the time is far off when somebody can have a machine respond coherently with their general thoughts on a subject.<p>When the author criticizes Kurzweil for not differentiating between biological evolution and mechanical evolution, I think that our biological evolution has in many ways been a result of our technological evolution - would one exist without the other? Isn't our technological evolution in the areas of food production driving our genetic evolution toward larger (both taller and fatter) people?<p>I know I'm picking some pretty minor things here, and that means the singularity is still a long way off, but compare the last 40 years to the next 40 years.<p>Not everything Kurzweil describes as part of the Singularity has to happen exactly as he describes it. It's a theory, and (from getting only about 1/3 through the book) an interesting one.<p>I guess haters gonna hate.
Incorporation gives a company the legal status of a biological entity. Within 25 years I think we will have the first company staffed entirely by AI machines (trading bitcoins?).<p>If machines ever get the "inventive spark", it seems clear to me there will be a rapid expansion of machine influence on the universe. Computation is just silicon (rocks), after all.<p>So that's the big IF. Can we invent inventive machines? Machine learning is certainly advancing at a rapid pace - NELL, etc.