I see a big parallel between the predictions for advances in neuroscience and all the predictions that were made prior to the sequencing of the human genome (the author touches on this a bit too). Lots of smart scientists really believed that once the human genome was sequenced, we would have the keys to the biological kingdom. What has actually happened is that we have discovered the system is probably an order of magnitude more complex than previously thought. Knowing the sequence of a gene turns out to be important, but a pretty minor factor in explaining its function. Plus we are learning that all sorts of simple rules we thought were true aren't always the case.<p>I suspect a similar thing is playing out in neuroscience. As we peel back the layers of the onion, ever more complexity will be revealed. The things Ray Kurzweil predicts may well come true. He is a brilliant guy. But the timetable is very optimistic.<p>The march of biological progress is very slow, in part because all the experimentation involves living things that grow, die, get contaminated, run away, don't show up for appointments, get high, etc. Lots of people from other scientific disciplines, especially engineering-related ones, underestimate just how long even the simplest biological experiments can take.
He would be correct if the creation of AI depended on a thorough understanding of neuroscience. But I hope we needn't wait that long.<p>It's the old "Birds fly. To fly, man must fully understand bird flight." argument. Yet today we still don't completely understand bird flight, but planes _do_ fly.<p>The analogy is not complete: we have yet to find the "air", the "turbulence", the "Bernoulli principle", etc. of intelligence. That is to be determined. But this approach is the only reasonable one.<p>As the author implies, waiting for neuroscience is like waiting for Godot.
Not interested in arguing about his timetable, but the example of DNA sequencing affording only a linear increase in understanding is bogus, and he ought to know that. It has <i>significantly</i> accelerated genetics research by making mapping a matter of a browser search. Take, as an example, the fly lines developed by Gerry Rubin <i>et al</i>, which can be manipulated to express any reporter gene in any genetically defined brain locus. That would have been completely infeasible prior to complete genomic sequencing of the fly.
The OP asks reasonable technical questions about medical nanorobots. I'm not going to defend Kurzweil, but some less-sloppy thinkers have written about this kind of stuff, like Merkle, Freitas, and Drexler. E.g. <a href="http://www.merkle.com/cryo/techFeas.html" rel="nofollow">http://www.merkle.com/cryo/techFeas.html</a>
<a href="http://www.nanomedicine.com/NMIIA/15.3.6.5.htm" rel="nofollow">http://www.nanomedicine.com/NMIIA/15.3.6.5.htm</a>
They do tackle questions like how you power these things; I wish he'd read and criticize them instead.<p>A 7-micron-long medical nanorobot sounds pretty damned big to me, btw -- in _Nanosystems_ Drexler fits a 32-bit CPU in a 400nm cube, less than 1/300 of the volume if we're talking about a 1-micron-radius cylinder.
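The volume comparison above is easy to sanity-check. A minimal sketch, assuming (as the comment does) a cylindrical nanorobot 7 µm long with a 1 µm radius and Drexler's CPU as a 400 nm cube:

```python
import math

# Nanorobot modeled as a cylinder: V = pi * r^2 * h, in cubic microns.
robot_volume = math.pi * 1.0**2 * 7.0   # ~22.0 um^3

# Drexler's 32-bit CPU as a 400 nm (0.4 um) cube.
cpu_volume = 0.4**3                      # 0.064 um^3

# Ratio of robot volume to CPU volume.
print(robot_volume / cpu_volume)         # ~343, i.e. the CPU is < 1/300 of the robot
```

This bears out the "less than 1/300 of the volume" figure: the ratio comes out around 340.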
This article is very similar to ones that biologists were publishing in the mid-'80s, when Kurzweil predicted the mapping of the human genome within 15 years. It's interesting how exponential progress is counter-intuitive even for those who have been experiencing it in their fields for years.
My big problem with Kurzweil's singularity is the massive handwaving he does between 'computers are getting exponentially faster' and 'AI will arise'.<p>This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form. Maybe it is, maybe it isn't, but the lack of progress in hard AI in the last 30 years isn't a good sign.
Nice to see a post on this topic from a neuroscientist, as I am very interested in this area but know little biology.<p>One question though--the author says "while the fundamental insights that have emerged to date from the human genome sequence have been important, they have been far from revelatory." While not guaranteed, doesn't it seem likely that we will understand <i>much</i>, <i>much</i> more about the human genome once the economies of scale come into play? The price of sequencing a genome is currently on the order of $10,000, and if prices continue to fall at the rate they have (which seems likely, based on both past price decay and in-development technologies), the cost to sequence a genome will be on the order of $100 well before the end of this decade. Once we sequence millions or billions of genomes and compare the information in those genomes with data from the corresponding human subjects, I suspect we will learn a lot more than we would by trying to understand a single person's genome. Moreover, given that the human genome is on the order of roughly a gigabyte, it would seem difficult, but not unreasonably so, to try and understand most of the information in our DNA.<p>Thanks for any insight you can provide.
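The $10,000-to-$100 extrapolation above can be sketched as a simple halving calculation. The halving period here is an assumption for illustration (historically, sequencing costs have often fallen even faster than Moore's law):

```python
import math

# How many cost halvings to go from ~$10,000 to ~$100 per genome?
start_cost = 10_000.0
target_cost = 100.0
halvings = math.log2(start_cost / target_cost)   # ~6.6 halvings

# Assumed (hypothetical) halving period of one year.
years_per_halving = 1.0
print(halvings * years_per_halving)              # ~6.6 years to reach ~$100
```

Under that assumed one-year halving rate, $100 genomes arrive in well under a decade, consistent with the comment's "well before the end of this decade."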
I've never been impressed by the "simulate a single human" approach to AGI.<p>I don't know why it appeals to people. Has Christianity infected people with a desire for personal immortality? Are people inured to flushing billions and billions down the drain on biomedical research?<p>Another issue is that humans aren't that great anyway. The "game of life" is really about statistical inference, and people aren't that good at it -- the success of Las Vegas proves it. If you can eliminate the systematic biases that people exhibit when dealing with uncertainty, you can make intelligence which is qualitatively superhuman, not just quantitatively superhuman.<p>It's much more believable that steady progress will be made on emulating and surpassing human faculties. This won't be based on any one particular methodology (symbol processing, neural nets, Bayesian networks) but will be based on picking and choosing what works. Progress is going to be steady here because progress means better systems each step of the way.<p>Sure, the Hubert Dreyfuses will be with us each step of the way and will diminish our accomplishments... and they might still be doing so long after we're living in a zoo.
I find two aspects of the singularity compelling.<p>Singularities have happened in the past when life evolved a solution to a local problem, such as photosynthesis, the social primate, and agriculture.<p>Kurzweil's singularity is just one of many potential singularities, but the near future seems to contain either major innovation of great generality or collapse.
The likelihood of Kurzweil's particular vision of the singularity in this case doesn't say anything about the likelihood of the singularity in general, i.e. by the creation of artificial intelligence through methods that are nearer at hand than nanobots or whole brain emulation.
For me this complexity problem could be insurmountable. I think the best approach may be to sidestep the issue and try selective breeding of increasingly intelligent virtual beings.
Before knocking Kurzweil's predictions, review his predictions of the 1990s and the people who mocked them. Kurzweil does not have a perfect track record, but I think his accuracy in predicting the future is way above average.<p>Also, I find his views of the future enlightening and useful, as he illustrates lots of "just out of reach" engineering projects for me to consider tackling.<p>For the years between 1990 and 2005, Kurzweil predicted the following:<p><pre><code>  * People will mainly use portable computers.
* Portable computers will be lighter and easier to transport.
* Internet access will be available almost everywhere.
* Device cables will disappear.
* Documents will have embedded moving images and sounds.
* Virtual long distance learning will be commonplace.
</code></pre>
Mock his current predictions with care. <a href="http://www.associatedcontent.com/article/8181399/the_predictions_of_ray_kurzweil.html" rel="nofollow">http://www.associatedcontent.com/article/8181399/the_predict...</a>
I would agree he's certainly right about Kurzweil's unrealistic optimism, but I'm not sure our understanding of the brain (and other aspects of our biology for that matter) isn't increasing exponentially. Perhaps rather it just seems linear compared to the turbo-charged progress of these enabling technologies? Certainly we've come a lot further since Phineas Gage than a linear trajectory would allow.<p>He should have thrown around some numbers while he was at it. I wonder if he'd agree with clinical immortality by the end of this century, and mind-uploading by the end of the next?