The thing people don't understand is that in order to simulate human intelligence, you have to be able to simulate TWO things:<p>1) A human brain<p>2) An entire course of human development, up to the age at which the intelligence you are looking for appears<p>The first one is not the harder of the two.<p>Now, many AI researchers believe they can cut corners on the whole simulating-an-entire-human-lifetime thing, and that they can use a more impoverished simulation and make up for it in volume... say, just flashing a billion images in front of the AI and hoping that's enough to form the specific subset of intelligences you are hoping for. Or letting the AI read the entire internet. But at this point it's an open question whether that could even theoretically lead to generalized intelligence.
[Regarding power of artificial intelligence] "...If Moore's law continues to hold then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024."<p>I guess his prognostication here depends on super-powerful computing and brain-emulation software. China's Tianhe-2 has already hit roughly 3.4 * 10^16 ops (about 34 petaflops), and Bostrom was anticipating 10^14 - 10^17 as the runway. Now, I am not sure what the state of brain emulation is at the moment, but it looks like our biggest snag is there. Researchers are having a hard time coming up with new paradigms for artificial intelligence software. Anyone have any insight into that?
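For concreteness, a quick back-of-the-envelope check in Python, taking Tianhe-2's roughly 34-petaflop Linpack figure at face value and Bostrom's 10^14 - 10^17 window from the paper (both numbers are approximations, not benchmarks I'm vouching for):<p>
    # Where does Tianhe-2 sit in Bostrom's 1e14 - 1e17 window?
    tianhe2_ops = 3.4e16           # ~34 petaflops (Linpack, approximate)
    lower, upper = 1e14, 1e17      # Bostrom's estimated range for the brain
    print(lower <= tianhe2_ops <= upper)                     # True: inside the window
    print(f"{tianhe2_ops / upper:.0%} of the upper bound")   # ~34% of the upper bound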
There's a more recent article in the New Yorker that follows Mr. Bostrom around a bit and is a good general read:
<a href="http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom" rel="nofollow">http://www.newyorker.com/magazine/2015/11/23/doomsday-invent...</a>
That took some... balls, back in 1997.<p>There were a lot of strong-AI sceptics, who repeated over and over: oh, computers can calculate, but can they play chess? Oh, chess was easy, how about understanding what this picture is about? Driving cars? Talking like humans? Oh, they can talk now, but do they really _think_?<p>Reality happens faster than anybody imagined. Except a few visionaries like Bostrom.
Let's look at the most important section of the paper. He estimates the processing power of the brain:<p><i>The human brain contains about 10^11 neurons. Each neuron has about 5 • 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops. The true value cannot be much higher than this, but it might be much lower.</i><p>In other words, there are 5 * 10^14 synapses in the brain, and each synapse transmits up to 100 signals per second, and we can probably encode each signal with 5 bits. That's ~10^17 bits per second.<p>So, uh... does anybody else notice that that's <i>not an estimate of processing power</i>?<p>That's an estimate of the rate of information flow between neurons, across the whole brain.<p>The level of confused thinking here is off the charts. Does this guy not understand that in order to simulate the brain, you not only have to keep track of information flows between neurons, you also need to <i>simulate the neurons themselves</i>?<p>That's not merely a flaw in his argument. It indicates that he has no idea what he's talking about, at all.<p>Needless to say, this paper and its conclusions are complete nonsense.
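To make that distinction concrete, here is a rough Python sketch. The first part simply redoes Bostrom's multiplication from the quoted passage, which yields a bandwidth figure; the second part shows the extra cost a simulation would also have to pay for updating neuron state, using purely hypothetical placeholder numbers (nobody knows the real per-neuron cost):<p>
    # Bostrom's own inputs, straight from the quoted passage.
    neurons = 1e11                 # neurons in the human brain
    synapses_per_neuron = 5e3
    firing_rate_hz = 1e2           # average signalling frequency per synapse
    bits_per_signal = 5

    # What the multiplication actually yields: a bandwidth figure.
    synapses = neurons * synapses_per_neuron                       # ~5e14
    bits_per_second = synapses * firing_rate_hz * bits_per_signal
    print(f"inter-neuron traffic: ~{bits_per_second:.1e} bits/s")  # ~2.5e17

    # What a simulation would also have to pay for: updating every neuron's
    # internal state each timestep. Both numbers below are HYPOTHETICAL
    # placeholders, chosen only to illustrate that this cost is separate.
    ops_per_neuron_update = 1e4
    updates_per_second = 1e3
    sim_ops = neurons * ops_per_neuron_update * updates_per_second
    print(f"neuron-state updates alone: ~{sim_ops:.1e} ops/s")     # ~1e18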
Seems AGI is all the rage these days. David Deutsch has an article that outlines a good point: we won't have an AGI before a good theory of consciousness. Some philosopher will first need to explain consciousness in detail (more so than Dan Dennett, who has already done an amazing job), and then neuroscientists might have to prove that theory right, AND THEN AI researchers will be able to take a stab at it. So I don't think it will just pop into existence by running some neural network training over and over again.
AI is the wrong way to go looking for superintelligence.<p>Far more realistic is developing ways of organizing humans effectively enough to achieve superintelligent levels of collaboration.<p>I think before 2025 is quite reasonable, given this approach.
Yup, the XKCD translations still hold: <a href="https://xkcd.com/678/" rel="nofollow">https://xkcd.com/678/</a>
Nick Bostrom is a peddler of the apocalypse who has made his name by spreading fear about a fairy creature called superintelligence. He's convinced people to go looking for weapons of mass destruction in snippets of math and code. But the WMDs aren't there, any more than they were in Iraq. Nice work if you can get it.
<i>The human brain contains about 10^11 neurons. Each neuron has about 5 * 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops.</i><p>This kind of nonsense is why no one should take Bostrom seriously. We did not then, and do not now, even begin to know <i>how</i> to write software to "simulate" a human brain, or whether such a task is even possible with modern-day tools. Multiplying random neurobiology stats by 5 bits pulled out of your ass == AI in 2004?<p>We have "AI" that can drive a car or a copter, play Chess or Go, translate speech to text, do image recognition... but what we mean by human intelligence is something different. And I see no evidence anyone has made much progress developing a truly biological-like AI even at the level of, say, a mouse. Which, according to Bostrom's math, ought to be doable in a 2U chassis by now, right?<p>If someone does succeed in writing mouse-AI or dog-AI, I'd believe that could be scaled up to human-level intelligence very rapidly. But it's clear to me there's (at least one) fundamental breakthrough missing from the current approach, because my dog can't play chess or drive a car, but he has a degree of sentience and awareness (and self-awareness) that no 2016 AI even approaches.
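To put a rough number on the 2U-chassis quip, here is Bostrom's per-neuron arithmetic applied to a mouse brain. The ~7 * 10^7 neuron count is an approximate literature figure, and the server throughput below is an assumed round number for a GPU-equipped box, not a benchmark:<p>
    # Bostrom's per-neuron arithmetic, applied to a mouse brain.
    mouse_neurons = 7e7            # ~70 million neurons (approximate literature figure)
    synapses_per_neuron = 5e3      # Bostrom's human figure, reused as-is
    firing_rate_hz = 1e2
    bits_per_signal = 5

    mouse_estimate = mouse_neurons * synapses_per_neuron * firing_rate_hz * bits_per_signal
    print(f"mouse 'ops' by Bostrom's method: ~{mouse_estimate:.1e}")   # ~1.8e14

    # HYPOTHETICAL round number for a GPU-equipped 2U server today.
    server_ops = 1e14
    print(mouse_estimate <= 2 * server_ops)    # True: "doable in a 2U chassis" by this math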