There's that one iconic image of neurons suspended in space with bolts of electricity flashing between them. We are told that's how our brains work.

We are then shown a diagram by a computer scientist. Instead of cells and thunder, we see circles and arrows. Then we are told there is an algorithm that simulates what the brain does. Voilà, we have our artificial neural network. Not only do they look similar, they share two words: neural and network!

And so for most of us there is only one logical conclusion: it does what our brain does, so once our computers have the power our brains do, we'll have the singularity!

Of course, we now know this is complete bullshit.

Basically, computer scientists took the names and those initial abstractions and ran with them. They never looked back at the biology or at how brains actually work. The result is a ton of great research, but it has strayed further and further from neuroscience and from humans. Which is obvious, because they're staring at code and computers all day, not brain meat. If AlphaGo proved one thing, it's that we've made enormous progress in computation, but in a different direction. The mere fact that average people are generally bad at Go should be enough to show that AlphaGo is not human (in many ways it's beyond human).

In the meantime, our neuroscientists have made progress too, except they've done it staring at the actual brain. And it's now at the point where our brains look nothing like the original image that inspired our computer scientists.

Now there is this (Harvard research): https://www.youtube.com/watch?v=8YM7-Od9Wr8

And this (MIT research): https://www.ted.com/talks/sebastian_seung?language=en

With advancement comes new vocabulary, and the new word this time is connectome.

Some incredibly smart computer scientists will, again, take the term and all the diagrams and start programming based on them. The result will be Artificial Connectomes, and they will blow our socks off. Now, don't get me wrong: I am not being sarcastic here. This is what _should_ happen. And with every iteration, we will get closer to AGI.

It's just that whenever I see articles about machine learning and neural networks, I can't help but think of that classic artist's rendition of neurons firing, and how it's basically complete bullshit. Like the Bohr atom, it's an illustration of a theory, not of reality. Now we have wave-function diagrams and connectomes. But as any physicist will tell you, anyone still drawing Bohr atoms is stuck in the 20th century.