With articles like this, I want a "check back in 2 years" reminder, to see how the science shakes out. I'm not smart or informed enough to judge these current-events-style updates for myself.
There are three things that have always baffled me about the current deep-learning-based AI field's lack of interest in parallels with the biological brain:<p>1. The biological plausibility of backprop.<p>2. The lack of interest in, or consideration of, time-continuous input to networks. They are currently discrete, and "learning" and inference are done separately. That's not how most organisms work.<p>3. The lack of consideration of how brains grow (architecture, not weights).<p>It might just be me missing something, but I really have a hard time seeing how things would scale in the real world (e.g., in robotics applications of neural nets) without those things addressed.
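On point 1, one well-known line of work on biologically plausible alternatives is "feedback alignment" (Lillicrap et al.), which sidesteps the weight-transport problem: the backward pass carries the error through a fixed random matrix instead of the transpose of the forward weights. A minimal sketch, assuming a tiny two-layer network and a made-up toy regression task (all dimensions and data here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer net: 4 inputs -> 8 hidden units -> 1 output
W1 = rng.normal(0, 0.5, (8, 4))
W2 = rng.normal(0, 0.5, (1, 8))
# Fixed random feedback weights -- never learned, never tied to W2.
# Exact backprop would use W2.T here instead.
B = rng.normal(0, 0.5, (8, 1))

def tanh(x):
    return np.tanh(x)

# Hypothetical toy data: target is the sum of the inputs
X = rng.normal(size=(4, 32))
y = np.sum(X, axis=0, keepdims=True)

def mse():
    return float(np.mean((W2 @ tanh(W1 @ X) - y) ** 2))

loss0 = mse()
lr = 0.01
for step in range(2000):
    h = tanh(W1 @ X)              # forward pass
    y_hat = W2 @ h
    e = y_hat - y                  # output error
    # Backward pass: note B, not W2.T, delivers the error signal
    dh = (B @ e) * (1 - h ** 2)
    W2 -= lr * e @ h.T / X.shape[1]
    W1 -= lr * dh @ X.T / X.shape[1]

loss = mse()
print(loss0, loss)  # loss decreases as the network fits the toy target
```

The surprising empirical finding is that the forward weights tend to adapt so that the random feedback directions become useful, so learning still works without the symmetric weight transport that makes vanilla backprop biologically implausible.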
> Nonetheless, Hinton and a few others immediately took up the challenge of working on biologically plausible variations of backpropagation.<p>Trying to prove the plausibility of a theory is one approach to science, I guess... The researchers have already concluded that brains are simply information-processing machines and that AI techniques are a sufficiently representative model for learning what brains are like.<p>I don't see how this research could give us clues to anything other than what the researchers already presume to be true.
"In 2007, some of the leading thinkers behind deep neural networks organized an unofficial “satellite” meeting at the margins of a prestigious annual conference on artificial intelligence. The conference had rejected their request for an official workshop; deep neural nets were still a few years away from taking over AI."<p>The author almost makes this sound nefarious or short-sighted. Workshops and symposia get rejected all the time for a mundane reason: too many submissions for the available schedule resources at the conference. Important research gets "rejected" all the time, and the selection committees are not saying your topic or research is silly, illegitimate, or fantasy.
This linear model doesn't seem to reference those memories when considering new memories. You'd need a secondary processing unit for addressing the memories based on the current situation or argument. This is a decent model for how cells develop and how memory cells are maintained. However, its creation still seems to be very binary, relying on I/O rather than variance.<p>Maybe this will help.<p><a href="https://ieeexplore.ieee.org/document/9325353" rel="nofollow">https://ieeexplore.ieee.org/document/9325353</a>
I’m rather disappointed with the write-up. The way the author outlines these advances doesn’t really tell me what’s going on. I have some years of experience working with neural networks and I’m reasonably comfortable with the concepts.<p>Perhaps more surprisingly, the mentioned ‘advances’ are not cited!
Really nice to read a round-up of advances in biologically plausible algorithms. The field, responding to incentives, has, in my subjective opinion, undervalued this class of advancement. I expect that once we've wrung the value out of the current techniques, this is the direction advancements will be made in.
Does anyone else notice that a lot of this stuff is just rehashed forms of things from decades prior?<p>Someone tried making a computer like this decades ago.<p>Ex Machina had a plot device like this too, used to make the robot’s transistor-based brain.