The paper, at least as shown here [1], is vague about which results came from implanted electrodes and which came from functional MRI data. Functional MRI measures blood flow, not neural activity directly. It's like pointing a thermal imager at an IC and trying to figure out what it is doing.

[1] https://archive.is/650Az
I want to see a cat's POV when it's startled by a cucumber (YouTube has lots of examples). One theory is that part of the brain mistakes it for a snake. Also research on "constant bearing, decreasing range" (CBDR), where drivers may fail to notice another car/cycle at a perfectly clear crossroads until it's too late: because the other vehicle stays on a constant bearing, it sits at a fixed point in the driver's visual field and produces no apparent motion to catch the eye. A toy sketch of the geometry is below.
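For anyone curious why CBDR is so insidious, here's a minimal sketch (made-up positions and speeds, just to illustrate the geometry): two vehicles on straight constant-speed paths that meet at the origin. The bearing from one to the other never changes while the range collapses.

```python
import math

# Hypothetical collision course: vehicle A heads east at 1 unit/s,
# vehicle B heads north at 2 units/s; both reach the origin at t = 10.
def pos_a(t):
    return (-10.0 + 1.0 * t, 0.0)

def pos_b(t):
    return (0.0, -20.0 + 2.0 * t)

for t in range(10):
    ax, ay = pos_a(t)
    bx, by = pos_b(t)
    bearing = math.degrees(math.atan2(by - ay, bx - ax))
    rng = math.hypot(bx - ax, by - ay)
    print(f"t={t}: bearing={bearing:6.2f} deg, range={rng:6.2f}")

# Output shows the bearing pinned at about -63.43 degrees while the
# range shrinks every second: the other vehicle occupies a fixed spot
# in the visual field, so peripheral motion detection never fires,
# and it just grows silently until it looms.
```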
I think it would be interesting to know whether the viewer's familiarity with the object informs how accurate the reconstruction is. This shows presumably lab-raised macaques looking at boats and tarantulas and goldfish, and that's cool. But a macaque whose life has been spent indoors in confinement presumably has no mental concepts for these things, so they're basically seeing still images of unfamiliar objects. If the animal has, e.g., some favorite toys, or has eaten a range of foods, do they perceive those things with higher detail and fidelity?
It reminds me of the research where faces monkeys were seeing were reconstructed almost identically.

https://www.bbc.co.uk/news/science-environment-40131242

https://www.cell.com/cell/fulltext/S0092-8674(17)30538-X
Maybe I missed this, but isn't the underlying concept here big news?

Am I understanding this right? By reading activity from areas of the brain, a machine can effectively act as a rendering engine, recovering colour, brightness, etc. of the image the subject is seeing? And AI is being used to help because this readout is lossy?

This seems huge. Is there other terminology around this I can kagi to understand more?
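The search terms are "neural decoding" / "brain-to-image reconstruction". As I understand the usual pipeline (and this is a sketch on synthetic data, not the paper's actual method): you fit a regression from recorded responses to a feature/latent representation of the image, then a separately trained generative model turns the decoded latent back into pixels. Something like:

```python
# Toy neural-decoding sketch with entirely synthetic data. Real studies
# use thousands of (image, response) pairs from fMRI voxels or
# electrode arrays, and a learned image embedding instead of random
# latents -- this only illustrates the regression step.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_neurons, n_latent = 500, 200, 64

# Pretend ground truth: neural responses are a noisy linear readout
# of the image's latent features (stand-in for e.g. a CNN embedding).
latents = rng.normal(size=(n_train, n_latent))
readout = rng.normal(size=(n_latent, n_neurons))
responses = latents @ readout + 0.5 * rng.normal(size=(n_train, n_neurons))

# The decoder: ridge regression from brain responses back to latents.
# This is where the "lossy" part lives -- it recovers only whatever
# the recorded population actually encodes.
decoder = Ridge(alpha=10.0).fit(responses, latents)

# Decode a held-out trial and compare with the true latent.
test_latent = rng.normal(size=(1, n_latent))
test_response = test_latent @ readout + 0.5 * rng.normal(size=(1, n_neurons))
decoded = decoder.predict(test_response)

corr = np.corrcoef(decoded.ravel(), test_latent.ravel())[0, 1]
print(f"correlation between decoded and true latent: {corr:.2f}")

# In the published work, a generative model (GAN or diffusion decoder)
# then maps the decoded latent to the reconstructed image -- that's
# the "AI helping because it's lossy" part.
```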