Maybe I missed this, but isn't the underlying concept here big news?

Am I understanding this right? It seems that by reading areas of the brain, a machine can effectively act as a rendering engine, with knowledge of colour, brightness, etc. per pixel, based on an image the person is seeing? And AI is being used to help because this method is lossy?

This seems huge. Is there other terminology around this I can kagi to understand more?