Sounds like a fundamental misunderstanding of how current AI systems work. There is no "understanding" or "consciousness" present in LLMs such as ChatGPT. There is only an enormous database of text culled from the internet and other sources, and very clever algorithms to predict the most probable next word or word fragment (token). This does a good job of giving the illusion that there's intelligence guiding the conversation, but it's really only a very sophisticated electronic parrot.

The good news is that this capability can be very useful in spotting patterns in large amounts of seemingly random data. But it would be a big mistake to assume that this indicates anything more than dumb search and retrieval.
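For the curious, "predict the most probable next token" boils down to something like the toy sketch below. The vocabulary and scores here are made up for illustration; a real model assigns scores to tens of thousands of candidate tokens using a learned neural network, not a hand-written list.

    import math
    import random

    # Hypothetical candidate tokens and the model's raw scores (logits) for them.
    vocab = ["parrot", "genius", "database", "algorithm"]
    logits = [2.1, 0.3, 1.2, 0.7]

    # Softmax turns raw scores into a probability distribution over tokens.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # Greedy decoding: always take the single most probable token...
    greedy = vocab[probs.index(max(probs))]

    # ...or sample from the distribution, which is roughly what chat models do,
    # and why their output varies from run to run.
    sampled = random.choices(vocab, weights=probs, k=1)[0]

    print(greedy, sampled)

Whether you read that loop as "parroting" or as something more is exactly the disagreement in this thread.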
Cf. fMRI and the salmon of consciousness.

Look, I think there should be funding for consciousness science. I don't think it should be a panic, I don't think it should be tied to AI or AGI, and I don't think anyone who believes in "the Turing test" as some magic arbiter should be involved.

But it feels to me like the question of what consciousness is, is multi-disciplinary, and answering it would pay back in improvements to anesthesiology, mental-health prescribing, post-traumatic injury recovery, and maybe, a very thin maybe, in AI and the modelling of human decision making.

BTW, I'm also pretty sure it has been an ongoing, funded line of research for decades; Norbert Wiener was working on it years ago.
> There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.

I wonder what kinds of things they're including in this statement.