As someone with a background in philosophy, I found this a pretty vague and unsatisfying article that throws a lot of terms around without making any concrete points. Sloppy thinking, frankly, and the kind of thing that gives philosophy a bad name.

I don't deny that these AI tools will probably have major effects on society, but as far as I can tell from this rambling article, the idea is that because LLMs have "interiority", humans will rethink the notion of consciousness and start applying it to machines, presumably granting them rights and so on.

This narrative has never made sense to me, because I don't think consciousness or intelligence has ever really been the relevant factor in determining moral worth. Plenty of things have consciousness or intelligence. The relevant factor is humanity, and for the foreseeable future it will be very easy to determine biologically what is human and what isn't. Until that becomes impossible, virtually no one is going to ascribe personhood to an AI; no matter how complex AI systems get, they will still be perceived as complex machines, not selves.

I haven't seen anyone address this point, though I also haven't read many of the responses to the Turing Test that factor in recent developments. (So I'd be glad if anyone critiques the argument here.)