It seems probable at this point that we're going to have ML that can pass every test of fooling humans without ever addressing the meaty issues people like to grapple with. That is, we can build statistical models that are indistinguishable from reality up to some detailed level of inspection. But as far as anybody can tell, they have none of the properties we normally associate with humans, such as free will, consciousness, or agency (btw, I don't think humans have any of those things either, to the extent that they are physically definable).

Once we get to that point, we can start asking more interesting questions, such as "why are we so biased by our mental models of how our mental models work when thinking about intelligence?"