Eventually we may produce a collection of solved problems exhaustive enough that these tools can handle almost any problem that isn't novel in practice, but I doubt that they will ever become general problem solvers capable of what we consider reasoning in humans.<p>Historically, the claim that neural nets were actual models of the human brain and human thinking was always epistemically dubious. It still is. Even as the <i>practical</i> problems of producing better and better algorithms, architectures, and outputs have been solved, there is no reason to believe that a connection between the mechanical model and what happens in organisms has been established. The most important point, in my view, is that all of the representation and interpretation still has to happen outside the computational units. Without human interpreters, none of the AI outputs have any meaning. Unless you believe in determinism and an overseeing god, the story for human beings is much different. AI will not be capable of reason until, like humans, it can develop socio-rational collectivities of meaning that are <i>independent</i> of the human being.<p>Researchers seemed to have a decent grasp of this in the 90s, but today everyone seems all too ready to make the same ridiculous leaps as the original creators of neural nets. They did not show, as they claimed, that thinking is reducible to computation. All they showed was that a neural net can realize a <i>boolean function</i>, which is not even logic, since, again, the entire semantic, interpretive side of the logic is ignored.
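<p>To make that last point concrete, here is a minimal sketch (my own illustration, with an arbitrary threshold_unit helper and arbitrary weights, not anything from the original papers) of a McCulloch-Pitts-style threshold unit whose truth table happens to coincide with boolean AND. Nothing in the arithmetic says that 1 "means" true or that the table "is" conjunction; that reading is supplied by the human interpreter.<p>
    # A single threshold unit: fire (return 1) iff the weighted
    # sum of the inputs reaches the threshold.
    def threshold_unit(inputs, weights, threshold):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # With weights (1, 1) and threshold 2, the unit's outputs over
    # {0, 1} x {0, 1} match the truth table of AND.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, threshold_unit((a, b), (1, 1), 2))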