“Some people think, hey, there's this ultimate barrier, which is we have subjective experience and [robots] don't, so we truly understand things and they don't,” says Hinton. “That's just bullshit. Because in order to predict the next word, you have to understand what the question was. You can't predict the next word without understanding, right? Of course they're trained to predict the next word, but as a result of predicting the next word they understand the world, because that's the only way to do it.”

When Geoffrey says they understand the question, are these models really understanding it, or is it just "transforming" in the sense that some kind of lookup is happening: "given something that looks like this, I should say x next" (rough sketch of the mechanics below)? Or is he just arguing that he believes humans do the same thing?

Personally I don't think we do, because somehow we know we're wrong long before we give an answer, even when we give false answers. People give out false information to try to feel important, to be helpful, or to avoid being redundant, etc.
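To make the "lookup" question concrete, here's a minimal sketch of what "predicting the next word" actually computes, using GPT-2 via the Hugging Face transformers library (my choice of model and library purely for illustration, nothing from the interview): the model maps the entire context to a probability distribution over its whole vocabulary, rather than matching the input against a stored table of seen strings.

```python
# Minimal sketch: next-word prediction as a conditional distribution
# over the vocabulary, not a table lookup. GPT-2 / transformers is an
# arbitrary illustrative choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "In order to predict the next word, you have to"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# Probability assigned to every token in the vocabulary,
# conditioned on the full context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```

Whether computing that distribution counts as "understanding" is of course exactly the point under debate, but mechanically it isn't a lookup of previously seen question/answer pairs.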