Where this all falls down is in the assumption that human minds are consistent systems. I have to wonder if anyone who thinks such a thing has ever met any actual humans. Even a brief interrogation of a typical human as to their beliefs and assumptions about the world should rapidly disabuse us of the thought.

Humans are perfectly capable of assuming unproved axioms, changing their set of axioms, and accepting contradictory axioms. We do these things all the time, applying one axiom in one situation and its opposite in another. Just ask someone about their political beliefs for a while.

The fact is that human intelligence is not an end in itself; it's a tool we use to achieve goals set by our evolutionary priorities, as encoded in our emotions and needs. These are the things that drive us, not logical axioms and proven truths. Even smart people have an emotional need to be correct, and many will resist tooth and nail having their beliefs challenged and changed. It takes constant effort and self-discipline to keep an open mind toward new ideas and the rejection of existing assumptions; it certainly doesn't come naturally to us.

So this systematic theorising all seems somewhat beside the point. Don't get me wrong: it's interesting and useful philosophical work, no question, but it's not really applicable to actual human minds.
The argument discussed here was first published by Lucas in 1961 ("Minds, Machines and Goedel"). What seems to me a conclusive refutation of it was published by Hilary Putnam in 1960 ("Minds and Machines"). Putnam's point is much the same as simonh's in a comment here: the Lucas(/Penrose) argument makes assumptions about human mathematicians that do not in fact apply to human mathematicians.

[EDITED to add:] Contra simonh, though, one can refute the argument without going so far as to say that human mathematicians are definitely inconsistent. (Maybe a sufficiently careful human mathematician is consistent.) All that's required is that we not be able to _prove_ that we are consistent, and I think it is extremely clear that we can't.

In other words, the Lucas(/Penrose) argument was refuted before it was ever published.

(Penrose's version isn't really any improvement on Lucas's.)

Note: the video is 2 hours long and highly technical. I haven't watched anything like all of it. The speaker is _not_ endorsing Lucas's or Penrose's conclusion that Goedel's theorem shows that minds cannot be mechanized; he makes observations similar to Putnam's, simonh's, and mine, but clearly makes them with more subtlety and intricacy :-).
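In case it helps to see the logical core, here is my own hedged sketch of why "we can't prove we are consistent" blocks the argument. F stands for any consistent, recursively axiomatizable theory containing enough arithmetic, and G_F for its Goedel sentence; none of this is from the video, it's just the textbook form of Putnam's point.

```latex
% Sketch only; assumes F is consistent, recursively axiomatizable,
% and contains enough arithmetic. G_F is F's Goedel sentence.

% G2, Goedel's second incompleteness theorem:
% a consistent F cannot prove its own consistency.
F \nvdash \mathrm{Con}(F)

% Standard companion fact, provable inside F itself
% (the formalized first incompleteness theorem):
F \vdash \mathrm{Con}(F) \rightarrow G_F

% Lucas needs the mechanized mathematician, modeled as F, to
% "see" that G_F is true. By the second line, that amounts to
% establishing Con(F), and by G2 that is exactly what F cannot
% do about itself. So the argument goes through only if the
% mathematician can prove their own consistency.
```

The force of the refutation is that being consistent and provably being consistent come apart, which is why conceding "maybe a careful mathematician is consistent" costs nothing.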
I've only barely started on this video, but I'm currently two thirds of the way through Penrose's "The Emperor's New Mind" and I've read a bunch of other opinion pieces on the subject along the way.

I think I'm just too dumb to understand this discussion. For me, it splits into two potentially separate questions:

- Can a machine emulate a human mind so well that it would be indistinguishable from a "real" human to an external observer? (That's the Turing test, effectively.)

- Can a machine emulate human consciousness?

And maybe a third bonus question:

- Is there a meaningful difference between these two propositions from a scientific perspective? I.e., can we make falsifiable claims that would let us suss out philosophical zombies?

At this point I'm absolutely convinced that the answer to the first question is affirmative. We're not there yet, and there's quite a long way to go before we are, but I really don't see why there would be a fundamental mathematical hurdle along the way. Maybe it exists, but I have yet to find a really compelling argument for where that hurdle would lie concretely. Give me an example of a thought that we couldn't teach a machine.

The second question (and the third) just boil down to "what is consciousness, exactly? Is it even knowable?", and I don't think anybody has an answer to that. My personal intuition is that it's unknowable.