This is a long, verbose article that contains some confused ideas; however, I think it also contains some very interesting ones that could serve as a basis for further consideration.<p>The two ideas I find most interesting are:<p>A) True random numbers cannot be computed by a Turing machine, whereas human minds may have access to them. The author does not address the fact that it is relatively easy to augment a TM with a physical RNG, and I'm not aware of any evidence that such an augmented TM can solve problems a plain TM cannot.<p>B) Human minds seem able to always abstract to a higher-level meta-language, whereas programs/TMs seem to lack this ability. I have a feeling this issue may be key to the difficulty of producing machines that seem "truly intelligent", but I don't currently have a clear grasp of the problem.<p>The author makes extensive use of the term "consciousness" without being very clear about what he means by it. He seems to use it in the sense of what I would call "true intelligence" or strong AI, rather than in the sense of "capable of having subjective experiences".<p>I have generally tended to hold the following "beliefs":<p><pre><code> 1) Algorithms cannot have subjective experiences (strongly held).
2) A strong AI algorithm could exist (somewhat weakly held).
</code></pre>
Issue (B) above leads me to question belief (2). Perhaps it will turn out that "subjective experience" is necessary for strong AI, and hence that no strong AI algorithm exists?
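For what it's worth, the "augmented TM" in (A) is trivial to sketch in code: a seeded PRNG plays the role of the plain TM (its output is a computable function of the seed), while os.urandom stands in for the physical RNG, since most operating systems feed that pool from physical noise sources. The function names are my own illustration, not anything from the article:

```python
import os
import random

def deterministic_bits(seed, n):
    """A pure-TM source: output is fully determined by the seed (a PRNG)."""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]

def physical_bits(n):
    """The 'augmented TM' oracle: os.urandom draws on the OS entropy
    pool, which is typically seeded from physical noise sources."""
    return [byte & 1 for byte in os.urandom(n)]

# The deterministic source is reproducible given its seed...
assert deterministic_bits(42, 16) == deterministic_bits(42, 16)
# ...while the physical source offers no such guarantee.
```

The augmentation is just an extra input channel; the open question in (A) is whether any problem becomes *solvable* with that channel attached, not whether the channel is hard to build.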