The patterns picked up during training don't seem to offer much more variation than a simple Markov chain. The author finds the generated texts conversation-like because that's what they're looking for, but the output looks just as random as picking a random next word that follows the current word somewhere in the training set.
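For reference, a minimal sketch of the baseline I mean: a first-order Markov chain that remembers which words followed each word in the training text and samples among them at random (the training string here is just an illustration):

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed immediately after it."""
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=20):
        """Walk the chain, picking a random successor of the current word."""
        word = start
        output = [word]
        for _ in range(length - 1):
            successors = chain.get(word)
            if not successors:  # dead end: the word never appears mid-text
                break
            word = random.choice(successors)
            output.append(word)
        return " ".join(output)

    training = "the cat sat on the mat and the dog sat on the rug"
    print(generate(build_chain(training), "the"))

Locally every transition is plausible (each pair of adjacent words occurred in the training data), which is exactly why short excerpts can pass for conversation while carrying no coherence beyond one word of context.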
Am I missing something? This reads like gobbledygook to me. I have a sneaking suspicion that this post itself was generated to seem legitimate and credible, when I don't actually believe it is. My mind is exploding a little bit. I am confused. Am I? Hmm.
Other idea: build a computer that is good at telling whether a chat user is a human or a computer (a sort of Turing Test judge bot), then use it as the fitness function to evolve a chatbot (a rough sketch follows below).

That's also what I don't like about the Turing Test: the core trait it rewards is deceit.
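A rough sketch of what that evolutionary loop could look like. The `judge` here is a hypothetical stand-in (it just returns random scores); in the actual idea it would be the trained human-vs-bot classifier, and the string mutation is likewise only illustrative:

    import random
    import string

    def judge(reply: str) -> float:
        """Hypothetical placeholder: score in [0, 1], higher = more human-like."""
        return random.random()  # a real judge would be a trained classifier

    def mutate(reply: str) -> str:
        """Randomly tweak one character of a candidate reply."""
        if not reply:
            return random.choice(string.ascii_lowercase)
        i = random.randrange(len(reply))
        return reply[:i] + random.choice(string.ascii_lowercase + " ") + reply[i + 1:]

    def evolve(population, generations=100):
        for _ in range(generations):
            # Score every candidate with the judge and keep the top half.
            scored = sorted(population, key=judge, reverse=True)
            survivors = scored[: len(scored) // 2]
            # Refill the population with mutated copies of the survivors.
            population = survivors + [mutate(s) for s in survivors]
        return max(population, key=judge)

    seed_population = ["hello there", "how are you", "nice weather today"]
    print(evolve(seed_population))

The judge's score is the only selection pressure, so the bot population evolves toward whatever fools the judge, which is exactly the deceit problem mentioned above.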