The DeepMoji one is fooled by "was my flight delayed? no.". I feel for the computer when it meets that one "do I speak in questions?" person. <i>chuckles.</i><p>On a more serious note, Hinton and others alluded to the need to restructure NLP studies to focus more on the nature of recursion within language, which is basically what Chomsky has been saying for decades. It will be interesting to see whether they converge.
Hello, number 9) doesn't say what the task is.<p>Also, I've always wondered: do those methods work universally across all languages? For example Chinese, Korean and Japanese, which use different writing systems.
I'm wondering if these tasks have a form of bias that decreases performance. If the model sees only positive examples and no negatives, then it is biased toward the positive paths of decisions. The moment the path becomes incorrect, the model can't recover from the mistake because there weren't any negative examples during pretraining. There are many words that never follow certain other words, but the model never sees that negative evidence.
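To make the point concrete, here's a toy sketch (not from the article; a made-up bigram model with add-one smoothing stands in for a real LM): because training only counts observed continuations, an impossible continuation and a merely unobserved one end up with exactly the same probability.

```python
from collections import Counter, defaultdict

# Toy corpus and smoothing are illustrative assumptions, not the article's setup.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))

# Count only the positive evidence: which word actually followed which.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_prob(w1, w2, alpha=1.0):
    """Add-alpha smoothed bigram probability P(w2 | w1)."""
    total = sum(counts[w1].values()) + alpha * len(vocab)
    return (counts[w1][w2] + alpha) / total

# "the the" is ungrammatical and "the sat" is merely unobserved here,
# yet both unseen continuations of "the" get the identical score:
# the model has no way to encode "X never follows Y".
print(next_prob("the", "the"))
print(next_prob("the", "sat"))
```

Nothing in the training signal distinguishes the two unseen cases, which is the worry above: once decoding wanders onto a never-seen path, the model has no negative evidence to steer it back.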