It can "write" like Nabokov but it can't read.<p>Symbolic A.I. Did not fail at text generation but it did not have a real market. Today the web is full of "mad-lib" text generated by SEO spammers. Practically many software users don't seem to care if forms in an application use the right articles (e.g. 'a', 'an'). For that matter you can get superhuman performance at grammar by squashing it's to its 100% compared to ordinary people who do not punctuate correctly.<p>The language embedding business has traded percentages of accuracy on benchmarks for percentages of possible commercial accuracy because of information that is provably lost in the front end.<p>The dialog between linguists and the embedding industry has been slow to develop because when you look for linguistic or semantic concepts in the embedding (say by training a classifier) they are hard to find.<p>For instance you might try a set of 10 color/non-color words, predict the result for another 10 words based on the WordNet embedding and superficially think it is promising. With every word you add to the training and test sets it will perform worse. It passes superficially because it knows many biases like 'anyone named Tyrone is a thug.'<p>That kind of machine may be able to write fiction, but you need a different level of understanding to act justly, do hard things correctly, etc.