I think the work done on ELMo, BERT, and others is great and useful. Unfortunately, there are many grandiose claims circling around these papers, such as the title of this blog post.<p>For example:<p><i>If we’re using this GloVe representation, then the word “stick” would be represented by this vector no-matter what the context was. “Wait a minute” said a number of NLP researchers (Peters et. al., 2017, McCann et. al., 2017, and yet again Peters et. al., 2018 in the ELMo paper), “stick” has multiple meanings depending on where it’s used. Why not give it an embedding based on the context it’s used in – to both capture the word meaning in that context as well as other contextual information?”. And so, contextualized word-embeddings were born.</i><p>This is blatantly false. Contextualized word representations have been around for a very long time. For example, the neural probabilistic language model proposed by Bengio et al. (2003) produces contextual word representations, and many papers on neural language models followed. The idea is even older: Schütze's 1993 paper (Word Space) produces context-dependent word representations with subword units (n-grams).<p>Researchers have been well aware for decades that one would ideally want context-sensitive representations and that representations such as those produced by word2vec or GloVe have this shortcoming. However, one of the reasons word2vec became so popular is that it is damn cheap to train [1], and the possibility of pretraining on much larger corpora gave these simpler models an edge.<p>ELMo, BERT, and others (even though they differ quite a bit) are spiritual successors of earlier neural language models that rely on newer techniques (bidirectional LSTMs, convolutions over characters, transformers, etc.), larger amounts of data, and the availability of <i>much</i> faster hardware than we had one or two decades ago (e.g. BERT was trained on 64 TPU chips, or as Ed Grefenstette called it, <i>blowing through a forest's worth of GPU-time</i>).<p>Disclaimer: I have nothing against this work. I very much enjoyed the ELMo paper. I am just objecting to all the hype/marketing out there.<p>[1] The skip-gram model with negative sampling is very similar to logistic regression, where one optimizes the parameters of two vectors rather than just one weight vector.
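<p>To make [1] a bit more concrete, here is a minimal sketch of a single SGNS update (assuming numpy; the array names, toy dimensions, and learning rate are illustrative, not taken from the word2vec code). Each (center word, sampled word) pair is treated as a binary logistic regression, except that both the "weight" vector and the "input" vector are learned:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, dim, lr = 1000, 50, 0.025
    center_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))   # "input" embeddings
    context_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))  # "output" embeddings

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sgns_step(center, context, negatives):
        # Label 1 for the observed context word, 0 for each sampled negative.
        v = center_vecs[center]
        grad_v = np.zeros_like(v)
        for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
            u = context_vecs[word]
            g = sigmoid(u @ v) - label        # same gradient factor as logistic regression
            context_vecs[word] -= lr * g * v  # update the "weight" vector u ...
            grad_v += g * u
        center_vecs[center] -= lr * grad_v    # ... and, unlike logistic regression, the "input" vector v as well

    # toy usage: word 5 observed near word 42, with 3 negative samples
    sgns_step(5, 42, negatives=rng.integers(0, vocab_size, size=3))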