This is an odd story that seems to gloss over the downsides of neural networks. The computational power needed to build some of these models goes a long way toward explaining the slow uptake, at least in large part. I would be interested to see just how many multiplications go into a typical model nowadays, and in particular into training one (a rough sketch of that arithmetic is at the end of this comment).

But that still skirts the bigger issue, which is generalization. We seem to be moving toward transfer learning, and the danger is that we don't have a good theory of why it works. At the practitioner level, I don't think that is as much of a problem. For research, though, it is pretty shaky.

I think there is a strong chance this remains the future for a while. And I am a layman in this field, at best. But the story presupposes that the past was wrong for not being like the present. That is a tough bar.
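
On the multiplication count: here is a minimal back-of-the-envelope sketch, assuming the commonly cited ~6 x parameters x tokens approximation for dense training compute. The parameter and token counts below are made-up illustrative values, not figures from the article.

    # Rough estimate of training compute for a dense model, using the
    # common ~6 * parameters * tokens approximation (multiply-adds dominate).
    # The sizes below are assumptions for illustration only.

    params = 7e9      # assumed parameter count (e.g. a "7B" model)
    tokens = 2e12     # assumed number of training tokens

    train_flops = 6 * params * tokens          # full training run
    forward_flops_per_token = 2 * params       # one forward pass per token

    print(f"Training: ~{train_flops:.2e} floating-point ops")
    print(f"Forward pass per token: ~{forward_flops_per_token:.2e} floating-point ops")

With those assumed numbers it comes out to roughly 8.4e22 operations for training, which is the kind of figure that makes the slow historical uptake unsurprising to me.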