Honestly, I think this is a good thing for both AI researchers and AI practitioners. One man's AI winter is another man's stable platform.<p>While the number of world-shattering discoveries using DL may be on the decline (ImageNet, playing Atari, artistic style transfer, CycleGAN, DeepFakes, Pix2Pix, etc.), both researchers and practitioners can now work in relative peace on the problem of the last 10%, which is where deep learning has usually struggled. 90% accuracy is great for demos and papers, but not even close to useful in real life (as the Uber fiasco is showing).<p>As an AI practitioner, I found it difficult simply to keep up with the latest game-changing paper (I have friends who call 2017 the Year of the GAN!), only to discover each one's shortcomings later. Why bother keeping up, you may ask? Simply because when we invest time building something that will be in use 5-10 years from now, we want the foundation built on the latest research, and the way most papers present their results makes you believe they are best suited for all use cases, which is rarely true. But when the foundation itself keeps moving this fast, there is no stability to build upon at all.<p>That, and what jarym said is perfectly true as well.<p>The revolution is done; now it's time to evolve these core ideas for actual value generation, and I for one am glad about that.