My favorite of these is the Jeff Dean paper on using neural networks for database indexes, then doing the processing on TPUs. I'm really looking forward to seeing the difference in power required when using a TPU versus a CPU with a traditional approach.
The most promising area here to me seems to be AutoML.
The promise of the new machine learning was that we would get to move away from tedious feature engineering and everything would just work and be simple. It may have become simpler, but training and debugging new DL models is still painful, which has shifted the focus to extensive hyperparameter search. AutoML may become the next step in abstraction, where we design single models/algorithms that can build viable networks for many tasks and purposes.
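To make the "extensive hyperparameter search" concrete, here is a minimal sketch of random search over a hyperparameter space, the simplest building block that AutoML systems automate away. The `train_and_evaluate` function is a hypothetical stand-in for an actual training run; the objective is a toy function, purely illustrative.

```python
import random

def train_and_evaluate(lr, num_layers):
    """Hypothetical stand-in for training a model and returning
    a validation score. Toy objective that peaks near lr=0.01
    with 3 layers (purely illustrative)."""
    return 1.0 / (1 + abs(lr - 0.01) * 100 + abs(num_layers - 3))

def random_search(n_trials, seed=0):
    """Sample hyperparameters at random and keep the best trial."""
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),   # log-uniform learning rate
            "num_layers": rng.randint(1, 6),
        }
        score = train_and_evaluate(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

best_params, best_score = random_search(n_trials=200)
```

AutoML frameworks extend this basic loop with smarter search strategies (Bayesian optimization, evolutionary methods) and widen the search space to include the architecture itself, not just scalar hyperparameters.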
The best one is "I'm too busy for romance": <a href="https://google.github.io/tacotron/publications/tacotron2/index.html" rel="nofollow">https://google.github.io/tacotron/publications/tacotron2/ind...</a>