The notion of running one giant model that has many sub-talents is epic. I can imagine that all the disparate models they run today could fuse into a giant network that melds predictions and guides computation as required by the task. That seems like a very Jeff Dean scale endeavor.
As somebody who's recently started learning more about ML, a lot of the work of an ML engineer does seem to be automatable (not doing research or pushing boundaries, but just applying ML to some product need). For example, choosing hyperparameters, evaluating which features to collect, etc. seem to be things that can be automated with very little human input.

His slide on "learning to learn" has a goal of removing the ML expert from the equation. Can somebody who's more of an expert in the field comment on how plausible that is? Specifically, in the near future, will we only need ML people who do research, because applying ML will be trivial once automated?
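For a concrete sense of how little human input the hyperparameter part can need, here's a minimal sketch of random search, one of the simplest automation strategies. The `validation_loss` function is a toy stand-in I made up for illustration; in practice it would train a model and score it on held-out data.

```python
import random

def validation_loss(lr, num_layers):
    # Hypothetical stand-in for "train a model, return validation loss".
    # A real pipeline would fit and evaluate a model here; this quadratic
    # bowl (best at lr=0.01, num_layers=3) is purely for illustration.
    return (lr - 0.01) ** 2 * 1e4 + (num_layers - 3) ** 2

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),  # sample learning rate log-uniformly
            "num_layers": rng.randint(1, 8),  # sample depth uniformly
        }
        loss = validation_loss(params["lr"], params["num_layers"])
        if best is None or loss < best[0]:
            best = (loss, params)
    return best

best_loss, best_params = random_search()
```

No ML expertise goes into the loop itself; the human contribution shrinks to defining the search space and the evaluation metric, which is roughly what the "learning to learn" slide proposes to automate next.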
If TensorFlow becomes the default library for deep learning, is this a good thing or a bad thing? Does it help in that all researchers can focus on what's important (the data and results), or does it hurt in that Google now controls an important paradigm for the next generation of computing?
As an ML enthusiast, this is incredible to watch!

I'm completely blown away that Google was working on full-scale physical architectures optimized for these problems. Talk about being two steps ahead of the game!
If a doctor misdiagnoses an eye ailment, they might end up with a malpractice lawsuit. If an ML program misdiagnoses an eye ailment, what is going to happen?
Once, in early 2002, when the index servers went down, Jeff Dean answered user queries manually for two hours. Evals showed a quality improvement of 5 points.