Hacker News likes this as a practical application of transfer learning. Fans of machine learning want to see transfer learning as more than a cool trick.

Unfortunately, there is really no good reason for someone seriously interested in accurate information, like a researcher or journalist, to use machine learning for this particular task. Labeling a couple thousand images yourself or with friends is not that big of a task. Do it over a few evenings while watching TV and drinking beer. Or you could have Mechanical Turk workers do it for a few hundred dollars. Either way you will get extremely reliable information. If you use multiple judges, you also get a good estimate of uncertainty for every classification (see the sketch at the end of this comment). There is no way transfer learning can provide that uncertainty information.

The main advantage of this technique remains the ability to quickly label very large amounts of data, on the order of hundreds of thousands of rows or thousands of columns. For smaller data, machine learning can sometimes buy marginal improvements in predictive performance through model complexity. However, in smaller-data regimes prediction mainly matters out of sample, and the machine learning paradigm offers limited support for measuring uncertainty on out-of-sample predictions, which is super important if you are a researcher.

One promise of transfer learning is supporting many, many applications from a single model, but I have yet to see that demonstrated in practice. The problem is that knowing how well learning has transferred requires measuring generalizability, so it cannot be done blindly.
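To make the multiple-judges point concrete, here is a minimal sketch in plain Python (the image names, labels, and three-judge setup are made up for illustration): take the majority vote as the label, and the fraction of dissenting judges as a per-item uncertainty.

    from collections import Counter

    def aggregate(labels):
        # Majority vote plus a simple per-item uncertainty:
        # the fraction of judges who disagreed with the majority.
        counts = Counter(labels)
        winner, votes = counts.most_common(1)[0]
        return winner, 1 - votes / len(labels)

    # Hypothetical judgments, three judges per image:
    judgments = {
        "img_001.jpg": ["cat", "cat", "cat"],  # unanimous  -> 0.0
        "img_002.jpg": ["cat", "dog", "cat"],  # 2-1 split  -> 0.33
    }
    for image, labels in judgments.items():
        label, uncertainty = aggregate(labels)
        print(image, label, round(uncertainty, 2))

You could of course use something fancier (Krippendorff's alpha, a Dawid-Skene model), but even this crude disagreement score is per-classification uncertainty that a transfer-learned classifier will not hand you.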