Moats and walls are pretty good analogies here. The magnitude of the barrier provided by a moat was considerable in the 1300s. By the late 20th century, it was far less.

https://www.youtube.com/watch?v=bWMrY49qqDw

In the 11th century, a wooden palisade or an earthen berm fortification could be held for something like half a year. By the end of WWII, it constituted a delaying tactic.

https://en.wikipedia.org/wiki/Rhino_tank

A phase change happened in military tactics over the first half of the 20th century, when the power of mobile mechanized armor and air support greatly reduced the value of fortifications.

That said, I don't think moats are dead. It's just that the time-scales have changed.
At the PyData DE conference I just saw an excellent talk about GANs and data augmentation in image recognition:

https://www.slideshare.net/FlorianWilhelm2/performance-evaluation-of-gans-in-a-semisupervised-ocr-use-case

The authors were able to outperform Google ML by a large margin on a vision task that involved recognizing numbers from car registration documents. With just 160 manually collected training samples they were able to train a neural net that could recognize characters with 99.7% accuracy. Google ML performed very poorly in comparison, which I found very surprising because it didn't seem to be such a hard recognition task (clean, machine-written characters on a structured, green background).
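For a sense of how far a tiny labeled set can be stretched, here is a minimal sketch of a classic augmentation pipeline, assuming a PyTorch/torchvision setup; the talk's actual approach (GAN-generated samples) is not reproduced here, and the transform parameters are illustrative guesses.

    # Hypothetical augmentation pipeline for small character crops.
    # Applied on the fly each epoch, so 160 originals yield endless variants.
    import torchvision.transforms as T

    augment = T.Compose([
        T.RandomAffine(degrees=3, translate=(0.05, 0.05), scale=(0.9, 1.1)),  # slight misalignment
        T.ColorJitter(brightness=0.2, contrast=0.2),                          # scan/lighting variation
        T.GaussianBlur(kernel_size=3),                                        # mild sensor blur
        T.ToTensor(),
    ])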
I think you are overgeneralizing the applicability of Neural Architecture Search etc. and cherry-picking individual examples. There is an enormous gap between what gets published in academia and what's actually useful.

E.g., the compute wars have only intensified with TPUs and FPGAs. Sure, for training you might be okay with a few 1080 Tis, but good luck building any reliable, cheap, low-latency service that uses DNNs. Similarly, big data for academia is a few terabytes, but real big data is petabytes of street-level imagery, video/audio, etc.
Okay, I have a question about one of his assertions here:

> What may take a cluster to compute one year takes a consumer machine the next.

Is that not partly because the hardware is ever improving? I realize this is a bit of an exaggeration, but doesn't yesterday's cluster end up fitting onto the die of tomorrow's GPU? And since it's all on a single die, isn't the overhead of the interconnect drastically reduced? It takes less time to push information to the next core over when the interconnect is a couple of micrometers of silicon instead of the couple of meters of silicon, copper, and fiber needed when the next core is in the next rack over.

Certainly improving the model will help; who hasn't marvelled at how much better their code ran when they fixed that O(n^2) hot spot? But I can't help but think improving hardware plays a role too.

Am I off base here?
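A rough back-of-envelope on the distance point, counting propagation delay only (signals move at roughly 2/3 the speed of light in copper or fiber); in practice NICs, switches, and protocol overhead dominate off-chip latency, so the real gap is even larger. The distances below are made up but representative:

    # Propagation-delay-only comparison; distances are illustrative.
    signal_speed = 2e8                      # m/s, roughly 2/3 c in copper/fiber
    on_die  = 100e-6 / signal_speed         # ~100 micrometers of on-die wiring
    to_rack = 3.0    / signal_speed         # ~3 meters to the neighboring rack

    print(f"on-die:       {on_die * 1e12:.1f} ps")   # ~0.5 ps
    print(f"rack-to-rack: {to_rack * 1e9:.1f} ns")   # ~15 ns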
Two notes:
1. This article is not talking about how, for neural networks, you can just take pretrained networks (where the cost of compute and data is incurred upfront) and then use them to classify images or whatnot on your decades-old computer, correct? (See the sketch after these notes.)
2. Oftentimes, some problems are "solved" in the sense that they become irrelevant. Is that also the case here? It seems compute and data were the constraints, but the technology (algorithms) just got more efficient. Should we not reframe this and say that algorithms are the constraint, then, and that's what we should aspire to improve? Usually throwing more compute and data at a problem only yields marginal gains anyhow...
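Regarding note 1: here's a minimal sketch of what "pretrained network, cheap inference" looks like, assuming torchvision's off-the-shelf models (nothing specific to the article); the image filename is made up.

    # Sketch: inference with a pretrained network needs none of the training compute.
    # A single forward pass runs in milliseconds even on an old CPU.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(pretrained=True)   # weights trained on someone else's cluster
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("example.jpg")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    print(logits.argmax(dim=1))                # predicted ImageNet class index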
The real "moat" is more and better training data for commercially useful tasks.<p>You can write a lot of papers about Penn Treebank data but I can't imagine anything you do with Penn Treebank will be commercially useful.
I feel like we're getting these huge gains on tasks that can be made faster via better architectures, regularization, normalization, data augmentation, etc., so he's right.

I just wonder if it will ever feel this way for reinforcement learning.
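To make that list concrete, here is a hypothetical sketch showing where each of those levers sits in a supervised setup (the model and hyperparameters are arbitrary); part of the point is that each one is only a few lines:

    # Illustrative only: architecture, normalization, regularization, augmentation.
    import torch.nn as nn
    import torch.optim as optim
    import torchvision.transforms as T

    model = nn.Sequential(                      # small conv net (architecture choice)
        nn.Conv2d(3, 32, 3, padding=1),
        nn.BatchNorm2d(32),                     # normalization
        nn.ReLU(),
        nn.Dropout(0.3),                        # regularization
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),
    )
    optimizer = optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)  # weight decay: more regularization
    augment = T.Compose([T.RandomCrop(32, padding=4), T.RandomHorizontalFlip()])  # data augmentation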