A while ago there was an article about creating a "wasteland" of unprofitability <i>around</i> your core business so as to become the monopolist and extract the highest margin out of the industry.<p>The position of Intel/Nvidia is then quite simple: they will open-source any model, dataset, toolkit, library, etc. that makes use of their hardware. Training new AI will become simpler and simpler, and they will extract high margins from selling the hardware.<p>What about Google instead? They have the data and the engineering knowledge to make complex AI work; however, it seems quite unlikely that they will be able to drive the price of AI hardware down. Moreover, they charge quite a lot for the use of their custom TPUs.<p>From this analysis it seems like Google is bound to fail in the long term in the AI race.<p>Am I wrong? Why?
Looking forward to seeing the evaluation numbers!<p>I'm mostly curious about how their NER and parser compare against what I've implemented for <a href="https://spaCy.io" rel="nofollow">https://spaCy.io</a> . I've tried the architectures they're using, and I've found they need very wide (and therefore slow) hidden layers to get competitive accuracy.<p>I'm sure they have <i>some</i> evaluations, right? I mean you can't really develop these things without running experiments...
The more interesting announcement is that Intel Nervana, already postponed multiple times, has been pushed back yet again to "late 2019".<p>One theory is that Intel Nervana outperformed Nvidia Pascal but didn't outperform Nvidia Volta, so it couldn't be released.