It took me a while to figure out how it interfaces with the system (driver? dedicated application? just drop the model and data into a directory that appears on a mounted key?), so I'll post it here.<p>To access the device, you need to install an SDK which contains Python scripts that let you manipulate it (so it seems the driver is embedded in the utility programs). Source: <a href="https://developer.movidius.com/getting-started" rel="nofollow">https://developer.movidius.com/getting-started</a>
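<p>For anyone wondering what "manipulate it" looks like in practice, here's a rough sketch based on my reading of the NCSDK 1.x Python API (the module and call names below — mvnc.EnumerateDevices, AllocateGraph, LoadTensor, GetResult — are from that version and may differ in later releases; the 'graph' file is assumed to have already been produced from a trained Caffe model by the SDK's compiler tool):

    import numpy as np
    from mvnc import mvncapi as mvnc

    # Find and open the first attached Neural Compute Stick
    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a graph file previously compiled from a Caffe model by the SDK
    with open('graph', 'rb') as f:
        graph = device.AllocateGraph(f.read())

    # Run one inference: the input must be a float16 tensor matching the
    # network's input shape (224x224x3 here as a placeholder)
    img = np.random.rand(224, 224, 3).astype(np.float16)
    graph.LoadTensor(img, 'user object')
    output, _ = graph.GetResult()
    print(output.argmax())

    graph.DeallocateGraph()
    device.CloseDevice()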
> Movidius's NCS is powered by their Myriad 2 vision processing unit (VPU), and, according to the company, can reach over 100 GFLOPs of performance within a nominal 1W of power consumption. Under the hood, the Movidius NCS works by translating a standard, trained Caffe-based convolutional neural network (CNN) into an embedded neural network that then runs on the VPU.<p>This is sure to save me money on my power bill after marathon sessions of "Not Hotdog."
So what can you do with a deep-learning stick of truth?<p>EDIT: Looks like the explanation is in a linked article: <a href="https://techcrunch.com/2016/04/28/plug-the-fathom-neural-compute-stick-into-any-usb-device-to-make-it-smarter/" rel="nofollow">https://techcrunch.com/2016/04/28/plug-the-fathom-neural-com...</a><p><i>How the Fathom Neural Compute Stick figures into this is that the algorithmic computing power of the learning system can be optimized and output (using the Fathom software framework) into a binary that can run on the Fathom stick itself. In this way, any device that the Fathom is plugged into can have instant access to complete neural network because a version of that network is running locally on the Fathom and thus the device.</i><p>This reminds me of physics co-processors. Anyone remember AGEIA? They were touting "physics cards" similar to video cards. Had they not been acquired by Nvidia, they would've been steamrolled by consumer GPUs/CPUs, since they were essentially designing their own specialized processors anyway.<p>The $79 price point is attractive. I wonder how much power can be packed into such a small form factor? It's surprising that deep learning applications don't actually require that much power.
It's surprising how much attention this has had over the last few days without any discussion of the downside: it's slow.<p>It's true that it is fast for the power it consumes, but it is way (way!) too slow to use for any form of training, which seems to be what many people think they can use it for.<p>According to AnandTech[1], it will do 10 GoogLeNet inferences per second. By <i>very</i> rough comparison, Inception in TensorFlow on a Raspberry Pi does about 2 inferences per second[2], and I think I saw AlexNet on an i7 doing about 60/second. Any desktop GPU will do orders of magnitude more.<p>[1] <a href="http://www.anandtech.com/show/11649/intel-launches-movidius-neural-compute-stick" rel="nofollow">http://www.anandtech.com/show/11649/intel-launches-movidius-...</a><p>[2] <a href="https://github.com/samjabrahams/tensorflow-on-raspberry-pi/tree/master/benchmarks/inceptionv3" rel="nofollow">https://github.com/samjabrahams/tensorflow-on-raspberry-pi/t...</a> ("Running the TensorFlow benchmark tool shows sub-second (~500-600ms) average run times for the Raspberry Pi")
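<p>If anyone wants to sanity-check the inferences-per-second figure on their own stick, a crude timing loop along these lines would do it. Treat it as a sketch, not gospel: it assumes the NCSDK 1.x Python API and a GoogLeNet graph file already compiled with the SDK's tools.

    import time
    import numpy as np
    from mvnc import mvncapi as mvnc

    # Open the stick and load a GoogLeNet graph compiled with the SDK
    device = mvnc.Device(mvnc.EnumerateDevices()[0])
    device.OpenDevice()
    with open('graph', 'rb') as f:
        graph = device.AllocateGraph(f.read())

    # Time a run of single-image inferences on dummy float16 data
    img = np.random.rand(224, 224, 3).astype(np.float16)
    n = 100
    start = time.time()
    for _ in range(n):
        graph.LoadTensor(img, 'benchmark')
        graph.GetResult()
    print('%.1f inferences/sec' % (n / (time.time() - start)))

    graph.DeallocateGraph()
    device.CloseDevice()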