It is a small turnoff that you need to use their cloud model ‘compiler’, but I still think I might get the USB Dev device.<p>I am retiring in a couple of weeks from my job managing a machine learning team, and I intend to be a ‘gentleman scientist’, studying things of interest without worrying about immediate practicality. Of most interest to me is local ML using tensorflow.js and devices like the Edge TPU, and also hybrid symbolic AI / deep neural net systems.<p>Anyway, good to see competition for edge devices.
> Upload your model<p>> It should take about one minute for compilation to complete.<p>...also, it should take about six months for Google to lose interest in this product, at which point whatever product you built around the Edge TPU is stuck without updates.
Previous discussion: <a href="https://news.ycombinator.com/item?id=19130896" rel="nofollow">https://news.ycombinator.com/item?id=19130896</a><p>They mentioned previously that you had to compile your models on the cloud, and not locally on your computer. Not sure if they've changed this policy.
The baseboard/SOM split looks very well done. The module includes CPU, RAM, and eMMC in addition to the TPU, so a custom baseboard can be quite simple.
Lots of audio inputs, ready for microphone arrays.
Curious to see what role the M4F microcontroller will play. Hopefully it's there for sleep/low-power operation, where it can wake up the beefy CPU (and TPU) on demand.
I wish Google would create a development version of the TPU for inference, so that it's possible to debug models locally and then send them to GCP for training.
The Edge TPU devices that Google has been promising since last year are now available under a new brand called Coral. Would love to get one to compare to my Jetson TX2. The downside is that the unit can only use TensorFlow Lite.<p>E: Hah, seems like my topic got merged with this one. Interesting how I missed the OP's post by like a minute or two. Such a coincidence!
The datasheet says it features a "Cortex M4 with 16 KB of instruction cache and 16 KB of data cache". As far as I know, the M4 doesn't have L1 caches. Maybe they're using an M7? Or maybe there's simply no cache?<p><a href="https://coral.withgoogle.com/tutorials/devboard-datasheet/" rel="nofollow">https://coral.withgoogle.com/tutorials/devboard-datasheet/</a>
Interesting that it's Debian Linux support only for the peripherals. I'd be interested to see whether that support grows to other OSes, especially if it proves a barrier to adoption.<p>I'm not in the space per se, but what are the predominant OS choices for ML/AI devs?
They were handing the USB ones out today to attendees at the TensorFlow Dev Summit. I'll test mine later.<p>However I <i>really</i> wish they would make something beefier, to compete with e.g. Nvidia's Xavier.