These were nice early in the TensorFlow evolution, for things like Frigate...<p>But even CPU inference is both faster and more energy-efficient on a modern Arm SBC chip, and accelerators like the Hailo are far faster for a similar price if you have an M.2 slot.<p>I haven't seen a good USB alternative for edge devices, though.<p>The big problem is that Google seems to have let the whole thing stagnate since around 2019. They could have had some neat little 5/10/20 TOPS NPUs out cheap if they had kept developing this hardware ecosystem :(
Not sure if there is something new here, but it looks like the same product that has been around for a few years now (wasn't Coral released around 2019?).
I have a couple of these. Unfortunately, I've been waiting for the ecosystem to get better and run newer/improved models, to no avail. I attempted some YOLO ports (since the Coral requires a specific model architecture), and I'm not sure if I'm just bad at this or it's actually hard, but beyond the basic examples in Google's own ecosystem I wasn't able to run anything else on these. Seeing this on HN, I was hoping for an upgrade, but it seems to be the same old product.
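For context on why ports are painful: the Edge TPU only executes TFLite models that are fully integer-quantized (typically uint8) and then recompiled with Google's `edgetpu_compiler`; any op the compiler doesn't support falls back to the CPU. A minimal sketch of the affine quantization scheme such models use (pure Python, illustrative only; the scale and zero-point values below are made up):

```python
def quantize(x, scale, zero_point):
    """Map a float to uint8 via the affine scheme q = round(x/scale) + zp."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    """Recover the (approximate) float value from its uint8 code."""
    return (q - zero_point) * scale

# Example: with scale=0.05 and zero_point=128, 1.0 maps to 148
scale, zp = 0.05, 128
q = quantize(1.0, scale, zp)        # 148
x = dequantize(q, scale, zp)        # back to 1.0
```

Every tensor in the model needs a scale/zero-point pair like this, which is why a float YOLO checkpoint can't just be dropped onto the device: it has to survive full-integer conversion first, and custom heads often don't.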
This is way too outdated to be relevant in any way. Back in the day they had a board with a TPU on it, before everyone else did. That board ran object detection at a pretty good resolution at like 80 fps in a 2.5 W power budget. I still have that board in my drawer - I never did find any use for it at its price point. Plus, because it's Google, I expected they'd abandon the board within 2 years tops, which is exactly what happened. The board was like $100 IIRC, which was a good chunk of cash when a Raspberry Pi was like $25. Nowadays there are _dozens_ of Chinese boards available with on-chip TPUs. Tooling still sucks mightily, but that's expected when dealing with embedded systems. Unlike with the Google board, you can usually build your own Linux for these using Yocto or Buildroot with minimal tweaks.
FWIW, the M.2 and mini PCIe form factors are more cost-effective. I added one to the WiFi+Bluetooth slot of a refurbished Dell desktop to perform object detection in my CCTV NVR.<p><a href="https://frigate.video/" rel="nofollow">https://frigate.video/</a>
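If anyone wants to replicate this, the Frigate side is just a detector stanza in the config file. A sketch based on Frigate's documented `edgetpu` detector (the detector name `coral` is arbitrary; use `device: usb` for the USB stick instead):

```yaml
# Frigate config fragment for a PCIe/M.2 Coral
detectors:
  coral:
    type: edgetpu
    device: pci
```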
I have one of these. The USB model sucks: it overheats unless you put it in high-efficiency (low-performance) mode, which defeats the purpose.<p>The mini-PCIe variant is much more reliable, but I ended up ditching the Coral entirely and replacing it with a GTX 1060.
It seems more than dead, and it only supports small neural networks. Viable alternatives are Hailo and Axelera (<a href="https://www.axelera.ai" rel="nofollow">https://www.axelera.ai</a>), which is newer.
Maybe I’m stupid, but I couldn’t figure out how to set the pull-up or pull-down resistors on these boards. Maybe with LLMs I can figure this out now... Something to do with device tree configs?
A couple of questions for people who have been using it: where does this fit between a typical budget Arm Cortex CPU and a GPU? And what are the practical model sizes you could run on one of these?