Although it is priced competitively (roughly the same as the Nvidia Jetson Nano at $125), it seems underpowered when compared to the Nano. The Nano has 4GB RAM and 128 CUDA cores, can encode/decode 4K at 30/60 FPS, and can handle multiple streams, compared to the 15/15? of the BeagleBone.<p>Perhaps the Vision Engine is better for computer vision tasks, but having to use the TIDL suite, compared to the Jetson Nano's JetPack with tools we use regularly on bigger GPUs, is going to be a hard compromise to make.<p>JetPack includes CUDA 10, TensorRT, OpenCV 3.3.1, etc. by default, and PyTorch is available separately for the Jetson Nano. Besides, the community is very active.
Is it just me, or does 1GB of RAM seem a little low for a $100+ board? I can't seem to find what speed it is either.<p>I'm not expecting anything crazy like 8GB, but given how many boards sell at ~$50 with 4GB of RAM, this just seems kinda limited.
How does this compare to Nvidia's Jetson Nano (<a href="https://developer.nvidia.com/embedded/jetson-nano-developer-kit" rel="nofollow">https://developer.nvidia.com/embedded/jetson-nano-developer-...</a>, $99), which is cheaper and appears to be more powerful?<p>I used the BB in previous projects; one thing that definitely stands out for the BB is that it can be used as a product directly, with a case and some certifications (EMC, etc.). Nvidia's Nano is more of a development platform.<p>The BeagleBoard actually predates the RPi; after the Arduino, the BB is arguably the very first board running a 32-bit ARM that is also open source, cheap, and small. However, it has been overshadowed by the RPi in recent years.
Dual Cortex-A15, 2 DSPs, 4 Vision Engines, 4 real-time controllers (PRUs), 2 Cortex-M4s, 2D accelerators, dual 3D GPUs...<p>It's impressive, but being a pretty much domain-specific chip, can anyone make use of its capabilities at the hobbyist level the BeagleBone is targeting?
>low cost development board yet, and it’s driving a whole new AI revolution.<p>This press release is a disaster as far as grammar is concerned. I am legitimately unable to tell whether it has any special properties regarding AI.<p>And NO, it came out yesterday; it's not driving any revolutions.
I'm not overly familiar with TI's SoCs post-2010. Anyone out there with a good overview of what the Sitara AM5729 includes besides the bullet points in that piece?<p>And what about TIDL adoption? I've been working at the Intel/NVIDIA-grade end of the ML scale and have a few ESP32 boards to fiddle with OV2640 cameras, but very little in between except what Broadcom has been doing.
What's so "AI" about it? It doesn't even have a TPU. The Kendryte K210 has a fixed-point TPU, a 400MHz dual-core RISC-V with FPU, an 8-channel audio DSP, FFT and crypto acceleration, and costs $8.90 with wifi and $7.90 without. And the module is half the size of a postage stamp. It runs TensorFlow Lite (a subset of ops, but good enough to do practical things).
Can someone tell me if there are easy to use libraries that can speed up existing ML code, say written in Python, on this?<p>Or do we have to write custom C/C++ code to make best use of available hardware?
The press release could have used a real-life example, like "Training on the MNIST dataset takes 0.5 seconds" or something.<p>Where can I find info about how these edge computing boards speed up training time? Or how they compare to a 1080i?
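For what it's worth, a benchmark of that sort is easy to script yourself once a board runs Python. Here's a minimal sketch of such a timing harness, with a tiny pure-NumPy softmax-regression model on random data standing in for an actual MNIST setup (the model, data, and epoch count are all hypothetical stand-ins, not anything from the press release):

```python
import time
import numpy as np

# Random 28x28 inputs and labels standing in for MNIST.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 784)).astype(np.float32)
y = rng.integers(0, 10, size=1000)

def train_epoch(W, lr=0.1):
    """One full-batch gradient step of softmax regression."""
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0               # softmax cross-entropy gradient
    return W - lr * (X.T @ p) / len(y)

W = np.zeros((784, 10), dtype=np.float32)
start = time.perf_counter()
for _ in range(10):
    W = train_epoch(W)
print(f"10 epochs: {time.perf_counter() - start:.3f}s")
```

Running the same script on a board and on a desktop GPU box (swapping NumPy for whatever framework the board accelerates) gives exactly the kind of comparison number the release is missing.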