I am Prathamesh, co-founder of https://nanonets.com

We were working on a project to detect objects using deep learning on a Raspberry Pi, and we benchmarked various deep learning architectures on the Pi. With ~100-200 images, you can create a detector of your own with this method.

In this post, we detect vehicles in Indian traffic using the Pi. We have also added GitHub links to the code for training the model on your own dataset, plus a script to run inference on the Pi. Hope this helps!
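If you want a feel for the on-device half before clicking through, inference with a frozen graph roughly looks like the minimal sketch below (TensorFlow 1.x style). The graph file name and tensor names here are the TF Object Detection API's defaults, so treat them as assumptions and check the repo's script for the specifics:

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Hypothetical path following the TF Object Detection API's export
    # conventions -- not taken verbatim from the linked repo.
    GRAPH_PATH = 'frozen_inference_graph.pb'

    # Load the frozen graph once at startup; this is the slow part on a Pi.
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    with tf.Session(graph=graph) as sess:
        # Model expects a uint8 batch of shape [1, height, width, 3].
        frame = np.expand_dims(np.array(Image.open('frame.jpg')), axis=0)
        boxes, scores = sess.run(
            ['detection_boxes:0', 'detection_scores:0'],
            feed_dict={'image_tensor:0': frame})
        for box, score in zip(boxes[0], scores[0]):
            if score > 0.5:  # keep only confident detections
                print('vehicle at', box, 'score', score)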
How to draw an owl, yeah
http://i0.kym-cdn.com/photos/images/original/000/572/078/d6d.jpg
Maybe I’m missing something, but doesn’t this blog post conclude with a service that does inference off-device? Why explain all of the steps to run inference on-device if you’re offering an API for cloud inference?
The two pricing tiers for the hosted API don't seem practical for real usage:

$0 for 1,000 slow API calls

$79 for 10,000 fast API calls

To put that into perspective, 10k API calls is less than 10 minutes of 24 fps video. You should offer a much higher tier or a pay-per-request overage price.
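The arithmetic behind that, assuming one API call per video frame:

    calls = 10000            # API calls in the $79 tier
    fps = 24                 # frames per second of video
    minutes = calls / (fps * 60.0)
    print(minutes)           # ~6.9 minutes of video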
I like the direction you are headed. Considering that the use of ASICs is going to rise, I think you should consider local installs through Docker (like machinebox.io) or another technique.
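From the client side that could look something like this; the port, route, and response shape are made up purely for illustration, since machinebox and similar tools each define their own APIs:

    # Hypothetical client for a locally run model container.
    import requests

    with open('frame.jpg', 'rb') as f:
        # Assumed endpoint on a container published at localhost:8080.
        resp = requests.post('http://localhost:8080/detect',
                             files={'file': f})
    print(resp.json())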
Also, federated learning would be the next thing to take on.
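The core of it is small; here is a toy federated-averaging step, purely illustrative and not anything Nanonets offers today:

    # Toy FedAvg: each device trains locally and only weight updates are
    # averaged on the server, so raw images never leave the Pi.
    import numpy as np

    def fed_avg(client_weights, client_sizes):
        # Average per-client weight vectors, weighted by the amount of
        # local data each client trained on.
        total = float(sum(client_sizes))
        return sum(w * (n / total)
                   for w, n in zip(client_weights, client_sizes))

    # Three simulated devices; in the real setting each vector would be
    # the model weights after a round of local training.
    clients = [np.random.randn(4) for _ in range(3)]
    sizes = [100, 200, 50]
    print(fed_avg(clients, sizes))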
I am very skeptical about pretraining, which seems to be the key point of Nanonets. Sure, it will work better than initialization from random weights, but you will always do better if you collect more data for your problem. This may be fine for problems that do not need optimal classification and fast performance, but I am struggling to see any use case for that.
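To be concrete, the knob in question looks like this in, e.g., Keras (a generic sketch, not Nanonets' actual pipeline); my point is that with enough of your own data, weights=None catches up:

    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import MobileNetV2

    def build(num_classes, pretrained=True):
        # weights='imagenet' is the pretrained start; weights=None is the
        # random initialization I'd compare it against.
        base = MobileNetV2(weights='imagenet' if pretrained else None,
                           include_top=False, input_shape=(224, 224, 3),
                           pooling='avg')
        base.trainable = not pretrained  # freeze pretrained features at first
        return models.Sequential(
            [base, layers.Dense(num_classes, activation='softmax')])

    # With ~100-200 images the pretrained start usually wins; with enough
    # task-specific data, training from scratch can catch up or do better.
    model = build(num_classes=5, pretrained=True)
    model.compile(optimizer='adam', loss='categorical_crossentropy')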