Totally. With computer vision especially, you often want to deploy the model on an edge device with limited bandwidth, power, and footprint (think remote camera monitoring, drones, robots, and autonomous vehicles).

Requiring a huge desktop or server-grade graphics card (much less a box full of many of them) to fit the model into memory misses the mark.

We've done a lot of work getting models to be performant on the Luxonis OAK (OpenCV AI Kit) and NVIDIA Jetson devices.
I've had several discussions with friends about how I think AI should be more modular. You could have a general model which, e.g., classifies an object as a "fruit", then passes it off to a separate, more specialised model which could classify that fruit as a "banana".

This way you can improve your fruit classifier without needing to change the general classifier. It also opens up possibilities like having a general "offline" model on a smartphone that, when connected to the internet, can call on more specialised models.

It would also be cool if you could download offline models for things you're particularly interested in, like bird species.

I think one of the problems with a really large AI that attempts to classify everything is a sort of "tunnel vision" problem: eventually the AI has to guess what something is instead of saying "best I can do is that this is an animal, but let me go ask a buddy of mine who's an expert on animals".
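For what it's worth, the routing logic here is pretty simple to sketch. This is a hypothetical illustration (the model functions are stand-ins, and the confidence threshold is made up), but it shows the idea: a coarse classifier hands off to a registered specialist when it has one, and otherwise just returns its coarse label as the honest "best I can do" answer.

```python
# Two-stage "modular" classification sketch. The classifier functions
# below are placeholders for real models; each returns (label, confidence).

def general_classifier(image):
    # Stand-in for a coarse, always-available model (e.g. on-device).
    return ("fruit", 0.91)

def fruit_specialist(image):
    # Stand-in for a fine-grained model trained only on fruit.
    return ("banana", 0.97)

# Registry of specialists keyed by coarse label. New specialists
# (e.g. a downloaded "bird species" model) can be added here without
# touching the general classifier at all.
SPECIALISTS = {"fruit": fruit_specialist}
THRESHOLD = 0.5  # arbitrary cutoff for this illustration

def classify(image):
    coarse_label, coarse_conf = general_classifier(image)
    specialist = SPECIALISTS.get(coarse_label)
    if specialist is not None and coarse_conf >= THRESHOLD:
        fine_label, fine_conf = specialist(image)
        if fine_conf >= THRESHOLD:
            return fine_label
    # No specialist available (or it wasn't confident): report the
    # coarse label instead of guessing a fine-grained one.
    return coarse_label

print(classify("photo.jpg"))  # -> banana
```

The nice property is that "improving your fruit classifier" is just swapping one entry in the registry, and an offline device simply runs with a smaller registry.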