Not trying to nitpick here, but does anyone find it strange that a webpage/website like this doesn't have a graph of the data, just a poorly designed table?

Is this a trend I've missed … some sort of post-D3 nadir of the data-vis hype curve, where graphs are a cringey thing only SEO'd clickbait articles or news pages do?
CoreML is really very good, both on its own and through its tools for importing models from other platforms and compressing them. I wrote a book on Swift earlier this year and added a few CoreML examples (https://leanpub.com/SwiftAI). Google provides something similar.

Federated privacy-preserving learning, local models, etc. all help keep your private data on your devices. Good stuff.
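For anyone curious what that looks like in practice, here's a minimal sketch of running an imported classifier through Core ML and Vision. The model file name and image path are placeholders I made up, not something from the book:

    import CoreML
    import Vision
    import Foundation

    do {
        // "ObjectClassifier.mlmodelc" is a hypothetical compiled model,
        // e.g. one converted from PyTorch/TF with coremltools.
        let modelURL = URL(fileURLWithPath: "ObjectClassifier.mlmodelc")

        let config = MLModelConfiguration()
        config.computeUnits = .all  // let Core ML use CPU, GPU, or Neural Engine

        let vnModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL,
                                                     configuration: config))

        // Classify a single image and print the top label.
        let request = VNCoreMLRequest(model: vnModel) { request, _ in
            if let top = (request.results as? [VNClassificationObservation])?.first {
                print("\(top.identifier): \(top.confidence)")
            }
        }
        try VNImageRequestHandler(url: URL(fileURLWithPath: "cat.jpg"))
            .perform([request])
    } catch {
        print("Core ML error: \(error)")
    }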
A bit of a tangent, but where are we at when it comes to energy efficiency in AI?

Suppose I had one or two cameras attached to a computer and ran software that detects which object I'm pointing at and names it. How much energy would that use?

The human brain would probably need around 0.5–1 s to come up with an answer, consuming around 5 milliwatt-hours of energy in that time.

How much energy would the computer need to at least give it a fair shot compared to the human?

If we assume that a human is pretty close to the best theoretically achievable limit of overall usefulness vs. energy usage (while, unlike current AI, being able to learn ad hoc, self-correct, and maintain itself), "work per watt" may give us an idea of how advanced our current technology really is compared to what already existed, and how far we can still go.
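For reference, that 5 mWh figure follows from the usual ~20 W whole-brain estimate. Here's the back-of-the-envelope math, with an assumed phone-SoC draw thrown in for comparison (the wattages are rough assumptions, not measurements):

    // Back-of-the-envelope comparison; wattages are assumptions.
    let brainPower = 20.0   // W, common whole-brain estimate
    let answerTime = 1.0    // s, time to name the object
    let brainEnergy = brainPower * answerTime        // 20 J
    print(brainEnergy / 3.6, "mWh for the brain")    // ≈ 5.6 mWh (1 mWh = 3.6 J)

    let socPower = 5.0      // W, assumed draw of a phone SoC running a small detector
    let socEnergy = socPower * answerTime            // 5 J
    print(socEnergy / 3.6, "mWh for the SoC")        // ≈ 1.4 mWh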
To truly understand perf, you'd ideally compare many model types and sizes. I suspect some model types benefit substantially more from the newer ANE / OS than others.
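A minimal sketch of such a comparison, timing one model under each compute-unit setting; the model path and input are placeholders, and you'd loop this over a whole zoo of models:

    import CoreML
    import Foundation

    do {
        let modelURL = URL(fileURLWithPath: "model.mlmodelc")  // placeholder path

        for units in [MLComputeUnits.cpuOnly, .cpuAndGPU, .all] {
            let config = MLModelConfiguration()
            config.computeUnits = units
            let model = try MLModel(contentsOf: modelURL, configuration: config)

            // Placeholder: fill with real values for the model's input schema.
            let input = try MLDictionaryFeatureProvider(dictionary: [:])

            let start = Date()
            for _ in 0..<100 { _ = try model.prediction(from: input) }
            print(units, Date().timeIntervalSince(start) / 100.0, "s / inference")
        }
    } catch {
        print(error)
    }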