The CEO butting in all the time was really annoying. I had a business partner who did this in meetings all the time - it's fucking annoying and rude.
On the surface I have no idea whether this is groundbreaking or not; my first thought was, ahh, nVidia using Linux!
Here's how to do the street sign part of this yourself: <a href="https://gist.github.com/iandees/f773749c47d088705199" rel="nofollow">https://gist.github.com/iandees/f773749c47d088705199</a>
Cool demo, but I still wonder if this is fundamentally just a brute-force approach. Wouldn't it be better to do some traditional preprocessing (e.g. recognizing rectangles, circles, etc.) and feed higher-level descriptors into the classifier?<p>If the net learns from raw pixels you still have to solve rotation and scale invariance somehow. Or is there something new in deep learning vs. old-school neural nets that fixes the issues that bedeviled neural nets the first time they were popular? A rough sketch of what I mean by "higher-level descriptors" is below.
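<p>For what it's worth, here's a minimal sketch of that hand-engineered-descriptor idea, assuming OpenCV 4 and scikit-learn; the thresholds, the SVM choice, and the train_images/labels names are purely illustrative, nothing from the talk:<p>    # Shape descriptors from a cropped, grayscale sign image (assumes OpenCV 4).
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def shape_descriptor(gray_img):
        # Binarize and pick the dominant contour of the cropped sign.
        _, thresh = cv2.threshold(gray_img, 0, 255,
                                  cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        c = max(contours, key=cv2.contourArea)
        # Hu moments are invariant to translation, scale and rotation,
        # which is exactly the invariance problem mentioned above.
        hu = cv2.HuMoments(cv2.moments(c)).flatten()
        hu = np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # log scale for stability
        # Vertex count of a polygonal approximation: ~3 = triangle,
        # ~4 = rectangle, many vertices = roughly circular.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        return np.append(hu, len(approx))

    # Hypothetical usage with some labeled sign dataset:
    # X = np.array([shape_descriptor(img) for img in train_images])
    # clf = SVC().fit(X, labels)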
@10:08<p>On the right, a Merc SLS classified as an SUV.<p>On the left, one SUV classified as two VANs.<p>Their algorithm runs at about 1 Hz when doing signs. This is roughly the state of the art from 20 years ago, but running on a small mobile SoC at a slow rate.