I successfully designed a commercial LPR system using non-ML methods that handles all those kinds of weird real-world cases that never get mentioned in papers about LPR and OCR.<p>Check out some of the hard cases here: <a href="https://waysight.com/lpr-technology/#examples" rel="nofollow">https://waysight.com/lpr-technology/#examples</a><p>I couldn't use neural networks at the time, because 1) it had to run on a single-core 200 MHz ARM at 20 fps, and 2) it took too long to "debug" the NNs (often you want to improve performance, but the failure cases seem fine to you).<p>The resulting performance of this system was, and still is, state of the art: in practice it captures everything correctly, so there is no point wasting cycles on an NN solution.<p>On the other hand, starting from scratch, an NN solution can let you advance quickly with little domain knowledge.<p>I'd like to repeat a word of caution other posters mentioned as well: it takes years to collect the training data required for successful commercial LPR, both for NN and non-NN methods (you need the data for regression tests even if you don't need it for training), to get plates from all seasons, weathers, and vehicle conditions.<p>Interestingly, now almost 10 years later, when I look at more modern deep NNs and analyse their initial stages with modern tools, I see a lot of similarity to my original "standard" algorithms. In essence, the code I wrote did the same thing a modern convolutional deep NN would learn to do.<p>In particular, see this recent analysis of how (some) convnets learn:<p><a href="https://www.lyrn.ai/2019/02/14/bagnet-imagenet-with-a-simple-bof-model/" rel="nofollow">https://www.lyrn.ai/2019/02/14/bagnet-imagenet-with-a-simple...</a><p>As it turns out, that is really similar to how '80s- and '90s-style classic OCR algorithms work!
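<p>To make the "same thing a convnet would learn" point concrete, here is a minimal sketch (my own illustration, not the author's actual code): a hand-coded oriented-edge filter of the kind used in classic '80s/'90s OCR preprocessing. Trained convnets typically end up with very similar oriented-edge kernels in their first layer, so this is the hand-written analogue of what layer one learns.

<pre><code>import numpy as np

# A classic hand-designed vertical-edge kernel (Sobel-x). First-layer
# convnet filters learned on natural images often look much like this.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Naive 'valid' cross-correlation of a single-channel image with one
    kernel -- exactly what one filter of a conv layer computes."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Synthetic "plate-like" patch: dark background with a bright vertical
# stroke, like one side of a character on a license plate.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0

response = convolve2d(patch, SOBEL_X)
# The filter responds only where the vertical edge sits; flat regions
# produce zero. Classic OCR pipelines thresholded such responses to find
# character strokes, much as a convnet's later layers consume them.</code></pre>

<p>The point of the sketch is just that the primitive is identical: whether the 3x3 weights are written by hand or learned by gradient descent, the first stage of both pipelines is a bank of small oriented filters slid over the image.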