I ran this same image through our face detector, with results I could easily predict: http://demo.pittpatt.com/detection_demo/view.php?id=JSCV5AH9258041 We don't find any of them (blue and yellow are low-confidence faces). We do "well" by our definition, terribly by the writer's.

So, problem #1: using hand-drawn faces (these are fairly stylized) is a really bad way to test a face detector. No one "in the real world" wants to detect or recognize hand-drawn faces, so no one trains on hand-drawn faces. We specifically err on the side of excluding stylized faces from the faces category (though most frequently these would be "don't-cares").

Problem #2: using a single image is actually a bit misleading, because if you really want to "test" methods for thwarting detection, you need to use video. The slight frame-to-frame variations in pose and lighting make it much easier to pick up the face and filter out the misses (rough sketch at the end of this comment).

Lastly, if you wanted to "defeat" a dystopian mass-surveillance system, you don't want to prevent detection, but recognition. It's far easier. To prevent good recognition over a huge dataset (i.e., the population of the world), you just need to "remove" information from your face. Wear big ol' sunglasses and a hat. Far more effective, far less conspicuous.

(edit to add: if you click 'thesis' at the top and read the top few entries, he shows a lot more about how Viola-Jones and Haar wavelets are used to find face regions. It's an interesting visualization. It also does a great job of explaining why the makeup trick works.)
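For anyone who wants to poke at this themselves: OpenCV ships a stock Viola-Jones detector, so a minimal single-image test looks roughly like the sketch below. The image filename is made up, and the parameters are just OpenCV defaults, not anything from our pipeline:

    import cv2

    # OpenCV bundles pre-trained Haar cascades; this is the stock
    # frontal-face one, i.e. a plain Viola-Jones detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("cv_dazzle_test.jpg")  # hypothetical test image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # detectMultiScale slides Haar-feature windows over an image pyramid.
    # minNeighbors is the crude confidence knob: raise it and the
    # low-confidence detections (the blue/yellow boxes above) drop first.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(30, 30))
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", img)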
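And to make problem #2 concrete, here's the kind of temporal filtering I mean, roughly sketched: run the detector per frame and only trust boxes that recur across recent frames. One lucky pose or lighting change re-acquires the face; one missed frame doesn't hide it. The clip name and thresholds are invented for illustration:

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture("walking_past_camera.mp4")  # hypothetical clip

    N, K = 10, 3      # trust a box seen in >= 3 of the last 10 frames
    history = []      # one detection list per recent frame

    def overlaps(a, b):
        # Loose intersection test on (x, y, w, h) boxes.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        dets = list(cascade.detectMultiScale(gray, 1.1, 3))
        history = (history + [dets])[-N:]
        # Keep a current-frame box only if overlapping boxes showed up
        # in at least K of the last N frames; spurious one-frame
        # detections get filtered, intermittently detected real faces
        # survive.
        stable = [d for d in dets
                  if sum(any(overlaps(d, p) for p in past)
                         for past in history) >= K]
        print("stable faces this frame:", len(stable))

    cap.release()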