Feeding doctored images, grainy social media pics of people who look "kinda similar," and other garbage data into these tools just to generate a list of possible suspects is a surefire way to produce garbage matches (i.e. non-matches).<p>I don't see how this is any better than putting the names of everyone with the same demographic info into a hat and drawing a handful.<p>I guess incompetence might be the thing that saves us from a truly effective Orwellian state.
I had thought they were going to say that they use celebrity images to train the systems, because celebrities are an <i>incredible</i> training set for facial recognition.<p>Consider: you want to train a one-shot matching system that takes photos from crime scene cameras and searches a database of driver's licence photos. You need a training set of many different images of the same person, and then lots of those sets for different people. Boom: celebrities! They have thousands of photos from thousands of angles. Poorly taken paparazzi shots. Stills from films at a zillion different angles. And some nice, face-on shots much like a driver's licence photo. (Heck, if it's the police, they have access to that celebrity's actual DL photo.)<p>The one nice side effect of training that way is that most celebrities are really good-looking people, so only the really pretty criminals will be caught, thanks to training bias.
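[Editor's note: a minimal sketch of how such a same-person/different-person training set might be assembled, assuming a hypothetical directory layout of photos/&lt;celebrity&gt;/*.jpg; the pairing logic is the point, not any particular face model.]

    import itertools
    import random
    from pathlib import Path

    def build_pairs(photo_root: str, negatives_per_person: int = 10):
        """Build (image_a, image_b, same_person) pairs from a folder of
        celebrity photos laid out as photo_root/<person>/<image>.jpg.
        Positive pairs: two photos of the same celebrity (e.g. a paparazzi
        shot and a face-on portrait). Negative pairs: photos of two
        different celebrities."""
        people = {
            p.name: sorted(p.glob("*.jpg"))
            for p in Path(photo_root).iterdir() if p.is_dir()
        }
        names = list(people)
        pairs = []
        for name, photos in people.items():
            # Every combination of two photos of the same person is a positive pair.
            for a, b in itertools.combinations(photos, 2):
                pairs.append((a, b, 1))
            # Sample a few photos of other people as negative pairs.
            for _ in range(negatives_per_person):
                other = random.choice([n for n in names if n != name])
                pairs.append((random.choice(photos),
                              random.choice(people[other]), 0))
        random.shuffle(pairs)
        return pairs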
While I've never been detained, I've been around police enough to know that they generally identify the likely suspects immediately after many crimes (e.g., turning in a statement after an SF Mission smash-and-grab; talking with a friend about a -significant- tool theft from their barn). I suspect the reason this celebrity-photo thing is happening is that the police already know their primary suspect and are looking for "modern", crappy supporting evidence. It's like the JS/PHP of police work...
I cannot fathom how they put obvious mistakes in the training material (e.g. the hairline in the last example). How are they selling something when their own training material shows it's total bunk?<p>The time and money spent on that could be spent on actual police work...
Fantastic, because I look like the literal twin of a very famous actor who appears in multiple current blockbusters. It's not that either of us is especially good looking; we just have the sort of face that's bland enough to take on many different looks.
It will be interesting to see how facial recognition is treated as evidence. Things like: will judges grant warrants based on an 80% match but not a 70% match? Will juries consider it more or less reliable than identification by eyewitnesses? Will defense experts be able to cast doubt on certain algorithms or techniques versus others?
I disagree that this is, per se, a violation of due process.<p>As long as the matches are not being used as evidence at trial or as evidence to get warrants, the police can generate their list of leads by consulting a hamster or by pulling names out of a hat, right? Or is there/should there be a right not to be considered a suspect on foolish grounds?<p>Also, maybe these techniques are useful, in the sense that they allow police to use a bad image to reduce a pool of suspects to a manageable pile that can be examined by hand.
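[Editor's note: a sketch of what that kind of lead-generation narrowing looks like in practice, assuming you already have face embeddings for a probe image and for a database of licence photos; the function and variable names here are hypothetical.]

    import numpy as np

    def shortlist_candidates(probe: np.ndarray,
                             database: np.ndarray,
                             ids: list[str],
                             k: int = 50) -> list[tuple[str, float]]:
        """Return the k database identities whose embeddings are closest
        to the probe embedding (cosine similarity). This produces *leads*
        for a human to examine by hand, not matches: with a bad probe
        image, the top of the list can easily be garbage."""
        probe = probe / np.linalg.norm(probe)
        db = database / np.linalg.norm(database, axis=1, keepdims=True)
        sims = db @ probe
        top = np.argsort(sims)[::-1][:k]
        return [(ids[i], float(sims[i])) for i in top]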
This reminds me of the common practice of using drug-sniffing dogs not to actually sniff out drugs but as a pretext for a search.<p>I think there should be scrutiny, and maybe some additional rules, around establishing probable cause with what amounts to a divining rod.
Considering that all the research points to image recognition having basically no robustness to noise, I imagine the path here is going to be to at least block its use in court, if not to bar it from generating the lead to begin with.