I haven't read the paper, and I don't really know much about ML, but this part stuck out to me from the abstract:

> All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic.

I realize the authors are intentionally skirting this bit (it's not really the point of their paper), but the "problem" isn't that some physical features may indicate criminality with some level of success. That's cool or whatever, I guess, but hardly an issue, and hardly revolutionary from a social perspective (in person, people read "vibes" rather well, and bad vibes come from things like body language and other visual cues; humans have their own inference systems for this, flawed as they are).

No, the problem -- the "controversy" surrounding the topic -- is, IMO, that with almost 100% certainty, any implementation of this system will be left completely unchecked, will effectively be private, and will be unaccountable by any practical means.

Do the authors of this paper really think any implementation of this system would be open to the public in any accountable way if used by, say, LEOs? As opposed to a big "every-criminal.sql" dump, built on hoarded data mining, driven by proprietary algorithms, and sold to governments by some company? LEOs in places like the US have already shown their hand with strategies like parallel construction and an outright willingness to fabricate evidence out of thin air.

Really, who cares what some data science nerds think of their fancy criminal-face models, or whether they consider them "accurate despite the controversy", when the police can just say "It's accurate, I say so, you're going to jail" and make shit up to support it? It's not a matter of whether the thing is actually accurate; it's whether it gives them a reason to do whatever they like.

It reminds me of the rhetoric around building a wall along the Mexican border during the election. That can't happen. Who would build it? It'd be huge. Hard. Realistically? It'd be "easy". Humans have been building walls for a long, long time. It's not unthinkable. The difficult part is murdering the people who try to cross the wall by gunning them down -- and they will try to cross. You probably don't have to kill *too many* people to send the message. Just enough of them. The Iron Curtain was a real thing, too, after all.

This is similar. The algorithm is the "easy" part. It's "only" some science. No, the hard part is dealing with the consequences. The hard part is closing Pandora's box after you've opened it.