[I focus only on the performance of the facial recognition system, not on whether it's a good idea to have such systems in place.]<p>From a systems-performance point of view, this is a non-story. If you deploy a detection system, you have to make a trade-off between precision (how many false positives you will have) and recall (how many false negatives, i.e. misses, you will have) [1]. How you make this trade-off depends on the cost of a false positive and the cost of a false negative.<p>For example, if a false positive means that someone points a camera at that part of the stadium, someone manually checks whether the match is correct [2], and only then makes an arrest, then 2000 false positives might be good performance as long as recall is good, because a false positive has no negative impact beyond some wasted effort by the police.<p>On the other hand, if an AI match were used as evidence in court, the same error rate would be terrible, as it would cause far more harm than good. From the article, it seems the police are using the system much more like the first case than the second, so its performance may well be fine, and the article really is a non-story.<p>[1] I'm simplifying "precision" and "recall" here.
[2] Or send more security guards / police to that area, or whatever other discreet measure.
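To make the precision/recall trade-off above concrete, here is a minimal sketch using the figures mentioned elsewhere in this thread (2000 false positives against 500 genuine matches). Recall is deliberately left uncomputed, since the article gives no count of the wanted individuals the system missed:

```python
def precision(true_positives, false_positives):
    # Of everyone the system flagged, what fraction was actually wanted?
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    # Of everyone who was actually wanted, what fraction did the system flag?
    return true_positives / (true_positives + false_negatives)

# Figures discussed in this thread: 500 genuine matches, 2000 false positives.
print(precision(500, 2000))  # 0.2
# recall(500, ???) -- the number of misses (false negatives) is unknown,
# so recall cannot be computed from the article's figures.
```

A precision of 0.2 sounds bad in isolation, but as the comment argues, whether it is acceptable depends entirely on what a false positive costs in practice.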
As long as the data is used as one of many guiding sources, scrutinised by human officers before individuals are treated as suspects and acted upon, I personally don't see a problem with the inaccuracy (which will improve over time).
While the article does go into detail about the accuracy of the deployed systems, it's both sad and worrying that the counter-arguments to mass facial recognition are reduced to a single token quote.
The algorithm isn’t arresting people. It’s cutting the suspect pool down from 17000 to 2500. The fact that 2000 of those are innocent while 500 are criminals isn’t a damning issue; it’s just a point where the system could improve. Without it, the 500 people would not have been identified at all.
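The cost argument in the earlier comment can also be sketched numerically. The per-unit costs below are entirely hypothetical (the article gives none); the point is only that the trade-off is a comparison of expected costs, not a raw error count:

```python
# Hypothetical, assumed costs -- NOT from the article.
COST_FALSE_POSITIVE = 1     # e.g. officer-minutes for one manual camera check
COST_FALSE_NEGATIVE = 1000  # e.g. harm of one wanted person going unnoticed,
                            # in the same (made-up) units

def expected_cost(false_positives, false_negatives):
    # Total cost of running the system at a given operating point.
    return (false_positives * COST_FALSE_POSITIVE
            + false_negatives * COST_FALSE_NEGATIVE)

# With the thread's 2000 false positives and, say, 50 hypothetical misses:
print(expected_cost(2000, 50))  # 52050? no -- 2000*1 + 50*1000 = 52000
```

Under these assumed numbers, almost all of the cost comes from misses, which is exactly why a low-precision, high-recall operating point can be the rational choice when false positives are cheap to screen out.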