Facial Recognition Leads To False Arrest Of Black Man In Detroit

661 points, by vermontdevil, almost 5 years ago

49 comments

ibudiallo, almost 5 years ago

Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black, your terrorist score is 52%. A police car is dispatched.

Most of us here can be excited about facial recognition technology and still know that it's not something to be deployed in the field. It's by no means ready. We might even weigh the ethics before building it as a toy.

But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow anything up._ It is sold as an ultimate decision maker.

[1]: https://news.ycombinator.com/item?id=21339530

danso, almost 5 years ago

This story is really alarming because, as described, the police ran a face recognition tool on a frame of grainy security footage and got a positive hit. Does this tool give any indication of a confidence value? Does it return a list of possible suspects (sorted by confidence), or any other kind of feedback that would indicate, even to a layperson, how much uncertainty there is?

The issue of face recognition algorithms performing worse on dark faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?
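For context, here is a minimal sketch of how a typical embedding-based recognition pipeline ranks candidates. Everything here (the embedding dimension, gallery size, and score scale) is an illustrative assumption, not a description of the tool Detroit actually used:

```python
import numpy as np

def rank_candidates(probe, gallery, names, top_k=5):
    """Rank gallery face embeddings by cosine similarity to a probe embedding.
    Note: raw similarity scores are not calibrated probabilities of identity."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery @ probe                  # cosine similarity per gallery entry
    order = np.argsort(sims)[::-1][:top_k]  # best matches first
    return [(names[i], float(sims[i])) for i in order]

# Toy data: 512-dim embeddings for a 1,000-person gallery (random, for illustration).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512))
names = [f"person_{i}" for i in range(1000)]
probe = rng.normal(size=512)

for name, score in rank_candidates(probe, gallery, names):
    print(f"{name}: similarity {score:.3f}")
```

Even on this toy data, the top few scores tend to sit close together, which is exactly the kind of ambiguity an investigator would need surfaced rather than a single "hit".
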
mnw21cam, almost 5 years ago

This is a classic example of the false positive rate fallacy.

Let's say that there are a million people, and the police have photos of 100,000 of them. A crime is committed, they pull the surveillance footage, and they match it against their database. They have a funky image matching system with a false positive rate of 1 in 100,000 people, which is *way* more accurate than I think facial recognition systems are right now, but let's just roll with it. Of course, on average, this system will produce one positive hit per search. So the police roll up to that person's home and arrest them.

Then, in court, they get to argue that their system has a 1 in 100,000 false positive rate, so there is a 1 in 100,000 chance that this person is innocent.

Wrong!

There are ten people in the population of 1 million for whom the software would comfortably produce a positive hit. They can't all be the culprit. The chance that the person is innocent isn't 1 in 100,000 - it is in fact at least 9 out of 10. This person just happens to be the one of those ten who had the bad luck to be stored in the police database. Nothing more.
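Spelling out the arithmetic in the comment above (same illustrative numbers):

```python
population = 1_000_000    # total people
database   = 100_000      # photos the police hold
fpr        = 1 / 100_000  # false positive rate per comparison

# Expected false hits when one search compares against the whole database:
false_hits_per_search = database * fpr   # = 1.0, one innocent hit per search

# Across the full population, about ten people would trigger a match:
lookalikes = population * fpr            # = 10.0

# So even if the true culprit is among those ten, the probability that the
# person who happened to be in the database is the culprit is roughly:
p_culprit_given_hit = 1 / lookalikes     # = 0.1, not 0.99999
print(false_hits_per_search, lookalikes, p_culprit_given_hit)
```
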
ghostpepper, almost 5 years ago

He wasn't arrested until the shop owner had also "identified" him. The cops used a single frame of grainy video to pull his driver's license photo, then put that photo in a lineup and showed it to the store clerk.

The store clerk (who hadn't witnessed the crime and was going off the same frame of video fed into the facial recognition software) said the driver's license photo was a match.

There are several problems with the conduct of the police in this story, but IMHO the use of facial recognition is not the most egregious.
js2, almost 5 years ago

> "I picked it up and held it to my face and told him, 'I hope you don't think all Black people look alike,'" Williams said.

I'm white. I grew up around a sea of white faces. Often when watching a movie with a cast of non-white faces, I have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.

So unfortunately, yes, I probably do have more trouble distinguishing one black face from another than one white face from another.

This is known as the cross-race effect, and it's only something I became aware of in the last 5-10 years.

Add to that the fallibility of human memory, and I can't believe we still use lineups. Are there any studies on how often lineups identify the wrong person?

https://en.wikipedia.org/wiki/Cross-race_effect
Anthony-G, almost 5 years ago

There is just so much wrong with this story. For starters:

The shoplifting incident occurred in October 2018, but it wasn't until March 2019 that the police uploaded the security camera images to the state image-recognition system, and the police then waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they might have been doing on a particular day 15 months previously. Though, as it turns out, the NPR report states that the police did not even try to ascertain whether or not he had an alibi.

Also, after 15 months, there is virtually no chance that any eyewitness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.

This WUSF article [1] includes a photo of the actual "Investigative Lead Report", and the original image is far too dark for anyone (human or algorithm) to recognise the person. It's possible that the original is better quality and that more detail could be discerned by applying image-processing filters - but it still looks like a very noisy source.

That same "Investigative Lead Report" also clearly states that "This document is not a positive identification … and is *not* probable cause to arrest. Further investigation is needed to develop probable cause of arrest".

The New York Times article [2] states that this facial recognition technology, which Michigan taxpayers have paid millions of dollars for, is known to be biased, and that the vendors do "not formally measure the systems' accuracy or bias".

Finally, the original NPR article states that:

> "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack

[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michigan/

[2] https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
jandrewrogers, almost 5 years ago

It isn't just facial recognition; license plate readers can have the same indefensibly Kafkaesque outcomes, where no one is held accountable for verifying computer-generated "evidence". Systems like the one in the article make it so cheap for the government to make a mistake, since there are few consequences, that they simply accept mistakes as a cost of doing business.

Someone I know received vehicular fines from San Francisco on an almost weekly basis solely from license plate reader hits. The documentary evidence sent with the fines clearly showed her car had been misidentified, but no one ever bothered to check. She was forced to fight each and every fine because they come with a presumption of guilt, but as soon as she cleared one they would send her a new one. The experience became extremely upsetting for her; the entire bureaucracy simply didn't care.

It took threats of legal action against the city for them to set a flag that apparently causes violations attributed to her car to be manually reviewed. The city itself claimed the system was only 80-90% accurate, but they didn't believe that to be a problem.
danso, almost 5 years ago

Since the NPR piece is a 3-minute listen without a transcript, here's the ACLU's text/image article: https://www.aclu.org/news/privacy-technology/wrongfully-arrested-because-face-recognition-cant-tell-black-people-apart/

And here's a first-person account from the arrested man: https://www.washingtonpost.com/opinions/2020/06/24/i-was-wrongfully-arrested-because-facial-recognition-why-are-police-allowed-use-this-technology/
vermontdevil, almost 5 years ago

From the ACLU article:

*Third, Robert's arrest demonstrates why claims that face recognition isn't dangerous are far-removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation.*

I fear this is going to be the norm in police investigations.
vmception, almost 5 years ago

> Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people.

The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team and board members represented the population. Even just enough representation that there is a countering voice would be better than none.

It would have simply been: "this edge case is not an edge case at all, axe it."

Accurately addressing a market is the point of the corporation, more than an illusion of meritocracy amongst the employees.
gentleman11, almost 5 years ago

The discussion about this tech revolves around accuracy and racism, but the real threat is global, unlimited surveillance. China is installing 200 million facial recognition cameras right now to keep the population under control. It might be the death of human freedom as this technology spreads.

Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-china-surveillance-technology-is-used-to-help-the-state-control
seebetter, almost 5 years ago

Reminds me of this:

Facial recognition technology flagged 26 California lawmakers as criminals. (August 2019)

https://www.mercurynews.com/2019/08/14/facial-recognition-technology-flagged-26-california-lawmakers-as-criminals-this-bill-to-ban-the-tech-is-headed-to-the-senate/
sneak, almost 5 years ago

Another reason that it's absolutely insane that the state demands to know where you sleep at night in a free society. These clowns were able to just show up at his house and kidnap him.

The practice of disclosing one's residence address to the state (for sale to data brokers [1], and accessible by stalkers and the like) while these kinds of abuses are happening is something that needs to stop. There's absolutely no reason that an ID should be gated on the state knowing your residence. It's none of their business. (It's not on a passport; why is it on a driver's license?)

[1]: https://www.newsweek.com/dmv-drivers-license-data-database-integrity-department-motor-vehicles-1458141
w_t_payne, almost 5 years ago

Perhaps we, as technologists, are going about this the wrong way. Maybe, instead of trying to reduce the false alarm rate to an arbitrarily low number, we should develop CFAR (constant false alarm rate) systems, so that users of the system *know* that they will get some false alarms and develop procedures for responding appropriately. That way, we could get the benefit of the technology whilst also ensuring that the system as a whole (man and machine together) is designed to be robust, with appropriate checks and balances.
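As a rough illustration of the idea (the score distribution and target rate below are made-up calibration data, not from any real system), a CFAR-style system sets its decision threshold from the empirical distribution of known non-match scores, so the false alarm rate is fixed and known in advance rather than minimized and forgotten:

```python
import numpy as np

def cfar_threshold(nonmatch_scores, target_far):
    """Choose the threshold whose empirical false alarm rate on known
    non-matching pairs equals target_far (e.g. 0.001 = 1 per 1,000 searches)."""
    return float(np.quantile(nonmatch_scores, 1.0 - target_far))

# Calibration data: similarity scores for pairs known NOT to be the same person.
rng = np.random.default_rng(1)
nonmatch_scores = rng.normal(loc=0.2, scale=0.1, size=100_000)

threshold = cfar_threshold(nonmatch_scores, target_far=0.001)
print(f"decision threshold: {threshold:.3f}")
# Operators now *know* to expect roughly 1 spurious hit per 1,000 searches,
# so procedures (alibi checks, corroborating evidence) can be built around
# that known rate instead of treating every hit as ground truth.
```
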
hpoe, almost 5 years ago

I don't think using facial recognition to help identify probable suspects is necessarily wrong, but arresting someone based on a facial match algorithm is definitely going too far.

Of course, I really blame the AI/ML hucksters for part of this mess - the ones who sold us the idea of machines replacing, rather than augmenting, human decision making.
czbond, almost 5 years ago

A few things I just don't have the stomach for as an engineer - writing software that:

- impacts someone's health
- impacts someone's finances
- impacts someone's freedom

Call me weak, but I think about the "what ifs" a bit too much in those cases. What if my bug keeps them from selling their stock and they lose their savings? What if the wrong person is arrested? Etc.
at_a_remove, almost 5 years ago

I think that your prints, DNA, and so forth must, in the interests of fairness, be utterly erased from all systems in the case of a false arrest - with some kind of enormous, ruinous financial penalty for organizations that don't comply, as well as automatic jail time for the personnel involved. These things need teeth to happen.
rusty__, almost 5 years ago

Any defence lawyer with more than 3 brain cells would have an absolute field day deconstructing a case brought solely on the basis of facial recognition. What happened to the idea that police need to gather a variety of evidence confirming their suspicions before making an arrest? Even a state prosecutor wouldn't authorize a warrant based on such flimsy methods.
jackklika, almost 5 years ago

The company that developed this software is DataWorks Plus, according to the article. Name and shame.
FpUser, almost 5 years ago

And then, in some states, employers are allowed to ask on employment applications whether you have ever been arrested (never mind convicted of any crime). Sure, keep putting people down. One day it might catch up with China's social scoring policies.
MikusR, almost 5 years ago

Is that different from somebody getting arrested based on a mistaken eyewitness identification?
cpeterso, almost 5 years ago

What is a unique use case for facial recognition that cannot be abused and has no alternative solution?

Even the "good" use cases, like unlocking your phone, have security problems, because malicious people can use photos or videos of your face, and you can't change your face the way you would a breached username and password.
renewiltord, almost 5 years ago

I've got to be honest: I'm getting the picture that the police here aren't very competent. I know, I know, POSIWID, and maybe they're very competently aiming at the current outcome. But don't they just look like a bunch of idiots?
crazygringo, almost 5 years ago

In this *particular* case, computerized facial recognition is *not* the problem.

Facial recognition produces *potential* matches. It's still up to humans to look at the footage themselves and *use their judgment* as to whether it's actually the same person, as well as to judge whether other elements fit the suspect.

The problem here is 100% on the cop(s) who made that call for themselves, or intentionally ignored obvious differences. (Of course, without seeing the actual images in question, it's hard to judge.)

There are plenty of dangers with facial recognition (like using it at scale, or to track people without accountability), but this doesn't seem to be one of them.
TedDoesntTalk, almost 5 years ago

> Even if this technology does become accurate (at the expense of people like me), I don't want my daughters' faces to be part of some government database.

Stop using Amazon Ring and similar doorbell products.
aritraghosh007, almost 5 years ago

The pandemic has accelerated the use of no-touch surfaces, especially at places like airports, which are now more inclined to use face recognition security kiosks. What's not clear is the vetting process for these (admittedly controversial) technologies. What if Google thinks person A is an offender but Amazon thinks otherwise? Can they be used as counter-evidence? What is the gold standard for surveillance?
zro, almost 5 years ago

NPR article about the same, if you prefer to read instead of listen: https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig

I'll be watching this case with great interest.
ChrisMarshallNY, almost 5 years ago

Sadly, there's plenty more where that came from.
loup-vaillant, almost 5 years ago

And now the poor guy has an arrest record. Which wouldn't be a problem in reasonable jurisdictions, where it's nobody's business whether you've been arrested, as long as you've not been *convicted*.

But in the US, I've heard that it can make it harder to get a job.

I believe I'm starting to get a feel for how the school-to-prison pipeline may work.
linuxftw, almost 5 years ago

Wait until you hear about how garbage and unscientific fingerprint identification is.
aussieguy1234, almost 5 years ago

In a lot of police departments around the world, the photo database used is the driver's license database.

There is clothing available that can confuse facial recognition systems. What would happen if, next time you go for your driver's license photo, you wore a T-shirt designed to confuse facial recognition, for example like this one? https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothing-by-Naamiko/24714049.1YYVU?u
MertsA, almost 5 years ago

I would love to see police take a crack at this from the other side. Instead of matching against a database, set up a StyleGAN, mask the original photo or video to isolate just the face, and have the discriminator try to match the face. At the end you can see the generated face with a decent pose and, more importantly, look through the range of generated faces that produce a reasonable match, giving a somewhat decent idea of how confident you should be about any identification.
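A toy sketch of that idea - GAN inversion with a masked reconstruction loss. The generator below is a random stand-in (a real experiment would load pretrained StyleGAN weights), so this only illustrates the optimization loop, not a working identikit:

```python
import torch

# Stand-in generator: latent vector -> 64x64 grayscale "face".
# (Assumption: in practice this would be a pretrained StyleGAN.)
generator = torch.nn.Sequential(
    torch.nn.Linear(128, 1024), torch.nn.Tanh(),
    torch.nn.Linear(1024, 64 * 64), torch.nn.Sigmoid(),
)

def invert(target, mask, steps=500, lr=0.05):
    """Optimize a latent code so the generated image matches the target
    only where mask == 1 (the usable pixels of the surveillance frame)."""
    z = torch.zeros(1, 128, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(z).view(64, 64)
        loss = ((img - target) ** 2 * mask).mean()  # masked reconstruction loss
        loss.backward()
        opt.step()
    return z.detach()

# Toy "frame": random pixels with only the upper half usable.
target = torch.rand(64, 64)
mask = torch.zeros(64, 64)
mask[:32, :] = 1.0
z_hat = invert(target, mask)

# Sampling nearby latents shows the *range* of faces consistent with the
# visible evidence - a wide spread means any single "match" deserves little trust.
neighbors = [generator(z_hat + 0.1 * torch.randn(1, 128)) for _ in range(8)]
```
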
hkai, almost 5 years ago

While this case is bad enough, mistakes like this are not the biggest concern. Mistakenly arrested people are (hopefully) eventually released, even if they have to go through quite a bit of trouble.

The much worse consequence would be mass incarceration of certain groups, because the AI is too good at catching people who actually did something.

This second wave of mass incarceration would lead to even more single-parent families and poor households, and would reinforce the current situation.
whatshisface, almost 5 years ago

How does computerized facial recognition compare, in terms of racial bias and accuracy, to human-brain facial recognition? Police are not exactly perfect in either regard.
ineedasername, almost 5 years ago

It's supposed to be a cornerstone of "innocent until proven guilty" legal systems that it is better to have 10 guilty people go free than to deprive a single innocent person of their freedom. It seems like the needle has been moving in the wrong direction on that. I'm not sure if that's just my impression, or if it's because the internet and social networking have raised awareness of these issues.
tantalor, almost 5 years ago

No mention of whether a judge signed a warrant for the arrest. In what world can cops just show up and arrest you on your front lawn based on a hunch?
redorb, almost 5 years ago

If it's statistically proven not to work for black people, then I think the only options are:

1) Make it avoid black people entirely, i.e. they aren't stored in the database and aren't processed when scanned.

2) Put a 5-year hiatus on commercial/public use.

Either of these is more acceptable than too many false positives. #1 is really interesting to me as a thought experiment because it makes everyone think twice.
alex_young, almost 5 years ago

This technology will never be ready to use like this.

Similarly, we shouldn't collect vast databases of fingerprints or DNA and search them for every crime.

Why? Because error rates are unavoidable. There is always some uncertainty, and at large enough scale you will find false matches even with near-perfect DNA matching.

We must keep our senses and use these technologies to help us, rather than to find the hidden bad guy.
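The scale problem in one calculation (the match probability and database size here are illustrative, not real forensic figures):

```python
# Per-comparison false match probability p, database of N profiles:
p = 1e-6        # say, a one-in-a-million partial-profile match probability
N = 10_000_000  # searched against ten million stored profiles

expected_false_matches = N * p     # ~10 innocent "matches" per search
p_at_least_one = 1 - (1 - p) ** N  # ~0.99995

print(f"expected false matches: {expected_false_matches:.1f}")
print(f"P(at least one false match): {p_at_least_one:.5f}")
```

Per-comparison accuracy that sounds overwhelming becomes a near-certainty of false hits once you search everyone.
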
atum47, almost 5 years ago

Well, I'm going to get something off my chest. Every time I shared a project here using machine learning, people gave me crap: saying my models were simplistic, or I did something wrong, or the solution didn't work 100% of the time. Well, I studied ML back in college - the basics, the algorithms that started it all: linear regression, perceptron, Adaline, kNN, k-means. And guess what? ML doesn't work 100% of the time. I always wanted to see how people would react when a car driven by ML hits something, or when they base an important decision on the classification of a neural net. ML should be used alongside human intelligence, not by itself. You don't blindly trust a black box.
kwonkicker, almost 5 years ago

Due process should not be abandoned in favour of automation. This was police negligence as much as it was a software mismatch.

One more thing: the article was being too dramatic about the whole incident.
d--b, almost 5 years ago

The worst part is that they use facial recognition, which finds someone who looks like the suspect, and then they put the guy in a lineup and have him identified by the victim. Wtf?
neonate, almost 5 years ago

The prosecutor and the police chief should personally apologize to his daughters, assuming that would be age-appropriate.
paulorlando, almost 5 years ago

I've been thinking that this sort of event has become inevitable. Tech development and business models keep extending the environments in which we collect images and analyze them. Confidence values lead to statistical guilt. I wrote about it here if you're interested: https://unintendedconsequenc.es/inevitable-surveillance/
mistercool, almost 5 years ago

Relevant: https://www.theregister.com/2020/06/24/face_criminal_ai/
nebulous1, almost 5 years ago

> In Williams' case, police had asked the store security guard, who had not witnessed the robbery, to pick the suspect out of a photo lineup based on the footage, and the security guard selected Williams.

Great job, police.
bosswipe, almost 5 years ago

Boston just banned facial recognition, as have San Francisco, Oakland, and a bunch of other cities.

You can join this movement by urging your local government officials to follow suit.
throwawaysea, almost 5 years ago

A human still confirmed the match, right? That makes this not a facial recognition issue but something else.
blackrock, almost 5 years ago

TOTAL FAIL.
VWWHFSfQ, almost 5 years ago

Sounds like this guy is about to get a big payday.