Police Facial Recognition Accuracy
This cluster discusses police use of AI facial recognition, focusing on high false positive rates that risk wrongful arrests and unwarranted suspicion, while comparing it to error-prone traditional policing methods like eyewitness identification and lineups.
Sample Comments
All sorts of police field tests have significant rates of false positives. People actually get arrested on the basis of such "evidence" all the time. The justice and law enforcement systems essentially operate on the notion that these things are "good enough".
The system performs much better than the existing tool of closing one's eyes and randomly picking a picture with your finger, which police have been using since the dawn of their existence. That doesn't mean it performs well; it just outperforms a system that never worked to begin with, like drug dogs. I'm sure it'll soon be treated as 100% accurate, just like the dogs, despite its abysmal success rate. But I'm sure that's fine. So what if 81% of the matches are false positives?
There's already precedent for police and the judiciary misusing technology they don't understand. I suspect the underlying issue is that this person wouldn't trust government employees generally with a tool that produces false positives and requires discretion.
AI sucks, but on the other hand, judges and police officers aren't 100% accurate either.
The problem is that it's only 95% accurate in this case (28 out of 534 people wrongly identified as criminals). That's a pretty big margin of error when the result is a possible arrest. The scope of its use is also in question: sure, if you're looking for someone who kidnapped a child, 95% is good enough. If you use it for any minor violation, you risk harassing a large percentage of the population, who then need to prove it wasn't them. Some will flat out deny being the guilty party.
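The scale concern here is simple arithmetic. A minimal Python sketch, using the 28-of-534 figure from the comment above and hypothetical screening volumes, shows how even a roughly 5% false positive rate adds up as use broadens:

```python
# How a ~5% false positive rate scales with broader use.
# 28/534 comes from the comment above; the scan volumes are
# hypothetical, for illustration only.
false_positive_rate = 28 / 534  # roughly 5.2% wrongly flagged

for people_scanned in (534, 10_000, 1_000_000):
    expected_false_flags = people_scanned * false_positive_rate
    print(f"{people_scanned:>9,} scans -> ~{expected_false_flags:,.0f} innocent people flagged")
```

At minor-violation scale, that is tens of thousands of people who each have to prove it wasn't them.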
In this specific example, police coercing witnesses or using shaky evidence in a perp lineup has always been known as a big problem and has led to a lot of false convictions. This just has the added wrinkle of "AI" giving it more credibility than it deserves. You can see, even in this story, the victim tried to say he wasn't really sure, and they basically ignored him. They aren't trying to be "right" or catch the right guy. Someone goes to jail, case solved. If you're wrong, let the courts sort it out later.
"Police add yet another massively inaccurate tool to their arsenal with which they can easily put innocent people behind bars." In what way can this possibly put innocent people behind bars, never mind easily? The article specifically notes that any hits on the system are then checked by real police officers before any further action is taken, and from that point surely the same processes and controls apply as if an officer thought they'd recognised a person of interest as part of their normal duties.
If it shifts what triggers false positives of suspicion toward "was actually present near the scene of the crime" and away from "wrong skin color in the neighborhood," it can be argued the tech has improved people's lives.
The problem is not with the technology but with how it's used. A medical test is also not 100% error-proof, which is why a professional needs to interpret the results, sometimes conducting other tests or disregarding it completely. A cop stopping someone who resembles a criminal for questioning seems like a good thing to me, as long as the cop knows there's a reasonable chance it's the wrong guy.
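The medical-test analogy maps directly onto Bayes' theorem: when the people a system is looking for are rare in the scanned crowd, even an accurate matcher produces mostly false alarms. A minimal sketch with assumed numbers (the 95% echoes the accuracy figure discussed above; the 1-in-1,000 prevalence is purely hypothetical):

```python
# Positive predictive value: of the people the system flags,
# how many are actually the person being sought?
# All figures below are illustrative assumptions.
sensitivity = 0.95          # P(flagged | real match)
false_positive_rate = 0.05  # P(flagged | not a match)
prevalence = 0.001          # assumed: 1 in 1,000 scanned faces is a real target

p_flagged = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = (sensitivity * prevalence) / p_flagged
print(f"P(real match | flagged) = {ppv:.1%}")  # about 1.9%
```

This is the same arithmetic behind headline figures like "81% of matches were wrong": a small per-scan error rate can still mean that most of the people flagged are innocent.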
The false positive rate for this tech is absurdly high, and law enforcement treats it like it's perfect. That's enough of a reason to make it illegal.