An innocent grandmother was jailed for months after AI facial recognition misidentified her as a fraud suspect, illustrating the real-world harms the technology can cause in law enforcement.
The case highlights systemic risks when authorities rely on AI for identification without robust human review and error-correction processes.
It has sparked calls for policy reforms, greater transparency, and accountability around the use of facial recognition technology.
The incident demonstrates that improvements in AI accuracy do not eliminate false positives, underscoring the need for safeguards and independent oversight.