Artificial intelligence, while present in virtually every aspect of our daily lives, is far from perfect. That matters especially when its application impacts civil rights.
Take, for example, facial recognition technology, a type of AI that can scan massive datasets of facial images to determine whether two images belong to the same person. The U.S. government, aware of this enormous power, has adopted, deployed and facilitated the proliferation of facial recognition across law enforcement, homeland security and even public housing.
Facial recognition shows us that when complex and evolving technology such as AI is deployed in the real world, as opposed to in a laboratory, technological flaws that might otherwise be interesting data points in a simulation can undermine the basic freedoms of the American people.
At my request, the U.S. Commission on Civil Rights conducted a monthslong investigation into the federal government’s use of facial recognition within the Departments of Justice, Homeland Security and Housing and Urban Development. Then we published a bipartisan report last fall acknowledging the technology’s utility in solving crimes, combating terror threats and locating missing children.
However, the report also highlights the grave risks that facial recognition poses to the civil rights of all Americans. The report is significant not just for its contributions to an under-studied field, but also for the rare consensus it garnered at an agency evenly divided among Democratic and Republican appointees. This shows that the civil rights concerns about AI are not partisan.
Facial recognition technology is a system of interdependent components whose underlying models are trained largely on images of white people, making them less accurate at recognizing non-white people in real-world settings. The system can still process images of people from underrepresented groups, but with higher error rates.
The cameras that facial recognition algorithms rely on may capture drastically different renderings of the same person’s skin tone depending on camera quality, camera positioning and lighting conditions. The algorithm may then return a false positive, meaning the system concludes that two different people are the same person, or a false negative, meaning the system concludes that two images of the same person show two different individuals.
In other words, facial recognition’s flaws disproportionately harm people of color. The same is true for women and seniors.
It is critical to fully understand how these technological flaws translate into real-world consequences. If law enforcement agencies rely on facial recognition that has not been properly tested, fail to train agents in its proper use, or fail to disclose its use to defendants in criminal cases, a false positive match can ruin an innocent person’s life.
Michigan citizen Robert Williams experienced this firsthand when he was wrongfully arrested in front of his family for the robbery of a Shinola store in Detroit after two blurry surveillance photos became the basis for a mismatched facial recognition result.
His case involved omissions of facial recognition use in the arrest warrant and an unreliable photo lineup procedure. It led to an unprecedented settlement by the Detroit Police Department requiring training on the risks of facial recognition, especially when used on people of color, and a significant rollback of the department’s reliance on the technology.
But imagine the direct and collateral consequences for Williams and his loved ones if this exculpatory evidence had never emerged and he had been convicted and sentenced — or, more realistically, coerced under the weight of the criminal legal system to plead guilty.
A recent investigation by the Washington Post revealed that police departments across America frequently disregard their own internal policies intended to prevent these inaccurate identifications.
This is particularly alarming given the testimony of Miami Assistant Police Chief Armando Aguilar, who told the commission last spring that the widely used Clearview AI software relied upon by the Miami Police Department is accurate only 40 percent of the time before the human review required by departmental guidelines. Even then, human reviewers can fall victim to “automation bias,” the tendency to favor suggestions from automated systems and to discount contradictory information.
Facial recognition technology isn’t problematic only in law enforcement. If public housing authorities use facial recognition to surveil their tenants, whose incomes afford them no meaningful alternative to such housing, those tenants are forced to choose between housing and privacy, and they risk unfair consequences such as eviction or the denial of entry because of false positive and false negative results.
Issues like these were central to a 2023 Washington Post investigation into the use of surveillance systems by public housing authorities, many equipped with facial recognition capabilities.
Facial recognition testing, training and deployment guardrails, as they exist today, are neither holistic nor standardized enough to account for the complex, real-world scenarios in which federal and local governments are deploying the technology.
To summarize, when the federal government deploys and heavily relies upon facial recognition technology in real-world scenarios without proper testing and oversight, such as in the form of a “human-in-the-loop” to independently review search results, it can become the basis for false arrests, wrongful convictions and unfair housing practices, to say nothing of the privacy risks inherent in mass surveillance.
As AI proliferates due to its usefulness, we must be mindful of its ever-growing risks to civil rights and civil liberties. Several key recommendations to the federal government contained in the commission’s report offer a framework for how governments at the federal and local levels, and even private actors, can guard against such harms.
First, facial recognition testing and training should be mandatory, standardized and involve real-world scenarios.
Second, public transparency in the use of facial recognition by a department or agency should be prioritized, such as posting use policies on their websites and informing criminal defendants when facial recognition has been used against them.
Third, individuals harmed by the misuse or abuse of facial recognition technology should have a statutory mechanism for redress of any harm suffered.
This moment in history presents a crucial opportunity for the U.S. government to meet immense technological potential with due consideration and protection for the civil rights and civil liberties of every American.
Mondaire Jones is a member of the U.S. Commission on Civil Rights and formerly a Democratic U.S. representative for New York’s 17th Congressional District, where he served on the House Judiciary and Ethics committees.