The use of facial recognition systems powered by algorithms and software continues to raise controversy given their potential use by law enforcement and other government agencies. For over a decade, the Department of Commerce’s National Institute of Standards and Technology (NIST) has evaluated facial recognition to identify and report gaps in its capabilities. Its most recent report, in 2019, quantified the effect of age, race, and sex on facial recognition accuracy.
The greatest discrepancies that NIST measured were higher false-positive rates in women, African Americans, and particularly African American women. It noted, “False positives might present a security concern to the system owner, as they may allow access to impostors. False positives also might present privacy and civil rights and civil liberties concerns such as when matches result in additional questioning, surveillance, errors in benefit adjudication, or loss of liberty.”
On balance, however, NIST’s finding that accuracy varies significantly among the facial recognition algorithms used to match images against large photo databases has often been overlooked. This is despite NIST’s explicit caveat that “users, policymakers, and the public should not think of facial recognition as either always accurate or always error prone.”
Some major cities, such as San Francisco and Boston, have already imposed absolute bans on facial recognition technologies for all government agencies they control. In doing so, these cities have largely rejected NIST’s methodological testing results, which also have served as a catalyst for the development of more accurate facial recognition systems over time.
Additionally, these broad-brush prohibitions can foreclose highly beneficial uses. For example, in New Delhi, India, police used facial recognition to identify 3,000 missing children out of 45,000 within only four days of a trial launch. The same approach could be employed in the United States now to help reunite children with their parents after forced separations that took place as part of enhanced border enforcement.
A second regulatory option has been to enact a narrower ban on a time-limited basis (e.g., three years) so that facial recognition technologies can be more closely studied. This is the statewide approach that California took in its 2019 law, which imposed a moratorium on using facial recognition with police body cameras.
The effect of both these approaches has reverberated in the private sector. Amazon established a one-year moratorium on selling facial recognition systems to police departments nationwide. IBM has halted facial recognition system sales to any government agencies. So did Microsoft, for as long as there is no federal law regulating facial recognition. But that possibility seems well down the road for the current Congress, especially because reduced innovation makes it less likely that test results will demonstrate a much higher level of accuracy across different demographic groups.
In this context, some third-way thinking would be beneficial. Utah’s recently passed facial recognition legislation is poised to become law as soon as it is signed by Gov. Spencer Cox.
It places limitations on the way government entities may use image databases for facial recognition comparisons. It also describes the process and requirements for conducting a facial recognition comparison, including a written request with a statement of the specific crime being investigated and a “factual narrative” establishing a “fair probability” that the person is connected with the crime. A government employee may comply only with requests made for the purposes of investigating a felony, violent crime, or a threat to human life, or to identify a person who is dead, incapacitated, at risk, or otherwise unable to provide an identity to law enforcement.
Additionally, the law requires facial recognition training for Utah Department of Public Safety and government employees; provides that a department may use a facial recognition system only with respect to databases shared with or maintained by that same department; mandates a notice requirement for government entities that use facial recognition technology; and describes what information must be released, and what is protected, in relation to a facial recognition comparison.
The law is not perfect; it would be even more privacy-protective if it imposed comparable restrictions on how government agencies could run facial recognition analyses against private databases, such as those developed by social media companies. But for now, it represents a regulatory template that can enable further innovation, better accuracy, and real-world public safety applications, without imposing rigid barriers that may not be justified by algorithmic test results.
Utah’s legislation also reflects broad bipartisan support. It demonstrates that meaningful guardrails can be put in place now while leaving open the possibility that they may be fortified in the future, as necessary. Most importantly, it addresses current public policy concerns about regulating facial recognition systems as they continue to evolve.
Stuart N. Brotman is a Distinguished Fellow at The Media Institute and is a member of the Institute’s First Amendment Advisory Council. He is the author of Privacy’s Perfect Storm: Digital Policy for Post-Pandemic Times. This article appeared in InsideSources.com.