
Balancing the Conversation about Facial Recognition

by Joshua New

When police recently arrested the Capital Gazette shooter in Annapolis, Maryland, they ran into a series of problems. The suspect was uncooperative and carried no identification, and although he had a criminal record and was known to police, a fingerprint search proved too slow. Anne Arundel County police were finally able to identify the shooter using facial recognition software that compared his face against millions of stored mugshots from state and federal databases. This example of police using facial recognition for legitimate and desirable ends stands in stark contrast to the recent flurry of press coverage highlighting fears that the technology could be abused. The conversation about how to ensure the responsible and ethical use of facial recognition is important and worth having. The conversation about banning facial recognition is not, because it overlooks the many benefits the technology can provide.

The debate about the use of facial recognition has largely been framed as a hypothetical one, weighing theoretical harms against theoretical benefits. That framing is wrong: the benefits are already being realized, while most of the harms remain theoretical. Facial recognition already provides substantial, tangible value to law enforcement, from identifying suspects like the Annapolis shooter to combating human trafficking, investigating child pornography, and preventing shoplifting.

For example, U.S. law enforcement agencies have partnered with a company called Marinus Analytics to identify victims of human trafficking, an effort that has contributed to the rescue of hundreds of victims. Marinus Analytics combines its own image analysis technology with Amazon Web Services' facial recognition service, Rekognition, to identify victims of trafficking in ads for sex work. Rekognition is the same software that some Amazon shareholders have tried to pressure the company to stop selling to law enforcement.
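To make the underlying capability concrete, here is a minimal sketch of what a single face comparison against Amazon's Rekognition service, the software named above, can look like in Python using the boto3 library. It is an illustrative example only, not Marinus Analytics' actual pipeline; the file names, function name, and similarity threshold are assumptions made for the example.

    import boto3

    def faces_match(known_photo_path, candidate_photo_path, threshold=90.0):
        """Return True if Rekognition finds a face in the candidate photo that
        matches the largest face in the known photo above the similarity threshold."""
        client = boto3.client("rekognition")
        with open(known_photo_path, "rb") as source, open(candidate_photo_path, "rb") as target:
            response = client.compare_faces(
                SourceImage={"Bytes": source.read()},
                TargetImage={"Bytes": target.read()},
                SimilarityThreshold=threshold,
            )
        # FaceMatches lists faces in the candidate image that met the threshold.
        return len(response["FaceMatches"]) > 0

    # Hypothetical usage: flag a possible match for human review.
    # if faces_match("known_victim.jpg", "ad_photo.jpg"):
    #     print("Possible match; escalate to an investigator.")

In practice, a match like this is a lead to be reviewed by a human investigator, not a definitive identification.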

The Department of Homeland Security (DHS) runs a program called Child Exploitation Image Analytics (CHEXIA) to evaluate and deploy facial recognition algorithms that can identify children in child pornography. The National Center for Missing and Exploited Children receives an increasing number of tips each year, and the volume of child exploitation imagery is growing exponentially. Identifying and locating victims as quickly as possible is crucial to their safety, but tracking and analyzing this imagery is particularly challenging, as over 300 deep web boards with more than 500,000 members are creating, manipulating, and exchanging this material at any given time. Facial recognition algorithms allow analysts to quickly identify every instance of a particular individual's face in seized imagery, which can greatly aid investigations. DHS is integrating facial recognition algorithms developed through CHEXIA into existing forensic software that it makes freely available to law enforcement agencies around the world.

Even the private sector has begun to use facial recognition to reduce crime. Retailers are beginning to deploy facial recognition software for their in-store cameras to spot shoplifting, which costs stores between 1 and 3 percent of revenue per year, on average. By identifying known shoplifters with facial recognition when they enter a store, some stores have been able to reduce theft by 34 percent.

The opposition to law enforcement use of facial recognition algorithms falls into two main camps. First, some are concerned that the use of facial recognition will have a disproportionate impact on people of color and women, as some facial recognition algorithms are less accurate for these groups than for white men. This is an understandable concern, and one the U.S. government tests for in its regular assessments of facial recognition algorithms. But rather than abandon the technology, the better approach is to increase research and testing to address the disparity. Such efforts are already underway. In June 2018, IBM announced it would publish an annotated dataset of over 1 million images, the world's largest to date, to advance research into bias in facial analysis, as well as an annotated dataset of 36,000 images of faces with an equal distribution of skin tones, genders, and ages to help researchers evaluate their facial recognition algorithms.

The second main concern about facial recognition involves potential threats to civil liberties. For example, the same system police used to identify the Capital Gazette shooter was used to monitor protesters during the rioting in Baltimore following the 2015 death of Freddie Gray. Again, while the objection is understandable, policymakers should not use it as a justification to prohibit law enforcement use of facial recognition altogether, but rather as evidence of the need to develop clear rules and norms for the technology's use. In this case, for example, the rules might have allowed the technology to be used only to identify protesters who broke the law, or required that images not be stored beyond a set period or reused for other purposes. Critics also claim the United States is on the path to becoming China, which is using facial recognition in alarming ways. But the United States has a robust framework of laws governing freedom of speech and assembly, and it would be disingenuous to interpret China's use of the technology as a sign of things to come for the United States.

Opponents of facial recognition have overwhelmingly portrayed the technology as cause for concern. While there are legitimate questions about how U.S. law enforcement uses the technology, and a need for rules and norms to govern its use, the significant benefits from facial recognition should not be ignored. The myopic view that facial recognition is dangerous risks demonizing its legitimate, desirable, and even life-saving applications. As police departments continue to experiment with facial recognition, policymakers and the public should consider the benefits and risks of the technology fairly and avoid succumbing to knee-jerk alarmism.

Image: pxhere
