TECHNOLOGY | HEY, I RECOGNIZE YOU | 41 | maceandcrown.com
By Kyle Winfield

Cameras are everywhere. Traffic cameras catch someone speeding or running a red light. Surveillance cameras deter shoplifting in stores, among any number of other things. Police wear them on their bodies. You have one in your pocket. At this point, they are just another part of everyday life, so commonplace that we do not even notice them anymore. But what if these cameras could recognize you?

Facial recognition software is technology that can identify an individual from a still image or a video frame. It works by selecting certain facial features and matching them against a database of other faces. The technology has been around longer than one might think, getting its start in the 1960s. The original software, developed by Woody Bledsoe, Helen Chan, and Charles Bisson, required a human to manually pick out various features of a face, which were then compared against other faces in a database to determine that person's identity. The technology has since evolved from humans manually entering faces and the distances between their features to systems driven by AI learning.

Since then, facial recognition has been used in a variety of settings. The most common is on smartphones, where it has emerged as a new way to unlock your phone, replacing the old PINs, passwords and fingerprint scanners. All that's required is the phone owner's face and a front-facing camera: the software stores an image of the owner's face, then scans their face against the stored image whenever they open up their phone. Another use of this technology is in the world of medicine. Much as smartphones unlock by scanning the user's face, some hospitals dispense medicine by scanning a face and matching it with a prescription. While those uses seem relatively benign, there are other uses, and flaws, of this technology that should raise some serious eyebrows.
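The matching step described above, comparing extracted facial features against a database of known faces, can be sketched as a nearest-neighbor search over feature vectors. The following is a minimal illustration only, not any vendor's actual algorithm; the vectors, names, and threshold are all invented for the example.

```python
import math

# Toy "database" of enrolled faces. In a real system each entry would be a
# high-dimensional embedding produced by a neural network; these short
# vectors and names are made up purely for illustration.
ENROLLED = {
    "alice": (0.10, 0.80, 0.30),
    "bob":   (0.90, 0.20, 0.50),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.25):
    """Return the enrolled identity closest to the probe vector,
    or None when no stored face is close enough (an unknown face)."""
    name, vec = min(ENROLLED.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, vec) <= threshold else None

# A probe near Alice's stored vector matches; a distant one does not.
print(identify((0.12, 0.78, 0.31)))  # alice
print(identify((0.50, 0.50, 0.99)))  # None
```

The `threshold` parameter is where the trade-off discussed in this article lives: set it too loose and the system produces false matches (misidentifying innocent people), too strict and it fails to recognize enrolled faces at all.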
Using facial recognition software to catch criminals sounds like something out of a science fiction film. It conjures images of grainy camera footage zooming in on a crowd, singling out the perp's face, scanning it, and enhancing the image to a sharper clarity. The police then do their job, and the case is solved. Or it could go in a different direction: the software misidentifies a suspect, and the police arrest the wrong man.

These fears are shared by many in the field of facial recognition software, who feel the technology would give the police and other government organizations too much power to monitor the populace. While the software has the potential to make breakthroughs in missing persons cases or to track criminals, it is still not perfect, and its mistakes and false positives could become the basis for making arrests, or for even worse abuses. This has led some companies that produce the software to refuse to sell it to law enforcement agencies, and has even prompted calls for Congress to limit the uses of this technology.

Brian Brackeen, CEO of Kairos, a company that produces facial recognition software, expressed his concern about the potential uses of this software, saying, "It's not too late for someone to take a stand and keep this from happening." Brackeen's concerns come from the software's difficulty differentiating the faces of people who have darker complexions. This concern was also noted in a New York Times article, which detailed how well these technologies distinguish lighter-skinned faces from darker-skinned ones. Steve Lohr, the writer of the New York Times article, noted that when the software tried to identify white males, "the software was right 99 percent of the time," while its success rate was just "35 percent for images of darker skinned women." These disparities were documented by M.I.T.
Media Lab researcher Joy Buolamwini, who demonstrated that biases can creep into artificial intelligence depending on the kind of data that is fed to it during training. When the AI is given more images of white male faces, it becomes better at telling those faces apart, with a much higher rate of success. The inverse is also true: given fewer faces of African American women, its rate of success in differentiating between them drops.

Think back to how this could be used in the field of law enforcement. If a law enforcement agency purchases software that was trained to identify white men with a far higher rate of success than black women, the chances of the latter group being wrongfully misidentified and targeted will drastically increase.

Or consider how the technology has been used by law enforcement abroad, in places like Hong Kong. As political protestors fill the streets of Hong Kong, police have taken to using facial recognition tied to surveillance cameras and phones to target and arrest the individuals who lead the protests. This has led protestors to cover their faces to avoid being singled out by increasingly aggressive police tactics. Some protestors have also turned the technology around, using facial recognition apps to identify undercover police who have tried to infiltrate the protests. One protestor, Colin Cheung, who was wrongfully arrested, defended the use of the apps, saying in a New York Times article, "If law enforcement officers don't wear anything to show their identity, they'll become corrupt."

These are just some of the ways that facial recognition software can be used. While there are some positive uses, there can be just as many negative uses that could ultimately harm a society and its people.

Fall 2019 | 42