DIGITAL PRIVACY, PART ONE: THE DANGERS OF FACIAL RECOGNITION

There’s been a lot of talk in recent weeks regarding facial recognition technology. Much of the conversation has centered on privacy concerns. Other threads focus on technical flaws in the software that undermine the technology’s accuracy. Still others center on the demonstrated gender and racial biases of such systems, and on the potential for governments and police forces to use facial recognition to weaponize racial bias.

Indeed, the media has been following the conversation. Reports have dealt with China’s current use of facial recognition in its crackdown on a minority group; the questionable accuracy of the technology itself, particularly with people of color; and, of course, the intersection of privacy, law enforcement and racial bias when U.S. agencies and local police forces use facial recognition technologies.

A few other examples:

  • Concern that PimEyes, which identifies itself as a tool to help prevent the abuse of people’s private images, could instead “enable state surveillance, commercial monitoring and even stalking on a scale previously unimaginable.”
  • Concern that use of Clearview AI’s facial recognition system could easily be abused, as the app’s database was assembled by “scraping” pictures from social media, enabling the company to access your name, address and other details — all without your permission. The app, although not available to the public, is being “used by hundreds of law enforcement agencies in the U.S., including the FBI.” In May, Clearview AI announced that it would cease selling its software to private companies.
  • In response to the mask-related laws connected to the spread of COVID-19, tech companies have been attempting to update their facial recognition software so that it still works even when the subject of the scan is wearing a face mask.
  • Business Insider, Wired, U.S. News & World Report, Popular Mechanics, the Guardian, and the Washington Post have all published reports on ways to defeat facial recognition systems.
  • IBM’s announcement, in a letter to Congress, that “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
  • Amazon’s announcement that they are “implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”
  • Microsoft President Brad Smith confirmed that the company “will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
  • Other tech companies — NEC and Clearview AI among them — restated their commitment to providing facial recognition technology to police departments and governmental agencies.

So, yes, people are talking about facial recognition technology. And as the conversation grows, more people and corporations are joining in. MTN Consulting, like Amazon, IBM, Microsoft and others, is expressing alarm at how the technology is used and, in a growing number of instances, abused.

Oddly, many people don’t know a great deal about the technology, such as how it works, how accurate it is, or how much of a threat it poses.

Let’s explore:

What is facial recognition?

According to the Electronic Frontier Foundation, facial recognition “is a method of identifying or verifying the identity of an individual using their face. Facial recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops.”

How does it work?

According to Norton, a picture of your face is saved from a video or photograph. The software then looks at the way your face is constructed. In other words, it “reads the geometry of your face. Key factors include the distance between your eyes and the distance from forehead to chin. The software identifies facial landmarks — one system identifies 68 of them — that are key to distinguishing your face. The result: your facial signature.”

Next, your facial signature “is compared to a database of known faces. And consider this: at least 117 million Americans have images of their faces in one or more police databases. According to a May 2018 report, the FBI has had access to 412 million facial images for searches.”

Finally, the system determines whether your face matches any of the other stored images.
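
To make that pipeline concrete, here is a minimal, hypothetical Python sketch. It assumes some separate detector has already produced the 68 landmark points described above, reduces them to a simple geometric “signature,” and compares that signature against a database of enrolled faces. Real systems rely on learned embeddings rather than hand-built distance measures, and every name and threshold below is illustrative only.

    import numpy as np

    def facial_signature(landmarks: np.ndarray) -> np.ndarray:
        """Reduce 68 (x, y) landmark points to a scale-invariant geometric signature."""
        center = landmarks.mean(axis=0)                    # geometric center of the face
        distances = np.linalg.norm(landmarks - center, axis=1)
        return distances / distances.max()                 # normalize so image scale doesn't matter

    def best_match(probe, database, threshold=0.1):
        """Compare a probe signature to every enrolled signature; report the closest
        identity only if its distance falls under the (hypothetical) threshold."""
        name, dist = min(
            ((n, float(np.linalg.norm(probe - sig))) for n, sig in database.items()),
            key=lambda pair: pair[1],
        )
        return (name, dist) if dist < threshold else (None, dist)

    # Toy usage: random points stand in for the output of a real landmark detector.
    rng = np.random.default_rng(0)
    enrolled = {"alice": facial_signature(rng.random((68, 2))),
                "bob": facial_signature(rng.random((68, 2)))}
    print(best_match(facial_signature(rng.random((68, 2))), enrolled))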

How is it used?

Facial recognition has many uses. For example, the 2002 film “Minority Report” imagined the potential outcomes of the technology. In the film, when the main character, played by Tom Cruise, enters a mall, he is inundated by personalized greetings and advertising, all holographic, and all keyed to his facial scan – particularly, his eyes. Later, he enters the subway system, and facial recognition is again used, this time in lieu of immediate payment or carrying identification.

“Minority Report,” in its own way, was prescient in its prediction that facial recognition software would be everywhere, although it primarily addressed the commercial applications. In real life, however, the technology is used by both corporations and governments. A few examples:

  • The Moscow Times recently reported that Russia plans to equip more than 43,000 Russian schools with facial recognition. The 2 billion ruble ($25.4 million) project, named “Orwell,” will “ensure children’s safety by monitoring their movements and identifying outsiders on the premises,” said Yevgeny Lapshev, a spokesman for Elvees Neotech, a subsidiary of the state-controlled technology company Rusnano. According to Vedomosti, a Russian-language business daily, “Orwell” has already been installed in 1,608 schools.
  • Mobile phones are sold with facial recognition software that is used to unlock the phone, replacing the need for a password or PIN. Many companies – including Apple, Guangdong OPPO, Huawei, LG, Motorola, OnePlus and Samsung — offer phones with this technology.
  • As for laptops, Apple is lagging behind other manufacturers at the moment. The company recently announced that it is planning to add facial recognition software to its MacBook Pro and iMac lines. Meanwhile, Acer, Asus, Dell, HP, Lenovo and Microsoft have offered the technology in their laptops for years.

There are, of course, many other ways in which the technology is used:

  • The U.S. government uses it at airports to monitor passengers.
  • Some colleges use it to monitor classrooms, both for security purposes and for simpler tasks like taking attendance.
  • Facebook uses it to identify faces when photos are uploaded to its platform, so as to offer members the opportunity to “tag” people in the photos.
  • Some companies have eschewed security badges and identification cards in favor of facial recognition systems.
  • Some churches use it to monitor who attends services and events.
  • Retailers use surveillance cameras and facial recognition to identify regular shoppers and potential shoplifters. (“Minority Report,” anyone?)
  • Some airlines scan your face as your ticket is checked at the departure gate.
  • Marketers and advertisers use it at events such as concerts. It allows them to target consumers by gender, age, and ethnicity.

So, what’s the concern?

Well, there are three main concerns: privacy, accuracy, and governmental abuse. Running through all three, however, is a strong thread of racism.

Privacy

Although using a facial scan to gain access to your phone is more secure than, say, a short password, it isn’t perfect. There are some concerns about how and where the data is stored.

Admittedly, many people use facial recognition systems for fun. Specialized apps designed for, or that offer, the technology include B612, Cupace, FaceApp, Face Swap (by Microsoft), and Snapchat. These apps let you scan your face and swap it with, say, that of a friend or a film star.

The easy accessibility of such apps is a boon for those who want to use them. However, their very popularity gives rise to certain questions. For example, if the company stores the facial images in the cloud, how good is the security? How accessible is the data to third parties? Does the company ever sell that data to other companies? A simple data leak, or a more aggressive hack of the database, could compromise many people’s data.

Another privacy aspect involves monitoring people without their knowledge or consent. People going about their daily business don’t typically expect to be monitored … but there are exceptions, depending on where you live. Last year, China was accused of human rights abuses in Xinjiang, a region that is home to millions of Uighurs, a mostly Muslim ethnic group. The New York Times reported on how the government used facial recognition systems to identify Uighurs, hundreds of thousands of whom were then seized and imprisoned in clandestine camps. Millions of others are monitored daily to track their activities.

In the U.S., reports circulated that some police departments were using technology developed by Clearview AI. The startup had scraped billions of photos from social media accounts to assemble a massive database that law enforcement officials could access – all without people’s consent. In other words, any photos you’ve posted on Snapchat, Twitter, Facebook, Instagram or any other social media platform could be part of the database without your knowledge. The only way you would find out is if the police connect your face to a crime and come knocking on your door.

Indeed, Clearview AI has raised the ire of the American Civil Liberties Union, the European Data Protection Board, members of the U.S. Senate, and provincial and federal privacy watchdogs in Canada.

Admittedly, some will argue that, although the collection of the data is likely an invasion of people’s privacy, the data itself is useful to assist law enforcement. Granted, that interpretation is subjective, but relevant to the argument at hand. However, it also assumes two things: that people being surveilled by the police are suspects; and that the technology is accurate.

In both cases, however, the reverse is often true. And because of that, innocent people can be surveilled without their knowledge or consent; the wrong people can end up arrested, tried and convicted for crimes they didn’t commit; and racial bias can be weaponized. More on that latter point in a bit.

Accuracy

In December 2019, researchers at Kneron decided to put facial recognition to the test. Using images of other people (2-D photos, images stored on cell phones, and 3-D printed masks), they managed to penetrate security at various locations. Although most sites weren’t fooled by the 2-D images or video copies, the 3-D mask sailed through most of the scans, including at a high-speed rail station in China and at point-of-sale terminals. Worse, the team was able to pass through a self-check-in terminal at Schiphol Airport, one of Europe’s three busiest airports, with a picture saved on a cell phone. They were also able to unlock at least one popular cell phone model.

So, we know that the face-matching aspect of facial recognition can be fooled. Granted, one might argue that 3-D printers aren’t yet that common. However, given that worldwide sales of 3-D printers generated $11.58 billion in 2019, that 1.42 million units were sold in 2018, and that annual global sales are expected to hit 8.04 million units by 2027, it is safe to assume that 3-D printed masks pose a real risk to facial recognition systems.

Still, deliberate attempts to beat the system notwithstanding, there’s an even deeper concern regarding facial recognition. The face-matching aspect of the software isn’t always accurate, and it has shown demonstrable bias against women and people of color:

  • In 2018, the ACLU used Amazon’s facial recognition tech to scan the faces of members of Congress. Amazon’s “Rekognition” tool “incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime. The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.”
  • The FBI admitted in October 2019 that its facial recognition database “may not be sufficiently reliable to accurately locate other photos of the same identity, resulting in an increased percentage of misidentifications.”
  • In the United Kingdom, police departments use facial recognition systems that generate results with an error rate as high as 98 percent. In other words, for every 100 people identified as suspects, 98 of them were not, in fact, actual suspects.
  • In June 2019, a problem with a Chinese company’s facial recognition system went viral after one employee’s facial scan, used to clock into and out of work, “kept matching (the) employee’s face to his colleagues, both male and female. People started joking that the man must have one of those faces that looks way too common.”
  • Back in January, Robert Williams, a Black man, was arrested by Detroit police in his driveway. He then spent over 24 hours in a “crowded and filthy cell,” according to his attorneys. Police thought Williams was a suspect in a shoplifting case. However, the inciting factor for the arrest was a facial recognition scan, which had incorrectly suggested that Williams was the suspect. And while the charges were later dropped, the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” says the American Civil Liberties Union, which has filed a complaint with the Detroit Police Department. “Study after study has confirmed that face recognition technology is flawed and biased, with significantly higher error rates when used against people of color and women. And we have long warned that one false match can lead to an interrogation, arrest, and, especially for Black men like Robert, even a deadly police encounter. Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about,” the ACLU warns.
  • In May, Harrisburg University announced that two of its professors and a graduate student had “developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal. With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face.” On June 23, over 1,500 academics condemned the research paper in a public letter. In response, Springer Nature will not be publishing the research, which the academics blasted as having been “based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The academics also warn that it is not possible to predict criminal activity without racial bias, “because the category of ‘criminality’ itself is racially biased.”

As I indicated earlier, racism runs through the entire argument surrounding facial recognition. It’s not just that the technology can be used in a discriminatory manner (more on that later). It’s also that the scan results themselves show bias against women and people of color.

“If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong. That’s a hell of a combination.”

-Congressman Elijah Cummings, March 2017.

In 2012, a joint university study co-authored by the FBI showed that the accuracy of facial recognition scans was lower for African Americans than for other demographics. The software also misidentified “other ethnic minorities, young people, and women at higher rates.” The fact that more recent studies, including some as recent as last year, show these same problems indicates that the bias is known, yet still not being addressed.

Another joint university study, this one published in 2019, found that facial recognition software from Amazon, IBM, Kairos, Megvii, and Microsoft was significantly less accurate at identifying women and people of color. Among the findings: Kairos’ and Amazon’s software performed better on male faces than on female faces, and much better on light-skinned faces than on darker faces; both performed worst on dark-skinned women, with Kairos showing an error rate of 22.5 percent and Amazon an error rate of 31.4 percent; and neither company’s software showed any errors on lighter-skinned men.

In December 2019, a National Institute of Standards and Technology study reported the results of testing 189 facial recognition algorithms from 99 companies. The study found that the majority of the software showed some form of bias. Indeed, among the broad findings (the difference between the one-to-one and one-to-many matching modes mentioned here is sketched in code after this list):

  • One-to-one matching revealed higher error rates for “Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
  • Among U.S.-made software, “there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups (which include Native American, American Indian, Alaskan Indian and Pacific Islanders). The American Indian demographic had the highest rates of false positives.”
  • For software made in Asian countries doing one-to-one matching, there was no dramatic difference in false positives for Asian and Caucasian faces.
  • “For one-to-many matching, the team saw higher rates of false positives for African American females. Differentials in false positives in one-to-many matching are particularly important because the consequences could include false accusations.”
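
For readers unfamiliar with the two matching modes NIST evaluated, the following hypothetical Python sketch illustrates the difference. One-to-one matching verifies a claimed identity against a single enrolled template; one-to-many matching searches an entire gallery, which is why false positives there are so consequential: every additional enrolled face is another chance for an innocent person to surface as a “candidate.” The embeddings, names and threshold below are invented for illustration only.

    import numpy as np

    THRESHOLD = 0.6  # hypothetical decision threshold on embedding distance

    def verify(probe, enrolled_template):
        """One-to-one: does the probe match the single template for the claimed identity?"""
        return float(np.linalg.norm(probe - enrolled_template)) < THRESHOLD

    def identify(probe, gallery):
        """One-to-many: return every gallery identity whose template scores under the
        threshold; with a large gallery, even a tiny per-comparison false-positive
        rate can flag innocent people."""
        return [name for name, template in gallery.items()
                if float(np.linalg.norm(probe - template)) < THRESHOLD]

    # Toy usage with random 128-dimensional "embeddings" standing in for real ones.
    rng = np.random.default_rng(1)
    gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}
    probe = rng.normal(size=128)
    print(verify(probe, gallery["person_0"]), identify(probe, gallery))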

As we discussed earlier, three of America’s top technology companies recently announced that they would temporarily halt, or end altogether, the sale of facial recognition technology to police departments. The announcements by Amazon, IBM and Microsoft surprised police departments, market analysts and journalists for a specific reason: those particular companies had previously shown no real interest in what advocates for racial justice and civil rights had to say.

Although such advocates have complained for years about the threat posed to their communities by mass surveillance, and corporate complicity in that surveillance, it wasn’t until nationwide protests against police brutality and systemic racism that America’s top tech companies began to listen. As we’ve already determined, facial recognition is not all that accurate when dealing with people who are not White men. Even low error rates can result in mistaken arrests. And, as there is a demonstrated police bias against people of color, as shown in arrest rates, the idea of such technology being abused when used against “suspects” of color is not so unbelievable.

In a March 2017 hearing of the U.S. House of Representatives’ oversight committee, ranking member Elijah Cummings warned against law enforcement’s use of facial recognition software. “If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” Cummings said. “That’s a hell of a combination.”

So, we know that the technology isn’t foolproof, that it discriminates against women and people of color, and that it is being used increasingly by governmental agencies and police departments.

What can this lead to?

Remember the earlier observation about China?

Governmental Abuses

Last year, PBS went undercover in China’s Xinjiang region to investigate accusations of mass surveillance and detentions of Uighurs, a mostly Muslim ethnic group. As the New York Times reported, hundreds of thousands of Uighurs were seized and imprisoned in clandestine camps, while millions of others are monitored daily to track their activities.

In January, Amnesty International warned that, “In the hands of Russia’s already very abusive authorities, and in the total absence of transparency and accountability for such systems, the facial recognition technology is a tool which is likely to take reprisals against peaceful protest to an entirely new level.” The warning came as a Moscow court took up a case brought by a civil rights activist and a politician who argued that Russia’s surveillance of public protests violated their right to peacefully assemble.

And, of course, we have the United States, where governmental agencies and police departments use demonstrably racially biased facial recognition software.

As the ACLU reported after Amazon, IBM and Microsoft halted or ended the sale of facial recognition technology to law enforcement agencies, “racial justice and civil rights advocates had been warning (for years) that this technology in law enforcement hands would be the end of privacy as we know it. It would supercharge police abuses, and it would be used to harm and target Black and Brown communities in particular.”

The ACLU warned that face “surveillance is the most dangerous of the many new technologies available to law enforcement. And while face surveillance is a danger to all people, no matter the color of their skin, the technology is a particularly serious threat to Black people in at least three fundamental ways”:

  • The technology itself is racially biased (see above).
  • Police departments use databases of mugshots, which “recycles racial bias from the past, supercharging that bias with 21st century surveillance technology. … Since Black people are more likely to be arrested than white people for minor crimes like cannabis possession, their faces and personal data are more likely to be in mugshot databases. Therefore, the use of facial recognition technology tied into mugshot databases exacerbates racism in a criminal legal system that already disproportionately polices and criminalizes Black people.”
  • Even if the algorithms were equally accurate across race (again, see above), “government use of face surveillance technology will still be racist (because) … Black people face overwhelming disparities at every single stage of the criminal punishment system, from street-level surveillance and profiling all the way through to sentencing and conditions of confinement.”

And, indeed, fresh concerns about law enforcement’s use of facial recognition technologies have surfaced as the Black Lives Matter protests gain steam in the wake of the May 25 death of George Floyd, who died, unarmed, under the knee of a White police officer. The protests, which consist of American citizens exercising their First Amendment rights, have been met by heavily armored police, aerial surveillance by drones, fake cellular towers designed to capture the data stored on protesters’ phones, covert government surveillance, and threats from President Donald J. Trump.

Of course, it would be wrong to say that all police officers, or all governmental officials, are racist. It would be ludicrous, however, to say that the various systems that make up the infrastructure of the United States do not rest on a foundation that is racist in origin – particularly when it comes to law enforcement.

As the ACLU warned, “(the) White supremacist, anti-Black history of surveillance and tracking in the United States persists into the present. It merely manifests differently, justified by the government using different excuses. Today, those excuses generally fall into two categories: spying that targets political speech, too often conflated with ‘terrorism,’ and spying that targets people suspected of drug or gang involvement.” One currently relevant example is the FBI surveillance program that targets what the federal government considers to be “Black Identity Extremists” — the FBI’s way of justifying surveillance of Black Lives Matter activists, much as it kept a close watch on the Rev. Dr. Martin Luther King Jr. during the civil rights protests of the 1960s.

That some of America’s technology companies have decided, at least for now, to no longer be complicit in exacerbating racist policies is something to be applauded. However, it remains to be seen how long these changes will last, who will follow their lead … and whether any important lessons will be learned.

Time will tell.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.

 

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on the companies that build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms that are having (or will have) negative societal effects.

 

Image credit: iStock, by Getty Images
