September 23, 2020

Dear Subscribers,

In June, MTN Consulting commissioned a series of articles on topics outside our usual focus on network infrastructure. The first examined bots and their effect on social media, the next two addressed privacy (focusing on data privacy, and facial recognition technology), and the last assessed digital advertising's rise and role in the "death of objective journalism."

With the U.S. Presidential election just a few weeks away, these pieces are all the more relevant today. Bad actors continue to use bots to manipulate public opinion and spread falsehoods (with some unpredictable consequences). It gets harder every day to secure your personal data from hackers and government authorities abusing their powers. Facial recognition is being used by police and online platforms with limited accuracy and often no regard for personal privacy. As for journalism, the Internet sector has done just fine during COVID-19 in part because of soaring ad revenues - while journalists face staff cutbacks and private-equity takeovers left and right.

The situation isn't great, but the answer is not to look away. Educated, engaged citizens are the only protection against authoritarianism. And authoritarianism isn't good for technology innovation or competition, two things MTN Consulting cares very much about in its coverage of network infrastructure.

These pieces were written by Contributing Writer Melvin Bankhead, an experienced journalist from upstate New York. Below are short introductions to each, with links to the full articles. Please feel free to send along feedback.

Best regards,
Matt Walker
Chief Analyst

So … let’s talk about bots.

You’ve probably heard about them already … most likely connected to social media and the 2016 presidential election.

But, do you know what they are? Or what makes them so dangerous?

Let’s review:

What’s a bot?

A bot is a software program designed to perform a specific task automatically. By their nature, bots are neutral. One of the things that makes them so useful is that they can be programmed to simulate human interaction. A common example is the automated customer service that many websites offer. You log in, seek customer service, and a chat window opens. The person you end up talking to may, in fact, not be a person at all.
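For readers curious what is under the hood, a customer-service chat bot can be as simple as a keyword lookup that maps words in your message to canned replies. The sketch below is purely illustrative - the keywords and responses are invented, and real systems are far more sophisticated - but it shows the basic idea of a program simulating a human agent:

```python
# Toy chat bot: match keywords in the user's message to canned replies.
# All keywords and replies here are invented for illustration only.

RESPONSES = {
    "refund": "I can help with refunds. Please share your order number.",
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "human": "Connecting you to a live agent now.",
}
DEFAULT = "Sorry, I didn't understand that. Could you rephrase?"

def bot_reply(message: str) -> str:
    """Return the canned reply for the first keyword found in the message."""
    text = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return DEFAULT

print(bot_reply("How do I get a refund?"))
# -> "I can help with refunds. Please share your order number."
```

A few dozen lines like these are enough to handle the routine questions that make up much of a support queue - which is exactly why the "person" in the chat window so often isn't one.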

How do they work?

Bots are designed to automatically perform tasks that a human would normally perform. For example, you could pick up your phone, open your search engine (we’ll use Google), and type in “What are bots?” Or, you could simply say, “Hey, Google … what are bots?” And your phone, thanks to the bot linked to your voice-recognition software, would answer you. In many ways, bots simplify our lives. Regrettably, they also increasingly make things more complex and difficult.

Why should you care?

Ever been enraged when, after waiting a long time for ticket sales to open to your favorite event, the event sells out in mere minutes? In December 2016, President Barack Obama signed the “Better Online Ticket Sales Act,” which banned “the circumvention of control measures used by Internet ticket sellers to ensure equitable consumer access to tickets for certain events.” In other words, it banned people from using bots to scoop up huge numbers of tickets in order to resell them, usually at exorbitant rates, on secondary markets.

Unconvinced? In 2018, the Pew Research Center released a study showing that bots were making a disproportionate impact on social media. During a six-week period in the summer of 2017, the center examined 1.2 million tweets that shared URL links to determine how many of them were actually posted by bots, as opposed to people.

Among the findings:

  • Sixty-six percent of all tweeted links were posted by suspected bots, which suggests that links shared by bots are actually more common than links shared by humans....CONTINUE READING

There’s been a lot of talk in recent weeks regarding facial recognition technology. Much of the conversation has centered on privacy concerns. Other aspects concern the technical flaws in the software, which impact the technology’s accuracy. Still others center on the demonstrated gender and racial biases of such systems, and the potential for governments and police forces to weaponize racial bias through facial recognition.

Indeed, the media has been following the conversations. Reports have dealt with China’s current use of facial recognition in its crackdown on a minority group; the questionable accuracy of the technology itself, particularly for people of color; and, of course, the intersection of privacy, law enforcement and racial bias when U.S. agencies and local police forces use facial recognition technologies.

A few other examples:
  • Concern that PimEyes, which identifies itself as a tool to help prevent the abuse of people’s private images, could instead “enable state surveillance, commercial monitoring and even stalking on a scale previously unimaginable.”
  • Concern that use of Clearview AI’s facial recognition system could easily be abused, as the app’s database was assembled by “scraping” pictures from social media, enabling the company to access your name, address and other details — all without your permission. The app, although not available to the public, is being “used by hundreds of law enforcement agencies in the U.S., including the FBI.” In May, Clearview AI announced that it would cease selling its software to private companies.
  • In response to the mask-related laws connected to the spread of COVID-19, tech companies have been attempting to update their facial recognition software so that it still works even when the subject of the scan is wearing a face mask...CONTINUE READING
As I indicated in Part One of these reports on digital privacy, digital tools such as facial recognition are used for many beneficial purposes. However, as I demonstrated, those tools are also extremely easy to abuse, particularly in the hands of governments and the law enforcement community.

One film in the blockbuster Marvel Cinematic Universe series demonstrates the threat quite capably.

Reel life reflecting real life

In “Captain America: The Winter Soldier,” Steve Rogers, the titular super-soldier, finds himself in a race against time to stop a deadly conspiracy that is fueled by abuse of digital surveillance. It’s discovered that the government security agency SHIELD has been infiltrated by a terrorist group known as Hydra. As Hydra scientist Arnim Zola explains, “Hydra was founded on the belief that humanity could not be trusted with its own freedom. What we did not realize is that if you try to take that freedom, they resist. (World War II) taught us much. Humanity needed to surrender its freedom willingly. After the war … the new Hydra grew. For 70 years, Hydra has been secretly feeding crises, reaping war. … Hydra created a world so chaotic that humanity is finally ready to sacrifice its freedom to gain its security.”

Hydra infiltrator Jasper Sitwell explains how digital information is being used to determine the targets for the imminent lethal uprising. “The 21st century is a digital book. Zola taught Hydra how to read it. Your bank records, medical histories, voting patterns, emails, phone calls, your damn SAT scores. Zola’s algorithm evaluates people’s past to predict their future. … And then the Insight helicarriers [heavily armed aerial transports] scratch people off the list a few million at a time.”

Yeah, that’s a frightening scenario: “Big Brother” writ large. Depending on your age and education, you might wonder what the hit CBS television show has to do with digital privacy. After all, the reality TV show is designed for entertainment. But the phrase “Big Brother” debuted in George Orwell’s 1949 novel “1984,” in which a totalitarian government maintains control through constant electronic surveillance of its citizens. Today, the phrase “Big Brother” is “a synonym for abuse of government power, particularly in respect to civil liberties, often specifically related to mass surveillance.”

And, as I demonstrated in the previous essay, digital information, particularly facial recognition, can easily be misused and abused … as demonstrated by these most recent examples, which were made public after the last essay was published:

  • In Michigan, Robert Williams, a Black man, was arrested by Detroit police in his driveway. Police thought Williams was a suspect in a shoplifting case. However, the inciting factor for the arrest was a facial recognition scan, which had incorrectly suggested that Williams was the suspect. And while the charges were later dropped, the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” says the American Civil Liberties Union, which has filed a complaint with the Detroit Police Department.
  • In May, Harrisburg University announced that two of its professors and a graduate student had “developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal. With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face.” On June 23, over 1,500 academics condemned the research paper in a public letter. In response, Springer Nature will not be publishing the research, which the academics blasted as having been “based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The academics also warn that it is not possible to predict criminal activity without racial bias, “because the category of ‘criminality’ itself is racially biased”...CONTINUE READING

In the beginning, the people of Earth told their truths, voiced their opinions, and advertised their wares and services in print media such as newspapers and magazines.

And the people of Earth looked upon their works, and called it good.

In time, humanity harnessed the power of parts of the electromagnetic spectrum. And lo, the people of Earth told their truths, voiced their opinions, and advertised their wares and services on radio.

And the people of Earth looked upon their works, and called it good.

In time, humanity harnessed the power of more of the electromagnetic spectrum. And lo, the people of Earth told their truths, voiced their opinions, and advertised their wares and services on television.

And the people of Earth looked upon their works, and called it good.

In time, humanity harnessed the power of even more of the electromagnetic spectrum. And lo, the people of Earth told their truths, voiced their opinions, and advertised their wares and services in the digital realm.

And the people of Earth looked upon their works, and … well, that’s where all kinds of things got screwed up … particularly for news outlets.

Getting to the point

Of course, “screwed up” is a completely subjective interpretation of the current state of journalism. Still, I consider it appropriate, at least as it relates to digital marketing and social media. One advantage of a journalism career that ran from 1997 to 2018 is that I was able to watch in real time as journalism as a whole shifted to accommodate the new reality of the internet. I’ve also been able to watch as the rise and growth of social media was seemingly accompanied by a loss of many Americans’ critical thinking skills and, from there, the growth of a hyper-partisan nation where your political affiliation dictates your news source.

My biased version of history notwithstanding, there is no arguing that there has been a massive shift in how journalism is defined and perceived in this country. With the rise of the internet came the birth of social media. With social media came an expansion of where people could share their news and voice their opinions.

Where things got “screwed up,” as I said, is when people stopped looking at their opinions as their own thoughts and biases, and started perceiving them as “facts.” And then they spread these “alternative facts” (more on this later) via social media. And as the social media platforms gained power, and truth became more and more subjective, news organizations lost power.

Recent news

Digital advertising has fueled the growth of Facebook, Google, Baidu, Tencent, and other internet services companies. The companies don’t charge their users, which enhances the social media platforms’ popularity. Still, the proliferation of the social media platforms hasn’t been the best thing for the news business. Indeed, digital companies have been in the news fairly often in recent years, as has their connection to news outlets. A few examples:

  • In fall 2019, it became apparent that the two reigning giants of digital advertising would have to acknowledge a third member of the club. Facebook and Google, which had ruled the industry for most of the decade, were about to be joined by Amazon. Indeed, Amazon’s advertising revenue has continued to grow, with even its 1Q20 numbers, from the beginning of the COVID-19 crisis, showing a year-over-year increase of 44 percent. Still, that growth comes with a price. Amazon is led by Jeff Bezos, who also owns The Washington Post. The Post’s investigative reporting on President Donald J. Trump drew Trump’s wrath, and that rage spilled over onto Amazon...CONTINUE READING
To see our most recently published reports, click here
For information on subscribing to our research services, click here

You are receiving this because you are signed up to receive MTN Consulting's latest blogs and research alerts. We hope you enjoy our content, but you can unsubscribe at any time with the link at the bottom of this email - or by replying with "unsubscribe".