What Can President Biden Do to End the Disinformation Age?

On Jan. 6, 2021, something extraordinarily dangerous occurred. During Congress’ certification of the Electoral College votes from the 2020 election, armed protesters stormed the Capitol Building, overwhelming police officers and forcing lawmakers to seek shelter. What made this occurrence so out-of-the-ordinary was that the protesters were supporters of then-President Donald J. Trump, who had been defeated in his bid for re-election by former Vice President Joe Biden.

Five people died in the attack or as a result of it. Brian Sicknick, a Capitol Police officer, died after being injured while confronting the rioters. Ashli Babbitt, a Trump supporter, was shot and killed by Capitol Police. Three other Trump supporters died that day as well: Kevin D. Greeson, who suffered a fatal heart attack; Rosanne Boyland, who was apparently trampled as the crowd attempted to breach police lines; and Benjamin Philips, founder of a pro-Trump website called Trumparoo, who reportedly suffered a fatal stroke.

As for what sparked the violence? Disinformation.

What is that, anyway?

According to Merriam-Webster, disinformation is “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.” In other words, disinformation is always intentional.

This is in contrast to misinformation, which is simply “incorrect or misleading information.” Admittedly, members of the news media frequently use these terms interchangeably. That needs to change, because the two could hardly be more different.

Misinformation can happen by accident, and be a simple mistake without malicious intent. Disinformation, however, is intentionally malicious, as it purposefully aims to spread falsehoods.

In this case, Trump, prior to Election Night, repeatedly claimed (falsely) that the election was rigged against him. Later, rather than accept his election loss, Trump loudly and frequently insisted (again, falsely) that the election was stolen from him. He claimed that a massive plot of voter fraud had robbed him of a second term and, after various state recounts failed to change the results, took to the courts in an effort to prove his case. In the overwhelming majority of those cases, the Trump campaign’s arguments were heard and dismissed, typically for lack of evidence, and sometimes for improper legal preparation.

Trump, throughout it all, continued to call upon his followers to resist, to “Stop the Steal,” and hinted that there would be violence if the so-called plot against him didn’t end. His followers believed his disinformation, culminating in the storming of the Capitol Building.

The events of Jan. 6 demonstrate one of the most visible, and increasingly common, outcomes of disinformation: violence.

HOW DID THIS ALL START?

Let’s walk things back a bit. Over a decade ago, an Ohio State University study warned that news consumers, rather than seeking out a diverse array of ideas, were migrating to news networks that reinforced the beliefs they already held. That bias has, in the decade-plus since, metastasized into the disinformation age we see today, as people increasingly came to regard their own values and points of view as “correct,” and any dissenting viewpoint as “wrong.” In time, dissenters were attacked as “anti-American.” Even the way people perceive the world has changed, with the concept of “common ground” relegated to the dustbin of social history.

At some point, many people stopped considering their opinions as merely their own beliefs, and started treating them as “facts.” For a growing segment of the population, even considering facts counter to their beliefs led to cognitive dissonance, defined as a “psychological conflict resulting from incongruous beliefs and attitudes held simultaneously.” Moreover, research has shown that attempting to simply change someone’s mind about a deeply held belief actually triggers parts of the brain associated with self-identity and negative emotions. In other words, the brain actually rejects concepts that run counter to what the person believes.

So, now you know why people “believe the crazy things they do,” and why they aren’t swayed by facts. Even when their information is demonstrably wrong, they still believe it. And then they spread their bad beliefs, their “alternative facts,” creating a dangerous and ever-growing cycle that eventually leads to the demise of objective facts, and of truth.

Fun fact: Science, journalism, and voting (the bedrock of politics) rely on the idea that facts are objective, not subjective, and are thus reliable. When one starts to question whether “truth” is real, belief in those institutions is jeopardized. And that can lead to outcomes like we see today: a growing partisan divide between those who believe in science and journalism, and those who do not.

Disinformation sparks a lack of trust in journalism, science, and political structures. That lack of trust creates a void. And ironically enough, as nature abhors a vacuum, something will always rush to fill that empty space.

Enter disinformation.

On Dec. 30, 2020, Sen. Ben Sasse, R-Nebraska, noted that “America has always been fertile soil for groupthink, conspiracy theories, and showmanship. But Americans have common sense. We know up from down, and if it sounds too good to be true, it probably is. We need that common sense if we’re going to rebuild trust.”

However, Emily Dreyfuss, editor of Harvard University’s Media Manipulation Casebook, warns that the proliferation of disinformation has a way of overriding common sense:

“Social science studies have shown that the more a person hears something or is exposed to something, the more true it sounds. It’s kind of a glitch in the human brain. It has evolutionarily served us before. But in a disinformation ecosystem, it really is dangerous. And what these hashtags do, what viral slogans and all of these – even memes – what they do is they take really complicated, nuanced issues that people can debate about, that people feel passionate about, and they distill them down to this really simple piece of information that becomes unstoppable in some ways.”

These days, everyone has their own definition of reality. Even as I write this, many in this country believe, wholeheartedly, in two different and conflicting realities. In one, Biden won the 2020 presidential election, making him the president-elect. In the other, Trump won re-election by a landslide. In the first, people believe that an authoritarian president with aspirations to dictatorship was unseated. In the latter, people believe that the Chosen One was brought down by a massive conspiracy of fraud.

It is important to point out that, for the purpose of this presentation, we will be proceeding with the objective reality: that Biden won the U.S. presidency with 81.2 million votes, compared to Trump’s 74.2 million; that Biden won the Electoral College vote, 306 to Trump’s 232; that the Electoral College certified that victory; and that the U.S. Congress certified the Electoral College results.

I bring this up because the current social and political atmosphere is a direct result of disinformation. Indeed, the repeated assertion that the election was “stolen” from Trump directly led to the assault on Congress. 

Elizabeth Neumann, former assistant secretary of counterterrorism at the Department of Homeland Security, put it simply:

“A huge portion of the base of the Republican party has now bought into a series of lies that the election was stolen from them, that there is rampant fraud, and, therefore, their voice is no longer heard.”

Indeed, Hallie Jackson, chief White House correspondent for NBC News, mentioned this in December 2020. She referenced Trump counselor Kellyanne Conway’s 2017 claim that “alternative facts” had been used to estimate the size of the crowd at Trump’s inauguration. Jackson warned that the U.S. is “reaching peak alternative fact-cism,” adding that “here we are four years later and it’s not just alternative facts … It’s alternative realities.”

Effectively, those who attacked Congress firmly believed the alternate reality pushed by Trump and his allies. Despite the lack of provable facts behind the argument, this disinformation radicalized Trump’s followers to the brink of violence. One more push, provided by Trump himself, and the insurrection exploded.

But how did we get to that point?

Let’s take a look.

AUTHENTIC HUMAN CONNECTIONS

The entire concept of social media hinges on the idea that it creates social connections online between people. The key word in that simplified explanation is “people.” When you’re on social media, you expect to be communicating and sharing ideas with other people. That honest communication, the authenticity of the human connection, is what makes the entire concept of social media thrive. 

Let’s be honest: we humans worship celebrities. From composers to pop singers, actors of stage and screen, athletes to politicians, we equate celebrity with power. The more popular a person is, the more power we assume they have. (Money, of course, is also associated with power. But more money does not automatically equate to more popularity or influence, at least not in the eyes of the public. After all, if you had a list of the world’s top billionaires, how many of the names would you actually recognize?)

So, popularity equals power online. On social media, popularity is measured in the number of followers, and the number of accounts that respond to your posts. And if the popularity isn’t enough, there’s a more personal payoff for social media users: a quick high, as though you’ve taken a drug. According to the research magazine Now:

Neuroscientists are studying the effects of social media on the brain and finding that positive interactions (such as someone liking your tweet) trigger the same kind of chemical reaction that is caused by gambling and recreational drugs.

According to an article by Harvard University researcher Trevor Haynes, when you get a social media notification, your brain sends a chemical messenger called dopamine along a reward pathway, which makes you feel good. Dopamine is associated with food, exercise, love, sex, gambling, drugs … and now, social media. Variable reward schedules up the ante; psychologist B.F. Skinner first described this in the 1930s. When rewards are delivered randomly (as with a slot machine or a positive interaction on social media), and checking for the reward is easy, the dopamine-triggering behavior becomes a habit.
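
To make that mechanic concrete, here is a minimal Python sketch of a variable reward schedule; the 10 percent payoff rate is an arbitrary illustration, not a measured figure:

```python
import random

def feed_checks(reward_probability: float, checks: int) -> int:
    """Simulate checking a feed `checks` times; each check pays off
    (a like, a retweet, a mention) with some fixed probability --
    structurally the same gamble as a slot machine pull."""
    return sum(1 for _ in range(checks) if random.random() < reward_probability)

random.seed(1)
# Rewards arrive unpredictably, so every single check *might* pay off.
# That is the variable schedule Skinner found most habit-forming.
print(feed_checks(reward_probability=0.1, checks=50))
```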

In other words … “Hello. My name is Social Media User, and I am an addict.”

So, you have a system that a) rewards social media users by giving them more influence and power when they attain enough followers, and b) provides an addictive instant-high reward system. And we tend to believe that the system is honest and fair and true.

The problem, of course, is that it isn’t.

ENTER THE BOTS

Bots, as we’ve covered before, are automated computer algorithms programmed to perform specific tasks that a human would normally handle. Part of what makes them so useful is that they can be programmed to simulate human interaction; the automated customer service chat offered on many websites is a common example.
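
To see just how simple such a bot can be, consider this minimal Python sketch; the platform, endpoints, and field names here are entirely hypothetical, not any real service’s API:

```python
import time
import requests

API_URL = "https://api.example-social.com"  # hypothetical platform
TOKEN = "YOUR_ACCESS_TOKEN"                 # hypothetical credential

def run_bot(keyword: str, reply_text: str) -> None:
    """Poll for posts mentioning `keyword` and auto-reply to each one,
    around the clock, with no human in the loop."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while True:
        resp = requests.get(f"{API_URL}/search",
                            params={"q": keyword}, headers=headers)
        for post in resp.json().get("posts", []):
            requests.post(f"{API_URL}/posts/{post['id']}/reply",
                          json={"text": reply_text}, headers=headers)
        time.sleep(60)  # repeat every minute, forever
```

A few dozen lines like these, cloned across thousands of accounts, is all it takes to simulate a crowd.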

However, technological aids such as bots jeopardize that human connection we were discussing, particularly when social media users’ all-too-human responses are driven not by a post conceived by a human, but by a computer algorithm designed to provoke an emotional, and sometimes irrational, response.

For a long time, Trump was at the top of the news cycle, so he’s an easy example. Much of his popularity, prior to his general exile from social media, was owed to his social media followers, whom he frequently rewarded by mentioning them. Indeed, during the first presidential debate of the 2016 election campaign, he noted that he had 30 million followers on Twitter and Facebook. That number had, prior to Jan. 6, 2021, risen to 88.5 million followers on Twitter, and 35.1 million followers on Facebook. An impressive following, to be sure.

But was it real?

A 2016 Oxford University study revealed that, between the first and second presidential debates that year, more than a third of pro-Trump tweets, and nearly a fifth of pro-Clinton tweets, came from bot-controlled accounts — a total of more than a million tweets.

The study also found:

  • During the debates, the bot accounts created up to 27 percent of all Twitter traffic related to the election
  • By the time of the election, 81 percent of the bot-controlled tweets involved some form of Trump messaging

And this isn’t just a problem during high-profile events like presidential debates. Two years later, a Pew Research Center study showed that bots had made a disproportionate impact on social media. In summer 2017, the center examined 1.2 million tweets that shared URL links to determine how many of them were actually posted by bots, as opposed to people. The findings were worrisome:

  • Sixty-six percent of all tweeted links were posted by suspected bots, which suggests that links shared by bots are actually more common than links shared by humans.
  • Sixty-six percent of links to sites dealing with news and current events were posted by suspected bots. Higher numbers were seen in the areas of adult content (90 percent), sports (76 percent), and commercial products (73 percent).
  • Eighty-nine percent of tweeted links to news aggregation sites were posted by bots.
  • Putting it all a bit more in perspective: The 500 people who were the most active online generated only an estimated six percent of links to news sites. In contrast, the 500 most active bot accounts were responsible for 22 percent of the tweeted links to popular news and current events sites. In other words, bot accounts tweeted more than three times as much as their human-controlled counterparts.
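
The “more than three times” comparison in that last item falls straight out of the study’s two percentages:

```python
human_share = 0.06  # links from the 500 most active human users
bot_share = 0.22    # links from the 500 most active suspected bots
print(bot_share / human_share)  # ~3.67 -- "more than three times as much"
```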

In other words, bots had essentially seized control of a large portion of social media. The digital province of humans was, instead, being partially ruled by bots. A few more examples of how that manifests, and the results:

  • In 2016, Congress passed the Better Online Ticket Sales Act, which banned the use of bots to “circumvent a security measure, access control system, or other technological control or measure on an Internet website or online service that is used by the ticket issuer to enforce posted event ticket purchasing limits or to maintain the integrity of posted online ticket purchasing order rules.”
  • November 2018: The FBI warned that “Americans should be aware that foreign actors—and Russia in particular—continue to try to influence public sentiment and voter perceptions through actions intended to sow discord. They can do this by spreading false information about political processes and candidates, lying about their own interference activities, disseminating propaganda on social media, and through other tactics.” The statement was a joint release with the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence.
  • February 2019: A study showed that bots, including thousands based in Russia and Iran, were much more active during the 2018 midterm elections than previously thought. In nearly every state, more than a fifth of Twitter posts about the elections in the weeks before Election Day were posted by bots.
  • 2019: Twitter detected and removed more than 26,600 bot-controlled accounts. Granted, that sounds like a lot, until you consider that, at the time, the platform had more than 330 million active users.
  • May 2020: Researchers determined that nearly half of the Twitter accounts posting information about COVID-19 were, in fact, bots. Researchers found more than 100 fake narratives about COVID-19 being published by the bot accounts, including conspiracy theories “about hospitals being filled with mannequins,” or that the spread of the coronavirus was connected to 5G wireless towers.
  • September 2020: Facebook and Twitter warn that the Russian group that interfered in the 2016 presidential election had again set up a network of fake accounts, as well as a website designed to look like a left-wing news site.
  • October 2020: Emilio Ferrara, a data scientist at the University of Southern California in Los Angeles, warns that bot-controlled social media accounts have become more sophisticated and harder to detect.
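
Ferrara’s warning is easier to appreciate once you see what naive detection looks like. Here is a toy Python heuristic; the signals and thresholds are illustrative guesses, not a validated model, and a sophisticated bot can be engineered to dodge every one of them:

```python
def bot_score(account: dict) -> float:
    """Crude heuristic: fraction of bot-like signals present (0.0 to 1.0)."""
    signals = [
        account["posts_per_day"] > 50,                     # inhuman volume
        account["account_age_days"] < 30,                  # freshly created
        not account["has_profile_photo"],                  # default avatar
        account["followers"] < account["following"] / 10,  # follow-spam ratio
    ]
    return sum(signals) / len(signals)

suspect = {"posts_per_day": 140, "account_age_days": 12,
           "has_profile_photo": False, "followers": 3, "following": 800}
print(bot_score(suspect))  # 1.0 -- flagged; modern bots trip none of these
```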

As we’ve discovered, bots are excellent at shaping public narratives to influence opinion. Another example: scammers have been caught using bots to write and post fake consumer reviews for ride-share companies, restaurants, hotels, and many other industries. The very information you rely upon to make informed decisions might have been subtly influenced by bots designed to shift your thinking along a predetermined narrative.

But, of course, the bots don’t just appear out of thin air. They are created, and controlled, by humans.

SEND IN THE TROLLS

According to Merriam-Webster, a troll is a person who intentionally antagonizes others online by posting inflammatory, irrelevant, or offensive comments or other disruptive content.

Now, this isn’t necessarily a bad thing. Trolls, of course, can serve a useful purpose in society by generating conversations that people may be reluctant to begin. Writing and publishing a controversial post can be a useful way to get people talking.

Of course, there’s the other kind of troll that is more concerning. Some people post disinformation in order to control the narrative. This type of troll has no interest in an honest, open dialogue. Rather, they want to spread their message, regardless of how harmful it is. And that is always a danger in a social media environment … particularly in a politically polarized nation further traumatized by a global pandemic.

To a large degree, trolls are responsible for a great deal of the disinformation plaguing the internet. Some countries establish troll farms to carry out disinformation campaigns against other sovereign nations, or even just to target specific individuals. As has been previously established, Russia did just that during the 2016 election campaign, acting both to support Trump and weaken Democratic nominee Hillary Rodham Clinton.

Trolling has always been a problem on the internet, but it intensified in 2020 during the COVID-19 crisis.

“Because so many in such a brief span of time have experienced the pandemic and indirectly the sudden increase in unemployment, the contagion effect associated with trolling behavior should be more extensive,” warns Dr. Kent Bausman, a Maryville University professor in the Online Sociology program. “Therefore, what may be grotesquely cathartic at the individual level simultaneously blooms into a toxic form of expression that ultimately erodes collective good will.”

Adds Jevin West, an associate professor at the University of Washington’s Information School: “It is difficult to measure whether trolls during this crisis are worse than others, but we are seeing a lot of troll activity and misinformation. We are swimming in a cesspool of (disinformation). The pandemic likely makes it worse because increased levels of uncertainty (create) the kinds of environments that trolls take advantage of.”

Trolls can simply post disinformation on social media networks. And, of course, that’s a relatively simple task. But, to really make an impact, they turn to more automated techniques.

Bots, anyone?

DIGITAL PANDEMIC

With the help of bots and trolls, disinformation spreads like wildfire over social media networks. Claire Wardle, of First Draft News, a truth-seeking non-profit based at Harvard’s Shorenstein Center, covered this in an interview with the BBC:

“In the early days of Twitter, people would call it a ‘self-cleaning oven,’ because yes there were falsehoods, but the community would quickly debunk them. But now we’re at a scale where if you add in automation and bots, that oven is overwhelmed.

“There are many more people now acting as fact-checkers and trying to clean all the ovens, but it’s at a scale now that we just can’t keep up.”

For example, fake content was widespread during the 2016 presidential campaign. Facebook has estimated that 126 million of its platform users saw articles and posts promulgated by Russian sources. Twitter has found 2,752 accounts established by Russian groups that tweeted 1.4 million times in 2016. Despite billions of dollars spent annually by big tech on R&D, they still haven’t solved these problems.

Dreyfuss, who is also a Harvard Shorenstein Center journalist, explained recently why disinformation is so pervasive:

“A lot of these media manipulation campaigns, and especially when it comes to vaccine hesitancy, they really prey on existing social wedges and cultural inequalities. So groups of people who may already be hesitant and distrustful of doctors are often targeted. … But in that environment where people are looking for answers and there aren’t necessarily simple and easy answers readily available, into that environment flows disinformation.”

Indeed, disinformation poses a clear threat — particularly when people desperately need information like health guidelines during a global pandemic. It can also stoke anger and spark violence, as we saw on Jan. 6.

It is disingenuous to suggest that all of Trump’s supporters advocate the violence that occurred in the Capitol attack. What is known, however, is the composition of the mobs that ran riot at the Capitol.

According to ABC News:

“Members of far-right groups, including the violent Proud Boys, joined the crowds that formed in Washington to cheer on President Donald Trump as he urged them to protest Congress’ counting of Electoral College votes confirming President-elect Joe Biden’s win. Then they headed to the Capitol. Members of smaller white supremacist and neo-Nazi groups also were spotted in the crowds. Police were photographed stopping a man identified as a leading promoter of the QAnon conspiracy theory from storming the Senate floor.”

White supremacy and neo-Nazi philosophies, of course, are forms of disinformation that have a negative impact on society because they a) promulgate a false narrative of inherent racial superiority to their believers, and b) cause varied and widespread harm to those they deem “inferior.” Conspiracy theories are also forms of disinformation, spreading contradictory and often nonsensical ideas.

Security officials and terrorism researchers warn that the embrace of conspiracy theories and disinformation causes a “mass radicalization,” which increases the potential for right-wing violence.

Back in December 2020, National Public Radio delivered this warning:

“At conferences, in op-eds and at agency meetings, domestic terrorism analysts are raising concern about the security implications of millions of conservatives buying into baseless right-wing claims. They say the line between mainstream and fringe is vanishing, with conspiracy-minded Republicans now marching alongside armed extremists at rallies across the country. Disparate factions on the right are coalescing into one side, analysts say, self-proclaimed ‘real Americans’ who are cocooned in their own news outlets, their own social media networks and, ultimately, their own ‘truth.’”

BAD ACTORS

The debate over free speech vs. hate speech has persisted … oh, pretty much since forever. Granted, the U.S. Supreme Court has never “created a category of speech that is defined by its hateful conduct, labeled it hate speech, and said that that is categorically excluded by the First Amendment.” Because of that, hate speech cannot be made illegal simply because of its hateful content. However, when you examine the context, then “speech with a hateful message may be punished, if in a particular context it directly causes certain specific, imminent, serious harm — such as a genuine threat that means to instill a reasonable fear on the part of the person at whom the threat is targeted that he or she is going to be subject to violence.”

That said, after the Capitol attack, social media platforms moved to further restrict hate speech, conspiracy theories, and other harmful disinformation. Granted, they had been attempting to do so for years, but critics said that the companies’ pattern of what they considered half-measures had helped cause the crisis.

“Blame for the violence (at Congress) will appropriately fall on Trump and his enablers on Capitol Hill and in right-wing media,” said Roger McNamee, an early advisor to Facebook founder Mark Zuckerberg. “But internet platforms — Facebook, Instagram, Google, YouTube, Twitter, and others — have played a central role.”

The Capitol attack had been organized on social media platforms for months. Red-State Secession, a Facebook group whose administrators called for a revolution on Jan. 6, was one venue. After BuzzFeed reporter Ryan Mac exposed the group, Facebook shut it down the same day as the attack. Without BuzzFeed’s alert, Facebook might still be booking revenue today from the ads served up to the group’s supporters. Any thoughtful observer would wonder why Facebook doesn’t spend more on self-policing. It’s worth noting that Facebook ended 3Q20 with nearly $56 billion of cash and cash equivalents on its books, over twice what it had before Trump took office. The company has benefited enormously from looking the other way.

McNamee warns that internet platforms “amplify hate speech, disinformation and conspiracy theories, while only selectively enforcing their terms of service.” It is an argument with which others agree.

Let’s face it: While we’d like to blame trolls for all of the disinformation free-flowing on social media, we can’t. To some degree, this is because the tech companies that run the social media platforms a) have a difficult time keeping up with the sheer amount of false information, and b) possibly have no real interest in reining in such information, as doing so might negatively impact their financial goals.

Admittedly, there is evidence to support both arguments. For example, in March 2020, Twitter made an effort to update its Developer Policy. It sought to, among other goals:

  • Take “a more proactive approach to improving the health of our developer platform by continuing to remove bad actors, which resulted in over 144,000 app suspensions during the last six months.”
  • Ask that “developers clearly indicate (in their account bio or profile) if they are operating a bot account, what the account is, and who the person behind it is, so it’s easier for everyone on Twitter to know what’s a bot – and what’s not.”

In the context of U.S. politics, critics blasted the effort as too little, too late, and demanded that the platform do more to remove disinformation from its content. One critic, CNN journalist Lisa Ling, attacked Twitter on Jan. 2, 2021, saying, “At least you’re trying to call out disinformation but so much damage has been done. TRY TO FIX IT! Our country has never been more divided and you have played a massive role in it.”

James Murdoch, the youngest son of Rupert Murdoch, recently continued that theme in a joint statement with his wife Kathryn:

“Spreading disinformation — whether about the election, public health or climate change — has real world consequences,” the two said. “Many media property owners have as much responsibility for this as the elected officials who know the truth but choose instead to propagate lies. We hope the awful scenes we have all been seeing will finally convince those enablers to repudiate the toxic politics they have promoted once and forever.”

Indeed, after the November election, Newsmax and elements of Fox News began to walk back their false “massive voter fraud” narrative, as the threat of legal liability became too great to ignore.

And in the aftermath of the Capitol insurrection, Twitter and Facebook moved more aggressively against disinformation, specifically against Trump. Twitter locked Trump’s account for 12 hours, noting that he had violated the platform’s standards against disinformation and the glorification of violence. The next day, Facebook suspended Trump’s account on its platform and on Instagram until after Biden’s inauguration.

“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” wrote Facebook chief executive Mark Zuckerberg. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”

After that, social media companies began banning Trump from their platforms, or restricting his use of them. In addition, they stepped up their battles against disinformation by targeting content that glorified violence, much of which involved Trump, QAnon adherents, or ideologues supporting neo-Nazi or White supremacist beliefs. A few examples:

Guy Rosen, vice president of integrity at Facebook, summarized measures, implemented or planned for Facebook and Instagram, designed to battle the spread of hate speech and incitements to violence. The measures included:

  • Taking “enforcement action consistent with our policy banning militarized social movements like the Oathkeepers and the violence-inducing conspiracy theory QAnon. We’ve also continued to enforce our ban on hate groups including the Proud Boys and many others. We’ve already removed over 600 militarized social movements from our platform.”
  • Boosting the “requirement of Group admins to review and approve posts” prior to publication
  • “Automatically disabling comments … (in groups with) a high rate of hate speech or content that incites violence”
  • Using artificial intelligence to identify and remove content that likely violates Facebook policies.
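
Facebook’s actual moderation systems are proprietary, but that last item describes a standard technique: a supervised text classifier. Here is a minimal sketch of the idea, with a tiny invented dataset standing in for the millions of labeled posts a real system would need:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: 1 = likely policy violation, 0 = acceptable.
posts = [
    "meet us there and burn it all down",
    "so proud of my city's peaceful march",
    "they deserve what's coming to them",
    "great turnout at the food drive today",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post and flag it for review above a chosen threshold.
# Real systems are vastly larger and still err in both directions.
score = model.predict_proba(["time to make them pay"])[0][1]
print("flag for human review" if score > 0.5 else "leave up")
```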

Again, critics say the moves are too little, too late.

“While I’m pleased to see social media platforms like Facebook, Twitter and YouTube take long-belated steps to address the President’s sustained misuse of their platforms to sow discord and violence, these isolated actions are both too late and not nearly enough,” said Sen. Mark R. Warner, D-Virginia. “Disinformation and extremism researchers have for years pointed to broader network-based exploitation of these platforms.”

A growing number of people on both sides of the political divide have called for more regulation of social media platforms. Trump and conservatives want more regulation because they say the platforms censor conservatives, even though ample evidence shows that conservative content consistently dominates engagement on those platforms, which makes their argument itself a form of disinformation. Democrats and liberals are also calling for change, mostly because of how much hate speech exists online that can be directly traced to conservatives.

“The social media sphere is, at its core, a connection and amplification machine, which can be used for both bad and good,” says Morten Bay, a research fellow at the University of Southern California Annenberg’s Center for the Digital Future. “… But unlike, say, the ‘public square’ that social media CEOs want their platforms to be, we have no established ethics for social media, and so neither platforms nor users know what can be considered good and right, except for obvious cases, like extremism and hate speech,” Bay noted. “If we did, most people would know how to handle trolls best, which is to simply ignore them.”

However, human nature makes it difficult to ignore trolls, as we’re compelled to respond to information that we either strongly believe in or seriously disagree with. Add in the fact that trolls and bots tend to reinforce the messaging of other trolls and bots, and you begin to see a feedback loop that can easily spread. As a result, online discourse can quickly be hijacked by disinformation specialists, whether they are human or not.

WHAT CAN BIDEN DO?

In December 2020, a group of Democratic lawmakers asked Biden to, after his inauguration, combat the “infodemic” of disinformation plaguing America:

“Understanding and addressing misinformation – and the wider phenomena of declining public trust in institutions, political polarization, networked social movements, and online information environments that create fertile grounds for the spread of falsehoods – is a critical part of our nation’s public health response.”

In a previous blog, we discussed what Biden, once inaugurated as president of the United States, might do to enhance our security and protect our privacy on the digital front. The purpose was to relay suggestions on approaches that could be used to deal with threats to our privacy and security in the form of cyberattacks, over-reaching retailers, and the abuse of authority when using biometric technologies such as facial recognition.  

However, the blog did not delve into the threat posed by disinformation. Let’s correct that now, and reflect upon the various actions the newly inaugurated president can take to help bring the Disinformation Age to an end.

REGULATION OF SOCIAL MEDIA COMPANIES

President Biden should consider several of the recommendations proposed by the Forum on Information and Democracy:

  • New transparency standards “should relate to all platforms’ core functions in the public information ecosystem: content moderation, content ranking, content targeting, and social influence building.”
  • “Sanctions for non-compliance could include large fines, mandatory publicity in the form of banners, liability of the CEO, and administrative sanctions such as closing access to a country’s market.”
  • “Online service providers should be required to better inform users regarding the origin of the messages they receive, especially by labelling those which have been forwarded.”

Getting a bit more in-depth, Biden should:

  • Set new legal guidelines establishing that “whoever finances dissemination of fake news, or orders it from an institution, (will be held legally responsible) for the disinformation,” and held accountable.
  • Draft new definitions of protected speech, designed to eliminate hate speech as a protected class of free speech. Biden can, perhaps, take cues from Germany’s laws, in which, as Wired describes, there are limitations to freedom of speech:

Germany passed laws prohibiting Volksverhetzung—“incitement to hatred”—in 1960, in response to the vandalism of a Cologne synagogue with black, symmetrical swastikas. The laws forbid Holocaust denial and eventually various forms of hate speech and instigation of violence, and they’re controversial chiefly outside Germany, in places like the US, which is subject to interpretive, precedent-based common law and, of course, a rousing if imprecise fantasy of “free speech.”

  • Establish new rules, perhaps in the form of additions to the Communications Decency Act of 1996, defining acceptable content and setting penalties for violations of those definitions. (In 2017, Germany passed its Network Enforcement Act, a law requiring internet companies to remove “obviously illegal” content within 24 hours of being notified about it, and other illegal content within a week. It should be noted that, unlike the United States, Germany has long had some of the world’s toughest laws involving hate speech; denying the Holocaust or inciting hatred against minorities, for example, can result in federal criminal charges. Companies can be fined up to $57 million for content that is not deleted from the platform.)
  • Make it illegal to profit from disinformation. “Clickbait” in general is designed to generate profit from brand-building and/or ad revenues. Profiting from disinformation should result in legal action and financial penalties against the executives running the companies that violate the regulation.
  • Require social media platforms to police the accuracy of their content, and hold them legally liable for any disinformation published on their platform. This would require changes to the Communications Decency Act, specifically, Section 230.
  • Consider the recommendation of the News Media Alliance, which sent Biden’s staffers suggestions on how to “work with Congress on a comprehensive revision” of Section 230 in order to remove legal immunity for platforms that “continuously amplify – and profit from – false and overtly dangerous content.” This would be a punitive measure that would affect only those platforms that refuse to alter their format.
  • Demand “real name” requirements for social media platforms, in which accounts can only be opened with a photocopy of a government-issued ID card. Admittedly, there would be a loss of privacy here, but it would lead to a decrease in the “mob” mentality we so frequently see online, and an increase in the accountability of account users for their content. For verification, require the platform to confirm the account applicant’s information with two-factor verification: one code via text or email, and the other via snail mail, such as a code sent in a letter. (Corporate accounts would likewise have to have verifiable people behind them. A minimal sketch of this two-factor flow follows this list.)
  • Require social media platforms to pay for the news content they display that was created by outside journalism outlets, with the payment going to the originating news outlet. Australia took the lead on this type of legislation back in December 2020.
  • Require social media platforms to ban the use of bot-controlled accounts, and require big tech to use its deep pockets to scrutinize accounts more closely in order to detect and delete such accounts.
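
On the real-name bullet above, the verification mechanics themselves are straightforward. Here is a minimal Python sketch of the proposed two-channel flow; the account activates only when the instantly delivered code and the mailed code are both entered correctly (actual delivery of the text, email, or letter is out of scope):

```python
import secrets

def issue_codes() -> dict:
    """Generate two independent one-time codes for a new account:
    one sent instantly (text or email), one mailed in a letter."""
    return {
        "instant": f"{secrets.randbelow(10**6):06d}",
        "mailed": f"{secrets.randbelow(10**6):06d}",
    }

def verify(issued: dict, instant_entry: str, mailed_entry: str) -> bool:
    """Activate the account only if BOTH codes match (constant-time compare)."""
    return (secrets.compare_digest(issued["instant"], instant_entry)
            and secrets.compare_digest(issued["mailed"], mailed_entry))

codes = issue_codes()
print(verify(codes, codes["instant"], codes["mailed"]))  # True
```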

NEW EDUCATION GUIDELINES

  • The U.S. Department of Education should be ordered to develop better training for students in the areas of critical thinking and news literacy. These guidelines should then be disseminated to states for consideration in elementary education.
  • The Education Department should launch grants “to support partnerships between journalists, businesses, educational institutions, and nonprofit organizations to encourage news literacy.”

NEW REGULATION OF JOURNALISM OUTLETS

  • Expand the reach of the Federal Communications Commission to include newspapers, as well as cable and online news outlets. Currently, the FCC covers only radio and broadcast television.
  • Reestablish the Fairness doctrine, a communications policy established in 1949 by the FCC. The rule, which applied to licensed radio and television broadcasters, required them to present “fair and balanced coverage of controversial issues of interest to their communities, including by devoting equal airtime to opposing points of view.” The FCC repealed the guidance in 1987. Biden should also update the FCC guidance to include cable news channels and online news outlets. He should then push for the Fairness doctrine to be made into law, and ensure that adherence to the law is a vital part of licensing for broadcast, cable, and online journalism outlets.
  • Set new legal guidance, based on the Fairness doctrine, defining what constitutes factual, objective news, as opposed to the “slanted” takes we see so often on news platforms such as Huffington Post, MSNBC, Fox News, Breitbart, One America News Network, and Newsmax. Hold news outlets accountable for broadcasting disinformation.
  • Establish concrete definitions over what constitutes a news outlet, as opposed to a venue for entertainment. Ban disinformation from being disseminated by news outlets.

CONCLUSION

These are but a few of the approaches that President Biden might take to end the Disinformation Age. He’ll need to make changes to education, as well as to the laws and regulations governing education and social media platforms. Of course, some of the above recommendations will likely be seen as controversial. Some need to be fleshed out within legislative and regulatory bodies. And, of course, there will be those who inevitably argue that fighting disinformation is a violation of the freedom of speech.

What good is this freedom, though, when it is being abused to spread disinformation? The freedom of speech already has one intelligent exception: the classic “shouting fire in a crowded theater.” If we can make that exception, which is aimed at preventing harm, then we should do the same with disinformation. After all, disinformation is all about taking advantage of others, which inevitably leads to harm. No one should have the right to cause harm in the name of politics, or some insane idea of racial superiority, or because of belief in some fantastical conspiracy myth.

Disinformation does not benefit society. It tears it apart. If the “United” – currently divided – States of America is to continue as a coherent nation, we would do well to remember that.

Abraham Lincoln, after accepting the Illinois Republican Party’s nomination for U.S. Senate in 1858, spoke on the “agitation” caused by differing opinions on slavery. Although the White supremacy component of that agitation is smaller today than it was in Lincoln’s day, I believe parts of the speech still hold, particularly when applied to the “agitation” of disinformation:

If we could first know where we are, and whither we are tending, we could better judge what to do, and how to do it.

We are now far into the fifth year, since a policy was initiated, with the avowed object, and confident promise, of putting an end to slavery agitation.

Under the operation of that policy, that agitation has not only, not ceased, but has constantly augmented.

In my opinion, it will not cease, until a crisis shall have been reached, and passed –

A house divided against itself cannot stand.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.  

 

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Image credits: (1) iStock, by Getty Images (cover); (2) Gayatri Malhotra (Biden flag); (3) Charles Deluvio (troll doll); (4) Joshua Hoehne (smartphone close-up); (5) Joshua Bedford (Abraham Lincoln statue).
