
Want to fight online voting misinformation? A new study makes a case for targeting Trump tweets


Research suggests disinformation starts at the top


President Trump holds a press briefing at the White House. Photo by Alex Wong / Getty Images

As the 2020 presidential election approaches, social networks have promised to minimize false rumors about voter fraud or “rigged” mail-in ballots, a mostly imaginary threat that discourages voting and casts doubt on the democratic process. But new research has suggested that these rumors aren’t born in the dark corners of Facebook or Twitter — and that fighting them effectively might involve going after one of social media’s most powerful users.

Last week, Harvard’s Berkman Klein Center put forward an illuminating analysis of voting misinformation. A working paper posits that social media isn’t driving most disinformation around mail-in voting. Instead, Twitter and Facebook amplify content from “political and media elites.” That includes traditional news outlets, particularly wire services like the Associated Press, but also Trump’s tweets — which the paper cites as a key disinformation source.

The center published the methodology and a full explanation on its site, and co-author Yochai Benkler also wrote a clearer, more succinct breakdown at Columbia Journalism Review. The authors measured the volume of tweets, Facebook posts, and “open web” stories mentioning mail-in voting or absentee ballots alongside terms like fraud and election rigging. Then, they looked at the top-performing posts and their sources.
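To make that measurement concrete, here is a minimal, hypothetical sketch in Python. It is not the authors’ actual code or data: it simply flags posts that pair a mail-in or absentee term with a fraud or rigging term, tallies them by day, and marks unusually busy days as candidate spikes to trace back to a triggering event. The sample posts, term lists, and threshold are all invented for illustration.

    from collections import Counter
    from datetime import date

    # Hypothetical sample posts as (date, text) pairs; the study's real corpus
    # spans tweets, Facebook posts, and open-web news stories.
    posts = [
        (date(2020, 5, 26), "Mail-in ballots will be substantially fraudulent, the president claims"),
        (date(2020, 5, 26), "There is zero evidence that mail-in voting leads to widespread fraud"),
        (date(2020, 5, 26), "Absentee voting has been used safely for decades"),
        (date(2020, 8, 31), "He warned of the most fraudulent election in history, citing mail-in ballots"),
    ]

    VOTING_TERMS = ("mail-in", "mail in", "absentee")
    FRAUD_TERMS = ("fraud", "rigged", "rigging")

    def co_mentions(text: str) -> bool:
        """True if a post pairs a mail-in/absentee term with a fraud/rigging term."""
        lower = text.lower()
        return any(t in lower for t in VOTING_TERMS) and any(t in lower for t in FRAUD_TERMS)

    # Daily volume of posts that co-mention voting and fraud terms.
    daily_volume = Counter(day for day, text in posts if co_mentions(text))

    # Flag days that exceed a (made-up) threshold as candidate spikes worth
    # tracing back to a triggering event: a tweet, briefing, or interview.
    THRESHOLD = 1
    spikes = {day: count for day, count in daily_volume.items() if count > THRESHOLD}

    print(sorted(daily_volume.items()))  # daily co-mention counts
    print(spikes)                        # days whose volume exceeds the threshold

In this toy example, the May 26th cluster would surface as a spike; the study’s next step, comparing those dates against statements from politicians and major outlets, is where the “elite-driven” finding comes from.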

The authors overwhelmingly found that spikes in social media activity echoed politicians or news outlets discussing voter fraud. Some spikes involved actual (rare) cases of suspected or attempted fraud. But “the most common by far,” Benkler writes, “was a statement Donald Trump made in one of his three main channels: Twitter, press briefings, and television interviews.”

In other words, during periods when lots of people were tweeting or posting on Facebook about the unfounded threat of mass mail-in voting fraud, they were most often repeating or recirculating claims from the president himself. The authors themselves aren’t directly calling for Trump’s content to be pulled from Twitter, and as noted above, Twitter isn’t the only way he communicates. But they offer ample evidence that his tweets — and the resulting press coverage — provide major fuel for misinformation.

Trump’s tweets, press conferences, and interviews were “by far” the most common misinformation driver

One of the highest peaks on all three platforms came in late May — just after Trump tweeted that there is “zero” chance mail-in ballots will be “anything less than substantially fraudulent.” Another appeared at the end of August, when Trump warned that 2020 would be “the most inaccurate and fraudulent election in history.” (It should go without saying that there’s no evidence for either claim.) The biggest Twitter-specific spike arrived amid a flurry of Trump tweets, press briefings, and Fox News segments in April.

“We have been unable to identify a single episode” of major election fraud posting that was “meaningfully driven by an online disinformation campaign” without an “obvious elite-driven triggering event,” the authors write. And often, those triggering events were clear disinformation — baseless claims that mail-in voting was dangerous.

As the authors note, patterns in voter fraud coverage don’t necessarily generalize to other topics. QAnon conspiracy theories, for instance, were clearly generated online and only later embraced by politicians like Trump. Some coronavirus misinformation has come from non-mainstream conspiracy videos like Plandemic, although Trump played a key role in promoting experimental hydroxychloroquine treatments as a “miracle” cure, as well as spreading more general COVID-19 misinformation.

The study is a working paper, not a peer-reviewed publication — although Stanford Internet Observatory researcher Alex Stamos tweeted that it “looks consistent” with other work on election disinformation. It also doesn’t necessarily exonerate social media as a concept. Twitter’s design, for instance, encourages the kind of blunt, off-the-cuff statements that Trump has turned into misinformation super-spreader events. He could still use press conferences and interviews to set the tone of debate, but without Twitter, he wouldn’t have access to a powerful amplification system that encourages his worst impulses.

“Social media clickbait is alive and well.”

Similarly, the authors acknowledge that hyperbolic, misleading online news can spread widely across social networks. “Looking at the stories that were linked to by the largest number of Facebook groups over the course of April 2020 certainly supports the proposition that social media clickbait is alive and well on the platform,” the study says. But they argue that these “clickbait” outlets are echoing stories set by more powerful politicians and news outlets — not driving American politics with “crazy stories invented by alt-right trolls, Macedonian teenagers, or any other nethercyberworld dwellers.” Far from being filled with specific “fake news” stories, Trump’s tweets (and the equivalent messages he posts on Facebook) often don’t even mention specific incidents of fraud, real or imagined.

Even with its caveats, the work indicates that it’s valuable to look beyond the threats of social media trolling campaigns and recommendation algorithms — if only because that offers more concrete solutions than demanding nebulous and potentially impossible crackdowns on all false information. Facebook and Twitter periodically tout the removal of foreign “coordinated inauthentic behavior” networks, and in the lead-up to the presidential election, Facebook announced that it would temporarily stop accepting political ads on its network. But while these broad-reaching efforts may end up being helpful, the Harvard study implies that pulling a few specific levers might be more immediately effective.

If this research is accurate, a primary lever would be limiting the president’s ability to spread misinformation. “Donald Trump occupies a unique position in driving the media agenda,” the authors contend, and his appearances on new and old media alike have “fundamentally shaped the debate over mail-in voting.”

Twitter has taken steps toward fighting this, restricting the ability to like or retweet some of Trump’s misleading claims. Facebook’s response has been much weaker, simply adding a generic link to its Voting Information Center. But this research makes an indirect case for treating Trump as a deliberate serial purveyor of disinformation — an offense that would get many lower-profile accounts banned.

“Donald Trump occupies a unique position in driving the media agenda.”

Other solutions are outside the scope of social media. The authors write, for instance, that smaller newspapers and TV stations rely on syndicated newswire services, and that Americans tend to trust these sources more than national news outlets. The AP and similar wire services are centralized institutions run by traditional journalists. And the authors were less than impressed by how those outlets framed mail-in voting stories, criticizing them for creating a sense of false balance or a “political horse race” instead of pointing out false claims.

This isn’t a new criticism, nor one that’s restricted to voting. This spring, some TV networks stopped airing Trump’s rambling and misinformation-filled briefings on the coronavirus pandemic. But Harvard’s research methodically examines just how influential the president’s messaging is online.

Even if Trump loses the election in November, there’s a valuable lesson here for news outlets and social media sites. If a public figure establishes a clear pattern of bad behavior, refusing to let them spread false statements might be just as effective as looking for underhanded disinformation campaigns. On social media, the worst trolls aren’t legions of conspiracy theorists or Russian operatives hiding in the dark corners of the web — they’re politicians standing in plain sight.

Moderation at scale is an enormously difficult technological problem. But this study suggests platforms could also just straightforwardly ban (or otherwise limit) powerful super-spreaders, especially if traditional media outlets also reevaluate what they’re amplifying. If more research backs up this idea, then the most immediate disinformation fix isn’t urging platforms to develop sophisticated moderation systems. It’s pushing them to apply simple rules to powerful people.