As the United States gets ready for the 2020 presidential election, there is reason to think that this time, the country might be spared the massive interference campaign that Russia carried out in 2016. Back then, Moscow had a clear opportunity. The cost of running the Internet Research Agency (IRA), the St. Petersburg–based troll farm set up by the Kremlin to spread disinformation during the U.S. election, was about $1.25 million a month. That was a small price to pay for a remarkable foreign policy coup: a seemingly pro-Russian U.S. president in Donald Trump, a humiliating defeat for Hillary Clinton (whom Russian President Vladimir Putin had long disliked), and, above all, a chance to expose U.S. democracy as dysfunctional. Unprepared and seemingly unaware of the planned Russian operation, the United States was low-hanging fruit.

Four years on, Moscow’s calculus is less straightforward. The pandemic and the ensuing crash in oil prices hit the country hard, and Putin’s approval ratings have taken a nosedive. In the past, the Russian president has used foreign policy wins, such as the 2014 annexation of Crimea and Russia’s years-long intervention in Syria, to maintain his support at home. The unspoken contract behind this strategy—that making Russia great again on the world stage was worth some economic sacrifices by its citizens—had grown fragile even before the pandemic. Now, with the Russian economy on a path to long-term stagnation, most Russians want their government to focus on problems at home. Selling them another foreign policy adventure will be a tall order.

On top of these domestic concerns, the Kremlin would need to work harder in order to manipulate U.S. voters and cover its tracks this time around. A growing cottage industry of analysts now monitors Russia’s disinformation operations across the world. Social media companies have become more aggressive in taking down networks of inauthentic accounts and bots, and they are more willing to point the finger at Moscow and other governments. And the investigation by the U.S. special counsel Robert Mueller revealed the Kremlin’s operational tactics in impressive detail, naming both IRA employees and operatives of the GRU, Russia’s military intelligence agency, which carried out cyberattacks against the Democratic National Committee and the Clinton campaign.

Yet it’s equally plausible that Russia might try again. As Putin positions himself to be Russia’s leader for life, undermining faith in democracy writ large is still very much in the Kremlin’s interest. Most of Russia’s interference in 2016 aimed to amplify divisions around hot-button social issues such as race, immigration, and religion. These divisions have only deepened in the coronavirus era, providing even more opportunities to sow chaos. A more divided United States means a more inward-looking White House, one less concerned with pushing back against Russia’s activities in Syria, Ukraine, and elsewhere. And if the Kremlin once feared the potential consequences of exposure, the United States’ mild response after 2016 put those fears to rest. Although it laid bare the extent of Russia’s meddling, the special counsel’s investigation resulted in indictments of just 25 Russian nationals: 13 people tied to the IRA and 12 GRU officers. The U.S. Congress imposed additional targeted sanctions on individual Russian officials and entities but shied away from more aggressive measures, such as instituting broad sanctions on Russian business sectors or restricting Russian financial institutions’ access to the SWIFT international banking payment system. All the while, Trump, who considers any mention of Russian meddling an attack on his own legitimacy, repeatedly sided with Putin’s denials over the conclusions of his own intelligence community.

The Russian government came away emboldened, judging from its daring covert actions in the years since. In 2018, the GRU poisoned and nearly killed the former double agent Sergei Skripal in the United Kingdom, and earlier this year, it was reported that in 2019 Russia had orchestrated a scheme to pay Taliban fighters bounties for attacks on U.S. troops in Afghanistan. At the same time, Russia’s disinformation peddlers have refined their tactics, with Kremlin-linked social media accounts spreading falsehoods on a range of topics, from the Skripal attack to the Catalan independence movement to the pandemic.

The U.S. government, meanwhile, has responded tepidly to Russian meddling and is now consumed by the pandemic. Russia and others know they are pushing on an open door. With new players in the disinformation game, in all likelihood, 2020 will not be a replay of 2016. It will be far worse.

A TSUNAMI OF FALSEHOODS

A big part of the risk is that Russia is no longer the sole danger. The lack of serious retaliation or long-lasting consequences for its behavior has effectively left the door open for others to follow Russia’s lead. To these newcomers, the Kremlin’s 2016 operation against the United States offers a handy step-by-step guide.

Step one is to build an audience. As early as 2014, the IRA had set up fake social media accounts purportedly belonging to ordinary Americans. Using those accounts, it created online content that was not necessarily divisive or even political but simply designed to attract attention. One IRA Instagram account, @army_of_jesus, initially posted image stills from The Muppet Show and The Simpsons. Between 2015 and 2017, the IRA also purchased a total of over 3,500 online ads for approximately $100,000 to promote its pages.

Step two is to flip the switch. Once an IRA-run account gained some following, it suddenly began publishing increasingly divisive content on race, immigration, and religion. One prominent account was the anti-immigrant Facebook group Secured Borders; another was a pro–Black Lives Matter pair of Facebook and Twitter accounts called “Blacktivist.” The most popular IRA-controlled group, United Muslims of America, had over 300,000 followers on Facebook by mid-2017, when Facebook deactivated the account. Many of the accounts began publishing anti-Clinton content in 2015, adding pro-Trump messaging to the mix the following year.

Step three is to make it real. In time, the IRA’s fake accounts sent private messages to their real-life followers, urging Americans to organize rallies that would sometimes pit opposing groups against each other. According to the special counsel’s investigation, the IRA Instagram account Stand for Freedom tried to organize a pro-Confederate rally in Houston as early as 2015. The next year, another IRA-organized rally in Houston, against the “Islamization” of Texas, pitted protesters and counterprotesters against each other outside the Islamic Dawah Center. In all, the special counsel’s investigation identified dozens of IRA-organized rallies in the United States.

The IRA reached a staggering number of people—126 million through Facebook alone, according to the company, and 1.4 million through Twitter. The GRU’s publication of thousands of stolen Clinton campaign emails dominated news headlines for months, tarnishing the image of the Democratic Party and the Clinton campaign. Such success in reaching large numbers of Americans at a relatively low cost did not go unnoticed, especially by authoritarian regimes. The Iranian government, for example, has stepped up its disinformation operations over the last two years, using methods that are often reminiscent of the IRA’s. In 2018, Facebook removed accounts, pages, and groups associated with two disinformation campaigns (or “coordinated inauthentic behavior,” in the company’s language) originating in Iran. One of the campaigns targeted users in the United Kingdom, the United States, Latin America, and the Middle East. It copied the IRA’s focus on divisive social issues, especially race, promoting memes in support of the former NFL player and social justice activist Colin Kaepernick and cartoons criticizing the then U.S. Supreme Court nominee Brett Kavanaugh. Another Iranian campaign, in January 2019, focused on the Israeli-Palestinian conflict and the wars in Syria and Yemen and targeted Facebook and Twitter users in dozens of countries, including France, Germany, India, and the United States. At least one of the Iranian-controlled Facebook pages involved had amassed some two million followers. Earlier this year, Facebook removed another set of accounts linked to Iran that it suspected of targeting the United States ahead of the presidential election.

An anti-Trump protest allegedly organized by the IRA, New York, November 2016 (Bria Webb / Reuters)

A host of other countries, including Bangladesh, Egypt, Honduras, Indonesia, Iran, North Korea, Saudi Arabia, Serbia, and Venezuela, have also run afoul of Facebook’s and Twitter’s rules against disinformation campaigns. But perhaps the most important new player is China. Until recently, Beijing mostly limited its propaganda efforts to its own neighborhood: at the height of the Hong Kong protests in the summer of 2019, Facebook and Twitter for the first time removed accounts and pages linked to the Chinese government; these had been spreading false information about the protests and questioning their legitimacy. In its attempts to change the narrative on how it handled its COVID-19 outbreak, however, Beijing has grown more ambitious: at the peak of the pandemic in Europe this past spring, China unleashed a series of disinformation attacks on several European states, spreading false information about the origins of the virus and the effectiveness of democracies’ responses to the crisis. This prompted the EU to take the unprecedented step of directly and publicly rebuking Beijing in June of this year.

Future elections in the United States and other democracies will face an onslaught of disinformation and conspiracy theories emanating not just from Russia but also from China, Iran, Venezuela, and beyond. The attacks will come through a number of channels: traditional state-sponsored media, fly-by-night digital outlets, and fake social media accounts and pages. They will deploy artificial intelligence to produce realistic deepfakes—synthetic audio and video that cannot be easily identified as fake. They will be coordinated across major social media platforms, including Facebook, Instagram, Twitter, and YouTube, but also across smaller platforms, such as Medium, Pinterest, and Reddit, which are less equipped to defend themselves. New Chinese social media platforms, such as the fast-growing video-sharing app TikTok, will be unlikely to bow to U.S. political pressure to expose disinformation campaigns, especially those carried out by Beijing. Russia’s “firehose of falsehood,” as researchers at the RAND Corporation have called it, will turn into a worldwide tsunami.

The Russian playbook has been copied by others, but it has also evolved, in large part thanks to Moscow’s own innovations. After social media companies got better at verifying accounts, for instance, Russia began looking for ways to roll out its campaigns without relying on fake online profiles. In the run-up to the 2019 presidential election in Ukraine—long a testing ground for Moscow’s new forms of political warfare—Russian agents tried their hand at account “rentals.” At least one apprehended agent confessed to trying to pay unsuspecting Ukrainians to temporarily hand over some control of their Facebook accounts. The agent planned to use these authentic accounts to promote misleading content and buy political ads.

Moscow has tested similar methods elsewhere. In the lead-up to the 2018 presidential election in Madagascar, Russian agents established a print newspaper and hired students to write positive articles about the incumbent president. The agents also bought billboards and television ads, paid protesters to attend rallies, and then paid journalists to write about them. In the fall of 2019, a massive disinformation campaign linked to Yevgeny Prigozhin, the Russian businessman and Putin confidant who allegedly set up the IRA, brought the new rental strategy to several other African countries, including Cameroon, the Central African Republic, Côte d’Ivoire, the Democratic Republic of the Congo, Libya, Mozambique, and Sudan. In each case, Russian operatives worked with locals in order to hide the true origins of the campaign, disguising a foreign influence operation as the voices of domestic actors.

Setting up shell media and social media entities, as Russia did in Africa, is more scalable than co-opting individual social media accounts, allowing Russia to reach a larger audience. Most important, however, it lets Russia eliminate the telltale sign of foreign interference: foreign-based accounts whose locations give away their operators. In just four years, the once clear line between domestic and foreign disinformation has all but disappeared.

Americans could also be induced to rent out their social media accounts—or, in a twisted version of the gig economy, persuaded to run disinformation campaigns themselves. U.S. citizens could even become unwitting pawns in such an effort, since Russian agents could easily set up seemingly legitimate shell companies and pay in U.S. dollars. They could also reach out to their targets through encrypted messaging platforms such as WhatsApp (as they did in Africa), adding another layer of secrecy. And because falsehoods pushed by foreigners would look like genuine domestic speech protected by the First Amendment, cracking down on them would be far trickier. A barrage of attacks, combined with increasingly sophisticated methods of avoiding detection, could leave governments, social media companies, and researchers scrambling to catch up.

BRACE FOR IMPACT

The United States is woefully underprepared for such a scenario, having done little to deter new attacks. Since 2016, the U.S. Congress has passed no major legislation targeting disinformation peddlers beyond the limited sanctions on individual Russian officials and entities, nor has it mandated that social media companies take action. In fact, it is unclear who in the U.S. government even owns the problem. The Global Engagement Center is tasked with countering state-sponsored disinformation, but as part of the State Department, it has no mandate to act inside the United States. A group of government agencies has published guidance on how the federal government should alert the American public to foreign interference, but it is weak on specifics. The Cybersecurity and Infrastructure Security Agency produced an entertaining leaflet showing how easy it is to polarize an online community on seemingly benign issues, such as putting pineapple on a pizza. That agency’s parent organization, the Department of Homeland Security, has worked to secure the physical machinery of elections, updating and replacing electronic voting machines and strengthening security around the storage of voter data. And it has tried to improve information sharing among federal, state, and local election authorities. Those are important measures for defending against an election hack, but they are useless against foreign disinformation operations. And Trump’s tendency to blur the facts and undermine U.S. intelligence agencies has only worsened Americans’ confusion about the nature of the 2016 Russian attack, which in turn leaves them vulnerable to future operations aimed at undermining trust in the democratic process.

Social media companies, for their part, have their own patchwork of responses and policies. Whereas Twitter has banned all political advertising (and even restricted the visibility of some of Trump’s tweets for violating its policy against abusive behavior), Facebook has said it will allow political ads regardless of their veracity. Concerned with user privacy, social media companies have also been reluctant to share data with outsiders, which makes it difficult for governments and independent groups to inform the public about the scope of the threat. In the United States, the First Amendment’s far-reaching protections for free speech add another layer of complexity as companies attempt to navigate the gray areas of content moderation.

A bevy of research groups, consultancies, and nonprofits have emerged to expose disinformation campaigns, advise political campaigns about them, and develop potential tools for responding to future threats such as deepfakes. But exposure in itself is not enough to deter adversaries or even to keep up with the rapid evolution of their tactics. Sometimes, detailing the methods of a disinformation campaign merely hands others a blueprint to follow. The same can happen when Russia watchers explain their methods for detecting disinformation operations: once those methods are out in the open, Russia and others will seek to circumvent them. And so companies, researchers, and governments are stuck playing a game of whack-a-mole, shutting down disinformation campaigns as they arise without any proactive strategy to prevent them in the first place.

It is late, but not too late, to shore up U.S. defenses in time for the November election. The focus should be Russia, given its status as the main originator and innovator of disinformation operations. Fortunately for Washington, the Kremlin tends to make carefully calculated decisions. Putin has shown himself willing to take risks in his foreign policy, but there is a limit to the costs he will incur. Washington’s task is therefore to increase the pain Moscow will feel if it engages in further disinformation campaigns. Doing so would in turn send a clear message to other states looking to mimic Russia.

As a first step, the U.S. government should add individuals and state-linked entities that engage in disinformation campaigns to its sanctions list. Existing executive orders and the Countering America’s Adversaries Through Sanctions Act, passed by Congress in 2017, give the government the authority to be far more aggressive on this front. Changing states’ behavior through sanctions, as the United States aimed to do with the now defunct Iran nuclear deal, requires an expansive sanctions regime that ties good behavior to sanctions relief. That effort has been lacking in the case of Russia. A more assertive sanctions policy, which would likely require new legislation, could sanction the entire Russian cyberwarfare apparatus—government agencies, specific technology companies, and cybercriminals.

Second, the State Department and the U.S. Agency for International Development should expand funding for independent research groups and investigative journalists working on exposing Russian-linked corruption across the world. The 2016 Panama Papers investigation by the International Consortium of Investigative Journalists revealed rampant corruption in Putin’s inner circle. Little is known about how such corruption helps finance state-sponsored disinformation campaigns, but the funds devoted to setting up the IRA almost certainly came from illicit sources. Identifying Russia’s complex web of illicit finance is critical to cutting the lifeline of such operations. Once companies, individuals, and other entities are identified as being involved in illicit financing schemes in support of disinformation campaigns and cyber-operations, they should be sanctioned. But such investigative work is expensive and sometimes dangerous. In 2018, for example, three Russian journalists were killed in the Central African Republic while investigating the activities of the Wagner Group, a Prigozhin-controlled private military organization linked to Russia’s 2019 disinformation campaigns in Africa.

Perhaps most important, the U.S. government must do much more to explain to its citizens what state-sponsored disinformation is and why they should care. Ahead of national elections in 2018, the Swedish government went so far as to mail every household in the country an explanatory leaflet detailing what disinformation is, how to identify it, and what to do about it. Other European governments, such as the United Kingdom during the Skripal scandal, have developed strategic communications campaigns to counter false narratives. The European Union, through its foreign affairs arm, has set up a rapid-response mechanism for member states to share information about foreign disinformation campaigns. Washington could learn from the experiences of its partners. With a president who still questions the overwhelming evidence of Russian interference four years ago, this will be a hard task for the U.S. government to take on, if it is possible at all. Unless Washington acts now, however, Americans may soon look back at the 2020 election with the same shock and incredulity that they felt in 2016. This time, they will have only themselves to blame.

ALINA POLYAKOVA is President and CEO of the Center for European Policy Analysis and Adjunct Professor of European Studies at the Johns Hopkins University School of Advanced International Studies. This article is part of a project of the Library of Congress’s John W. Kluge Center, supported by the Carnegie Corporation of New York.