A report prepared for the Senate that provides the most sweeping analysis yet of Russia’s disinformation campaign around the 2016 election found the operation used every major social media platform to deliver words, images and videos tailored to voters’ interests to help elect President Trump — and worked even harder to support him while in office.
The report, a draft of which was obtained by The Washington Post, is the first to study the millions of posts provided by major technology firms to the Senate Intelligence Committee, led by Sen. Richard Burr (R-N.C.), its chairman, and Sen. Mark Warner (D-Va.), its ranking Democrat. The bipartisan panel hasn’t said whether it endorses the findings. It plans to release the report publicly, along with another study, later this week.
The research — by Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm — offers new details of how Russians working at the Internet Research Agency, which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign, sliced Americans into key interest groups for targeted messaging. These efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions, the report found.
The data sets used by the researchers were provided by Facebook, Twitter and Google and covered several years up to mid-2017, when the social media companies cracked down on the known Russian accounts. The report, which also analyzed data separately provided to House Intelligence Committee members, contains no information on more recent political moments, such as November’s midterm elections.
“What is clear is that all of the messaging clearly sought to benefit the Republican Party — and specifically Donald Trump,” the report says. “Trump is mentioned most in campaigns targeting conservatives and right-wing voters, where the messaging encouraged these groups to support his campaign. The main groups that could challenge Trump were then provided messaging that sought to confuse, distract and ultimately discourage members from voting.”
Representatives for Burr and Warner declined to comment.
The report offers the latest evidence that Russian agents sought to help Trump win the White House. Democrats and Republicans on the panel previously studied the U.S. intelligence community’s 2017 finding that Moscow aimed to assist Trump, and in July, they said investigators had come to the correct conclusion. Despite their work, some Republicans on Capitol Hill continue to doubt the nature of Russia’s interference in the last presidential election.
The Russians aimed particular energy at activating conservatives on issues such as gun rights and immigration, while sapping the political clout of left-leaning African American voters by undermining their faith in elections and spreading misleading information about how to vote. Many other groups — Latinos, Muslims, Christians, gay men and women, liberals, Southerners, veterans — got at least some attention from Russians operating thousands of social media accounts.
The report also offered some of the first detailed analyses of the role played by YouTube, a subsidiary of Google, and Instagram, owned by Facebook, in the Russian campaign, as well as anecdotes about how Russians used other social media platforms — Google+, Tumblr and Pinterest — that have received relatively little scrutiny. The Russian effort also used email accounts from Yahoo, Microsoft’s Hotmail service and Google’s Gmail.
The authors, while reliant on data provided by technology companies, also highlighted the companies’ “belated and uncoordinated response” to the disinformation campaign and, once it was discovered, their failure to share more with investigators. The authors urged the companies to provide data to investigators in “meaningful and constructive” ways in the future.
Facebook, for example, provided the Senate with copies of posts from 81 Facebook pages and information on 76 accounts used to purchase ads, but it did not share posts from other user accounts run by the IRA, the report says. Twitter, meanwhile, has made it challenging for outside researchers to collect and analyze data on its platform through its public feed, the researchers said.
Google submitted information in an especially difficult way for the researchers to handle, providing content such as YouTube videos but not the related data that would have allowed a full analysis. The YouTube information was so hard for the researchers to study, they wrote, that they instead tracked the links to its videos from other sites in hopes of better understanding YouTube’s role in the Russian effort.
Facebook and Google did not immediately respond to requests for comment.
In a statement, Twitter stressed it had made “significant strides” since the 2016 election to harden its digital defenses, including the release of a repository of the tweets that Russian agents previously sent so that researchers can review them. “Our singular focus is to improve the health of the public conversation on our platform, and protecting the integrity of elections is an important aspect of that mission,” the company added.
Facebook, Google and Twitter first disclosed last year that they had identified Russian interference on their sites. Critics have said the companies took too long to come to grips with the disinformation campaign and that Russian strategies have likely shifted since then. The companies have awakened to the threat — Facebook, in particular, created a “war room” this fall to combat interference around elections — but none has revealed interference around the midterm elections last month on the scale of what happened in 2016.
The report expressed concern about the overall threat social media poses to political discourse within nations and among them, warning that companies once viewed as tools for liberation in the Arab world and elsewhere are now threats to democracy.
“Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement to being a computational tool for social control, manipulated by canny political consultants and available to politicians in democracies and dictatorships alike,” the report said.
Researchers also noted that the data includes evidence of sloppiness by the Russians that could have led to earlier detection, including the use of Russia’s currency, the ruble, to buy ads and Russian phone numbers for contact information. The operatives also left behind technical signatures in computerized logs, such as Internet addresses in St. Petersburg, where the IRA was based.
Many of the findings track, in general terms, work by other researchers and testimony previously provided by the companies to lawmakers investigating the Russian effort. But the fuller data available to the researchers offered new insights on many aspects of the Russian campaign.
The report traces the origins of Russian online influence operations to Russian domestic politics in 2009 and says that ambitions shifted to include U.S. politics as early as 2013 on Twitter. Of the tweets the company provided to the Senate, 57 percent are in Russian, 36 percent in English and smaller amounts in other languages.
The efforts to manipulate Americans grew sharply in 2014 and every year after, as teams of operatives spread their work across more platforms and accounts to target larger swaths of U.S. voters by geography, political interests, race, religion and other factors. The Russians started with accounts on Twitter, then added YouTube and Instagram before bringing Facebook into the mix, the report said.
Facebook was particularly effective at targeting conservatives and African Americans, the report found. More than 99 percent of all engagement — meaning likes, shares and other reactions — came from 20 Facebook pages controlled by the IRA, including “Being Patriotic,” “Heart of Texas,” “Blacktivist” and “Army of Jesus.”
Together, the 20 most popular pages generated 39 million likes, 31 million shares, 5.4 million reactions and 3.4 million comments. Company officials told Congress that the Russian campaign reached 126 million people on Facebook and 20 million more on Instagram.
The Russians operated 133 accounts on Instagram, a photo-sharing subsidiary of Facebook, that focused mainly on race, ethnicity or other forms of personal identity. The most successful Instagram posts targeted African American cultural issues and black pride and were not explicitly political.
While the overall intensity of posting across platforms grew year by year — with a particular spike during the six months after Election Day 2016 — this growth was particularly pronounced on Instagram, which went from roughly 2,600 posts a month in 2016 to nearly 6,000 in 2017, when the accounts were shut down. Across all three years covered by the report, Russian Instagram posts generated 185 million likes and 4 million user comments.
Even though the researchers struggled to interpret the YouTube data submitted by Google, they were able to track the links from other sites to YouTube, offering a “proxy” for understanding the role played by the video platform.
“The proxy is imperfect,” the researchers wrote, “but the IRA’s heavy use of links to YouTube videos leaves little doubt of the IRA’s interest in leveraging Google’s video platform to target and manipulate US audiences.”
The use of YouTube, like the other platforms, appears to have grown after Trump’s election. Twitter links to YouTube videos grew by 84 percent in the six months after the election, the data showed.
The Russians shrewdly worked across platforms as they refined their tactics aimed at particular groups, posting links across accounts and sites to bolster the influence operation’s success on each, the report shows.
“Black Matters US” had accounts on Twitter, Facebook, Instagram, YouTube, Google+, Tumblr and PayPal, according to the researchers. By linking posts across these platforms, the Russian operatives were able to solicit donations, organize real-world protests and rallies, and direct online traffic to a website that the Russians controlled.
The researchers found that when Facebook shut down the page in August 2016, a new one called “BM” soon appeared with more cultural and fewer political posts. Its content closely tracked that of the @blackmatterus Instagram account.
The report found operatives also began buying Google ads to promote the “BlackMatters US” website with provocative messages such as, “Cops kill black kids. Are you sure that your son won’t be the next?” The related Twitter account, meanwhile, complained about the suspension of the Facebook page, accusing the tech company of “supporting white supremacy.”