Facebook has acknowledged publicly that fake accounts linked to Russian sources bought roughly $100,000 worth of political ads, more than 3,000 in all. The Wednesday announcement marked the first time Facebook confirmed that many of the fake accounts it shut down originated with Russian sources.
Creating fake accounts and buying ads was part of a larger Russian campaign to spread misinformation and political divisiveness across social media in the runup to the 2016 election, according to US intelligence officials.
Widespread evidence of Russian meddling in the election has emerged in recent months, including a hack of the Democratic National Committee and evidence suggesting that the Russians hacked voting systems in 39 states (although no evidence shows they influenced vote totals). The revelation about Facebook ads is yet more evidence that Russian firms weren’t just sowing chaos in the DNC and voting systems — they were actively trying to influence American public opinion on social media.
It is unclear how successful the so-called Russian “troll farm” on Facebook was. Many of the accounts were crudely designed and used stilted, awkward language, and their posts were generally not widely shared across social media. Moreover, most of the ads ran in 2015, the year before the election, when Hillary Clinton and Donald Trump were still competing with other candidates in the primaries.
The Russia-linked accounts worked to spread misinformation in two ways, according to a recent New York Times report. In one strategy, they bought political ads that focused on social issues including immigration, race, gay rights, and gun control, rather than touting one candidate over another.
Their second strategy was to create hundreds of fake accounts that linked back to their own websites, filled with hacked material on Hillary Clinton and prominent Democrats like businessman and investor George Soros. The Times investigation found a concerted effort to spread misinformation and direct traffic to these sites using these fake accounts.
Fake accounts are nothing new in social media; Twitter is rife with fake “bots” that can spread dubious stories or popularize hashtags. As the Times pointed out in its report, enough fake Twitter bots can push certain hashtags into Twitter’s “trending” category, where tweets with those hashtags can then be seen by more people.
Twitter has mechanisms in place to try to prevent bots from spreading fake trends around the internet, but new research by the cybersecurity firm FireEye found that at least one bot-propelled hashtag still broke through. Twitter is also more lenient than Facebook about fake accounts, which doesn’t help.
Facebook, on the other hand, is taking new steps to crack down on fake accounts. It recently announced it wouldn’t allow pages to advertise on its site if they repeatedly posted fake content, and that it has been increasingly monitoring and shutting down fake accounts.
A former FBI agent named Clinton Watts recently told the New York Times that Facebook and Twitter are both experiencing a “bot cancer eroding trust on their platforms.” To Facebook’s credit, Watts said the site is currently doing much more to combat the issue, “cutting out the tumors by deleting false accounts and fighting fake news.”
It’s worth noting that even the most successful fake Facebook accounts have nothing on Fox News when it comes to influencing voters’ decisions, according to a new study.
Vox · by Ella Nilsen · September 8, 2017