How Facebook fights fake news with machine learning and human insights

SAN FRANCISCO—In the wake of the recent high-school shooting in Parkland, Fla., stories started cropping up on fringe right-wing blogs and conservative news outlets reporting that the kids fighting for stricter gun laws were actually paid “crisis actors” posing as student survivors.

People who expressed their skepticism of those stories in Facebook comments helped Facebook flag them as fake news—or what Facebook employees call “false news,” says Michael McNally, the company’s director of engineering.

The “false” moniker, McNally told an audience of 500 Silicon Valley tech company employees gathered here Wednesday for the Fighting Abuse @Scale event, “emphasizes its toxicity and harmfulness.”

The @Scale event is one of an ongoing series of technical conferences for engineers, data scientists, product managers, and operations specialists who fight fraud, spam, and abuse across the large-scale Internet user bases they serve. McNally’s statement there is a notable departure from comments CEO Mark Zuckerberg made shortly after the 2016 election.

On November 10 of that year, Zuckerberg said, “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea.”

While McNally says Facebook doesn’t aim to judge or alter its users’ views, the company is taking action because the spread of “falsehoods” has become “an industry” that operates “at scale,” meaning that it could impact millions or tens of millions of people at once.

Most fake-news networks are geared toward making money, he says, but some are designed to influence politics in a specific region or country. And they can negatively impact consumers’ choices with respect to everything from elections to the environment.

To combat fake (or “false”) news, McNally says, Facebook now employs tools ranging from manual flagging to machine learning. It’s actively searching for websites, accounts, and domain names created with the sole intent of propagating false news, he says. And it is regularly scanning its platform for activity, including the creation of accounts and groups, designed to boost readership of misleading content through views, likes, shares, and paid promotion.

“We look for inauthentic engagement or inauthentic activity” designed to drive readers to a site or Facebook Group, McNally says. That activity is often farmed out to crowdworkers on Amazon’s Mechanical Turk or automated with chatbots.
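Facebook has not published how that detection works. As a purely illustrative sketch, a first pass at spotting coordinated boosting might look for bursts of engagement on one post from very young accounts; the record format, thresholds, and helper below are hypothetical stand-ins rather than anything the company has described.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Engagement:
    account_id: str
    account_created: datetime  # when the engaging account was registered
    timestamp: datetime        # when the like or share happened

def looks_inauthentic(events: list[Engagement],
                      window: timedelta = timedelta(minutes=10),
                      min_burst: int = 50,
                      max_account_age: timedelta = timedelta(days=7)) -> bool:
    """Hypothetical heuristic: many engagements from newly created accounts,
    packed into a short time window, suggest coordinated boosting."""
    young = sorted((e for e in events
                    if e.timestamp - e.account_created <= max_account_age),
                   key=lambda e: e.timestamp)
    start = 0
    for end in range(len(young)):
        # Shrink the window until it spans at most `window` of time.
        while young[end].timestamp - young[start].timestamp > window:
            start += 1
        if end - start + 1 >= min_burst:
            return True
    return False
```

A signal like this could only ever be one input among many, since a legitimately viral post can also attract a sudden burst of attention.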

Facebook then determines whether the activity violates its terms of service. If so, it shuts down the accounts and groups of the perpetrators. (The scale of overall abuse on the platform, of course, is massive, and the pace at which the company identifies and shuts down abusers isn’t always fast. This month, the company shut down 120 cybercrime groups totaling more than 300,000 members; the groups had been active for an average of two years.)

If flagged activity is found to be in compliance with its terms of service, Facebook might simply “demote” flagged content, essentially hiding it from feeds.

Downranking a story can reduce its spread across Facebook by about 80 percent, says Lauren Bose, a Facebook data scientist. The approach closely resembles the most effective weapon yet deployed against email spam: although unwanted messages, often laced with malicious software, still accounted for more than 59 percent of all email sent in 2017, email providers greatly reduced their impact simply by pushing them out of users’ view and into spam folders.
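Bose and McNally do not describe how the demotion is wired into News Feed ranking. The sketch below only illustrates the general idea of scaling down a flagged story’s score; the 0.2 multiplier is a made-up value chosen to echo the reported 80 percent reduction, not a figure from any real system.

```python
def rank_feed(stories, demotion_factor=0.2):
    """Order stories by a base relevance score, scaling down stories flagged
    as likely false by a hypothetical demotion factor."""
    def effective_score(story):
        score = story["base_score"]
        if story.get("flagged_false"):
            score *= demotion_factor
        return score
    return sorted(stories, key=effective_score, reverse=True)

feed = rank_feed([
    {"id": "a", "base_score": 0.9, "flagged_false": True},
    {"id": "b", "base_score": 0.6},
    {"id": "c", "base_score": 0.3},
])
# Story "a" would normally top the feed; demotion drops it to the bottom,
# which is the "hiding it from feeds" effect described above.
```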

As with spam, whether a company labels stories as false is less important than how it discerns the intentionally misleading from the simply misled and, moreover, how differently it treats the two. Demotion places the company squarely in the realm of adjudicating content, a responsibility it has long shied away from, according to Zuckerberg.

Facebook now works with a global network of fact-checking organizations to verify that content posted on Facebook Groups and pages is authentic, not designed to drive misinformation or hate. This network includes Factcheck.org, Snopes, and PolitiFact in North America; Consejo de Redacción, La Silla Vacía, and Animal Político in Latin America; Rappler, Vera Files, Tirto, and Boom Live in Asia; and Le Monde, Pagella Politica, NU.nl, Correctiv, and NieuwsCheckers in Europe.

With “an article that we give them, they’ll tell us, in their opinion, if it’s true, false, or mostly false,” Bose says. “We use these as the single source of truth in how we treat this information on the platform.” She also notes that because of the “three to five days” it can take to receive an answer to a request for a fact-check, and the minimum “6 to 12 hours” a fact-checking organization takes to verify a story as authentic, Facebook has come to rely on machine learning to flag stories likely to be fake.
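Bose does not say which models or features Facebook uses. The workflow she outlines, in which fact-checker verdicts become labels for a classifier that can score new stories while a human ruling is still pending, can be sketched roughly as follows; the scikit-learn pipeline, toy headlines, and threshold are all assumptions made for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: headlines paired with fact-checker verdicts
# (1 = rated false or mostly false, 0 = rated true).
headlines = [
    "Shocking cure doctors don't want you to know about",
    "City council approves new budget for road repairs",
    "Celebrity secretly replaced by body double, insiders say",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score an incoming story while the human fact-check is still pending;
# a high score could trigger early review or a provisional demotion.
prob_false = model.predict_proba(["Miracle gadget ends all power bills"])[0][1]
flag_for_review = prob_false > 0.5  # hypothetical threshold
```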

One indicative pattern of fake-news propagation that Facebook’s machine-learning programs have picked up on, McNally says, is that “clusters” of similar accounts will share or like the same content. This was the case when the Parkland “crisis actors” stories started spreading across the platform.
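McNally does not explain how those clusters are detected. One simple, hypothetical way to surface groups of accounts that repeatedly push the same links is to build a co-share graph and pull out its connected components, as in the sketch below; the share log, overlap threshold, and use of networkx are illustrative assumptions.

```python
from itertools import combinations
import networkx as nx

def coshare_clusters(shares, min_overlap=2):
    """Group accounts that shared at least `min_overlap` of the same URLs.

    `shares` is a list of (account_id, url) pairs; clusters of two or more
    accounts are candidates for closer review."""
    by_account = {}
    for account, url in shares:
        by_account.setdefault(account, set()).add(url)

    graph = nx.Graph()
    graph.add_nodes_from(by_account)
    for a, b in combinations(by_account, 2):
        if len(by_account[a] & by_account[b]) >= min_overlap:
            graph.add_edge(a, b)
    return [c for c in nx.connected_components(graph) if len(c) > 1]

# Toy share log: two accounts repeatedly pushing the same pair of stories.
shares = [
    ("acct1", "example.com/crisis-actors"),
    ("acct2", "example.com/crisis-actors"),
    ("acct3", "example.com/crisis-actors"),
    ("acct1", "example.com/other-hoax"),
    ("acct2", "example.com/other-hoax"),
    ("acct9", "example.com/unrelated"),
]
print(coshare_clusters(shares))  # e.g. [{'acct1', 'acct2'}]
```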

The fraudulent news sites to which Facebook accounts and groups typically drive traffic also have recognizable similarities, he says. Their most common trait: a user interface cluttered with ads.
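A trait like “cluttered with ads” can be turned into a crude, machine-readable feature, for instance by counting how many of a page’s tags load known ad networks. The parser and domain list below are hypothetical, intended only to show how such a site-level signal might be computed, not how Facebook actually does it.

```python
from html.parser import HTMLParser

# Hypothetical list of ad-serving domains to look for in script/iframe sources.
AD_DOMAINS = ("doubleclick.net", "adnxs.com", "taboola.com", "outbrain.com")

class AdCounter(HTMLParser):
    """Count script and iframe tags whose src points at a known ad network."""
    def __init__(self):
        super().__init__()
        self.ad_tags = 0
        self.total_tags = 0

    def handle_starttag(self, tag, attrs):
        self.total_tags += 1
        if tag in ("script", "iframe"):
            src = dict(attrs).get("src") or ""
            if any(domain in src for domain in AD_DOMAINS):
                self.ad_tags += 1

def ad_density(html: str) -> float:
    """Fraction of tags that load ads -- one crude feature among many."""
    parser = AdCounter()
    parser.feed(html)
    return parser.ad_tags / max(parser.total_tags, 1)
```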

While Facebook has significantly ramped up its fight against the spread of fraudulent media on its platform in recent months, and on Tuesday made public for the first time its internal guidelines for when it removes user-posted content, it remains unclear whether its efforts will prove adequate. But it clearly is taking platform improvement suggestions from outside analysts more seriously than it was 18 months before the 2016 election, when one expert wrote that Facebook could better police content on its platform by showing misinformation-debunking stories to its users.

Facebook has tried warning users as they’ve shared stories its fact checkers have debunked. It now regularly accompanies shared fake-news stories with “related” articles that debunk the claims they make. It also rescinds advertising and monetization privileges from publishers that share them.

Bhaskar Chakravorti, the senior associate dean of international business and finance at Tufts University’s Fletcher School, wrote in February that in addition to combining machine learning with human insights, Facebook needs to engage with its users more directly, in their own cities and countries. It needs to develop services that are not dependent on its current core advertising business model, given that policing fake news means curtailing ads from those that publish and promote it. And it needs to balance its gains with its duties.

“Growth may be the easy part; being the responsible grown-up is much harder,” Chakravorti said.
