Exacerbating our ‘fake news’ problems: Chatbots

VANCOUVER—Of all the tactics hackers use to attack a target, the most common by far is the innocuous-sounding “social engineering.” It’s what gets you to log in to an imposter site that’ll steal your password, or to open an email attachment that’ll install ransomware on your computer.

Social engineering has become a big business for cybercriminals. But it’s most effective when it’s acutely targeted, says Sara-Jayne Terp, an adjunct professor of data science at Columbia University who presented research at the CanSecWest conference here last week. Much like hacking bits and bytes, she says, manipulating humans usually involves understanding their ecosystem and learning how to exploit their vulnerabilities.

So effective social engineering at scale, targeting millions of people across thousands of diverse ecosystems at once, isn’t a plausible proposition. Not without artificial intelligence, that is.

When it comes to exploiting people by spreading contradictory or inflammatory ideas they’re predisposed to believe, Terp says, various organizations have been deploying armies of Internet chat robots, or chatbots, disguised as human social-media users.



Chatbots are similar to Web bots, which are designed to improve the results of Internet tasks like Web searches. Most today are tasked with customer service: Banks use them to help customers check their balances; hotels use them to free up their reception desks; Domino’s Pizza even uses them to help customers order Cinna Stix.

Customer-service chatbots can be manipulated into being less than helpful. Twitter users “taught” Microsoft’s Tay chatbot, for example, to spout racist drivel. Other chatbots are created specifically to spread misinformation, partial truths, or flat-out lies, Terp says, like public-relations agents intent on deceiving and manipulating.

On social-media platforms such as Facebook, Reddit, Instagram, and Twitter, these chatbots take advantage of their targets’ biases by telling them what they expect (or want) to hear. They confirm those biases through supportive, pre-programmed comments, as well as platform-specific feedback such as likes, retweets, and upvotes.

“It’s one thing to work at popularity, to build a base. It’s quite another to use mass sock puppets,” she says, to convince an audience that a story is truthful. “There are the white lies [politicians say] to get elected versus the outright lies like Pizzagate.”

Chatbots, which researchers have found can have a measurable impact on human behavior, are becoming increasingly prevalent among businesses. In 2016, Slack created an $80 million fund to invest in developers building apps, including chatbots, on its platform. And last year, as Juniper Research predicted that chatbots could help the health care and banking industries save up to $8 billion per year, Facebook revealed that more than 100,000 chatbots had been created for its Messenger platform.

When you combine the widespread accessibility of chatbots with the advent of Cambridge Analytica-style big-data trend analysis, which supported Donald Trump’s campaign for U.S. president, Terp says, it becomes much easier to exploit people. “It’s classic stuff you would do in an advertising campaign. [The use of data from] Cambridge Analytica was basically an advertising campaign on steroids, but for its deception and negative message.”

Online disinformation campaigns, indeed, became a central and remarkably effective aspect of the 2016 election, and the term “fake news,” ironically championed by Trump, came to dominate the conversation.

The ongoing reaction to Trump’s unexpected election win, including special counsel Robert Mueller’s investigation of the Trump campaign and the public outcry over Cambridge Analytica’s role in manipulating Facebook users, has increased the notoriety of bots pushing disinformation. Revelations about how the company’s data analysis was put to work have fed a growing distrust of content on the Internet, including content from legitimate news outlets. Some argue that this was the goal all along.

What to do about chatbots

To distinguish a helpful chatbot on social media from a deceitful one (or a real human), Terp says people should look for contextual signals in the content of a message, such as the use of specific keywords known to attract a certain audience.

Other telltale signs of chatbot content are repeated blocks of text, and references or links to it from known bots, she says. People should also be wary of pages promoting dubious “news” that display an extraordinary number of ads. Unlike legitimate news sites, which generally endeavor to create a pleasant user experience and develop an audience, fake-news sites are often set up to make easy money from repeat bot visits while helping elevate propaganda in search engine rankings.
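None of these signals is conclusive on its own, but together they lend themselves to simple scoring. Below is a minimal sketch of how a reviewer might combine them; the keywords, weights, thresholds, and field names are invented for illustration and are not drawn from Terp’s research.

```python
# Heuristic scoring sketch for bot-like social-media content.
# Illustrative only: every keyword, weight, and threshold below is
# an invented placeholder, not a value from Terp's research.

KNOWN_BOT_ACCOUNTS = {"example_bot_1", "example_bot_2"}  # hypothetical blocklist
BAIT_KEYWORDS = {"deep state", "wake up", "msm lies"}    # hypothetical audience bait

def suspicion_score(post, recent_texts):
    """Return a rough 0-100 score; higher means more bot-like."""
    score = 0
    text = post["text"].lower()

    # 1. Contextual signals: keywords known to attract a certain audience.
    if any(keyword in text for keyword in BAIT_KEYWORDS):
        score += 25

    # 2. Repeated blocks of text across recently seen posts.
    if sum(text == other.lower() for other in recent_texts) >= 3:
        score += 35

    # 3. References or links to the content from known bot accounts.
    if KNOWN_BOT_ACCOUNTS & set(post.get("linked_by", [])):
        score += 25

    # 4. A linked "news" page that displays an extraordinary number of ads.
    if post.get("linked_page_ad_count", 0) > 20:
        score += 15

    return min(score, 100)

post = {
    "text": "WAKE UP: the MSM lies about everything!",
    "linked_by": ["example_bot_1"],
    "linked_page_ad_count": 37,
}
print(suspicion_score(post, ["wake up: the msm lies about everything!"] * 3))  # 100
```

A real detector would weigh far more signals, but the overall shape, accumulating weak evidence and thresholding the total, is the common pattern.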

There is no simple way to deal with chatbots, says a 29-year-old programmer based in the Pacific Northwest, who requested anonymity because he suspects that the government has been tracking his online activities since he created his own chatbot network in 2014.

He built the network, which at its peak was made up of about 1,000 bots on Reddit, to help spread his views on Edward Snowden’s leaks of classified documents. He believed that mainstream news organizations were focusing too much on Snowden’s “celebrity,” and downplaying the documents’ revelations about how the U.S. government had spied on its citizens.

Having first noticed as a teenager how social media was manipulated to influence opinion on the second Iraq war, he decided to build a presence on Reddit as a “social-media influencer” on the Snowden leaks. And he spun it into a chatbot network because, he says, he “needed a little bit of help” in getting his message out to a wider group of people.

He started with bots that would just upvote his own posts and soon wanted them to do more. “I decided to have these bots link to the things that I’m saying, or repeat the things that I’m saying. So I introduced some of those capabilities,” he says. “I didn’t have a master plan to the whole thing; it was very tactical.”

He decided to shut down the network after about two months, when, he says, a wave of countermessaging on Reddit began to diminish the impact of his chatbots; he learned that his Internet traffic was being routed through Virginia; and his car and apartment were broken into.

He believes that the solution to dealing with politically focused chatbots will be agreements between countries to stop interfering in each other’s elections. “Internet regulation internationally is very difficult,” he says. “It’s a people problem and a technical problem, and it probably requires solutions for both.”

Terp offers more immediate solutions to disinformation-pushing chatbots. She wants to see social-media networks setting stronger terms for banning spam and hate speech. She wants to see chatbot accounts clearly labeled as such. She wants to see owners of unmarked chatbots banned from social-media platforms. And she wants to see chatbots banned altogether from paying to promote content.

Consumers who think that they’ve encountered a bot or troll account should check it against the Twitter account Probabot or Data.World, Terp says. They should also see whether suspicious sites they’ve come across are listed on the Sewanee fake-news evaluation site.

Terp would also like to see a computer-programming solution.

“I gave a talk to software engineers at work, and somebody stood up at the end and said, ‘Isn’t this like spam? Why don’t they just build something and make it go away?’” Terp says. “That would be nice. It’s just another variant of spam.”
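Spam filtering does offer a plausible starting point. One staple technique, near-duplicate detection, maps directly onto the repeated-blocks-of-text signature described above: flag any message whose word “shingles” overlap heavily with text already seen. The sketch below is my illustration of that idea, not code from Terp’s talk, and the shingle size and threshold are arbitrary choices.

```python
# Near-duplicate text detection, a standard spam-filtering technique
# that also catches bot accounts reposting the same blocks of text.
# Illustrative sketch; shingle size and threshold are arbitrary choices.

def shingles(text, k=3):
    """Return the set of k-word shingles (overlapping word n-grams)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two shingle sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def looks_reposted(message, seen, threshold=0.8):
    """Flag a message whose shingles heavily overlap any previously seen text."""
    sig = shingles(message)
    return any(jaccard(sig, shingles(old)) >= threshold for old in seen)

seen = ["The mainstream media will not report this shocking story"]
print(looks_reposted("the MAINSTREAM media will not report this shocking story", seen))  # True
```

Duplicate matching like this is cheap enough to run at platform scale, which is partly why the questioner’s comparison to spam is apt; the harder part, as the anonymous bot builder put it, is the people problem.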
