by Deepak Puri

How to make fake friends and influence people politically with botnets

Opinion
Apr 06, 2017
Development Tools | Election Hacking | Emerging Technology

Botnets have changed how political campaigns are run. They're programmed with artificial intelligence (AI) and natural language processing (NLP) to manipulate public opinion through social media. Bots can't vote (yet), but by influencing public opinion and drowning out competing messages, they can achieve much the same outcome: people voting the way the botnet master wants them to.

Dale Carnegie would be horrified. His classic on how to influence people has been perverted for online political manipulation.

How do botnets work? How are they used for mass deception? And how can you tell that you might be interacting with a political bot?

Background
Social media has evolved from a personal form of communication into an impersonal one. Chatbots can be programmed to post and tweet automatically. They're versatile, and their interactions appear human. They even get smarter as they learn from interactions with people. In reality, they're programs that interact through a chat interface and respond based on preset rules and artificial intelligence. Chatbots are now responsible for much of the traffic on Facebook Messenger and Twitter. Botlist features hundreds of bots for everything from checking stock prices to online dating.

Facebook reports that its Messenger and WhatsApp now process 60 billion messages per day, more than three times the volume sent by SMS. Sending a message via Facebook Messenger also costs a fraction of what sending one by SMS does.

Bots & botnets
Most social media users interact with information through a web interface. Bots, however, interact through an application programming interface (API), which enables them to analyze posts and respond in real time. Botnets are networks of bots. A botnet may consist of hundreds of accounts, all controlled by a single user. Social botnets are interconnected and programmed to follow and re-message each other.
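To make that mechanic concrete, here is a minimal sketch of the pattern: a bot that watches for a keyword through the Twitter REST API and replies automatically. It assumes the Tweepy library (3.x-era method names such as api.search and api.update_status, which may differ in other versions); the credentials, hashtag, and canned reply are placeholders, not anything from the article.

```python
# Minimal sketch of an API-driven reply bot (assumes Tweepy 3.x; method names
# may vary by version). Credentials and the keyword are placeholders.
import time
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

CANNED_REPLY = "Interesting take! Here's another perspective: ..."
seen = set()

while True:
    # Poll the search API for recent tweets mentioning the watched keyword.
    for tweet in api.search(q="#election", count=20):
        if tweet.id in seen:
            continue
        seen.add(tweet.id)
        # Reply programmatically -- no human ever reads the original tweet.
        api.update_status(
            status="@{} {}".format(tweet.user.screen_name, CANNED_REPLY),
            in_reply_to_status_id=tweet.id,
        )
    time.sleep(60)
```

A single operator can run hundreds of accounts through the same loop, which is what turns a bot into a botnet.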

Bot creation has been simplified. Chatfuel even allows bots to be developed without coding: conversational rules are defined on a dashboard, and the bot recognizes phrases from users and replies with predefined answers using NLP. Jerry Wang explains how to develop bots for Facebook Messenger with Heroku and Node.

[Screenshot: Chatfuel bot-building dashboard (https://chatfuel.com/)]
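Underneath a dashboard like Chatfuel's, the core pattern is keyword matching mapped to canned replies. Here is a toy, self-contained sketch of that idea; the rules and messages are invented for illustration and are not Chatfuel's actual output.

```python
# Toy rule-based chatbot: match keywords in an incoming message,
# return a predefined answer. Rules and replies are invented examples.
RULES = {
    "price": "Our plan starts at $10/month.",
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "human": "Connecting you with a person now...",
}

DEFAULT_REPLY = "Sorry, I didn't understand that. Try asking about 'price' or 'hours'."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(reply("What are your hours on Friday?"))  # -> the canned 'hours' answer
```

Commercial bot builders layer NLP on top of this so that paraphrases of a keyword also trigger the rule, but the request-rule-reply loop is the same.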

Chatbots gone bad
Chatbots can be used for both legitimate and malicious purposes. They have been used to suppress dissenting voices and to spread hate speech, and they're now increasingly deployed for political purposes.

To simulate a big following, it helps to have many 'virtual' users. But recruiting real followers and developing bots takes time. Online marketplaces offer both, with different IP addresses, further obfuscated with proxy servers to mask their identities. A chatbot is a form of 'sockpuppet', an online identity used for deception. These misleading identities are used to praise, defend, or support a person in order to manipulate public opinion.

How to spot a bot?
No matter how well they masquerade as humans, botnets can often be spotted by these traits.

1. Response time: Bots are programmed to respond automatically. When replies appear within a fraction of a second, it's a good bet they came from a bot.
2. Volume: Political bots lurk in certain chat groups waiting for the mention of certain topics or keywords, then unleash dozens of opposing, vicious posts and tweets immediately. Some bot accounts work hard: one was reported to have sent 400-500 tweets around the clock, six days a week. Even puppet masters have to take a day off.
3. Novel words: Real human conversations include a wide variety of words and phrases. Bots, however, repeat 'novel words' to emphasize one viewpoint and drown out others. This suggests the sentences were written by a single author, or a group of authors working from a shared messaging playbook. "Instead of many thousands of unique, individual voices, it was as if one voice became dominant," explains Jonathon Morgan, co-founder and CEO of NewKnowledge.io.
4. Unusual names: Online merchants generate thousands of fake bots for sale, so they're often not picky about the names. A would-be user named '@stanbieberfan' might be worth scrutinizing. 'Bot or Not' is a free online service that estimates whether you might be talking to a bot.
5. Number of followers: Bots tend not to have many followers themselves.
6. Devices used: Most response tweets originate from iPhone, Android, and Windows devices; many bot responses, though, seem to originate from a Windows phone.
7. Bots are anti-social: Bots never retweet or mention any other Twitter user.
8. Mob behavior: Political botnets are usually controlled by a single person (the puppet master), so the volume of bot posts on an issue typically starts and stops in unison.

Taken together, these signals can be combined into a rough bot score, as sketched below.
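The following is only an illustrative scoring function built from several of the traits above. The field names, thresholds, and weights are invented for this sketch; real detectors such as 'Bot or Not' use far richer features and machine learning.

```python
# Rough, illustrative bot-scoring heuristic. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class Account:
    avg_reply_seconds: float   # how quickly the account answers other tweets
    tweets_per_day: float      # posting volume
    follower_count: int        # bots tend to have few followers
    source_device: str         # client reported in tweet metadata
    mentions_others: bool      # does it ever retweet or mention other users?

def bot_score(a: Account) -> int:
    score = 0
    if a.avg_reply_seconds < 2:           # trait 1: inhumanly fast responses
        score += 1
    if a.tweets_per_day > 400:            # trait 2: relentless volume
        score += 1
    if a.follower_count < 10:             # trait 5: few followers
        score += 1
    if a.source_device == "Windows Phone":  # trait 6: unusual client device
        score += 1
    if not a.mentions_others:             # trait 7: anti-social behavior
        score += 1
    return score                          # higher = more bot-like

suspect = Account(0.5, 450, 3, "Windows Phone", False)
print(bot_score(suspect))  # -> 5, very likely automated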

Political games
How can an army of political bots influence campaigns? A research paper on the 'Star Wars Botnet' lists some chilling examples.

1. Make someone more prominent: Bots become 'fake' followers who amplify someone's tweets. This helps raise that person's profile and attract more real (human) followers.
2. Fake trending topics: Bots can inflate how many accounts appear to follow a particular topic. Most social media platforms do not distinguish between human and bot followers, so topics the puppet master selects can be pushed into the 'trending' lists.
3. Manipulate public opinion: Botnets can be programmed to make positive or negative posts in a coordinated manner. This distorts the input that researchers and pollsters use to report on public sentiment (a toy example follows this list).
4. Astroturfing: An army of bots is programmed to agree amongst themselves on a topic. Because the bots have different names and locations, it looks as though the community reached that agreement on its own.
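To see how little it takes to skew a naive sentiment measurement, here is a toy calculation with invented numbers: a tally that counts coordinated bot posts alongside genuine human ones.

```python
# Toy arithmetic showing how a botnet skews a naive sentiment tally.
# All numbers are invented for illustration.
human_positive, human_negative = 400, 600   # genuine opinion: 40% positive
bot_positive = 1500                         # coordinated bot posts, all positive

organic_share = human_positive / (human_positive + human_negative)
measured_share = (human_positive + bot_positive) / (
    human_positive + bot_positive + human_negative
)

print(f"organic positive share:  {organic_share:.0%}")   # 40%
print(f"measured positive share: {measured_share:.0%}")  # 76%
```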

Fighting the good fight
An army of bots seeding the web with negative information about the opposition? The threat from political bots and automated speech is real. Today it is politics; tomorrow, bots could just as easily be used to slander a competitor's commercial product. What can be done?

1. Understand the technology and trends in this field. Political Bots is perhaps the best resource for staying current on new developments; it reports on bot algorithms, computational propaganda, and digital politics.
2. Support efforts to combat automated speech. Social media platforms have the technology to label automated speech generated through APIs as 'Bot Generated'. They should use it.
3. Donate to groups creating bots for transparency and civic activism. Among the best are:
ResistBot – turns your text messages to '50409' into daily letters to Congress.
@StayWokeBot – helps answer tweets related to Black Lives Matter and other causes.
Congress Edits – tweets anonymous Wikipedia edits made from IP addresses in the US Congress.
Call With Jefferson – a friendly bot that uses your location to identify the three congressional representatives for your area, their contact information, and a script designed to give the operator the information they need in as few words as possible.

4. Stay abreast by attending conferences such as the upcoming SuperBot Conference to learn more about bots and network with the experts.

“There is only one way… to get anybody to do anything. And that is by making the other person want to do it,” wrote Dale Carnegie. Bots can’t vote (yet), but botnet masters achieve the same outcome by using bots to influence public opinion and manipulate how people vote.