Beware of Twitter Robots Telling People How to Vote
Voting is partly a social endeavor, in which people weigh the opinions of others when making up their minds. Increasingly, though, they are being influenced by an inhuman force: software robots designed specifically to deceive them.
Two years ago, in a report filed with the U.S. Securities and Exchange Commission, the social networking site Twitter estimated that more than 23 million of its active user accounts were being run by “bots” — software agents or bits of code that act on their own to respond to news and world events. They interact with real users, never revealing their true nature.
Bots of this kind have been used for a decade in efforts to sway public opinion in Latin America. The hacker and political operative Andrés Sepúlveda claims to have deployed armies of bots to influence at least a half-dozen major election results in Mexico, Colombia, Nicaragua and elsewhere.
Could the same happen in the U.S. and Europe? Probably so, given recent findings on bot activity ahead of Britain’s vote to leave the European Union. As part of the Computational Propaganda Research Project at Oxford University, researchers examined some 300,000 Twitter accounts and found that a mere 1 percent of them generated about a third of all tweets relevant to the Brexit debate. They believe many of those accounts were run by bots, because human users could not have sustained such a level of activity without the help of automation. It’s not clear whether the activity swayed the result, though the Leave campaign did generate more automated tweets.
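For readers who want to see what such a concentration measure looks like in practice, here is a minimal sketch; the function name, the data layout and the demo interpretation are illustrative assumptions, not the Oxford team’s actual pipeline.

```python
# Illustrative sketch: given one account id per tweet, compute what
# share of all tweets the most prolific 1 percent of accounts produced.
from collections import Counter

def top_share(tweet_authors, top_fraction=0.01):
    """tweet_authors: iterable of account ids, one entry per tweet."""
    counts = Counter(tweet_authors)                  # tweets per account
    if not counts:
        return 0.0
    total = sum(counts.values())                     # all tweets observed
    n_top = max(1, int(len(counts) * top_fraction))  # size of the top 1%
    top_total = sum(c for _, c in counts.most_common(n_top))
    return top_total / total

# On a Brexit-style dataset of some 300,000 accounts, a value near 0.33
# would mean the top 1 percent produced about a third of all tweets.
```

A result that lopsided is itself a warning sign: no plausible population of human users concentrates its output so heavily in so few hands.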
This issue is far bigger than Brexit: The disturbing reality is that computational propaganda is already with us. In the U.S. presidential race, Twitter bots support both Donald Trump and Hillary Clinton. Bots of various kinds live on cloud servers, operate around the clock and account for about half of all activity on the web. According to Sepúlveda, people’s opinions tend to be swayed more by views they see as coming spontaneously from real people than by views expressed on television or in newspapers.
What to do? Oxford professor Philip Howard, who led the research on Brexit, and Samuel Woolley of the University of Washington suggest that the first step is making it easier for everyone to recognize bots. Some researchers have developed algorithms that aim to distinguish real people from Twitter bots by their patterns of tweeting behavior, but these are only partially successful. Twitter and other social networks have access to much richer data, which they could use to identify automated accounts and mark them with visible red flags or equivalent labels. Research shows, encouragingly, that people aren’t influenced nearly as much when they know an opinion is coming from a software agent rather than a real person.
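To give a sense of the behavioral signals such algorithms rely on, here is a minimal heuristic sketch; the function looks_like_bot, its features and its thresholds are assumptions for illustration, not any published detector.

```python
# Illustrative heuristic: flag accounts whose posting volume or timing
# regularity exceeds what a human could plausibly sustain.
from datetime import datetime, timedelta
from statistics import mean, stdev

def looks_like_bot(timestamps, max_per_day=150, min_gap_cv=0.3):
    """timestamps: datetime objects for one account's tweets.

    Two assumed signals:
      * volume: sustained output above ~150 tweets a day is hard for a human;
      * regularity: near-constant gaps between tweets suggest a scheduler.
    """
    if len(timestamps) < 10:
        return False  # too little data to judge
    timestamps = sorted(timestamps)
    span_days = max(
        (timestamps[-1] - timestamps[0]).total_seconds() / 86400, 1e-6)
    per_day = len(timestamps) / span_days

    gaps = [(b - a).total_seconds()
            for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation: humans post irregularly (high CV);
    # schedulers post on a near-fixed cadence (low CV).
    gap_cv = stdev(gaps) / mean(gaps) if mean(gaps) > 0 else 0.0

    return per_day > max_per_day or gap_cv < min_gap_cv

# Demo: an account tweeting every 90 seconds, around the clock,
# trips both the volume and the regularity signals.
start = datetime(2016, 6, 1)
machine_like = [start + timedelta(seconds=90 * i) for i in range(1000)]
print(looks_like_bot(machine_like))  # True
```

Real detectors combine many more signals than these two, which is precisely why the platforms, with their far richer internal data, are better placed than outside researchers to do the flagging.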
People need to know with whom or what they are interacting on the internet. If social networks don’t choose to help on their own, the public — or the government — should pressure them to do so.