SAN FRANCISCO — An automated army of pro-Donald J. Trump chatbots overwhelmed similar programs supporting Hillary Clinton five to one in the days leading up to the presidential election, according to a report published Thursday by researchers at Oxford University.
The chatbots — basic software programs with a bit of artificial intelligence and rudimentary communication skills — would send messages on Twitter based on a topic, usually defined on the social network by a word preceded by a hashtag symbol, like #Clinton.
Their purpose: to rant, sow confusion about facts or simply muddy discussions, said Philip N. Howard, a sociologist at the Oxford Internet Institute and one of the authors of the report. If you were looking for a real debate of the issues, you weren’t going to find it with a chatbot.
“They’re yelling fools,” Dr. Howard said. “And a lot of what they pass around is false news.”
The role fake news played in the presidential election has become a sore point for the technology industry, particularly Google, Twitter and Facebook. On Monday, Google said it would ban websites that peddle fake news from using its online advertising service. Facebook also updated the language in its Facebook Audience Network policy, which already says it will not display ads on sites that show misleading or illegal content, to include fake news sites.
In some cases, the bots would post embarrassing photos, make references to the Federal Bureau of Investigation inquiry into Mrs. Clinton’s private email server, or produce false statements, for instance, that Mrs. Clinton was about to go to jail or was already in jail.
“The use of automated accounts was deliberate and strategic throughout the election,” the researchers wrote in the report, published by the Project on Algorithms, Computational Propaganda and Digital Politics at Oxford.
Because the chatbots were almost entirely anonymous and were frequently bought in secret from companies or individual programmers, it was not possible to directly link the activity to either campaign, except for a handful of “joke” bots created by Mrs. Clinton’s campaign, they noted.
However, there was evidence that the mystery chatbots were part of an organized effort.
“There does seem to be strategy behind the bots,” Dr. Howard said. “By the third debate, Trump bots were launching into their activity early and we noticed that automated accounts were actually colonizing Clinton hashtags.”
A hashtag is used to indicate a Twitter post’s topic. By adopting hashtags relating to Mrs. Clinton, the opposition bots were most likely able to wiggle their way into an online conversation among Clinton supporters.
After the election, the bot traffic declined rapidly, with the exception of some pro-Trump programs that gloated, “We won and you lost,” Dr. Howard said.
Trump campaign officials did not respond to requests for comment. Twitter executives argued that few people followed such programs, so their posts would reach only users who searched for particular hashtags.
“Anyone who claims that automated spam accounts that tweeted about the U.S. election had an effect on voters’ opinions or influenced the national Twitter conversation clearly underestimates voters and fails to understand how Twitter works,” said Nick Pacilio, a Twitter spokesman.
The researchers based their study on a collection of about 19.4 million Twitter posts gathered in the first nine days of November. They selected tweets based on hashtags identifying certain subjects and identified automated posting by finding accounts that posted at least 50 times a day.
“For example, the top 20 accounts, which were mostly bots and highly automated accounts, averaged over 1,300 tweets a day and they generated more than 234,000 tweets,” the researchers noted. “The top 100 accounts, which still used high levels of automation, generated around 450,000 tweets at an average rate of 500 tweets per day.”
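The report does not publish the researchers' code, but the threshold heuristic they describe — flagging accounts that average at least 50 tweets a day as highly automated — can be sketched as follows. The account names, tweet counts and function name here are invented for illustration:

```python
# Illustrative sketch of the report's stated heuristic: an account is
# treated as "highly automated" if it averages 50 or more tweets a day
# over the observation window. This is not the researchers' actual code.

from collections import Counter

DAILY_THRESHOLD = 50  # tweets per day, per the heuristic described in the report
DAYS_OBSERVED = 9     # the study covered the first nine days of November

def flag_automated(tweet_authors, days=DAYS_OBSERVED, threshold=DAILY_THRESHOLD):
    """Return {account: average daily tweets} for accounts meeting the threshold."""
    counts = Counter(tweet_authors)  # total tweets per account
    return {account: total / days
            for account, total in counts.items()
            if total / days >= threshold}

# Hypothetical sample: one account tweets 500 times over the nine days
# (about 55.6 a day, above the threshold); another tweets 90 times
# (10 a day, below it).
sample = ["bot_account"] * 500 + ["human_account"] * 90
flagged = flag_automated(sample)
# Only "bot_account" is flagged as highly automated.
```

A fixed per-day threshold like this is a blunt instrument — it would miss slower bots and could flag unusually prolific humans — which is consistent with the researchers' careful wording of "bots and highly automated accounts" rather than bots alone.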
The Oxford researchers had previously reported that political chatbots had played a role in shaping the political landscape that led to Britain’s “Brexit” vote.
The researchers have coined the term “computational propaganda” to describe the explosion of deceptive social media campaigns on services like Facebook and Twitter.
In a previous research paper, Dr. Howard and Bence Kollanyi, a researcher at Corvinus University of Budapest, described how political chatbots had a “small but strategic role” in shaping the online conversation during the run-up to the Brexit referendum.
The bot managers seem to repurpose the programs as well. During the British campaign, the researchers discovered that a family of bots that had been tweeting about Israeli-Palestinian issues for three or four years had suddenly become pro-Brexit. After the vote, the bots returned to their original issue.
In the case of the American election, the researchers noted that “highly automated accounts — the accounts that tweeted 450 or more times with a related hashtag and user mention during the data collection period — generated close to 18 percent of all Twitter traffic about the presidential election.”
They also noted that bots tend to circulate negative news much more effectively than positive reports.
One of the consequences of the intense social media campaigns will be a rise in what social scientists call “selective affinity.”
“Clinton supporters will cut the Trump supporters out of their network, and Trump supporters will do the same,” Dr. Howard said. “The polarization of the election is going to make this stuff worse as we self-groom our news networks.”