She Warned of ‘Peer-to-Peer Misinformation.’ Congress Listened.

In 2016, they monitored thousands of Twitter accounts that suddenly started using bots, or automated accounts, to spread salacious stories about the Clinton family. They watched as multiple Facebook pages, appearing out of nowhere, organized to simultaneously create anti-immigrant events. Nearly all of those watching were hobbyists, logging countless hours outside their day jobs.


Colin Stretch, general counsel of Facebook, left, Sean Edgett, acting general counsel of Twitter, and Kent Walker, senior vice president and general counsel of Google, testified earlier this month at a Senate Intelligence Committee hearing examining social media influence in the 2016 elections. Credit: Eric Thayer for The New York Times

“When I put it all together and started mapping it out, I saw how big the scale of it was,” said Jonathan Albright, who met Ms. DiResta through Twitter. Mr. Albright published a widely read report that mapped, for the first time, connections between conservative sites putting out fake news. He did the research as a “second job” outside his position as research director at the Tow Center for Digital Journalism at Columbia University.

Senate and House staff members, who knew of Ms. DiResta’s expertise through her public reports and her previous work advising the Obama administration on disinformation campaigns, had reached out to her and others to help them prepare for the hearings.

Rachel Cohen, a spokeswoman for Senator Mark Warner, Democrat of Virginia, said in a statement that researchers like Ms. DiResta had shown real insight into the platforms, “in many cases, despite efforts by some of the platforms to undermine their research.” Mr. Warner is a member of the Senate Intelligence Committee.

One crucial line of questioning, on how much influence Russian-bought advertisements and content had on users, was the result of work by Ms. DiResta and others using a Facebook-owned tool. “Facebook has the tools to monitor how far this content is spreading,” Ms. DiResta said. “The numbers they were originally providing were trying to minimize it.”

Indeed, at the congressional hearings, the tech companies admitted that the problem was far larger than they had originally said. Last year, Mark Zuckerberg, Facebook’s chief executive, said it was a “crazy idea” that misinformation on Facebook influenced the election. But the company acknowledged to Congress that more than 150 million users of its main site and a subsidiary, Instagram, potentially saw inflammatory political ads bought by a Kremlin-linked company, the Internet Research Agency.

Ms. DiResta contended that that was still just the tip of the iceberg. Minimizing the scope of the problem was “a naïve form of damage control,” she said. “This isn’t about punishing Facebook or Twitter. This is us saying, this is important and we can do better.”


Ms. DiResta became interested in misinformation on social media while researching the anti-vaccine movement. Credit: Jason Henry for The New York Times

In response, Facebook said it had begun organizing academic discussions on disinformation.

“We regularly engage with dozens of sociologists, political scientists, data scientists and communications scholars, and we both read and incorporate their findings into our work,” said Jay Nancarrow, a Facebook spokesman. “We value the work of researchers, and we are going to continue to work with them closely.”

A graduate of Stony Brook University in New York, Ms. DiResta wrote her college thesis on propaganda in the 2004 Russian elections. She then spent seven years on Wall Street as a trader, watching the slow introduction of automation into the market. She recalled the initial fear of overreliance on algorithms, as there were “bad actors who could come in and manipulate the system into making bad trades.”

“I look at that now and I see a lot of parallels to today, especially for the need for nuance in technological transformations,” Ms. DiResta said. “Just like technology is never leaving Wall Street, social media companies are not leaving our society.”

Ms. DiResta moved to San Francisco in 2011 for a job with the venture capital firm O’Reilly AlphaTech Ventures. But it wasn’t until the birth of her first child a few years later that Ms. DiResta started to examine the dark side of social media.

“When my son was born, I began looking into vaccines. I found myself wondering about the clustering effects where the anti-vaccine movement was concentrated,” Ms. DiResta recalled. “I was thinking, ‘What on earth is going on here? Why is this movement gaining so much momentum here?’”

She started tracking posts made by anti-vaccine accounts on Facebook and mapping the data. What she discovered, she said, was that Facebook’s platform was tailor-made for a small group of vocal people to amplify their voices, especially if their views veered toward the conspiratorial.

“It was this great case study in peer-to-peer misinformation,” Ms. DiResta said. Through one account she created to monitor anti-vaccine groups on Facebook, she quickly realized she was being pushed toward other anti-vaccine accounts, creating an echo chamber in which it appeared that viewpoints like “vaccines cause autism” were the majority.

Soon, her Facebook account began promoting content to her on a range of other conspiratorial ideas, from claims that the earth is flat to the belief that “chemtrails,” the trails left in the sky by planes, are chemical agents being sprayed on an unsuspecting public.

“So by Facebook suggesting all these accounts, they were essentially creating this vortex in which conspiratorial ideas can just breed and multiply,” Ms. DiResta said.

Her published findings on the anti-vaccine movement brought her to the attention of the Obama administration, which reached out to her in 2015, when officials were examining radical Islamist groups’ use of online disinformation campaigns.

She recalled a meeting with various tech companies at the White House in February 2016 where chief executives, policy leaders and administration officials were told that American-made social media platforms were key to the dissemination of propaganda by ISIS.

It was during that time that she first met Jonathan Morgan, a fellow social media disinformation researcher who had published papers on how the Islamic State spreads its propaganda online.

“We kept saying this was not a one-off. This was a toolbox anyone can use,” Ms. DiResta said. “We told the tech companies that they had created a mass way to reach Americans.”

A year and a half later, they hope everyone is finally listening. “I think we are at this real moment, where as a society we are asking how much responsibility these companies have toward ensuring that their platforms aren’t being gamed, and that we, as their users, aren’t being pushed toward disinformation,” Ms. DiResta said.

