How Facebook Hinders Misinformation Research

The platform strictly limits and controls data access, which stymies scientists

When the world first heard that Russia had used Facebook ads in attempts to interfere with the 2016 U.S. elections, computer scientists and cybersecurity experts took it as a call to action. For the past four years we have been studying how hate and disinformation spread online so that independent researchers can build stronger defenses to protect the public. But as we have tried to conduct this basic science, we have met steep resistance from the primary platform we study: Facebook. Our own accounts were shut down in 2021, another sign of the social media company’s rejection of scrutiny.

Facebook wants people to see it as “the most transparent platform on the Internet,” as its vice president of integrity put it in August 2021. But in reality, it has set up nearly insurmountable roadblocks for researchers seeking shareable, independent sources of data. It’s true that Facebook does provide researchers with some data: It maintains a searchable online ad library and allows authorized users to download limited information about political ads. Researchers have also been able to use Facebook’s business analytics tools to glean some information about the popularity of unpaid content. But the platform not only sharply limits access to these tools but also aggressively moves to shut down independent efforts to collect data.

This is not just a spat between a social media platform and the people who study it. The proliferation of online misinformation has been called an “infodemic,” which, like a virus, grows, replicates and causes harm in the real world. Online misinformation contributes to people’s hesitancy to wear masks or get vaccinated to help prevent the spread of COVID-19. It contributes to distrust in the soundness of our election system. To reduce these harms, it is vital that researchers be able to access and share data about social media behavior and the algorithms that shape it. But Facebook’s restrictions are getting in the way of this science.


First, Facebook limits which researchers are permitted to receive platform data and requires them to sign agreements that severely curtail how they access it, as well as how they share it. One example of this problem is the FORT (Facebook Open Research and Transparency) program, which Facebook created for researchers to study ad-targeting data. Despite widely touting this tool, Facebook limits the available data set to a three-month period leading up to the 2020 elections. To access the information, researchers must agree to work in a closed environment, and they are not allowed to download the data to share it with other researchers. This means others are unable to replicate their findings, a core practice in science that is essential to building confidence in results.

Many scientists have a problem with these limitations. Princeton University misinformation researchers have described problems with FORT that led them to scrap a project using the tool. Of specific concern was a provision that Facebook had the right to review research before publication. The researchers feared this rule could be used to prevent them from sharing information about ad targeting in the 2020 elections.

Second, Facebook aggressively moves to counter independent sources of data about its platform—and our team is a good example. In 2020 we built a tool we call Ad Observer, a citizen science browser extension that allows consenting users to share with us limited and anonymous information about the ads that Facebook shows them. The extension communicates with our project Cybersecurity for Democracy, sending basic information, such as who paid for the ad and how long it ran. It also reports how advertisers target the ads, an issue that researchers and journalists have exposed as a vector in the spread of misinformation. For ethical reasons, we do not collect personal information about people who share the ads they see. And for scientific reasons, we do not need to—everything we need to know to answer our research questions is contained in public information that we are gathering with consent.
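To make the scope of this collection concrete, here is a minimal sketch of the kind of anonymized ad record such an extension might report. The field names and the daysObserved helper are illustrative assumptions for this sketch, not Ad Observer’s actual schema or code; the point is simply that nothing in the record refers to the person who saw the ad.

    // Hypothetical sketch: illustrative field names, not Ad Observer's real schema.
    interface ObservedAd {
      adId: string;        // the platform's public identifier for the ad
      paidForBy: string;   // the "Paid for by" disclosure shown with the ad
      firstSeen: string;   // ISO date the extension first observed the ad
      lastSeen: string;    // ISO date the extension last observed the ad
      targeting: string[]; // targeting criteria the platform discloses for the ad
      // Deliberately absent: anything that identifies the volunteer who saw the ad.
    }

    // How long the ad has been observed running, in whole days.
    function daysObserved(ad: ObservedAd): number {
      const elapsedMs = Date.parse(ad.lastSeen) - Date.parse(ad.firstSeen);
      return Math.round(elapsedMs / (1000 * 60 * 60 * 24));
    }

    // Example record of the kind a volunteer's browser might submit.
    const example: ObservedAd = {
      adId: "1234567890",
      paidForBy: "Example Advocacy Group",
      firstSeen: "2020-09-01",
      lastSeen: "2020-10-15",
      targeting: ["Location: United States", "Age: 18 and older"],
    };

    console.log(`${example.paidForBy} ran ad ${example.adId} for about ${daysObserved(example)} days.`);

Because every field in such a record is information the platform itself displays alongside the ad, aggregating these reports across consenting volunteers reveals platform-level patterns without exposing anything about any individual user.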

Even these limited data about ads have been tremendously helpful for our research, and the project demonstrates the necessity of independent auditing of social media platforms. With the data collected by our volunteers, we were able to identify ads promoting the conspiracy theory QAnon and far-right militias, as well as demonstrate that Facebook failed to identify approximately 10 percent of political ads that ran on its platform. And we have published our data so other researchers can work with them, too.

In response, Facebook shut down our personal accounts in August 2021. This prevented Cybersecurity for Democracy from accessing even the limited transparency information the platform provides to researchers. The company insinuated that its actions were mandated by an agreement it entered into with the Federal Trade Commission regarding user privacy. The FTC responded swiftly, telling Facebook that the platform is wrong to block our research in the name of its agreement with the agency: “The consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising.”

Facebook has not backed down from its decision to suspend our accounts, and it has placed other researchers in the crosshairs. In August 2021 the Germany-based project AlgorithmWatch announced that it had discontinued a project that used crowdsourced data to monitor how Instagram (a platform also owned by Facebook) treated political posts and other content. In a statement, AlgorithmWatch noted that Facebook had cited privacy concerns with its research.

So where do we go from here? Of course, we think Facebook should reinstate our accounts and stop threatening other legitimate researchers. In the long term, however, scientists cannot rely on limited voluntary transparency measures from the platforms we follow. Researchers and journalists who study social media platforms in a privacy-shielding way need better legal protections so that companies such as Facebook are not the ones deciding what research can go forward. Numerous proposals have been brought before the U.S. Congress and the European Union on how to strengthen these protections. Now it’s time for lawmakers to take action.

Cybersecurity for Democracy is a research-based, nonpartisan and independent effort to expose online threats to our social fabric—and recommend how to counter them. It is part of the Center for Cybersecurity at the New York University Tandon School of Engineering.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Laura Edelson is an assistant professor of computer science at Northeastern University and former chief technologist at the Department of Justice's Antitrust Division.

Damon McCoy is an associate professor of computer science and engineering at the New York University Tandon School of Engineering. He received his Ph.D., M.S. and B.S. in computer science from the University of Colorado Boulder. McCoy is the recipient of a National Science Foundation CAREER award, and he is a former CRA/CCC Computing Innovation Fellow.

This article was originally published with the title “How Facebook Hinders Misinformation Research” in SA Special Editions Vol. 31 No. 5s, p. 72.
doi:10.1038/scientificamericanTruthvsLies0922-72