Why Facebook and Twitter Can’t Be Trusted to Police Themselves

By RENEE DIRESTA and TRISTAN HARRIS


Google, Facebook and Twitter took a beating on Wednesday while testifying before the Senate and House Intelligence Committees about their role in enabling Russia’s interference in the 2016 election. “You bear this responsibility,” California’s Senator Dianne Feinstein lectured the three company lawyers in one heated exchange. “You’ve created these platforms. And now they are being misused. And you have to be the ones to do something about it. Or we will.”

The companies deserved the harsh treatment. As a social media disinformation researcher with Data for Democracy and a former Google design ethicist, respectively, we think technology platforms have a responsibility to shield their users from manipulation and propaganda. So far, they have done a terrible job of it. Even worse, they have repeatedly covered up how much propaganda actually infiltrates their platforms.


Today, what we know about how disinformation spreads through social networks is due to the hard work of outside experts: researchers, journalists and think tanks, not the tech companies themselves. In 2015, researchers were writing about ISIS bots spreading jihadi propaganda on Twitter and posting recruiting videos on YouTube. Technology companies took the most egregious content seriously, but initially did little to disrupt the terrorist network. This year, once again, outsiders have taken the lead in exposing how the Internet Research Agency, a Russian company that conducts information operations on behalf of the Kremlin, purchased and disseminated propaganda meant to exploit American societal divisions during the election. But the official responses from the platforms have come from the same playbook: They deny, they diminish and they attempt to discredit the research.
