
So to Speak Transcript: Debating social media content moderation

Jonathan Rauch and Renée DiResta

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: Welcome back to “So to Speak,” the free speech podcast, where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. I am, as always, your host, Nico Perrino. Now, over the summer, I received a pitch from regular “So to Speak” guest Jonathan Rauch. He wanted to debate the idea that social media companies have a positive obligation to moderate the speech they host, including based on content and sometimes viewpoint.

Jonathan recognized that his view on the issue may be at odds with some of his friends and allies within the free speech space, so why not have it out on “So to Speak”? It was a timely pitch, after all. For one, the Supreme Court was considering multiple high-profile cases involving social media content moderation: two cases dealing with states’ efforts to limit social media companies’ ability to moderate content, and another case dealing with allegations that the government pressured or “jawboned” social media companies into censoring constitutionally protected speech.

Of course, there was, and still is, an active debate surrounding Elon Musk's purchase of Twitter—now called X—and his professed support for free speech on the platform. And finally, a new book was published in June by Renée DiResta that takes a look at social media movements and their ability to destabilize institutions, manipulate events, and, in her telling, distort reality.

That book is called “Invisible Rulers: The People Who Turn Lies into Reality.” Renée may be a familiar name to some in the free speech world. Her work examines rumors and propaganda in the digital age, and most famously, or perhaps infamously, depending on how you look at it, Renée was the technical research manager at the Stanford Internet Observatory, which captured headlines for its work tracking and reporting on election and COVID-related internet trends.

And as part of that effort, the observatory submitted tickets to social media companies flagging content the observatory thought might violate their content moderation policies. For that work, particularly the ticketing work, Renée is seen by some as a booster of censorship online—a description I suspect she rejects. Fortunately, Renée is here with us today to tell us how she sees it. Joined, of course, by Jonathan Rauch, who is a writer, author, and senior fellow at the Brookings Institution. He may be best known to our audiences as the author of the 1995 classic “Kindly Inquisitors: The New Attacks on Free Thought,” and the 2021 “The Constitution of Knowledge: A Defense of Truth.” Renée and Jonathan, welcome to the show.

Renée DiResta: Thanks for having me.

Jonathan Rauch: Happy to be here.

Nico Perrino: So, Jonathan, as the precipitator of this conversation, let's start with you. What is the general framework for how you think about social media content moderation?

Jonathan Rauch: Well, let's begin, if it's okay, with what I—and I think many of your listeners and folks at FIRE—agree on, which is that it is generally not a good idea for social media companies to pull a lot of stuff offline because they disagree with it or because they don't like it. That usually only heightens the visibility of the material that you are taking down, and it should be treated as a last resort. So, one reason I wanted to do this is I noticed a steady disagreement I was having with friends in the free speech community, including a lot of FIRE folks, who apply a First Amendment model to social media platforms. You know, X and Facebook and Instagram and the rest.

And they use the term censorship as a way to describe what happens when a company moderates content. And the lens they apply is: unless this is an absolute violation of a law, it should stay up, because these companies shouldn't be in the business of, quote-unquote, censorship. Well, there are some problems with that framework. One of them is that social media companies are a hybrid of four different kinds of institutions.

One of them is, yes, they are platforms, which is what we call them. They are places for people to express themselves. And in that capacity, sure, they should let a thousand flowers bloom, but they are three other things at the same time. The first is they’re a corporation, so they have to make a profit, which means that they need to attract advertisers, which means that they need to have environments that advertisers want to be in, and therefore users want to be in.

Second, they are communities, meaning that they are places where people have to want to come and feel safe and like the product. And third, they are publishers. Publishers aggregate content, but then curate it in order to assemble audiences to sell to advertisers. Now, in those three capacities that are not platforms, they’re not only allowed to moderate content and pick and choose and decide what’s going to be up there and what’s not, and what the rules and standards are going to be—they must do that. They are positively obligated to do that. And if they don't, they will fail.

So that means this is a wicked hard problem because, on the one hand, yes, free speech values are important. We don't disagree about that. But on the other hand, just saying, "Content moderation bad, boo, hiss"—that will never fly. So we are in a conversation where what we need to be talking about is getting content moderation right or doing it better, not doing away with it.

Nico Perrino: Renée, I'd love to get your thoughts on content moderation generally. Presumably, you think there's some sort of ethical imperative, like Jonathan, to moderate content. That was part of your work with the Stanford Internet Observatory, right?

Renée DiResta: Yes, though not in the way that some of you all have framed it. So, let's just dive right in with that. So I agree with John and what he said. Content moderation is an umbrella term for an absolutely vast array of topical areas that platforms choose to engage around. Some of them are explicitly illegal, right? CSAM, child sexual abuse material, terrorist content. There are rigorous laws and platform determinations where they do take that kind of content down. Then there are the things that are what Daphne Keller refers to as lawful but awful—things like brigading, harassment, pro-anorexia content, cancer quackery, things that are perceived to have some sort of harm on the public.

That question of what is harmful, I think, is where the actual focus should be: what defines a harm, and how should we think about that? There are other areas in content moderation that do refer to particular policy areas that the platforms choose to engage in, oftentimes in bounded ways. Right. I think it's also important to emphasize that these are global platforms serving a global audience base. And while a lot of the focus on content moderation here in the U.S. is viewed through the lens of the culture wars, the rules that they put in place must be applicable to a very, very broad audience. So, for example, the set of policies that they create around elections apply globally. The set of policies that they created around COVID applied globally.

I don't think that they're always good. I think that there are areas where the harm is too indeterminate, too fuzzy, like the lab leak hypothesis. The decision to impose a content block on that during COVID, I think, was a very bad call because there was no demonstrable harm, in my opinion, that was an outgrowth of that particular moderation area. When they moderate, as Jon alluded to, they have three mechanisms they can use for enforcement. There is “remove,” which, as he alludes to, is the takedowns, right? I agree; for years I've been writing that takedowns just create a backfire effect. They create forbidden knowledge. They're largely counterproductive. But then there are two others. One is "reduce," where the platform temporarily throttles something or permanently throttles it. And depending on what it is, right, spam is often rigorously throttled. And then “inform” is the last one. And “inform” is where they'll put up an interstitial or a fact check label or something that, in my opinion, adds additional context—that, again, in my opinion, is a form of counter speech.

These three things, content moderation writ large, have all been reframed as censorship. That's where I think you're not having a nuanced conversation among people who actually understand either the mechanisms or the harms or the means of engagement and enforcement around it. You're having—what we might say—a rather propagandistic redefinition of the term, particularly from people who are using it as a political cudgel to activate their particular political fandom around the sort of grievance narrative that they’ve spun around it.

Nico Perrino: Well, what I want to get at is whether, normatively, you think content moderation is an imperative, Jonathan, because you talk about how it's essential for creating a community, to maintain advertisers, for other reasons. But you can build a social media company around a model that doesn't require advertisers to sustain itself—for example, a subscription model. You can build your community by professing free speech values. For example, Twitter, when it first got started, said it was the free speech wing of the free speech party. I remember Mark Zuckerberg gave a speech at Georgetown, I believe in 2018, talking about how Facebook is a free speech platform, making arguments for the imperative of free speech so they can define their communities. It seems like they're almost trying to define them in multiple ways. And by trying to please everyone, they're not pleasing anyone. So for a platform like X now, where Elon Musk says that free speech must reign—and we've talked extensively on this podcast about how sometimes it doesn't reign—but I mean, do you think it's okay to have a platform where pretty much any opinion is allowed to be expressed? Or do you see that as a problem more broadly for society?

Jonathan Rauch: Yes, I think it is good to have a variety of approaches to content management. And one of those certainly should be, if companies want to be—you know, I'm a gay Jew, so I don't enjoy saying this—but if one of them is going to be a Nazi, white supremacist, anti-Semitic content platform, it can do that. Normatively, I don't like it, if that's what you're asking. But on the other hand, also normatively, it is important to recognize that these are private enterprises, and the First Amendment and the ethos of the First Amendment protects the freedom to edit and exclude just as much as it does the freedom to speak and include.

And that means that normatively, we have to respect both kinds of platforms. And the point that I think Renée and I are making is that the big commercial platforms, which are all about aggregating large numbers of users and moving large amounts of dollars and content, are going to have to make decisions about what is and is not allowed in their communities. The smaller platforms can do all kinds of things.

Nico Perrino: Renée, I mean, presumably you have a normative position on topics that the social media companies should moderate around. I mean, otherwise why would the Election Integrity Partnership or your Virality Project be moderating content at all or submitting tickets to these social media companies? Not moderating content yourself, of course, but submitting tickets to the social media companies identifying policies or posts that violate the companies' policies. Presumably you support those policies, otherwise you wouldn’t be submitting URLs to the companies, right?

Renée DiResta: So, the Election Integrity Partnership—let me define the two projects as they actually are, not as—unfortunately, your blog had some, you know, erroneous information about them as well. So, the Election Integrity Partnership was started to look at election rumors related to the very narrowly scoped area of voting misinformation, meaning things that said vote on Wednesday, not on Tuesday. Text-to-vote suppression efforts, that sort of thing. It did not look at what candidate A said about candidate B. It did not have any opinion on Hunter Biden's laptop. It did, in fact, absolutely nothing related to that story. The other big topical area that the Election Integrity Partnership looked at was narratives that sought to delegitimize the election absent evidence, or preemptive delegitimization. That was the focus of the Election Integrity Partnership.

The platforms had independently set policies, and we actually started the research project by going and making a series of tables—and these are, again, public in our 200-and-something-page report that sat on the internet for two years before people got upset about it—and we sort of coded, basically: here's the policy. Here are the platforms that have implemented this policy. You see a lot of similarities across them. You see some differences, which, to echo John's point, I think that in an ideal world, we have a proliferation of marketplaces and people can go and engage where they choose, and platforms can set their terms of service according to, you know, again, moderating explicitly illegal content. But they can set their own sort of speech and tolerance determinations for other topic areas.

So, within the realm of this sort of rubric of these are the policies and these are the platforms that have them, as we observed election narratives from about August until late November of 2020, we would every now and then—so students would—this was a student-led research project. It wasn’t an AI censorship Death Star superweapon or any of the things you've heard. It was students who would sit there and file tickets when they saw something that fell within our scope. Meaning: this is content that we see as interesting in the context of our academic research project on these two types of content, right? The procedural and the delegitimization content. When there were things that began to go viral that also violated a platform policy, there were occasionally determinations made to tag a platform into them.

So, that was us as academics with free speech rights, engaging with another private enterprise with its own moderation regime, if you will. And we had no power to determine what happened next. And as you read in our Election Integrity Partnership report after the election—and so after the election was over and, you know, February or so of the following year—we went and we looked at the 4,000 or so URLs that we had sent in this kind of escalation ticketing.

So, what wound up happening was, 65% of the URLs, when we looked at them after the fact, had not been actioned at all.

Renée DiResta: Nothing had happened. Of the 35% that had been actioned, approximately 13% came down, and the remainder stayed up and got a label. So overwhelmingly, the platforms did nothing, right? Which is interesting because they have policies, but they don't seem to be enforcing them uniformly. Or we got it wrong. Right? That's possible too. But when they did enforce, they erred on the side of, again, what I consider to be counter speech: slapping a label, appending a fact check, doing something that indicates... And oftentimes when you actually read the text, it says, this claim is disputed. It's a very neutral “this claim is disputed.” Read more in our election center here. And that was the project, right? So, the ways that people have tried to make this controversial include alleging that it was government funded. It was not. That I was a secret government agent.

Nico Perrino: The government was involved in some respects, right?

Renée DiResta: The government was involved in it in the sense that the Global Engagement Center sent over a few tickets, under 20, related to things that it thought were foreign interference. Now, keep in mind, we also exercised discretion. So, just because the Global Engagement Center sends a ticket doesn't mean that we're like, oh, let's jump on this. Let's rush it over to a platform. No, that didn't happen at all. And what you see in the Jira, which we turned over to Jim Jordan and he very helpfully leaked—so now anyone can go read it—is you'll see a ticket that comes in. And again, just because it comes in doesn't mean that we take it seriously. We are doing our own independent analysis to determine whether we think a) this is real and important, b) this rises to the level of something a platform should know.

So, there's several degrees of analysis. And again, you can see the very, very detailed notes in the Jira. Again, mostly by students, sometimes by postdocs. Or I was an analyst too on the project. I was a second-tier analyst. And then a manager would have to make a determination about whether it rose to the level of something that a platform should see.

The other government-adjacent group that we engaged with was a nonprofit, actually, one that engaged with state and local election officials. So, when state and local election officials are seeing claims in their jurisdiction—and again, this is all 50 states represented in this consortium, the Election Integrity ISAC—when they see something in their jurisdiction that says, for example, a tweet saying, "I'm working the polls in Pennsylvania and I'm shredding Trump ballots," which is an actual thing that happened, that's the kind of thing where they get very concerned about that. They get very upset about that. They can engage with CIS. They can file a ticket with us. And again, the ticketing was open to the RNC. It was open to the DNC. It was a very broad invitation to participate. And what they could do when they sent something to us is, again, we would evaluate it and we would decide if we thought that further action was necessary.

There is nothing in the ticketing in which a government agency sends something to us and says, "You need to tell the platforms to take this down." So again, for a lot of what FIRE wrote about in the context of jawboning, we had no power to censor. We're not a government agency. We weren't government funded. I'm not a secret CIA agent.

Nico Perrino: I don’t think we said that (laughs).

Renée DiResta: No (laughs). But other people that unfortunately have been boosted as defenders of free speech very much did say that. And that's why when I'm trying to explain what actually happened versus the right-wing media narrative and the sort of Substack parody narrative of what happened, like it's not borne out by the actual facts.

Nico Perrino: I think what people have a problem with—because telling people to vote on a certain day and doing it with the intent to deceive is not First Amendment-protected activity. There are some First Amendment exceptions for this. And of course, I'm just talking about the First Amendment here broadly. I know these are private platforms. They can do what they please.

CSAM—child sexual abuse material—is not protected under the First Amendment. I think what people have a problem with is the policing of opinion. Even if it's wrongheaded, it's dumb, and it can lead to deleterious effects throughout society. It can destabilize. So, when you're talking about, you know, your work in the Election Integrity Project, and you're starting by saying, like, people deceiving about where voting locations are or the day to vote or "text in your vote here," that makes sense. But submitting tickets about trying to delegitimize the election before it happens—that’s an expression of opinion. Now, we all in this room, I suspect, think that that's a bad idea and it's dumb. But it's still the expression of opinion. And I think that's where folks get most frustrated.

Renée DiResta: Can I go back to this one second? You filed an amicus brief in “NetChoice,” and you have a sentence in there, describing the sort of state theory in “NetChoice,” that says: “First, it confuses private editorial decisions with censorship.” So, let's be totally clear. We had no power over Facebook. I have no coercive power over a tech platform. If anything, as you’ve seen in my writing over the years, we're like constantly appealing to them to do basic things like share your data or be more transparent.

So first, there is no coercive power. Second, the platform sets its moderation policies. The platform makes that decision. And you, in your—or not you personally—but FIRE has acknowledged the private editorial decisions, the speech rights of the platforms, the right of the platforms to curate what shows up. So if the platform is saying, "We consider election delegitimization," and again, this is not only in the United States. These policies are global. "We consider election delegitimization to be a problem. We consider it to be a harm. We consider it to be something that we are going to action." And then we, as a private academic institution, say, "Hey, maybe you want to take a look at this."

Nico Perrino: But you agree with them, presumably, otherwise you wouldn’t be coordinating with them on it.

Renée DiResta: Well, it wasn't—it wasn’t like I was coordinating with them.

Nico Perrino: I mean, okay, so you're an academic institution. You can either research something, right, and learn more about it and study the trends. But then you take the second step where you’re…

Renée DiResta: We exercise our free speech.

Nico Perrino: Yes, and nobody's saying that you shouldn’t be able to do that.

Renée DiResta: Many people have been saying that I shouldn’t be able to do something else. I’ve been subpoenaed and sued multiple times.

Nico Perrino: I’m not saying you shouldn’t be able to...

Renée DiResta: Okay.

Nico Perrino: And in fact, we have reached out to certain researchers who are involved in the project, who are having their records FOIAed, for example. And we've always created a carve-out for public records.

Jonathan Rauch: Can I try a friendly amendment here to see if we can sort this out? You’re both right. Yes, people get uncomfortable, especially if there is a government actor somewhere in the mix, even in an advisory or informal capacity, when posts have to do with opinion. You used the word "policing opinion." I don’t like that because we’re not generally talking about taking stuff down. We’re talking about counter speech—is that "policing opinion"? But on the other hand, the fact that something is an opinion also does not mean that it’s going to be acceptable to a platform. There are a lot of places that are going to say, "We don’t want content like 'Hitler should have finished the job.'" That’s an opinion. It’s constitutionally protected. And there are lots of reasons why Facebook and Instagram and others might want to take it down, or dis-amplify it. And if it’s against their terms of service, and we know it’s against their terms of service, it is completely legitimate for any outside group to go to Facebook and say, "This is against your terms of service. Why is it here?" and hold them accountable to their terms of service. All of that is fine. It’s protected. If I’m at FIRE, I’m for it. If the government’s doing it, it gets more difficult. But we can come back to that.

So, question for you, Nico: how much better would you and your audience feel if everything that a group—let’s say, an academic group or outsiders calling things to platforms’ attention—did was done all the time in full public view? There’s no private communication with platforms at all. Everything is put on a public registry as well as conveyed to the platform, so everyone can see what it is that the outside person is calling to the platform's attention. Would that solve the problem by getting rid of the idea that there’s subterfuge going on?

Nico Perrino: Well, I think so. And I might take issue with the word "problem," right. Like, I don’t know that academics should be required to do that, right? To the extent it’s a voluntary arrangement between academic institutions and private companies, I think the confusion surrounding, like, the Election Integrity Partnership...

Jonathan Rauch: The problem is the confusion, not the...

Nico Perrino: And the fact that there is a government somewhere in the mix, right now, of course...

Jonathan Rauch: Set that aside. Separate issue. Yes. But just in terms of people doing what the Stanford Internet Observatory or other private actors do, of bringing stuff to platforms' attention, would it help to make that more public and transparent?

Nico Perrino: I’m sure it would help, and I’m sure it would create a better sense of trust among the general public. But again, I don’t know that it’s required or that we think it’s a good thing, normatively. I think it probably is, but I can’t say definitively, right?

Renée DiResta: On the government front, I think we’re in total agreement, right? You all have a bill, or a bill template—I’m not sure where we are in the legislative process—which I agree with, just to be clear. And as—

Jonathan Rauch: Yeah, me too. I think it's the right framework.

Renée DiResta: I think that’s the reason I argued with Greg about this on, like, Threads or something. But, no, I think it’s the right framework. Look, the platforms have proactively chosen to disclose government takedown requests. We’ve seen them from, you know, Google—you can go and you can see—there’s a number of different areas where when the government is making the request, I think the transparency is warranted, and I have no problem with that, with that being codified law in some way. The private actor thing, you know, it’s very interesting because we thought that we were being fairly transparent, and that we had a Twitter account, we had a blog, we had—I mean, we were constantly—I mean, I was...

Nico Perrino: You had a 200-page report.

Renée DiResta: We had a 200-page report where you can—I mean, you can... The only thing we didn’t do, we didn’t release the Jira—not because it’s secret, but because now that Jim Jordan has helpfully released it for you, you can go try to read it right, and you’re going to see just a lot of people debating, you know, "Hey, what do we think of this? What do we think of this?"

Jonathan Rauch: The Jira is your internal...

Renée DiResta: The Jira was just an internal ticketing system. It’s a project management tool. And, you know, again, you can go read it, you can see the internal debates about "Is this in scope? Is this of the right threshold?" I’ll say one more thing about the Virality Project, which was... The Virality Project was a different type of project. The Virality Project sought to put out a weekly briefing, which again went on the website every single week in PDF form. Why did that happen? Because I knew that at some point we were going to get, you know, some records request. We are not subject to FOIA at Stanford. But I figured that, again, because the recipients of the briefings that we were putting up did include anyone who signed up for the mailing list, and government officials did sign up for the mailing list.

Renée DiResta: So people at Health and Human Services or the CDC or the office of the Surgeon General signed up to receive our briefings. So, we put them on the website. Again, anybody could go look at them. And what you see in the briefings is we’re describing the most viral narratives of the week. It is literally as basic as here are the narratives that we considered in scope for our study of election rumors. And, you know, there they are. And we saw the project as how can we enable officials to actually understand what is happening on the internet, because we are not equipped to be counter speakers. We are not physicians, we are not scientists, we are not public health officials, but the people who are don’t necessarily have the understanding of what is actually viral, what is moving from community to community, where that response should come in.

And so, we worked with a group of physicians that called themselves "This Is Our Shot," right? Just literally a bunch of doctors, frontline doctors, who decided they wanted to counter speak, and they wanted to know what they should counter speak about. So again, in the interest of transparency, the same briefings that we sent to them sat on our website for two years before people got mad about them. And then this again was turned into some sort of, "Oh, the DHS established the Virality Project." Complete bullshit. Absolutely not true. The only way that DHS engaged with it, if at all, is if somebody signed up for the mailing list and read the briefings.

Nico Perrino: So, there wasn’t any Jira ticketing system...

Renée DiResta: There was a Jira ticketing system so that we internally could keep track of what was going on.

Nico Perrino: It wasn’t sent on to the platforms?

Renée DiResta: In the Virality Project, I think there were 900 something tickets. I think about a hundred were tagged for the platforms, if I recall.

Nico Perrino: What were those tickets associated with?

Renée DiResta: So, one of the things that we did was we sent out an email to the platforms in advance of the—you know, as the project began, and we said, "These are the categories that we are going to be looking at for this project." And I’m trying to remember what they are off the top of my head: it was like vaccine safety narratives, vaccine conspiracy theories, you know, metal—like, "It makes you magnetic," the "mark of the beast," these sorts of things. Or the other ones—oh—narratives around access, who gets it and when. So, again, these sorts of, like, big, overarching, long-term vaccine hesitancy narratives. We pulled the narratives that we looked at from past academic work on what sorts of narratives enhanced vaccine hesitancy.

And what we did after that was we reached out to the platforms. We said, "These are the categories we’re looking at. Which of these are you interested in receiving tickets on? When something is going viral on your platform—again, that seems to violate your policies," because you’ll recall, they all had an extremely large set of COVID policies. Lab leak was not in scope for us. It’s not a vaccine hesitancy narrative. And so, in that capacity, again, there were about a hundred or so tickets that went to the platforms. And again, they were all turned over to Jim Jordan. And you can go look at them all.

Nico Perrino: This conversation is coalescing around "can" versus "should." I think we’re all in agreement that social media companies can police this content. The question is, should they? Right. So, should they have done...

Jonathan Rauch: I prefer "moderate" to "police."

Nico Perrino: Okay. I mean, but they are out there looking for people posting these narratives or violating their terms of service. So, I mean, we could debate semantics whether "police" is the right word, but they’re out there looking and they’re moderating surrounding this content. Should they? Should they have done all the content moderation they did surrounding COVID, for example?

Renée DiResta: Well, I’ve said I think some of it, like the lab leak, was rather pointless. But again, I didn’t see the risk there, the harm, the impact that justified that particular moderation area. The Facebook Oversight Board, interestingly, wrote a very comprehensive review of Facebook’s COVID policies. And one thing that I found very interesting reading it was that you really see gaps in opinion between people who are in the global majority or the global south and people who are in the United States.

And that comes through. And this again is where you saw a desire for, if anything, more moderation from some of the people who were kind of representing the opinions of, you know, Africa or Latin America, saying, "No, we needed more moderation, not less," versus where moderation had already by that point, beginning in 2017, become an American kind of culture war flashpoint. The very idea that moderation is legitimate had been sort of contested in the United States. That’s not how people see it in Europe. That’s not how people see it in other parts of the world. So, you do see that question of should they moderate and how being in there.

Renée DiResta: I want to address one other thing, though, because for me, I got into this in part looking at vaccine hesitancy narratives as a mom back in 2014.

My cards have always been on the table around, you know, my extremely pro-vaccine stance. But one of the things that I wrote about and talked about in the context of vaccines specifically for many, many, many years—right, it’s all in “Wired,” you can read it—was the idea of platforms having a right to moderate. In my opinion, there’s a difference between what they were doing for a very, very, very long time, which was they would push the content to you. So, you had—you, as a new parent, had never searched for vaccine content. They were like, "You know what you might want to see? You might want to join this anti-vaccine group over here." Right?

So there’s ways in which platforms curate content and have an impact further downstream. The correlation between the sort of rise in vaccine hesitancy online over a period of about a decade—actually, you know, six years, give or take before COVID began—is something that people, including platforms, were very, very concerned about because of its impact on public health, long before the COVID vaccines themselves became a culture war flashpoint.

So, do I think that they have an obligation to take certain—you know, to establish policies related to public health? I think it’s a reasonable ethical thing for them to do, yes. And where I struggle with some of the conversation, you know, from your point of view, I think—or maybe what I intuit is your point of view, based on FIRE's writings on this—is that you both acknowledge that platforms have their own free speech rights, and then I see a bit of a tension here with, "Well, they have their own free speech rights, but we don’t want them to do anything with those speech rights. We don’t want them to do anything with setting curation or labeling or counter speech policies. We just want them to do nothing, in fact." Because then you have this secondary concern—or maybe dual concern—about the speech rights of the users on the platforms. And these two things are in tension for the reasons that John raised when we first started.

Nico Perrino: Well, do you worry that efforts to label, block, or ban content based on opinion, viewpoint—what’s true or false—create martyrs? Supercharge conspiracy theories? You had mentioned, Jonathan, the "forbidden fruit" idea. Or maybe that was you, Renée. I worry that doing so, rather than creating a community that everyone wants to be a part of, creates this sort of erosion of trust. I suspect that the actions taken by social media companies during the COVID era eroded trust in the CDC and other institutions. And I think if the goal is trust and if the goal is institutional stability, it would have been much better to let the debate happen without social media companies placing their thumb on the scale, particularly in the area of emerging science.

Jonathan, I remember we were at a University of California event, right as COVID was picking up, and we were talking about masks and we were talking about just regular cloth face masks. And I think it was you or I who said that like, "Oh, no, those don’t actually work. You need an N95 mask," right? And then that changed, right? Then the guidance was that you should wear cloth masks, that they do have some ameliorating effect.

Andrew Callaghan created a great documentary called “This Place Rules” about January 6th and kind of conspiracy theory movements. And he said in his reporting that when you take someone who talks about a deep state conspiracy to silence him and his followers, and then you silence him and his followers, it only adds to his credibility. Now, here we’re not talking about deep state, we’re talking about private platforms. But I think the idea surrounding trust is still there. So I’d love to get your guys' thoughts on that. Like, sure, we all have an agreement around COVID or the election, for example, but the moderation itself could backfire.

Jonathan Rauch: Well, I’ll make a big point about that and then a narrower point. The big point is I think we’re getting somewhere in this conversation because it does seem to me, correct me if I’m wrong, that the point Renée just made is something we agree on—that there are tensions between these roles that social media companies play.

The first thing I said is there are multiple roles and tensions between them, and that means that simple answers like, "They should always do X and Y," or "They should never do X and Y," are just not going to fly. And if we can establish that as groundwork, we’re way ahead of the game. Until now, it’s all been about what they should or should not ever, ever do. So I’m very happy with that.

So, then there’s the narrower question, which is a different conversation from what they should be able to do in principle, and that’s, "What should they do in practice?"—which is Jonathan, Nico, and Renée all sitting here and saying, "Well, if we were running Facebook, what would the best policy be? How do we build trust with our audience? Did we do too much or too little about this and that?" And the answer to those questions is, "I don’t know." This is a wicked hard problem. I will be happy if we can get the general conversation about this just to the point of people understanding, "This is a wicked hard problem," and that simple bromides about censorship, freedom of speech, and policing speech won’t help us.

Once we’re in the zone of saying, "Okay, how do we tackle this in a better way than we did in COVID?"—and I’m perfectly content to say that there were major mess-ups there; who would deny that?—once we’re having that conversation, we can actually get started on understanding how to improve this situation. And thank you for that.

Nico Perrino: Well, I mean, let’s take a real-world example, right? The deplatforming of Donald Trump. Do you think that was the right call? That’s the first question. The second question is: was that consistent with the platforms' policies?

Renée DiResta: That was an interesting one. That was an interesting one because there was this question around incitement happening right in the context in which it happened. It was as January 6th was unfolding—as I recall it, maybe it was 48 hours later that he actually got shut down.

Nico Perrino: I should add that the question as to whether Donald Trump’s speech on the Ellipse that day met the standard for incitement under the First Amendment is like the hottest debate topic...

Renée DiResta: Right, where do people come down on it?

Nico Perrino: All sides. I mean, both sides.

Renée DiResta: Interesting, interesting.

Nico Perrino: Yeah, there’s not a unity of opinion within the First Amendment community on that one. You have really smart people...

Jonathan Rauch: Is that true in FIRE as well?

Nico Perrino: Yes, yes, it is.

Renée DiResta: That’s interesting. So, within the kind of tech platform—I mean, tech policy—community, there were, like, literally entire conferences dedicated to that. I felt like, as a moderate, I kind of maybe punted. I wrote something in “Columbia Journalism Review” as it came out about, "With great power comes great responsibility."

And one of the things when I talk about moderation enforcement—again, one of the reasons why, with the Election Integrity Partnership, when we went and looked after the fact, "Had they done anything with these things that we saw as violative?"—we found that 60, 65% of the time, the answer was "No." This was an interesting signal to us, because when you look at some of the ways that moderation enforcement is applied, it is usually the lower-level, nondescript, ordinary people that get moderated for inciting-type speech...

Nico Perrino: You talk about borderline speech?

Renée DiResta: Yeah, when the President of the United States does not. Right. There’s a protection given. You’re like too big to moderate.

Nico Perrino: A public figure privilege.

Renée DiResta: Sure, yeah. And I believe...

Nico Perrino: Actually, some of the platforms had that.

Renée DiResta: Absolutely. They did, yes. And this was the one thing where occasionally, you know, you did see every now and then something interesting would come out of the Twitter files. And it would be things like the sort of internal debates about what to do about some of the high-profile figures where there were questions about whether language veered into borderline incitement or violated their hate speech policies or whatever else.

So, there is this question. If I recall correctly, my argument was that if you’re going to have these policies, you should enforce them. And it seemed like this was one of the areas where—you have to remember, also, in the context of the time, there was very significant concern that there would be continued political violence. Facebook had imposed what it calls the break-glass measures. I think I talk about this in my book.

Nico Perrino: Yeah, you do.

Renée DiResta: Yeah. And that’s because—this is, I think, something also worth highlighting—the platforms are not neutral; curation is not neutral. There is no baseline of neutrality that exists somewhere in the world when you have a ranked feed, and you just can’t be neutral. Right? This is a very big argument in tech policy around how much transparency they should be obligated to provide around how they curate—not only how they moderate, but how they curate, how they decide what to show you. Because that’s a different—

Nico Perrino: You have some states that are trying to mandate that by law—that platforms disclose their algorithms—which from FIRE’s perspective would create a compelled speech issue.

Renée DiResta: But see, this is an interesting thing. I was going to raise that with you—the AB 587 court case is an interesting one, right? The California transparency disclosures, where, you know, platforms have their First Amendment rights, but they can't be compelled to actually show people what's going on. But also, maybe they shouldn't be moderating. But if they moderate, they should be transparent, but they shouldn’t be compelled to be transparent.

Like, we wind up in these weird circles here where we’re—I feel like we just get nowhere. We just—we just sort of always point to the First Amendment and say, "No, no, no, we can't do that."

Nico Perrino: Well, in the political sphere, you have some transparency requirements, for example, around political contributions and whatnot.

Renée DiResta: Right, exactly. And I think the question is around how transparency laws should be designed to do the thing that you’re asking, right, the thing that Jon referenced also, which is: if you want to know how often a platform is actually enforcing, or on what topics, right now that is a voluntary disclosure. That is something that an unaccountable private power decides benevolently to do. In Europe, they’re saying, "No, no, no, this is no longer a thing that’s going to be benevolent. It’s going to be mandated," right, for very large online platforms.

Renée DiResta: It’s going to be a topic that’s litigated quite extensively. And it again comes back to, in a free society, there are things that the law shouldn't compel, but that we, as individual citizens, should advocate for. Right? And where is that line? And it always—you can feel uncomfortable advocating for things that the law doesn’t require, but I think that’s just kind of part of living in a free society as well.

Nico Perrino: Jonathan, I would like to get your take on the Trump deplatforming.

Jonathan Rauch: Well, my take is I don’t have a take. It depends on the platform and what their policies are. My general view is a view I got long ago from a man named Alan Charles Kors.

Nico Perrino: FIRE co-founder.

Jonathan Rauch: Yeah, you may have heard of him. And that’s in reference to universities—private universities—which is: yeah, private universities ought to tell us what their values are and then enforce those commitments. So, if a university says, "We are a free speech university, committed to the robust and unhindered pursuit of knowledge through debate and discourse," it should not have a speech code.

Nico Perrino: Sure.

Jonathan Rauch: But if they want to say, "We are a Catholic university, and we enforce certain norms and ideas," fine.

Nico Perrino: Like Brigham Young University.

Jonathan Rauch: You know, so the first thing I want to look at when anyone is deplatformed is, "Okay, what are the rules?" And are they enforcing them in a consistent way? And the answer is, I don’t know the particular rules in the particular places relating to the Donald Trump decision. People that I respect, who have looked at it—now we’re talking about Twitter specifically as it was—have said that Donald Trump was in violation of Twitter’s terms of service and had been multiple times over a long period, and that they were coming at last to enforce what they should have enforced earlier. I can't vouch for that. Reasonable people said it. So what do you think?

Nico Perrino: Well, I think there needs to be truth in advertising if you’re looking at some of these social media platforms. We had talked about Mark Zuckerberg before. I think I have a quote here. He says, "We believe the public has the right to the broadest possible access to political speech, even controversial speech." He says, "Accountability only works if we can see what those seeking our votes are saying, even if we viscerally dislike what they say." Big speech at Georgetown about free speech and how it should be applied on social media platforms.

Then, at the same time, he gets caught on a hot mic with Angela Merkel, who’s asking him what he’s going to do about all the anti-immigrant posts on Facebook. This was when the migrant crisis in Europe was really at its peak in the mid-2010s. I think what really frustrates people about social media is the perception, and maybe the reality, of double standards. And I think that’s what you also see in the academic space as well. So, you have Claudine Gay going before Congress and, I think, giving the correct answer from at least a First Amendment perspective, that context does matter anytime you’re talking about exceptions to speech. In that case, they were being asked about calls for Jewish genocide, which was immediately preceded by discussion of chants of "intifada" or "from the river to the sea," which I think should be protected in isolation if it’s not part of a broader pattern of conduct that would fall under, like, for example, harassment or something.

With the social media companies and Twitter, for example—right?—so you have Donald Trump get taken down, but Iran’s Ayatollah Khamenei is not taken down. You have Venezuelan President Nicolás Maduro, who’s still on Facebook and Twitter. The office of the President of Russia still is operating a Twitter account. Twitter allows the Taliban spokesperson to remain on the platform. You have the Malaysian and Ethiopian prime ministers not being banned despite what many argue was incitement to violence. So, I think it’s these double standards that really erode trust in these institutions, and that lead to the sort of criticism that they've received over the years. And I think it’s why you saw Mark Zuckerberg responding to Jim Jordan and the House committee and saying, "We’re going to kind of stop doing this. We’re going to kind of get out of this game."

Jonathan Rauch: There always will be. I would just—well, first, I would retreat to my main point that I want to leave people with, which is this problem is wicked hard, and simple templates just won’t work. But in response to what you just said, I would point out that the efforts to be consistent and eliminate double standards could lead to more lenient policies, which is what’s happened on Facebook, or less lenient policies. They could, for example, have taken down Ayatollah Khamenei, or Nicolás Maduro, or lots of other people. And I’m guessing FIRE would have said, "Bad, bad, bad. Leave them up." I don’t know. But the search for consistency is difficult.

And if you take your terms of service seriously, and if you're saying, "We're a community that does not allow incitement or hate," or "We’re a community that respects the rights of LGBT people, and defines that as 'a trans woman is a woman,' and to say differently is hate," well, then that means they’re going to be removing or dis-amplifying more stuff.

Nico Perrino: Well, it depends how they define some of those terms.

Jonathan Rauch: Well, that's right. But the whole point of this is that these are going to be very customized processes. And what I’m looking for here is, "Okay, at least tell us what you want, what you think your policy is, and then show us what you’re doing." So, at least we can see, to some extent, how you're applying these policies. And therefore, when we’re on these platforms, we can try to behave in ways that are consistent with these policies without getting slapped in seemingly capricious or random or partisan or biased directions.

Nico Perrino: Do you think the moderation that social media companies did during the pandemic, for example, has led to vaccine hesitancy in the country?

Renée DiResta: That’s a really interesting question. I don’t think I’ve seen anything—I don’t think that that study has been done yet. The question is—it's very hard, I would say, to say that this action led to that increase. One of the things that has always been very fuzzy about this is the idea of—is it the influencers you hear that are undermining the—you know, the vaccines became very partisan, with very clear lines. You can see, expressed in vaccination rates, conservatives having a much lower uptake. Is that because of, you know, some concern about censorship, or is that because the influencers that they were hearing from were telling them not to get vaccinated anyway?

I think it’s also important to note that there was no mass censorship in terms of actual impact writ large on these narratives. I can—you can open up any social media platform and go and look, and you will find the content there. If you were following these accounts, as I was during COVID, you saw it constantly. It was not a hidden or suppressed narrative. This is one of the things that I have found sort of curious about this argument. The idea that somehow every vaccine hesitancy narrative was summarily deleted from the internet is just not true. The same thing with the election stuff. You can go and you can find the delegitimization content up there because, again, most of the time they didn’t actually do anything. So, I have an entire chapter in the book on where I think their policies were not good, where I think their enforcement was middling, where I think the institutions fell down. I mean, I don’t think that COVID was, you know—nobody covered themselves in glory in that moment.

But do I think that the sort of backfire effect of suppression led to increased hesitancy? It’d be an interesting thing to investigate.

Nico Perrino: Do you have any insight into whether content that was posted on YouTube, for example, or Facebook, that mentioned the word COVID during COVID, was de-amplified because there was a big narrative—

Renée DiResta: We can’t see that. And this again is where I think the strongest policy advocacy I’ve done as an individual over the last, you know, seven years or so has been on the transparency front, basically saying we can’t answer those questions. One of the ways that you find out, you know, why Facebook elected to implement break-glass measures around January 6th, for example, comes from internal leaks from employees who worked there. That’s how we get visibility into what’s happening. And so while I understand First Amendment concerns about compelled speech in the transparency realm, I do think there are ways to thread that needle, because the value to the public of knowing, of understanding, you know, what happens on unaccountable private power platforms is worthwhile. In fact, in my opinion, it meets that threshold where the value of what is compelled is more important than the sort of First Amendment prohibitions against compulsion.

Jonathan Rauch: Incidentally, footnote, some of the momentum around compulsion for transparency could presumably be relieved if these companies would just voluntarily disclose more, which they could, and they’ve been asked to do many times, including by scholars who’ve made all kinds of rigorous commitments about what would and would not be revealed to the public.

Nico Perrino: Doesn’t X open-source its algorithm?

Renée DiResta: That doesn’t actually show you how they curate—

Nico Perrino: Yeah, sure. I mean, it shows how the algorithm might moderate content, but it presumably wouldn’t show how human moderators would get involved, right?

Renée DiResta: Well, there are two different things there. One is curation. One is moderation. The algorithm is a curation function, not a moderation function. So, these are two different things that you’re talking about that are both, I think, worthwhile. You know, so some of the algorithmic transparency arguments have been that you should show what the algorithm does. The algorithm is a very complicated series of—of course, it means multiple things depending on which feature, which mechanism, what they’re using machine learning for. So there are the algorithmic transparency efforts, and then there’s basic—and what I think John is describing more of—transparency reports. Now, the platforms were voluntarily doing transparency reports. And I know that Meta continues to— I meant to actually check and see if Twitter did last night, and I forgot. It was on my list of notes.

Nico Perrino: You still call it Twitter too.

Renée DiResta: Sorry, I know. I know. But some people refuse to call it X.

Nico Perrino: No, no. It’s not a refusal. It’s just a—

Renée DiResta: No, no, no. It’s just where it comes to mind. But no, Twitter actually did some—sorry, whatever—did some interesting things regarding transparency, where there was a database, or there is a database, called Lumen. You must be familiar with Lumen.

Nico Perrino: Yes.

Renée DiResta: Lumen is the database where if a government or an entity reached out with a copyright takedown under DMCA, for a very long time, platforms were proactively disclosing this to Lumen. So, if you wanted to see whether it was, you know, a movie studio, a music studio, or the government of India, for example, requesting a copyright—using copyright as the argument, as the justification for a takedown—those requests would go into the Lumen database. Interestingly, when somebody used, I believe, that database and noticed that X was responding to copyright takedowns from the Modi government—this was in April of last year.

Nico Perrino: I think there was a documentary about, for example...

Renée DiResta: Yes. So, and Twitter did comply with that request. There was a media cycle about it. They were embarrassed by it because, of course, the rhetoric around the sort of, you know, free speech relative to what they had just done or what had been, you know, sort of revealed to have been done—and again, they operated in accordance with the law. This is a complicated thing, you know. We can talk about sovereignty if you want at some point. But what happened there though was that the net effect of that was that they simply stopped contributing to the database. So what you’re seeing is the nice-to-have of like, "Well, let’s all hope that our, you know, benevolent private platform overlords simply proactively disclose." Sometimes they do. And then, you know, then something embarrasses them, or the regime changes, and then the transparency goes away.

And so I actually don’t know where Twitter is on this. We could check, but, you know, put it in the show notes or something. But it is an interesting question because there have been a lot of walk-backs related to transparency, in part, I would argue, because of the chilling effect of Jordan’s committee.

Jonathan Rauch: But maybe—maybe for now the best crutch is going to be private outside groups and universities and nonprofits that do their best to look at what’s going up on social media sites, and then compare that with their policies, and report that to social media companies and the public. And that’s exactly what Renée was doing.

Renée DiResta: And look what happened.

Jonathan Rauch: Incidentally, if we want to talk about entities that look at what private organizations are doing regarding their policies, looking for inconsistencies between the policies and the practices, and reporting that to the institutions and saying, "You need to get your practices in line with your policies," we could talk about FIRE, because that’s exactly FIRE’s model for dealing with private universities that have one policy on speech and do something else.

Nico Perrino: Sure.

Jonathan Rauch: It’s perfectly legitimate. And it’s, in many ways, very constructive.

Nico Perrino: Well, that’s one of the reasons we criticize these platforms normatively, right? You do have platforms that say, "We’re the free speech wing of the free speech party," or "We’re the public square." Or you have Elon Musk saying that Twitter, now X, is a free speech platform. But then censorship happens, or, since you guys don’t like using the word censorship in the context of private online platforms, moderation happens, that is inconsistent with those policies. And we will criticize him, as we have. We’ll criticize Facebook, as we have in the past.

Jonathan Rauch: Yes. I jumped on your use of the word "policing" earlier. You mentioned it’s a semantic difference, and I don’t think it is because I think it would be unfair and inaccurate to describe what FIRE is doing, for example, as policing. I think that’s the wrong framework. And that’s really the big point I’m trying to make.

Renée DiResta: So, one of the things I think about is that you have private enterprise and then you have state regulators, and everybody agrees that they don’t want the state regulators weighing in on the daily adjudication of content moderation, right? It’s bad. I think we all agree on that.

Nico Perrino: And the DOJ just came out with a whole big website, I believe, on best practices for its—

Renée DiResta: Yeah, for how government should engage. And I think that’s a positive thing. But the other side of it is that private enterprise makes all of those decisions, and a lot of people are uncomfortable with that too, because normally, when you have unaccountable private power, you also have some sort of regulatory oversight and accountability, and that isn’t really happening here, particularly not in the US. The Europeans are doing it in their own way. So you have a situation where nobody wants the government to do it and nobody wants the platforms to do it. One interesting question then becomes: well, how does it get done? When you want to change a platform policy, when somebody says, "Hey, I think this policy is bad." I’ll give a specific example.

Early on in COVID, I was advocating for state media labeling, right? That was because a lot of weird stuff was coming out from Chinese accounts and Russian accounts related to COVID. And I said, "Hey, in the interest of the public being informed, don’t take these accounts down, just throw a label on them." Almost like a digital equivalent of the Foreign Agents Registration Act: "Hey, state actor." So that when you see the content in your feed, you know that this particular speaker is literally the editor of a Chinese propaganda publication. That, I think, is, again, an improvement in transparency.

Those platforms did, in fact, move in that direction for a while. And they did it in part because academics wrote op-eds in The Washington Post saying, "Hey, it’d be really great if this happened. Here’s why we think it should happen, and here’s the value we think it would provide." So that wasn’t a determination made by some regulatory effort. That was civil society and academia making an argument for a platform to behave in a certain way. You see this with advertiser boycotts too. I’ve never been involved in any of those in a firsthand way. But again, it’s an entity that has some ability to say, "Hey, in an ideal world, I think it would look like this." The platform can reject the suggestion, or the platform can change course later—you know, Twitter imposed state media labels and then walked them back after Elon got in a fight with NPR, right?

Nico Perrino: Sure, sure. I know, for example, that they don’t always listen to us when we reach out to them. But I think we’re all in agreement that the biggest problem is when the government gets involved and does the so-called jawboning. I just want to read from Mark Zuckerberg’s August 26th letter to Jim Jordan and the House Committee on the Judiciary, in which he writes, in one paragraph:

"In 2021, senior officials from the Biden administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn’t agree. Ultimately, it was our decision whether or not to take content down, and we made our own decisions, including—and we own our decisions—COVID-19-related changes we made to our enforcement in the wake of this pressure. I believe that government pressure was wrong, and I regret that we were not more outspoken about it. I also think we made some choices that, with the benefit of hindsight and new information, we wouldn’t make."

And then in some of the releases coming out of this committee, you see emails, including from Facebook. There’s one email that shows Facebook executives discussing how they managed users’ posts about the origins of the pandemic, which the administration was seeking to control. Here’s a quote:

"Can someone quickly remind me why we were removing, rather than demoting, labeling claims that COVID is man-made?" asked Nick Clegg, the company president of global affairs, in a July 2021 email to colleagues.

"We were under pressure from the administration and others to do more," responded a Facebook vice president in charge of content policy.

Speaking of the Biden administration, "We shouldn’t have done it." There are other examples, for example, Amazon employees strategizing for a meeting where the White House openly asked whether the administration wanted the retailer to remove books from its catalog:

"Is the admin asking us to remove books, or are they more concerned about the search results order or both?" one employee asked.

And this was just in the wake of the [“When Harry Became Sally” book] incident, where the platform—or Amazon, in this case—removed a book on transgender issues and got incredible backlash. And so they were reluctant to remove any books, even books promoting vaccine hesitancy.

So, I think we’re all in agreement that the sort of activity you talked about there...

Jonathan Rauch: It’s in a different category. And this is frankly not a difficult problem to address. FIRE’s bill is a step in the right direction. There are lots of proposals, and they are all versions, as I understand it—you guys can correct me—of the same thing, which is: instead of having someone in some federal agency or the White House pick up the phone and yell at someone at some social media company, there should be a publicly disclosed record of any such contacts. It should be done in a systematic and formal way, and those records should be, after some reasonable interval, inspectable by the public. And that’s it. Problem solved.

Renée DiResta: Yeah, I would agree with that also. I mean, and this is where we wrote amicus briefs in Murthy, because my residual gripe with that is that this is an important topic. This is an important area for there to be more jurisprudence. And yet that particular case was built on such an egregiously garbage body of fact and so many lies that unfortunately, it was tossed for standing because it just wasn’t the case that people wanted it to be.

And my frustration with FIRE and others was that there was no reckoning with that—that even as, you know, Greg was talking about the bill, and I absolutely support those types of legislation, it was framed as, "Well, the Supreme Court punted and didn’t come down where it needed to come down on this issue; it didn’t give us guidance." That happened because the case was garbage—unambiguously. And, you know, I thought it was very interesting to read FIRE’s amicus in that case, where it points out quite clearly and transparently, over and over again, the hypocrisy of the attorneys general of Missouri and Louisiana and the extent to which this was a politically motivated case.

And I wish we could have held both ideas in our heads at the same time—that jawboning is bad, that bills like the one you’re broaching are good, and also that we could have had a more honest conversation about what was actually shown in Murthy and the lack of evidence. There were very few—I think none, in fact, for many of the plaintiffs—actual mentions of their names in a government email to a platform. That through-line is just not there. So I think we had a lousy case, and it left us worse off.

Jonathan Rauch: Bad cases make bad law. Just to restate my earlier point: this decision should be made in Congress, not the courts. This could be solved statutorily. And by the way, I don’t necessarily believe jawboning is bad if it’s done in a regular, transparent way. I think it’s important for private actors to be able to hear from the government.

I’m a denizen of old media. I came up in newsrooms. It is not uncommon for editors at The Washington Post to get a call from someone at the CIA or FBI or National Security Council saying, "If you publish this story as we think you’re going to publish it, some people could die in Russia." And then a conversation ensues. But there are channels and guidance for how to do that. We know the ropes. But it is important for private entities to be able to receive valuable information from the government. We just need to have systems to do it.

Nico Perrino: I’m going to wrap up here soon, but I got an article in my inbox that was published on September 10th in the “Chronicle of Higher Education,” I believe, titled, "Why Scholars Should Stop Studying Misinformation," by Jacob Shapiro and Sean Norton. Are either of you familiar with this article?

Renée DiResta: Jake Shapiro from Princeton?

Nico Perrino: I don’t know. Let’s see if there’s a byline here at the bottom. No, I don’t have it here at the bottom. Jacob Shapiro and Sean Norton. Anyway, the argument is that while the term "misinformation" may seem simple and self-evident at first glance, it is, in fact, highly ambiguous as an object of scientific study. It combines judgments about the factuality of claims with arguments about the intent behind their distribution, and inevitably leads scholars into hopelessly subjective territory. It continues later in the paragraph, "It would be better, we think, for researchers to abandon the term as a way to define their field of study and focus instead on specific processes in the information environment." And I was like, "Oh, okay, this is interesting. I just so happen to be having a misinformation researcher on the podcast today."

Renée DiResta: No, I’m not a misinformation researcher. I hate that term. I say that in the book over and over and over again. This is echoing a point that I’ve made for years now. It is a garbage term because it turns everything into a debate about a problem of fact, and this is not a debate about a problem of fact. And one of the things we emphasize over and over again, and the reason I use the words "rumors and propaganda" in the book, is that we have long had terms for this. Unofficial information with a high degree of ambiguity, passed from person to person—that’s a rumor. Politically motivated, incentivized speech by political actors in pursuit of political power, often with the motivations obscured—that’s propaganda. That’s another term we’ve had.

Nico Perrino: So, you wouldn’t call—

Renée DiResta: Since 1300 (laughs).

Nico Perrino —vaccine hesitancy or the anti-vaccine crowd that got you into this work early on as engaging in misinformation because you did—

Renée DiResta: I did study—here’s the nuance that I’ll add, right? Misinformation—and the reason I think it was a useful term for that particular kind of content—is that in the early vaccine conversations, the debate was about whether vaccines caused autism. At some point, we do have an established body of fact. And at some point, there are things that are simply wrong, right? And again, this is where—and I try to get into this in the book—the difference between some of the vaccine narratives about routine childhood shots versus COVID is that the body of evidence was very, very clear on routine childhood vaccines: they’re safe, they’re effective, right?

And most of the hesitancy-inducing content on the platforms around those things is rooted in false information, lies, and, you know, a degree of propaganda. Candidly, with COVID—and this is the reason we did those weekly reports—the point was to say: people don’t actually know the truth. The information is unclear. We don’t know the facts yet. Misinformation is not the right term. Is there some sloppiness in how I’ve used it? Probably. I’m sure I have in the past. But I have to go read Jake’s article, because, again, why do we have to keep making up new terms? Malinformation was by far the stupidest.

Nico Perrino: Malinformation is true, but insidious information, right?

Renée DiResta: Yes. I mean, you can use true claims to lie. This is actually the art of propaganda going back decades, right? You take a grain of truth, you spin it up, you apply it in a way it wasn’t intended, you decontextualize a fact. I don’t know why that term had to come into existence. I feel like propaganda is quite a ready and available term for that sort of thing.

Nico Perrino: I want to end here, Jonathan, by asking you about your two books, “Kindly Inquisitors” and “The Constitution of Knowledge”. In “Kindly Inquisitors,” you lay out two rules for liberal science: no one gets final say, and no one has personal authority. My understanding is that “The Constitution of Knowledge” is, in a sense, an expansion on that, right? Because if we’re talking here about vaccine hesitancy and the claimed connection between vaccines and autism, I guess I’m asking: how do those two rules—no one gets final say, and no one has personal authority—affect a conversation like that? Because presumably, if you’re taking that approach and you want to be a platform devoted to liberal science, you probably shouldn’t moderate that conversation, because if you do, you are exercising final say or granting someone personal authority.

Jonathan Rauch: Well, boy, that’s a big subject, so let me try to say something simple about it. Those two rules, and the entire constitution of knowledge that spins off of them, set up an elaborate, decentralized, open-ended public system for distinguishing, over time, truer statements from false ones, thus creating what we call objective knowledge. I’m in favor of objective knowledge. It’s human beings’ most valuable creation by far. As a journalist, I have devoted my life and my career to the collection and verification of objective knowledge. And I think platforms in general, not all of them, but social media platforms and other media products, generally serve their users better if what’s on them is more true than false. If you look something up online and the answers are reliably false, we normally call that system broken. And unfortunately, that’s what’s happening on a lot of social media right now.

So, the question is, of course, these platforms are all kinds of things, right? They’re not truth-seeking enterprises; they’re about those other four things I talked about. Would it be helpful if they were more oriented toward truth? Yes, absolutely. Do they have some responsibility to truth? I think yes, as a matter of policy. One of the things they should try to do is promote truth over falsehood. I don’t think you do that by taking down everything you think is untrue, but by adding context and amplifying material that’s been fact-checked, as Google has done, for example. There are lots of ways to try to do that. And I, unlike Renée, am loath to give up the term misinformation, because it’s another way of anchoring ourselves in an important distinction, which is that some things are false and some things are true. It can be hard to tell the difference. But if we lose the vocabulary for insisting that there is a difference, it’s going to be a lot harder for anyone to insist that anything is true. And that, alas, is the world we live in now.

Nico Perrino: All right, folks, I think we’re going to leave it there. That was Jonathan Rauch, author of the aforementioned books “Kindly Inquisitors” and “The Constitution of Knowledge,” both of which are available at fine bookstores everywhere. Renée DiResta has a new book that came out in June called “Invisible Rulers: The People Who Turn Lies Into Reality.” Jonathan, Renée, thanks for coming on the show.

Jonathan Rauch: Thank you for having us.

Renée DiResta: Thank you.

Nico Perrino: I am Nico Perrino, and this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Aaron Reese and Chris Maltby, and co-produced by my colleague Sam Li. To learn more about “So to Speak”, you can subscribe to our YouTube channel or Substack page, both of which feature video of this conversation. You can also follow us on X by searching for the handle @FreeSpeechTalk, and you can find us on Facebook. Feedback can be sent to SoToSpeak@thefire.org. Again, that’s SoToSpeak@thefire.org. And if you enjoyed this episode, please consider leaving us a review on Apple Podcasts or Spotify. Reviews help us attract new listeners to the show. And until next time, thanks again for listening.
