
So to Speak podcast transcript: Section 230 and online content moderation

Section 230 and online content moderation

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: Welcome back to So to Speak: The Free Speech Podcast, where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. I am, as always, your host, Nico Perrino. Did 26 words create the internet? That’s what some people argue, at least. Here are those words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Simple, right? Or maybe not.

Those words come from Section 230 of the Communications Decency Act, which is an American law passed in 1996. What the law essentially says is that interactive websites and applications cannot be held legally liable for the content posted on their sites by their users. Some argue that the internet as we know it today wouldn’t exist without this law. No Reddit, no Facebook, no X, no Yelp, and no Amazon, at least not in their current forms. Instagram, for example, has 2 billion monthly active users who share 95 million photos and videos per day.

For their part, YouTube users upload 3.7 million new videos per day. The argument goes that the legal liability associated with a platform being responsible for each piece of content its users post is so great that few companies could, one, bear the costs associated with defending against lawsuits over content, and two, implement the necessary moderation policies to forestall such lawsuits. But on the other side, critics of Section 230 allege that the law shields big tech companies from liability for enabling or even amplifying harmful content that promotes sex trafficking, drug trafficking, insurrection, vaccine hesitancy, bullying, misinformation and disinformation, and more.

In short, they argue, Section 230 has made the internet an increasingly toxic place that creates real-world harms for people. This fight over Section 230 has found its way back into the halls of Congress. Reforming the law has been debated for years and has gone nowhere. So, now two lawmakers are proposing a measure that would sunset Section 230 by the end of 2025 unless big tech companies come to the table to enact a new legal framework for the internet. But the authors of Section 230 warn that eliminating the law would “kill the internet.” It would threaten free speech and erode America’s leadership position as an engine for online innovation.

So, on today’s show, we are going to explore Section 230. How should we think about it, and what, if anything, should we do about it? Joining us for the conversation is Professor Marshall Van Alstyne. He is a professor of information systems at Boston University and is one of the world’s foremost experts on network business models. He is also the co-author of the international bestselling book Platform Revolution. Professor Van Alstyne, welcome onto the show.

Marshall Van Alstyne: Thank you. It’s a pleasure to join you.

Nico Perrino: We’re also joined by Bob Corn-Revere. He’s a recurring guest on the show, and he’s also FIRE's chief counsel and arguably America’s foremost expert on the application of the First Amendment to new and emerging technologies. Bob, welcome onto the show.

Bob Corn-Revere: Thanks, Nico, and happy to be in the new studio.

Nico Perrino: There we go. This is our first time recording in the studio. We’re still working on the door, but everything else seems to be in working order, at least for now. Fingers crossed. Knock on wood. So, I wanna get started here by just talking about what Section 230 is. I read 26 words from Section 230 just a moment ago, but it’s a little bit longer than that. Bob, can you kind of paint the picture of Section 230 for us?

Bob Corn-Revere: Well, yeah, it is substantially longer than the 26 words that you read. You read Section 230(c)(1). There’s an entire preface that talks about the reasons and values behind Section 230. Then it goes into the policy of the United States, and then it creates the statutory immunities. But the backstory to Section 230 is a lot larger than that. Quite often you’ll have people describing the origins of Section 230, and they’ll pick out one aspect of it or another. And people puzzle as to why it’s called the Communications Decency Act if it’s about immunity from liability.

The reason that it was given this title is because it was proposed as sort of a late amendment to a provision of the proposed Telecommunications Act of 1996, what became the Telecommunications Act of 1996. And one of the provisions that had also been added as an amendment was called the Exon Amendment, which was the first attempt to regulate the internet. And that regulation was designed to impose broadcast-style indecency prohibitions on the internet in the same way they had applied to radio and television.

What became Section 230 was proposed as a counterweight, as an alternative to having the direct regulatory restriction on speech. And then it was also expanded to deal with emerging issues of liability: if platforms, they’re called platforms now, but if a provider made any attempt to restrict what was on their platform, did they then take on liability for all of that? So, Section 230, as it ultimately was adopted, was designed to incentivize online providers to provide filtering and other technologies, and to take an active role in policing their platforms.

It also created immunities so that there would be the incentive to allow third-party speech to be carried by these providers. And then it also set the policy saying that we prefer free market solutions to regulatory solutions. The interesting thing was the Supreme Court struck down the Exon parts of the Communications Decency Act. But Section 230, which was then added as an amendment, kept the title Communications Decency Act, even though that provision did much more.

Nico Perrino: Well, when you read Section 230, pop culture and policy circles talk about it as shielding big tech companies from liability. But there’s also language in here about protecting people on the internet. For example, Section 230(b), the policy section, says the law is intended “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material, and to ensure vigorous enforcement of federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.”

And then we recently had a hearing in Congress on sunsetting Section 230. Marc Berkman, who’s the CEO of the Organization for Social Media Safety, had this to say. Aaron, can you play clip one?

Recording: But we cannot fully understand the failures of Section 230 without a focus on its tragically forgotten provision, Section 230(d). Congress required that internet providers, including today’s social media platforms, provide users, upon signing up for the service, with a list of commercially available safety software providers. The clear legislative intent of Congress was to provide the civil liability immunity provisions of Section 230(c) only in conjunction with the understanding that a robust safety software industry would help ensure critical safety support to families and children.

Tragically, the social media industry has consistently defied the clear mandate of Section 230(d). And unfortunately, Congress could not have envisioned that today’s social media platforms would have to provide some level of minimal assistance to third-party safety software providers for their products to effectively function. With this essential pillar of Section 230 long forgotten and ignored, we have seen millions of children unnecessarily harmed.

Nico Perrino: So, Professor Van Alstyne, how do you see Section 230(d) playing into kind of this whole ecosystem that Section 230 creates?

Marshall Van Alstyne: Well, I love the quote. Let me preface my remarks first by saying I’m an information economist, not a law professor. I’m looking forward to the conversation and discussion with Bob on this, and I’m hoping to get a lot of good background there. The particular quote from Berkman is fascinating because I like this idea of giving users a set of safety software providers, which, in my view, would give users a lot more control over the algorithms that actually present their content. So, one of the things I found interesting is this idea that the platforms need to provide lists of safety software to users themselves.

And I think it’s actually quite important that the users have the capability to choose that. I think in some ways, one of the things I wanna come back to is what I would call a listener’s right to be free of polluted information streams. And if they could actually choose those filters for themselves, that safety software, that forgotten Section 230(d), then I think we’d be in a better position. But to briefly recapitulate some of the important things that Bob said, what Section 230 basically does is it not only immunizes platforms against liability for user-generated content, it immunizes them against liability for their editorial decisions regarding user-generated content.

And what I would really like to see happen, particularly with regard to that Berkman quote, would be to restore users’ capacity to choose their own algorithms and choose their own filters. We’ll come back to that for a number of reasons, but I think that’s actually an enormous point. So, I’d be fully in support of the kinds of things that Berkman was arguing for in that segment.

Nico Perrino: Well, from the legal perspective, Bob, I think of the National Institute of Family and Life Advocates v. Becerra case. Would this result in compelled speech, where you essentially have a company that’s forced to tell people certain information that they might not wanna share because it would lead them to use their product differently?

Bob Corn-Revere: You might get to that conclusion if you followed that going forward. I think it’s important first to look back at the context in which Section 230(d) was created. Now, keep in mind, at the time we were talking about a debate between direct blunt-force regulation of internet speech or voluntary measures, and empowering people to make these decisions using their own third-party software, as well as the tools provided by the platforms themselves. And I have to say I’m kind of mystified by the statement that Section 230(d) is some forgotten provision that simply has gone by the wayside.

In fact, if you look at all of the online providers, they, when people sign up for them, provide a large array of tools to allow users and empower users to make all kinds of decisions about how they use their online resources, how social media is used, and so on. And there is a wide array of third-party providers that provide various kinds of add-on services that can be used in conjunction with those. So, the idea that this is somehow forgotten and is somehow out of step with the rest of Section 230 simply is ahistorical.

Nico Perrino: But Professor Van Alstyne, do people use those tools?

Marshall Van Alstyne: Well, actually, I’d like to play with Bob’s observation just a moment, because just this week, Nobel Prize winner Maria Ressa was complaining that almost all our moderation is effectively done by Mark Zuckerberg or Elon Musk, or you could imagine even, remotely, the Chinese government in some sense through Twitter. There isn’t sufficient ability for users to choose their own filters in this. So, imagine, hypothetically, that on Facebook you could choose the BBC filter, or you could choose the Fox News or the Breitbart filter; listeners themselves would have greater power over the information streams they actually get.

One of the things I wanna emphasize here is users are not going there to listen to Facebook. Users are going to listen to their friends and colleagues, and influencers are going to reach their followers, rather than Facebook itself. It’s not as though you’re actually going there to read the content of the Miami Herald or New York Times. You’re really looking there to see what others are saying, not what the platform itself is saying. It’s an interesting twist on who’s actually a speaker here.

Nico Perrino: But isn’t the thing that distinguishes one platform from another its algorithm? One of the reasons TikTok is so popular right now is its algorithm seems to deliver content to folks that is just more engaging and entertaining, and some might say even more addictive than the other platforms. So, it seems to me that a lot of users are choosing the platforms because of their algorithms, because of the choices that the platform makes. And in fact, we have a communications department here at FIRE. Increasingly, we’re finding that followers don’t matter.

You can follow someone, but that’s often not who populates your feed anymore. And it seems like in the past couple of years, they’ve moved away from that. So, is that approach even viable? Do users even know what they want from content anymore?

Marshall Van Alstyne: So, I agree with you. First of all, the algorithms themselves are popular. The algorithms on TikTok are a big part of why users are going to visit those sites. But if you were to go to LinkedIn or you were to go to Facebook, where you really are reconnecting with friends, those are not things that you could take with you to another platform very easily. Where you could easily switch websites, from, say, the New York Times or Miami Herald to Fox or Breitbart, you couldn’t switch to a new social platform and take with you your network of friends and associates.

Those network effects effectively tie you to a single platform. So, if it’s information or updates or interaction with those individuals, that’s the content you lose if you lose the algorithmic control. And if you cede that to others, you don’t have the ability to get the content that you want. In effect, what I would suggest is that users should have the ability to create unions and intersections of those algorithms, as opposed to having to accept only the offerings of the platforms themselves. And what’s moderating their content when they can’t choose those other sources for themselves?

Nico Perrino: Is this essentially Mike Masnick’s protocols, not platforms argument, that social media should be more like email? For example, I’m able to communicate seamlessly with Bob, who might use Yahoo, and with you, Professor Van Alstyne, who might use AOL or Comcast or some other provider, if I use Gmail. You just don’t even know what platform the other person is using; it’s the protocol of email that’s the tool.

Marshall Van Alstyne: Well, in many cases, I like Mike Masnick’s arguments over at Techdirt. He’s written a number of very interesting points. And in general, I really like user control of algorithms. In a free speech context, we speak too often of the speaker’s rights and too infrequently of the listener’s rights to hear the sources that you find most important.

So, to the extent that Mike has identified the users’ ability to choose algorithms that meet their needs, and the intersection and union of those algorithms, I think those protocols would serve user-listener interests better than the somewhat exploitative interests of the platforms themselves when they’re pushing their own advertising agendas, their own profit-motivated algorithmic design, as opposed to the user’s best-interest design.

Nico Perrino: Yeah, I wanna get Bob in here. Bob, did you have a thought?

Bob Corn-Revere: I was just going to say, we’re talking about multiple things here. I mean, we’re talking about algorithms, which are the way platforms decide how to serve up content to people who are on their platforms. There are also terms of service that the different platforms use to create whatever the feel or shape of their community is going to be. And you’re going to get different kinds of communities created by the different terms of service that are provided by, say, X versus Truth Social versus Facebook, for example.

And then you have filtering applications that allow individual households, individual users to filter out certain kinds of content that they would not like to see. So, there are a lot of ways in which users exert control. The idea that you should add on a layer of control that allows people to sort of design their own adventure is an interesting concept, but I’m not quite sure, A, that it’s necessary, or B, that it would work.

Nico Perrino: Or, C, I guess the question becomes, do users actually want it? Right?

Bob Corn-Revere: Because there is that question.

Nico Perrino: Professor, you might know, isn’t Bluesky more of a protocol than a platform? And it hasn’t seemed to take off. I know a lot of folks migrated to it after Elon Musk purchased X, and maybe more people use it than I think. I’m not on it personally. What are your thoughts there? I mean, do users want this?

Marshall Van Alstyne: Excellent question, really. So, I’m quite sensitive to that point. Matter of fact, it’s not just Bluesky, but another one is Mastodon.

Nico Perrino: Correct.

Marshall Van Alstyne: In some ways, these are decentralized social networks. Look, we have a really interesting and challenging problem. Here is one of the differences between social media and traditional broadcast and print. One of the big differences is that traditional media are in some sense the major producers of that content, and you win audience by having more engaging or more attractive content. Social media are different in that they’re not the producers of the content; it’s users that are the producers of that content. They’ve economized on costs, but they’ve also become what I would call inverted firms.

The production of value content is happening outside the organization, not inside. That’s not just social media, but it’s almost all platform companies. If you look at Uber, third parties are producing the rides. Airbnb, third parties are producing the rooms. On Google search, you and I are producing the web pages, not Google itself. So, inverted firms behave differently than traditional firms. That being the case, if you’re now a company like Mastodon or Bluesky trying to launch a new system, you can’t compete simply by creating more content.

You have to tear the entire external production apparatus away from one organization and move it to another. Network effects are vastly stronger in social media than in traditional media, so it makes it really hard for startups to actually take market share from entrenched incumbents. I love a particular set of graphics that shows, and I’m going to misremember some of the numbers on this, but somewhere around 2008, there may have been 17 different major social media companies across the globe.

And somewhere about eight years later, there were really basically only four major ones where folks are spending most of their time. And these, ironically, were broken down pretty much by political geographic boundary. There were the Western ones, the United States, North America, South America. Then there are the Chinese ones, and then there are the Russian ones. Again, these social media bound by network effects are really different. And most of the startup attempts to compete with established social media have failed.

Even Google failed to do it. And they are certainly a company that understands platform economics, but they weren’t able to succeed at social media. So, I don’t fault Bluesky or Mastodon. Those are really hard problems, and I think we need to recognize the difficulty. And there is a true economic difference in the nature of these inverted firms compared to traditional pipeline production processes.

Nico Perrino: Can you describe network effects for folks who aren’t familiar with the term?

Marshall Van Alstyne: Absolutely. So, a network effect is the property that a product or service becomes more valuable to existing users as more users engage or participate. So, your Google search might make my Google search better. Bob’s movie-watching behavior might make your movie-watching behavior better or worse, depending on Bob’s taste or preferences. The point is that the information from one interaction is used to make an adjacent interaction better. There is, in economic terms, a positive externality, and that positive externality typically requires some form of orchestration in order to be maximized.

It’s the inverse of a negative externality. We’re going to come back to some of those. I think there’s a better way to diagnose some of our problems in that case. But again, just to answer your question, a network effect means a product or service gets more valuable to existing users as more people use it.

Nico Perrino: Yeah. So, if you’re talking about Uber, for example, it’s more valuable to you as a rider if there are more drivers on the road who can get to you within two or three minutes of your call, as opposed to if there are fewer drivers and it might take 20 minutes for a driver to get there. And the same goes for the drivers. It’s more valuable to them if they’re able to do ride after ride after ride. But it’s hard to get to that point where it’s valuable to the whole network unless other people are joining the network, right?

Marshall Van Alstyne: That’s absolutely correct. That’s also one of the reasons, for example, why it’s been argued that Google has kept and maintained such high market share in search. As more people use it, its algorithms get better, especially on content on the long tail, very, very niche content, which means that when you come searching, you’re more likely to find the match that you want, so more people use it. It gets better. So, it’s, again, one of those reasons why it has tended to maintain a market share lead, which raises wonderfully interesting antitrust questions.
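To make the network-effect point concrete, here is a minimal illustrative sketch in Python, not something from the conversation itself: a Metcalfe-style toy model in which a network’s value tracks the number of possible user-to-user connections. The function name and the sample numbers are purely hypothetical.

# Illustrative only: a Metcalfe-style toy model of network effects,
# where value scales with the number of possible user-to-user connections.
def potential_connections(n_users: int) -> int:
    # Each unordered pair of users is one possible interaction: n * (n - 1) / 2.
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1000):
    # The count grows far faster than the user base itself, which is one reason
    # an entrenched incumbent is hard to displace just by producing better content.
    print(f"{n:>5} users -> {potential_connections(n):>7} possible connections")

Under this toy assumption, ten times the users means roughly a hundred times the possible connections, which is the intuition behind the incumbency advantage discussed above.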

Nico Perrino: We’re talking right now about Section 230, and I’ve already gotten to a point in the conversation where there are some solutions, potentially, to address the alleged harms of Section 230. But I do wanna take a moment to zoom out a little bit and talk about the problems that some folks have with Section 230 that would result in this idea that we should reform it or even sunset it. And Professor Van Alstyne has a paper that suggests some tweaks that I’d like to discuss. But let’s start by going back to that congressional hearing that we visited earlier and play this clip from Doris Matsui. Aaron, can you play clip two?

Recording: It’s clear that Section 230 as it exists today isn’t working. The status quo simply is not viable. As with any powerful tool, the boundaries of Section 230 have proven difficult to delineate, making it susceptible to regular misuse. The broad shield it offers can serve as a haven for harmful content, disinformation, and online harassment. This has raised significant concerns about the balance between protecting freedom of expression and ensuring accountability for the platforms that host this content.

Nico Perrino: So, is the congresswoman right, Bob? Does it enable these harms?

Bob Corn-Revere: Well, it enables those harms every bit as much as it enables more speech to happen on the internet generally. Most people’s complaints about Section 230 are complaints about the internet as a whole. I’ve long said that the absolute worst thing about the internet, that it provides a worldwide, global medium in which everyone can participate, is also the best thing about the internet. Both are true, that it’s both the best and worst. And so with the abundance of information that you get online, you also get a lot of bad stuff.

Now, Section 230 has become the poster child for all that bad stuff, with people saying that we’re going to be able to impose liability rules that will only prevent the bad stuff from happening and not the good stuff. But the very network effects that you and Professor Van Alstyne were discussing earlier are the reasons why Section 230 is so essential for online communication. It is the first medium in history that is focused almost entirely, not entirely, but substantially, on enabling third-party speech. It enables anyone with computer access to speak.

Now, the platforms that are the host for that speech simply are not going to take a chance on carrying all of that third-party speech if they face the possibility of liability for anything they can’t filter out. And given the volume of information, given these vast network effects, that’s the reason why Section 230 is so uniquely necessary for this medium as opposed to traditional media.

Nico Perrino: So, does this protect internet service providers from things that a separate company, a brick-and-mortar company, might not be protected from? Does it give them an extra shield that other companies wouldn’t have?

Bob Corn-Revere: Well, compared to a letter to the editor in a newspaper, newspapers operate on a completely different kind of format, one in which you can review and screen the information before it is published. And so the possible liability for allowing a defamatory statement to be published is different in that context. The information is reviewed before it is published. Because of the third-party speech that is made available on the internet and the platforms that host it, quite often that review does not take place in advance, and content can only really be reviewed after the fact.

In that context, you have a different set of rules for liability.

Nico Perrino: I just wanna ask –

Marshall Van Alstyne: Can I jump in?

Nico Perrino: Yeah, go ahead.

Marshall Van Alstyne: I would love to ask Bob a question on this to help me out here. So, I’d love to get to the volume argument a little bit later, but I’d like to ask a question. Would you agree or disagree that the current state of affairs has made it hard to hold almost anyone accountable? By that, I mean, clearly Section 230 shields the platforms from liability for user-generated content, and so they don’t have publisher’s liability in that sense. But the diffuse nature of individual atomized expression, which is echo-chambered, repropagated, picked up whenever, then makes it hard to hold the individuals accountable.

Also, there isn’t particularly a source. In that case, isn’t it true that it’s almost infeasible to hold anyone accountable for misinformation or for harmful content?

Bob Corn-Revere: Oh, not at all. I mean, there has been sort of a boom in litigation over defamation, partly because there is this vast microphone in which information is put out there. Just ask the people who are propagating the stolen election myth from 2020. There’s a lot of active litigation taking place in that space. And by the way, one of the reasons why people would like to remove Section 230 is because of the deep pockets of the companies that host these, well, everything, host all of this traffic on social media.

They would rather sue the companies than the people who are disseminating the information, the source of the information that they are challenging, because those people simply don’t have the same resources.

Nico Perrino: So, Section 230 doesn’t shield the person who posts the content from liability. If they make a defamatory statement or a true threat, or incite imminent lawless action, they could be liable.

Marshall Van Alstyne: I believe all of that. It is clearly the case that users themselves are not exonerated from what they do. The challenge, though, is that if the same kind of misinformation comes from half a dozen different parties, and that then is re-amplified by the platform, it’s hard to then find an individual responsible for it. I would also press back gently against some of the things that Bob was saying. Don’t you have to have awfully deep pockets to then go after a defamation suit? It took Dominion Voting Systems to go after some of the election lies in order to hold them accountable.

That seems like an awfully high bar relative to what would have been publisher’s liability under a standard case.

Bob Corn-Revere: Well, there you’re talking about going after big networks. Anytime you’re talking about litigation of that magnitude, it’s going to be an expensive proposition. But it’s simply a demonstrable fact that defamation litigation has expanded because of the increased amount of speech out there. It’s true you can’t always find the individual speakers involved because one of the things that the internet permits and enables is anonymous speech. But again, the side effect of making the platforms responsible and the guarantors of whatever speech happens to be on their platform is that you would shut down the possibility of third-party speech.

Nico Perrino: I wanna ask a question by drawing a distinction between the different sorts of user-generated content that could be found on some of these platforms that host user-generated content. So, in this latest round of questions, we were talking about defamation, true threats, incitement to imminent lawless action, speech that, albeit unprotected, is still speech. How do we think about non-speech activity by users on the platforms? For example, sex trafficking or allegations of drug trafficking, or, let’s say, Amazon, which allows for a marketplace where folks can create, post, and sell their products on the platform.

Let’s say that one of the products contains lead and a child gets lead poisoning. Is that also insulated by Section 230? Are the platforms also insulated against those sorts of lawsuits?

Bob Corn-Revere: Well, yeah, platforms are insulated for the sort of information that they are being sued over, but people can still sue the originators of the harmful products. If you’re talking about lead paint that people use or some kind of poisonous substance, the makers are obviously responsible for that, but that really hasn’t changed. And the other examples you bring up of unprotected speech, Section 230 doesn’t immunize that.

Nico Perrino: It makes me think, then, of the Silk Road prosecution.

Bob Corn-Revere: I was going to bring that up, because there you had a service dedicated to basically propagating illegal activities. They were really quite upfront about it, and as a matter of fact, took a cut of the sales that they were engaged in, whether it was illegal drugs or something like that. There, where you’re actually participating in illegal activity, Section 230 provides no bar, and as a matter of fact, it never provided a bar to enforcement of federal criminal law.

Nico Perrino: I wanna play another clip from that congressional hearing now. This is Representative Doris Matsui from California. Aaron, can you play clip three?

Recording: And we know many online platforms aren’t simply hosting this content. They’re actively amplifying it to reach more viewers. Most large social media platforms are designed from the ground up to promote constant engagement rather than healthy interactions. That means pushing harmful and misleading information on some of the most vulnerable users with ruthless effectiveness. Young women are constantly subjected to unhealthy or untested diets. Suicidal material is foisted on those seeking mental health care, and recent elections show the ongoing danger of targeted disinformation.

Nico Perrino: So, this is one of the arguments, Professor Van Alstyne, that I think critics of Section 230 have. And, you know, reading your papers, it sounds like you’re in part a critic of Section 230. The argument is that the platforms aren’t innocent bystanders on this speech. Their algorithms are helping to amplify some of this harmful content, whether there’s a human involved in the decision or not. Is that a correct characterization of the position?

Marshall Van Alstyne: I think that’s fairly accurate. Let me try to summarize briefly. I somewhat disagree with Bob on the need to reform Section 230, although, ironically, I’m gonna quite agree with Bob that I think a lot of the proposed solutions are not a very good idea. So, I’m just not happy with the status quo. To kind of summarize what I think Doris was saying, one of the ways I’ve kind of expressed it, or as I first heard it said: if it’s enraging, it’s engaging. So, that’s one of the things that their machine learning algorithms have done. It’s set up with a profit motive.

The machine learning algorithms, whether intentionally or not, are set up in order to get people to participate. This is what we heard from Frances Haugen in the whistleblower testimony before Congress, that the company effectively put profits ahead of people. And so they wait for someone to drop the spark, then they pour on the gasoline, which effectively feeds the fire, which then allows them to sell ads while people watch things burn down. And so it’s an interesting business model, where they’re protected from liability for amplifying misinformation, awful-but-lawful content, in some of these ways.

Now, the flip side argument is that many times you may actually lose users if you don’t moderate successfully. And we do need them to moderate successfully. That’s going to be one of the important criteria; that’s what we should try to create in any reform scenario. But I genuinely do believe it’s the case that the algorithms are set up in a way that rewards the firms for enraging, engaging content. And I think there are some better solutions to that. Those are some things I’m hoping we can spend some time on. But I wanna pause there in case Bob has any reflections. I’m very happy to dig into any of these other elements of it.

Bob Corn-Revere: Yeah, I think this is sort of a confluence of a whole series of factors that we’re talking about. One is the algorithms that do tend to provide more content that people have shown that they are interested in, and that builds on itself. But you also have, as Representative Matsui is saying, whole types of content that she finds unhealthy and harmful. And this is where, essentially, the argument is being made that people are being provided with information and it’s bad for them, and they should stop engaging in online content. I think that there, the distinction is not being made between content that is entirely legal, protected by the First Amendment, and that which is not.

And those two things are being sort of mashed together with various policy proposals. This is something that the Supreme Court took a look at last term in two cases alleging that social media platforms were encouraging content supporting terrorism. And the question was whether or not there was a cause of action against that. And then, secondly, whether or not Section 230 protected against it. And where the court drew the line was to say that the First Amendment requires the standard for liability to be aiding and abetting, which requires an intent to cause the illegal activity.

And so the court never reached the Section 230 question, saying that level of intent for harmful content, for illegal content, simply wasn’t met; there was not a cause of action there.

Nico Perrino: And simply enraging content wouldn’t be unprotected under the First Amendment.

Bob Corn-Revere: Well, that’s right. And that’s, by the way, not so different from what we’ve seen in media historically. As long as we’ve had mass media, there was the yellow journalism of the turn of the last century, there was the adage that if it bleeds, it leads. In local news you have the same thing with television talking about causing outrage and all of that. And so, as a consequence, this is really sort of an updated argument making the same points that historically have been made.

Marshall Van Alstyne: So, let’s emphasize, I think Bob’s exactly right about that. There are some even wonderful arguments going all the way back to 1929 about enraging, engaging content, and about some of our most famous and well-intentioned citizens pulling the alarm in order to watch the fire truck go down the street, as other elements of that same thing. The slight difference is that if you cross the line with traditional media such that you’re causing harm, then you’re liable. But in this case, because of the scale of the problem, and we’ll come back to that, you’re not liable.

So, there is a difference. They’re effectively protected. And so the incentives to promote some of the misinformation are perhaps even higher than in those cases where you could be held liable.

Bob Corn-Revere: Well, so many of these issues we’re talking about are not questions of liability such that Section 230 comes into play. I mean, much of the argument about social media now is simply that it’s bad for people to be online and to be staring at their phones or their computers. It’s too much screen time, kids are losing sleep, things like that. Those are issues that have almost nothing to do with Section 230 and everything to do with media education for how best to use this powerful communications tool.

Nico Perrino: So, I think one of the themes that we’re coalescing around is that the internet is pretty much just society amplified. Right?

Bob Corn-Revere: Well, as Bo Burnham said, a little bit of everything all of the time.

Nico Perrino: Yes. But I think what critics of Section 230 would argue is that, yes, too much of something can be bad and can create harms that need to be addressed uniquely in those situations. But the First Amendment doesn’t have, Bob, right, if I understand it correctly, a too-much-speech exception.

Bob Corn-Revere: Well, that’s right. The First Amendment protects television, even if people watch too much of it.

Nico Perrino: Yeah, professor.

Marshall Van Alstyne: Let me jump in. You’ve hit on something that is a really interesting issue, which is the nature of harms. If you might give me just a moment, I’d like to see if I could actually articulate why I think harms today are different than harms of yesterday. And so it’s not just the issue of harm to the individual consuming it. In some sense, there’s a self-moderation or media literacy element of that which is legitimate. But I think there’s another component of this which is over a century old but has only come to the fore because of the nature of the technology.

So, for over a century, it goes all the way back to 1919, we’ve got this wonderful quote from Holmes: “The best test of truth is for the idea to get itself accepted in the competition of the marketplace.” So, we’ve got the marketplace of ideas, and it’s the same argument that Mark Zuckerberg is using to have users decide: we’re just going to push all the information out there all the time in order for them to decide. It’s one thing if you’re making a decision error, but it’s another if you’re creating an externality. We talked about network effects a moment ago, but here we have negative externalities.

A negative externality is the damage that happens off-platform. This is the loss of herd immunity. This is global warming. This is insurrection. This is the stabbing of Salman Rushdie that happens off-platform. This is the shooting in a pizza parlor that happens off-platform. So, here’s why this is a problem. Externalities are market failures. Market failures require intervention, but government intervention in speech is forbidden by the First Amendment. So, we’re stuck.

What that means is that if you’re simply turning things over to the marketplace of ideas to sort things out, it will fail. The reason is that markets do not self-correct market failures. Some kind of intervention is required to correct that externality. Now, when we’ve moved to the world of social media, everybody is a producer, but nobody is effectively responsible for this bigger pollution problem. So, the reason it’s come to the fore now is we have more producers of pollution and no one is internalizing it. And markets don’t self-correct these market failures. It’s an externality problem that is currently unsolved.

Bob Corn-Revere: But it’s also one that if you start trying to find an external solution to that, most of the proposals that we’re seeing in that regard are to have the government in some way provide oversight. The response of Florida and Texas, for example, to bad moderation decisions was to simply put the states in charge of it, and then to make them the final arbiters of whether or not moderation decisions were being made in an equitable way. This is a classic example of where the cure is worse than the disease.

Marshall Van Alstyne: So, this is one where I agree with you 90%. So, I agree with you [inaudible – crosstalk] [00:43:17] I agree with you in the sense that all the proposed solutions are terrible. So, I agree with you with that. I don’t want the state intervening. The centralized solutions, I think, are awful. But the problem that I think you have then acceded to is that we’ve traded government tyranny for platform tyranny; we’ve traded public tyranny for private tyranny. That is, we’ve allowed the platforms effectively to make those decisions when we’ve forbidden government from making those decisions.

So, this is where I disagree with you on one component. I think the centralized solutions are awful, and in that I completely agree, and I don’t want other parties to do it. What I think we’re missing is that there actually do exist, from economics, decentralized solutions without any central party. This is a case where we need to actually go back to an older economist, Ronald Coase, where we can actually create markets in the missing markets of harms. If we can create an expanded set of listener rights, an expanded set of speaker rights, then the market may be able to correct these things.

The government’s role in this case would be to increase the rights of users, listeners and speakers, not to decrease them, and then create a marketplace within which we could actually solve these externality problems with new sets of rights that haven’t existed before. Coase effectively said you can solve these problems by creating trading in the missing market of harms. That, I believe, is where the existing solutions don’t go, and that’s where we need to travel next.
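As a rough illustration of the Coasean logic being described, here is a minimal toy model in Python, not taken from Professor Van Alstyne’s paper: a speech harm is treated as an unpriced externality unless a listener right forces the speaker to internalize it. The function names and numbers are hypothetical, and the sketch ignores the transaction costs Coase himself emphasized.

# Illustrative only: a toy model of a speech externality and a Coase-style fix.
# A speaker values sending a message at `benefit`; listeners bear `harm` off-platform.

def sent_without_listener_rights(benefit: float, harm: float) -> bool:
    # With no property right in the harm, the speaker ignores it entirely.
    return benefit > 0

def sent_with_listener_rights(benefit: float, harm: float) -> bool:
    # If listeners hold a right the speaker must buy out (or the speaker accepts
    # liability for the damage), the message is sent only when benefit exceeds harm.
    return benefit > harm

for benefit, harm in [(5.0, 2.0), (1.0, 4.0)]:
    print(benefit, harm,
          sent_without_listener_rights(benefit, harm),
          sent_with_listener_rights(benefit, harm))

The point of the sketch is only that assigning the right, and letting the parties trade or accept liability, changes which messages get sent without any central authority judging their truth.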

Nico Perrino: Do you have a thought on that, Bob?

Bob Corn-Revere: Well, the only thought I’ll add, is that I read Professor Van Alstyne’s paper on this a couple of times. I’m still trying to get my mind around it and to figure out exactly how that would work. I take it to be a really ambitious goal to try and change everything about the way moderation is done and how the internet works, but I’m not quite sure I can envision how that would happen. With respect to Ronald Coase, I think he was always rightfully skeptical of having some sort of governmental solution.

As I think you point out, his 1959 paper on the Federal Communications Commission was path-breaking, basically saying that property rights in spectrum could perform all of the role that licensing did just as well. But whether or not you could then go from premises like that to saying we’re going to create a market for moderation, for example, as one of the proposals that you have, I don’t see how it works.

Nico Perrino: Professor?

Marshall Van Alstyne: Wonderful question. So, let me see if I could do this in two steps. The first is to give an argument as to why it should work. The second would be to give you my proposal about something that could work for purposes of creating a strawman, in the sense that if you accept the possibility of the theory, then we can figure out who’s got the best implementation. Okay? So, what I really wanna do is I wanna give you hope that it’s possible to do this, and then you can figure out if my way to do it is the better one or someone else has a better one, or what might be possible. So, here’s why I think it might work.

Okay, so let’s go back to Holmes’s original idea of the marketplace of ideas. In economics, we have a comparable intuition that I’m sure you and almost all the listeners will be familiar with, which is Adam Smith’s invisible hand: that the choices of buyers and suppliers should lead you to some kind of social optimum. Adam Smith said it is not from the benevolence of the butcher, the baker, the brewer that we expect our dinner, but from their regard to their own interest. So, buyers and suppliers acting in their own self-interest lead to a better optimum.

This was the reason for the Nobel Prize to von Hayek. It explains, for example, why market economies do better than centrally planned economies, why the US, the UK, Japan do better than Soviet Russia or North Korea, or East Germany. They misallocate resources by having government do it. This is so powerful, it’s even been proven mathematically in something called the first fundamental welfare theorem of economics. The choices of buyers and suppliers in a competitive market lead to a social optimum, but it breaks down if there’s information asymmetry or externality.

Since that time, there has been work that actually suggests that you can actually solve some of the information asymmetry questions. The Nobel Prize in information economics, given in 2001, addressed how to solve some of those questions. And Coase was the one that figured out how to recreate markets to solve the externality problem using an expansion of property rights. I wanna thank Bob for highlighting Coase’s property rights paper on the radio spectrum problem. And it worked. It absolutely worked.

So, the premise, the theory that I’m proposing, is that we should be able to reduce the amount of misinformation, the amount of pollution, with no censorship at all, and no central authority judging truth at all. If you think about it, censorship is just a quota on speech. That would be the Pigouvian solution. That’s the centralized solution, and that’s what we have to avoid. That’s what the First Amendment forbids, and that’s what we need to avoid. So, we need to import some of these ideas of Coase to repair the marketplace of ideas. That’s the theory.

So, let me pause there for any reaction and see what you think. See, we take the original free marketplace of ideas, add Coase, create new property rights, and in that way we should be able to restore market efficiency.

Bob Corn-Revere: Well, so far so good. I mean, I like the overall premise of using decentralized solutions and using market-based solutions. Where I kind of lost the thread of it is in trying to read through what your proposal would be, starting with the four principles, and they include some that do require intervention by some authority to make these things happen. For example, your third principle would say that platforms have a responsibility for facilitating a degree of diversity in the information environment. I’m not sure how that would be operationalized.

The last time the Federal Communications Commission tried to do it was through a policy called the fairness doctrine. And so again, I’ve seen various proposals along these lines in the online environment, and this struck me as being something kind of like that. And then the fourth principle, which gets back to the Section 230 discussion, is creating some liability if platforms don’t moderate consistently with their moderation principles.

And again, someone would have to be responsible for imposing that discipline, for holding platforms accountable if they don’t adhere to their own moderation standards. And again, that, to me, sounds like governmental oversight of content-based decisions.

Nico Perrino: We’re talking now about Professor Van Alstyne’s co-authored paper, “Improving Section 230, Preserving Democracy and Protecting Free Speech,” in the Communications of the ACM journal. Professor Van Alstyne, did you wanna respond and get to the second step?

Marshall Van Alstyne: Sure. So, there are a couple of different points. If you’re really interested in the full details, that’s about a three- or four-page paper. It’s very hard to actually have that level of complexity in a three-page abstract.

Bob Corn-Revere: That must be why I didn’t understand it.

Marshall Van Alstyne: Yes. The longer paper is available to anyone who’s interested on SSRN, the Social Science Research Network, and it’s on free speech and the fake news problem. So, to address your points directly, it’s essential the government not be involved in content moderation. If you take a look, for example, at the Coasean solution in radio spectrum, government’s role is to grant the property rights, not to choose which programs are produced or at what level the broadcast should actually be. It only creates the property rights in the radio spectrum, which are themselves traded.

To give you an example in this case, for the point that you’re just making: the property right might be a listener’s right to choose the algorithms on the platform. So, as we mentioned at the very beginning, suppose that listeners had the expanded right to choose from BBC or Fox or Consumer Reports or Breitbart as their filter on top of the infrastructure. Government’s not making the content moderation decision at all. It’s the filter, and that’s the user’s choice. It’s the marketplace. But without the right to choose, no one’s creating those filters.

The only mechanism from the platform side is that they have to allow fair attachment, and their algorithm has to sit alongside the equivalent algorithms for others to choose from. In the same way that on Android you could choose your email package or your search engine, or on Apple you could choose among different mapping apps, you’d wanna be able to choose fairly among different algorithms on top of the marketplace. Government has no role in the content moderation except to grant users the right to choose.
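As a sketch of what user-chosen filters layered on top of a platform’s feed might look like, here is a minimal Python example. The filter names, the Post structure, and the selection rule are hypothetical illustrations, not any real platform’s API.

# Illustrative only: user-selectable moderation filters layered over a shared feed.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Post:
    author: str
    text: str

# A filter is just a predicate deciding whether a post is shown to this user.
Filter = Callable[[Post], bool]

FILTERS: Dict[str, Filter] = {
    "platform-default": lambda post: True,                # the platform's own moderation, offered as one option
    "no-shouting": lambda post: not post.text.isupper(),  # a stand-in for a third-party "safety software" filter
}

def render_feed(feed: List[Post], chosen: List[str]) -> List[Post]:
    # Show a post only if every filter the user selected allows it
    # (an intersection of filters; swapping all() for any() would give a union).
    active = [FILTERS[name] for name in chosen]
    return [post for post in feed if all(f(post) for f in active)]

feed = [Post("a", "Hello there"), Post("b", "BUY NOW!!!")]
print([p.text for p in render_feed(feed, ["platform-default", "no-shouting"])])

In this picture, the government’s only role is the one described above: guaranteeing that users may plug in filters of their choice alongside the platform’s default, not deciding what any filter allows.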

Nico Perrino: But isn’t that also a First Amendment violation? Because it creates a burden on the platforms, who have the right to editorial discretion. Bob? I mean, we’re focusing on the rights of the users here, but the platforms also have rights. And I suspect that the Supreme Court this term is going to double down on that. That is the argument, right?

Bob Corn-Revere: We’ll find out at the end of the term what they say about moderation and the First Amendment. But, yeah, that’s right. I mean, to begin with, and this gets to what I was alluding to earlier, it’s simply an ambitious proposal in that it would completely change the way in which the internet works. And what you would see is you would still have to have some overarching central authority to mandate that the platforms allow these other kinds of moderation policies to be imported, and, having worked at the FCC, what could possibly go wrong?

And then to make sure that those are interoperable and function as they should. But the other thing, too, given the sort of overarching complaints that people have made about everything the internet has brought us in general, and I think Congresswoman Matsui’s comments captured that perfectly, saying that the real problem is the internet and all that speech out there, is that one of the main complaints people have made about the internet and its effect on democracy is that it has allowed people to silo themselves and only have to hear information that is consistent with their pre-existing views, and that no one hears any kind of contrary information.

And if you are importing these kinds of moderation policies, whether they be from BBC or Breitbart or whoever, then you’re doing that very thing and empowering people to double down on isolating themselves with just the editorial content they wanna hear.

Nico Perrino: Yeah, Professor, could it make things worse by creating even thicker filter bubbles?

Marshall Van Alstyne: So, let me address both points Bob raised. I think both of them are interesting. So, again, the government’s role is to create the infrastructure. And the whole point in a marketplace is to have competitors report on the misdeeds of competitors, rather than have the government do that. So, it’s not as though you’re actually having to go check them too terribly closely. If one of the algorithms feels that it’s being mistreated, then it can report, and then you adjudicate. So, you’re adjudicating the right to be treated equivalently to another algorithm rather than what your content would be.

So, again, I wanna make sure that’s clear. I’m extremely sensitive to the second point about polarization and filter bubbles. That is, again, where we need to go back to Coase. All harms, Coase recognized, are joint choices by source and destination. And so if you expand one set of rights, you need to understand the best way to trade those rights. What you’ve just done is identify that the ability to choose a specific set of algorithms is equivalent to a right to hear and a right to not hear. At the same time, we then need to increase the speaker’s right to be heard.

So, I would pose back to you the question, put it as a hypothetical, how do you adjudicate the balance between the right to not hear and the right to be heard? I’ll pause there and see if you have any thoughts on that.

Bob Corn-Revere: Well, and that’s the point at which I got confused reading, admittedly, the abbreviated version of your paper. And that is, there are certain requirements that you would include to evade the moderation policies, and that is certified information. For example, if someone’s willing to take the risk and say, “I guarantee my information is true,” therefore it gets to evade the filtering. Again, you’re talking now about what appears to be a tremendous control infrastructure for managing how platforms allow information, allow filtering, and all of that. All of that subject to management.

And one practical consideration, having worked in government for a time in my past, and that is there are always going to be questions about who qualifies for this system. Which moderation standards, which algorithms are you going to “certify” to allow to be part of this system? That’s always going to be part of the rules, and that always is going to be subject to various kinds of manipulation. So, I think the possibility for all kinds of unintended consequences of having this kind of radical restructuring of how the information environment works creates a lot of possibilities for First Amendment problems and other problems, practical problems, in addition to the ones that you identified, Nico.

Marshall Van Alstyne: So, Bob, I wanna say again, remember, I wanted to start with the theory of why it should work, and then how to build it. Okay, so again, I wanted to layer these two things, one above the other. Returning to theory, then, let’s see if I can get back to some of the specifics. Again, the beauty of Coase, understanding what he said, was that you want a balance of rights. In this case, I’m simply trying to expand listeners’ rights and expand speakers’ rights. And my answer to the question I posed to you, of when you would allow a speaker’s right to dominate a listener’s right, a speaker’s right to be heard to dominate a listener’s right to not hear, is a very simple condition.

You agree to exercise your right responsibly. What does it mean to exercise that right responsibly? It means that you are on the hook for something. To give you a broad metaphor for it, everyone on this radio broadcast will have heard the expression, “You don’t have the right to falsely shout fire in a crowded theater so as to cause a panic.” What’s fascinating about this is, guess what? You would gain that right, but you’d have to pay for it. You’d be responsible for it. In effect, it’s about as libertarian as you could possibly get, in the sense that you could say whatever you wanted.

But if you are making misstatements in order to impose them on people that don’t want to hear you, then you are going to be responsible. You’re gonna be liable, if you will, for those misstatements. In some sense, it’s an attempt to thread the needle. If we take a look at the court cases out of Texas and Florida, in Texas, they’re trying to ban viewpoint discrimination. God knows what that’s going to be. That’s going to be extremely hard. But the underlying issue is that certain groups feel they’re being discriminated against by the choices of the platforms.

This would give them the right to be heard. But honoring the arguments from the other side, it’s conditional on accepting responsibility for what people hear you say. If damage is caused, you’re then gonna take ownership of it. And then it’s a question of how we make this happen. Okay, so it balances the right to choose to be free, pollution-free, to have the information streams that are gonna serve you best. We address the filter bubble problem by granting speakers the right to be heard, conditional on exercising that right responsibly.

Then we’ve got to look at the implementation details, and notice that no central party is involved. Not even the platform. By design, there is no central authority whatsoever. It is a market-based, decentralized solution.

Nico Perrino: Well, I have two clarifying questions for you, and then I’ll let Bob get in here. When you say you’re responsible, are you talking about the users or the platforms, or both?

Marshall Van Alstyne: The speaker. Whoever it is that’s making the claims. Go back to some of the –

Nico Perrino: So, this wouldn’t implicate Section 230 then?

Marshall Van Alstyne: So, in some ways, it absolves the platforms almost completely because it’s the speaker. The only thing the platform has to do is to agree to carry the protocol. If they’re willing to carry the protocol, then the speaker then is gonna take the liability for the claims that they make.

Nico Perrino: And then when you say “agree,” this would be a voluntary arrangement?

Marshall Van Alstyne: Completely voluntary. Completely voluntary.

Nico Perrino: Bob, did you wanna get in?

Bob Corn-Revere: I’m not sure if I’m any less confused than I was before about how this would actually work. It sounds like the platforms are going to lose the ability to do their own moderation in ways that allow them to create the communities and the kinds of atmosphere that they want, and have people who like that particular approach gravitate toward those. And I still don’t see how just the billions of moderation decisions that necessarily must be made are going to be policed when you have misinformation or disinformation out there,

and when people have made their certifications that information is accurate, how are you going to enforce this web of rights and responsibilities that you’ve hypothesized?

Marshall Van Alstyne: So, two thoughts on that. The first one is they don’t lose the right to do content moderation at all. Matter of fact, I suspect they would offer their own content moderation algorithm, in the exact same way that Google offers mapping on top of Android or search on top of Android. So, on Facebook, you’d get the Facebook content moderation algorithm as one of the main options, and on X, the Twitter algorithm, which would likely be popular. The moderation they’re already providing would just be in competition with others, so they wouldn’t lose that option at all.
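To make the “choose your own moderation algorithm” idea concrete, here is a minimal sketch in Python under stated assumptions: the filter names, rules, and interface are invented for illustration and are not any platform’s actual API.

```python
# Hypothetical sketch: the platform carries a common protocol, and the listener,
# not a central authority, picks which moderation filter shapes their feed.
from typing import Callable, List

Filter = Callable[[str], bool]  # returns True if the listener wants to see the post

# An illustrative "market" of competing filters, including the platform's own.
FILTER_MARKET: dict[str, Filter] = {
    "platform_default": lambda post: "spam" not in post,
    "strict": lambda post: all(w not in post for w in ("spam", "unverified")),
    "everything": lambda post: True,
}

def feed_for(listener_choice: str, posts: List[str]) -> List[str]:
    """Apply whichever filter the listener selected to the raw stream of posts."""
    chosen = FILTER_MARKET[listener_choice]
    return [p for p in posts if chosen(p)]

posts = ["news item", "spam offer", "unverified rumor"]
print(feed_for("platform_default", posts))  # ['news item', 'unverified rumor']
print(feed_for("strict", posts))            # ['news item']
```

The design point is simply that the platform’s own algorithm remains one option among several, competing for listeners rather than being imposed on them.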

Let me see if I can give you an illustration of, theoretically, how this might work, a kind of practical example, to see if it clarifies the discussion. You and probably most of the audience are familiar with the Hunter Biden laptop story, right? So, roughly speaking, what happens is someone presenting as Hunter Biden shows up with three water-damaged laptops at an IT repair shop and never comes back to reclaim them. Oh, and by the way, strangely, the IT repairman is legally blind, so he can’t identify who the person was. And he is a very strong right-wing radio listener.

So, because the person never comes back, the property becomes the repairman’s. He finds incriminating evidence: personal pornography, drug use, and Ukrainian emails. He gets this over to the New York Post, which then publishes a story. Twitter and Facebook suppress it as a possible Russian disinformation campaign. Then Elon Musk buys Twitter and releases the Twitter Files as evidence of possible suppression of conservative voices. Now suppose this Coasean mechanism were a possibility here.

Let’s suppose that New York Post is not willing to certify that their story is true. Notice there is utterly no friction on any listener that wants to hear it. So, New York Post subscribers get it, Fox News subscribers get it, Breitbart is free to get it. But then Twitter and Facebook are free not to carry it at their own discretion because New York Post wasn’t willing to vouchsafe that it was valid content. So, they can’t complain of suppression of conservative voices. Now flip it.

Suppose that they were willing to vouchsafe the story, and I take no position as to whether it’s true or false; I simply play out the different branches of the decision tree. If they did vouchsafe the story, then it would have to be carried, it would be presented. They could own the libs, and anyone that received it could challenge it. If a challenger can provide evidence that it’s false, then they could actually get whatever bond or warrant was placed at stake. And the bond should be larger if you’re reaching a million people than if you’re reaching 10 people, since you’re creating a bigger externality in a Coasean sense.

The most fascinating element of this is, who is the best person to know the truth of that story? Hunter Biden. If Hunter Biden challenged the story, he could provide the evidence. If he doesn’t challenge it, then it’s probably true. My point is simply that a market mechanism causes the sources of information to speak up in a form that lets everyone have it, with no suppression, no censorship at any point in the chain.
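Here is a minimal sketch of the bond-and-challenge mechanism described above. It is an illustration under assumptions: the bond amounts, the log scaling by audience size, and all names are invented for this example, not part of the actual proposal or any platform.

```python
# Hypothetical sketch: a speaker certifies a claim by posting a bond scaled to
# the audience reached; a successful challenge pays the challenger, otherwise
# the speaker recovers the bond.
from dataclasses import dataclass
import math

@dataclass
class CertifiedClaim:
    speaker: str
    text: str
    audience_size: int       # how many listeners the claim reaches
    base_bond: float = 10.0  # assumed minimum stake

    def bond(self) -> float:
        # Larger reach -> larger externality -> larger stake.
        # Log scaling is an arbitrary illustrative choice.
        return self.base_bond * (1 + math.log10(max(self.audience_size, 1)))

def settle(claim: CertifiedClaim, challenged: bool, shown_false: bool) -> str:
    """Decide who receives the bond once the claim's challenge window closes."""
    if not challenged:
        return f"{claim.speaker} recovers {claim.bond():.2f} (unchallenged)"
    if shown_false:
        return f"challenger collects {claim.bond():.2f} from {claim.speaker}"
    return f"{claim.speaker} recovers {claim.bond():.2f} (challenge failed)"

# A story certified to a million readers carries a larger stake than one
# certified to 10 readers, reflecting the bigger externality.
big = CertifiedClaim("outlet_a", "story text", audience_size=1_000_000)
small = CertifiedClaim("outlet_b", "story text", audience_size=10)
print(settle(big, challenged=True, shown_false=True))
print(settle(small, challenged=False, shown_false=False))
```

The honest speaker gets the stake back, so certification costs nothing when the claim survives challenge; only misstatements imposed on unwilling listeners end up being paid for.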

Bob Corn-Revere: Well, I think this gives you a sense of how ambitious your proposal is, and again, it leaves me with a number of questions. One is that when you say the information has been vouchsafed, then people must carry it. That goes to the question I had earlier about whether or not this would entail some kind of fairness doctrine-type policy that requires platforms to carry the information. And this apparently is one way that this would work.

And the other one is about people who are willing to put up some kind of financial guarantee, or some other guarantee, of the truth of the information they have. You’re talking about really fundamentally rewriting what kinds of protections you have for whether or not something is true. And it’s usually more nuanced than simply, is it true or is it not? Again, I have no opinion about the Hunter Biden laptop story, but I have a sense that there’s a lot about it that may be true and a lot about it that may not be. I mean, people argue back and forth.

Nico Perrino: We saw that with COVID as well.

Bob Corn-Revere: The Wuhan lab leak theory, for example. And so you have a lot of things out there in this grander debate about misinformation and disinformation that, depending on nuance, depending on your perspective, becomes really hard to sort out and I think would really be hard to implement as part of something where you are providing some kind of guarantee for the truth of your information. There’s always a way to challenge.

Nico Perrino: Forgive me, professor, do you have another 10 minutes? I wanna be respectful of your time.

Marshall Van Alstyne: Sure, sure. So, Bob makes a wonderful point there, but he’s the lawyer, so I’m going to defer to his view on some of this stuff. Actually, the degree of certainty, I think, is a concept already used in the law, and I would simply borrow from the law. So, correct me if I’m wrong on this, but my understanding is that in court cases we use different levels of certainty. There’s preponderance of the evidence, there’s clear and convincing, and there’s beyond a reasonable doubt. Why not adopt the existing standards on exactly those kinds of things?

You could certify to whatever level you as the speaker chose. One of the beautiful elements of this is, if you know it might be challenged, you’ll be very careful how you choose to word it. So, it causes the speaker to be more cautious about what they claim, for purposes of withstanding a challenge. It would actually give us a more sensible level of debate, as opposed to intentional ambiguity for purposes of misinterpretation. Let’s use the existing standards; we’ve already got three that are in widespread use.

Nico Perrino: Bob, do you wanna quickly respond? And then, professor, I would like to close by asking you about one of your other proposals, surrounding a duty of care standard that might be incorporated into Section 230 and that I think is making its way, kind of, into the courts. But, Bob, first.

Bob Corn-Revere: Well, it just sounds like the speaker with the larger checkbook has the greater ability to vouchsafe whatever information they wanna put out there.

Marshall Van Alstyne: Let me interject. Absolutely not correct. That one I want to completely disagree with. The larger checkbook, right now, is the one with the advertisers. The beauty here is that the bond comes back to you if you’re telling the truth. So, the honest party is actually better off. One of the things we’re running experiments on in mock marketplaces is whether new entrants, in politics or as firms, have an easier time because it’s cheaper, because it’s like a free ad in the sense that you get the resource back if you’re telling the truth.

So, I wanna completely disagree with the larger checkbook statement. I think it’s actually the opposite.

Nico Perrino: I wanna close here by asking you, professor, about another proposal from a separate paper. This is an article in the Harvard Business Review, “It’s Time to Update Section 230,” and forgive me if there is a longer paper associated with it, as I may have missed that, but I wanna read a little bit from it. “How might Section 230 be rewritten? Legal scholars have put forward a variety of proposals, almost all of which adopt a carrot-and-stick approach by tying a platform’s safe harbor protections to its use of reasonable content moderation policies.

A representative example appeared in 2017 in a Fordham Law Review article by Danielle Citron and Benjamin Wittes, who argued that Section 230 should be revised with the following changes: No provider or user of an interactive computer service that takes reasonable steps to address known unlawful uses of its services that create serious harm to others shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.”

So, it’s a little bit longer than 26 words, and it essentially proposes a duty of care standard. And you continue in the article: “The duty of care standard is a good one, and the courts are moving toward it by holding social media platforms responsible for how their sites are designed and implemented. Following any reasonable duty of care standard, Facebook should have known it needed to take stronger steps against user-generated content advocating the violent overthrow of the government.

Likewise, Pornhub should have known that sexually explicit videos tagged as 14 Y/O had no place on its site.” Bob, I wanna let you respond here, because I do wanna ask, kind of in closing, where the courts currently stand on Section 230. Are you seeing this argument make its way into the courts, and how have they interpreted it?

Bob Corn-Revere: Well, there are various ways in which people are trying to get past Section 230 by arguing various theories, whether it’s a product liability theory or –

Nico Perrino: The idea is that social media is a product and that any defect that creates harm would make the company liable.

Bob Corn-Revere: In that case, for a product, you could argue that strict liability applies. There are various other theories that have been used to get around 230 by arguing, for example, that the platform is a content provider and not just a platform facilitating third-party content. But there is a difficulty with having a duty of care, and I was glad to hear Professor Van Alstyne praise Mike Masnick earlier, because he did a critique of this Harvard Business Review paper that talks about how the duty of care issue doesn’t really speak to how litigation actually operates.

Because it’s very easy to sue someone for creating a harm with their online platform, and you might actually win that case. But when you sort of create an open season for liability, it’s the smaller companies, the innovative companies, that fall by the wayside, even if they win those cases, because they’ve had to litigate cases that previously, under Section 230, would have been dismissed.

Marshall Van Alstyne: So, let me jump in with a couple of thoughts on those. On some of the original ideas, I was a fan of some of the work that Danielle Citron and others have done, but I’m also sensitive that some of my own thinking has moved on from that. There’s one idea there that I would like to build upon. The direction I really wanna move in is removing the center entirely, which is what I’ve been trying to do with these new, completely decentralized market designs, rather than simply spackling new layers of regulation onto existing incumbents.

I think, can we redesign an entirely better system that’s fully decentralized and avoids some of these problems entirely? The one piece that I think is actually missed in the Masnick critique and in some of the other critiques, and that really can and perhaps should be built on, is this: even in those cases of illegal content where platforms are currently liable, I think there’s a better way to view the problem that has not been proposed, and it’s one decent proposal I think we can and should use.

The problem with the current proposals is that if you hold platforms liable for any specific message that gets through, it’s an utterly impossible task. Small companies can’t afford it. And at that scale, if you’re getting hundreds of millions of messages each day, or each second, it’s just not possible. The solution to that problem, in an information or economic sense, is to treat it just like other pollution problems have been treated previously: you take a statistical sample.

To give you another really simple version of that: a doctor doesn’t take all your blood to check your cholesterol, she’ll take a few drops. You take a statistical sample. If you take a statistical sample, you can guarantee almost any level of accuracy that you wish. And then you might hold platforms accountable for illegal speech, incitement to violence, illegal drugs, sex trafficking. That is not protected. And you could actually use that, and you could even have progressive differences, so that small firms that are just starting out had very different rates than bigger firms.

If you use statistical sampling, then mathematically you can guarantee any level of certainty that you want under the central limit theorem. And so you can be quite confident of what level of pollution or damage is actually there just by taking a larger, unbiased sample. So, that’s the one idea I would retain from that, before trying to move to a better, decentralized solution without a center judging content at all.
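Here is a minimal sketch of the statistical-sampling idea, assuming an invented audit rule and toy data; the interval is the standard normal approximation for a sampled proportion, which is what the central limit theorem justifies. Nothing here reflects any real platform’s data or any specific regulatory threshold.

```python
# Hypothetical sketch: instead of adjudicating every message, audit a random
# sample and estimate the platform-wide rate of unlawful content with a
# confidence interval.
import math
import random

def estimate_violation_rate(messages, audit, sample_size, z=1.96):
    """Audit a random sample; return (estimated rate, margin of error) at ~95% confidence."""
    sample = random.sample(messages, sample_size)
    violations = sum(1 for m in sample if audit(m))
    p_hat = violations / sample_size
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, margin

# Toy corpus: roughly 2% of messages are flagged by a made-up audit rule.
corpus = ["unlawful" if random.random() < 0.02 else "ok" for _ in range(1_000_000)]
rate, moe = estimate_violation_rate(corpus, lambda m: m == "unlawful", sample_size=5_000)
print(f"estimated unlawful-content rate: {rate:.3%} ± {moe:.3%}")
```

A regulator could then compare the interval against a threshold, and, as suggested above, that threshold could be progressive, stricter for large incumbents than for small entrants, while a larger sample shrinks the margin of error to whatever level of certainty is wanted.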

Nico Perrino: And I will say, professor, the paper we were just discussing was from 2021, and the one you were discussing earlier, about the decentralized marketplace, came in 2023. So, you can see how the professor’s thinking has evolved here. But, Bob, I want you to respond to that, and we’ll kind of wrap up here in a moment. But also answer this question: in the absence of Section 230, would the First Amendment get to what Section 230 accomplishes anyway? Like, bookstores aren’t liable for the content in the books they sell, are they?

Bob Corn-Revere: Well, let me take your second question first, because you’re right, the First Amendment would play a central role simply because Section 230 was fashioned to promote First Amendment principles. But it’s not exactly the same thing as the First Amendment. It is more like an anti-SLAPP law in that it is an early dismissal mechanism for cases, to preserve those First Amendment values. Anti-SLAPP laws are ones where, if someone brings a lawsuit designed to shut down your political speech, for example, most states provide an early dismissal mechanism for those kinds of suits to be protective of speech.

Nico Perrino: And SLAPP stands for strategic lawsuits against public participation.

Bob Corn-Revere: Exactly. And Section 230 works in much the same way to preserve the ability of platforms to carry third-party speech online. It provides a way to dismiss cases at an early stage without having to go through costly litigation and discovery and all of those things. Again, I wanna get back to Professor Van Alstyne’s proposal, because I do give him a lot of credit for having evolved his thinking from the earlier paper in the Harvard Business Review to talk about decentralized mechanisms.

And while I appreciate the idea of using sort of a statistical accuracy model to try and get at a desired level of accuracy, I don’t think you can really compare speech, which is highly nuanced and not as easy to categorize as, say, pollution or some other physical problem that you’re trying to reach a statistical measure of accuracy for. As I mentioned in the earlier examples of Hunter Biden’s laptop and the Wuhan lab leak theory, it can often be really hard, and it can’t be done by separating it from the specific case you’re talking about.

And that’s true of all the categories of unprotected speech, whether you’re talking about incitement, defamation, you name it. And so that’s where I think, in practical application, when you’re trying to apply these ideas to protected speech, it might not be a very good fit.

Nico Perrino: I wanna close here by asking, I’ll start with you, Professor Van Alstyne, where you think Section 230 will go from here, what will happen, and where you think it should go. We’ve already talked a little bit about the decentralized marketplace, but if you wanna talk a little bit about Section 230 in particular.

Marshall Van Alstyne: So, I’m the economist, not the lawyer. I’ll tell you where I think it should go, rather than where it’s actually going. So, the irony is, I actually agree with Bob that outright repeal or sunsetting is not a good idea. Where I somewhat disagree is that I think it does need some tweaking around the edges. And my own proposals are to try to create these decentralized systems where users get more power to choose for themselves, in ways that I think can actually help address the pollution problem.

So, I think we should allow users to choose the algorithms that will help their own filtering. And so we wanna increase the listeners' rights, and I think that’s often left out of the conversation. We all talk about the speaker’s rights, but not the listener’s rights. And then I also think we need to address this filter bubble polarization problem by giving the right to be heard, but conditional on accepting responsibility for what you say. And we then need to find the best mechanisms to make that happen. We want a healthy internet.

We should be proud of what we’ve accomplished in the United States. That’s not to say there aren’t problems. There are some serious problems. And I think if we are clever in the use of some of these theories, to possibly implement some institutions that don’t yet exist, we may actually have a better solution that’s totally decentralized, with no government interference and no private interference either. Maybe that’s what we should really hope for in the long run.

Nico Perrino: Bob?

Bob Corn-Revere: Well, again, I applaud Professor Van Alstyne for this really creative thinking. Maybe I’m having a hard time grasping it because I’m just a poor country lawyer and not an economist. But I think that’s the kind of approach to solving these problems that will help us find better solutions than the sort of blunt-force, politicized, yes-or-no, let’s-tear-down-the-law ideas that we’re getting on Capitol Hill. I think having discussions like this and looking for ways to deal with the problems of speech online is the way to get at it, and I would agree with you, there are problems to deal with.

There will always be problems to deal with, particularly when you’re dealing with a communications medium of this scope and power. And these things are not easy.

Marshall Van Alstyne: It’s a really hard, really interesting problem, and I’m hoping together we can find some better solutions.

Nico Perrino: Professor Van Alstyne, thanks for coming on the show.

Marshall Van Alstyne: It was a pleasure. Thanks for having me.

Nico Perrino: Bob, thanks as always for being here.

Bob Corn-Revere: Thanks, Nico.

Nico Perrino: That is Professor Van Alstyne. He is a professor of information systems at Boston University. And of course, Bob Corn-Revere is FIRE’s chief counsel. I am Nico Perrino. And this podcast is produced by Sam Niederholzer and myself. It’s edited by a rotating roster of our FIRE colleagues, including Aaron Reiss and Chris Maltby. You can learn more about So to Speak by subscribing to our YouTube channel or our Substack page, both of which feature video versions of this conversation. We’re also on X; you can find us by searching for the handle free speech talk.

If you have feedback, you can send it over to sotospeak@thefire.org. We also take feedback in the form of reviews on Apple Podcasts and Google Play. Reviews are the best way for you to help us attract new listeners to the show. And until next time, thank you all again for listening.
