So to Speak Podcast Transcript: A tech policy bonanza! The FCC, FTC, AI regulations, and more

STS Episode 237

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: Alright, folks. Welcome back to So to Speak: The Free Speech Podcast, where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. I am, as always, your host, Nico Perrino. And today we’re gonna look at the world of tech free speech news. Got a distinguished panel here with me around the table in FIRE’s DC podcast studio. Ari Cohn, how many times have you been on this podcast now?

Ari Cohn: This is only my second time.

Nico Perrino: This is only your second time?

Ari Cohn: Yeah.

Nico Perrino: Well, we have those conversations every morning during our rapid response meeting.

Ari Cohn: Yeah.

Nico Perrino: So, I feel like we're talking about the issues we're gonna talk about today much more frequently than that. Ari, of course, is FIRE's lead counsel for tech policy. Ari, welcome back.

Ari Cohn: Thanks for having me.

Nico Perrino: We also have Adam Thierer, who’s resident senior fellow for technology and innovation at the R Street Institute. First time on the show, Adam. Welcome.

Adam Thierer: Yes, indeed. Thanks for having me.

Nico Perrino: And we have Jennifer Huddleston, senior fellow in technology policy at the Cato Institute. Jennifer, welcome onto the show.

Jennifer Huddleston: Thanks for having me.

Nico Perrino: Another first-time guest.

Jennifer Huddleston: Yeah.

Nico Perrino: All right. So, where should we start here? You guys okay starting with Section 230?

Adam Thierer: Sure.

Jennifer Huddleston: Sure.

Ari Cohn: Sure.

Nico Perrino: Let’s do it. On Sunday, February 23rd, Federal Communications Commissioner Anna M. Gomez posted an X thread. She wrote, "I'm seeing reports that the FCC plans to take a vague and ineffective step on Section 230 to try and control speech online." She continues, "An advisory opinion on Section 230 is an attempt to increase government control of online speech. It is meant to bully private social media companies to comply with direct demands from the administration."

This, I think, comes on the heels of an exclusive report from the New York Post which came out on February 22nd. And that reporter said that the FCC, with a new GOP majority led by Chairman Brendan Carr, is a top regulator of media, new and old. It has the legal authority to interpret Section 230 and change the prior guidance that has given those expansive protections to Big Tech. And then he says that Brendan Carr can weaken or eliminate the shield by issuing a so-called advisory opinion. I think that's where Commissioner Gomez chimed in.

Ari, before we start on what authority the FCC has to regulate Section 230, can you just tell our listeners what Section 230 is?

Ari Cohn: Sure. Section 230 is a 1996 law that is responsible for the internet unfolding and developing the way that it did. It has two primary provisions, and this is going to be important in this conversation because Carr does not seem to acknowledge that one of them exists. The first prohibits a – I'm sorry. The first precludes treating basically let's say a social media platform, just for simplicity's sake so I don't get into the terminology here, as the –

Nico Perrino: I've got the definition here too, so we'll see how you do.

Ari Cohn: – okay – as the publisher of any information posted by another information content provider, for example, a user.

Nico Perrino: So, this is (c)(1).

Ari Cohn: Yes, that's –

Nico Perrino: It says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” So, if I post on Meta Platforms something that's defamatory against someone else, I'm responsible, not Meta. Right?

Ari Cohn: Right. And not only that, but if I retweet that, I am also not responsible because it protects users as well as the platform.

Nico Perrino: Oh, interesting.

Jennifer Huddleston: And I would just like to jump in here. And while it's easy to think about this in the social media context, one of the things that came up, particularly when we were seeing some of the litigation by NetChoice and CCIA regarding the Florida and Texas laws around content moderation, is how much broader user-generated content is on the internet.

So, social media is the way that we commonly hear Section 230, but it's also particularly critical to things like review sites. And you can imagine, again, thinking about somebody who doesn't like a review, how in a world without Section 230 you could quickly be sued for hosting a bad review even if it was true.

Nico Perrino: So, this is Amazon and posting something about a review of –

Jennifer Huddleston: Yelp –

Ari Cohn: The comment section of Wall Street Journal.

Jennifer Huddleston: – Tripadvisor, the comments section of a blog post. It's also things like Etsy, and that was certainly something that came up in –

Ari Cohn: And eBay. Yeah.

Jennifer Huddleston: – eBay, etc., where you see listings posted that may be user-generated.

Ari Cohn: That's a really, really good note because those kinds of issues do sometimes get lost in the sexy stories about social media censorship or what have you.

Nico Perrino: So, that's (c)(1), right. So, these platforms are not liable for the content that's posted on their platforms by their users. What does (c)(2) mean?

Ari Cohn: (c)(2) has two provisions. We're going to ignore (c)(2)(b) because everyone else does. (c)(2)(a) provides another protection. This provides protection against liability for actions taken to remove or restrict access to content that is – let's see if I can remember all of the categories.

Nico Perrino: Obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. Otherwise objectionable I assume covers –

Ari Cohn: Everything.

Nico Perrino: – covers everything. So, this is content moderation.

Ari Cohn: Well, but here's the thing. Ninety-some percent of content moderation cases are actually decided under (c)(1) because the interpretation given to (c)(1) is to protect the traditional editorial functions. So, whatever a publisher traditionally does you can't be liable for, which includes decisions whether to post, remove, modify, restrict, what have you.

The way that (c)(2)(a) comes into that – because a lot of times what you hear is that, oh, well, why do you need (c)(2)(a) if (c)(1) covers all that stuff? If you look at (c)(1), it covers only information provided by another information content provider. So, if the social media platform is for some reason responsible in part for the creation or development of that content, they cannot get protection under (c)(1). What (c)(2)(a) does not have is that requirement.

So, if, say, a platform is responsible in some sense for creating or developing some sort of tortious or unlawful content, they can still take action to, say, take it down without being exposed to liability for that action of taking it down. It's a belt and suspenders.

Nico Perrino: Got it. Got it. So, Adam, does the FCC, the Federal Communications Commission, have authority to implement Section 230, so to speak? Is it the regulatory authority for this law?

Adam Thierer: No, it's not. And, you know, let's step back here because, first of all, I was lucky enough to be present at the birth of Section 230 working in the mid-1990s at the Heritage Foundation and worked closely with Chris Cox and Ron Wyden's office on a whole variety of issues. And the thinking back then was that the Telecommunications Act was about getting the government out of the business of regulating communications in media, and the thrust of it was very much market-oriented, deregulatory in context.

And Section 230 was really, in my opinion, sort of the – it's the policy secret sauce that unlocked the digital revolution. It basically gave us, not only the greatest outpouring of human speech and expression in history, but also economically speaking put us in the driver's seat as a nation in the digital revolution.

Nico Perrino: So, you have people retiring on Section 230 is what you're saying.

Adam Thierer: Basically. I mean, you're talking about in 2022 the Bureau of Economic Analysis found that it constituted 10 percent of GDP, trillions of dollars, $4 trillion of gross value added output, millions upon millions of jobs. More importantly again, this is why the United States leads.

And a really important forgotten part of the importance of Section 230 is that America has many benefits, but we have one of the most litigious legal systems in the world when it comes to filing frivolous lawsuits, and this really helped make sure that didn't happen. And it especially made sure it didn't happen for small and medium-sized enterprises, which, of course, flourished in the resulting digital environment. But getting back to your point –

Nico Perrino: Yeah.

Adam Thierer: – no, the FCC is not supposed to be utilizing Section 230 as a regulatory cudgel to basically take down enemies.

People have asked me, "How do you summarize the Trump administration's approach to Section 230 and communications policy more generally?" And I said, "Have you ever heard the old Klingon proverb that revenge is a dish best served cold?" because that's basically what it's all about. It's like vendetta politics. They're gonna use whatever instruments are at their disposal, including ones that don't have anything to do with regulating, to try to indirectly regulate or to browbeat people into submission, and that's what I think's going on here.

Nico Perrino: So, are you saying that Brendan Carr is wrong when he writes – and I'm looking at the Project 2025 document from the Heritage Foundation – when he writes that Section 230 is codified in the Communications Act, and the FCC has authority to interpret that law and, thus, provide courts with guidance about the proper application of the statutory language?

Adam Thierer: Well, I'm gonna let these two take on some of the more technical – but I'll say this. What Commissioner Carr – Chairman Carr – is ignoring is the broader thrust of the act within which it sits and the fact that this was not meant to be this sort of regulatory sledgehammer that he and others in the conservative world are trying to turn it into now. So, just the general thrust of where he's heading with this is problematic and deserves attention in its own right.

And why in the world are conservatives the ones leading the idea of empowering the FCC to do these things? I was part of, not one, but two different projects with conservative groups to abolish or downsize the FCC. This used to be the normal state of affairs in the conservative world, like, "We need to get rid of this analogue-age relic." It's a dinosaur.

Ari Cohn: LCIAs.

Adam Thierer: It is the source of our problems. It created media scarcity and limited competitive options. And here we have today conservatives empowering this agency to try to be more aggressive and do more regulating than ever before. That's insane.

Ari Cohn: And Section 230, it's self-executing. You don't need to do anything for Section 230 to be "implemented." A court looks at it and says, "This applies," or, "This doesn't apply." There is just simply nothing for the FCC to do.

Nico Perrino: So, the FCC doesn't give guidance on Section 230 then?

Jennifer Huddleston: Not currently. And I think that –

Adam Thierer: Not ever.

Jennifer Huddleston: Not currently and not to date. And I think that there would be some – I know this is a free speech podcast, but I've gotta play admin law junkie for a minute here, if it's okay.

Nico Perrino: All right. All right.

Jennifer Huddleston: I know. I know. I'm sorry. We gotta get into that boring part. But I do think it's important to look at what has happened with net neutrality and the FCC if we wanna see why, separate from the free speech concerns, this is a risky path for an agency to consider going down. When you've seen the kind of revitalization of things like major questions doctrine, the death of Chevron deference with Loper Bright, there seems to be an indication that the courts are not okay with agencies these days stepping beyond their normal bounds of authority and that they want clear congressional delegation.

What's really interesting about Section 230 is we've got the authors still around, and they've kind of talked about this and whether or not they think the FCC has authority over Section 230. And I'm not saying that always has to be controlling, but if you look at the original legislative record, you have "We do not want a federal computer commission." And then you also have, when this came up the first time in the prior Trump administration, Senator Wyden and former Representative Cox filing comments saying, "You do not have this authority under the law that we wrote." So, I think that at a minimum you would see this challenged pretty significantly on administrative law grounds.

From a kind of philosophical point of view, you also have a weird conflict in that people who are advocating for overturning net neutrality and removing the FCC from one layer of the internet, in part on speech-related grounds, are looking at potentially inserting the FCC into this other element of the internet that has been incredibly useful for speech, including for conservative speech.

In the last few months alone, we've had Elon Musk speaking about how important Section 230 was to X and its ability to make changes to content moderation and to have that kind of platform that it wanted. But then you can also think about things like Trump's own Truth Social or other platforms that have sprung up on both the left and the right as alternatives.

We often hear about Section 230 in the context of Big Tech, but it's incredibly important to more niche platforms, platforms that are serving a very specific audience that are going to have perhaps content moderation rules that are different than the norm. And Section 230 actually enables competition that way because, while I know we have a lot of lawyers at this table right now, it means if you're a startup in Silicon Valley or anywhere in the US who wants to host user-generated content for a set of voices that you feel are underserved, your second hire doesn't have to be a lawyer who's gonna tell you what to do when somebody uses your platform for something you didn't plan on it being used for.

Ari Cohn: Maybe still should be, but you know.

Nico Perrino: Well, Ron Wyden said this.

Jennifer Huddleston: Doesn't have to be.

Adam Thierer: You don't actually necessarily even have to get to net neutrality – although, that is a very salient point – to see the kind of internal incoherence here, at least from a free speech standpoint. What you have here are government officials saying, "We believe in free speech. And what's the best way for us to foster free speech? Government intervention." I mean, it's like the classic "I'm from the government, and I'm here to help."

Nico Perrino: Right. To Jennifer's point, Ron Wyden, I believe after this New York Post article came out, said, "The reason former Representative Christopher Cox," who I've had on the show before, "and I wrote this was for small users, people who didn't have PACs and clout and influence. And I still feel that way."

But Brendan Carr, speaking to the conservative censorship on social media issue, is arguing that section – what is it? – (c)(2)(a) says that any action voluntarily taken in good faith to restrict the availability of material that the provider or user considers to be obscene, lewd, that whole list we went through before – what he's arguing is that, over the past decade, you have these Big Tech platforms run by our liberal overlords who are censoring conservative speech. They're doing it arbitrarily. They're doing it capriciously. They're doing it without a lot of transparency.

And what he's arguing is that this is not done, therefore, in good faith, and Section 230 says that these moderation actions need to be taken in good faith. Why is he wrong?

Ari Cohn: That definition of good faith itself requires violating the First Amendment, even in the one area where good faith is actually being used. And, again, keep in mind that actually most of these content moderation court cases are decided under (c)(1).

Nico Perrino: Yes, again, to your earlier point.

Ari Cohn: So, Brendan Carr is conveniently ignoring the vast majority of these –

Nico Perrino: But the (c)(2), it's there. It's there. It says good faith.

Ari Cohn: But even if he's right that content moderation cases are under (c)(2)(a), the one area where (c)(2)(a) has been effective in terms of getting rid of bad faith things is when it comes to anticompetitive animus – when you're removing something, not because you actually think it's otherwise objectionable, but because you are trying to, say, harm a competitor.

Nico Perrino: So, you're X, and Apple posted something on your platform announcing a new product. And you're trying to undermine that product because you see it as competition for your product.

Ari Cohn: And the precedent behind it is not entirely fleshed out and clear. It's pretty much come up like once, once or twice. And if you try to broaden that to what Brendan Carr is talking about, what does that entail? So, look at what you would be talking about proving in court. Say it's a "hate speech" content rule.

Nico Perrino: Sure.

Ari Cohn: So, what Brendan Carr is going to be arguing in court is that your definition or application of hate speech rules is in bad faith because I don't agree with it, which requires necessarily –

Nico Perrino: Defining hate speech.

Ari Cohn: – defining hate speech –

Nico Perrino: Okay.

Ari Cohn: – and then imposing whatever the government's preferred definition of hate speech, or the government's preferred definition of the consistent application of hate speech, on the private party whose editorial discretion it is in the first place. You are by necessity supplanting the private editorial decisions of a social media platform, say, with the government's. And if there's anything that violates the First Amendment, it has to be that.

Jennifer Huddleston: And if I could just jump in really quick –

Nico Perrino: Yeah, go ahead.

Jennifer Huddleston: – because I think we're focusing on one element of what content moderation is, which is certainly important, which is the debate over what gets taken down. But content moderation isn't just a debate about what gets taken down; it's also a debate about what gets left up. And the reason I –

Ari Cohn: And to what extent it gets received.

Jennifer Huddleston: And to what extent it gets received, and all these other tools. I'm sure we may talk about some of those later on in this conversation.

The reason I wanna make this point now, though, and why I think it's so important, is for those who think that good faith hasn't been properly interpreted and who are somehow willing to allow the government to be the arbiter of good faith. The reason this should be so chilling as a First Amendment violation is to imagine if someone that you disagree with had that power. Because we all know once you give the government the power over speech it is very, very hard to get back. So, that could mean that –

Ari Cohn: Impossible.

Jennifer Huddleston: That could then mean that someone after Commissioner Carr – after Chair Carr – uses this, for the conservatives out there, to complain about conservative content getting left up, arguing that platforms are not executing on their hate speech rules in good faith nearly enough.

Nico Perrino: If we got rid of Section 230 or sunset it, would this result in more censorship on social media?

Ari Cohn: Almost assuredly, for one of two reasons. One being that it would then become just infeasible to host user-generated content at any kind of meaningful scale.

Nico Perrino: Because these platforms would be liable for what everyone says.

Ari Cohn: Yeah. And maybe one platform would try to stick it out for a while with a huge legal budget, but forget about any kinda competition between platforms because most of them will go under in a heartbeat. And two, the other meaningful thing that a platform could reasonably do is decide to simply prescreen every single piece of content and avoid anything that could potentially reek of trouble.

Nico Perrino: I wanna close out this part of the conversation and turn now to something, Adam, that you had tweeted about on, I believe, Friday. You had responded to something that Brendan Carr said on Twitter. Brendan Carr noted, "I have received complaints that Google's YouTube TV is discriminating against faith-based programming. These concerning allegations come at a time when American public discourse has experienced an unprecedented and unacceptable surge in censorship." And he concludes his tweet by saying, "I'm asking Google for answers."

I have the letter he sent on March 7th to the chief executive officers of the various Google companies, including YouTube, where he says that the FCC has authority to regulate multi-channel video programming distributors. Goes through some convoluted regulatory language to get there. Cites Section 230, says, "Likewise, Google offers a range of products that have benefitted from the protections contained in Section 230 of the Communications Act. With respect to those covered products, Google's conduct is only protected to the extent its actions, as relevant here, are taken in good faith."

So, he's looking at faith-based programming on YouTube TV. Adam, what was your thought on this?

Adam Thierer: Yeah, well this is quite silly. So, substantively speaking, if Google is really making an attempt to try to censor faith-based programming on YouTube, they're doing an absolutely horrible job. There's never been a platform in human history where more faith-based programming, religious programming in general, has ever been available freely to the masses. And this distinction between YouTube TV versus YouTube is irrelevant and dumb because, as long as you can go online and search, you can find these things. And what is the FCC doing regulating Google in general, right?

Ari Cohn: Not just that, but on YouTube all the stuff is free. On YouTube TV, you have to pay.

Adam Thierer: Right.

Ari Cohn: So, if there's more stuff available on YouTube, that's great.

Adam Thierer: And even if he wants to make –

Nico Perrino: Well, YouTube TV is what they call an over-the-top streaming platform.

Adam Thierer: Right. So, I could get into the technicalities of FCC law, media law, MVPDs, and all this jargon. But the bottom line is conservatives used to be in favor of getting rid of most of that stuff, of trying to eliminate these ridiculous, archaic, analogue-age media regulation rules.

Nico Perrino: And these rules, Adam, were they implemented because of broadcast scarcity, the spectrum scarcity?

Adam Thierer: That was part of it. But then we also got – the FCC got involved in the business of regulating cable and satellite television. They grew their mission. And it was loosely based on scarcity rationales, and then competitiveness rationales.

Nico Perrino: Okay.

Adam Thierer: The bottom line is that those rationales are all gone. It used to be that we could literally – and I used to do this in a publication that I wrote on media ownership issues. I used to go out and count cable channels. And we used to think for a while like, "Fifty. Wow, that's pretty great." And then it moved to like 500. And then it moved to thousands. Like, we're gonna stop counting now.

Nico Perrino: And you still couldn't find anything to watch.

Jennifer Huddleston: I almost feel like we need to explain to people what cable was at this point.

Nico Perrino: Right. Right.

Jennifer Huddleston: You used to, rather than going to your various streaming services, have this box or this cable that ran into your TV.

Adam Thierer: And it's all crazy now. What do we have these –

Ari Cohn: [Inaudible – crosstalk] [00:22:52].

Adam Thierer: It's like we're in this search for a metaphysical definition of: What is media? What is a channel? What is television? Who cares? We have choices.

Ari Cohn: I should have like three streams on one channel.

Adam Thierer: Yeah, that's all that matters, is that we have choice.

Jennifer Huddleston: I also wanna kind of point out something else with this letter that I think has to be part of this conversation, which is some of the conversation around jawboning. Because there are the actual kind of debates to be had over what authority the FCC has if it were to try and actually bring a claim against Google over YouTube or if it were to try and issue an advisory opinion. But then there's also this question of what happens when we kind of continue to see these kinds of letters. What kind of pressure, both informally and formally, are we seeing placed on private actors by the government around how they choose to host speech?

And what does that mean for the culture of free speech, not only when we see the US government doing that, but when we see governments around the world who might not have the First Amendment, where we could sit here and say, "Well, if they did that, that could be challenged under that"? What does that mean in terms of how platforms are able to continue to offer these forums that have really benefited those who might not have traditionally had a voice?

Nico Perrino: But Chairman Carr says here he's just asking questions. People are asking him questions. So, he's asking Google questions. Right. What's wrong with that?

Adam Thierer: So, let's go back to a podcast you hosted recently with Ronnie and Bob here at FIRE that talked about regulation by raised eyebrow. This is all a shakedown racket. This is about threats and intimidation and jawboning. And regulation by raised eyebrow has a long and unfortunate history at the FCC where, knowing that the First Amendment provided a barrier to direct regulatory threats to companies and media outlets, you could go the opposite route, go indirect routes, and threaten at the margin.

And so, in the old days, it would be basically like CBS was considering running something in primetime. The FCC chairman would call them in the office and say, "You're going to run what tonight?" And he'd raise his eyebrow at them, and all of a sudden, it wasn't running. And so, censorship happened indirectly without any law or regulation being implemented.

And so, this is the long and unfortunate and ugly history of the Federal Communications Commission: threats and intimidation becoming a way to do backdoor censorship. And now they're just trying to extend it into the digital age, with threatening letters. You know, "Nice platform you got there. Shame if anything were to happen to it." It's mafioso politics.

Nico Perrino: So, hold on. To just put clarity to what Brendan Carr is alleging here. As an example, he tells Google, "Great American Media wrote a letter to me in which they claim that YouTube TV deliberately marginalizes faith-based and family-friendly content. Great American Media states that its Great American Family network is the second-fastest growing channel in cable television. And while they are carried on a range of cable and streaming services, including Comcast, Cox, Hulu, FuboTV, and DirecTV, YouTube TV refuses to carry them."

Ari Cohn: That's like one percent of the streaming services, by the way. That is not a long list of streaming services that carry it, for the record.

Nico Perrino: I subscribe to Sling, and I don’t see Sling listed there. Maybe they'll get a letter here soon enough. I don't know.

Adam Thierer: And Carr's undercutting his own point because he's pointing out all the other ways you can get this speech.

Ari Cohn: Yeah.

Jennifer Huddleston: Right.

Adam Thierer: Who cares if one of these doesn't – you know, this is a long part of FCC regulation going back to things we used to debate. Like, should we have the Tennis Channel on Comcast? And the tennis people would come and say, "We must have tennis." There's always gonna be a special interest or a constituent who says, "I want my stuff there, and I want it."

Ari Cohn: I want all 17 Big Ten overflow channels.

Jennifer Huddleston: Yeah.

Adam Thierer: Yeah, not only the one there, but I want preferential channel treatment. Like, I want the white glove treatment. Bring me up into the upper tiers and all this stuff. This has been going on forever. This all cuts back to editorial discretion and the question of what media companies should be allowed to do with their editorial discretion on a First Amendment basis. And the FCC should get out of this game entirely, not just for old analogue media, but obviously for new companies they have no jurisdiction over.

Ari Cohn: To get back to what both of you were talking about, jawboning, let's just also remember context here. There was an entire lawsuit filed because a candidate for president said that social media was killing people, and there was just a whole brouhaha over just that. And here we have an actual person in immense government power writing threatening letters; and the same type of thing that he himself complained about under prior administrations, and just crickets, complete crickets.

Adam Thierer: Yeah.

Nico Perrino: So, Brendan Carr criticized this sort of activity when he was not the chairman.

Adam Thierer: Yeah, I was just gonna say that. Yeah.

Ari Cohn: He went in front of House panels and whined about it for –

Adam Thierer: Weaponization.

Ari Cohn: Yeah.

Adam Thierer: Yeah, they kept going back to weaponization of regulation, and in one sense they were right. This is the thing. And getting back to a point Jennifer made earlier, it used to be that a lot of conservatives believed that, well, we don't wanna have these rules on the books when the next crowd comes into town. But unfortunately, they've kind of given up on that, and they said, "No, it's our turn. Turnabout's fair play." You know, revenge will be served and served cold. And this is that moment. And so, they've given up on principle, and they're now just down to just good old-fashioned vendetta politics.

Nico Perrino: Owning the libs.

Adam Thierer: Yeah, owning the libs. And in some cases, I don't even think that Carr and others necessarily wanna see a conclusion to this. It's just the whole process of making people grovel before them and hurt – the pain of it – and then the cheering that goes on from their base. And again, both sides play this game. The left plays this game too.

And at the end of the day, this grievance politics that we've so often seen infect the world of technology rarely has a full conclusion, judicially speaking. But in the short term, it can inflict a lot of pain and create a lot of attention, especially among your base.

Ari Cohn: It's very weird sadism.

Adam Thierer: Yeah, it really is.

Nico Perrino: And lead to multi-million-dollar settlements with the president of the United States.

Adam Thierer: Well, that's on your previous podcast.

Nico Perrino: Yeah, we talked about that a lot.

Adam Thierer: That is what's really scary. I'm really concerned about media companies, technology companies caving and signing these agreements to get out of the short-term pain.

Ari Cohn: Yeah.

Adam Thierer: Somebody's applying the –

Ari Cohn: But you're gonna cause them long-term pain.

Adam Thierer: Well, that's exactly it – and the broader world of free speech and the First Amendment long-term pain, right, because they'll say, "Well, now we've got precedent, people settling. They knew they were doing something wrong." Even though that's not what the settlement was about. They were just getting out of harm's way in the short term, and it has terrible long-term ramifications.

Jennifer Huddleston: And again, I wanna go back to some of the international dynamic here in places where we don't have a First Amendment. If you have companies making these deals with a government when they could have pushed back under the First Amendment, what's to stop a more totalitarian government from saying, "No, we want you to do XYZ"? And then you have the "Well, you settled with them. Why won't you play with us?" kind of conversation.

Nico Perrino: Well, this is the problem when you abandon principle, right, if you have no moral leg to stand on when authoritarian countries abroad do the same thing. This was my deep concern about the TikTok ban, right. You have Russia. You have China. You have all these authoritarian countries banning American platforms within their borders. And then here we go, a platform that people can voluntarily choose to use, banning it within our borders.

Adam Thierer: And one added component to this, Nico, which is really important, is that there is the added danger of sort of regulatory quid pro quos when a company has a merger going on or some other business before the FCC or any regulatory body.

Jennifer Huddleston: Right.

Adam Thierer: And then the sitting regulators start playing games like, "Oh, we see you've got something you need to have done over here. Well, it'd be nice if you could do us a little favor over here." And many, many conservatives wrote very long law review articles about what a bad idea that was – merger extortion and regulatory blackmail, they called it, and quite rightly so. But now, turnabout's fair play. They're all playing the same game now. It's just a question: Is our guy in power to help extract [inaudible – crosstalk] [00:30:36]?

Nico Perrino: So, Ben Smith, who runs Semafor, the news outlet, asked Chairman Carr about this on Thursday, February 27th. Aaron, if you would cut this clip into the audio and video.

Smith asks, “It sounds like the argument that you are making is that the Federal Communications Commission in the previous administration was acting against or at least looking into conservative media inappropriately, and that you want to right the ship by inappropriately looking into liberal media.” Carr, “What I'm saying is that the Federal Communications Commission is a place that operates by case law and by precedent. And these cases and precedents that were developed over the last four years were apparently not controversial when the Democrats were in charge, and I'm surprised that applying the same precedent is now considered controversial.”

Ben Smith, “You said at the time that you thought they were partisan and inappropriate basically, right?” Carr, “There are things that the FCC does that, once they do them, that creates precedent, that creates the case law, that creates the things that the Federal Communications Commission is supposed to follow. What we're doing right now is following the law that the Federal Communications Commission has developed over the last four years.”

So, it seems like Brendan Carr's interpretation of the Federal Communications Commission's authority has changed now that he is in power.

Ari Cohn: Imagine.

Nico Perrino: Isn't that the story as old as time?

Ari Cohn: Yeah.

Adam Thierer: That's right.

Nico Perrino: Any thoughts on that? Or does it speak for itself?

Ari Cohn: It's a startling admission, to be honest. Like, I would not be caught dead telling Ben Smith from Semafor on a C-SPAN broadcast that I have absolutely no principles. But here we are.

Nico Perrino: But is he wrong? Is he wrong?

Ari Cohn: Yes, he's wrong. Of course, he's wrong because FCC precedent can't override the First Amendment.

Adam Thierer: Right.

Jennifer Huddleston: Right. And I would add there, though, to those on the left who are pushing back against some of the uses of this in the current administration: recognize that this should be a principle-based approach in the pushback against it as well. In the sense that if you don't want the FCC under Carr to have this power –

Ari Cohn: Don't assume it for yourself. Yeah.

Jennifer Huddleston: – don't assume that you want it when it's the next time that it's a Democratic chair either.

Ari Cohn: And even maybe admit where there have been missteps in the past and overreaches.

Jennifer Huddleston: Right.

Ari Cohn: I mean, that could go a long way –

Nico Perrino: Yeah.

Ari Cohn: – towards actually creating that principled reaction. You say, "You know what? Yeah, we did a couple of things wrong when we had power." And even if it's a lesson learned late, it's still a lesson, and it's still something that can actually teach the public and other people the value of the principle.

Nico Perrino: The FCC is an independent agency, right, at least theoretically?

Ari Cohn: Yeah. To be seen.

Nico Perrino: Although, the administration is trying to eliminate the independence of some of these agencies.

Adam Thierer: Well, and we have a history that says that presidential administrations have utilized backdoor mechanisms to basically encourage FCC chairs to go after media companies with the fairness doctrine and other types of regulatory instruments. You know, the threat of license revocation is a death sentence in the world of traditional media for broadcast outlets. Now, luckily, newer media outlets didn't have those same licensing regimes, but the FCC –

Nico Perrino: This is a license to use a certain portion of the broadcast spectrum, correct?

Adam Thierer: That's right. Again, this just applies to broadcast radio and television.

Nico Perrino: Do you know how long those licenses last?

Adam Thierer: Well, seven years is what it used to be. But then they're really perpetual. They basically don't get pulled, but you can use the threat of pulling them, or you can use other sorts of indirect mechanisms, to extract a lot of promises and inflict a lot of pain.

Nico Perrino: Do they automatically renew if the licensee wants it?

Ari Cohn: There has to be a showing of cause to not renew.

Jennifer Huddleston: Yeah.

Adam Thierer: Yeah.

Nico Perrino: Okay.

Adam Thierer: Yeah, it works differently than it used to. And luckily, we don't have our government yank – we're not a third-world republic yet. We basically don't have the government yanking licenses left and right. But they use other instruments to extract promises and concessions.

And getting back to Ari's point, which is that a principled FCC chair would be saying, “Look, we need to attack this regulation at its source and get rid of it. These are relics of a bygone era. And we've always been in favor of downsizing unnecessary regulations.” And what about that whole DOGE effort, right, and getting rid of rules?

“Let's do our part here by saying that rules that were put on the books all these many decades ago," in this case now a full century ago, "do not have the same relevance that they used to in an age of scarcity. We now live in a world of information abundance. The fundamental rationale of this agency has been challenged to its core, and yet we're still regulating like it's 1970.”

Nico Perrino: We've been talking a lot on this podcast about the FCC. You think that's unwarranted?

Adam Thierer: I don't think so because it still continues to put a lot of pressure on media and speech.

Nico Perrino: But has it been doing more in the last two months under the Trump administration than it has previously? I feel like I've been talking about it a lot more, and I just wanna be fair to the topic. Are we blowing this out of proportion, relatively speaking?

Jennifer Huddleston: I think it's the nature of what this podcast discussed and what you're focused on. For a while, there were a lot of discussions around potential Section 230 reform in Congress.

Nico Perrino: Congress, yes.

Jennifer Huddleston: There certainly are debates that impact free speech, particularly online speech when we're thinking about something like the Kids Online Safety Act or things like that in Congress. But a lot of the debates around Section 230, around online content moderation more generally, that we've seen at least since the start of the year have been at the FCC – or I think we're gonna get into what's going on at the FTC as well in a bit – at these agencies, as opposed to in Congress.

There's also some action in the courts. But, again, we saw the NetChoice cases last term around content moderation. We'll see what kinda happens next there. There are some age verification cases, including Free Speech Coalition v. Paxton, that are before the court. But, again, when we're thinking about these kind of broad questions of online content moderation, the government's potential to make significant changes, and violations of the First Amendment there, where we've seen the most recent action has been in this discourse around are the agencies going to act or not, as opposed to is Congress going to try and pass a law.

Ari Cohn: I think what you also have seen before is the FCC has "done" stuff in this area before, but it actually hasn't accomplished anything. And it's kind of just been this perpetual state of limbo on these issues. And now you see Brendan Carr making this push to actually effectuate things instead of just dallying around arguing about it for four years, which is kind of how the FCC worked before. So, it's a little bit more pressing now because there's actually a risk of something happening.

Jennifer Huddleston: And one of those things was the petition under the prior –

Ari Cohn: NTIA, yeah.

Jennifer Huddleston: – the NTIA petition under the prior Trump administration, which I think is why, particularly for folks who were following free speech debates about five or six years ago now and remember when that was a concern and a conversation that was being had under these same questions –

Nico Perrino: What is this petition?

Ari Cohn: The NTIA are the ones who petitioned the FCC to engage in Section 230 rulemaking back in 2020.

Nico Perrino: Gotcha.

Jennifer Huddleston: So, this stemmed from the prior Trump administration, which issued an executive order around Section 230, or social media and content moderation. I apologize, I can't remember the exact name of the executive order or how it was fully framed.

Nico Perrino: No, it's okay.

Ari Cohn: It was a long time ago.

Jennifer Huddleston: Because the FCC is an independent agency, that order actually directed the NTIA to petition the FCC to engage in rulemaking and possibly also advisory opinions. I can't remember.

Ari Cohn: I think so, yeah.

Jennifer Huddleston: Again, we're talking about six years ago. But for the FCC to engage on the Section 230 issue. An initial kind of comment period was held. There was a change in administration. And so –

Nico Perrino: So, now they're picking it back up.

Jennifer Huddleston: – Now I think there's a lot of conversation from those who were engaged in free speech issues during that time period. With some of what we're hearing from, for example, Chair Carr, with some of what was in Project 2025, are we going to see what seemed like it had kind of died down at the FCC, particularly the –

Ari Cohn: But the docket was open.

Jennifer Huddleston: Right.

Ari Cohn: The NTIA petition never was actually closed out, which is why they can just pick it back up.

Jennifer Huddleston: Right. And so, that's been why I think there's been so much attention on some of these comments.

Nico Perrino: Okay. You had teed up the Federal Trade Commission action on some of these issues, so let's look at that. On Thursday, February 20th, Federal Trade Commission Chair Andrew Ferguson posted an X thread. They all post X threads, right. We're all on X these days. Although, X was down this morning, so the world stopped, except for the stock market, which was tanking.

Ferguson posted this, "Big Tech censorship is not just un-American. It is potentially illegal. The Federal Trade Commission wants your help to investigate these potential violations of the law. We are asking for public submissions from anyone who has been a victim of tech censorship." And then he has in parentheticals here, "Banning, demonetization, shadow banning, etc."

So, they're looking for submissions from anyone, including employees of tech platforms or from anyone else who can shed light on these practices and the ways in which they might violate the law. Ari, you responded and said, "But consumer protection law is no talisman against the First Amendment."

Ari Cohn: Well, I also just wanna first flag my absolute favorite thing about this whole fiasco, which is that, No. 1, probably the most censored people on social media are, say, adult film actors and people posting adult content. And the docket allows you to upload attachments. Just a strategic blunder.

Nico Perrino: So, the FTC's getting a bunch of porn right now is what you're saying.

Ari Cohn: Almost certainly. If they're asking for examples, they're getting porn. Just really just a bad idea. Mind blown.

Nico Perrino: But, okay, let's steelman this, right. So, Andrew Ferguson is saying that these tech platforms have policies. Twitter used to say that they're the free speech wing of the free speech party. It's a public forum, what have you. And then you have content moderation happen. Posts are taken down. People see this to be contrary to the terms of service these platforms have. And this is the FTC saying, "Hey, you got stuff taken down in violation of these terms of service. We wanna know that because that's an unfair and deceptive trade practice."

Ari Cohn: But it's not, and for two reasons. One, it's the same argument as the good faith argument. You actually have to sit here and sit in judgement of somebody's just subjective editorial judgements, and there's no real way to do that that's consistent with the First Amendment. But it's also just a standard concept in consumer protection law that consumer protection law protects the reasonable expectations of the consumer.

But with terms that are not really subject to a concrete and pretty definite definition, you can't just base a UDAP or some kind of competition claim on a piece of what's basically puffery, or something that's so malleable and subjective as to not actually give the consumer a reasonable expectation that what they think that term means is actually what it means. And when you get to things like hate speech or abuse or harassment, there is just nothing about those terms that is narrowly definable such that it actually gives a consumer an expectation of exactly what that term means.

Jennifer Huddleston: Speaking of terms and what they mean, can we talk about the use of the word censorship in this conversation?

Ari Cohn: Let's.

Jennifer Huddleston: I think it's very concerning that we call what are content moderation decisions by private actors censorship, when the likely result of government action would be actual censorship of both private platforms and individuals, on the basis of a government decision about what speech must stay up or must be taken down.

Ari Cohn: Well, and the hilarious part is how willing everyone is to misuse the term Orwellian.

Nico Perrino: But this speaks to the decision in Moody v. NetChoice that the Supreme Court handed down last year. In that decision, they write, “The reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content and wants them to create a different expressive product communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.”

So, getting back to your earlier point, Ari, what you're saying is that you have these terms of service. They ban things like harassing, discriminatory, hate speech, or whatever it is.

Ari Cohn: Yeah.

Nico Perrino: You're saying that for a regulator to look at whether any of these terms are being applied deceptively or unfairly, it has to define those terms, which the government cannot do because that would be prohibited by the First Amendment.

Ari Cohn: Right. Take for example the phrase “From the river to the sea, Palestine will be free,” a source of huge controversy. Some people interpret that as a call for ethnically cleansing Jews from Israel and getting rid of the state of Israel. Some people view that as a statement, a slogan, in support of Palestinian independence. And then you have –

Nico Perrino: Or the reunification of the British mandate, I guess.

Ari Cohn: Right. Yeah, right. How do you judge whether or not that is consistent with, say, somebody on Twitter saying America should be a White ethno-state? How are you to judge the consistency of applications of content policy between those two things? Okay. So, a platform thinks this one is too hateful, but this one is not too hateful. But how do you judge that? You can't.

Nico Perrino: It's like what happened to the college presidents in front of Congress over the calls for Jewish genocide. They were talking about "From the river to the sea, Palestine will be free."

Ari Cohn: Yes.

Nico Perrino: Is that a call for genocide? Is that a call for a political solution to the Israeli-Palestinian – yeah?

Ari Cohn: Zero-win.

Jennifer Huddleston: So, I can give you an even more benign example.

Nico Perrino: Oh, please.

Ari Cohn: I'm not good at benign.

Jennifer Huddleston: One of my favorite content moderation examples to use is this subreddit that is just pictures of cats standing up. So everyone's with me here: the only thing we're allowing, our only content moderation rule, is cats standing up. If we are moderating in good faith, does a cat stand up on two legs or four?

Nico Perrino: Ooh. What do you think, Adam? Do you have any cats?

Jennifer Huddleston: So, to me, this is the example of like –

Adam Thierer: Oh, too early in the morning [inaudible – crosstalk] [00:45:36].

Jennifer Huddleston: You have obviously very controversial, political, philosophical questions. But you also have very often simple questions that just have to have a content moderator make a decision. And one platform may say both of these, both two and four legs, are allowed. One may say, "We believe cats stand up on two legs, and we're taking down all the four-legged cats." One may say, "We believe that cats stand up on four legs and that a lion constitutes a cat, and we're gonna have a very broad interpretation."

Ari Cohn: If it's standing up on its front two legs, is it a handstand?

Jennifer Huddleston: Front two legs, is it a handstand?

Ari Cohn: Right.

Jennifer Huddleston: You know, things like that. This is a very straightforward, benign content moderation rule, but it's very easy to see how decisions have to be made, and that those decisions that may be different from perhaps what – I'm guessing, with the four of us, we would come up with slightly different interpretations of what a cat-standing-up rule looks like.

Nico Perrino: I don't go anywhere where cats are involved.

Adam Thierer: We can tie this all together with a simple line, which is that the editorial decisions of a private platform are not unfair and deceptive practices for purposes of FTC regulation. When we're talking about decisions about content moderation or just basic editorial discretion, unless there's some truly deceptive act going on by the actual speaker or the platform itself, then everything else after that is just speech. It's protected. And the FTC's way out of bounds here.

Nico Perrino: So, what might be an unfair or deceptive trade practice, like X saying your post got 500,000 impressions when it really only got 10?

Ari Cohn: Maybe in the case where you're monetized, and that makes a difference to your bottom line.

Nico Perrino: Yeah, okay.

Jennifer Huddleston: And if I can just add something here because there's a sixth question in this RFI that really –

Ari Cohn: Does it have to do with cats?

Jennifer Huddleston: – doesn't have anything to do with cats, but Nico's gonna be mad because I'm gonna bring in another area of law again. It has to do with antitrust. And that last question in that RFI asked if content moderation occurred because of a lack of competition in this space. Which, if I can just say, not only is content moderation not an unfair and deceptive trade practices violation, it's also not an antitrust violation. And if anything, antitrust intervention could make a lot of these debates about content moderation even more difficult and could result in platforms having even fewer tools to respond to the types of harmful content – spam, animal cruelty, etc.

Nico Perrino: Beheading videos.

Jennifer Huddleston: Yeah.

Ari Cohn: Ironically, it would actually also make their decisions more in line with each other.

Jennifer Huddleston: Each other, yeah, as they have to deal with it.

Nico Perrino: Ironically, also, the content moderation decisions of many of these platforms are what led to the creation of new platforms –

Jennifer Huddleston: Right.

Nico Perrino: – such as Rumble and Truth Social.

Jennifer Huddleston: And if you like competition in social media, you should love Section 230 because it keeps that barrier to hosting user-generated content low. So, I just wanna highlight that for perhaps any other antitrust wonks out there.

Ari Cohn: I like that.

Nico Perrino: What is a conversation about tech policy without a conversation about artificial intelligence, right? So, let's close out there today.

Adam, you caught my attention because, on Sunday, February 23rd, you posted another X thread. X is just my news editor. I'm sorry. You posted, "AI regulatory activity is completely out of control in the United States. We have already seen over 700 bills introduced less than two months into 2025. That's around 13 per day." Why is this out of control, and why is that a problem?

Adam Thierer: Well, unfortunately, first of all, here we are a couple of weeks after that, and we're now up to 800 bills.

Nico Perrino: Okay. You're the guy who keeps track.

Adam Thierer: So, a dozen-plus bills a day. And the vast majority of these are state regulatory measures. In my 34 years doing tech policy, I've never seen any tech – not just any tech issue, but any policy issue period – where within the first 70 days of a year you see 800 bills on the topic introduced. That's an unprecedented level of interest in an issue.

And, of course, it's because for so many different people and so many different interests, AI is the ultimate bogeyman issue. It's coming after their jobs. It's coming after their content. It's coming after them, whatever. It scares people, and this has led to fear-based policies leading the way, as opposed to the internet where we led with freedom-based policies.

And so, as a nation, we're about to see a reversal of the internet and digital economy and speech vision that we had, which was one of freedom, of permissionless innovation, of embracing, you know, the fear of the unknown was fine. We're gonna go into it and take that leap of faith. And we came out better for it. But now a lot of people say, “Oh, but, boy, we should've thought that through more. We don't want what happened to the internet to happen to AI.” I'm like, what, you think we should've –

Ari Cohn: Why not?

Jennifer Huddleston: Right.

Adam Thierer: What was so bad about that? I mean, we had all of these wonderful new choices and options and competition. And, oh, by the way, would you have wanted it all licensed by design, locked down, prior restraint on everything on the internet before we allowed it into the wild? That's called the European model. And we have a really good experience from the past quarter century of how badly that has derailed innovation and speech on the other side of the Atlantic.

And here in America, by contrast, we've led. And our household names in the world of tech are Europe's household names in tech. And what was the greatest irony of all is that Europeans regulated aggressively on tech to try to go after large technology companies, and all they have left to show for it is large technology companies. They're the only ones with the compliance teams and the pockets to litigate and deal with all of the regulation.

And so, we're seeing an unfortunate moment for technology policy in the United States, with very few people being defenders of the old vision. And now we're sort of holding on to past gains and praying that we don't lose ground on things like Section 230 and, more broadly, just economic freedom and permissionless innovation in AI.

Nico Perrino: Can I try to pick a fight amongst you guys here, because I've talked to you about this in the past, Ari? Let's go back to the 1990s. You said we avoided fear-based regulation in favor of freedom-based regulation. I'm not so sure of that history when you look at, for example, the CDA, the Communications Decency Act. I just got done reading Mike Godwin's book Cyber Rights, where he talks about the cyber porn panic – what was it? – summer of 1995, –

Adam Thierer: Yeah.

Nico Perrino: – that led to the CDA essentially passing without any dissent.

Adam Thierer: Yeah.

Nico Perrino: And he says that was actually a good thing, because the law was so poorly written, as a result of not having debate over it, that it was comprehensively struck down on First Amendment grounds in the courts. And the law also allowed for immediate appeal directly to the Supreme Court. So, this all happened in quick succession.

Now, Adam, I know you're a believer in a national framework for artificial intelligence regulation. Ari, I know you are very fearful of a national framework because you're worried that maybe we'll get something like the CDA. But if that goes to the courts, drawing on the CDA comparison, wouldn't that help set up a really good challenge that will have implications across the country and will forestall some of this AI regulation? I know we got more CDAs after that too, but still.

Ari Cohn: So, what I actually worry about – and this is true in many different cases – is a law, whether it's state or federal, that is just good enough to make challenging it very difficult. That's the nightmare situation.

Nico Perrino: Okay. So, not like the CDA, which regulated indecency.

Ari Cohn: Yeah, the CDA was – honestly, I agree with Godwin on this. We got a really great, strong precedent that has guided First Amendment law on the internet for decades.

Nico Perrino: He later wrote that he thought he could just retire, his job having been done so completely by the Supreme Court.

Ari Cohn: But the thing that I worry about – I'm of many minds on this. I worry about 50 states passing 50 different laws, making it literally impossible to operate in the United States. That is very possible. I also don't trust Congress to get it right. If I had my druthers, we would stop trying to anticipate problems before we know that they actually exist, and stop shoehorning regulatory solutions onto things when we aren't even sure how they would manifest, because we're just gonna screw it up more.

Jennifer Huddleston: And we should also quit presuming things are new problems.

Ari Cohn: Yeah.

Jennifer Huddleston: That's one of the concerns I have when I look at some of the amount of action around both AI in the state legislature as well as some of the bills we've seen at a national level. There certainly are debates to be had over what a broad AI framework should look like. I'm an optimist. I still think we have the opportunity to get it right. And Adam is about to –

Nico Perrino: When you say "we," do you mean Congress?

Jennifer Huddleston: I mean we as an American people –

Nico Perrino: Okay.

Jennifer Huddleston: – have the opportunity to get this right and have that '96-esque moment with something that really reaffirms our belief in innovation and the free market. And I'm getting looks from both Ari and Adam that indicate I might be alone.

Ari Cohn: You have a lot of faith.

Jennifer Huddleston: But I wanna say, one of the things that I think we certainly should talk about on this podcast are some of the AI proposals. While these broad frameworks often have implications for speech, there are ones that are more direct attacks on free speech. Particularly in the election context, we've seen a lot of this at both a state and a federal level, stemming from some of the debates around online election content, but then taking it further in ways that could have eliminated political parody, or could have eliminated useful tools for a candidate reaching their audience, like auto-generated captions in a different language.

Ari Cohn: You know, it's funny you say that, because something always struck me from a couple years ago, when I was participating in the Senate AI Insight Forums on elections. Right before that session happened, there was a moment where Eric Adams had deepfaked his voice speaking other languages to constituents, and people were aghast. Like, oh, my god, how dare you mislead people into thinking you're a polyglot.

And I'm sitting here thinking, actually, if I'm a person who doesn't speak English as my first language and I get a call from the mayor in his voice – not a translator's voice – speaking a message in my language, the amount of inclusion in the political system that I feel, the amount of belonging I feel, which then probably translates into participation, significantly outweighs any deception in thinking Eric Adams speaks languages that he doesn't. I mean, come on.

Adam Thierer: I wanna get back to first principles here with regard to regulating AI. First of all, Jennifer's points are good. We already have a lot of different forms of regulation for AI and the various harms that people identify with so-called algorithmic discrimination and bias, things like that. We have civil rights laws. We have unfair and deceptive practices enforcement. We have other mechanisms.

But there's a whole new class of regulatory proposals out there, especially at the state level, on numerous grounds – it's essentially a war on computation. And the problem here is that we gotta talk about what we mean by interstate algorithmic speech and commerce. I mean, at some level, these algorithms and code don't just stop at state borders –

Jennifer Huddleston: No.

Adam Thierer: – and say, "Is this safe? Is this protected here? Can I go to the next state, to the next?"

In the 1990s, whether we got this right by design or by accident is kind of irrelevant at this point. But the bottom line is we did get it right. We had the Framework for Global Electronic Commerce, and we had a Telecommunications Act that generally treated the internet and digital commerce more like a national resource or national platform and, generally speaking, moved towards a world of more forbearance, except for a couple of notable things like the obscenity issues we talked about. We also had the encryption wars, right?

Nico Perrino: Yeah.

Adam Thierer: That was terrible, but we won. It's not to say it's all gonna be good, but we do need to think about what makes sense constitutionally speaking. And practically speaking, we've gotta think about what this means for free speech more broadly. Last year at this time, Greg testified – Greg of FIRE –

Nico Perrino: Yeah, Greg Lukianoff.

Adam Thierer: – testified at a great hearing on the concerns about weaponization of AI. And Greg said that the most chilling – this is a quote from Greg at that hearing – "The most chilling threat that the government poses in the context of the emerging AI is regulatory overreach that limits its potential as a tool for contributing to human knowledge."

And I pointed out in a subsequent piece praising Greg's testimony that this is the next great technology of freedom, to borrow a phrase from an old book that inspired me by the political scientist Ithiel de Sola Pool, who wrote Technologies of Freedom in 1983 and talked about how we had to get the prerequisites of free speech and free commerce right and set up basic frameworks – frameworks that were rooted in forbearance.

And this is the tricky part when it comes to communications technology. We do need a national framework of some sort, but it needs to be light touch, market-oriented, free speech embracing. Are we gonna get all those things? Probably not. I think if we reopen Section 230 in any way, shape, or form in this modern Congress, it would be gutted.

Ari Cohn: Yeah.

Adam Thierer: Right. And I'm concerned about creating a national framework where we overcome the misguided patchwork of hundreds of state rules only to get one big, worse rule up here. This is a real risk we face in every context. But it doesn't mean we don't need anything. We're gonna need something, because we already have a lot of existing rules and regulations. Far from being an unregulated Wild West, AI is already very aggressively regulated by agencies like the FDA, the FAA, the FTC, NIST – all sorts of agencies. If anything, we need some deregulation and some protections. I don't know if we'll get them.

As Jennifer knows, in my last two books, I said that the greatest protection of online free speech and online commerce has nothing to do with the law; it's technology itself. It's the so-called pacing problem, or what I call the pacing benefit, which is that technology races ahead very fast, evolving at least linearly, sometimes exponentially, and policy evolves incrementally at best. And the gap between them is called the pacing problem. But one person's pacing problem is another person's pacing benefit. The fact that technology evolves this fast is the ultimate check and balance, in my opinion, on regulatory stupidity.

And so, maybe Congress just sits there and dilly-dallies, constantly debating yesterday's issue even as a new one comes along, and they have to shift. But the problem is, if the states are accumulating all of these additional regulations while Congress is doing that, then you've got multiple layers of controls on both speech and commerce that are hugely problematic for what is legitimately, truly interstate speech and commerce. So, I do think a national framework is needed.

Nico Perrino: I wanted to ask about Greg's testimony and the quote that you bring up there. When you're talking about the rise of the internet and speakers on the internet and the Communications Decency Act, which regulates indecency – which isn't a category of speech excepted from the First Amendment; I know there's a whole debate around that, but we'll put that to the side – you're talking about speakers. You're talking about Ari Cohn on a chat forum or whatnot.

Ari Cohn: God forbid.

Nico Perrino: You're talking about me speaking to my however many followers on X. With artificial intelligence, the speaker is an artificial intelligence. Now, there's a human who creates the code that does the reasoning and the learning, but they don't necessarily know what's gonna get spit out by the algorithm. And I think what Greg's speaking to in that quote you cited, Adam, is the other part of the First Amendment that's maybe less developed: the right to access information. It's the right to learn. It's the access to knowledge production.

Adam Thierer: That's right.

Nico Perrino: And artificial intelligence produces knowledge. It produces information. And what does the First Amendment say about our freedom to access that information?

Adam Thierer: Well, Greg had an answer. He said, "It offers even greater liberating potential empowered by First Amendment principles, including freedom to code, academic freedom, and freedom of inquiry," and, "We are on the threshold of a revolution in the creation and discovery of knowledge."

This is a legitimate information revolution. And it is about not just our ability to speak, but, as you said, our ability to consume and receive more information. That's a hugely powerful and important thing for us to be defending, right? And so, there are multiple layers of First Amendment questions that we need to unpack here. A lot of them will be worked out in the courts later. But as a threshold matter, we should hope our policymakers can wrap their heads around why it's important that America once again embrace this, especially with autocratic nations like China looking to lead the world and make their technologies dominant over ours.

I just recently wrote a piece in response to the so-called DeepSeek moment – China releasing a very powerful new open-source AI model – and said, "Take a look at what China's trying to do to counter America in one country after another around the world with their 'Digital Silk Road' project, which is to impose their autocratic values and top-down totalitarian vision through technological design choices." So, this is where our free speech and our ability to push our model out to the world is really powerful. Some may think it's elitist, but I don't care. This is about the power of speech.

Nico Perrino: Does our model win out in that big open encounter?

Adam Thierer: Well, look at what Hollywood did in the analog age. People like to complain about it a lot, but the reality is that a lot of people across the world became familiar with American culture and values through the export of information and entertainment, right? And so, this has important ramifications for AI going forward. We want our systems, our technologies, and a lot of our – yes – Western values embedded in these technological systems and exported to the world.

This, to me, is the most disturbing thing about the whole new MAGA movement: this idea of becoming more contained, closing ourselves off from the world, and decoupling. No, we want to engage. We wanna make sure our systems, our values, our technologies, and especially our information systems lead.

Nico Perrino: Jennifer, Ari, some final words on this?

Ari Cohn: I love listening to Adam talk about – I'm literally gonna let him have the last word.

Jennifer Huddleston: Yeah. Honestly, that's such a perfect moment to end on. I think that has to be the last word.

Nico Perrino: All right, folks. Well, this has been fun. On the next podcast, listeners, I promise we'll get out of the tech space and hopefully look at some of the other news that's breaking across the country. We've got some developments at Columbia and with Immigration and Customs Enforcement and DHS. There's just too much news right now, honestly, folks. I don't know how I'm gonna get a weekend over the next four years, but we will try.

Ari Cohn, Lead Counsel, Tech Policy, here at FIRE. Thank you for coming on the show. Adam Thierer, Resident Senior Fellow, Technology and Innovation, at the R Street Institute. And Jennifer Huddleston, Senior Fellow in Technology Policy at the Cato Institute. Folks, I appreciate you joining me.

Adam Thierer: Thank you so much.

Jennifer Huddleston: Thank you.

Ari Cohn: Thanks for having us.

Nico Perrino: I am Nico Perrino, and this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Sam Li, Aaron Reece, and Chris Maltby. It's also co-produced by my colleague Sam Li. To learn more about So to Speak, you can subscribe to our YouTube channel or Substack page, both of which feature video versions of this conversation. You can also follow us on X by searching for the handle freespeechtalk. We often post clips from these conversations on that X channel.

If you have feedback, you can send it to sotospeak@thefire.org. Again, that's sotospeak@thefire.org. And if you enjoyed this episode, please, please, please consider leaving us a review on Apple Podcasts or Spotify. Those reviews help us attract new listeners to the show. And until next time, I thank you all again for listening.
