
Your guide to Section 230, the law that safeguards free speech on the internet

This is the first installment in a two-part series on Section 230. This part delves into Section 230’s text, purpose, and history. Part 2 will focus on Section 230’s importance to free speech on the internet and address some common misconceptions about the law.


Imagine you want to start an online platform where thousands or even millions of people can freely discuss topics that interest them. The only catch: You can be held legally responsible for everything they say. Anytime a user posts something potentially defamatory (or otherwise injurious to someone), you risk facing a lawsuit. Do you still want to create your platform? And, if you do, will you be able to resist aggressively supervising and limiting users’ speech to avoid liability? 

This was the problem threatening to stall the internet’s rapid growth as a free speech engine in the 1990s. Congress responded by including in the Communications Decency Act a provision now widely known as Section 230. The law gives “interactive computer services” — including discussion forums, hosted blogs, video platforms, social media networks, crowdfunding sites, and many other services that host third-party speech — broad immunity from liability for what their users say. It similarly protects individual internet users who share third-party content by, for example, retweeting someone or reposting an Instagram video. Section 230 also grants the services broad immunity from liability for declining to host, publish, or platform third-party speech.

By the time Congress passed the law in 1996, tens of millions of people were already using the internet. Today, that number is in the billions and includes over 90% of Americans. The immense volume of activity on the internet makes it virtually impossible for digital platforms to review everything their users post and determine whether it violates the law. To avoid a barrage of lawsuits, including suits they would ultimately win but only at significant cost, many platforms would ramp up their content moderation or stop hosting user-generated content altogether.

Section 230’s immunity shield solves this problem. It’s widely credited with enabling digital platforms — and free speech — to flourish on the internet. These platforms empower us to exchange ideas and information with people all over the world. The average person’s voice has a reach that was unthinkable a century ago. Content creators can connect and interact with wider audiences. As the Electronic Frontier Foundation said, “The free and open internet as we know it couldn’t exist without Section 230.”

The impetus for Section 230

Section 230 arose in reaction to a pair of 1990s court decisions that showed how existing law sowed uncertainty regarding platforms’ liability for third-party speech. These decisions created a serious risk that platforms would face scores of lawsuits and massive potential liability for their users’ speech, chilling expression that might otherwise have flourished on the internet.

The first was Cubby, Inc. v. CompuServe Inc. Early internet giant CompuServe provided access to 150 different discussion forums. The plaintiffs in Cubby claimed one of those forums hosted defamatory content, but a federal court held CompuServe could not be held liable for it. The court likened CompuServe to newsstands and libraries, which are not liable for defamatory statements of which they are unaware in publications they carry.

The court in Cubby applied that same rationale to CompuServe’s vast online forums. CompuServe did not review or exercise any editorial control over the forums’ content, and the court held that requiring the company to do so “would impose an undue burden on the free flow of information.” 

A few years later, a New York state court reached a different result in Stratton Oakmont, Inc. v. Prodigy Services Co. Prodigy was another early online content-hosting service. In 1994, an anonymous user wrote on Prodigy’s message board, “Money Talk,” that the brokerage firm Stratton Oakmont had committed criminal acts in connection with a stock’s initial public offering. The user called the firm “a cult of brokers who either lie for a living or get fired.” (Stratton Oakmont’s founder Jordan Belfort, the subject of Martin Scorsese’s 2013 film “The Wolf of Wall Street,” was later convicted of securities fraud.) Stratton Oakmont sued the user and Prodigy for libel.

Like CompuServe, Prodigy could not possibly review the enormous amount of content posted on its message boards each day. Even at this early stage of the internet’s development, Prodigy’s message boards saw about 60,000 messages daily. But the New York court distinguished Cubby and held that Prodigy could be held liable for the “Money Talk” posts as if it had written them itself. The court reasoned that because Prodigy, unlike CompuServe, exercised editorial control over its message boards by moderating their content, it was more like a newspaper than a newsstand.

The upshot of these cases was the threat that if a digital platform did any content moderation, it would be legally responsible for all speech on the platform. They also suggested that even a platform like CompuServe, which took a completely hands-off approach, could face liability for any content brought to its attention. Given the massive amount of content posted on many platforms, that would mean investigating a huge number of complaints and making quick decisions about whether to take down challenged posts or risk a lawsuit.

The bad incentives facing online platforms threatened freedom of speech on the internet and caught the attention of then-Reps. Chris Cox and Ron Wyden, who proposed a bill that would eventually become Section 230.

What Section 230 does

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Those are what legal scholar Jeff Kosseff describes as “the twenty-six words that created the internet.” They come from Section 230(c)(1).

The statute defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” That includes websites, internet service providers, and web hosting services.

It defines an “information content provider” as a person or entity responsible “for the creation or development of information provided through the Internet or any other interactive computer service.” For instance, someone who writes a tweet and posts it on Twitter is acting as an information content provider.  

Section 230 distinguishes the act of creating content from the act of hosting or distributing that content. The idea is to place responsibility for speech on the speaker. 

The first federal appeals court decision to interpret Section 230 confirmed that the law’s plain language broadly immunizes platforms from liability for user-generated content. In 1997’s Zeran v. America Online, the U.S. Court of Appeals for the Fourth Circuit explained that the “specter of tort liability in an area of such prolific speech would have an obvious chilling effect,” causing online services and websites to “severely restrict the number and type of messages posted.” In the decades since Zeran was decided, courts have almost uniformly adopted and followed its interpretation of Section 230.

Section 230 doesn’t immunize online services and websites from liability for all third-party content they host. It doesn’t, for instance, protect them from intellectual property claims (which may fall under other immunity regimes), supplant communications privacy laws, or affect the enforcement of federal criminal law. Nor does it protect platforms from liability for content they create or to which they contribute.

Say, for example, you run a blog with a comments section. Section 230 wouldn’t protect the content of the blog posts that you write. But it would protect you from liability for the comments left by your readers. (If, on the other hand, you substantially edited a reader’s comment and changed its meaning in a way that made it unlawful or legally actionable, you could lose Section 230 immunity for that comment.)

Section 230 additionally protects platforms’ freedom to curate the content they host, restricting or removing material as they see fit. Subsection (c)(2) states:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

This provision immunizes service providers and users from liability for taking down content they consider objectionable. It reinforces websites’ First Amendment right to exercise editorial discretion over what appears on their private platforms, and it promotes the development of a wide variety of online forums and communities. As FIRE explained in a recent statement on free speech and social media, the First Amendment protects both a private actor publishing speech the government wishes to suppress and a private actor refusing to publish speech the government wants platformed.

Together these statutory provisions express Congress’s intent to preserve the internet as “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.” Section 230 sought to promote the development of tools that give users greater control over the content they view — a preferable alternative to government censorship. The law’s drafters recognized that an internet unfettered by government regulation and the threat of liability for hosting or refusing to host user content offers the best chance for free speech to thrive online.

Almost three decades later, Section 230 remains critical to preserving free speech on the internet. Part 2 of this series will explore the law’s relationship to online free expression in greater detail.
