What Is Section 230 and Why Should I Care?
Every day, millions of users log on to social media platforms and scroll through other users’ posts. Chances are those users will see something they do not like.
When social media users find content that they consider distasteful or even harmful, what can they do? Who is responsible?
In short, content that is distasteful or disagreeable is protected from government interference by the First Amendment. Users can make their own choices about posting it, and platforms can make their own choices about allowing or banning it. Content that is illegal or is in a category of speech that is not protected by the First Amendment can result in government punishment, but typically only for the poster, not for the platform.
This article explores how a federal law commonly called Section 230 and the First Amendment both influence who is responsible for potentially harmful content online.
What is Section 230?
Section 230 is shorthand for Section 230 of the Communications Decency Act, codified at 47 U.S.C. § 230 — a federal law enacted in 1996 to protect websites and social media platforms from responsibility for content that their users post. It is considered one of the most consequential laws governing speech on the internet. It says, in part:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This means that a website is protected from most types of lawsuits over content that its users post. This federal law applies to any website that features user-generated content: social media platforms like Facebook, YouTube, Yelp, and X, as well as message boards and sites with comments sections where users can post.
For example, if a user posts a false statement that harms someone’s reputation, the website or social media platform is not liable for defamation. Only the user who posted the statement can be sued, not the site. (Note that other users who share or repost defamatory posts may also be liable.)
This protection applies whether the website or social media platform leaves the offensive post up or takes it down, as long as it does not edit the content in any way.
For example, if someone notifies Facebook about a post that makes false and defamatory statements about them, Facebook can either take the post down or leave it up — untouched — without fear of losing a defamation lawsuit. Section 230 protects Facebook in this scenario. Any lawsuit must be filed against the original poster.
Section 230 is most often invoked by websites and social media platforms that are sued for defamation, but it also protects them from lawsuits over invasion of privacy, fraud, discrimination, and negligence (for instance, when someone is injured copying a video stunt they see on TikTok or Instagram).
There are certain exceptions. Section 230 does not protect a website when users post content that violates someone’s trademarks or copyrights (though the Digital Millennium Copyright Act offers immunity for websites in copyright cases if they follow certain procedures). Section 230 does not protect websites when user content violates federal criminal laws such as those that criminalize distribution of obscene content, money laundering, mail fraud, kidnapping for ransom, and identity theft committed to further another federal crime or felony.
Depending on their role, platforms may sometimes be liable for content involving sex trafficking and child sexual abuse. A 2018 law called FOSTA amended Section 230 so that sites can be liable for such content if it violates a federal sex trafficking law that holds beneficiaries of trafficking liable. A beneficiary, according to the courts, is someone who assists, supports, or facilitates the trafficking.
How does Section 230 relate to the First Amendment and free speech?
Social media platforms are private companies that are not bound by the First Amendment. They can delete user-generated content without violating users’ First Amendment rights. The First Amendment prohibits the government, not private people or companies, from infringing on speech.
Social media platforms have their own First Amendment rights. The First Amendment protects their right to moderate the content people post on their websites. The government cannot tell social media sites how to moderate content any more than it can tell a social media user what to say or not say. In reviewing Florida and Texas laws that sought to regulate how social media companies moderate content, the Supreme Court held in its 2023-2024 term that the First Amendment protects content moderation by social media platforms.
The court treated social media platforms and websites like newspapers, magazines and book publishers, which are selective in what they accept for publication. Typically, such a publisher is aware of the content of each article or book it puts out and bears responsibility for any unprotected or illegal content in it.
Section 230 recognizes that, unlike book or newspaper publishers, social media platforms host high volumes of user-generated content that they never actually see or review. The law states that they are not responsible for that content, even when the content is not protected by the First Amendment. If a user publishes illegal content on a social media platform, the user, not the platform, would typically be held responsible. The platform is responsible only for its own content.
How are internet users impacted by Section 230?
Section 230 means that if people believe that they have been harmed by content on the internet, their recourse is to sue the poster of the content, not the platform. For example, eBay is not liable if an item for sale by a user is fake; a ticket resale site is not responsible for a counterfeit ticket sold on it by a user.
Google is not responsible for misleading websites that appear in its search results. Dating apps are not liable for fake or impersonation profiles.
Social media platforms and websites may state in their own terms of service or company policies how they will treat these types of content, but the government cannot require them to take action.
Many, but not all, platforms have policies stating that they will remove content under a court order. But apart from content that violates copyright, they still cannot face liability if they do not do so, according to a California Supreme Court decision.
Has the Supreme Court ruled on Section 230?
In 2023, the family of a person who was killed in a terrorist attack tried to sue YouTube for not removing terrorist content that the family said contributed to the attack.
They argued that Section 230 shouldn’t protect sites like YouTube from liability. The family said the Google-owned platform’s recommendation algorithm converts the site from a protected “interactive computer service” into a publisher.
The Supreme Court heard the case but did not reach the question of whether Section 230 applied, because the justices unanimously held that YouTube did not aid and abet international terrorism, as the family claimed.
The court said that platforms are not liable under that law because their recommendation algorithms treat terrorist content the same as all other content, and platforms are not required to speak out and object to content with objectionable views.
In July 2024, the Supreme Court declined to accept another case, this one involving Snapchat, that tried to challenge Section 230.
Why was Section 230 created?
In the early days of the internet, courts and lawmakers weren’t sure how to handle the new types of businesses and communications being created.
In two key cases in the 1990s, courts ruled that sites that simply hosted users’ content without any moderation had no liability for that content. On the other hand, courts found computer services liable for users’ content if they moderated the content in any way. One early internet company, Prodigy, used filtering software to remove user comments containing profanity. When a user posted false claims about a financial company on the Prodigy platform, the financial company sued Prodigy for defamation and won. The company “lost its protection as a distributor and gained liability as a publisher because it had tried to remove objectionable material but had done so incompletely,” according to Danielle Citron and Benjamin Wittes in the Fordham Law Review.
Lawmakers noticed this case and predicted that it would discourage internet companies from trying to moderate content at all, since companies that tried could lose so much money in lawsuits that they would not be profitable.
“If you were going to hold all of these platforms just as responsible as the authors, ultimately you probably would have fewer venues for user-created speech because there’s just not much incentive to take on the liability for that content. A lot of platforms might have been sued out of existence a long time ago.” – Jeff Kosseff, cybersecurity law professor and author of “The Twenty-Six Words That Created the Internet,” in a video interview with Freedom Forum
Section 230 aimed to prevent this.
Lawmakers reasoned that if platforms were not legally liable for users’ content, they would be more willing to moderate it. They could moderate without worrying about doing a perfect job, which would not be feasible, because any content that slipped through their efforts wouldn’t result in an expensive lawsuit.
Section 230 applies even to platforms that take no action to moderate content, and to platforms that may encourage illegal content.
But the law also specifically protects platforms’ good faith efforts to remove illegal content. It aimed to encourage providers to remove harmful content by shielding from liability those that tried to do so, even when their moderation missed some of that content.
From the Communications Decency Act, only Section 230 remains
Section 230 was enacted in 1996 as part of the Communications Decency Act, which itself was part of the broader Telecommunications Act of 1996. The Supreme Court struck down most of the Communications Decency Act, which aimed to regulate indecent content online, ruling that it restricted too much free speech on the internet, not only obscene content. Section 230 is the only provision that remains in effect.
Common misconceptions about Section 230
Both how platforms moderate content and the role of Section 230 have been criticized across the political spectrum, fueled by disagreement over whether there should be more moderation or less moderation of content online.
Some social media users and lawmakers say that Section 230 protects content they do not want to see on the internet, such as hate speech, misinformation or political views with which they disagree.
But much of this type of content — including what users post and how the platforms choose to moderate it — is protected from government regulation by the First Amendment, not by Section 230.
Users or lawmakers may object to content online and say companies bear too little responsibility for removing objectionable content. But unless content meets the high bar for one of the narrow categories of speech the First Amendment does not protect, the First Amendment protects companies’ right to host it.
Others argue that Section 230 gives tech companies too much power to restrict what appears on their sites, such as allowing companies to favor certain views and disfavor others. But it is the First Amendment, not Section 230, that protects companies’ free speech right to restrict or allow different viewpoints.
Arguments and challenges around Section 230
Even though the First Amendment itself largely protects internet platforms’ content moderation policies, some questions remain around Section 230.
Critics of the law propose various modifications to limit platforms’ immunity from liability in some cases, including:
- Excluding platforms that encourage or enable illegal content from Section 230’s protections.
- Including in Section 230’s protections only platforms that make a good faith effort to address illegal content, particularly because it can be difficult to find and recover damages from users who use technology to conceal their identities while making harmful statements.
- Applying Section 230’s protections only to unmonetized content.
- Excluding immunity for certain types of illegal content, such as child sexual exploitation and illegal drug sales.
Proponents argue that without Section 230, websites could be overwhelmed by lawsuits over users’ content and could remove much more content, even content that is not objectionable. At that point, they would no longer serve their purpose.
The Supreme Court in two cases in 2023 suggested any changes to the law should come from Congress, and noted that if the law were totally revoked, “the Internet will be sunk.”
This report is compiled based on previously published Freedom Forum content and with the input of Freedom Forum experts including First Amendment Specialist Kevin Goldberg. The editor is Karen Hansen.