| Section 230 | |
|---|---|
| Name | Section 230 |
| Enacted | 1996 |
| Statute | Communications Decency Act |
| Citation | 47 U.S.C. § 230 |
| Jurisdiction | United States |
| Related legislation | Communications Decency Act, Telecommunications Act of 1996, Computer Fraud and Abuse Act |
# Section 230
Section 230 is a provision of the Communications Decency Act of 1996, codified at 47 U.S.C. § 230, that shields providers of interactive computer services from liability for third-party content they host or republish. It emerged from 1990s debates over online speech, advertising, and intermediary liability that involved services such as AOL, CompuServe, Microsoft, and Netscape, as well as lawmakers in the United States Congress. The provision has been central to disputes involving major platforms, including Google, Facebook, Twitter, YouTube, and Reddit, and to legal contests reaching the Supreme Court of the United States.
Congress enacted the Telecommunications Act of 1996 amid debates involving stakeholders such as AT&T, the regional Bell operating companies, and trade associations representing online services. While Senator James Exon pressed the indecency restrictions that became other parts of the Communications Decency Act, Representatives Chris Cox and Ron Wyden drafted the provision that became Section 230 in response to conflicting court decisions on intermediary liability: Cubby, Inc. v. CompuServe Inc., which declined to hold a passive distributor liable, and Stratton Oakmont, Inc. v. Prodigy Services Co., which treated a moderating service as a publisher. Early legislative proposals paralleled hearings featuring testimony from executives at CompuServe, consumer advocates aligned with the Electronic Frontier Foundation, and scholars from institutions such as Harvard University and Stanford University. The result was a short statutory text intended to limit civil liability for platforms hosting third-party speech while allowing them to engage in content moderation.
The statute contains concise operative clauses that: (1) provide that interactive computer service providers shall not be treated as publishers or speakers of third-party content, (2) create a safe harbor from civil liability for such content, and (3) protect good-faith removal or moderation of objectionable material without forfeiting immunity. The provision interacts with the regulatory remit of the Federal Communications Commission and with federal tort law, as applied by courts in cases involving defendants such as MySpace, Backpage, and the service provider in Dyroff v. Ultimate Software Group, Inc. Key statutory terms, including "interactive computer service", "information content provider", and "good faith", have been focal points of judicial interpretation by courts including the United States Court of Appeals for the Ninth Circuit and the Supreme Court of the United States.
Judicial construction has developed through seminal decisions such as Zeran v. America Online, Inc., which established broad immunity, and Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, which distinguished publisher conduct from content-provider conduct. Later appellate rulings, including Doe v. MySpace, Inc. and Jones v. Dirty World Entertainment Recordings LLC, refined the limits of protection. The Supreme Court addressed related questions in cases concerning intermediary liability doctrines and, more recently, granted certiorari in matters implicating platform immunity and state-law claims. Circuit splits have emerged over issues such as federal preemption, the exception for federal criminal law, and the interplay with statutes such as the Trafficking Victims Protection Reauthorization Act and consumer-protection laws invoked in suits against Backpage.
Scholars, legislators, and advocacy groups have debated whether immunity fosters innovation of the kind represented by entities like Amazon and eBay or enables harms alleged in litigation involving Cambridge Analytica, Parler, and other high-profile controversies. Critics, including policymakers from both the Democratic and Republican caucuses, have argued that broad protections reduce platforms' incentives to curb illegal content and disinformation, a concern raised by commentators at the New York Times and the Washington Post and by policy researchers at the Brennan Center for Justice and the Brookings Institution. Defenders cite amici from technology firms and organizations such as the Heritage Foundation and the Electronic Frontier Foundation emphasizing free expression and the operational burden that liability would impose on smaller services.
Since enactment, numerous bills, introduced across sessions of the United States Congress and considered in committees such as the Senate Judiciary Committee and the House Energy and Commerce Committee, have sought to revise immunity, clarify standards for moderation, or impose transparency and accountability mandates. Proposals have ranged from targeted exceptions addressing sex trafficking claims to broader reforms advanced by lawmakers such as Senator Amy Klobuchar and Representative Bob Goodlatte. Legislative efforts have intersected with executive actions and regulatory proposals from administrations including those of President Donald Trump and President Joe Biden, generating competing legislative models for platform liability, notice-and-takedown procedures, and algorithmic accountability.
The legal environment shaped by the provision has influenced the business models of platforms including Instagram, TikTok, Pinterest, LinkedIn, and WordPress. Immunity facilitated scalable hosting of user-generated content, advertising ecosystems tied to services like Google AdSense and firms like Meta Platforms, Inc., and the emergence of content-moderation practices employing both human reviewers and machine learning systems developed by research groups at MIT and OpenAI. Users seeking redress must often sue content creators rather than intermediaries, while platform enforcement decisions have prompted public scrutiny and litigation involving civil-rights organizations such as the ACLU and consumer advocates.
Other jurisdictions have adopted divergent regulatory approaches exemplified by the European Union's e-Commerce Directive and Digital Services Act, the United Kingdom's online safety proposals, and country-specific regimes in Germany (NetzDG), Australia, and India. These laws impose varying duties on intermediaries for notice-and-action procedures, proactive removal obligations, and transparency reporting, creating cross-border compliance challenges for multinational firms like Microsoft Corporation, Apple Inc., and Huawei. Comparative debates involve tensions among national courts, supranational bodies such as the European Court of Justice, and transnational platforms navigating multiple statutory regimes.