Section 230

Section 230 was developed in response to a pair of lawsuits against online discussion platforms in the early-to-mid 1990s that resulted in differing interpretations of whether the service providers should be treated as publishers or, alternatively, as distributors of content created by their users.

Its authors, Representatives Christopher Cox and Ron Wyden, believed interactive computer services should be treated as distributors, not liable for the content they distributed, in order to protect the then-growing Internet.

[29] The court asserted in its ruling that Congress's rationale for Section 230 was to give Internet service providers broad immunity "to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material."

[28] This rule, cementing Section 230's liability protections, has been considered one of the most important pieces of case law affecting the growth of the Internet, allowing websites to incorporate user-generated content without fear of legal liability.

In 2008, the Ninth Circuit in an en banc decision ruled against Roommates.com, agreeing that its required profile system made it an information content provider and thus ineligible to receive the protections of §230(c)(1).

[32] Around 2001, a University of Pennsylvania paper warned that "online sexual victimization of American children appears to have reached epidemic proportions" due to the allowances granted by Section 230.

[42][43] The bills were criticized by pro-free speech and pro-Internet groups as a "disguised internet censorship bill": critics argued that the legislation weakens Section 230 immunity, places unnecessary burdens on Internet companies and intermediaries that handle user-generated content or communications (since service providers would be required to proactively take action against sex trafficking activity), and would require a "team of lawyers" to evaluate all possible scenarios under state and federal law, which may be financially unfeasible for smaller companies.

In 2020, Supreme Court Justice Clarence Thomas issued a statement respecting the denial of certiorari in Malwarebytes, Inc. v. Enigma Software Group USA, LLC, which referenced Judge Robert Katzmann's dissent in Force v. Facebook.

[52][53] Consequently, in 2023 the Supreme Court agreed to hear two cases considering whether social media companies can be held liable for "aiding and abetting" acts of international terrorism when their recommender systems promote such content.

[57] Some politicians, including Republican senators Ted Cruz (TX) and Josh Hawley (MO), have accused major social networks of displaying a bias against conservative perspectives when moderating content (such as Twitter suspensions).

[24] Representative Beto O'Rourke stated his intent, as part of his 2020 presidential campaign, to introduce sweeping changes to Section 230 that would make Internet companies liable for failing to proactively take down hate speech.

Fellow candidate and former vice president Joe Biden has similarly called for Section 230 protections to be weakened or otherwise "revoked" for "big tech" companies—particularly Facebook—having stated in a January 2020 interview with The New York Times that "[Facebook] is not merely an internet company."

[82][83] In the aftermath of the Backpage trial and the subsequent passage of FOSTA-SESTA, others have found that Section 230 appears to shield tech companies from liability for content that is otherwise illegal under United States law.

Gonzalez involved Google's liability for YouTube recommendations that appeared to promote recruitment videos for ISIS, which were alleged to have contributed to the death of a U.S. citizen in the 2015 Paris terrorist attacks.

[87] In Taamneh, the company had been found potentially liable for hosting terrorism-related content from third-party users under the Antiterrorism and Effective Death Penalty Act of 1996, notwithstanding Section 230's protections.

The Supreme Court considered this question in regard to terrorism content in the aforementioned Gonzalez and Taamneh cases, but neither decision addressed whether Section 230 protects social media firms with respect to the products of their algorithms.

[93] In February 2020, the United States Department of Justice held a workshop related to Section 230 as part of an ongoing antitrust probe into "big tech" companies.

[94] Observers of the sessions stated that the talks focused only on Big Tech and on small sites engaged in revenge porn, harassment, and child sexual abuse, without considering much of the intermediate uses of the Internet.

[124] Jack Dorsey, Twitter's former CEO, defended the moderation, stating that Twitter was not acting as an "arbiter of truth" but instead, "Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves."

[128] The EO asserts that media companies that edit content, apart from restricting posts that are violent, obscene, or harassing as outlined in the "Good Samaritan" clause §230(c)(2), are "engaged in editorial conduct" and may forfeit the safe-harbor protection granted in §230(c)(1).

[136] By June 2, 2020, the Center for Democracy & Technology had filed a lawsuit in the United States District Court for the District of Columbia seeking preliminary and permanent injunctions against enforcement of the EO, asserting that the EO created a chilling effect on free speech, since it puts all hosts of third-party content "on notice that content moderation decisions with which the government disagrees could produce penalties and retributive actions, including stripping them of Section 230's protections".

The lawsuit argued that had the EO been in force, Twitter would not have been able to fact-check tweets like Trump's as misleading, thereby allowing the President or other government officials to intentionally distribute misinformation to citizens.

Pai stated that this was mostly due to the lack of time to implement such rulemaking before his resignation, but also said that he would not "second-guess those decisions" of social media networks, made under Section 230, to block some of Trump's messages from January 6 that contributed to the violence.

[149] In the days that followed, Twitter, Facebook, and other social media services blocked or banned Trump's accounts, asserting that his speech during and after the riot was inciting further violence.

[153] In March 2021, Facebook's Mark Zuckerberg, Alphabet's Sundar Pichai, and Twitter's Jack Dorsey were asked to testify to the House Committee on Energy and Commerce relating to the role of social media in promoting extremism and misinformation following the 2020 election, of which Section 230 was expected to be a topic.

[155] Following Frances Haugen's testimony to Congress that related to her whistleblowing on Facebook's internal handling of content, House Democrats Anna Eshoo, Frank Pallone Jr., Mike Doyle, and Jan Schakowsky introduced the "Justice Against Malicious Algorithms Act" in October 2021, which is in committee as H.R.5596.

[156] "The last few years have proven that the more outrageous and extremist content social media platforms promote, the more engagement and advertising dollars they rake in," said Representative Frank Pallone Jr., the chairman of the Energy and Commerce Committee.

"By now it's painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it's a question of how best to do it," he added.

[157] The state of Florida (under predominantly Republican control after the 2020 election) passed its "deplatforming" Senate Bill 7072 in May 2021; the bill had been proposed in February 2021 after Trump was banned from several social media sites.

[165] The Fifth Circuit reversed the district court ruling in September 2022, with Judge Andy Oldham stating in the majority opinion, "Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say."

[Image captions: the two tweets of May 26, 2020, from President Trump that Twitter marked "potentially misleading" (inserting the blue warning icon and "Get the facts..." language), which led to the executive order; Trump signing the executive order on "Preventing Online Censorship" on May 28, 2020; text of the "Executive Order on Preventing Online Censorship".]