
Section 230 and Disinformation

What is Section 230? 

As the internet was gaining popularity in the 1990s, Congress faced questions about how to regulate this new communication platform. In 1996, Congress passed the Communications Decency Act (CDA), which sought to provide some oversight over what people could share on the internet. The law had problems, though. One section criminalized any “patently offensive” posts on the internet that could be viewed by minors. If found guilty, individuals could face steep fines and/or up to two years in prison.

The American Civil Liberties Union (ACLU) sued the government, arguing that the law’s censorship provisions were too vague and would unconstitutionally infringe on speech protected by the First Amendment. In 1997, the Supreme Court in Reno v. ACLU sided with the ACLU, stating that the censorship provisions placed an “unacceptably heavy burden on protected speech” and “threaten[ed] to torch a large segment of the Internet community.”

What about Section 230? Section 230 of the CDA, sometimes referred to as “the 26 words that created the internet,” was included in the bill to protect websites and service providers from being held responsible for what individual users posted on their sites. Section 230(c)(1) states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Unlike a publisher such as the New York Times or an individual speaker, who can be held responsible for defamation or for inciting violence, companies like Meta, YouTube, and Twitter are not treated as publishers or speakers under Section 230 and thus cannot be held liable for the content users post on their platforms.

The second key provision is Section 230(c)(2), which states, “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…” This “Good Samaritan” provision encourages content moderation: companies cannot be held liable for good-faith efforts to restrict objectionable material, even when that material is constitutionally protected speech.

Section 230 has allowed billions of people to create an unlimited number of posts, forums, tweets, videos, blogs, podcasts, and more without platforms being held responsible for that content. If Section 230, or a similar protection, did not exist, the internet as we know it today would be vastly different.

Why do some want to amend Section 230?

While Section 230 has been the law of the land for nearly three decades, a growing number of critics believe it is time for a change. In a rare example of bipartisan agreement, politicians on both sides of the aisle say it is time to abolish or amend Section 230. Former President Trump and President Biden have both called for scrapping Section 230 in its entirety. Democrats and Republicans in Congress have also called for changes to Section 230; however, little progress has been made in deciding what changes are needed.

Part of the reason there has been no consensus in Congress is that Republicans and Democrats disagree about what the law’s problems are.

Republicans generally view Section 230 as giving big tech carte blanche to infringe on conservatives’ First Amendment rights by censoring their speech. However, as R Street has pointed out, “the First Amendment is a restriction on government, not private entities, and attempts to force platforms to carry speech they don’t want to violates their First Amendment rights to freedom of association.”

Democrats, on the other hand, generally believe Section 230 has allowed tech companies to profit from disinformation, misinformation, hate speech, and other disruptive content without facing any accountability or any requirement to police content on their platforms.

Proposed reforms to Section 230

According to the Bipartisan Policy Center, more than 25 bills aimed at repealing or amending Section 230 were introduced in the 117th Congress, and some have been reintroduced in the 118th Congress. Many of these bills create specific carve-outs for things such as artificial intelligence or content that enables cyberstalking, online harassment, and discrimination. These carve-outs would narrow the scope of immunity that platforms are currently granted while leaving other Section 230 protections in place.

There is some precedent for this strategy. In 2018, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) was signed into law, creating a carve-out from Section 230 immunity for content that promotes sex trafficking. However, the law has faced challenges in implementation. Earlier this year, the Supreme Court declined to hear a case in which a victim of sex trafficking sought to hold Reddit accountable for hosting child pornography on its website. By declining to hear the case, the Court left in place the 9th U.S. Circuit Court of Appeals’ ruling that “plaintiffs must show that an internet company ‘knowingly benefited’ from the sex trafficking through its own conduct” for FOSTA to apply, meaning the bar for holding tech companies accountable remains high.

While some members of Congress may continue to pursue specific carve-outs to Section 230, others believe a full repeal is necessary. Some big tech CEOs are hopeful a compromise can be reached, as a complete repeal of Section 230 would likely fundamentally change how the internet functions today.

In 2021, during the House Energy and Commerce Subcommittee on Communications and Technology’s hearing, “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation,” CEOs offered their suggestions for addressing concerns with Section 230 without major changes or a repeal. Mark Zuckerberg, CEO of Meta (Facebook, Instagram, Threads), and Sundar Pichai, CEO of Alphabet (Google, YouTube), made the case for improving transparency in how companies moderate harmful but legal content, arguing that doing so would not solve every issue but could improve trust between users and platforms. It is worth noting that nothing currently stops these companies from being more transparent about their content moderation, but it seems they may need a legislative incentive to do so.

The path forward

It is clear that there is a demand for Section 230 reform in Washington, but what that reform will look like is still an open question. Earlier this year, in an op-ed for the Wall Street Journal, President Joe Biden reiterated his belief that “we must fundamentally reform Section 230” and called for greater transparency regarding the algorithms that big tech companies use on their platforms. However, the Biden administration has not offered specific changes to the law.

Section 230 is a double-edged sword: it has created an environment that allows political dissent and social movements such as the Arab Spring, Black Lives Matter, and Me Too to thrive, but it has also “enabled online abuse that has destroyed people’s lives.” A balance must be struck between protecting individual freedom of expression and limiting online harms.

Several proposed changes to Section 230 center on carve-outs for specific illegal activity, i.e., ensuring platforms do not receive Section 230 immunity for hosting illegal content. While this may address some issues, many of the proposed changes would likely do little to blunt the negative impact of legal harms, such as mis- and disinformation, on these platforms. Furthermore, changes to Section 230 alone may not deliver the greater algorithmic transparency that many have called for.

One recommendation from the Aspen Institute that may, at least in part, address the spread of disinformation would be to “[w]ithdraw platform immunity for content that is promoted through paid advertising and post promotion.” Doing so may give big tech platforms more of an incentive to carefully screen paid ads and promotions, which have been used to spread disinformation. The Cato Institute, though, argues that even this approach may raise First Amendment concerns.

The Task Force on Defeating Disinformation Attacks on U.S. Democracy will continue to explore various options for amending Section 230 and seek out other changes to federal law that will address concerns beyond Section 230 immunity. 

DISCLAIMER: McCain Institute is a nonpartisan organization that is part of Arizona State University. The views expressed in this blog are solely those of the author and do not represent an opinion of the McCain Institute.

Author: Mike Brand, McCain Institute Democracy Fellow
Publish Date: August 14, 2023