Who is Responsible for Regulating Misinformation? – An Analysis of Section 230, the First Amendment, and Technology Platforms

By Laetitia Haddad, Intern

Upon finally acquiring Twitter, Elon Musk declared that his first order of business was to "save free speech," alluding to the ongoing debate over misinformation and censorship on social media platforms. In his new leadership role at Twitter, Musk must now navigate the legal precedent governing content regulation and user freedom online. An analysis of Section 230, the key statute underpinning the current state of free speech online, is crucial to understanding the scope of power platforms can exert over speech, and how that power may change in the near future.

What is Section 230? 

Enacted in 1996, Section 230 of the Communications Decency Act was designed to foster the growth of the then-nascent internet. It established that interactive online services were not legally responsible for content posted by their users, and it granted hosts the ability to moderate content on their platforms as they saw fit.

How does it interact with the First Amendment?

As private entities, digital platforms retain the right to set their own moderation rules; the First Amendment restricts government censorship, not the editorial decisions of private companies. Texas's H.B. 20, signed into law in 2021, contends that the lack of standardized moderation policy results in the unfair suppression of certain political voices, and it prohibits large platforms from censoring users based on their viewpoints. In January 2021, Twitter suspended then-President Donald Trump's account, attributing the ban to tweets that incited violence during the January 6 riot at the U.S. Capitol. Some individuals, like Musk, argue that removals of this kind unfairly target conservative viewpoints.

In September 2022, the U.S. Court of Appeals for the Fifth Circuit upheld H.B. 20, a decision likely to prompt an eventual ruling by the Supreme Court. The current Court, which has shown no qualms about overturning precedent, could narrow the protections enshrined in Section 230 and open the door to government regulation of online content moderation.

Advocates of repealing Section 230 argue that state regulation of social media would limit misinformation without stifling particular political perspectives. By contrast, proponents of Section 230 view it as a key tenet of free speech on the internet, one that has allowed companies built on user-generated content to flourish.

Misinformation and democracy 

Even a brief look at misinformation's effect on the American public reveals a complex and politicized relationship, with profound consequences for trust and democracy.

Misinformation remains a divisive force in American society, eroding the foundations of democracy. Seventy percent of American adults believe that the spread of false information online is a major threat to national security, ranking it ahead of anxieties about China, Russia, climate change, and the economy.

Currently, the battle against misinformation is polarized. In 2021, the Pew Research Center tracked a split between Republicans and Democrats over tolerance of misinformation and who should arbitrate online speech. One survey found that 70% of Republicans believe all information should be protected, even if false, while 65% of Democrats believe misinformation should be regulated. When polled on whether tech companies should restrict false information online, even at the expense of the free flow of information, the parties again diverged: only 37% of Republicans agreed with granting platforms the ability to limit speech, compared to 76% of Democrats. These statistics reflect divided opinions on what information, if any, should be regulated online, and on who should be responsible for regulating it.

Despite these disparate perspectives on how to manage misinformation online, common ground can be found in consistent, apolitical moderation applied to users of every political affiliation across all online platforms. Clear, fairly applied rules are essential to depoliticizing the problem of misinformation.

How are technology companies adjusting to fight misinformation?

Leveraging the rights enshrined in Section 230, social media companies are implementing their own standards for content moderation. Tech companies and the internet have thrived on the free flow of ideas and information; in order to sustain inclusive and engaging platforms, self-regulation is critical. 

In practice, Twitter launched the Twitter Moderation Research Consortium in 2022, giving a diverse group of global researchers access to platform data for analyzing misinformation. Additionally, Birdwatch is Twitter's community-driven program in which volunteer contributors flag and annotate potentially misleading tweets, with an algorithm elevating notes rated helpful by users across the political spectrum. At TikTok, global fact-checking partners accredited under the International Fact-Checking Network's code of principles review content on the platform. These new and developing initiatives offer tangible examples of fair action against misinformation.
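To illustrate the kind of logic behind cross-partisan flagging, below is a minimal, hypothetical Python sketch in which a note on a tweet is surfaced only when raters from differing viewpoints agree that it is helpful. The function name, viewpoint labels, and 60% threshold are illustrative assumptions, not Twitter's actual Birdwatch scoring model, which is considerably more sophisticated.

    from collections import defaultdict

    # Hypothetical sketch: surface a note only if raters from at least
    # two viewpoint groups find it helpful. The 0.6 threshold and the
    # viewpoint labels are illustrative assumptions.
    def note_is_helpful(ratings, threshold=0.6):
        """ratings: list of (rater_viewpoint, is_helpful) tuples,
        e.g. ("left", True) or ("right", False)."""
        votes_by_viewpoint = defaultdict(list)
        for viewpoint, is_helpful in ratings:
            votes_by_viewpoint[viewpoint].append(is_helpful)
        if len(votes_by_viewpoint) < 2:
            return False  # require agreement across viewpoints
        # A majority of raters in *every* viewpoint group must agree.
        return all(sum(votes) / len(votes) >= threshold
                   for votes in votes_by_viewpoint.values())

    ratings = [("left", True), ("left", True),
               ("right", True), ("right", False), ("right", True)]
    print(note_is_helpful(ratings))  # True: both groups cross the threshold

The design choice worth noting is that agreement within a single viewpoint group is never sufficient on its own; requiring cross-partisan consensus is what lets a platform defend the output as apolitical.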

Transparency is an integral feature of effective moderation policy. Standards for dialogue must be clear and applied fairly to all forms of speech across the political spectrum in order to blunt attacks on the integrity of private platforms. With such standards in place, tech companies can continue to grow and to engage diverse perspectives in meaningful discourse.

At this point, it is in the best interest of social media companies to strengthen their self-regulation policies rather than wait for the government to encroach on the provisions of Section 230. In the future, collaboration between tech firms and policymakers on free speech online could generate mutually beneficial solutions to misinformation and yield bipartisan legislation, defusing the current polarization.

Though it is a difficult task, social media companies have the power to foster a more inclusive and safer environment for dialogue online. Enshrining basic rules and norms, and understanding the mechanisms behind misinformation, are important first steps toward maintaining fair platforms for productive free speech.