When Seeing Isn’t Believing: Navigating Trust, Regulation, and Innovation in the Age of Generative AI

By Olivia Reynolds, Intern

The rapid evolution of generative artificial intelligence (GAI) is transforming how we create, share, and perceive information. Imagine a world where you can’t fully trust what’s in front of you. A politician appears to slander global allies, a CEO announces a disastrous decision affecting millions, or a loved one seems to make a heartbreaking comment. These scenarios aren’t science fiction; they’re the reality we face as the line between fact and fiction blurs with each advance in generative technology.

The Evolution of Deepfakes

Although the popularity of GAI tools skyrocketed in 2023, regulatory frameworks have struggled to keep pace, leaving a gaping hole in technological oversight with deep implications for privacy. Despite persistent conversations about the “transformative impact of artificial intelligence,” there remains a striking lack of consensus at the federal and state levels on the most effective regulatory approach to govern generated content.

In the absence of clear action from the government, novel threats such as deepfakes have eroded Americans’ confidence in the media. Deepfakes are highly realistic images, videos, or other media created by GAI. The term first emerged in 2017, when a Reddit user posting under that name began sharing artificially generated intimate videos made with face-swapping technology. The definition has since broadened to encompass all synthetic media created using GAI, often portraying real individuals making statements or engaging in activities that never actually occurred.

The first deepfake to grab the public’s attention came from Oscar-winning filmmaker Jordan Peele in 2018. The message? Don’t trust everything you see online. In a doctored BuzzFeed video, a synthetic former President Barack Obama cautions viewers: “We’re entering an era in which our enemies can make anyone say anything at any point in time—even if they would never say those things.” Then, about thirty seconds in, Peele reveals himself as the puppeteer behind the video, and viewers learned that a seemingly timely warning about fake news was itself a demonstration of the very technology it described.

Regulatory Gaps and the Growing Threat of Synthetic Media

Over the past six years, deepfake technology has only become more advanced, amplifying the threat of fraud and deception. In fact, within the last 12 months, 53% of businesses in the US and UK were targeted by a deepfake scam. Without comprehensive regulations governing GAI in the US, the responsibility largely falls on individuals and companies to be proactive in the face of these technological developments. 

Not all deepfakes are inherently negative. Something innocent in nature, such as an image of you posing with your favorite celebrity, still counts as a deepfake. Seeing rapper Eminem perform alongside his alter ego “Slim Shady” at the 2024 VMAs or comedian Jerry Seinfeld star in a scene from the thriller Pulp Fiction shows a lighter side of generated content. However, in an era where information, true or not, can spread like wildfire online, most deepfakes carry the threat of serious consequences. In early 2024, nearly 1 in 4 Americans said they had encountered a political deepfake, a figure that likely understates the true rate, since these images are designed to pass as real content and most viewers would never know they had seen one. A key incident involved a robocall that mimicked President Joe Biden’s voice, suggesting to New Hampshire voters that casting a ballot in the upcoming primary would preclude them from voting in the general election in November.

Nonconsensual intimate images, 99% of which target women, are also a pervasive problem, prompting celebrities such as Taylor Swift, Kristen Bell, and Xochitl Gomez to speak out in anger and disgust. Some targets were minors at the time, raising serious concerns for child health and safety. Deepfakes also pose an economic threat: in 2023, the New York Times reported on the growing prevalence of AI-generated telephone calls used to trick individuals into transferring money to fake accounts. That same year, one-third of deepfake victims lost over $1,000 in consumer fraud scams.

Yet even with these risks widely recognized, there is little federal-level regulation of AI, leaving the job to the states. Given the lengthy process of passing a bill, legislation struggles to keep pace with technologies evolving at lightning speed. As a result, regulation is largely piecemeal, and protections vary widely based on where in the country you reside. Twenty-nine states have enacted at least 50 bills thus far, but these statutes vary greatly in their comprehensiveness, to say nothing of the 21 states still lacking any government action. Moreover, many deepfake regulations don’t specifically address the risks posed by GAI, instead extending statutes written for other areas.

For instance, deepfake regulation in Oklahoma has struggled to gain support, with the state enacting only one of six proposed bills. Its scope is extremely narrow as well, extending protections only to intimate images of minors. If someone were to circulate synthetic content misrepresenting the interests of a political candidate, legal action would rely on general defamation standards — which may not adequately address the nuances of deepfakes. Conversely, a candidate in Arizona would have more avenues for remedy under state law, which prohibits the distribution of altered content of a candidate within 90 days of an election. This disparity is only one example of the inequities that exist across state lines for protections against AI-generated images and videos. 

Collaborative Solutions for the Future of AI Governance 

These trends in GAI regulation mirror those of previous emerging technologies. When the internet revolutionized American life in the 1990s, regulatory responses lagged, rules varied by jurisdiction, and ethical concerns abounded. Yet GAI is unique in its fast-paced ability to self-improve, its potential for development by small teams or individuals rather than companies or governments, and its capacity to deploy at large scale almost immediately. In light of these past experiences, cross-sector collaboration around generative technologies is crucial. The law provides a critical line of defense against infringements on personal privacy and ensures numerous valuable protections for Americans. However, its efficacy increases when paired with innovations from private companies and civil society.

OpenOrigins’ work exemplifies how industry can fill the gaps while the lawmaking process catches up. The startup uses blockchain technology to verify the authenticity of various forms of media. In the absence of comprehensive pathways for legal relief, the best defense is not falling victim to a deepfake scam in the first place, and that is precisely what OpenOrigins helps consumers avoid. Of course, this approach doesn’t address the harms of deepfakes that misrepresent a person’s interests, actions, or physical state, but it does help protect against the use of generated content in financial scams.
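To make the verification idea concrete, here is a minimal sketch of how hash-based media provenance can work in principle. This illustrates the general technique only, not OpenOrigins’ actual implementation; the ProvenanceLedger class, its method names, and the “newsroom-camera-01” source label are hypothetical stand-ins for an append-only record anchored on a blockchain.

```python
import hashlib
import json
import time

def fingerprint(media_bytes: bytes) -> str:
    """Compute a SHA-256 digest that uniquely identifies this exact file."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceLedger:
    """Hypothetical append-only ledger standing in for a blockchain.

    A real system would anchor these records on a distributed chain so
    they cannot be quietly rewritten after the fact.
    """
    def __init__(self):
        self._records = []

    def register(self, media_bytes: bytes, source: str) -> dict:
        """Record a fingerprint at capture time, chained to the prior entry."""
        prev = self._records[-1]["entry_hash"] if self._records else "0" * 64
        record = {
            "media_hash": fingerprint(media_bytes),
            "source": source,
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        # Hash the record itself so later tampering breaks the chain.
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(record)
        return record

    def verify(self, media_bytes: bytes) -> bool:
        """True only if this exact file was registered and never altered."""
        digest = fingerprint(media_bytes)
        return any(r["media_hash"] == digest for r in self._records)

# Usage: register a photo when it is captured, then check a copy later.
ledger = ProvenanceLedger()
original = b"...raw camera bytes..."
ledger.register(original, source="newsroom-camera-01")

print(ledger.verify(original))                # True: untouched file
print(ledger.verify(original + b"tampered"))  # False: even one byte changed
```

The key design property is that a cryptographic hash changes completely if even one byte of the file changes, so a match against a record made at capture time is strong evidence the media has not been altered since, though it says nothing about content that was never registered in the first place.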

With the looming threat of deepfakes, laws must evolve to address the new challenges posed by GAI, but individuals and private enterprises also play a critical role in bridging the gaps left by policies that differ wildly between states. By fostering transparency, investing in verification technologies, and strengthening legal frameworks, we can ensure that GAI’s transformative innovations benefit society far more than they harm it. Through the strategic implementation of safeguards, Americans can fully embrace the opportunities GAI offers while minimizing its potential for misuse.