State of Elections

A student-run blog from the Election Law Society

A Deep-Dive into California’s “Deepfake” Disclosure Requirements

November 8, 2024

By: Caroline Olsen

When Vice President Kamala Harris launched the first advertisement of her campaign for President, Christopher Kohls (@MrReaganUSA) quickly responded with a computer-generated voiceover parody, entitled “Kamala Harris Campaign Ad PARODY.” After Elon Musk reposted the parody on X (formerly Twitter), California Governor Gavin Newsom vowed to sign a bill that would make “manipulating a voice in an ‘ad’ like this one . . . illegal.” On September 17, 2024, he signed three. Despite having received bipartisan support, California’s latest legislation demonstrates the inherent challenges of designing deepfake regulations to “ensure . . . elections are free and fair.”

While these laws contain varying prohibitions and requirements, they each offer one common safe harbor: disclosure. Effective immediately, California Assembly Bill No. 2839 (AB 2839) prohibits a “person, committee, or other entity” from “knowingly distribut[ing] an advertisement or other election communication containing materially deceptive content,” unless “the content includes a disclosure stating[:] This _____ has been manipulated.” By January 2025, California Assembly Bill No. 2655 (AB 2655) will require large online platforms to identify and label “materially deceptive content . . . within an advertisement or election communication . . . no later than 72 hours after a report is made.” The label must state: “This _____ has been manipulated and is not authentic.” Similarly, California Assembly Bill No. 2355 (AB 2355), also effective in January 2025, will require that “qualified political advertisements . . . include, in a clear and conspicuous manner, the following disclosure: Ad generated or substantially altered using artificial intelligence.”

Unsurprisingly, two of these laws have already been challenged. Christopher Kohls challenged AB 2839 and AB 2655 on the day they were signed into law. The Complaint seeks a declaratory judgment that both laws are unconstitutional, facially and as applied, under the First and Fourteenth Amendments and the California Constitution. Notably, the Complaint challenges the laws’ disclosure provisions, which require speakers to label deepfake content with “a precisely-worded disclaimer.” Given the video’s “obviously far-fetched content” and its “PARODY” label, the Complaint alleges that the “satirical nature” of Kohls’s video is clear and that the mandatory labeling requirements would “alter[] the nature of his message.” For “visual media,” like Kohls’s parody video, AB 2839 requires that the disclosure be “no smaller than the largest font size of other text appearing in the visual media” and “appear[] for the duration of the video.” The Complaint responds to that provision with an image, “illustrating a futile attempt to incorporate the absurdly large label that AB 2839 requires.”

Kohls’s challenge draws attention to an implicit assumption at the core of these disclosure requirements: transparency defeats deception. This is not necessarily true, especially in the context of artificial intelligence. Even if advertisements contain a deepfake disclosure in “absurdly large” text, it remains difficult to distinguish whether the underlying data is based in truth or falsity—or, likely, both. Without a readily available mechanism for “fact-checking” the underlying data, the disclosure safe harbor does not adequately protect voters from deception or manipulation.

Furthermore, the disclosure requirements might not be as effective as policymakers expect at changing how people consume election-related media. Consider, for example, when New York City passed legislation requiring fast-food restaurants to post caloric information. Presumably, this disclosure would help “curb obesity by helping consumers make better-informed decisions.” Behavioral economist Dan Ariely, studying the effects of this legislation on individual decision-making, explained, “You would expect that the moment you give people information . . . people would stop consuming high-caloric stuff. . . . [I]t actually went the other way around. People said, ‘Hey, only 800 calories!’” Perhaps because they already suspected certain foods were calorically dense, some people were not deterred by the disclosure. In the context of parody campaign advertisements, this finding suggests deepfake disclosures might not affect how the public actually consumes information and participates in elections.

In X Corp. v. Bonta, the Ninth Circuit analyzed California Assembly Bill 587 (AB 587), which requires social media companies to post their terms of service, under the First Amendment. In its opinion, the court noted that “[e]ven a pure ‘transparency’ measure, if it compels non-commercial speech, is subject to strict scrutiny.” Likewise, California’s election-related deepfake legislation will likely be subject to the highest degree of judicial scrutiny. As more states prepare to adopt similar laws, challenges like Kohls’s should inform a more narrowly tailored approach to election-related deepfake legislation.
