The FCC Can’t Stop Political Deepfakes
Arguments from the Abundance Institute’s Comment to the FCC
As we approach the 2024 U.S. Presidential Election, the use of generative AI tools like ChatGPT and Claude has sparked widespread concern. Headlines have warned of an "AI election," with fears that political deepfakes could disrupt democratic processes. In response, the Federal Communications Commission (FCC) has proposed a rule requiring disclosures for political advertisements that use AI-generated content. While well-intentioned, this rule is flawed and unlikely to solve the problem it aims to address.1
The 2024 election cycle provides researchers with a wealth of data on the impact of generative AI. At my organization, the Abundance Institute, we have been analyzing how AI is used in political communication through our AI Election Observatory, which tracks media coverage of AI’s role in the election. Despite initial fears, the actual impact of AI-generated political disinformation has been smaller than expected: AI’s role in political advertising, while real, has not been as significant as some headlines suggest.
The FCC’s proposed rule would apply to broadcast radio and television ads but not to internet media or streaming platforms, which are increasingly popular avenues for political messaging. One of the rule’s main justifications comes from a January 2024 incident involving a fake robocall impersonating President Biden. A political consultant used generative AI to create audio mimicking Biden’s voice, urging New Hampshire voters to abstain from voting in the Democratic primary. Serious as it was, this case does not point to a widespread problem in broadcast media. The robocall violated existing laws, and the person responsible is facing criminal charges. Moreover, this was not a broadcast issue at all; it involved telephone calls, which the proposed rule would not cover.
Our AI Election Observatory has identified only a handful of cases where AI was used in political advertising. Of nearly 40,000 media articles analyzed, we found just four political ads that used generative AI: three from the Ron DeSantis campaign and one from the Republican National Committee. The FCC’s rule would not have covered these ads, because the Commission’s authority extends only to traditional broadcast channels like radio and television, not to the digital platforms where most AI-generated ads have appeared. This disconnect highlights the rule’s limited effectiveness: the FCC is simply not positioned to regulate where AI-generated political ads actually run.
The rule also raises practical concerns. Defining AI is difficult from both a technical and a legal perspective. The FCC’s definition of AI-generated content as "image, audio, or video generated using computational technology" is overly broad. Modern cameras, smartphones, and video editing software all use AI-driven algorithms to enhance image quality and sound, so ads produced with these everyday tools could technically require AI disclosures under the rule. If the FCC defines AI too broadly, it risks sweeping in ordinary content creation technologies, forcing advertisers to over-disclose or mislabel their ads. The likely result is disclosure fatigue, where audiences ignore warnings that no longer carry meaning.
The rule’s imprecision could create more confusion than clarity. Video stabilized by AI or audio cleaned up by AI noise reduction software, for example, would technically require an AI disclosure, even though such enhancements have no bearing on the truthfulness of the content. The proposed rule fails to draw a clear line between AI used for editing and AI used for deception, and by making disclosure requirements that broad, the FCC risks diluting the very purpose of transparency.
In addition, the FCC has not provided enough data to justify such a sweeping rule. Generative AI is undoubtedly a powerful tool, but its role in political advertising remains limited. Rather than rushing to implement a broad regulation, the FCC should gather more data on the specific ways AI is being used in political ads, particularly in the areas where it has jurisdiction, such as broadcast media.
To improve the rule, the FCC should also adopt more precise technical language. Terms like “machine learning,” or references to specific model architectures such as “transformers” or “diffusion models,” would define the covered content far more precisely. A more targeted rule would focus on the kinds of AI most likely to deceive voters rather than capturing all AI-enhanced content.
While questions remain about how generative AI will shape political advertising, the FCC’s proposed rule is not the solution. Moreover, the FCC is not the appropriate federal agency to regulate political deepfakes. A more nuanced approach, one that addresses the specific risks of AI-generated disinformation without burdening advertisers with vague and sweeping requirements, would be far more effective. For now, the FCC’s rule falls short of that mark.