FEC Advances AI Deepfake Regulation for Election Ads
Washington, D.C. – The Federal Election Commission (FEC) has taken a step closer toward regulating AI-generated deepfake content in political ads ahead of the 2024 election, aiming to safeguard voters against election disinformation.
The FEC’s unanimous procedural vote on Thursday urges the regulation of ads that use AI to falsely depict political opponents saying or doing something they never did.
The meeting follows a plea from the advocacy group Public Citizen, requesting clarification on whether existing federal regulations against “fraudulent misrepresentation” encompass AI-generated deepfakes in campaign communications.
While manipulated images, videos, and audio clips are nothing new, AI has made such content easier and cheaper to produce, and more capable of swaying public perception. Florida GOP Governor Ron DeSantis, a contender in the 2024 presidential race, has already embraced these techniques to influence voters: in June, his campaign team published apparently AI-generated photos on Twitter depicting Donald Trump and Dr. Anthony Fauci embracing.
The Commission’s vote reflects its intention to explore this matter. Still, any actual rules governing these ads will come only after a 60-day public comment period, expected to begin next week. With AI evolving rapidly while legislation moves slowly, timely action on the issue is seen as pivotal.