Fears that generative AI would fuel election misinformation in 2024 ultimately did not materialize, according to Meta. The tech giant stated that AI-generated content accounted for less than 1% of election-related misinformation flagged by its fact-checkers across major global elections this year.
The finding was part of Meta's year-end analysis of its efforts to safeguard elections, covering key polls in the U.S., UK, Bangladesh, India, Pakistan, Indonesia, France, South Africa, Mexico, Brazil, and the European Union.
Nick Clegg, Meta’s President of Global Affairs, emphasized that while initial concerns about AI-driven disinformation were valid, its actual impact on the company’s platforms was “modest and limited in scope.” During a briefing, Clegg pointed to Meta’s preemptive measures, such as expanded AI labeling and refinements to its AI-powered tools. Its Imagine AI image generator, for example, blocked 590,000 attempts to create deepfakes of prominent political figures, including Joe Biden, Kamala Harris, and Donald Trump, in the run-up to the U.S. elections.
Meta also disrupted 20 covert influence campaigns aimed at spreading misinformation. These campaigns were typically run by networks of fake accounts that inflated their apparent popularity with purchased likes and followers. Although some of them used AI to help generate content, Meta found that this gave the operations only a marginal boost and did not make them meaningfully more effective at spreading misinformation.
Meta’s method for detecting and shutting down these campaigns focuses on how accounts behave, not just what they post. Because the networks are identified by their coordinated activity, the company could find and remove them whether or not their content was AI-generated, making AI a minor factor in these operations.
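To make the behavior-based idea concrete, here is a minimal, purely illustrative sketch of one such signal: near-simultaneous posting of identical content across accounts. Everything in it (the record format, account names, the 60-second window, the flagging threshold) is a hypothetical assumption for illustration, not a description of Meta's actual detection systems.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical post records: (account_id, unix_timestamp, content_hash).
# All names and values here are invented for illustration.
posts = [
    ("acct_1", 1700000000, "h_abc"),
    ("acct_2", 1700000004, "h_abc"),
    ("acct_3", 1700000007, "h_abc"),
    ("acct_9", 1700500000, "h_xyz"),
]

# Posts of identical content this close together count as one coordination event.
WINDOW_SECONDS = 60

def coordination_counts(posts):
    """Count, per account pair, how often they posted identical content within the window."""
    by_hash = defaultdict(list)
    for account, ts, content_hash in posts:
        by_hash[content_hash].append((account, ts))

    pair_counts = defaultdict(int)
    for entries in by_hash.values():
        for (a1, t1), (a2, t2) in combinations(entries, 2):
            if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return pair_counts

# Flag pairs that act in lockstep. A real system would use a far higher
# threshold and many more behavioral signals (account creation dates,
# follower graphs, posting cadence). Note that the content itself is never
# inspected, so it makes no difference whether it was written by a human
# or generated by AI.
FLAG_THRESHOLD = 1
suspicious = {pair: n for pair, n in coordination_counts(posts).items() if n >= FLAG_THRESHOLD}
print(suspicious)
# {('acct_1', 'acct_2'): 1, ('acct_1', 'acct_3'): 1, ('acct_2', 'acct_3'): 1}
```

The design point this toy example captures is the one in the paragraph above: because detection keys on coordination between accounts rather than on the content they share, swapping human-written posts for AI-generated ones does nothing to evade it.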
Despite its efforts, Meta acknowledged areas for improvement. Clegg highlighted the need for more precise enforcement of policies to strike a better balance between curbing misinformation and protecting free expression. Meta also pointed out that misinformation linked to foreign influence operations often appeared on competing platforms like X and Telegram, where similar oversight might be lacking.
As Meta distances itself from political content by deprioritizing news on Facebook and limiting political recommendations on Instagram and Threads, it continues to refine its approach to misinformation. The company plans to review its policies and adjust strategies as needed to tackle emerging challenges in the evolving information landscape.