Examples of this include text-to-video, deepfake videos, text-to-image, AI-altered images, text-to-speech, voice cloning, and text-to-text.
[8] In the 2024 French legislative election, deepfake videos appeared that claimed to show the family of Marine Le Pen.
In the videos, young women presented as Le Pen's nieces are seen skiing, dancing and at the beach "while making fun of France's racial minorities". However, the family members do not exist.
[12][13][14] A video posted by the All-India Anna Dravidian Progressive Federation party featured an audio clip of Jayaram Jayalalithaa, even though she had died in 2016.
[15][16] The Deepfakes Analysis Unit (DAU) is an open source platform created in March 2024 that allows the public to share misleading content and assess whether it is AI-generated.
[21] Less than three months before the elections, a deepfake video showed U.S. rapper Eminem endorsing the Economic Freedom Fighters party while criticizing the ANC.
[23] Seoul hosted the 2024 Summit for Democracy, a virtual gathering of world leaders initiated by U.S. President Joe Biden in 2021.
Posted on social media, the video was "widely circulated" and often "accompanied by claims that Xi supported candidates from one of the two opposition parties".
[26] In a deepfake video, U.S. congressman Rob Wittman is shown appearing to support Taiwan's Democratic Progressive Party.
[32] Officials from the ODNI and FBI have stated that Russia, Iran, and China used generative artificial intelligence tools to create fake and divisive text, photos, video, and audio content to foster anti-Americanism and engage in covert influence campaigns.
[35][38] In 2023, while Joe Biden was still running for re-election, his presidential campaign prepared a task force to respond to AI-generated images and videos.
[39] A Democratic consultant working for Dean Phillips also admitted to using AI to generate a robocall that imitated Joe Biden's voice to discourage voter participation.
[46][47][48] California has enacted legislation that makes it illegal to use deepfakes to discredit political opponents within sixty days of an election.
[53] Russia was thought to be the most prolific nation targeting the 2024 presidential election, with its influence operations "spreading synthetic images, video, audio and text online", according to U.S. intelligence officials.
[53] Iran has reportedly generated fake social media posts and stories targeting voters "across the political spectrum on polarizing issues during the presidential election".
[53] The Chinese government has used "broader influence operations" that aim to shape its global image and "amplify divisive topics in the U.S. such as drug use, immigration, and abortion".
[56] Outside of the U.S. elections, a deepfake video of Moldova's pro-Western president Maia Sandu showed her "throwing her support behind a political party friendly to Russia".
[54] In Slovakia, faked audio clips of the liberal party leader discussed "vote rigging and raising the price of beer".
[59] These platforms are part of complex and opaque systems that can have a "significant impact on freedom of expression", while the widespread use of AI in campaigns also places heavy pressure on "voters' mental security".
[59] When AI interferes with people's reasoning processes, "dangerous behaviours" can emerge that disrupt important levels of society and nation states.