Using simple, widely accessible artificial intelligence (AI) software, almost anyone can now create audio or video recordings of celebrities or ordinary people appearing to say and do things they never did. The rights of the people whose voices and images are appropriated will need to be protected far more seriously. Big tech companies have teamed up in a historic attempt to stop the misuse of AI in the upcoming elections throughout the world.
Deepfakes keep improving, becoming higher in quality, more convincing, and harder to distinguish from reality. The large number of elections taking place around the world in 2024 raises the concern that artificial intelligence could be injected into these electoral processes and compromise their integrity. Voter manipulation through deepfakes is a central debate in many countries preparing for elections: about 4 billion people will go to the ballot boxes in over 50 different countries. Academics, journalists, and politicians have expressed concern over the use of AI-generated content in political influence operations.
Beyond elections, AI-generated content will also have a growing impact on our social life. Recent viral cases have involved celebrities, but given how fast deepfakes are evolving, we will soon see deepfake videos of ordinary people who are not celebrities or politicians and whose jobs or activities attract no public interest. This poses a serious threat to societies, which is why collective initiatives against AI-generated trickery are so important.
Case studies of recent deepfakes
Manipulated media are not new: deepfakes, and their non-AI-based cousins known as "cheapfakes," have been around for a while. However, since ChatGPT brought AI to a mass audience, billions of dollars have been invested in AI companies in the last year, and the proliferation of programs that make deepfakes easy to produce has multiplied their use against the public. Beyond manipulated video, there have also been cases of audio deepfakes, which are even easier to create.
An audio deepfake of US President Joe Biden, distributed in New Hampshire and urging people not to vote in the state's primary, reached more than 20,000 people. Steve Kramer, the person behind the manipulation, who paid $150 to produce it, stated that he did it as an act of civil disobedience to draw attention to the risks posed by artificial intelligence in politics and to the necessity of AI regulation.
Another striking example of how deepfakes can endanger democracy is the audio deepfake of Slovak politician Michal Simecka. A recorded voice message, appearing to capture Simecka discussing election fraud with journalist Monika Todova, was uploaded to Facebook 48 hours before Slovakia's election. An example with potentially serious political and societal implications is the audio deepfake of London mayor Sadiq Khan. In early November 2023, an audio clip went viral in which Khan appeared to disparage Armistice Day (the commemoration marking the end of World War I) and to demand that pro-Palestine marches take precedence.
In addition to audio deepfakes with political protagonists, video deepfakes featuring celebrities continue to circulate on the internet. For example, there are video deepfakes of Hollywood actor Tom Hanks, in which an AI version of him promotes a dental plan, and of US YouTuber MrBeast, who appeared to be hosting "the world's largest iPhone 15 giveaway."
Image deepfakes of the singer Taylor Swift, published at the beginning of this year on several social media platforms, including X (formerly Twitter), Instagram, Facebook, and Reddit, also went viral. Before it was taken down from X, one image deepfake of Swift was viewed more than 45 million times in the roughly 17 hours it was live on the platform.
A collective initiative against AI-generated trickery
The "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," announced at the Munich Security Conference, sees 20 major players, including Adobe, Google, Microsoft, OpenAI, Snap Inc., and Meta, commit to using cutting-edge technology to detect and counteract harmful AI-generated content designed to mislead voters, and to supporting efforts to foster public awareness, media literacy, and all-of-society resilience. It is the first time that 20 different companies have joined forces against AI-generated trickery.
Participating companies agreed to eight specific commitments to mitigate the risks of deceptive AI election content. The initiative targets AI-generated images, video, and audio that might mislead voters about candidates, election officials, or the voting process. However, it does not require that such content be banned outright.