With the spread of chatbots spouting falsehoods, face-swapping apps crafting porn videos, and cloned voices defrauding companies of millions, the world is scrambling to rein in AI deepfakes, which have become a misinformation super spreader.
Deepfakes complicate the fight against misinformation because AI can swap faces and produce fabricated footage that looks authentic.
Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.
British campaigner Kate Isaacs said her “heart sank” when her face appeared in a deepfake porn video that an unknown user posted on Twitter.
Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch because they operate anonymously using AI-based software that once demanded specialized skill but is now widely available at low cost.
Weapons of mass disruption
With few barriers to creating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and reputational harm has sparked global alarm.
Political risk consultancy Eurasia Group called the AI tools “weapons of mass disruption.”
This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s manifesto “Mein Kampf.”
Another clip featured a voice similar to that of American commentator Ben Shapiro making a rape threat against politician Alexandria Ocasio-Cortez. The voices of directors Quentin Tarantino and George Lucas were also misused.
Information apocalypse
The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse,” a scenario where many people are unable to distinguish fact from fiction.
Super spreader
In early January, China, which leads the world in regulating new technologies, began enforcing rules that require businesses offering deepfake services to verify the real identities of their users and to tag deepfake content appropriately to avoid “any confusion.”
The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability.”
In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.
In Europe, the British government announced in November that it intends to ban pornographic deepfake videos produced without the consent of the people they depict.
The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act.”
The law, which the EU is racing to pass this year, would require users to disclose deepfakes, but many fear the legislation could prove toothless if it does not cover creative or satirical content.
“How do you reinstate digital trust with transparency? That is the real question right now,” said Jason Davis, a research professor at Syracuse University.
“The [detection] tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So, like cybersecurity, we will never solve this, we will only hope to keep up,” he added.
Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the U.S.-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.
Media watchdog NewsGuard, which called ChatGPT the “next great misinformation super spreader,” said in a study that most of the chatbot’s responses to prompts on topics such as COVID-19 and school shootings were “eloquent, false and misleading.”