UN Urges Stronger Action to Combat AI Deepfake Threats

UN Report Highlights Escalating Deepfake Concerns

The United Nations’ International Telecommunication Union (ITU) has issued a stark warning about the escalating dangers posed by artificial intelligence-generated deepfakes. In a report released during its “AI for Good Summit” in Geneva, the ITU called for immediate and robust measures to detect and prevent the spread of manipulated multimedia content.

According to the report, deepfakes—AI-generated images, videos, and audio that mimic real individuals—pose significant threats to election integrity, financial systems, and public trust. These synthetic media forms have become increasingly realistic, making it difficult for audiences to distinguish real content from fake.

Call for Advanced Detection Tools and Standards

To address these mounting risks, the ITU urged social media platforms and digital content distributors to implement advanced tools capable of verifying the authenticity of multimedia before it is shared publicly. The organization emphasized the need for digital watermarking and provenance verification to ensure that users can trust the content they encounter online.

“Trust in social media has dropped significantly because people don’t know what’s true and what’s fake,” said Bilel Jamoussi, Chief of the Study Groups Department at the ITU’s Standardization Bureau. He emphasized that deepfake detection is now one of the most pressing challenges, given the rapid advancements in generative AI technology.

Industry Leaders Stress Content Provenance

Leonard Rosenthol, a representative from Adobe—a company that has been tackling deepfake issues since 2019—highlighted the importance of verifying a digital file’s origin. “We need more of the places where users consume their content to show this information,” he said. “When you are scrolling through your feeds you want to know: ‘Can I trust this image, this video?’”

Rosenthol advocated for content credentials and embedded metadata that tell users who created a piece of content and when, helping audiences make informed decisions and curbing the spread of misinformation.

Global Collaboration Needed to Address the Issue

Dr. Farzaneh Badiei, founder of the digital governance research firm Digital Medusa, emphasized the importance of a unified international approach. Currently, no global regulatory body is solely responsible for overseeing the identification and management of manipulated media. “If we have patchworks of standards and solutions, then the harmful deepfake can be more effective,” she told Reuters.

Badiei stressed that without coordinated global standards, malicious actors could exploit regulatory gaps, allowing harmful content to spread more effectively across international borders.

Watermarking and AI Literacy as Preventive Measures

The ITU is developing standards for watermarking video content, which accounts for roughly 80% of all internet traffic. These watermarks would embed metadata, such as creator identity and timestamps, to help verify a video's source, giving users a transparent trail of its origin and integrity.
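To illustrate the general idea behind provenance metadata, the sketch below binds a creator identity, a timestamp, and a content hash into a signed manifest, then checks that any modification to the content breaks verification. This is a toy example, not the ITU's or any standard's actual scheme: real content-credential systems such as C2PA use certificate-based signatures, and the HMAC key and names here are invented for illustration.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key standing in for a real certificate-based signature.
SIGNING_KEY = b"publisher-secret-key"

def make_manifest(content: bytes, creator: str) -> dict:
    """Bind creator identity and a timestamp to the content's hash, then sign."""
    manifest = {
        "creator": creator,
        "created_at": int(time.time()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content itself is unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

video = b"...raw video bytes..."
manifest = make_manifest(video, creator="Example Newsroom")
print(verify_manifest(video, manifest))         # untouched content verifies
print(verify_manifest(video + b"x", manifest))  # any edit breaks verification
```

The key property, as the article describes, is that a platform can surface this trail to viewers: a valid manifest shows who created the content and when, while a failed check flags that the file was altered after signing.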

Tomaz Levak, founder of Switzerland-based Umanitek, echoed the need for proactive safety measures. He called on the private sector to take the lead in educating users and implementing safeguards. “AI will only get more powerful, faster or smarter… We’ll need to upskill people to make sure that they are not victims of the systems,” Levak stated.

A Growing Need for Digital Verification Infrastructure

As generative AI continues to evolve, experts warn that the tools used to create deceptive content are becoming more accessible and sophisticated. This trend underscores the urgent need for a robust digital infrastructure capable of verifying content at scale. Initiatives such as digital watermarking, AI literacy campaigns, and international cooperation are critical to maintaining information integrity.

The ITU’s report serves as a wake-up call for governments, tech companies, and civil society. Without swift and coordinated action, the spread of deepfakes could erode public trust, disrupt democratic processes, and cause widespread financial harm.

