Denmark Plans Law to Shield Citizens From AI Deepfakes
In a bold step to address the growing threat of artificial intelligence-generated deepfakes, Denmark is drafting new legislation aimed at safeguarding its citizens against unauthorized digital impersonations. The law would grant individuals copyright over their own likeness, including their image and voice, enabling them to demand takedowns of altered content shared without consent.

The move comes amid increasing concerns over the use of generative AI to create hyper-realistic fake images, videos, and audio. These manipulations have been used to humiliate individuals, spread misinformation, and disrupt democratic processes.

A Personal Toll: Marie Watson’s Story

Marie Watson, a 28-year-old Danish video game live-streamer, became a victim of this technology in 2021. She received a manipulated image of herself from an anonymous Instagram account. The photo, originally taken from her own social media, had been digitally altered to appear nude.

“It overwhelmed me so much,” Watson said. “I just started bursting out in tears, because suddenly, I was there naked.”

Watson’s experience is not unique. In the years since her ordeal, deepfakes have become more sophisticated and accessible, thanks to the proliferation of tools from companies like OpenAI and Google. These tools, while useful in many contexts, have also been exploited to create harmful content.

Legislation to Protect Identity and Dignity

The proposed Danish bill, expected to pass in early 2026, would update copyright laws to cover personal characteristics. This includes a person’s appearance and voice, thereby prohibiting the sharing of deepfakes without consent. Citizens would gain legal rights over their digital likeness, giving them recourse if their identity is misused.

While the law would include exceptions for parody and satire, the criteria for such exceptions remain unclear. The initiative has wide backing from Danish lawmakers, who see it as vital to maintaining public trust and combating misinformation.

Global Momentum Against Deepfakes

Denmark’s legislative push aligns with similar efforts worldwide. In the United States, bipartisan legislation was signed into law in 2025 that criminalizes the non-consensual publication of intimate deepfake images. South Korea has also enacted stricter regulations to combat deepfake pornography and hold social media platforms accountable.

Danish Culture Minister Jakob Engel-Schmidt emphasized the democratic implications of deepfakes. “If you’re able to deepfake a politician without their ability to remove that content, it undermines our democracy,” he stated during an AI and copyright conference.

Experts Applaud the Initiative

Henry Ajder, founder of Latent Space Advisory and a leading expert in generative AI, praised Denmark’s efforts. “Right now, when people ask how they can protect themselves from being deepfaked, the answer is often disappointing. This initiative offers real hope,” he said.

Ajder added that platforms like YouTube have made commendable strides in balancing copyright enforcement with creative freedom, but more is needed industry-wide. “We can’t pretend this is business as usual when it comes to identity and dignity,” he noted.

Platforms Under Pressure

The proposed legislation would apply within Denmark but could carry international implications. While individual users wouldn’t face prison or fines, major tech platforms could be penalized for failing to remove unauthorized deepfakes. Companies such as Twitch, TikTok, and Meta (owner of Facebook and Instagram) have yet to respond publicly to the proposed law.

Engel-Schmidt revealed that several European Union countries, including France and Ireland, have expressed interest in Denmark's approach. Because Denmark currently holds the EU presidency, its actions may influence broader regional policy.

Creative Industry Backs the Move

Support for the bill also comes from Denmark’s creative sector. Maria Fredenslund, director of the Danish Rights Alliance, said existing copyright laws fall short in protecting individuals. She cited the case of David Bateson, a Danish voice actor whose AI-cloned voice was widely circulated online without his consent.

“When we reported this to online platforms, they asked which regulation applied. We had no clear answer,” Fredenslund said. The new law could finally provide that clarity.

Watson’s Warning: “When It’s Online, You’re Done”

Marie Watson, still dealing with the aftermath of her own deepfake experience, remains cautious. She acknowledges the value of the proposed law but stresses that enforcement must be robust.

“You could literally just search ‘deepfake generator’ on Google, and dozens of tools pop up,” she said. “It shouldn’t be that easy. These platforms need to do more.”

Watson believes that once harmful content is online, the damage is already done. “You can’t do anything. It’s out of your control,” she said, highlighting the urgent need for stronger digital protections.

