JD Vance Condemns AI Deepfake Abuse as ‘Unacceptable’

JD Vance and David Lammy United Against AI-Generated Deepfakes

US Vice President JD Vance has reportedly expressed strong opposition to the use of artificial intelligence for creating sexualized deepfake images. According to UK Deputy Prime Minister David Lammy, Vance described such uses of AI as “entirely unacceptable” during a high-level meeting in Washington.

Lammy, speaking to The Guardian, said the two officials discussed the growing concern around the use of AI tools, such as Elon Musk’s Grok, to manipulate images of women and children in disturbing ways. Lammy stated, “I also raised with him the Grok issue and the horrendous, horrific situation in which this new technology is allowing deepfakes and the manipulation of images of women and children, which is just absolutely abhorrent.”

The deputy prime minister emphasized that Vance agreed with his concerns, reportedly condemning the use of AI for such purposes.

Grok and the Rise of AI-Generated Abuse Content

In recent weeks, complaints have surged from users on X (formerly Twitter), particularly women, who claim that their likenesses have been used without consent to generate sexually explicit images using Grok. The Internet Watch Foundation (IWF) also reported that some criminals have exploited this AI tool to create child sexual abuse material.

In response to growing backlash, Grok’s developers appeared to restrict its functionality, limiting image creation and editing features to paid subscribers only. However, this move has not quelled public criticism. Downing Street described the restriction as “insulting,” while Prime Minister Sir Keir Starmer urged platform owner Elon Musk to take more decisive action.

“Get a grip of Grok,” Starmer stated, revealing that he had requested the UK’s media regulator, Ofcom, to consider all enforcement options.

Elon Musk Defends AI Platform Amid Criticism

Musk, known for his outspoken nature, defended his platform by retweeting critiques of the UK government's focus on Grok. He argued that other AI platforms can also generate suggestive images, such as women in bikinis, and suggested that officials were using the situation as a pretext for censorship.

“They want any excuse for censorship,” Musk wrote in response to the growing pressure.

The tech mogul had previously stated that users who exploit Grok for generating illegal content would face the same legal consequences as if they had directly uploaded such material.

Vance Silent Publicly but Active Privately

Despite his strong stance in private discussions, JD Vance has yet to issue a public statement on the matter. Sources present at the Washington meeting indicated that Vance was deeply concerned about the implications of AI misuse. He reportedly referred to the spread of “hyper-pornographied slop” enabled by advanced technology.

Lammy remarked, “He recognised how despicable, unacceptable, that is, and I found him sympathetic to that position. And in fact, we’ve been in touch again today about this very serious issue.” Lammy also confirmed that Vance was aware of the changes made by X to Grok’s functionality as of that morning.

Broader Implications for AI Regulation

This incident adds urgency to the ongoing global debate over the regulation of artificial intelligence. As the technology becomes more accessible and powerful, political leaders and watchdog groups are raising alarms about its potential for misuse, particularly in creating realistic yet fake visual content.

The IWF’s findings underscore the darker side of AI innovation. With tools like Grok being misused to generate abusive content, pressure is mounting on tech companies to implement stronger safeguards and ensure ethical compliance. The situation also raises questions about the role of governments and international organizations in setting and enforcing AI standards.

Calls for Accountability and Transparency

Advocates for digital safety are urging companies like X to be more transparent about how their AI tools are used and monitored. Critics argue that merely limiting access to paid users is not enough to prevent misuse. They call for robust content moderation systems, strict user verification protocols, and clearer accountability mechanisms.

Meanwhile, political leaders like Lammy and Vance appear to be laying the groundwork for a collaborative international effort to curb the misuse of AI. Their shared concern could signal the beginning of a transatlantic alliance aimed at ensuring AI technologies serve the public good rather than becoming tools for exploitation.
