AI Fuels Surge in Child Exploitation: Lawmakers and Law Enforcement React

The Growing Threat of AI-Generated Child Sexual Abuse

Artificial intelligence is rapidly transforming the landscape of child sexual exploitation, making it easier for offenders to create and distribute harmful content. Law enforcement agencies and lawmakers are struggling to keep pace with the technology as perpetrators exploit loopholes in platform safeguards and leverage open-source AI models to generate explicit images of both real and fictitious children.

According to experts and officials, offenders are using a range of tools and platforms to produce child sexual abuse material (CSAM) of increasing realism. Between January and September 2025, the National Center for Missing & Exploited Children (NCMEC) reported receiving over a million tips related to AI-generated content through its CyberTipline. Fallon McNulty, executive director of NCMEC’s Exploited Children Division, emphasized, “The almost indistinguishable nature of the content that is being generated makes it extremely difficult for victim identification efforts.”

Challenges for Law Enforcement and Prosecutors

AI-generated CSAM takes multiple forms. In some cases, offenders take photographs of children from public events and use AI to transform them into explicit material; in others, they create entirely fabricated images depicting fictitious minors. The results are realistic enough that investigators and prosecutors increasingly struggle to distinguish real content from synthetic.

Michael Prado, deputy assistant director of Homeland Security Investigations’ Cyber Crimes Center, revealed a staggering increase in cases: reports of child exploitation involving generative AI rose by over 600% in the first half of 2025 compared to the previous two years combined. Prado noted, “Collectors of this type of material, sometimes they don’t really differentiate. They’re just looking to increase their collections.”

Despite the surge in reports, only a small fraction result in criminal prosecutions. NBC News identified 36 state and federal cases involving AI-generated CSAM across 22 states over the past three years, the majority of them still ongoing. Of the cases that have concluded, however, all have ended in guilty verdicts, underscoring the seriousness with which courts view these offenses.

Many offenders turn to a constellation of lesser-known AI platforms—such as Bashable.art, undress.ai, and Faceswapper.AI—that offer minimal moderation and are sometimes designed explicitly for adult content creation. In one case, an Idaho man, previously convicted of child sexual abuse, allegedly used Bashable.art’s “unrestricted mode” to create over a thousand explicit images of children. While the platform claims to monitor and report illicit activity, cases like these highlight the difficulty of enforcing standards across the digital landscape.

Other platforms, like undress.ai and DeepSukebe, specialize in creating deepfake nudes, sometimes using images of real minors. In one federal case, a man used AI tools to alter innocent photos from school events into explicit images, leading to a 40-year prison sentence. Open-source models such as Stable Diffusion have been implicated in additional cases, with add-ons specifically designed to generate illegal content. Stability AI, the company behind Stable Diffusion, maintains that it prohibits unlawful use of its tools and is committed to preventing misuse.

Distinguishing Real from AI-Generated Content

The increasing realism of AI-generated CSAM has made it nearly impossible for even experts to determine whether an image depicts a real victim or is completely synthetic. This distinction is critical in court, as charges may differ depending on the nature of the material. In ongoing cases, some defendants have faced obscenity charges instead of CSAM charges when the imagery did not involve real children. However, prosecutors argue that the harm is comparable, regardless of the image’s source.

Kathryn Rifenbark, director of NCMEC’s CyberTipline, explained, “To the victim, the harm is going to be the same. They’re still going to have that impact of that nude picture, whether AI or not, distributed of them online.”

Legislative Efforts and the Path Forward

As the threat grows, lawmakers are racing to implement new regulations. According to the watchdog group Public Citizen, 45 states have enacted laws addressing AI-generated intimate deepfakes, many with provisions specific to minors. Approaches vary significantly, however: some states criminalize the creation or distribution of nonconsensual deepfakes, while others impose obligations directly on AI companies and platforms.

Federal action is also underway. In May 2025, the TAKE IT DOWN Act made the nonconsensual publication of intimate images, including AI-generated deepfakes, a federal crime and required platforms to remove such content within 48 hours of a victim’s request. Later that year, the Senate unanimously passed the ENFORCE Act, which aims to prosecute creators and distributors of AI-generated CSAM on par with other offenders. That legislation is currently pending in the House of Representatives.

Ilana Beller, an organizing manager at Public Citizen, stressed the importance of state-level laws for handling the sheer volume of cases. “The number of cases related to nonconsensual, intimate deepfakes would just be too much for only federal prosecutors,” she said.

Despite these efforts, the rapid evolution of technology continues to challenge lawmakers and law enforcement. Prado summarized the ongoing struggle: “It’s hard to keep up with the rapidly evolving nature of technology and generative AI.” As AI capabilities expand, the need for proactive and adaptable legal frameworks has never been greater.
