
In brief
- UNICEF’s research estimates 1.2 million children had their images manipulated into sexual deepfakes last year across 11 surveyed countries.
- Regulators have stepped up action against AI platforms, with probes, bans, and criminal investigations tied to alleged illegal content generation.
- The agency urged tighter laws and “safety-by-design” rules for AI developers, including mandatory child-rights impact checks.
UNICEF issued an urgent call Wednesday for governments to criminalize AI-generated child sexual abuse material, citing alarming evidence that at least 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year.
The figures, revealed in Disrupting Harm Phase 2, a research project led by UNICEF’s Office of Strategy and Evidence Innocenti, ECPAT International, and INTERPOL, show that in some nations the total represents one in 25 children, the equivalent of one child in a typical classroom, according to a Wednesday statement and accompanying issue brief.
The research, based on a nationally representative household survey of approximately 11,000 children across 11 countries, highlights how perpetrators can now create realistic sexual images of a child without their involvement or awareness.
In some study countries, up to two-thirds of children surveyed said they worry AI could be used to create fake sexual images or videos of them, though levels of concern vary widely between countries, according to the data.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM),” UNICEF said. “Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The call gained urgency after French authorities raided X’s Paris offices on Tuesday as part of a criminal investigation into alleged child pornography linked to the platform’s AI chatbot Grok, with prosecutors summoning Elon Musk and several executives for questioning.
A Center for Countering Digital Hate report released last month estimated Grok produced 23,338 sexualized images of children over an 11-day period between December 29 and January 9.
The issue brief released alongside the statement notes these developments mark “a profound escalation of the risks children face in the digital environment,” where a child can have their right to protection violated “without ever sending a message or even knowing it has happened.”
The UK’s Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, about a third of which were confirmed as criminal. South Korean authorities, meanwhile, reported a tenfold surge in AI and deepfake-linked sexual offenses between 2022 and 2024, with most suspects identified as teenagers.
UNICEF called on all governments to expand definitions of child sexual abuse material to include AI-generated content and to criminalize its creation, procurement, possession, and distribution.
UNICEF also demanded that AI developers implement safety-by-design approaches and that digital companies prevent the circulation of such material.
The brief calls for states to require companies to conduct child rights due diligence, particularly child rights impact assessments, and for every actor in the AI value chain to embed safety measures, including pre-release safety testing for open-source models.
“The harm from deepfake abuse is real and urgent,” UNICEF warned. “Children cannot wait for the law to catch up.”
The European Commission launched a formal investigation last month into whether X violated EU digital rules by failing to prevent Grok from generating illegal content, while the Philippines, Indonesia, and Malaysia have banned Grok, and regulators in the UK and Australia have also opened investigations.