AI-generated child nudity prompts call for app ban in UK

The UK children’s commissioner is calling for a government ban on AI applications capable of creating explicit fake images of children, highlighting the growing threat of deepfake technology to young people’s safety and privacy. The push comes amid mounting concern about AI tools that can digitally remove clothing from photos or generate sexually explicit deepfakes, which disproportionately target girls and young women — many of whom are now modifying their online behavior to avoid victimization.

The big picture: Dame Rachel de Souza, England’s children’s commissioner, is demanding immediate government action against AI “nudification” apps that generate sexually explicit images of children.

  • These applications can create deepfakes—AI-generated images that appear real—and are having “extreme real-world consequences” while going largely unchecked.
  • The technology predominantly targets female bodies, with many apps apparently designed specifically to work only on images of girls and women.

Why this matters: The technology is fundamentally changing children’s online behaviors and creating new forms of digital exploitation.

  • Girls are now actively limiting their social media presence and avoiding posting images online to reduce their risk of being targeted.
  • Children fear that “a stranger, a classmate, or even a friend” could use these easily accessible technologies against them.

What they’re saying: The children’s commissioner emphasized the rapidly evolving nature of the threat in her report published Monday.

  • “The evolution of these tools is happening at such scale and speed that it can be overwhelming to try and get a grip on the danger they present,” Dame Rachel said.
  • Paul Whiteman of school leaders’ union NAHT added that members share these concerns, stating: “This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it.”

The response: The UK government maintains that child sexual abuse material is already illegal, with plans to strengthen legislation.

  • Current laws under the Online Safety Act make sharing or threatening to share explicit deepfake images illegal in England and Wales.
  • In February, the government announced plans for additional offenses specifically targeting the possession, creation, or distribution of AI tools designed to create child sexual abuse material.

Key recommendations: Dame Rachel’s report calls for comprehensive government action beyond a simple ban on these applications.

  • The government should impose legal obligations on developers of generative AI tools to identify, address, and mitigate risks their products pose to children.
  • A systematic process should be established to remove sexually explicit deepfake images of children from the internet.
  • Deepfake sexual abuse should be officially recognized as a form of violence against women and girls.
