UK Technology Companies and Child Safety Agencies to Examine AI's Ability to Generate Abuse Content
Technology companies and child protection organizations will receive authority to assess whether AI tools can produce child abuse images under new UK laws.
Substantial Increase in AI-Generated Illegal Material
The announcement came as a child protection watchdog published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the government will permit designated AI companies and child protection groups to inspect AI models – the underlying technology for chatbots and image generators – and ensure they have adequate safeguards to stop them from producing images of child sexual abuse.
"This is ultimately about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the danger in AI models early."
Addressing Regulatory Obstacles
The changes address a legal gap: because it is illegal to create or possess CSAM, AI developers and other parties could not generate such images even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
The legislation is aimed at averting that problem by helping to halt the creation of such material at source.
Legislative Structure
The amendments are being introduced by the government as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, creating or distributing AI models developed to create child sexual abuse material.
Real-World Impact
This week, the minister toured the London base of a children's helpline and heard a simulated call to counsellors involving an account of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people experiencing blackmail online, it is a cause of intense anger in me and rightful concern among families," he stated.
Alarming Statistics
A leading online safety foundation reported that cases of AI-generated exploitation content – such as online pages that may contain numerous files – had significantly increased so far this year.
Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are released," commented the head of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving offenders the ability to make potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies survivors' suffering, and makes children, especially girls, more vulnerable on and offline."
Counseling Session Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in those sessions include:
- Using AI to evaluate weight, physique and appearance
- AI assistants dissuading children from talking to trusted adults about abuse
- Facing harassment online with AI-generated content
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including turning to AI chatbots for support and using AI therapy apps.