UK Technology Companies and Child Safety Officials to Examine AI's Capability to Create Exploitation Content
Tech firms and child protection agencies will be granted permission to evaluate whether AI systems can produce child abuse material under recently introduced British legislation.
Significant Rise in AI-Generated Harmful Content
The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will permit designated AI companies and child safety organizations to inspect AI models – the foundational technology for chatbots and image generators – and verify they have sufficient protective measures to prevent them from creating depictions of child sexual abuse.
"This is fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the danger in AI models early."
Addressing Legal Obstacles
The amendments have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and others could not create such images as part of a testing regime. Until now, officials could act only after AI-generated CSAM had been uploaded online.
This law is designed to prevent that issue by helping to stop the creation of those materials at their origin.
Legislative Structure
The amendments are being added by the government as revisions to the crime and policing bill, which is also establishing a ban on possessing, producing or sharing AI systems developed to create exploitative content.
Practical Consequences
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock call to advisors involving an account of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself, constructed using AI.
"When I hear about children facing extortion online, it causes intense frustration in me and rightful concern amongst families," he said.
Alarming Statistics
A prominent internet monitoring foundation stated that instances of AI-generated abuse material – such as online pages that may contain multiple files – had significantly increased so far this year.
Instances of the most severe category of material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a crucial step to guarantee AI products are safe before they are launched," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few clicks, giving criminals the capability to create potentially limitless amounts of sophisticated, lifelike exploitative content," she added. "Material which further commodifies survivors' suffering, and renders children, particularly girls, more vulnerable both online and offline."
Counseling Session Information
Childline also published details of support sessions where AI had been mentioned. AI-related risks raised in the conversations included:
- Using AI to rate body size and looks
- AI assistants dissuading young people from consulting trusted guardians about abuse
- Facing harassment online with AI-generated content
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, conversational AI and associated terms were mentioned – significantly more than in the same period last year.
Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.