UK Tech Firms and Child Safety Agencies to Examine AI's Ability to Create Exploitation Content
Tech firms and child protection organizations will be granted permission to evaluate whether AI systems can generate child abuse material under new British laws.
Significant Increase in AI-Generated Harmful Material
The declaration came as a safety monitoring body revealed that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the government will allow approved AI developers and child safety groups to examine AI systems – the foundational technology for conversational AI and visual AI tools – and verify they have adequate safeguards to stop them from creating images of child sexual abuse.
"This is ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now identify risks in AI models promptly."
Addressing Regulatory Challenges
The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties could not generate such images even as part of a testing regime. Until now, officials could only act after AI-generated CSAM had been published online.
This legislation is designed to avert that problem by enabling the production of such material to be halted at its source.
Legislative Framework
The government is adding the amendments to the criminal justice legislation, which also establishes a prohibition on owning, producing or sharing AI models designed to create child sexual abuse material.
Real-World Consequences
This week, the official toured the London base of a children's helpline and listened to a mock-up of a call to advisers about a report of AI-based exploitation. The call depicted an adolescent requesting help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about children facing blackmail online, it is a source of extreme anger in me and rightful concern amongst parents," he stated.
Alarming Data
A prominent internet monitoring organization stated that reports of AI-generated exploitation content – each of which may take the form of a web page containing multiple files – had significantly increased so far this year.
- Instances of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086
- Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI products are secure before they are released," commented the chief executive of the internet monitoring foundation.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving offenders the capability to make potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally commodifies victims' trauma, and renders young people, especially girls, less safe both online and offline."
Support Session Data
Childline also published details of counselling interactions where AI has been referenced. AI-related risks mentioned in the conversations include:
- Using AI to evaluate weight, body and looks
- Chatbots discouraging young people from talking to trusted adults about harm
- Facing harassment online with AI-generated material
- Digital extortion using AI-faked pictures
Between April and September this year, Childline delivered 367 counselling interactions in which AI, chatbots and associated topics were mentioned – significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including young people using chatbots for support and AI therapy applications.