British Tech Companies and Child Safety Officials to Examine AI's Ability to Generate Abuse Content
Technology companies and child safety organizations will be granted authority to assess whether artificial intelligence tools can produce child exploitation material under recently introduced UK legislation.
Substantial Rise in AI-Generated Harmful Content
The announcement came as a child protection watchdog revealed that reports of AI-generated CSAM have risen sharply in the last twelve months, from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, authorities will permit designated AI companies and child safety organisations to examine AI models – the foundational technology behind chatbots and image generators – and verify they have sufficient safeguards to prevent them from creating child exploitation imagery.
The measures are "ultimately about preventing exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI systems early."
Addressing Regulatory Challenges
The changes were needed because producing and possessing CSAM is illegal, which meant AI developers and others could not create such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM had been uploaded online before they could act.
This law is aimed at averting that issue by helping to halt the production of those images at their origin.
Legal Structure
The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI systems designed to create exploitative content.
Real-World Consequences
This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself created using AI.
"When I hear about children experiencing extortion online, it fills me with extreme anger and gives parents justified concern," he said.
Concerning Statistics
A prominent internet monitoring foundation stated that instances of AI-generated exploitation material – such as web pages, each of which may contain multiple images – had risen significantly so far this year.
Instances of the most severe content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of illegal AI depictions in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are launched," commented the chief executive of the internet monitoring foundation.
"AI tools have made it so victims can be victimised all over again with just a few clicks, giving criminals the capability to make potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further exploits victims' suffering, and makes young people, especially girls, less safe both on and offline."
Support Interaction Information
Childline also released information of counselling interactions where AI has been mentioned. AI-related harms mentioned in the sessions include:
- Using AI to evaluate weight, physique and appearance
- AI assistants discouraging young people from talking to trusted adults about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were discussed – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.