AI used to generate thousands more child abuse videos in 2025, campaigners warn

Paedophiles and other criminals used AI to generate thousands more child sexual abuse videos last year, contributing to record levels of the harrowing material found online, campaigners have warned.

The Internet Watch Foundation (IWF) said its analysts found 3,440 AI-generated videos of child sexual abuse in 2025, up from 13 in 2024.

In total, IWF staff dealt with 312,030 confirmed reports of abuse images found online in 2025, up from 291,730 the previous year.

Its research suggests that of the 3,440 AI-generated videos, 2,230 fell into category A, the most extreme category under UK law, and 1,020 into category B, the second most extreme.

Kerry Smith, IWF chief executive, said: “When images and videos of children suffering sexual abuse are distributed online, it makes everyone, especially those children, less safe.

“Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.

“The frightening rise in extreme category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous.

“Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation and further endanger children both on and offline.

“Now governments around the world must ensure AI companies embed safety by design principles from the very beginning. It is unacceptable that technology is released which allows criminals to create this content.”

The research comes as X announced limits on its AI chatbot Grok’s ability to manipulate images following an outcry over reports of users being able to instruct it to sexualise images of women and children.

The company said earlier this week that it would prevent Grok “editing images of people in revealing clothes” and block users from generating similar images of real people in countries where it is illegal.

Technology Secretary Liz Kendall said she still expects the regulator Ofcom to “fully and robustly” establish the facts. Ofcom welcomed the new restrictions but said its investigation would continue as it seeks “answers into what went wrong and what’s being done to fix it”.

The IWF has previously called for all nudifying software to be banned, argued that AI companies must make their tools safe before releasing them, and insisted the Government should make this mandatory.

Children’s charity the NSPCC said the IWF’s findings were “both deeply alarming and sadly predictable”.

Its chief executive, Chris Sherwood, said: “Offenders are using these tools to create extreme material at a scale we’ve never faced before, with children paying the price.

“Tech companies cannot keep releasing AI products without building in vital protections. They know the risks and they know the harms that can be caused. It is up to them to ensure their products can never be used to create indecent images of children.

“The UK Government and Ofcom must now step in and ensure tech companies are held to account.

“We are calling on Ofcom to use every tool available to them through the Online Safety Act and for Government to introduce a statutory duty of care to ensure generative AI services are required to build children’s safety into the design of their products and prevent these horrific crimes.”

Ms Kendall said it was “utterly abhorrent that AI is being used to target women and girls”, and insisted the Government “will not tolerate this technology being weaponised to cause harm, which is why I have accelerated our action to bring into force a ban on the creation of non-consensual AI-generated intimate images”.

She added: “AI should be a force for progress, not abuse, and we are determined to support its responsible use to drive growth, improve lives and deliver real benefits, while taking action where it is misused.

“That is also why we have introduced a world-leading offence targeting AI models trained or adapted to generate child sexual abuse material. Possessing, supplying or modifying these models will soon be a crime.”

The Lucy Faithfull Foundation, which works to help offenders stop viewing images of child abuse, said it has also seen the number of people using AI to view and make abuse images double in the past year.

Young people who are worried that indecent images of them have been shared online can use the free Report Remove tool at childline.org.uk/remove.

Minister for Safeguarding Jess Phillips said: “This surge in AI-generated child abuse videos is horrifying – this Government will not sit back and let predators generate this repulsive content.”

She added: “There can be no more excuses from technology companies. Take action now or we will force you to.”
