Paedophiles are using artificial intelligence (AI) to create images of celebrities as children.
The Internet Watch Foundation (IWF) said images of a well-known female singer reimagined as a child are being shared by predators.
The charity says that on one dark web forum, images of child actors are also being manipulated to make them sexual.
Hundreds of images of real victims of child sexual abuse are also now being created using bespoke image generators.
The details come from the IWF’s latest report into the growing problem, as it tries to raise awareness about the dangers of paedophiles using AI systems that can create images from simple text instructions.
Since these powerful image generation systems entered the public domain, researchers have warned that they have the potential to be misused to generate illicit images.
In May, Home Secretary Suella Braverman and US Homeland Security Secretary Alejandro Mayorkas issued a joint statement committing to tackle the “alarming rise in despicable AI-generated images of children being sexually exploited by paedophiles”.
The IWF’s report details how researchers spent a month logging AI imagery on a single darknet child abuse website and found nearly 3,000 synthetic images that would be illegal under UK law.
Analysts said there is a new trend of predators taking single photos of well-known child abuse victims and recreating many more of them in different sexual abuse settings.
One folder they found contained 501 images of a real-world victim who was about 9-10 years old when she was subjected to sexual abuse. In the folder, predators also shared a fine-tuned AI model file to allow others to generate more images of her.
The IWF says some of the imagery, including that of celebrities as children, is extremely realistic and would be indistinguishable from genuine photographs to untrained eyes.
Analysts saw images of mostly female singers and movie stars that had been de-aged using the imaging software to make them look like children.
The report did not identify which celebrities had been targeted.
The charity said it was sharing the research to get the issue put onto the agenda at the UK government’s AI Summit next week at Bletchley Park.
In one month, the IWF investigated 11,108 AI images which had been shared on a dark web child abuse forum.
- Of these, 2,978 were confirmed as images which break UK law – meaning they depicted child sexual abuse
- Nearly one in five of these images (564) were classified as Category A, the most serious kind of imagery
- Nearly half (1,372) of these images depicted primary school-aged children (seven to 10 years old)
- As well as this, 143 images depicted children aged three to six, while two images depicted babies (under two years old).
In June, the IWF warned that predators were starting to explore the use of AI to make depraved images of children, but now the IWF says the fears are a reality.
“Our worst nightmares have come true,” said Susie Hargreaves OBE, the chief executive of the IWF.
“Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point.”
The IWF report reiterates the real-world harm of AI images. Although children are not harmed directly in the making of the content, the images normalise predatory behaviour and can waste police resources as officers investigate children who do not exist.
In some scenarios, new forms of offence are also being explored, creating new complexities for law enforcement agencies.
For example, the IWF found hundreds of images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to put them in Category A sexual abuse scenes.
The reality is that they are now victims of Category A offences that never happened.