The Hague — A global campaign has led to at least 25 arrests over child sexual abuse content generated by artificial intelligence and distributed online, Europol said Friday.
“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material, making it exceptionally challenging for investigators due to the lack of national legislation addressing these crimes,” the Hague-based European police agency said in a statement.
The majority of the arrests were made Wednesday during the worldwide operation, which was led by the Danish police and also involved law enforcement agencies from the EU, Australia, Britain, Canada and New Zealand. U.S. law enforcement agencies did not take part in the operation, according to Europol.
It followed the arrest last November of the main suspect in the case, a Danish national who ran an online platform where he distributed the AI material he produced.
“After a symbolic online payment, users from around the world were able to obtain a password to access the platform and watch children being abused,” Europol said.
Online child sexual exploitation remains one of the most threatening manifestations of cybercrime in the European Union, the agency warned.
It “continues to be one of the top priorities for law enforcement agencies, which are dealing with an ever-growing volume of illegal content,” it said, adding that more arrests were expected as the investigation continued.
While Europol said Operation Cumberland targeted a platform and people sharing content fully created using AI, there has also been a worrying proliferation of AI-manipulated “deepfake” imagery online, which often uses images of real people, including children, and can have devastating impacts on their lives.
According to a report by CBS News’ Jim Axelrod in December that focused on one girl who had been targeted for such abuse by a classmate, there were more than 21,000 deepfake pornographic pictures or videos online during 2023, an increase of more than 460% over the year prior. The manipulated content has proliferated on the internet as lawmakers in the U.S. and elsewhere race to catch up with new legislation to address the problem.
Just weeks ago the Senate passed a bipartisan bill called the “TAKE IT DOWN Act” that, if signed into law, would criminalize the “publication of non-consensual intimate imagery (NCII), including AI-generated NCII (or ‘deepfake revenge pornography’), and requires social media and similar websites to implement procedures to remove such content within 48 hours of notice from a victim,” according to a description on the U.S. Senate website.
As it stands, some social media platforms have appeared unable or unwilling to crack down on the spread of sexualized, AI-generated deepfake content, including fake images depicting celebrities. In mid-February, Facebook and Instagram owner Meta said it had removed over a dozen fraudulent sexualized images of famous female actors and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on Facebook.
“This is an industry-wide challenge, and we’re continually working to improve our detection and enforcement technology,” Meta spokesperson Erin Logan told CBS News in a statement sent by email at the time.