Child safety experts are growing increasingly powerless to stop thousands of 'AI-generated child sex images' from being easily and rapidly created, then shared across dark web pedophile forums, The Washington Post reported.

This 'explosion' of 'disturbingly' realistic images could help normalize child sexual exploitation, lure more children into harm's way, and make it harder for law enforcement to find actual children being harmed, experts told the Post.

Finding victims depicted in child sexual abuse materials is already a 'needle in a haystack problem,' Rebecca Portnoff, the director of data science at the nonprofit child-safety group Thorn, told the Post. Now, law enforcement will be further delayed in investigations by efforts to determine whether materials are real or not.

Harmful AI materials can also re-victimize anyone whose images of past abuse are used to train AI models to generate fake images. “Children’s images, including the content of known victims, are being repurposed for this really evil output,” Portnoff said.

Normally, content of known victims can be blocked by child safety tools that hash reported images and detect when they are reshared, so that uploads can be blocked on online platforms. But that technology only works to detect previously reported images, not newly AI-generated images.
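To make that limitation concrete, here is a minimal, purely illustrative sketch of hash-based matching; the `KNOWN_HASHES` list and `should_block` helper are made up for this example, and real deployments rely on perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) rather than cryptographic ones so that resized or re-encoded copies still match.

```python
import hashlib

# Hypothetical blocklist of digests for previously reported images.
# Production systems use perceptual hashes (e.g., PhotoDNA, PDQ) instead of
# SHA-256 so that re-encoded or resized copies of known images still match.
KNOWN_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder digest
}

def should_block(upload_bytes: bytes) -> bool:
    """Return True if the upload matches a previously reported image."""
    digest = hashlib.sha256(upload_bytes).hexdigest()
    return digest in KNOWN_HASHES

# A reshared copy of a reported image can match an entry in the list and be
# blocked, but a newly generated image has never been reported, so its hash
# appears on no list and this kind of matching cannot catch it.
print(should_block(b"freshly generated image bytes"))  # False: never reported
```

The sketch shows why the approach is reactive by design: it can only recognize content that has already been reported and hashed, which is exactly the gap that newly generated images exploit.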