Welcome to Crime and Justice News

Tip Line Overwhelmed By AI-Generated Child Abuse Material

A surge in child sexual abuse material generated by AI is overwhelming authorities hindered by outdated technology and laws, says a report released Monday by Stanford University’s Internet Observatory. New AI technologies have made it easier for criminals to create explicit images of children. Stanford researchers are cautioning that the National Center for Missing and Exploited Children doesn’t have the resources to fight the rising threat. The organization’s CyberTipline, created in 1998, is the federal clearinghouse for all reports of online child sexual abuse material (CSAM) and is used by law enforcement to investigate crimes. Many of the tips received are incomplete or riddled with inaccuracies, and the tip line’s small staff has struggled to keep up with the volume, the New York Times reports. “Almost certainly in the years to come, the CyberTipline will be flooded with highly realistic-looking AI content, which is going to make it even harder for law enforcement to identify real children who need to be rescued,” said Shelby Grossman, one of the report’s authors.

The National Center for Missing and Exploited Children is combating a new wave of AI-generated sexually exploitative content that legislators and law enforcement officials are still working to define. Already, amid an epidemic of deepfake AI-generated nudes circulating in schools, some lawmakers are taking action to ensure such content is deemed illegal. AI-generated CSAM is illegal if it depicts real children or was created using images of actual children, researchers say; purely synthetic images that do not involve real children could be protected as free speech, one of the report’s authors said. The center, which fields tips from individuals and from companies such as Facebook and Google, has argued for legislation to increase its funding and give it access to more technology. The Stanford researchers found that the organization needs to change how its tip line works so that law enforcement can determine which reports involve AI-generated content, and to ensure that companies reporting potential abuse material on their platforms fill out the forms completely.

A daily report co-sponsored by Arizona State University, Criminal Justice Journalists, and the National Criminal Justice Association