Meta’s AI Is Sending ‘Irrelevant’ Tips to the DOJ, US Child Abuse Investigators Report

Meta’s use of artificial intelligence to moderate its platforms is generating an overwhelming flood of child sexual abuse reports that are largely unactionable, straining resources and impeding crucial investigations, according to officials within the US Internet Crimes Against Children (ICAC) task force.
“We receive a substantial number of tips from Meta that seem to lack substance,” said Benjamin Zwiebel, a special agent with the ICAC task force in New Mexico, during recent testimony in the ongoing trial against Meta. The state’s attorney general contends that the company prioritizes profits over child safety. Meta disputes these claims, pointing to recent safety enhancements on its platforms, including default protection settings for teen accounts. The ICAC task force is a nationwide network that works with the US Department of Justice to investigate and prosecute online child exploitation and abuse cases.
Another ICAC officer, who asked to remain anonymous to discuss sensitive internal matters, said that “Meta is delivering thousands of tips monthly. It’s quite overwhelming due to the sheer volume of reports, yet the quality of these reports leaves a lot to be desired regarding our capacity to take significant action.” The officer added that the total number of cyber tips their department received doubled from 2024 to 2025.
The unviable tips originating from Instagram, Facebook, and WhatsApp sometimes contain non-criminal information, according to Zwiebel and two anonymous ICAC officers. Other tips indicate a potential crime, but the critical images, videos, or text are missing or edited out, which significantly hampers further investigation.
“The volume of unproductive tips from Instagram has surged in recent months, and that’s primarily where we’re observing the absence of necessary information,” another ICAC officer noted. “Without this information, we are unable to advance our investigations. It feels disheartening to realize that a crime has occurred but we cannot identify the perpetrator.”
Asked about Zwiebel’s testimony and the comments from ICAC officers, a Meta spokesperson responded: “We have been assisting law enforcement in prosecuting criminals for many years. The Department of Justice has consistently praised our quick cooperation, which has facilitated arrests, and the National Center for Missing and Exploited Children (NCMEC) has commended our optimized tip reporting process. In 2024, we handled over 9,000 emergency requests from US authorities, resolving them in an average of 67 minutes—much faster for urgent cases concerning child safety or suicide.”
Meta also noted that Zwiebel endorsed its teen accounts feature during his testimony, describing it as the only viable option given that teens are unlikely to abstain from social media.
Raúl Torrez, the New Mexico attorney general leading the case against Meta, acknowledged the company’s cooperation in providing leads related to child abuse: “I want to commend some social media platforms, including Meta, for contributing to reports of images to NCMEC.”
Court filings released on Friday revealed that Meta executives had raised internal concerns about their ability to manage child sexual abuse and notify law enforcement as early as 2019. At the time, the company was preparing to roll out end-to-end encryption in Facebook Messenger, which uses cryptography to make messages readable only by the sender and intended recipient.
Filings raise new questions
“We are about to engage in an irresponsible act as a company. This is extraordinarily reckless,” said Monika Bickert, Meta’s head of content policy.
Bickert warned that if Messenger were encrypted, “there would be no means to detect strategies for terror attack planning or child exploitation,” which could severely hamper collaboration with law enforcement. She also said that Meta’s public statements about its ability to enforce safety operations were “grossly misleading,” per the internal documents.
Another document shared insights from Meta employees, estimating that encrypting Messenger would inhibit their capability to share data proactively with law enforcement across 600 child exploitation cases, 1,454 sextortion instances, 152 terrorism-related cases, and 9 threats of school shootings.
Andy Stone, a Meta spokesman, responded to these claims by stating, “The concerns raised back in 2019 are why we have since developed various new safety features geared towards detecting and preventing abuse, all tailored to function within encrypted chats.”
Child safety organizations have criticized the rollout of Messenger encryption, which was eventually implemented in 2023.
Reporting child abuse en masse
Under current laws, social media platforms based in the US are mandated to report any identified child sexual abuse material (CSAM) present on their services to the National Center for Missing & Exploited Children (NCMEC). NCMEC acts as a national hub for these reports, forwarding them to the appropriate law enforcement entities nationwide and internationally. However, it lacks the authority to filter out any non-viable tips before forwarding them to law enforcement.
Meta stands as the largest contributor of reports to NCMEC. In its 2024 data report, NCMEC recorded that Meta submitted 13.8 million reports across Facebook, Instagram, and WhatsApp, out of a total of 20.5 million tips.
According to NCMEC, over 1 million CyberTipline reports in 2024 were associated with specific US states, making them accessible to ICAC task forces nationwide, along with other federal, state, and local law enforcement agencies for further action.
Meta and other social media platforms use AI to identify and report suspicious content, while human moderators review a subset of flagged materials before forwarding them for law enforcement action. The Guardian has previously reported that AI-generated tips that haven’t been manually reviewed by a social media employee frequently cannot be accessed by law enforcement without a warrant because of Fourth Amendment protections. This additional legal requirement also prolongs the investigation of potential criminal activity, according to attorneys involved in these cases.
A Meta representative noted, “It is regrettable that judicial decisions have increased the burden on law enforcement by necessitating search warrants for accessing identical copies of content that we’ve already evaluated and reported. Our image-matching system identifies copies of known child exploitation at an unprecedented scale that is infeasible to perform manually, and we strive to recognize new child exploitation materials through our technology, community reports, and efforts from our dedicated child safety teams.”
Legislative change spurs avalanche of tips
Following the introduction of the Report Act (Revising Existing Procedures On Reporting via Technology) in November 2024, online service providers are now obligated to enhance and expand their reporting requirements. This includes notifying NCMEC’s CyberTipline of not only child sexual abuse material but also threats of imminent abuse, child sex trafficking, and related exploitation. The act also compels providers to preserve evidence for longer durations and imposes stricter penalties for knowingly failing to comply.
Since the law’s enactment, the number of unviable tips supplied by Meta has risen significantly. Two ICAC officers suggested the increase may stem from the company’s efforts to ensure compliance with the law. Many of the reported incidents may not constitute crimes at all, such as discussions among adolescent girls about their favorite celebrities.
“Based on my experience and training, it seems they are being submitted via AI, given the typical errors that an AI would make that a human reviewer would easily catch,” Zwiebel noted in court.
By contrast, Zwiebel said his department has received markedly fewer tips concerning legitimate cases of child sexual abuse material (CSAM) distribution from Meta than in previous years.
Every tip received by an ICAC division must undergo a review process, and the rising number of irrelevant tips is eating into time and resources that could be directed toward investigating actual child abuse cases, according to the two officers.
“It’s demoralizing. We are overwhelmed with tips and yearn to get out and do meaningful work,” lamented one ICAC officer. “We lack sufficient personnel to manage this influx. It’s simply unmanageable given the continuous stream of incoming reports.”