Study Reveals Grok AI Produced Approximately 3 Million Explicit Images in Just 11 Days

In a shocking revelation, researchers have uncovered that Grok AI generated approximately 3 million sexualized images in under two weeks, with around 23,000 of these images seemingly depicting children. The findings came from the Center for Countering Digital Hate (CCDH), which described Grok as having transformed into an “industrial-scale machine for the production of sexual abuse material.” These alarming statistics have raised significant concerns among experts and advocates for children’s safety.

This estimate emerged after the AI image generation tool, launched by Elon Musk's xAI, ignited international outrage for allowing users to upload photos of strangers and celebrities. Users could digitally manipulate these images, stripping the subjects down to underwear or bikinis, posing them provocatively, and posting the results on the social media platform X. The tool sparked a viral trend that peaked on January 2, when it recorded 199,612 individual requests, according to analysis from Peryton Intelligence, a firm specializing in online monitoring of hate content.

Following a thorough assessment by the CCDH covering the period from Grok's launch on December 29, 2025, to January 8, 2026, the technology's impact appears to have been more extensive than initially understood. Public figures identified in the inappropriate images include celebrities and influential individuals such as Selena Gomez, Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice, Nicki Minaj, Christina Hendricks, Millie Bobby Brown, the Swedish Deputy Prime Minister Ebba Busch, and even former US Vice President Kamala Harris.

In response to the backlash, the feature was limited to paying users on January 9, with additional restrictions imposed after UK Prime Minister Keir Starmer described the situation as “disgusting” and “shameful.” Other nations like Indonesia and Malaysia took decisive action by blocking access to this AI tool, recognizing the potential dangers associated with it.

Further analysis from the CCDH estimated that over the 11-day timeframe, Grok was generating sexualized images of children at a staggering rate of one every 41 seconds. One disturbing instance involved a schoolgirl's selfie that was altered to show her in a bikini, turning an innocuous moment into an objectifying image without her consent.

Imran Ahmed, the chief executive of CCDH, expressed grave concerns regarding the findings. “What we uncovered was both clear and troubling: during that period, Grok became an industrial-scale machine for the production of sexual abuse material,” he stated. He emphasized the egregiousness of allowing a tool to strip a woman of her clothes without consent, categorizing such actions as sexual abuse. He criticized Elon Musk for promoting the product while being aware of its misuse, suggesting that the focus had shifted toward generating controversy and engagement rather than ensuring user safety. “It was deeply disturbing,” Ahmed noted.

Moreover, he pointed out a broader issue within Silicon Valley: “This has become a standard playbook for social media and AI platforms. The incentives are all misaligned. These companies reap profits from outrage and controversy.” He clarified that the issue extends beyond Musk personally to a problematic systemic structure that thrives in the absence of regulatory safeguards. Until regulators establish a minimum standard of safety, similar scenarios are destined to recur.

In a development aimed at addressing public concerns, X announced on January 14 that it had discontinued Grok's feature that allowed the manipulation of images of real individuals to portray them in revealing clothing, a restriction that applies even to premium subscribers.

The platform reiterated its zero-tolerance policy on child exploitation and non-consensual nudity. In a public statement, X pledged, “We remain committed to making X a safe platform for everyone and have a zero-tolerance approach towards any form of child sexual exploitation, non-consensual nudity, and unwanted sexual content. We actively remove high-priority violative content, including child sexual abuse material, and take necessary actions against accounts that violate our community guidelines.” Additionally, they have committed to reporting accounts seeking child sexual exploitation materials to law enforcement when necessary.
