Millions Using AI Tools to Create Deepfake Nudes on Telegram Amidst Rising Global Digital Abuse

In recent years, an alarming trend has emerged whereby millions of individuals worldwide are utilizing the secure messaging app Telegram to create and disseminate deepfake nudes. This worrying revelation highlights the dark side of advanced AI tools that have dramatically transformed the landscape of online abuse, particularly against women.
An investigation by The Guardian has uncovered at least 150 active Telegram channels, which function as large encrypted group chats, frequented by users from diverse countries, including the UK, Brazil, China, Nigeria, Russia, and India. Certain channels offer paid services for creating "nudified" photos or videos; users simply upload a photo of any woman, and the AI generates a video featuring sexual acts involving that individual. Additionally, many channels provide a continuous feed of images of celebrities, social media influencers, and everyday women rendered nude or performing sexual acts through AI technology. These channels have also become platforms where users exchange tips and resources regarding various deepfake tools.
While Telegram has long had channels dedicated to the distribution of non-consensual nudes, the recent proliferation of AI technologies means that anyone may quickly become the target of graphic sexual content accessible to millions. For instance, a Russian-language Telegram channel promoting deepfake "celebrity leaks" boasted a bot that could produce nudified images: "a neural network that doesn't know the word 'no'."
"Choose positions, shapes, and locations. Do everything with her that you can't do in real life," the advertisement read, emphasizing the disturbing nature of these tools and their implications.
On another channel, which caters to a Chinese-speaking audience and has nearly 25,000 subscribers, users shared videos of personal acquaintances created through AI to appear as if they are stripping. Furthermore, numerous Telegram channels specifically targeting Nigerian users promote deepfake images along with hundreds of stolen private photos.
Telegram allows users to establish groups or channels capable of sharing content across unlimited contacts, fostering a sense of security in user interactions. According to the platform's terms of service, the posting of "illegal pornographic content" on "publicly viewable" channels and bots is strictly prohibited, and users are barred from engaging in activities typically recognized as illegal in most jurisdictions.
Yet, an analysis from Telemetr.io, an independent analytics service, confirms that while Telegram has indeed shut down numerous nudification channels, countless others remain operational. Telegram has communicated to The Guardian that deepfake pornography and its creation tools are expressly forbidden under its terms. They added that "such content is routinely removed when discovered," and that moderators equipped with advanced AI tools actively oversee public sections of the platform to address violations.
In fact, according to Telegram, the platform removed over 952,000 pieces of inappropriate material in 2025 alone.
The use of AI to create sexualized deepfakes and abuse women has recently surged into public consciousness, particularly following an incident involving Grok, a generative AI chatbot associated with Elon Musk's platform X. Users prompted Grok to generate numerous images of real women in bikinis or scant clothing, all without the women's consent.
The outrage from this incident prompted Musk's AI company, xAI, to announce that it would stop allowing Grok to produce alterations of real individuals in bikinis. Concurrently, the UK media regulator Ofcom initiated an investigation into X.
While some measures are being taken, the existence of a plethora of platforms, forums, and applications, such as Telegram, affords individuals ease of access to exploitative, non-consensual materials, allowing them to create and share this content on demand, often without the knowledge of the victims involved. A report from the Tech Transparency Project unveiled that dozens of nudification applications are accessible on both the Google Play Store and Apple's App Store, amassing an astonishing total of 705 million downloads collectively.
In response to the findings, an Apple representative announced that 28 out of the 47 nudification apps identified by the Tech Transparency Project had been removed from their platform, whereas a spokesperson from Google stated that "most of the apps" had been suspended while an investigation continued.
However, Telegram channels remain a significant part of a more extensive online ecosystem that perpetuates the creation and sharing of non-consensual intimate images, as highlighted by Anne Craanen, a researcher focused on gender-based violence at the Institute for Strategic Dialogue. These channels allow users to circumvent the restrictions imposed by larger platforms like Google and facilitate the exchange of tips on bypassing safeguards against generating harmful content. "The dissemination and celebration of this material is another aspect," Craanen emphasizes, noting its correlation with the misogynistic motivations behind such behavior. "Circulating it among other men, boasting about it, is a clear indication of attempting to penalize or silence women."
In 2022, Meta took steps to shut down an Italian Facebook group in which men had been sharing intimate images of their partners alongside other unsuspecting women. Prior to its removal, the group known as Mia Moglie (translated as "my wife") boasted around 32,000 members.
Despite such actions, an investigative newsletter found that Meta had failed to contain the persistent flow of advertisements for AI nudification tools on its platforms. At least 4,431 nudifier advertisements were identified across Meta's platforms since December 4 of the previous year, although some were classified as scams. A Meta representative stated that all advertisements violating its policies are removed promptly.
The rise of AI technologies has intensified incidents of online violence directed toward women, enabling virtually anyone to create and disseminate abusive images at the click of a button. Alarmingly, in many regions, particularly in the global south, few legal protections exist for victims seeking justice. An analysis by the World Bank suggests that less than 40% of countries have laws safeguarding women and girls from forms of cyber-harassment or cyberstalking. Furthermore, a report from the UN indicates that 1.8 billion women and girls remain unprotected against online bullying and other manifestations of technology-facilitated abuse.
Campaigners have pointed out that insufficient regulation is just one contributing factor to the heightened vulnerability of women and girls in low-income nations. Compounding issues such as limited digital literacy and economic hardships significantly exacerbate these risks. Ugochi Ihe, an associate at TechHer, a Nigerian organization promoting women's engagement with technology, relayed instances where women using loan apps have fallen victim to blackmail by men exploiting AI technologies. "Every day, it seems to grow more inventive in the way abuse occurs," she remarked.
The repercussions of digital abuse extend into real lives, resulting in profound mental health struggles, social isolation, and job losses. "These issues have the potential to irrevocably ruin a young girl's life," stated Mercy Mutemi, a lawyer from Kenya representing victims of deepfake abuses. Many of her clients have faced job denials or disciplinary actions in their educational institutions due to deepfake images released without their consent.
Moreover, Ihe noted that her organization has received complaints from women ostracized by their families after being threatened with intimate and nude images sourced from Telegram channels. "Once those images are out there, there's no way to regain your dignity or sense of identity. Even if the perpetrator later confesses, 'Oh, that was a deepfake,' the damage remains unaltered. The reputational impact is often irreparable," she concluded.
