
X to prohibit users from monetizing unmarked AI-generated war videos.

Elon Musk’s X is introducing new rules that will bar users from monetizing their content on the platform if they repeatedly share unmarked AI-generated videos related to warfare. The policy comes in response to a surge of misleading visuals flooding social media feeds amid the recent conflict in Iran.

X, which has roughly half a billion monthly users, said that anyone who shares AI-generated videos depicting armed conflict without a clear disclaimer will be barred from earning revenue for 90 days; repeat offenders will be permanently suspended from monetization. The announcement came on Tuesday night, after a wave of deceptive footage spread online in the early days of the Iran conflict.

Timelines on X, as well as on Meta-owned Instagram and Facebook, have been inundated with manipulated battle footage. Examples include a video purporting to show Iranian rockets striking a U.S. jet, which reportedly garnered more than 70 million views according to the BBC’s verification team, and a clip of a real missile strike edited to replace the genuine smoke with an exaggerated fireball.

Content creators on X can earn hundreds of dollars a month under the platform’s revenue-sharing system once they build a sufficiently large following, typically around 100,000 followers. That financial incentive often drives the production of sensational, viral content, making regulation especially important in sensitive contexts such as war.

Nikita Bier, X’s head of product, emphasized the importance of authentic information during wartime, noting how easily today’s AI tools can produce misleading content. “It is crucial that people have access to trustworthy information on the ground during conflicts,” Bier said. “From this point forward, users who share AI-generated videos depicting war without a disclaimer identifying them as AI content will be suspended from the creator revenue program for 90 days. Subsequent violations will lead to permanent exclusion from this program.”

Misleading videos related to the conflict have also gained significant traction on other platforms. One clip circulated on Instagram falsely claimed to show the aftermath of an Iranian strike that destroyed a U.S. airbase in Riyadh. In fact, the footage was 18 months old and showed the aftermath of an Israeli strike on an oil refinery in Hodeidah, Yemen.


The UK-based fact-checking organization Full Fact has noted a troubling trend: AI technologies are accelerating the spread of misinformation on social media platforms.

Steve Nowottny, editor at Full Fact, said: “We have increasingly observed a variety of AI-generated images being shared widely as factual content. Instances include misleading visuals purporting to show an aircraft carrier and the Burj Khalifa ablaze, along with a contentious image allegedly depicting Ayatollah Khamenei’s body.”

“Even when AI-generated images appear to be of low quality or display visible watermarks, they are often still disseminated extensively. The sheer scope of this misleading content combined with the simplicity of its generation and distribution has become a significant concern for digital communication,” he added.

Sam Stockwell, a researcher at the UK’s Centre for Emerging Technology and Security, said he has seen a noticeable uptick in users asking AI chatbots to assess the authenticity of videos circulating online. “Unfortunately, these chatbots lack proficiency in evaluating real-time events,” he noted.

Despite those shortcomings, people continue to share chatbots’ erroneous assessments as evidence that various claims are legitimate. “People seem inclined to leverage AI outputs to fortify their narratives and arguments concerning the ongoing war,” Stockwell remarked.

Meta has been approached for comment on these developments, particularly the proliferation of false information through AI-generated content.

