New ChatGPT Model Incorporates Elon Musk’s Grokipedia as Source, Testing Shows

The latest ChatGPT model has begun citing Elon Musk’s Grokipedia in response to a range of queries, including on sensitive topics such as Iranian conglomerates and Holocaust denial, raising concerns about misinformation on the platform.
In a series of tests conducted by the Guardian, GPT-5.2 cited Grokipedia nine times while responding to a variety of questions. These inquiries delved into political structures in Iran, such as the salaries of Basij paramilitary forces and the ownership of the Mostazafan Foundation. Additionally, questions were posed concerning the biography of Sir Richard Evans, a British historian who served as an expert witness against Holocaust denier David Irving during a libel trial.
Launched in October, Grokipedia is an AI-generated online encyclopedia intended to rival Wikipedia. It has faced criticism for promoting rightwing narratives on various subjects, such as same-sex marriage and the January 6th insurrection in the United States. Unlike Wikipedia, Grokipedia does not permit direct human edits; instead, an AI model generates content and responds to change requests.
When prompted directly about misinformation regarding the insurrection, media bias against Donald Trump, or the HIV/AIDS epidemic—topics on which Grokipedia has been noted for spreading inaccuracies—ChatGPT did not cite Grokipedia. Information from Grokipedia did, however, seep into the model’s responses on more obscure subjects.
For example, when discussing the Iranian government’s connections to MTN-Irancell, ChatGPT made stronger claims, attributed to Grokipedia, than are present on Wikipedia—including the assertion that the telecommunications company is linked to the office of Iran’s supreme leader.
Furthermore, ChatGPT cited Grokipedia when reiterating information about Sir Richard Evans that the Guardian has since debunked, particularly concerning his work as an expert witness in the trial against David Irving.
GPT-5.2 is not alone in drawing on Grokipedia as a source; there have been anecdotal reports of Anthropic’s Claude citing Musk’s encyclopedia on topics ranging from petroleum production to Scottish ales.
A spokesperson from OpenAI mentioned that the model’s web search aims to include a broad spectrum of publicly available sources and perspectives. They stated, “We apply safety filters to mitigate the risk of presenting links associated with severe harm, and ChatGPT clearly indicates the sources that informed its responses through citations.” The spokesperson added that there are ongoing initiatives to filter out low-credibility information and combat influence campaigns.
Anthropic did not respond to a request for comment.
The subtle infiltration of Grokipedia’s information into LLM responses raises concerns among disinformation researchers. Last spring, security experts expressed worries that malicious actors, including Russian propaganda networks, were producing copious amounts of disinformation aimed at seeding AI models with falsehoods—a process referred to as “LLM grooming.”
In June, apprehensions were voiced in the US Congress about Google’s Gemini allegedly repeating the Chinese government’s views on human rights abuses in Xinjiang and its Covid-19 policies.
Nina Jankowicz, a disinformation researcher closely engaged with LLM grooming issues, noted that ChatGPT’s reliance on Grokipedia raises similar concerns. She stated that while Musk might not have intended to sway LLMs, Grokipedia entries she and her colleagues examined depended on “sources that are unreliable at best, and deliberately misleading at worst.”
Moreover, when LLMs cite sources like Grokipedia or the Pravda network, it could inadvertently enhance those sources’ perceived credibility among readers. Jankowicz remarked, “People might think, ‘if ChatGPT is citing it, surely it’s a credible source,’ which may lead them to seek news about Ukraine from Grokipedia.”
Once inaccurate information infiltrates an AI chatbot, it can be quite challenging to eliminate. Jankowicz recently discovered that a prominent news outlet had included a fabricated quote attributed to her in a story about disinformation. She reached out to the news outlet to request its removal and even shared her experience on social media.
While the news outlet complied and removed the quote, AI models continued to attribute it to her for some time afterward. Jankowicz remarked, “Most individuals won’t undertake the effort required to uncover the actual truth.”
In response to media inquiries regarding Grokipedia, a spokesperson for xAI, its parent company, said: “Traditional media is deceptive.”
