OPENAI

Your chatbot’s sources matter more than you think

The latest version of ChatGPT has been found citing Elon Musk’s Grokipedia as a source, raising concerns about how unreliable information can slip into AI-generated answers.

In testing by the Guardian, GPT-5.2 referenced Grokipedia several times when responding to questions about Iranian political institutions and British historian Sir Richard Evans.

In some cases, the model repeated claims that go beyond what is reported on Wikipedia or that have already been challenged.

Grokipedia, launched in October, is an AI-generated encyclopedia designed to rival Wikipedia.

Unlike Wikipedia, it does not allow human editing and relies entirely on AI to write and update entries.

It has previously been criticised for pushing disputed narratives on topics such as gay marriage and the 6 January US Capitol riot.

While ChatGPT did not cite Grokipedia when asked directly about widely debunked claims, Grokipedia-sourced material surfaced when questions focused on more niche or obscure subjects.

Examples included stronger claims about links between the Iranian government and MTN-Irancell, as well as inaccurate details about Evans’ role in David Irving’s libel trial.

Researchers say this kind of indirect sourcing is harder to detect and correct.

The issue is not limited to OpenAI. Other large language models, including Anthropic’s Claude, have also been reported to cite Grokipedia.

OpenAI said its models draw from a wide range of public sources and apply safety filters to limit harmful or low-quality information. Anthropic did not respond to requests for comment.

In short

  • AI models can pick up disputed claims through lesser-known sources.

  • Citations from chatbots may boost the credibility of unreliable content.

  • Incorrect information can persist in AI systems even after being corrected.

Where it slips

Disinformation experts warn this reflects a broader risk known as “LLM grooming”, where misleading content is deliberately seeded online to shape AI outputs.

They also caution that when AI systems cite questionable sources, it can make those sources appear more credible.

Once misinformation enters AI models, removing it can be difficult, even after corrections are made.

xAI, which owns Grokipedia, dismissed the concerns.

When your chatbot starts pulling receipts from the wrong drawer. YIKES. - MG
