On Monday, 11 May, 2026, James wrote:
> I have done some analysis of LLMs, and there is actually a way, maybe
> a little expensive, to determine whether the LLM is hallucinating or
> not.
> You ask the LLM exactly the same question 10 times.
> If it comes back with the same answer all 10 times, it is unlikely to
> be hallucinating.
> If it comes back with different answers most of the time, i.e. the 10
> answers differ from one another, it is hallucinating.
> So LLMs are not consistent in their hallucinations, i.e. it is not
> the same hallucination every time, and one can use that to detect
> them.
That's an interesting test - thanks.
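
If you wanted to automate it, a rough sketch in Python might look like
the following. The ask_llm-style callable is a stand-in for whatever
API you'd actually query (not any particular library), and exact-match
comparison of answers is the crudest possible measure of agreement:

from collections import Counter

def consistency_check(ask, prompt, trials=10, threshold=0.8):
    # 'ask' is any callable taking a prompt string and returning the
    # model's answer as a string (hypothetical - plug in your own API).
    # Returns (most common answer, fraction of runs agreeing, verdict).
    answers = [ask(prompt).strip().lower() for _ in range(trials)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / trials
    return best, agreement, agreement >= threshold

if __name__ == "__main__":
    import random

    def fake_llm(prompt):
        # Stand-in model that "hallucinates" roughly a third of the time.
        return random.choice(["Paris", "Paris", "Lyon"])

    answer, agreement, consistent = consistency_check(fake_llm,
                                                      "Capital of France?")
    print(f"answer={answer!r} agreement={agreement:.0%} "
          f"consistent={consistent}")

In practice you'd want to normalise or semantically compare the
answers, and the outcome depends on the sampling temperature, but it
captures the repeat-and-compare idea.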
In case you missed it, here's an article from The
Register - with a link to the original paper -
about LLM hallucination:
https://www.theregister.com/special-features/2026/01/26/keep-it-simple-stupid-agentic-ai-tools-choke-on-complexity/4271078
The authors of the paper argue that if the
complexity of your query exceeds the complexity of
the model, then the output will be a
hallucination.
Nick.
--
Nick Chalk ................. once a Radio Designer
Confidence is failing to understand the problem.