Often see this argument, but it doesn't hold water for me. What we call hallucination is usually the model saying something confidently wrong. Yes, the sampling procedure is nondeterministic, but that's unrelated to hallucination: the model could generate a distribution with very little weight on the "wrong" output, and a procedure like top-k sampling would then discard it. The fact that this doesn't easily solve the problem shows that hallucination is a deeper problem in the model itself, not just a byproduct of sampling.
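To make the top-k point concrete, here's a toy sketch (numbers and vocab size are made up): if the model puts tiny weight on the "wrong" token, top-k filtering with k smaller than the vocab simply never draws it.

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Keep only the k highest-scoring tokens, renormalize, and sample."""
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]          # indices of the k largest logits
    probs = np.exp(logits - logits.max())  # unnormalized softmax
    masked = np.zeros_like(probs)
    masked[top] = probs[top]
    masked /= masked.sum()
    return int(rng.choice(len(logits), p=masked))

rng = np.random.default_rng(0)
# toy 4-token vocab: index 3 is the "wrong" token with very little weight
logits = [2.0, 1.5, 1.0, -5.0]
samples = [top_k_sample(logits, k=3, rng=rng) for _ in range(1000)]
# index 3 falls outside the top 3, so it is never sampled
```

The catch, as the thread goes on to argue, is that real hallucinations tend to sit *inside* the high-probability mass, so this filter doesn't remove them.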
> What we call hallucination is usually when the model says something confidently wrong
This is a poor definition that only applies to language models trained to be truthful. If you trained a language model to lie, and it told the truth, that would also be a hallucination.
Or if a model was trained to never sound confident, and it made confident, but correct, claims.
My definition is more accurate.
> Yes the sampling procedure is nondeterministic but this is unrelated to hallucinations.
It’s not the only factor, but it’s absolutely related. It’s also really easy to explain in a comment.
For example, if you always sampled the lowest-ranked token, the model would always hallucinate (by outputting mostly garbage).
Top-k sampling doesn’t eliminate all errors unless you’re always picking the most likely token (k = 1). At that point the sampling process is deterministic, but we’ve seen model output be poor with that setting too, for reasons I explain next.
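A minimal illustration of both points above, with a made-up toy distribution: always taking the lowest-ranked token is a deterministic rule that picks the least likely continuation, and top-k with k = 1 collapses to plain argmax (greedy) decoding.

```python
import numpy as np

logits = np.array([2.0, 1.5, 1.0, -5.0])        # toy next-token scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax over the toy vocab

greedy = int(np.argmax(probs))  # top-k with k=1: deterministic, most likely token
worst = int(np.argmin(probs))   # pathological rule: deterministic, least likely token
```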
> that hallucination is a deeper problem
Of course, it’s because the training process itself is nondeterministic. We can’t make a perfect model, it’s just not how statistical models work.
Yes, exactly. It seems intuitive that the model could generate a better distribution and thus cure hallucination, but that doesn't match what the model actually does.
The model doesn't sample from a probability distribution over individual "facts"[1]; it samples from a probability distribution over tokens, which are generally parts of words, bits of punctuation, etc. That we get "facts" out of it at all, let alone wrong ones, is emergent behaviour arising from the attention mechanism.
Totally agree that it's a deeper problem, and it may be intrinsic to the design of the models and the fact that they are trained on a next-word prediction task. Karpathy talks about the models as "dreaming text". In that sense it's not surprising that some of it is wacky. Our dreams are too.
[1] By which I mean atomic things that can be right or wrong
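The token-not-fact point above can be sketched as a bare autoregressive loop. Everything here is a stand-in (the tiny vocab and the random "model" are invented for illustration): the only real claim is the shape of the loop, which emits one token at a time, never a whole "fact".

```python
import numpy as np

rng = np.random.default_rng(1)
# toy word-piece vocabulary; a real tokenizer has tens of thousands of entries
vocab = ["The", " capital", " of", " France", " is", " Paris", " Lyon", "."]

def fake_model(prefix):
    """Stand-in for a real LM: returns a distribution over `vocab` given the
    tokens so far. A real model conditions on the prefix via attention; the
    point here is only that the output is per-token, not per-fact."""
    return rng.dirichlet(np.ones(len(vocab)))

tokens = []
for _ in range(5):
    probs = fake_model(tokens)
    tokens.append(vocab[rng.choice(len(vocab), p=probs)])
# `tokens` is a list of word pieces; any "fact" in the joined string is
# emergent, and nothing in the loop checks whether it is right or wrong
```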
Agreed. I have a loose idea that hallucination is related to training to maximize the probability of individual tokens while ignoring the joint sequence probability, which is along the lines of what you are saying -- it is not trained to output the most probable final sequence, so it gets stuck in the "wrong place" halfway through.
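The "stuck halfway" idea has a classic two-step illustration (the probabilities below are invented for the example): greedy per-token choices can commit to a prefix whose best continuation is worse than the best full sequence.

```python
# toy 2-step model: P(first token) and P(second token | first token)
p_first = {"A": 0.6, "B": 0.4}
p_second = {"A": {"x": 0.55, "y": 0.45},
            "B": {"x": 0.90, "y": 0.10}}

# greedy: take the argmax at each step independently
first = max(p_first, key=p_first.get)                    # picks "A" (0.6)
second = max(p_second[first], key=p_second[first].get)   # picks "x" (0.55)
greedy_seq = (first, second)
greedy_p = p_first[first] * p_second[first][second]      # 0.6 * 0.55 = 0.33

# exhaustive search: the actually most probable full sequence
best_seq = max(
    ((f, s) for f in p_first for s in p_second[f]),
    key=lambda fs: p_first[fs[0]] * p_second[fs[0]][fs[1]],
)
best_p = p_first[best_seq[0]] * p_second[best_seq[0]][best_seq[1]]  # 0.4 * 0.9 = 0.36
```

Greedy locks in "A" at step one and can never reach the globally best sequence ("B", "x"), which is exactly the "wrong place halfway through" failure mode. Beam search mitigates this but doesn't eliminate it.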