
Any reply will always sound like someone is disagreeing, even if they claim not to.

Though in this case I'm not even sure what the comment they're supposedly disagreeing with is even claiming. Is it even claiming anything?



> Any reply will always sound like someone is disagreeing, even if they claim not to.

Disagree! :)

> Though in this case I'm not even sure what the comment they're supposedly disagreeing with is even claiming. Is it even claiming anything?

It's offering support for the claim that LLMs hallucinate 100% of the time, even when their hallucinations happen to be true.


Ah okay, I understand, I think. So basically that's solipsism applied to an LLM?

I think that's taking things a bit too far, though. You can define hallucination in a more useful way. For instance, you can say a 'hallucination' is when the information in the input doesn't make it to the output. It's possible to make this precise, but it might be impractically hard to measure.

An extreme version would be an En->FR translation model that translates every sentence into 'omelette du fromage'. Even when it happens to be right, the input didn't affect the output one bit, so it's a hallucination. Compared to a model whose output actually changes when the input changes, it's clearly worse.

Conceivably you could check whether the probability of a sentence actually decreases when the input changes (which it should, if the sentence is based on the input), but given the nonsense models generate at a temperature of 1, I don't quite trust them to assign meaningful probabilities to anything.
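
To make that check concrete, here's a rough sketch of one way to do it: score the same candidate translation against the real source sentence and against an unrelated one, and see whether the model's own score drops. This assumes a recent version of the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-fr model; the helper name and the example sentences are made up for illustration.

    import torch
    from transformers import MarianMTModel, MarianTokenizer

    name = "Helsinki-NLP/opus-mt-en-fr"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name).eval()

    def target_logprob(source: str, target: str) -> float:
        # Total log-probability the model assigns to `target` given `source`.
        enc = tokenizer(source, return_tensors="pt")
        labels = tokenizer(text_target=target, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(**enc, labels=labels)
        # out.loss is the mean cross-entropy per target token; scale back to a sum.
        return -out.loss.item() * labels.numel()

    src = "I would like a cheese omelette."
    alt = "The train leaves at seven tomorrow morning."
    hyp = "Je voudrais une omelette au fromage."

    print(target_logprob(src, hyp))  # should be clearly higher than...
    print(target_logprob(alt, hyp))  # ...this, if the output is grounded in the input

This avoids sampling altogether and only compares scores the model itself assigns, so it still inherits the caveat above about whether those probabilities mean much.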


No, your constant-output example isn’t what people are talking about with “hallucination.” It’s not about destroying information from the input: if you asked me a question and I just ignored you, I wouldn’t in general be hallucinating. Hallucinating is more about sampling from a distribution which extends beyond what is factually true or actually exists, such as citing a non-existent paper or inventing a historical figure.


> It's offering support for the claim that LLMs hallucinate 100% of the time, even when their hallucinations happen to be true.

Well this makes the term "hallucinate" completely useless for any sort of distinction. The word then becomes nothing more than a disparaging term for an LLM.


Not really. It distinguishes LLM output from human output even though they look the same sometimes. The process by which something comes into existence is a valid distinction to make, even if the two processes happen to produce the same thing sometimes.


Why is it a valid distinction to make?

For example, does this distinction affect the assessed truthfulness of a signal?

Does process affect the artfulness of a painting? “My five-year-old could draw that.”


It makes sense to do so in the same way that it’s useful to distinguish quantum mechanics from classical mechanics, even if they make the same predictions sometimes.


In any kind of mechanics, it’s the propositions that can be true or false. The calculations are not what make a proposition true, as you’ve pointed out.



