I was watching the show 3 Body Problem, and there's a great scene where a guy tells a woman to double-check another man's work, then goes to the man and tells him to triple-check the woman's work. Mixture of Experts (MoE) seems to work this way, but maybe we can take it further: leverage different models that have different randomness, cross-check them against each other, and maybe get to a more logical answer.
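To make that concrete, here is a minimal sketch of the cross-checking idea: ask several independent models the same question and only trust an answer when a majority of them converge on it. The `call_model` function is a hypothetical placeholder, not a real API; you'd wire it to whatever client you actually use.

```python
from collections import Counter

def call_model(model_name: str, prompt: str, temperature: float) -> str:
    """Placeholder: connect this to whatever LLM client you use
    (a hosted API, a local llama.cpp server, etc.)."""
    raise NotImplementedError

def cross_check(prompt: str, models: list[str], temperature: float = 0.7) -> str:
    """Ask several different models the same question and keep the
    answer the majority agrees on -- each model 'checks' the others."""
    answers = [call_model(m, prompt, temperature) for m in models]
    # Normalize lightly so trivial formatting differences don't split the vote.
    normalized = [a.strip().lower() for a in answers]
    winner, count = Counter(normalized).most_common(1)[0]
    if count < 2:
        # No two models agree: flag for human review instead of guessing.
        return "NO CONSENSUS: " + " | ".join(answers)
    return answers[normalized.index(winner)]
```

This is essentially majority voting across an ensemble; the differing randomness (different weights, different sampling temperatures) is what makes the "checks" somewhat independent.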
We have to start thinking about LLM hallucination differently. When a model follows logic correctly and provides factual information, that is also a hallucination, just one that happens to fit our flow of logic.