What you (and the authors) call "hallucination," other people call "imagination."
Also, you don't know people very well, yourself included, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but also our faults and fallacies.
Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.
Hallucinations are, in fact, an unavoidable property of the technology - something that is not true of people. [0]
[0] https://arxiv.org/abs/2401.11817