It absolutely can't be trusted, but it can focus searches for relevant information. When I started researching network monitoring solutions, ChatGPT suggested options to look at and gave me terms I could use to find the relevant pages in documentation. I saw real solutions that would work that my initial web searches had missed. I got to relevant documentation much faster, with a fair expectation of what I would find.
It is great at summarizing but poor at understanding. It is great at concepts but crap at precision. And every actionable "fact" coming from it needs to be vetted.
You should trust it as much as you'd trust another human.
The quality of the LLM matters. The better LLMs don't hallucinate as much as the rest. ChatGPT-4 appears to be trained to consult the web when it receives questions for which it's likely to hallucinate, such as questions about hard figures.
> You should trust it as much as you'd trust another human.
Eh? Only a sociopath, when confronted with a question they don't know the answer to, confidently makes something up. The human may not know the answer, but they generally know whether they know the answer or not. The machine does not (of course, it doesn't _know_ anything). You should absolutely trust something like this far less than you'd trust the average human.
> Only a sociopath, when confronted with a question they don't know the answer to, confidently makes something up. The human may not know the answer, but they generally know whether they know the answer or not.
What? No. Fake memories are literally such a well-known phenomenon that cops will abuse them to generate fake solutions to crimes, etc.
Also, there are people for whom compulsive lying is a pathological problem who are definitely not sociopaths, and there are many people who exhibit the same behavior at non-pathological/non-clinical levels.
People will, indeed, “just make it up,” perhaps especially when they don’t really know the answer, because admitting that is socially embarrassing, etc. - that’s literally the opposite of sociopathy.
We need a different word than "hallucinate" or "bullshit" because the LLM is executing the same process whether it _gets_ the answer right or wrong. It doesn't _know_ the correct answer in either case.
Hallucinates?
I'd use the term 'bullshits' when it doesn't know the right answer.
This makes it a dangerous thing to trust.