I don't have a clear view on whether LLMs are intelligent or not, but I don't understand your argument at all. Why couldn't an intelligent agent exist that refuses to admit it doesn't know something and just makes stuff up? I've known people who come pretty close to this behavior.