> If there is a difference, where does that exist? In the mechanism of the LLM, or in your mind?
Thank you for this sentence: it is hard to get across how often Gen-AI proponents are actually projecting perceived success onto LLMs while downplaying error.
You mostly see people projecting perceived error onto LLMs?
I don't think I've seen a single recent article about an AI getting things wrong that took a nuanced view of whether it was actually wrong.
I don't think we're anywhere close to "nuanced mistakes are the main problem" yet.
But the errors are fundamental, and as a result the successes are actually subjective.
That is, it appears to get things right, really a lot, but the conclusions people draw about why it gets things right are undermined by the nature of the errors.
Like, "it must have a world model," "it must understand the meaning of..." etc.; the nature of the errors being downplayed fundamentally undermines the certainty of those projections.