
> LLMs learn to model language

Obviously not. Language is just a medium. A model of language is enough to describe how to combine words into legal sentences, not into meaningful ones. Clearly LLMs learn much more than just the rules that allow one to construct grammatically correct language, otherwise they would just babble grammatically correct nonsense such as "The exquisite corpse will drink the young wine". That knowledge was acquired via training on language, but it is extra-linguistic. It's a model of the world.



That needs evidence; as far as I recall this is a highly debated point right now, so there's no room for "obviously".

PS: Plus, most reasoning/planning examples coming from LLM-based systems rely on band-aids that work around said LLMs (RLHF'd CoT, LLM-Modulo, Logic-of-Thought, etc.), to the point that they're being differentiated by the name LRMs: Large Reasoning Models. So much for modelling the world via language using LLMs alone. For concreteness, the pattern those LLM-Modulo-style setups use is sketched below.
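
The gist is a generate-and-verify loop: the LLM only proposes candidate plans, and an external, sound checker accepts them or sends back a critique. Rough sketch in Python; the names llm_propose and verify are made-up stand-ins for an LLM call and an external validator, not any real library's API:

    # Minimal sketch of an LLM-Modulo-style loop: the LLM proposes, an
    # external verifier disposes. llm_propose/verify are hypothetical stubs.
    from typing import Optional, Tuple

    def llm_propose(task: str, critique: Optional[str] = None) -> str:
        """Stand-in for an LLM call: returns a candidate plan as text."""
        suffix = f" (revised per: {critique})" if critique else ""
        return f"plan for {task!r}{suffix}"

    def verify(plan: str) -> Tuple[bool, str]:
        """Stand-in for an external, sound verifier (e.g. a plan validator)."""
        ok = "revised" in plan  # toy acceptance criterion for the sketch
        return ok, "" if ok else "plan fails a precondition, please revise"

    def solve(task: str, max_rounds: int = 5) -> Optional[str]:
        critique = None
        for _ in range(max_rounds):
            plan = llm_propose(task, critique)
            ok, critique = verify(plan)
            if ok:
                return plan  # correctness comes from the verifier, not the LLM
        return None

    if __name__ == "__main__":
        print(solve("stack block A on B on C"))

Note that whatever soundness the loop has comes from the verifier, which is exactly why it's hard to credit the bare LLM with the reasoning.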



