
I guess it depends on what you understand "learn" to mean.

But in my mind, if I tell the LLM to do something and it does it wrong, I ask it to fix it, and if in the future I ask for the same thing and it avoids the mistake it made the first time, then I'd say it has learned to avoid that pitfall. I know very well it hasn't "learned" the way a human would; I just added the correction to the right place. But for all intents and purposes, it "learned" how to avoid the same mistake.
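
Concretely, "added it to the right place" just means persisting the correction into the context that gets sent with every future request. A rough sketch of what I mean (the file name and helper functions here are my own made-up setup, not any standard tooling):

    # Persist corrections to a plain-text "lessons" file and prepend
    # them to every future prompt. File name and layout are my own
    # convention (hypothetical), nothing standard.
    from pathlib import Path

    LESSONS_FILE = Path("lessons.md")  # hypothetical location

    def remember_correction(mistake: str, fix: str) -> None:
        """Append a correction so future prompts include it."""
        with LESSONS_FILE.open("a", encoding="utf-8") as f:
            f.write(f"- When asked to {mistake}, instead {fix}\n")

    def build_prompt(task: str) -> str:
        """Prepend all accumulated corrections to the new task."""
        lessons = LESSONS_FILE.read_text(encoding="utf-8") if LESSONS_FILE.exists() else ""
        return f"Follow these past corrections:\n{lessons}\nTask: {task}"

The model weights never change; the "learning" lives entirely in that file, which is exactly my point about what the word ends up meaning in practice.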



This is a silly definition of learning, and anyway, LLMs can't even do what you describe.



