
"The map is not the territory" applies to AI/LLMs even more so.

LLMs don't have a "mental model" of anything.



But if the person writing the prompt is expressing their mental model at a higher level, and the code can be generated from that, the resulting artifact is, by Naur's theory, a more accurate representation of the actual program. That would be a big deal.

(Note the words "if" and "by Naur's theory".)



