But if the person writing the prompt expresses their mental model at a higher level, and the code can be generated from that expression, then by Naur's theory the resulting artifact is a more accurate representation of the actual program. That would be a big deal.
LLMs don't have a "mental model" of anything.