TBH, I've never read prose that couldn't be misinterpreted or misunderstood in some way, because much of it is context sensitive.
That is why we have programming languages: coupled with a specific interpreter/compiler, they are pretty clear about what they do. If someone misunderstands some specific code segment, they can just test their assumptions easily.
You cannot do that with just written prose; you would need to ask the writer of that prose to clarify.
And with programming languages, the context is contained and clearly stated; otherwise the code couldn't be executed. Even undefined behavior is part of that, as long as you use the same interpreter/compiler.
Also, humans often just read something wrong or skip important parts. That is why we have computers.
Now, I wouldn't trust an LLM to execute prose any better than I'd trust a random human to read some how-to guide and follow it.
The whole idea that we now add more documentation to our source code projects so that dumb AI can make sense of it is interesting... Maybe it's generally useful for humans as well... But I would target humans, not LLMs. If the LLMs find it useful too, great. But I wouldn't try to 'optimize' my instructions so that every LLM doesn't just fall flat on its face. That seems like a futile effort.