But the LLM can interact with the program and the world through a debugger, runtime feedback, a linter, a fuzzer, and so on, and we can collect all the user feedback and usage patterns. Moreover, it can get visual feedback, reason through other programs such as physics simulations, or even use a robot to physically interact with the device running the code. It can use a proof verifier like Lean to ensure its logical model of the program is sound, and go back and forth between that logical model and the actual program through experiments. Maybe not now, but I don't see why the LLM needs to be kept in the Chinese Room.
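To make the runtime-feedback part of that loop concrete, here is a minimal sketch: a candidate program is executed in a subprocess, and its exit code and traceback are collected as the feedback that would be handed back to the model for the next revision. The function name and the buggy example program are my own illustration, not from any particular system.

```python
import subprocess
import sys
import tempfile

def run_and_collect_feedback(source: str) -> dict:
    """Run a candidate Python program in a subprocess and return the
    runtime feedback (exit code, stdout, stderr) a model could use
    to revise the code on the next iteration."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=10,
    )
    return {
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }

# A deliberately buggy candidate: the ZeroDivisionError traceback in
# stderr is exactly the kind of signal the loop feeds back to the model.
feedback = run_and_collect_feedback("print(1 / 0)")
```

The same pattern generalizes: swap the subprocess for a linter, a fuzzer harness, or a Lean checker, and the dict of feedback becomes the "experiment result" the model compares against its logical model of the program.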
That's true in general but not true of any current LLM, to my knowledge. Different subsets of those inputs and modalities, yes. But no current LLM has access to all of them.