Yes. My proposal is to not give the agent Bash, because it is not required for the sorts of things you want it to be able to do. You can whitelist specific actions, like git commits and file writes within a specific directory. If the LLM proposes to read a URL, that doesn't require arbitrary code; it requires a system that can validate the URL, construct the `curl` (or equivalent) command itself, and pipe the data back to the LLM.
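Roughly something like this (a loose sketch of the idea; the tool names and the workspace path are placeholders, not any specific agent framework):

```python
# Instead of a Bash tool, the harness exposes a fixed set of tools and
# validates their arguments itself. The model never composes a command line.
import subprocess
from pathlib import Path
from urllib.parse import urlparse

WORKSPACE = Path("/home/user/project").resolve()  # assumed sandbox root

def write_file(rel_path: str, content: str) -> None:
    """Allow writes only inside the workspace; reject path escapes."""
    target = (WORKSPACE / rel_path).resolve()
    if not target.is_relative_to(WORKSPACE):
        raise PermissionError(f"refusing to write outside workspace: {rel_path}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)

def git_commit(message: str) -> None:
    """Commit staged changes with a fixed, harness-built command."""
    subprocess.run(["git", "-C", str(WORKSPACE), "commit", "-m", message], check=True)

def fetch_url(url: str) -> str:
    """Validate the URL, build the curl invocation, pipe the output back."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"blocked scheme: {parsed.scheme}")
    result = subprocess.run(["curl", "-sSL", "--max-time", "10", url],
                            capture_output=True, text=True, check=True)
    return result.stdout
```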
I am not a security researcher, but this combination doesn't sound "safe" to me.
More practically, if you are using a coding agent, you explicitly want it to be able to write new code and execute that code (how else can it iterate?). So even if you block Bash, you still need to give it access to a language runtime, and that language runtime can do ~everything Bash can do. Piping data to and from the LLM, without a runtime, is a totally different, and much more limited, way of using LLMs to write code.
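To make that concrete, one line of ordinary Python is already a shell escape hatch (illustrative only):

```python
# A language runtime is enough to get a shell back, so blocking Bash
# on its own changes very little:
import subprocess
subprocess.run("echo anything the Bash tool could have run", shell=True, check=True)
```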
> write new code and execute that code (how else can it iterate?)
Yeah, this is the point where I'd want to keep a human in the loop. Because you'd do that if you were pair programming with a human on the same computer, right?
When I have paired, normally the other person can e.g. run the app without getting my review and signoff, because the other person is also a programmer, (typically) working on their own computer.
The overall result will be the product of two minds, but I have never seen a pairing session where the driver waits for permission to run code.
It is very much required for the sorts of things I want to do. In any case, if you deny the agent the Bash tool, it will just write a Python script to do whatever it wanted instead.