It would be nice if it could detect and avoid tautologies; avoiding circular reasoning would be more difficult, but at least it should be possible to make the circles bigger.
I was actually surprised by the sentence "bad means something is bad", given that swapping the second "bad" for a synonym is well within NLP models' capability, and you'd expect training to eliminate that level of tautology in model outputs.
Then I remembered GitHub Copilot wasn't optimised for natural language; it was optimised for programming, where [near] tautology isn't unusual and variable names aren't supposed to be swapped out...
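A hypothetical sketch of what I mean by near tautology in code (my own example, not actual Copilot output): repeating the same identifier verbatim is often exactly what you want, and substituting a synonym would be a bug.

```python
class Report:
    def __init__(self, bad: bool):
        self.bad = bad

    def is_bad(self) -> bool:
        # "bad means something is bad" is idiomatic here;
        # renaming the second "bad" to a synonym would break the link
        # between the accessor and the attribute it exposes.
        return self.bad
```

Code trained on millions of patterns like this would have little reason to learn that restating a word is a flaw in natural-language output.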