That's actually how I spend a lot of my time: GPT drafts code of mixed quality, and I review it.
What's interesting is that GPT isn't trained to produce high-quality code; it's trained as an autocompletion tool. I'd be curious how smart such a system could be if we knew how to engineer it for quality.
Because people will generate more crap and throw it at you as their quality control. They'll have less knowledge overall as they cede more to ChatGPT. Also, your impact will be in code review, so they'll send you more people. You'll get faster and more responsive using ChatGPT to review, so you'll do more reviews, causing people to send changes more frequently to get your feedback faster. Eventually, they just hire some interns to toss plausible shit at GPT and then at you, and fire the FTEs. Done.