Different people write code of different quality, and we'd expect LLMs to imitate the median quality of code. So it's clear that if you have a background in something, the LLM will generate worse code than you would - but how would that code compare with something written by a new junior hire? IMHO there are many people who are objectively below median, and for them I'd expect that an LLM can generate more secure code than they would write themselves.
The power of LLMs at the moment is that they allow a person to be "median" across a very wide range of topics, and to get there quickly. This is extremely powerful, even for folks who can attain above-average mastery in their own field.
There was a recent article on Slashdot titled something like how the metaverse may increase GDP by 2 or 3 percent. It was obviously laughed at. It's actually LLMs that will do this, and easily.
Yeah, a coworker tried an exercise like this, and it was basically (even with as-an-expert prompts) like trying to mentor a high school intern who's trolling you (except that it won't actually get better at anything in the process).