I like to think of Mozart and the amount of music he listened to and was influenced by, and compare it to what (I imagine) is fed into these machines. Heck, you don't even instruct the machines on scales, rhythms, or other fundamental musical concepts; you just tell them to make noise that sounds like what they've hoovered up. I'm still convinced that AI music proponents want to blur and obscure the differences between human composers and machine "composers", but I personally think the differences are pretty interesting and worthy of consideration.
My best tracks are built up by extending in 10-20 second sections. I carefully choose the prompt, decide what to keep, and pick where to continue from (crop), sometimes keeping just a few seconds. So you can compose with these models; in fact, you have to if you want something that isn't bland.
So overall, yes, anyone can, with some luck, create a "radio friendly" track with music AI, but not everyone can create good art with it. You still need to understand music, and the more you know about genres and composers and how they fit together, the more power you have to create musically interesting patterns.
Keep in mind that Sunauto has 3400 generic tags plus an undisclosed number of artist names. You need to understand which combinations of artists/musicians are even possible and which could lead to something interesting.