I realised a while back that a lot of computer music is really computer science music - it's people who know something about computers but not much about the art of music, playing with relatively trivial algorithms to create music-like results.
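A minimal sketch of the kind of "relatively trivial algorithm" in question - a first-order Markov chain over pitches, trained on a toy melody (the corpus and all names here are made up for illustration):

```python
import random

def train_markov(notes):
    """Build a first-order transition table: pitch -> list of observed next pitches."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the chain to emit a 'music-like' pitch sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return out

# Toy corpus: MIDI pitch numbers for a simple melody fragment.
corpus = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]
table = train_markov(corpus)
print(generate(table, 60, 8))
```

The output is locally plausible (every transition occurred in the corpus) but has no larger-scale structure - which is roughly the complaint being made here.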
There's also the academic musical equivalent - music professors using stock faddy techniques like serialism or (currently) number and group theory.
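For context, the "stock" serialist machinery is itself a small algorithm: a twelve-tone row plus its standard transformations (retrograde, inversion, transposition), all arithmetic mod 12. A sketch, with the row chosen arbitrarily:

```python
def retrograde(row):
    """Play the row backwards."""
    return row[::-1]

def inversion(row):
    """Mirror each interval around the first pitch class, mod 12."""
    first = row[0]
    return [(first - (p - first)) % 12 for p in row]

def transpose(row, n):
    """Shift every pitch class up by n semitones, mod 12."""
    return [(p + n) % 12 for p in row]

# An arbitrary twelve-tone row (pitch classes 0-11, each used exactly once).
row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]
print(retrograde(row))
print(inversion(row))
print(transpose(row, 5))
```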
It's not that this is an impossible problem. It's more that the set of people who can code machine learning algorithms and understand music theory and are creative enough to invent new algorithmic techniques and to create more-than-listenable music is incredibly small - double figures, if that.
So progress in non-trivial computer music has been incredibly slow. The DSP side has been far more successful, because DSP is - in most ways - a much simpler problem.
Music is (dis)harmonies over rhythmic patterns. There isn't anything inherently artistic about humans that computers can't replicate in time - even the ability to compose an original song. What computers lack is the lives of humans, their appearance and history - which is important, but not the only factor.
The irony is that musicians are actually striving for, but failing to reach, the level of perfection that computers already have.
And so, to sound more human-like, computers run algorithms that make them more "sloppy".
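The "sloppy" trick referred to here is usually just jittering quantised event times and velocities by small random amounts. A sketch - the jitter ranges and event format are hypothetical, not any particular sequencer's implementation:

```python
import random

def humanize(events, time_jitter=0.02, vel_jitter=8, seed=42):
    """Offset each (onset_seconds, velocity) pair by a small random amount."""
    rng = random.Random(seed)
    out = []
    for onset, vel in events:
        onset = max(0.0, onset + rng.uniform(-time_jitter, time_jitter))
        vel = min(127, max(1, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((onset, vel))
    return out

# Perfectly quantised eighth notes at 120 bpm, constant velocity.
grid = [(i * 0.25, 96) for i in range(8)]
print(humanize(grid))
```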
What composition algorithms lack is not the ability to compose like humans but a life that would give them angles and a story.
Then again, a lot of music is really formulaic anyway, and computers are already used for most of it. Nothing will prevent some sort of computer star from being born in a few years. But it's probably never going to connect with us the way another human can - not for now, at least.
I think that's a good example of what I'm saying - just because you don't understand the details doesn't mean professional musicians and composers don't have much deeper insight into music than you do.
If you think music is [list of numbers] that can be made more "human" with a bit of timing randomisation, then of course it's all perfectly straightforward.
In reality there's rather more happening.
>What composition algorithms lack is not the ability to compose like humans but a life that would give them angles and a story.
No, the music basically sucks as music. The number of people willing to listen to it voluntarily without being paid to - usually as students or academics - is vanishingly small.
The story part only becomes relevant after that problem is solved.
And while it's true that music is formulaic, it's also true that computer music hasn't yet worked out how to copy all the details of the formulas - never mind produce original and memorable new formulas from scratch.
The best formula copier is probably Cope's EMI, and that sounds exactly like what it is - a slightly confused cut-and-paste cliche machine, not a human composer with a point to make.
Music becomes meaningful in the listener's mind, and what makes it meaningful is both that it's formulaic (structure) and whatever the performer instills in the listener.
I somewhat agree about computer music, though it appears to be an extension of the process driven composition that has been part of western (art) music for a while now (e.g. modulation).
Sometimes I wonder if linguistics has more to offer composition than algebra (speculation, as I know next to nothing about (non-CS) linguistics).
> It's not that this is an impossible problem. It's more that the set of people who can code machine learning algorithms and understand music theory and are creative enough to invent new algorithmic techniques and to create more-than-listenable music is incredibly small - double figures, if that.
Are you pursuing something like this? Or know anyone who is? This is one of my main interests (alongside better interfaces for composition and well, making music). I'm actually back at University for my second degree to study this sort of thing.
If you have a blog or anything I'd be interested to find out...
The reason for that has a lot to do with how academia works: you need to write papers and produce innovative works. If you don't, you won't get funding or advance your career.
> or (currently) number and group theory.
That was Xenakis back in the 1950s!
Xenakis is the exception. He's really the foundation of academic computer music and his music is amazing and moving and his compositional concepts are still being hacked out by music programmers today.