
how did you create this without committing grand theft musica


The first 80s song I heard was a literal copy of Phil Collins. But there are no emotions attached to it (for me), and the lyrics are random. It’s more like supermarket background music IMHO, not something I would pay for. Especially when we already have centuries of music to discover, why make fake stuff like that?

Edit: I have just heard the funniest, most ridiculous metal song ever, without a touch of metal inside. Breathe of Death: it’s like a bad joke.

If that’s the future of anything, I’m going back to plain C (code) when I retire, and I’ll never approach the internet ever again.


In my opinion, training on all music is no more theft than Taylor Swift listening to the radio growing up (as long as we don't regurgitate existing songs, which would be bad and useless anyway). I think an alternative legal interpretation, where all of humanity's musical knowledge and history are controlled by three megacorporations (UMG/Sony/Warner), would be kinda depressing. If the above is true, we might as well shut down OpenAI and delete all LLM weights while we're at it, losing massive value to humanity.


It’s intellectual property laundering. A company selling a button that launders the blood, sweat, and tears of generations of artists is not the same as a person being inspired and dedicating themselves to mastery.

Humans create value. AI consumes and commoditizes that value, stealing it from the people and selling it back to their customers.

It’s unethical and will be detrimental in the long run. All profit should be distributed to all artists in the training set.


It won't be detrimental to consumers, who ultimately decide the value. If I could AI-gen a better-tasting Coca-Cola for cheaper, that would be beneficial to consumers, and Coke wouldn't deserve a cut. Get gud, as they say.


> In my opinion training on all music is no more theft than Taylor Swift listening to the radio growing up (as long as we don't regurgitate existing songs which would be bad and useless anyway).

I beg of you, speak to some real life musicians. A human composing or improvising is not choosing notes based on a set of probabilities derived from all the music they’ve heard in their life.

> I think an alternative legal interpretation where all of humanity's musical knowledge and history are controlled by three megacorporations (UMG/Sony/Warner) would be kinda depressing.

Your impoverished worldview of music as an artistic endeavor is depressing. Humanity’s musical knowledge extends far beyond the big 3.

> If the above is true we might as well shutdown OpenAI and delete all LLM weights while we're at it

Now we’re talking.

> losing massive value to humanity.

Nothing of value would be lost. In fact it would refund massive value to humanity that was stolen by generative AI.


The difference being that a musician being influenced by other musicians still has to work to develop the skills necessary to distill those influences into a final product, and colors that output with their own subjective experiences and taste. This feels like a conveniently naive interpretation to justify stealing artists' work and using it to create derivative generative slop. The final line in your comment is pretty telling of how seriously you take this issue (which is near-universally decried by artists) -- some other massive company is doing a bad thing, so why shouldn't I?

edit: I have to add how disingenuous I find calling out corporations owning "all of humanity's musical knowledge and history" as if generative AI music trained on unlicensed work from artists is somehow a moral good. At least the contracts artists make with these corporations are consensual and have the potential to yield the artist some benefit which is more than you can say for these gen-AI music apps.


I don't see how the amount of work that went into it changes the core fact that all art is influenced by that which came before, and we don't call that stealing (unless you truly believe that "all art is theft").

My point re: LLMs wasn't meant to exclusively be a "they're doing it" one, the hope was to give an example of something many people would agree is super useful and valuable (I work much faster and learned so much more in college thanks to LLMs) that would be impossible in the proposed strict interpretation of copyright.

edit responding to your edit:

Re: moral good: I think that bringing the sum of human musical knowledge to anybody who cares to try for free is a moral good. Music production software costs >$200 and studios cost thousands and majoring in music costs hundreds of thousands, but we can make getting started so much easier.

Is it really consent for those artists signing to labels when only three companies have total control of all music consumption and production for the mass market? To be clear, artists absolutely have a right to benefit from reproduction of their recordings. I just don't think anyone should have rights to the knowledge built into those creations, since in most cases it wasn't theirs to begin with (if their right to this knowledge were affirmed, every new song someone creates could hypothetically have a conga line of lawyer teams clamoring for "their cut" of that chord progression/instrument sample/effect/lyrical theme/style).


I think we intuitively allow for artists to derive and interpolate from their influences because of a baseline understanding that A) it is impossible to create art without influence and B) that there is an inherent value in a human creating art and expressing themselves. How that relates to someone using unlicensed music from actual humans to train an AI model in order to profit off of the collective work of thousands of actual human artists, I have no idea.

edit:

> I think that bringing the sum of human musical knowledge to anybody who cares to try for free is a moral good

Generative AI music isn't in any way accomplishing this goal. A free Spotify account with ads accomplishes this goal -- being able to generate a passable tune using a mish-mash of existing human works isn't bringing musical knowledge to the masses, it's just enabling end users to entertain themselves and you to profit from that.

> Is it really consent for those artists signing to labels

Yes? Ignoring the fact that there are independent labels outside the ownership of the Big Three you mention, artists enter into contracts with labels consensually because of the benefits the label can offer them. You train your model on these artists' output without their consent, credit or notification, profit off of it and offer nothing in return to the artists.


A) Agreed! B) So I guess the argument here is that this doesn't apply to AI music. I think that if someone really pours their soul into the lyrics of a song and regenerates/experiments with prompts until it's just right, and maybe even contributes a melody or starting point that's still a human creating art and expressing themselves. It's definitely not as difficult as creating a song from scratch, but I've been told similar arguments were made regarding whether photography was art when that became a thing.

btw, if the user of the AI doesn't do any of the above then I think the US copyright office says it can't be copyrighted in the first place (so no profiting for them anyway).


> if the user of the AI doesn't do any of the above then I think the US copyright office says it can't be copyrighted in the first place (so no profiting for them anyway).

Am I understanding right that the point here is that while you are able to get away with using copyrighted material to turn a profit, your end users cannot, so no worries?


I think there are a few fallacies at play here:

1. Anthropomorphizing the kind of “influence” and “learning” these tools are doing, which is quite unrelated to the human process

2. Underrepresenting the massive differences in scale when comparing the human process of learning vs. the massive data centers training the AI models

3. Ignoring that this isn’t just about influence, it’s about the fact that the models would not exist at all, if not for the work of the artists it was trained on


> Is it really consent for those artists signing to labels when only three companies have total control of all music consumption and production for the mass market?

This premise is false. I have made plenty of money busking on the street, for example. Or selling audio recordings at shows.

> To be clear, artists absolutely have a right to benefit from reproduction of their recordings.

This is correct. Artists benefit when you pay them for the right to reproduce. When you don't (like what you are doing), you get sued. Here's a YouTube video covering 9 examples:

https://www.youtube.com/watch?v=IIVSt8Y1zeQ

> I just don't think anyone should have rights to the knowledge built into those creations since in most cases it wasn't theirs to begin with

What?


> I have made plenty of money busking on the street

That's why I specified mass market. However, given a choice between literally being on the street and working with a record label I'd probably choose the label, though I don't know about others.

> pay them for the right to reproduce

My point is learning patterns/styles does not equate to reproducing their recordings. If someone wants to listen to "Hey Jude" they cannot do so with our model, they must go to Spotify. There are cases where models from our competitors were trained for too long on too small a dataset and were able to recite songs, but that's a bug they admit is wrong and are fighting against, not a feature.

> in most cases it wasn't theirs to begin with

In most cases they did not invent the chord progression they're using or instruments they're playing or style they're using or even the lyrical themes they're singing. All are based on what came before and the musicians that come after them are able to use any new knowledge they contribute freely. It's all a fork of a fork of a fork of a fork, and if everyone along the line decided they were entitled to a cut we'd have disaster.


Law should be considered to be artificial rules optimized for the collective good of society.

What's the worst that can happen if we allow unregulated AI training on existing music? Musician as a job won't exist anymore, except for the greatest artists. But it makes creating music much more accessible to billions of people. Is it good music? Let the market decide. And people will still make music because the creative process is enjoyable.

The animus towards AI-generated music stems largely from job security. I work in software, and I see it as even more likely that AI will eventually be able to replace software devs. I may lose my job if that happens. But I don't care. Find another career. Humanity needs to progress instead of stagnating for the sake of a few interest groups.


I don't work as a musician so it's nothing to do with job security -- I think that using artists' output without their consent in order to train a soulless AI model for some tech middleman to profit from is repugnant, and the cheap rhetoric about democratizing music and "bringing music to the masses!" adds insult to injury. I can guarantee if OP's intellectual property was violated in this project, like somebody ripping off their model or trademark, they'd be suing, but they conveniently handwave away mass scale IP theft when it comes to musicians.


I’m skeptical about how much value AI art is going to really contribute to humanity, but as a lifelong opponent of copyright I have to roll my eyes when I see people arguing against it on behalf of real artists, all of whom are imitators in the best case and thieves in the worst.


Yeah every musician has a story of writing a new song, bringing it to the band, and they say "oh, this sounds just like [song]." It's almost impossible to make something truly novel.


> almost impossible to make something truly novel

But beyond the originality !== novelty discussion, I'm not sure how we've come to equate 'creativity' (and the rights to retaining it) to a sort of fingerprint encoding one's work. As if a band, artist or creator should stick to a certain brand once invented, and we can sufficiently capture that brand in dense legalese or increasingly, stylistic prompts.

How many of today's artists just 'riffing' off existing motifs will remain, if the end result of their creative endeavours will be absorbed into generative tools in some manner? What's the incentive for indies to distribute digitally, beyond the guarantee their works will provide the (auditory) fingerprints for the next content generation system?


I have written and performed many songs over many bands. At no point did anybody compare my work to any other artist's work, because it is genuinely unique.



Citation needed. Where can I hear some of your work?


Let's hear it.


The problem is that techbro corporates are trying to make megabucks of profit off of other people's art.

Intellectual property laws for thee but not for me, I guess.


Megacorporations owning copyrights to the majority of IPs (music, games, etc.) is a capitalism/monopoly problem. How does getting rid of copyright and allowing your company to profit off other people's work in any way solve that issue?


No one can actually explain the value OpenAI adds to humanity. What massive loss? What have we gained from this entity other than another billionaire riding a hype cycle?


These high-quality music models require pirating many, many terabytes of music. Torrents are the main way to do it, but they likely scraped sites like Bandcamp, Soundcloud and YouTube.

AI music is a weird business model. The hope is that there's enough money in peddling music slop after paying off the labels (and maybe eventually the independent music platforms) whose music was stolen. Meanwhile, not even Spotify can figure out how to be reliably profitable serving music people actually want to hear.



