> I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway.
Of course it is. And nobody said otherwise, because that is explicitly stated in the commit message:
> [...] More broadly there is, as yet, no broad consensus on the licensing implications of code generators trained on inputs under a wide variety of licenses
And in the patch itself:
> [...] With AI content generators, the copyright and license status of the output is ill-defined with no generally accepted, settled legal foundation.
What other commenters pointed out is that, beyond the legal issue, other problems also arise from the use of AI-generated code.
It’s like the seemingly confusing “nothing to declare” gates at customs, which you walk through after you’ve already made your declarations. Walking through that gate is a conscious act that places culpability on you, so you can’t simply say “oh, I forgot” or something.
The thinking here is probably similar: if AI-generated code becomes legally poisonous and is detected in a project, the DCO could allow shifting liability onto the contributor who claimed it wasn’t AI-generated.
> Of course it is. And nobody said otherwise, because that is explicitly stated in the commit message
Don’t be ridiculous. The majority of people are in fact honest and won’t submit such code; the main effect of the policy is to prevent those contributions.
Then you get plausible deniability for code submitted by villains, sure, but I’d like to hope that’s rare.