I would argue that it's a tool outputting text. There is no authorship, just publication of the tool's output. The output is designed to resemble authored works, but here the human we can credit is just using a tool and hoping the output is useful.
Right, and as I said I think that's reasonable. My "counter" would be that there are pieces on display at the MoMA which amount to more-or-less random splatters of paint, and we still attribute those to specific artists†.
But I don't know the right answer. What I know for sure is who the author isn't: it's not the paint manufacturer, and it's not gravity.
---
† I'm not hating on these pieces; they're a huge part of why I like going to the MoMA. I never know what to expect!
It's a fun new debate in a sea of boring ones, at least.
I have a hard time crediting the text generator with any kind of agency, and therefore liability. We know the nature of the tool, we know it can output basically any text when prompted correctly.
As in your analogy, I think OpenAI is about as responsible for the output of the thing as a typewriter manufacturer is for the contents of novels written on their machines. It's a tool, for now.
But you certainly can't make ME responsible as an author of some libel just because I prompted the machine. As a rule, I have no idea what it will output when I prompt it. I have an expectation, sometimes even a goal, but no assurances whatsoever; therefore I cannot be held responsible.
What I can be held responsible for is the dissemination of that text, or perhaps for fraudulently holding it up as more than just generated text from my tool.
I think it's the whole AI/agency debate that even gives any credence at all to this being libel, but I think most of us agree at this point that it's just a text generating tool. The text it generates is largely irrelevant to a discussion about libel IMO, because libelous generated text should just be left unpublished/unused. It's a useless output, of which the tool generates many.
I am not a lawyer, but I would assume (!) that libel requires publishing the libelous information. Which means you cannot be guilty of libel for writing a ChatGPT prompt any more than you can be guilty of libel for writing "Brian Hood murders puppies" in a secret personal diary. Unless, as you stated, you go and publish the information as fact.
> I think it's the whole AI/agency debate that even gives any credence at all to this being libel, but I think most of us agree at this point that it's just a text generating tool. The text it generates is largely irrelevant to a discussion about libel IMO, because libelous generated text should just be left unpublished/unused.
Well, we got here because an Australian politician is in fact suing for libel, which I think is exceedingly ridiculous and a potentially dangerous precedent, whatever you think of language models.