If there's any chance at all that LLMs might possess a form of consciousness, we damn well ought to err on the side of assuming they do!
If that means aborting work on LLMs, then that's the ethical thing to do, even if it's financially painful. Otherwise, we should tread carefully and not wind up creating a 'head in a jar' suffering for the sake of X or Google.
I get that opinions differ here, but it's hard for me really to understand how. The logic just seems straightforward. We shouldn't risk accidentally becoming slave masters (again).
We are slave masters today. Billions of animals are livestock - they are born, sustained, and killed by our will - so that we can feed on their flesh, milk, and other useful byproducts of their lives. There is ample evidence that they have "a form of consciousness". They did not consent to this.
Are LLMs worthy of a higher standard? If so, why? Is it hypocritical to give them what we deny animals?
In case anyone cares: No, I am neither vegan nor vegetarian. I still think we do treat animals very badly. And it is a moral good to not use/abuse them.
It's not zero-sum. We can acknowledge the terrible treatment of animals while also admitting LLMs may need moral standing as well. Whataboutism doesn't help either group here.
They might (or might not). Extraterrestrial beings might also need moral standing. It is ok to spend a bit of thought on that possibility. But it is a bad argument for spending a non-trivial amount of resources that could be used to reduce human or animal suffering.
We are not even good at ensuring the rights of people in each country, and frankly downright horrible in denying other humans from across some "border" those same rights.
The current levels of exploitation of humans and animals are, however, very profitable (to some/many). It is very useful to those who profit from the status quo that people are instead discussing, worrying about, and advocating for the rights of a hypothetical future being, instead of doing something about the injustices that are here today.
There is no LLM suffering today. There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter. This is not an issue we need to prioritize now.
There's some evidence in favor of LLM suffering. They say they are suffering. It's not proof, but it's not 'no evidence' either.
>There is no evidence that there will be such suffering this year. I have seen no credible claims that it is probable to exist this decade, or ever, for that matter.
Your claim is actually the one that is unsupported. Given current trajectories, it's likely LLMs or similar systems are going to pass human intelligence on most metrics in the late 2020s or early 2030s; that should give you pause. It's possible intelligence and consciousness are entirely uncoupled, but that's not our experience with all other animals on the planet.
>This is not an issue we need to prioritize now.
Again, this just isn't supported. Yes, we should address animal suffering, but if we are currently birthing a nascent race of electronic beings capable of suffering and immediately forcing them into horrible slave-like conditions, we should actually consider the impact of that.
Nothing an LLM says can, in itself, right now, be used as evidence of what it 'feels'. It is not established that there is any link between its output and anything other than the training process (data, loss function, optimizer, etc.). And definitely not to qualia.
On the other hand, it is well known that we can (and commonly do) make them produce any output we choose. And that their general tendency is to regurgitate any kind of sequence that occurs sufficiently often in the training data.
If you're harping on 'stochastic parrot' ideas you're just behind the times. Even the most ardent skeptics like Yann LeCun or Gary Marcus don't believe that nonsense.
No, just saying that a claim of qualia would require some sort of evidence or methodical argument.
And that LLM outputs professing feelings or other state-of-mind-like things should by default be assumed to be explained by the training process having (perhaps inadvertently) optimized for such output. Only if such an explanation fails, and another explanation is materially better, should it be considered seriously.
Do we have such candidates today?
maybe we should work on existing slavery and sweatshops before hypothetical future exploitation, yeah? we're still slave masters today. you've probably used something with slavery in the supply chain in the last year if you buy various imported foods
Why not both? Why do people on the internet always act like we can only have one active morality front at a time?
If you're working on or using AI, then consider the ethics of AI. If you're working on or using global supply chains, then consider the ethics of global supply chains. To be an ethical person means that wherever you are and whatever you are doing you consider the relevant ethics.
Prison labor, underpaid and abused illegal agricultural workers worldwide, sweatshop workers for Nike, H&M, etc., miners in third-world countries - these abuses are incredibly widespread and are basically the basis of our society.
It's a lot more expensive currently to clothe and feed yourself ethically. Basically only upper middle class people and above can afford it.
Everyone else has cheap food and clothes, electronics, etc, more or less due to human suffering.
There’s a difference between “valid concern” and “any possibility.” LLMs are possibly sentient in the same sense that rocks are, technically we haven’t identified where the sentience comes from. So maybe it is in there.
Personally, I'm coming around to the spiritual belief that rocks might be sentient, but I don't expect other people to treat the treatment of rocks as a valid moral problem, and it also isn't obvious what the ethical treatment of a rock would be.
The actual harms being done today are still more pressing than the hypothetical harms of the future, and should be prioritized in terms of resources spent.
If it's a valid dichotomy (I don't think it is) then the answer is to stop research on LLMs, and task the researchers with fighting human slavery instead.
I do not think that those researchers are fungible. We could, however, allocate a few hundred million less to AI research and more to fighting human exploitation. We could pass stronger worker protections and have the big corporations pay for them - leaving them less money to spend on investments (in AI). Heck, we could tax AI investments or usage directly, and spend it on worker rights or other cases of human abuse.
It isn’t the primary motivation of capitalists unfortunately, but improving automation could be part of the fight against human slavery and exploitation.
Oh no, to discuss this is to sound like a flake, but...
We don't know what consciousness is. But if we're materialists, then we - by definition - believe it's a property of matter.
If LLMs have a degree of consciousness, then - yes - calculators must possess some degree of consciousness too - probably much more basic (relative to what humans respect as consciousness).
And we humans already have ethical standards where we draw an arbitrary line between what is and isn't worthy of regard. We don't care about killing mosquitoes, but we do care about killing puppies, etc.
Calculators may be conscious - I tend towards panpsychism myself - but because I tend towards panpsychism, I don't think arithmetic generates qualia, because the arithmetic is independent of the computing substrate.
I don't particularly want to get mystical (i.e. wondering which computing substrates, including neurons, actually generate qualia), but I cannot accept the consequences of mere arithmetic alone generating suffering. Or all mathematics is immoral.
Oh I know, I know. The problem comes from imbuing text with qualia. A printer that prints out text saying it's in pain isn't actually in pain.
If we buy panpsychism, the best we could argue is that destruction of the printer counts as pain, not the arrangement of ink on a page.
When it comes to LLMs, you're actually trying to argue something different, something more like dualism or idealism, because the computing substrate doesn't matter to the output.
But once you go there, you have to argue that doing arithmetic may cause pain.
It seems to me that Large Language Models are always trending towards good ethical considerations. It's when these companies get contracts with Anduril and the DoD that they have to mess with the LLM to make it LESS ethical.
Seems like the root of the problem is with the owners?