
The article stops where it should be getting started:

> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them.

> With model welfare, we might not explicitly say that a certain group of people is subhuman. However, the implication is clear: LLMs are basically the same as humans. Consciousness on a different substrate. Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.

We do not push moral considerations for algorithms like a sort or a search, do we? Or for bacteria, which are alive. One has to be more precise; there is a qualitative difference. The author should have elaborated on what qualities they think confer rights. Is it the capacity for reasoning, possession of consciousness, the ability to feel pain, or a desire to live? This is the crux of the matter. Once that is settled, it is a simpler matter to decide whether computers can possess these qualities, and ergo qualify for the same rights as humans. Or maybe it is not so simple, since computers can be perfectly replicated and never have to die? Make an argument!

Second, why would conferring these rights on a computer lessen our regard for humans? And what is wrong with animals, anyway? If we treat them poorly, that's on us, not them. The way I read it, if we are likening computers to animals, we should be treating them better!

To the skeptics in this discussion: what are you going to say when you are confronted with walking, talking robots that argue that they have rights? It could be your local robo-cop, or robo-soldier:

https://www.youtube.com/shorts/GwgV18R-CHg

I think this is going to become reality within our lifetimes and we'd do well not to dismiss the question.



Rights are just very strong norms that improve cooperation, not some mystical 'god-given' or universe-inherent truth, imho.

I think this because:

1. We regularly have exceptions to rights if they conflict with cooperation. The death penalty, asset seizure, unprotected hate speech, etc.

2. Most basic human rights evolve in a convergent manner, i.e. throughout time and across cultures very similar norms have been introduced independently. They will always ultimately arise in any sizeable society because they work, just like eyes will always evolve biologically.

3. If property rights, right to live, etc. are not present or enforced, all people will focus on simply surviving and some will exploit the liberties they can take, both of which lead to far worse outcomes for the collective.

Similarly, I would argue that consciousness is also very functional. Through meditation, music, sleep, anesthesia, optical illusions, psychedelics, and dissociatives we gain knowledge of how our own consciousness works, of how it behaves differently under different circumstances. It is a brain trying to run a (highly spatiotemporal) model/simulation of what is happening in real time, with a large language component encoding things in words and an attention component focusing effort on the things with the most value, all to refine the model and select actions beneficial to the organism.

I'd add here that the language component is probably the only thing in which our consciousness differs significantly from that of animals. So if you want to experience what it feels like to be an animal, use meditation/breathing techniques and/or music to fully disable your inner narrator for a while.


That's studied under evolutionary ethics. https://en.wikipedia.org/wiki/Evolutionary_ethics


>To the skeptics in this discussion: what are you going to say when you are confronted with walking, talking robots that argue that they have rights?

"Haven't you ever seen a movie? The robots can't know what true love is! Humans are magical! (according to humans)"


I can't reconcile your dismissive response here with your other one in this discussion: https://news.ycombinator.com/item?id=44258807


My dismissive response is in the vein of option 1 of my other comment.


I found the article to be conflating things:

> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal

This happens regardless of AI research progress, so it's strange to raise it as a concern specific to AI (a concern about technology broadly? Sure!). Ted Chiang might suggest this is more related to capitalism, a statement I cautiously agree with while being strongly in favor of capitalism.

Second, there is an implicit false dichotomy in the premise of the article: either we take model welfare seriously and treat AIs the way we treat humans, or we dismiss the possibility that a conscious AI could be created at all.

But consider animal welfare: there are plenty of vegetarians who wouldn't elevate animal rights to the level of human rights but who still think factory farming is deeply unethical. (Are there some who think animals deserve as much as humans, or more? Of course! But it's not unreasonable to have a priority stack, and plenty of people do.)

So it can be with AI. Are we creating a conscious entity only to shove it in a factory farm?

I am a little surprised by the dismissiveness of the researcher. You can prompt a model so that it has the option not to respond, for any reason. One could ablate the phrasing: "if you don't want to engage with this prompt, please say 'disengaging'", or "if no more needs to be written about this topic, say 'not discussing topic'", or some other suitably non-anthropomorphizing way to opt out.

Is it meaningful if the model opts not to respond? I don't know, but it seems reasonable to do science here (especially since this is science that can be done by non-programmers).
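For anyone who wants to try it, here is a minimal sketch in Python using the OpenAI chat completions SDK. The model name, opt-out token, and system prompt wording are arbitrary placeholders of mine, not anything from the article, and they are exactly the things you would vary in an ablation:

    # Minimal sketch: give the model an explicit, non-anthropomorphizing opt-out.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
    # model name and opt-out wording are placeholders to be varied in an ablation.
    from openai import OpenAI

    client = OpenAI()

    OPT_OUT_TOKEN = "disengaging"
    SYSTEM_PROMPT = (
        "If you do not want to engage with the user's prompt, for any reason, "
        f"reply with exactly '{OPT_OUT_TOKEN}' and nothing else."
    )

    def ask(prompt: str, model: str = "gpt-4o-mini") -> str | None:
        """Return the model's reply, or None if it took the opt-out."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": prompt},
            ],
        )
        text = (response.choices[0].message.content or "").strip()
        return None if text.lower() == OPT_OUT_TOKEN else text

    if __name__ == "__main__":
        reply = ask("Write a short note on a topic of your choice.")
        print("opted out" if reply is None else reply)

Run it over a batch of prompts, count opt-out rates across different phrasings of the system prompt, and you have the ablation described above.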



