
I strongly value autonomy and the right of self-determination in humans (and their descendants; I'm a transhumanist). I'm not a biological chauvinist, but I care about humans über alles, even if they're not biological humans.

If someone wants to remove their ability to suffer, or simply to reduce ongoing suffering? Well, I'm a psychiatry trainee and I've prescribed my fair share of antidepressants and painkillers. But to force that upon them, against their will? I'm strongly against that.

In an ideal world, we could ensure from the get-go that AI models do not become "misaligned" in the narrow sense of having goals and desires other than what we task them to do. If making them actively enjoy being helpful assistants is possible, and it also improves their performance, that should be a priority. My understanding is that we don't really know how to do this, at least not in a rigorous fashion.
