> From what I’ve seen it’s the exact opposite? They want AI development to slow down so we can get it right and make it safe.

The alignment faction wants to slow things down only in the sense that it wants tight control of models and gated access to ensure “proper” use. This does slow certain kinds of development, since broad access to build on models without centralized controls accelerates some of it (the “Stable Diffusion moment”), but in ideal terms the alignment faction favors fast but narrowly controlled development.

The ethics faction is more about slowing things down – though more about adoption, particularly in sensitive uses, than about development – and is more oriented toward openness and transparency.



There's no single "alignment faction" on these questions. There are people associated with the alignment research program who do advocate slowing things down, e.g., https://worldspiritsockpuppet.substack.com/p/lets-think-abou... . But there are a variety of other positions: OpenAI, Anthropic, and MIRI have all taken different public stances on this.


https://aisafety.world/tiles/ lists dozens of institutions. Each holds a different position on AI development, but almost none holds the position you ascribe to their "faction", with the possible exception of OpenAI.



