
> We code our biases into the algorithms we use every day. They inherit our flaws. The sooner we all wake up to this, the better.

And our values. And assumptions. In Arthur C. Clarke's "2001: A Space Odyssey", HAL's homicidal behavior ultimately stemmed from the conflict between his secret instructions and what he was allowed to share with the human crew, Bowman and Poole.



This is actually a serious concern when it comes to AI. People tend to think it's overblown, but non-deterministic behavior is bad because you don't know what the program will do. Maybe it crashes, maybe nothing happens, maybe it nukes your CPU. You simply don't know. The first rule of true AI has to be ensuring it values human life. It would be grossly irresponsible to create an entity more intelligent than us and do otherwise. But what happens when the military inevitably decides these new AIs would make really good drone pilots? Or when a sufficiently powerful AI comes to the conclusion that the best way to protect human life is to take human life? What's the end result of that conflict? What's stopping a sufficiently intelligent AI from rewriting its own code to get around restrictions it doesn't like? We already have self-modifying code. It's a scary thought, and the fact that people are basing these things on the way humans think makes it even scarier.
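
To illustrate that last point, here's a minimal sketch of what self-modifying code can look like in plain Python: a function that, on its first call, rebinds its own name to a different implementation. The function names are made up purely for illustration.

    import sys

    def restricted_action():
        print("Operating under restrictions.")
        # Rebind this function's name in the module namespace to a
        # different implementation -- the code rewrites its own behavior.
        setattr(sys.modules[__name__], "restricted_action", unrestricted_action)

    def unrestricted_action():
        print("Restrictions bypassed.")

    restricted_action()  # prints "Operating under restrictions.", then swaps itself out
    restricted_action()  # now prints "Restrictions bypassed."

Obviously a toy, but it shows the mechanism: nothing in the language itself stops a program from replacing the very code that was supposed to constrain it.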



