I've known Harry for a long time. Extremely intelligent. Amazing scientist. Hardcore pacifist. We are friends, we have been roommates, we have traveled together in the US, and in Greece. Good times.
He has taken a principled stance against AI, which I disagree with, but it's his decision. And I do respect it.
Every person who takes a principled stand will be called crazy.
This is sad, because if/when the time comes for YOU, dear reader, to take a principled stand on X, rest assured they will call you crazy too.
Anyways, I think his case isn't well put in the article. He provides a single scenario for a brutal use of AI. A better case can be made--scary stuff we are working on; AI is dangerous. I myself worry much more about Orwellianism and "We" and "Brazil" scenarios than the nuke stuff. But I don't think we should stop the work; I think the only route is to try to get more influence over the direction things go.
Back when he wrote this, years ago (2006, probably), I told Harry that I disagreed with his position and that the nuke cat is out of the bag. We had a nice discussion about it, and that was that.
Is he crazy? Not by a long shot. I would leave my firstborn with him and travel around the world without a shred of worry.
So would my wife.
But wtf do I know; I'm just another crazy guy on the internet, right?
I never meant to impugn the man or his character and I now regret invoking schizophrenia so casually. As you said, his case isn't well put. It is his writing and arguments which I find sloppy and the connections he makes, as presented, smell somewhat delusional. This is not the same as saying that he's "crazy".
I am rejecting, strongly, the assertion that his is a principled stance. The piece is simply not compelling, since his malicious AI would be acting on behalf of, or under the command of, humans, just like all weapons throughout history. I think it's a simplistic, misguided stance. The kind I remember flirting with when I first took some college-level philosophy courses years ago.