Hacker News

Pardon me, but... I think that there are way worse dangers than "humanoid bombs". One of the main reasons is that to achieve a nuclear explosion you need a critical mass, and that's hard to conceal for a lot of reasons (radiation, etc.).

What's the difference with a car that could have a bomb in its trunk? Or a bag? A lot of scientists have wondered about these ethical questions, but I believe that the benefits of high-performance AI outweigh the downsides of its research.

BUT I definitely agree with this:

"Americans should grow up and abandon their juvenile-minded treatment of weapons, high technology, and the value of “non-American human life” (which, sadly, to many of them is synonymous with “lowlife”). This is the hardest part of my proposal."

*edit: And what about an android to dismantle an atomic bomb instead of a human? Sounds good to me!



  > I think that there are way worse dangers than "humanoid bombs"
Yes, I am much more concerned about the scope for ubiquitous surveillance and systematic domination that even fairly modest gains in AI will allow. Something along the lines of the Emergent society in A Deepness in the Sky.


This is too much. About a week ago I finished reading A Deepness in the Sky, and a couple of days after that I started reading Charles Stross' Accelerando. In that one I ran across this passage:

    > "Cats," says Pamela. "He was hoping to trade their uploads to the Pentagon
    > as a new smart bomb guidance system in lieu of income tax payments. Something
    > about remapping enemy targets to look like mice or birds or something before
    > feeding it to their sensorium. The old kitten and laser pointer trick."
    >
    > Manfred stares at her, hard. "That's not very nice. Uploaded cats are a
    > *bad* idea."
Those are some lovely coincidences.


It's been forever since I read the book, but wasn't the whole point of the Emergent culture that they didn't use AI and relied entirely on hyper-focused humans with enslavement implants?


Didn't Sherkaner Underhill initially think that the Emergent zipheads were actually an AI?


Wasn't that Pham Nuwen, after the initial attack?


If off-the-shelf AI has the intelligence of even a ten-year-old, imagine the amount of automated snooping that can go on.

Why worry about sifting for keywords in text messages when you can literally read all the text messages and infer meaning from them, even when it's deliberately obfuscated?
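The gap between the two approaches is easy to illustrate. A minimal, purely hypothetical sketch of the keyword-sifting dragnet (the watchlist and messages are made up for illustration) shows exactly why a euphemism defeats it:

```python
# Naive keyword filter: flags a message only on an exact watchlist hit.
# An AI that infers meaning would not be fooled this easily.
WATCHLIST = {"bomb", "attack", "explosive"}

def flag_message(text):
    """Return True if any watchlist word appears verbatim in the message."""
    words = set(text.lower().split())
    return bool(words & WATCHLIST)

print(flag_message("the attack is at noon"))          # True: exact keyword hit
print(flag_message("I'm bringing home the package"))  # False: euphemism slips through
```

The euphemism "the package" sails past the filter, which is precisely the loophole that meaning-inferring surveillance would close.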

It could lead to a Brazil-like future where you're drawn into a mess of trouble because you used too many euphemisms when texting your significant other. "I'm bringing home the package right now..."

What do we do to push back against this kind of thing? Drop off the grid as Stallman would have you try? Eject yourself from society as a whole? Or will it be practical at that point to have sufficiently private, well encrypted channels of communication that you won't have to worry too much about that sort of thing? Technology does cut both ways.

It doesn't require an emergent intelligence to cause a massive shift in the way we view technology. A number of low-level intelligences that can be easily replicated may be the first disruption.


Then again, maybe AI-based surveillance could eliminate crime without the embarrassment of other people violating your privacy.


Maybe I'm just naive, but it seems to me like in a world where the power of the atom bomb can fit in a briefcase, there's no need for androids to get bombs within striking range of their intended targets.


That world already exists. The hard part is getting the isotopes necessary, but there are elements that will do the job. See my post: http://news.ycombinator.com/item?id=4067089

For a terrorist, getting the elements would be hard, but a government would have no trouble, and given the long half-life it's a pretty safe bet that this bomb actually exists somewhere.


>That world already exists.

Which was my point. Worrying about hypothetical android bomb carriers is silly when we live in a world where you can pack an atomic bomb into the space of a traveler's briefcase. That allows for enough mundane methods of delivery to make suicide androids unnecessary. And as pointed out elsewhere, you don't need sentient AI to create something sophisticated enough to deliver a bomb.

EDIT: Struck the word "almost" from its place next to "silly" in the sentences above.


> What's the difference with a car that could have a bomb in its trunk? Or a bag?

Or, for that matter, a personal firearm?


Or an internet?



