
But once AI becomes more intelligent than humans, every conceivable role will definitionally be better filled by that AI than by actual humans.


Great. In simple terms it means that machines will do all the work and we'll drink all the beer.


Are you looking forward to drinking beer for the rest of your life?


No. I'll get a very smart computer from IBM to do it for me.


Seriously, though, there will be no use for you (or any other human) whatsoever post-AI.

And that's the best-case scenario—when AI is being used for the common good. If the people with early access to it try to use it to rule the world and enslave humanity, they'll probably succeed.

I'm not trying to be an irrational doomsday predictor, here; these are just the conclusions that I come to when I work off of the premise "humanity will have access to cheap human-level intelligences".


This sounds like nothing more than promoting the status quo.

Why do you automatically assume that "no use" in this context would turn out to be a bad thing? The way I see it, if we had greater intelligences working for our common good, we would be able to solve any problem better than a human could - including the possible problem of feeling useless. This would be the best-case scenario, and IMHO it would be much better than the world we have today. Possible solutions, from my limited human brain, could include bringing human brains up to the level of the greater intelligence and thereby finding new problems to solve, altering human drives so uselessness is no longer a problem, or abolishing AGI entirely or partially. In such a best-case scenario, any problem could be solved better and faster than humans could manage on their own.

I agree with you on the worst-case scenario point. There are huge ethical risks and implications in creating very powerful and capable machines. This means that we need to go into this situation with our eyes open, and make sure that we discuss ethics, transparency and consequences from day one.

Enslaving humanity is a basic human drive, and I like to believe that we can do better than that if we try hard. In a pure cost-benefit analysis, it is obvious that it would be best to use such technology to help out everyone.


>The way I see this, if we happen to have greater intelligences working for our common good, we would be able to solve any problem better than a human could - including a possible problem of feeling useless.

That's a logical error: the existence of a superhuman intelligence might cause more problems than that superhuman intelligence can solve.

When humans solve the personal problem of "feeling useless" they almost without exception do it outside of a vacuum. Their feeling of usefulness tends to stem from the impact that they have on humanity.


We are good at having human experiences. I think AIs will really like sites like ycombinator or reddit: people sharing their views on the world in a format that is very accessible to computers.

Some guy (maybe Kurzweil?) said something along the lines of always being more afraid of stupidity than of intelligence, and I can agree with that a lot. I'm far more afraid of humans destroying the world out of stupidity than of hyperintelligences we created and taught trying to get rid of us.


Do you have a pet? Do you think it cares about the fact that you are, supposedly, the "intelligent" one?

My point is: what reason do you have to believe that the only lives worth living are the ones where people are on the top of the pyramid?


Your argument assumes that greater intelligence implies greater fitness for all jobs. I'm pretty sure there are jobs for which intelligence is not the primary requisite.



