
The core argument has a logical gap: it doesn’t matter whether most present ML applications need AGI. What matters is the value proposition of AGI, independent of how we currently conceive of ML applications.

Focus on this question: “Is general intelligence at some level valuable at a particular price point?” General intelligence is generally valuable, so there is pressure to advance it, whether by improving capability, decreasing cost, or both.

Now the question becomes empirical — what kind of general intelligence can be built that achieves certain parameters?

Aiming to exceed the power efficiency of the human brain is a tempting target. (Whether it is wise is another question; what happens when human intelligence doesn’t provide competitive advantage?)



That's a fair point I hadn't considered! If intelligence is valuable in humans, and digital systems can beat some cost factor of human intelligence like this (I don't know how you'd measure intelligence efficiency; something like calories in per good decision out?), then there's economic incentive.
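A back-of-envelope sketch of that "calories-in / decisions-out" idea. The only grounded number below is the human brain's ~20 W power draw; every other figure is a made-up placeholder to show the shape of the comparison:

    # Joules spent per unit of useful output, for comparing substrates.
    def joules_per_decision(power_watts, decisions_per_second):
        return power_watts / decisions_per_second

    # Human brain runs on roughly 20 W; the decision rate is a made-up
    # placeholder -- substitute whatever output measure you prefer.
    human = joules_per_decision(power_watts=20.0, decisions_per_second=1.0)

    # Hypothetical digital system: both numbers are pure assumptions.
    machine = joules_per_decision(power_watts=500.0, decisions_per_second=100.0)

    print(f"human:   {human:.1f} J/decision")
    print(f"machine: {machine:.1f} J/decision")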

But that feels very far off, even on the current exponential efficiency curve we're riding. It can't go on forever.


Some suggestions:

- Don't assume only digital intelligence.

- Intelligence is fundamentally valuable to humans; no need to add any caveats.

> But that feels very far off, even on the current exponential efficiency curve we're riding. It can't go on forever.

What exactly feels far off to you?

In many cases, sure, exponential growth can't go on forever, but one has to be careful to spell out what one means. What are your axes?

Don't forget that each successive technology may introduce a new growth pattern.

One has to be quite careful in analyzing and forecasting these things.
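One toy way to see the "new growth pattern" point: individual technologies often follow S-curves that saturate, yet a succession of them can look like sustained exponential growth for a long time. A sketch, with every parameter invented for illustration:

    import math

    def logistic(t, ceiling, rate, midpoint):
        """One technology's S-curve: growth that saturates at a ceiling."""
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    # Three hypothetical successive technologies; all parameters here are
    # invented. Each one saturates, but each arrives with a higher ceiling,
    # so the aggregate keeps rising long after the first curve flattens.
    def aggregate(t):
        return (logistic(t, ceiling=1, rate=1.0, midpoint=5)
                + logistic(t, ceiling=10, rate=1.0, midpoint=12)
                + logistic(t, ceiling=100, rate=1.0, midpoint=19))

    for t in range(0, 25, 4):
        print(t, round(aggregate(t), 2))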



