The core argument has a logical gap; it doesn’t matter if most (present) ML applications need AGI or not. What matters is the value proposition of AGI, independent of how we currently conceive of ML applications.
Focus on this question: “Is general intelligence at some level valuable at a particular price point?” General intelligence is generally valuable, so there is a pressure to advance it, whether by improving capability and/or decreasing cost.
Now the question becomes empirical — what kind of general intelligence can be built that achieves certain parameters?
Aiming to exceed the power efficiency of the human brain is a tempting target. (Whether it is wise is another question; what happens when human intelligence doesn’t provide competitive advantage?)
That's a fair point I hadn't considered! If intelligence is valuable in humans, and some cost factor of advancing human intelligence can be surpassed digitally (I don't know how you'd measure intellect efficiency, though; somehow comparing calories in to good decisions out?), then there's an economic incentive.
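One way to make that "calories-in/good-decisions-out" idea concrete is to compare energy per unit of useful output. A minimal sketch, where the brain's ~20 W figure is commonly cited but every other number (output rates, accelerator power, tokens per word) is a made-up assumption for illustration only:

```python
# Back-of-envelope "intelligence efficiency" as joules per word of output.
# BRAIN_POWER_W is a commonly cited figure; all other numbers are
# hypothetical assumptions for illustration, not measurements.

BRAIN_POWER_W = 20.0          # approximate resting power of the human brain
HUMAN_WORDS_PER_SEC = 2.0     # assumed rate of considered written output

GPU_POWER_W = 700.0           # assumed draw of one datacenter accelerator
MODEL_TOKENS_PER_SEC = 50.0   # assumed generation rate on that accelerator
TOKENS_PER_WORD = 1.3         # assumed rough tokens-per-word ratio

def joules_per_word(power_w: float, words_per_sec: float) -> float:
    """Energy cost of producing one word of output."""
    return power_w / words_per_sec

human_cost = joules_per_word(BRAIN_POWER_W, HUMAN_WORDS_PER_SEC)
model_cost = joules_per_word(GPU_POWER_W, MODEL_TOKENS_PER_SEC / TOKENS_PER_WORD)

print(f"human: ~{human_cost:.1f} J/word")
print(f"model: ~{model_cost:.1f} J/word")
```

Under these toy numbers the human still wins on energy per word, which is why "exceed the brain's power efficiency" is a target rather than a given; the economic pressure is on shrinking that ratio.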
But that feels very far off, even on the exponential efficiency curve we're currently riding. It can't go on forever.