Hinton obviously has no financial interest; if anything, he has a massive reputational disincentive against warning of AI doom or the impact of AGI unless he genuinely believes it. Though of course he is old and at the end of his career, so one could argue he is overly cautious and misreading the trends.
It would only make sense to compare current sentiment about imminent AGI to past predictions of imminent AGI if conditions then were the same as conditions now. Conditions today are obviously different from conditions at any point in the past century.
Even if you argued that we are as far from AGI now as we were in 1990, 1970, or 1920, if I showed ChatGPT to people in any of those eras, they would each claim it is AGI. One could counter that its apparent intelligence is just a parlor trick, a consequence of the wealth of training data it has access to, but many capabilities once declared "impossible in our lifetime" are nevertheless being achieved by LLMs every year.
To me it seems less that LLMs and current AI are all some trick and that we are eons away from true reasoning, and more that many of the things that make human intelligence useful are still hard to crack. We are nevertheless building a bridge to that level one order of magnitude at a time, with lots of engineering effort also eroding the gap month by month.
We can cherry-pick which AI experts to pay attention to. Yann LeCun recently called current AI "dumber than a cat." Stephen Wolfram has written some very well-thought-through articles about LLMs and doesn't find much to worry about. Gary Marcus has been a consistent skeptic. I wouldn't ignore Geoff Hinton, but he can fall victim to the same mistakes anyone else can, as Gary Smith [1] described.
Pointing out that people of any era can easily be fooled by things they don't fully understand supports my position. Putting aside the obvious stock pumping and hype going on in the current AI bubble, plenty of people want to believe in AGI, or want to fear it, or just want to stay relevant. A decade from now we will only remember the people who made correct predictions.