There’s quite a gulf between saying something is a dead end on the road to full general artificial intelligence and saying it’s all dead and will collapse.
I have no idea whether LLMs will become general AI, but they definitely aren’t going anywhere.
I think the question we've failed to properly ask is "are all humans general intelligences?"
I frequently encounter people who appear less refined in reasoning and communication than an LLM. Granted, being an awkward communicator is excusable, but interrogating these people's belief systems seems to reveal a word model more than a world model.
The other way to look at it is that people are still really useful and helpful even if they don’t have a PhD or super advanced reasoning capabilities. A lot of useful work requires neither, so LLMs can do that work at least.
There was once a blog (maybe it still exists, I don't know) called Bitfinexed, which researched fraud perpetrated by the Bitfinex/Tether creators. For multiple years he forecast an imminent Tether crash every month, based on multiple data points and logical conclusions. His prediction was wrong, since the Tether organization is still alive and out of jail. But that doesn't mean his chain of arguments and logic was wrong. It was simply a case where the fraud and the stakes were so big that, through corruption and some asset infusions, the whole scheme was saved.
Just because something is a case of "old man yelling at clouds" doesn't mean the underlying logic is wrong. Sometimes markets can stay irrational longer than we expect.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it" or something like that. There's quite an appetite on the internet for ai derision articles.
It doesn’t matter what incredible things neural networks do; in his mind they’re always a dead end that will collapse any day now.