
Gary Marcus saying the same things Gary Marcus has always said.

It doesn’t matter what incredible things neural networks do; in his mind, they’re always a dead end that will collapse any day now.



There’s quite a gulf between saying something is a dead end on the road to full general artificial intelligence, and saying it’s all dead and will collapse.

I have no idea if LLMs will be general AI, but they definitely aren’t going anywhere.


I think the question we've failed to properly ask is "are all humans general intelligences?"

I frequently encounter people who appear less refined in reasoning and communication than an LLM. Granted, being an awkward communicator is excusable, but interrogating these people's belief systems seems to reveal a word model more than a world model.


The other way to look at it is that people are still really useful and helpful even if they don’t have a PhD or super advanced reasoning capabilities. A lot of useful work requires neither, so LLMs can do that work at least.


There should be a [Gary Marcus] tag on the link; it's all one needs to know. As soon as I saw that, I closed the page.

(I notice in retrospect that it does show it's a link to his substack, so I guess that's sufficient warning; I didn't see it.)


I tend to get more benefit from looking at why someone thinks what they think than from what they think.


There was once a blog (maybe it still exists, idk) called Bitfinexed, which researched fraud perpetrated by the Bitfinex/Tether creators. For multiple years he forecast an imminent Tether crash every month, based on multiple data points and logical conclusions. His prediction was wrong, since the Tether org is still alive and out of jail. But this doesn't mean that his chain of arguments and logic was wrong. It was simply a case where the fraud and the stakes were so big that, through corruption and some asset infusions, the whole scheme was saved.

Just because something is a case of "old man yelling at clouds" doesn't mean the underlying logic is wrong. Sometimes markets can stay irrational longer than we expect.


True, but the novelty of the post here is that Sutton now agrees with him.


"It is difficult to get a man to understand something, when his salary depends on his not understanding it" or something like that. There's quite an appetite on the internet for ai derision articles.



