
> There will never be enough computing power to create AGI using machine learning that can do the same [as the human brain], because we’d run out of natural resources long before we'd even get close

I don’t understand how people can so confidently make claims like this. We might underestimate how difficult AGI is, but come on?!



I don't think the people saying that AGI is happening in the near future know what would be necessary to achieve it. Neither do the AGI skeptics; we simply don't understand this area well enough.

Evolution created intelligence and consciousness. This means that it is clearly possible for us to do the same. That doesn't mean that simply scaling LLMs could ever achieve it.


I'm just going by the title. If the title was, "Don't believe the hype, LLMs will not achieve AGI" then I might agree. If it was "Don't believe the hype, AGI is hundreds of years away" I'd consider the arguments. But, given that brains exist, it does seem inevitable that we will eventually create something that replicates them, even if we have to simulate every atom to do it. And once we do, it certainly seems inevitable that we'll have AGI, because unlike a brain, we can make our copy bigger, faster, and/or duplicate it. We can give it access to more information faster, and more inputs.


The assumption that the brain is anything remotely resembling a modern computer is entirely unproven. And even more unproven is that we would inevitably be able to understand it and improve upon it. And yet more unproven still is that this "simulated brain" would be co-operative; if it's actually a 1:1 copy of a human brain then it would necessarily think like a person and be subject to its own whims and desires.


>The assumption that the brain is anything remotely resembling a modern computer is entirely unproven.

Related discussion (from 2016): https://news.ycombinator.com/item?id=11729499


We don’t have to assume it’s like a modern computer; it may well not be in important ways. But modern computers aren’t the only possible computers. If it’s a physical information-processing phenomenon, there’s no theoretical obstacle to replicating it.


> there’s no theoretical obstacle to replicating it

Quantum theory states that there are no passive interactions.

So there are real obstacles to replicating complex objects.


That's only a problem if the relevant functional activity is a quantum effect. We have no problem mass producing complex macroscopic functional objects, and in the ways that are relevant, human brains are all examples of the same basic system. Quantum theory doesn't seem to have been an obstacle to mass producing those.


The main problem I see here is similar to the main problem in science:

Can we, being inside our own brains, fully understand our own brains?

Similarly, can we, being inside our Universe, fully understand it?


How is that "the main problem in science"?

We can study brains just as closely as we can study anything else on earth.


> it does seem inevitable that we will eventually create something

Also don't forget that many suspect the brain may be using quantum mechanics, so you will need to fully understand and document that field.

Whilst of course you are simulating every atom in the universe using humanity's complete understanding of every physical and mathematical model.


People have been saying that for a decade, and no one actually believes scaling is all you need. They say that to raise more resources and to diss the symbolists. AI advancement has been propelled by a steady stream of new architectural innovations, which always seem to be invented as soon as sufficient compute is available.


> Evolution created intelligence and consciousness

This is not provable; it's an assumption. Religious people (who account for a large percentage of the population) claim intelligence and/or consciousness stem from a "spirit" which existed before birth and will continue to exist after death. Also unprovable, by the way.

I think your foundational assertion would have to be rephrased as "Assuming things like God/spirits don't exist, AGI must be possible because we are AGI agents" in order to be true.


There's of course a wide spectrum of religious thought, so I can't claim to cover everyone. But most religious people would still acknowledge that animals can think, which means either that animals have some kind of soul (in which case, why can't a robot have a soul?) or that being ensouled isn't required to think.


> in which case why can't a robot have a soul

It's not a question of whether a robot can have a soul; it's a question of how to a) procure a soul and b) bind said soul to a robot, both of which seem impossible given our current knowledge.


What relevance is the percentage of religious individuals?

Religion is evidently not relevant in any case. What ChatGPT already does today, religious individuals 50 years ago would have near-unanimously declared behavior that only a "soul" could produce.


> What relevance is the percentage of religious individuals?

Only that OP asserted as fact something that is disputed as fact by a large percentage of the population.

> Religion is evidently not relevant in any case.

I think it's relevant. I would venture to say proving AGI is possible is tantamount to proving God doesn't exist (or rather, proving God is not needed in the formation of an intelligent being)

> What ChatGPT already does today, religious individuals 50 years ago would have near-unanimously declared behavior that only a "soul" could produce

Some religious people, maybe. But that sort of blanket statement is made all the time: "[Religious people] claimed X was impossible, but science proved them wrong!"


I think their qualifier "using machine learning" is doing a lot of heavy lifting here in terms of what it implies about continuing an existing engineering approach, cost of material, energy usage, etc.

In contrast, imagine the scenario of AGI using artificial but biological neurons.


For some people, "never" means something like "I wouldn't know how, so surely not by next year, and probably not even in ten".


For other people "never" means "I know exactly how to do this but it will take longer than the heat death of the universe"


So imagine this: you work at a company getting millions of dollars for spreading the word. What do you do?

There are fewer and fewer people in IT and data companies who really care about the correctness and efficiency of solutions. In many people's opinions (not only mine, as I've learned over the last few weeks), ChatGPT is getting worse with every release. The language gets better, the lies and hallucinations get better, but as an informative and helpful tool it's more like Black Mirror's idea of filling the gap left by someone who died than a real improvement in science or social metrics.


"There will never be enough computing power to compute the motion of the planets because we can't build a planet."



