Ask HN: Do you believe AGI is (practically) possible?
5 points by atleastoptimal on March 19, 2024 | 44 comments
Define AGI (artificial general intelligence) as a computer system generally as smart and capable as a median human at all tasks, one that can be trivially scaled to learn to do any task at least as well as a human can.

Most leaders in AI claim it'll be developed before 2030; some claim as early as 2027. However, a lot of people seem to think AI is a hype cycle and that deep learning has hit, or will hit, a wall significantly below AGI capabilities.

Do you believe that AGI is possible? If so, do you believe it will be achieved within the decade? If not, what are your best arguments why it is impossible or not going to happen in the foreseeable future?



I believe by 2030 or so we may have models that can generate video, audio, and text that is indistinguishable from a normal human being ("The Zoom Call Turing Test"). But that doesn't mean anything: it's unclear whether such a system is intelligent, or merely a highly generalized mimic.

It will probably require an enormous amount of compute power and data, which means it will only be available to groups with $100+B to spend.


> it's unclear whether such a system is intelligent, or merely a highly generalized mimic.

This seems to ignore the entire point of the Turing Test.


Precisely! I await the time when more people realize the Turing Test doesn't tell you anything about the underlying intelligence of the agent you're interacting with.


What's the difference, then, between something that is intelligent and something that can do everything an intelligent thing can do but isn't intelligent?


That's a great question! I would say that probably (but hard to prove), if something is a generalized intelligence mimic, it's probably best to treat it as intelligent, and as long as we don't have a reliable, quantitative metric for intelligence, we won't be able to say for sure. Although I was originally a solipsist, I take it as given that other humans, dogs, cats, octopuses, and a wide range of other creatures are "truly" intelligent, but I honestly have no biophysical way to test that.


> I belive by 2030

Achieved already in 2024


There's no tech today that can pass a Zoom call Turing Test, not even close once you account for the live aspect. There are plenty of things on a Zoom call that LLMs currently can't do, like "Hey, can you whistle your favorite tune?"


I don’t think you need a strict superset of human talents to be an AGI. Certainly whistling tunes is not required.


I believe a (packer[1]) AGI is possible now thanks to the memory prosthesis that is part of MemGPT[2].

Packers are people who memorize facts to cope with the world. My experience working with them is that they are very capable people, but when they are forced to deal with computers, their mental model of the world breaks... any deviation from the expected response throws them off. If you give them instructions that account for those variations, they then do just fine again. 8) (It took a while to figure out what was going on, and how to deal with them.)

I believe the small context window of LLMs will tend to force them into a packer style of thinking. (Once a model is no longer in training, it only has its "compiled" knowledge to fall back on, and it can't learn any new deep assumptions.)
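
To illustrate the memory-prosthesis idea, here's a toy sketch in Python (my own construction, not MemGPT's actual API): facts live in a store outside the model, and only the few most relevant ones get paged back into the limited context window.

  archival = []  # long-term store that lives outside the context window

  def remember(fact):
      archival.append(fact)

  def build_prompt(question, budget=3):
      # naive relevance: rank stored facts by word overlap with the question,
      # then page only the top few back into the window-sized prompt
      q = set(question.lower().split())
      ranked = sorted(archival, key=lambda f: len(q & set(f.lower().split())), reverse=True)
      return "\n".join(ranked[:budget]) + "\n\nQ: " + question

  remember("The deploy script lives in tools/deploy.sh")
  remember("Alice prefers tabs over spaces")
  print(build_prompt("Where is the deploy script?"))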

I'm not sure how you would go about training a (mapper) style AGI.

One thing that is very clear to me is that a superhuman level of learning happens while an LLM is being trained. Features present in the training data are compiled into the resulting weights. The fact that the resulting intelligence is even able to communicate coherently through the slightly randomized bag-of-words output system, with all of its inherent loss, surely hints at this.

[1] https://wiki.c2.com/?MappersVsPackers

[2] https://memgpt.ai/


> Do you believe that AGI is possible?

Yes. A bunch of atoms orbiting around a star ended up in a configuration that can think: our stardust human brains. The fact we exist proves atoms can be configured to have GI (no need to call it artificial).

I don't believe in souls created by divine intervention. So I believe the essence of life or intelligence is something that can be created by us. The same way gravitational and chemical forces led to the human brain.

What is possible has barely been explored at all.


> A bunch of atoms orbiting around a star ended up in a configuration that can think

And perhaps an unknown quantity of as-yet-unidentified other ingredients.

"Of course cars drive themselves. Every day I see them driving on the roads from my top-floor office"


But it may be entirely possible that there's some form of intelligence that can't be replicated by digital computers. Humans are very analog after all.


I see it as "humans have a lot of sensors and data coming in to be processed".

For generating knowledge, or for logical thinking and reasoning, no controller for the muscles of the tongue, the vocal cords, breathing, etc. is needed.

So if you strip away whatever the brain needs to keep itself alive and to control the body, there are far fewer neurons left, and then the question arises for me:

Is the difference just in how a neuron (analog) and an LLM weight (discrete in time and value) work? Or is it the degree of connection and interlinking between the neurons that makes the difference, compared to the inner workings of an LLM?

If it's the interconnection and interlinking, then more training on more parameters and some future data structures will surely help. If it's the inner workings of a neuron, new attention mechanisms or new techniques for "querying" could do the trick...

For me, we already have an "intelligence". It's just still dumb xD


I believe it is probably possible, due to the very, very mathematical nature of the universe and its reliable calculability.

The only catch might be that semiconductors thermally can't do it: to dissipate the heat you need distance, which then limits communication speed due to c, and you might need that speed.
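
A rough back-of-envelope to make the c constraint concrete (my numbers, not the parent's):

  c ≈ 3 x 10^8 m/s, so a signal covers at most ~30 cm per nanosecond
  spread the hardware over ~3 m to shed the heat  =>  >= 10 ns one-way latency
  => any tightly synchronized loop across that span tops out around ~100 MHz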

OTOH we can do tricks evolution missed so there is that. Quantization isn’t available for grey matter.


It's not possible at our current levels of understanding. Forget about AI, and step it back to things we've been collectively trying to understand for hundreds of years:

Dominant models of economics don't work, we don't understand consciousness, or understand human motivations with much clarity (e.g. what causes people to become entrepreneurs).

AI would be built on models, yet how do we model the complex biologically-derived survival instincts and biases that have evolved in us over hundreds of thousands of years? It has to be impossible unless the AI has a biological component, since models aren't reality. Yet our emotions are the underlying - often subconscious - drivers of our actions.

Even your phrasing. There is no "median human", and how could there be? Calculating a median relies on variables, yet the number of variables is practically infinite, leading to different "medians" depending on the input variables.

Toning down expectations - e.g. to avoid aiming for capability "in all tasks" - is likely to lead to great benefits, but without a biological component I just can't imagine AI ever reaching the levels you mention. It currently can't "learn", only infer, combine, replicate and predict patterns.

In general, for the last 200 years we've thrown out the importance of consciousness, hoping to find mechanical explanations for the universe and for complex phenomena such as financial markets and economics. But those models just don't work. Fortunately, progress is being made with studies into biases, e.g. behavioural finance & economics. Yet their potential for leading to prediction is still questionable, at both individual and group levels.

So I think the goal of non-biological AI is fundamentally impossible since it's based on a flawed premise of mechanical humans interacting in a mechanical universe.


> a computer system generally as smart and capable as a median human

This implies:

  1. a median human is somehow smart
  2. a system that behaves like a human is desirable
For (1), I think it's highly debatable given the number of dumb errors/mistakes I (and the average developer/sysadmin) make daily, heck even hourly/minutely. Yesterday, I committed a format string instead of a formatted string (aka `"{foo}bar"` vs `f"{foo}bar"` in Python). That's not really smart.
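
Concretely, a minimal sketch of that slip (the `foo` value here is made up):

  foo = "value"
  print("{foo}bar")   # plain string: prints the literal text {foo}bar
  print(f"{foo}bar")  # f-string: prints valuebar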

For (2), do we want an autonomous system to mimic the weakest link in the security chain of an IT system? aka: the human. Do we want AIs that put passwords next to the computer screen? Or use `password123` because it's easier to remember?

My point is, we don't want AGI as smart as us; we want AGI smarter than us, and far more efficient than us. I don't want an AGI that forgets the `f` in my example earlier.


Sam Altman's target was 2025: https://twitter.com/sama/status/1081584255510155264?lang=en

This was the state of AI then: https://www.theverge.com/2019/11/7/20953040/openai-text-gene...

I think even the experts can get overexcited though, in line with how McAfee predicted Bitcoin would hit a million dollars.


I wonder what McAfee is up to these days?

edit: It appears he's committed suicide in jail


I thought you meant that as an off-key joke, and wasn't sure how to respond.

He did strongly hint in advance (as in with a tattoo) that it wasn't suicide. But faking his own assassination sounds like a McAfee thing to do. I guess we'll never know.

Sometimes I envy these high flyers with this reality distortion superpower, and sometimes I don't.


Not a joke; I used to keep up with him years ago and apparently missed the news that he had killed himself. Honestly, it sounds like a much easier way out than being tortured by Colombian cartel members.


I think it's definitely going to be done by 2040; 2030 is a maybe, though. I'm a bit skeptical about using LLMs as the basis for AGI - you would call a human conscious even if they had never learned a language in their life. Maybe some kind of model trained on audio/video with a video-based "internal monologue" would be more likely to achieve it.

But by 2040 I think we'll have the capabilities to simulate every neuron in a human brain, and that will get us undeniable AGI, unless intelligence comes from some other source that we haven't discovered yet.


"Definitely". You have an extremely high estimation of yourself as a soothsayer.


It is silly to assume we can reach something so sophisticated before we are even able to define what it is, exactly, that we're trying to achieve.

Am I the only one seeing the naked king here?


Well, intelligence itself is hard to define. We'd consider pretty much all humans "intelligent" even though certain things that are easy for some humans are near impossible for others. The greatest common denominator of human intelligence that people generally seem to imply when they talk about AGI is a multivariate overlap of ability to learn, embodiment, abstract pattern recognition, visual and motion acuity, and emotional understanding. However, many of those facets are hard to test.

Intelligence is something that has thousands of variables. It is a spectrum with many points of "emergence" where something nearly impossible before becomes possible. There are of course the many benchmarks we give LLMs (MMLU, ARC, etc.), but the more practical test is whether a model can completely replace a human in an economically viable activity.


Everything you've written here is subject to interpretation.

Even “replace human”.

For example, today there are only 2 pilots in an airliner, and even that is "just in case". In the early days there were 3.

So literally technology has already “completely replaced” some humans.

Does it mean “intelligence”?


I believe it will not happen until we have a firm definition and understanding of what intelligence is.

Basic intuition and experience --- you can't build/create that which you do not understand.

That being said, I see no reason to believe it is possible to achieve AGI using digital binary silicon gate logic.

The only working examples we have of intelligence are analog and organic.


> I believe it will not happen until we have a firm definition and understanding of what intelligence is.

It's quite possible that A(G)I systems will be built that outperform humans at various tasks before our (full) understanding of "how?" catches up.

More likely: the list of tasks that AI systems can do better than humans keeps growing. Yesterday it was playing chess, today it's 'playing' with language, tomorrow it serves as your (mental) sidekick, does your shopping, or advances arts & science.

I'm convinced we'll home in on a better grasp of what "general intelligence" is (in essence). How it emerges from neural nets or biological brains. And that things like personality, skills, moods, memories, creativity, etc etc are ultimately the result of structure, scale, the way a brain (biological or artificial) is connected to its environment, and learning.

Yes, this may remove some of the 'magic' that makes us human. Or it may heighten our appreciation of the marvel that (biological) brains are.


So would you eat a wild mushroom if an AI classification app said it was OK?


Thing is, we don't need AGI for the singularity to occur. We're already at a place where these things are useful. AGI would make them more useful, yeah, but focusing on AGI is the wrong question. Look at what models are able to enable along the way. We don't need full AGI to get there.


Don't know about AGI, but humanity has convincingly achieved NGS (Natural General Stupidity).


Overtrained on too many Facebook and TikTok parameters. Too little memory in the main brain, allowing a context window of < 150 tokens at most.

Some also forgot their root pwd - so no chance for reboot.

We're doomed. Ben Affleck is Batman. We'll all die. The world will come to an end.


Yes of course it's possible.

The only people who deny the possibility of AGI believe that there is some supernatural Jesus magic in our neurons.

As far as timeline goes, who knows? Sometime in the next billion years.


Possible, but maybe 2060-2100. The big rub with LLMs is training time and methodology. RAG exists because, despite the name, these AI systems cannot really learn, at least not human-level stuff.


I think it is probably possible, but probably not before 2030; it will take much longer than that.


Not possible within this century.


Why do you believe this?


Because we are nowhere close to it. There's a difference between a moon landing and interstellar human travel.


What specifically makes it so infeasible in your opinion, given the rate of AI progress over the past 5 years? Do you believe no amount of scaling will reach human intelligence?


I see an often-made error here: a confusion between having a "knowledge graph/network/retrieval system" and having a brain, which uses 99% of its mass for movement control, sensor data processing, keeping itself alive, etc.

So it's a definitional problem: what is general intelligence, and what should it be capable of doing? We're far, far away from walking robots on the streets, but we're not very far from reasoning systems that can be used to retrieve information, much like what happens whenever one asks an expert: the information flows out of the expert's brain, which uses 99 percent of itself for the movements needed to translate that information into oscillating air waves.

I give it 1-2 years, given the pace at which the whole field is developing and the money being pumped into it. All the people saying "nah... next century", or "it's a pigeon", or "nothing different from a look-up dictionary" - all these folks are clearly confused and not clear about things.


You write like a zhopa or zhopoi.


Both, I guess. Next thing to try v zhope...


I do not believe scaling the existing models will reach human intelligence.


it will not happen with LLM, so it's a complete unknown what could be next and when


Maybe, maybe not. What is possible is investor returns lol



