Ask HN: Have you altered your life plans due to coming AI developments?
7 points by atleastoptimal on Oct 14, 2024 | 18 comments
The heads of the major AI labs (Dario Amodei, Sam Altman), multiple now-Nobel laureates (Demis Hassabis, Geoffrey Hinton), and many others have echoed a similar sentiment in their public communications over the past year:

1. AGI is coming

2. It will completely change society, the meaning of work and the structure of our economy, perhaps making human labor irrelevant, if it does not destroy us

3. It will likely arrive in 2-10 years

Hinton himself said that he plans to "get his affairs in order", as he believes humanity only has 4 years left. If you agree with them, you would have to concede that any plan with a time horizon longer than the window until AGI will be completely upended, and perhaps made irrelevant. If so, have you considered this, and has it altered any of your plans for the near future?

If you have not, is it because you do not believe one or more of the 3 points above, or is it because you feel that, since nobody really knows how it will go down, it is pointless to worry, fret, or change things?



> If you have not, is it because you do not believe one or more of the 3 points above, or is it because you feel that, since nobody really knows how it will go down, it is pointless to worry, fret, or change things?

I've said it before, and I'll say it again: I am not particularly worried about "evil AI" scenarios à la The Terminator or The Matrix. OTOH, I think there is a real, but hard-to-quantify, chance of significant job displacement at a scale that hasn't been seen before, and the emergence of even greater disparity in wealth/power/quality-of-life between the "haves" and "have-nots". Or to put it another way, I could see a "cyberpunk dystopia" of sorts emerging.

So am I "altering my life plans" in response to that? Not in major ways. And to the extent that I have, I'm not honestly sure I want to talk about it here. I'll just say that I have started focusing more on saving money than what I used to, and pick up another couple of boxes of ammo every now and then. But still, no, for the most part I haven't made any big changes.


If "humanity only has 4 years left", then what's the point in "getting your affairs in order"?

Personally, I don't believe 3. I'm not sure I believe 1, either.


What’s the point? Next season of Doomsday Preppers.


No. I don’t plan for the future in any case because I can’t predict what might happen. And I certainly don’t make decisions because of what someone else predicts. Do you think people should alter their life plans because of predictions around climate change? The second coming of Jesus?

I prefer to deal with the reality of now, not worry much about what could happen, and keep myself adaptable.


>Do you think people should alter their life plans because of predictions around climate change?

Such predictions should reasonably affect plans like purchasing property in areas prone to the negative effects of climate change, such as hazardous weather or water scarcity.

>The second coming of Jesus?

That is different from AGI timelines, since there is no scientific or independently corroborated evidence pointing to it being a reasonable probability.


What evidence do we have other than predictions from people who have an obvious financial interest, or who just pump stock, like Altman?

Do you know how many people have predicted imminent AGI (and many other doomsday scenarios) in the last 100 years? I have lived through two AI winters already. I won’t say it can’t happen but I don’t think anyone knows when or what it might look like, or what social/economic effects it might have.


Hinton obviously has no financial interest; in fact, reputationally he has a massive incentive against warning of AI doom or the impact of AGI if he did not believe it. Though of course he is old and at the end of his career, so one could argue he is overly cautious or misled about trends.

Comparing current sentiment about imminent AGI to past predictions of imminent AGI only makes sense if conditions then were the same as conditions now, and conditions now are obviously different from conditions at any time over the past century.

Even if you argued that we are as far from AGI now as we were in 1990, 1970, or 1920, if I showed people from any of those eras what ChatGPT can do, they would claim it is AGI. One could claim that its apparent intelligence is just a parlor trick, a consequence of the wealth of training data it has access to, but nevertheless many claimed "impossibilities in our lifetime" with regard to what LLMs can do are being achieved every year.

To me it seems less that LLMs and current AI are all some trick and that we are eons away from true reasoning, and more that many of the things that make human intelligence useful are still pretty hard to crack; nevertheless, we are building a bridge to that level one OOM at a time, with lots of engineering effort also eroding the gap month by month.


We can cherry-pick which AI experts to pay attention to. Yann LeCun recently called current AI "dumber than a cat." Stephen Wolfram has written some very well-thought-through articles about LLMs and doesn't find much to worry about. Gary Marcus is another skeptic. I wouldn't ignore Geoff Hinton, but he can fall victim to the same mistakes anyone else can, as Gary Smith [1] described.

Pointing out that people of any era can get easily fooled by things they don't fully understand supports my position. Putting aside the obvious stock pumping and hype going on in the current AI bubble, plenty of people want to believe in AGI, or want to fear it, or just want to stay relevant. A decade from now we will only remember the people who made correct predictions.

[1] https://mindmatters.ai/2024/01/computers-still-do-not-unders...


I'd like to see your scientific evidence that AGI is a reasonable probability.


I hope you'd at least concede without needing bulletproof evidence that it's a more reasonable possibility than the second coming of Jesus.

The best evidence I can offer is the continued saturation of benchmarks, the continued effectiveness of scaling laws with respect to improvements in general reasoning, the massive algorithmic improvements that lead to better use of existing models and of inference-time compute, and the absence of any clearly articulated fundamental blocker, i.e., some difference between AI and human reasoning that these developments cannot close.
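
To be concrete about what "scaling laws" means here: the published Chinchilla-style fits model loss as a smooth power law in parameter count and training tokens. A rough sketch in Python, using coefficients that are approximately the published Chinchilla fit (illustrative only, not a claim about any particular lab's models):

    # Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta
    # Coefficients are roughly the published Chinchilla fit; illustrative only.
    def loss(params, tokens, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
        return E + A / params**alpha + B / tokens**beta

    # Each 10x in parameters (with ~20 tokens per parameter, roughly the
    # compute-optimal ratio) shaves a predictable slice off the predicted loss;
    # the curve keeps bending down, with no wall in the fitted range.
    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> predicted loss {loss(n, 20 * n):.3f}")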

Every LLM of a given size is invariably at or near the state of the art on the benchmarks available to test general reasoning. GPT-4 was the first model in the >1T-parameter range, and every model that emerged around that scale (Claude 3 Opus, Gemini Ultra) reached roughly the same level of performance on MMLU, MATH, etc. All subsequent improvements have come from distillation, algorithmic improvements, synthetic data, and so on, but so far a ~10T-parameter model has not been attempted, though that is likely around the scale of the upcoming GPT-5, Claude 4, and Gemini 2 models.

Sources:
https://www.dwarkeshpatel.com/p/will-scaling-work
https://www.lesswrong.com/posts/arPCiWYdqNCaE7AQv/superintel...
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto...


So LLMs get better at what LLMs do, according to benchmarks that measure LLM performance. We have no idea if that leads to sentience. Most likely it doesn’t, because we don’t have a definition of sentience, much less a model that would let us create a sentient machine.

Evidence from lesswrong.com has about the same weight as evidence from the Bible.


>Evidence from lesswrong.com has about the same weight as evidence from the Bible.

What's the point of even continuing this conversation if you make hyperbolic claims like that? If you were to justify your reasoning I would at least give you the benefit of the doubt, as I did with my previous comment. Please evaluate the extent to which your biases are affecting your reasoning before you make claims like that and hope to be taken seriously.


Nothing from the "rationalist" cult carries any weight with me. It seems more like a self-congratulatory circle jerk of nerds telling each other how smart they are than a serious intellectual group.


Point 1 is false, and therefore points 2 and 3 are irrelevant.

The energy requirements of actual AGI (as opposed to the already energy-intensive toys we're playing with now) are simply too huge for us to produce, let alone sustain.


I don't think the current systems we have are on the path to AGI, whatever that is. So it's hard to say it will be here soon. And when it does come I don't think it will transform society overnight. It will probably take many decades.


So you're in the "slow timelines, slow takeoff" camp. By "current systems" do you mean LLMs based on transformers? What about them do you believe makes a sub-decade path to AGI infeasible?


By "current systems" I mean any technology I'm aware of. LLMs and transformers are trained on the data we feed them. And they can only do a limited number of things with the data. In certain domains that's enough to be impressive but I'd hardly call that a general learning architecture like a human being or even like a fruit fly. Living systems are constantly learning, constantly updating, constantly adapting, even before cognition and consciousness as we know it evolved.


Now I don't mind marrying someone who might be a bit messy, because by the time we have kids we will have humanoids to clean the house.



