
Could this be an experiment to show how likely LLMs are to lead to AGI, or at least intelligence well beyond our current level?

If you could only give it texts and info and concepts up to Year X, well before Discovery Y, could we then see if it could prompt its way to that discovery?


> Could this be an experiment to show how likely LLMs are to lead to AGI, or at least intelligence well beyond our current level?

You'd have to be specific about what you mean by AGI: all three letters mean different things to different people, and sometimes the whole means something not present in the letters at all.

> If you could only give it texts and info and concepts up to Year X, well before Discovery Y, could we then see if it could prompt its way to that discovery?

To a limited degree.

Some developments can come from combining existing ideas and seeing what they imply.

Other things, like everything to do with relativity and quantum mechanics, would have required experiments. I don't think any of the relevant experiments had been done prior to this cut-off date, but I'm not absolutely sure of that.

You might be able to get such an LLM to develop all the maths and geometry for general relativity, and yet find the AI still tells you that the perihelion shift of Mercury is a sign of the planet Vulcan rather than of a curved spacetime: https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)


An example of why you need to explain what you mean by AGI is:

https://www.robinsloan.com/winter-garden/agi-is-here/


> You'd have to be specific what you mean by AGI

Well, they obviously can't. AGI is not science, it's religion. It has all the trappings of religion: prophets, sacred texts, an origin myth, an end-of-days myth and, most importantly, a means to escape death. Science? Well, the only measure of "general intelligence" would be to compare it to the only one we know of, the human kind, and we have absolutely no means by which to describe that. We do not know where to start. This is why, when you scratch the surface of any AGI definition, you only find circular definitions.

And no, the "brain is a computer" is not a scientific description, it's a metaphor.


> And no, the "brain is a computer" is not a scientific description, it's a metaphor.

Disagree. A brain is Turing complete, no? Isn't that the definition of a computer? Sure, it may be reductive to say "the brain is just a computer".


Not even close. Turing completeness does not apply to the brain, plain and simple. That's something to do with algorithms, and your brain is not a computer, as I have mentioned. It does not store information. It does not process information. It just doesn't work that way.

https://aeon.co/essays/your-brain-does-not-process-informati...


> Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

This article seems really hung up on the distinction between digital and analog. It's an important distinction, but glosses over the fact that digital computers are a subset of analog computers. Electrical signals are inherently analog.

This maps somewhat neatly to human cognition. I can take a stream of bits, perform math on it, and output a transformed stream of bits. That is a digital operation. The underlying biological processes involved are a pile of complex probabilistic+analog signaling, true. But in a computer, the underlying processes are also probabilistic and analog. We have designed our electronics to shove those parts down to the lowest possible level so they can be abstracted away, and so the degree to which they influence computation is certainly lower than in the human brain. But I think an effective argument that brains are not computers is going to have to dive in to why that gap matters.


It is pretty clear the author of that article has no idea what he's talking about.

You should look into the physical Church-Turing thesis. If it's false (all known, tested physics suggests it's true), then we're probably living in a dualist universe. This means something outside of material reality (souls? hypercomputation via quantum gravity? weird physics? magic?) somehow influences our cognition.

> Turing complete does not apply to the brain

As far as we know, any physically realizable process can be simulated by a Turing machine. And FYI, brains do not exist outside of physical reality... as far as we know. If you have an issue with this formulation, go ahead and disprove the physical Church-Turing thesis.


That is an article by a psychologist, with no expertise in neuroscience, claiming without evidence that the "dominant cognitive neuroscience" is wrong. He offers no alternative explanation on how memories are stored and retrieved, but argues that large numbers of neurons across the brain are involved and he implies that neuroscientists think otherwise.

This is odd because the dominant view in neuroscience is that memories are stored by altering synaptic connection strength in a large number of neurons. So it's not clear what his disagreement is, and he just seems to be misrepresenting neuroscientists.

Interestingly, this is also how LLMs store memory during training: by altering the strength of connections between many artificial neurons.


I've gotta say, this article was not convincing at all.


A human is effectively Turing complete if you give the person paper, a pen, and the ruleset, and a brain clearly stores information and processes it to some extent, so this is pretty unconvincing. The article is nonsense and badly written.
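The pen-and-paper point can be made concrete: a Turing machine is literally just a rule table someone could follow by hand, writing symbols on squared paper. A minimal sketch (the rule encoding here is my own, purely illustrative):

```python
# A minimal Turing machine: a rule table a person could follow with pen
# and paper. This one flips every bit on the tape, then halts.
# Rules map (state, symbol read) -> (next state, symbol to write, head move).
rules = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),   # blank cell: stop
}

def run(tape):
    tape = list(tape) + ["_"]          # append a blank so the machine halts
    state, head = "scan", 0
    while state != "halt":
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

print(run("10110"))  # -> 01001
```

The person with the paper plays the role of the `while` loop; the ruleset is the `rules` table.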

> But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

Really? Humans don't ever develop memories? Humans don't gain information?


probably not actually Turing complete, right? for one, it is not infinite, so


> And no, the "brain is a computer" is not a scientific description, it's a metaphor.

I have trouble comprehending this. What is "computer" to you?


Cargo cults are a religion, the things they worship they do not understand, but the planes and the cargo themselves are real.

There's certainly plenty of cargo-culting right now on AI.

Sacred texts, I don't recognise. Yudkowsky's writings? He suggests wearing clown shoes to avoid getting a cult of personality disconnected from the quality of the arguments, if anyone finds his works sacred, they've fundamentally misunderstood him:

  I have sometimes thought that all professional lectures on rationality should be delivered while wearing a clown suit, to prevent the audience from confusing seriousness with solemnity.
- https://en.wikiquote.org/wiki/Eliezer_Yudkowsky

Prophets forecasting the end-of-days, yes, but this too from climate science, from everyone who was preparing for a pandemic before covid and is still trying to prepare for the next one because the wet markets are still around, from economists trying to forecast growth or collapse and what will change any given prediction of the latter into the former, and from the military forces of the world saying which weapon systems they want to buy. It does not make a religion.

A means to escape death, you can have. But it's on a continuum with life extension and anti-aging medicine, which itself is on a continuum with all other medical interventions. To quote myself:

  Taking a living human's heart out without killing them, and replacing it with one you got out a corpse, that isn't the magic of necromancy, neither is it a prayer or ritual to Sekhmet, it's just transplant surgery.

  …

  Immunity to smallpox isn't a prayer to the Hindu goddess Shitala (of many things but most directly linked with smallpox), and it isn't magic herbs or crystals, it's just vaccines.
- https://benwheatley.github.io/blog/2025/06/22-13.21.36.html


Basically looking for emergent behavior.


It'd be difficult to prove that you hadn't leaked information to the model. The big gotcha of LLMs is that you train them on BIG corpuses of data, which means it's hard to say "X isn't in this corpus", or "this corpus only contains Y". You could TRY to assemble a set of training data that only contains text from before a certain date, but it'd be tricky as heck to be SURE about it.

Ways data might leak into the model that come to mind: misfiled/mislabeled documents, footnotes, annotations, document metadata.
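A naive date filter of the kind described might look like this sketch (the records and fields are hypothetical). Note that it misses exactly the leak paths just listed: the second record passes the filter because its *file* is dated 1858, even though its text contains a later editor's annotation.

```python
from datetime import date

# Hypothetical document records; a real corpus would need provenance
# checks, OCR metadata, etc. This filter only sees the declared date.
docs = [
    {"text": "On the motion of the honourable member...", "published": date(1858, 3, 2)},
    {"text": "A footnote added by the 1921 editor...",    "published": date(1858, 3, 2)},
    {"text": "Annual report of the Royal Society",        "published": date(1901, 5, 1)},
]

CUTOFF = date(1900, 1, 1)

kept = [d for d in docs if d["published"] < CUTOFF]

# The second doc slips through: its declared date is pre-cutoff, but the
# text itself leaks post-cutoff material -- the gotcha described above.
print(len(kept))  # 2 of 3 documents pass the naive date filter
```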


There's also severe selection effects: what documents have been preserved, printed, and scanned because they turned out to be on the right track towards relativity?


This.

Especially for London, there is a huge chunk of recorded parliamentary debates.

Training on recorded correspondence in the form of letters seems more interesting for dialogue anyway.

And that corpus script just looks odd, to say the least: just oversample by X?


Oh! I honestly didn't think about that, but that's a very good point!


Just Ctrl+F the data. /s


I think not if only for the fact that the quantity of old data isn't enough to train anywhere near a SoTA model, until we change some fundamentals of LLM architecture


Are you saying it wouldn't be able to converse using english of the time?


Machine learning today requires an obscene quantity of examples to learn anything.

SOTA LLMs show quite a lot of skill, but they only do so after reading a significant fraction of all published writing (and perhaps images and videos, I'm not sure) across all languages, in a world whose population is 5 times higher than at the link's cut-off date, and in which global literacy has gone from 20% to about 90% since then.

Computers can only make up for this by being really really fast: what would take a human a million or so years to read, a server room can pump through a model's training stage in a matter of months.

When the data isn't there, reading what it does have really quickly isn't enough.
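The scale gap can be put in rough numbers. All figures below are loose assumptions (corpus size, tokens-per-word ratio, reading speed) for illustration only:

```python
# Back-of-envelope: how long would a human take to read an LLM-scale corpus?
corpus_tokens = 10e12     # ~10 trillion tokens, the order of modern corpora
words_per_token = 0.75    # rough tokens-to-words conversion
reading_wpm = 250         # typical adult reading speed, words per minute

corpus_words = corpus_tokens * words_per_token
minutes = corpus_words / reading_wpm
years = minutes / (60 * 24 * 365)
print(f"{years:,.0f} years of nonstop reading")
```

On these assumptions it comes to tens of thousands of years of continuous, round-the-clock reading; at a realistic few hours a day, the "million or so years" order of magnitude is about right.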


That's not what they are saying. SOTA models include much more than just language, and the scale of training data is related to its "intelligence". Restricting the corpus in time => less training data => less intelligence => less ability to "discover" new concepts not in its training data


Could always train them on data up to 2015ish and then see if you can rediscover LLMs. There's plenty of data.


Perhaps less bullshit though was my thought? Was language more restricted then? Scope of ideas?


I mean, humans didn't need to read billions of books back then to think of quantum mechanics.


Which is why I said it's not impossible, but current LLM architecture is just not good enough to achieve this.


Right, what they needed was billions of years of brute force and trial and error.


I think this would be an awesome experiment. However you would effectively need to train something of a GPT-5.2 equivalent. So you need lot of text, a much larger parameterization (compared to nanoGPT and Phi-1.5), and the 1800s equivalents of supervised finetuning and reinforcement learning with human feedback.


This would be a true test of whether LLMs can innovate or just regurgitate. I think part of people's amazement at LLMs is that they don't realize how much they themselves don't know. So thinking and recalling look the same to the end user.


That is one of the reasons I want it done. We can't tell if AIs are parroting training data without having the whole training data. Making it old means specific things won't be in it (or will be). We can do more meaningful experiments.


This is fascinating, but the experiment seems to fall short as a fair comparison of how much knowledge we can extract from that era's data versus today's.

As a thought experiment I find it thrilling.


OF COURSE!

The fact that tech leaders espouse the brilliance of LLMs and don't use this specific test method is infuriating to me. It is deeply unfortunate that there is little transparency or standardization of the datasets available for training/fine tuning.

Having this advertised will make for more interesting and informative benchmarks. OEM models that are always "breaking" the benchmarks are doing so with improved datasets as well as improved methods. Without holding the datasets fixed, progress on benchmarks is very suspect IMO.


I fail to see how the two concepts equate.

LLMs have neither intelligence nor problem-solving ability (and I won't be relaxing the definition of either so that some AI bro can pretend a glorified chatbot is sentient).

You would, at best, be demonstrating that the sharing of knowledge across multiple disciplines and nations (which is a relatively new concept - at least at the scale of something like the internet) leads to novel ideas.


I've seen many futurists claim that human innovation is dead and all future discoveries will be the results of AI. If this is true, we should be able to see AI trained on the past figure its way to various things we have today. If it can't do this, I'd like said futurists to quiet down, as they are discouraging an entire generation of kids who may go on to discover some great things.


> I've seen many futurists claim that human innovation is dead and all future discoveries will be the results of AI.

I think there's a big difference between discoveries through AI-human synergy and discoveries through AI working in isolation.

It probably will be true soon (if it isn't already) that most innovation features some degree of AI input, but still with a human to steer the AI in the right direction.

I think an AI being able to discover something genuinely new all by itself, without any human steering, is a lot further off.

If AIs start producing significant quantities of genuine and useful innovation with minimal human input, maybe the singularitarians are about to be proven right.


I'm struggling to get a handle on this idea. Is the idea that today's data will be the data of the past, in the future?

So if it can work with what's now past, it will be able to work with the past in the future?


Essentially, yes.

If the prediction is that AI will be able to invent the future, then if we give it data from our past without knowledge of the present... what type of future will it invent? What progress will it make, if any at all? And not just having the idea, but implementing the idea in a way that actually works with the technology of the day, and building on those things over time.

For example, would AI with 1850 data have figured out the idea of lift to make an airplane and taught us how to make working flying machines and progress them to the jets we have today, or something better? It wouldn't even be starting from 0, so this would be a generous example, as da Vinci was playing with these ideas in the 15th century.

If it can't do it, or what it produces is worse than what humans have done, we shouldn't leave it to AI alone to invent our actual future. Which would mean reevaluating the role these "thought leaders" say it will play, and how we're educating and communicating about AI to the younger generations.


Many people (including Michael Burry) have had this feeling over and over since 2008, and were basically always wrong! Markets are tricky beasts to predict.


To plagiarize Howard Marks: when you try to time the market, you have to be right twice, both on when to get out and on when to get back in. Even being right once is incredibly hard.

Or, to quote Peter Lynch: "Far more money has been lost by investors preparing for corrections or trying to anticipate corrections than has been lost in corrections themselves."
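The "right twice" point is easy to show with a toy example. The yearly returns below are entirely made up (a crash in year 3, a sharp rebound in year 4); the takeaway is that getting out before the crash only helps if you also catch the rebound:

```python
# Toy illustration of "you have to be right twice" when timing the market.
returns = [0.10, 0.08, -0.30, 0.50, 0.12]   # made-up yearly returns

def grow(rets):
    value = 1.0
    for r in rets:
        value *= 1 + r
    return value

buy_and_hold = grow(returns)
# Right twice: out for the crash, back in for the rebound.
perfect_timer = grow([0.10, 0.08, 0.00, 0.50, 0.12])
# Right only once: out for the crash, but a year late re-entering,
# so the rebound is missed too.
late_reentry = grow([0.10, 0.08, 0.00, 0.00, 0.12])

print(f"late re-entry {late_reentry:.3f} < "
      f"buy and hold {buy_and_hold:.3f} < "
      f"perfect timer {perfect_timer:.3f}")
```

On this sequence the late re-entry ends up behind plain buy-and-hold, which is Lynch's point about money lost preparing for corrections.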


Ya I listen to this space a lot. 2015, 2016, 2018 and 2020 were a blur of “I’m cashing out and moving my 401k to money market” on several podcasts because of impending doom


It sucks because eventually they're right (and the rest of us were still laughing at the earlier podcasts).


I always wonder if they were somewhat right. Using the chart from the article, we have large spikes in margin debt in a bunch of years that initially were followed by a crash, but now are possibly followed by money printing preventing the crash. So although Burry has the right idea, the rules/market have changed and his analysis no longer holds.

That said, I think 2025 is too early for the AI bubble to pop. Even Burry was buying CDS in 2005 [1], so if you're seeing something you're convinced is a crack right now, it's going to take a few years to actually fracture.

- 2000 -- Followed by a crash

- 2007 -- Followed by a crash

- 2011 -- (ish) USG added a bunch of money into the system

- 2015 -- Counter example?

- 2018 -- Counter example?

- 2021 -- Large crash, USG added a bunch of money into the system

- 2025q1 -- Tariff crash

- 2025q3 -- Too early to tell

[1]: https://en.wikipedia.org/wiki/Scion_Asset_Management


The problem is TINA!

There Is No Alternative

- Gold? Dead asset

- Cash? Good luck with inflation

- Bitcoin? My ass…

So what else can you do as a rational investor than to invest most of your cash into an S&P500 or World fund?


Instead of cash, I hold treasuries. The rest is spread out among low-holding-cost index funds (watch out for fees... they will kill your profits) with dividend re-investment. Split things between tax-advantaged and non-tax-advantaged depending on your short- and long-term goals (ask a certified financial advisor with fiduciary duty for strategies that work for you; it's worth the small fee).

Every time the market takes a crap, I buy. I rarely sell. Keep enough cash or near-cash assets in no-penalty account(s) to cover unexpected costs so you aren't forced to sell.

A luxurious set up for sure (which took about a decade to get set up) but it's repeatable and fairly stable.

Now, if you have real wealth (like $10s of millions of liquid assets) then look to setting up a MFO or SFO and focus on tax efficiency, etc. That's a whole different set of strategies.


Interesting.

So US Treasury securities instead of cash right?

And then every time there is a dip, sell the treasuries and buy ETFs?


Sure. Dollar cost averaging across a broad spectrum works well for a (very) conservative investor. I try never to have more than 10% of my liquid assets in speculative deals (straight up gambling stuff... individual stock picks, day trading, options, etc). The rest I try to keep as long term investing and/or cash or near cash.
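The dollar-cost-averaging mechanic is simple arithmetic: a fixed amount each period buys more shares when the price dips, so the average cost per share ends up below the simple average of the prices paid. A toy sketch with invented prices:

```python
# Toy dollar-cost-averaging sketch: $100 every month into an asset whose
# (made-up) price dips and recovers.
prices = [50, 40, 25, 40, 50]   # hypothetical monthly prices
monthly = 100.0

shares = sum(monthly / p for p in prices)   # more shares bought in the dip
invested = monthly * len(prices)
avg_cost = invested / shares
simple_avg_price = sum(prices) / len(prices)

print(f"shares: {shares:.2f}, avg cost/share: {avg_cost:.2f}, "
      f"simple avg price: {simple_avg_price:.2f}")
```

Here the average cost per share comes out below the simple average price, which is why "buy every dip, rarely sell" is a (very) conservative but repeatable setup.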


Give it to entrepreneurs/researchers doing intrinsically cool things like cancer research, without knowing how you will get any of it back right at the start. The problem is NOT a lack of productive investments, it's that the uber-rich think it's not fair if they ever lose.


Military manufacturers are a reasonably safe haven these days as Europe is desperately trying to re-arm itself following the Russian invasion of Ukraine, the Middle East is in flames once again and there's a ton of uncertainty and small scale hostilities around China/India/Pakistan.

Urban residential real estate is also a safe haven, assuming you are still allowed to invest there. Demand is not going to shrink any time soon (as most Western governments are running rural areas into the ground for being too expensive to bring up to modern standards and expectations in infrastructure), and supply is so scarce that even large developments and re-zoning will hardly make a dent in demand.


Invest in a fund which underweights bubble stocks by tracking a suitable alternative index:

https://www.bogleheads.org/wiki/Alternative_indices

Dividend-weighted indexes are the classic option, and fundamentally weighted indexes are a newer one.


Maybe unfashionable equities? Utility stocks, Japan/S Korea, BRKB etc?


Gold's been rallying really aggressively recently


So, too late now, you're saying. I read overvalued, soon to crash…

Perhaps we should be buying up Yuan…


Chinese equities have actually been great performers recently (off of a base of ultra-pessimism), but that's mostly the onshore market, not the ADR paper you can buy in the west.

Buying the yuan on the other hand is directly taking a stance against CCP state controlled currency policy. A less advisable and knowable bet.


Then invest in Chinese airlines, banks...


Yes I know, same with Bitcoin.

I mean it’s a dead asset class since it doesn’t fund any economic activity. It’s just a store of wealth


Yeah, frankly I think there is no truly safe place for investments at this point.

We might as well just enjoy the ride knowing at least when it hits the bottom, we'll all of us be in the same tough spot.


Well isn't that the whole point? At a fundamental level, investment profits are a payment for the risk you take. No risk equals no profit. There are "safe" investments currently. You can get paid 4% a year roughly to hold treasuries right now. Considered a "risk free" investment (Which sure, maybe the merits can be argued).

But at the end of the day the only way to profit from an investment is taking some risk. It all comes down to pricing that risk.


> Gold? Dead asset

What? Gold is at a record high, and with inflation it will only go higher.

https://www.macrotrends.net/1333/historical-gold-prices-100-...


> https://www.macrotrends.net/1333/historical-gold-prices-100-...

In nominal terms perhaps, but in inflation-adjusted terms it's roughly what it hit in 1980:

* https://www.investopedia.com/gold-price-history-highs-and-lo...

https://graphics.thomsonreuters.com/11/07/CMD_GLDNFLT0711_VF...

And there have been long (10y) stretches where it's remained flat: it takes a lot of patience to HODL through something like that. Even if equities (e.g., holding an index fund) are flat at least you get some yield.

With a pure commodity play like gold (or BTC), your only source of returns is price appreciation.


And what was happening in the 80s?

Inflation.

And what’s happening now?


> And what was happening in the 80s? Inflation.

Gold was high in 1980 specifically and dropped after 1980 even when inflation was still high.

Gold also had a peak in 2012: was there inflation then?

> And what’s happening now?

Nothing. Inflation peaked in February 2023 and has been dropping ever since:

* https://fred.stlouisfed.org/series/CORESTICKM159SFRBATL

Gold didn't start going up until September 2023 and has been rising. Gold and inflation are currently inversely related.


Dead in the sense that it is not useful for society.


How is it not useful for society if it's worth $3300 an ounce? It's a metal prized for its physical properties, and it's also used in industry.


My favorite LLM tells me that roughly 85% of the world’s gold is simply “lying around” in the sense of being held as jewelry, bars, coins, or reserves. In contrast, only about 15% of the gold is actively utilized in production or technological applications.

So I'd say it's the same as having cash under your pillow.


If it’s the same thing as cash under your pillow, and it’s useless, then give me all the cash under your pillow.


Yes, exactly. That’s the idea!

I give my cash to you, and, as exchange, I will own a (usually small) share of your company.

Then you’ll hopefully be successful and my shares will raise.

Good for you. Good for me. Good for society since you created jobs.


It is not that the markets are tricky. Predicting what the Fed will do with interest rates is tricky. By lowering rates they feed more money into the market. Take a look at the last 15 years and you will realize the only thing that gave us a minor recession was COVID, and that was because interest rates were zero.

https://fred.stlouisfed.org/series/FEDFUNDS

But they can't do this for much longer, inflation is the first sign, which is why Trump is raising tariffs.

You can see bond prices going up. Trump's tariffs are aimed at lowering T-bill rates:

https://fred.stlouisfed.org/series/DGS10


> But they can't do this for much longer, inflation is the first sign, which is why Trump is raising tariffs.

Trump is raising tariffs because he thinks they are a good idea and has since the 1980s:

> “The fact is, you don’t have free trade. We think of it as free trade, but you right now don’t have free trade,” Trump said in a 1987 episode of Larry King Live that’s excerpted in Trump’s Trade War. “A lot of people are tired of watching the other countries ripping off the United States. This is a great country.”

* https://www.pbs.org/wgbh/frontline/article/trumps-tariff-str...

Trump's mindset is a 1980s NYC real estate guy (zero-sum, one-off games), which when applied to global trade, is basically mercantilist:

* https://en.wikipedia.org/wiki/Mercantilism

Meanwhile, in the real world, commerce is often non-zero-sum (both parties get something of value, i.e., "win-win"), and you play multiple rounds with each trading partner and reputation matters (rather than one-off, where burning your bridges could be an actual strategy).


Trump was talking about free trade in the 80s, not about increasing tariffs, which is not free trade. In fact, it's the complete opposite.


Trump is saying that nobody is doing free trade except America, and so we should stop being free. He’s had a tariff fixation for decades.


> which is why Trump is raising tariffs.

I question this bit. (That may be why he's raising tariffs; I question whether it will work.)

When tariffs go up, prices go up (delusions that "other countries will pay" notwithstanding). That shows up in inflation statistics, which in turn will (probably) show up in T Bill rates, but as a higher rate, not a lower one.

Except... tariffs might be a one-off increase. They may not compound the way "regular" inflation does. So maybe it will work in the medium term?
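The one-off vs. compounding distinction is easy to put in numbers. The 10% pass-through and 3% inflation figures below are purely illustrative:

```python
# One-off price-level bump vs. compounding inflation over 5 years.
years = 5
tariff_level = 1.10              # one-time 10% jump in the price level, then flat
inflation_level = 1.03 ** years  # 3% inflation compounding every year

print(f"one-off tariff bump after {years}y: {tariff_level:.3f}x")
print(f"3% compounding after {years}y:      {inflation_level:.3f}x")
```

A one-time tariff shock raises the level once; ordinary inflation keeps multiplying, which is why the two show up differently in year-over-year statistics after the first year.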


Oh, I don’t think it’s going to work at all. For some reason, he doesn’t think it’s gonna raise inflation enough to matter.


Who knows if this is the reason Trump is starting this trade war and increasing tariffs, but this paper is an extremely thought-provoking perspective on why someone in the position the US occupies in the global economy might do so [1].

Seeing as how Trump appointed the author into his political circle, though, this could be evidence it is the ultimate goal.

The paper is quite lengthy; however, near the beginning Stephen explains the idea of the Triffin Dilemma: a country that acts as the world's reserve currency, and thus creates enormous demand for its currency for things other than goods, is at a disadvantage that exacerbates its trade deficit. This is implicit for a country in whose currency most global trade is settled, not to mention the benefits of holding the world reserve currency as a store of value or investment.

I've wondered since the tariffs were announced how much impact they can actually have, but beyond that point, what is a reserve-currency country to do? Give up its reserve-currency status? There are significant downsides to that as well...

[1] https://www.hudsonbaycapital.com/documents/FG/hudsonbay/rese...


The market can stay irrational longer than you can stay solvent.


Sure, if you bet against the crowd with leverage or a tight funding leash.

Global equity index ETFs have reliably yielded 5% returns over 12-15 year periods for ~75 years.


I barely remember the time before reddit - crazy how the redesign seemed to kill it the first time around!


predates the iphone!


“prepare 3 envelopes” always leaves out the “what to do in case of Nazi robot” part.


surely there are better things to be commenting on?


You're jealous!

Old advice: If you have nothing good to say, best to say nothing at all.


love this every time i see it, but haven’t yet had the courage to reply to somebody’s “hey” with it


Coming around to this conclusion myself after experimenting with the tools for a few weeks.

Things have changed.


It’s the difference between partnership track and various “counsel” type titles at law firms.

The former grows (or at least maintains) a book of business and is valuable, the latter has key skills and is useful but clearly not to the same degree.

And like many other contexts, the vast pipeline of people willing to dedicate their lives to trying to be useful mitigates the value of just being useful over time.


and law


AI agents and testing “vibe coding”

It doesn’t feel there yet, but it's starting to seem like some workflows could be close. And non-technical folks at businesses are starting to pay attention and want projects moving in those areas.

