I mean, looking at the math that powers these models, I don't see how they can replace reasoning. The tokens mean absolutely nothing to the algorithm. It doesn't know how algebra works, and if you prompt ChatGPT to propose a new theorem based on some axioms, it will produce something that sounds like a theorem...
... but believing it actually is a theorem would be like believing that horoscopes can predict the future.
Maybe someday we'll have a model that can be trained to reason as humans do and can do mathematics on its own... we've been talking about that possibility for decades in the automated theorem proving space. However, it seems this is a tough nut to crack.
Training LLMs already takes a lot of compute and energy. Maybe we'll have to wait until we have fusion power and can afford to cool entire data centers dedicated to retraining these reasoning models as new theorems are postulated and proofs are added.
... or we could simply do it ourselves. For humans, the ratio of energy in to work out is pretty good, and affordable.
That said, an LLM that could also interact with an automated theorem proving system would be a handy tool indeed. In formal proofs there are plenty of times when we want to elide the proof of a lemma we're using because it's obvious, and proving it would be tedious without making the surrounding proof any more elegant. A future reasoning model that could understand the proof goals and apply tactics to discharge them would fill that gap nicely.
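To make the elision concrete: in Lean 4 (one interactive prover this would apply to), `sorry` stands in for an omitted proof, and decision-procedure tactics like `omega` already close many routine goals automatically. A hypothetical reasoning model would sit between the two, suggesting tactics where a `sorry` currently stands. The lemma name below is made up for illustration.

```lean
-- An "obvious" side lemma we elide with `sorry` rather than prove by hand.
-- (`obvious_bound` is a hypothetical name, not from any library.)
theorem obvious_bound (n : Nat) : n ≤ n + 1 := by
  sorry

-- Many such routine goals can already be discharged by a tactic:
example (n : Nat) : n ≤ n + 1 := by
  omega
```

The interesting cases are the goals where no existing tactic applies and a human currently has to step in.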
However, I think we're still a long way from that. No reason to get hyped about it.