
(Note to conversation participants - I think jiggawatts might be arguing $50B/qtr × 24 quarters ≈ $1 trillion, while kllrnohj is arguing $20 billion doubling each year for six years, i.e. $20B × 2^6 ≈ $1 trillion - although neither approach seems to account for NPV.)
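For what it's worth, a rough sketch of the NPV point, assuming a flat $50B/quarter for 24 quarters and an illustrative 8% annual discount rate (both numbers are my own assumptions for illustration, not anyone's forecast):

    # Back-of-the-envelope: nominal revenue vs. NPV of $50B/quarter over
    # 24 quarters, at an assumed 8% annual discount rate. Illustrative only.
    quarterly_revenue = 50e9
    quarters = 24
    quarterly_rate = 1.08 ** 0.25 - 1  # ~1.9% per quarter

    nominal = quarterly_revenue * quarters
    npv = sum(quarterly_revenue / (1 + quarterly_rate) ** q
              for q in range(1, quarters + 1))

    print(f"nominal ${nominal / 1e12:.2f}T, NPV ${npv / 1e12:.2f}T")
    # Roughly: nominal $1.20T, NPV $0.95T

So even before worrying about margins or demand, the headline trillion is meaningfully smaller in present-value terms.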

That is assuming Nvidia can capture the value and doesn't get crushed by commodity economics, which I can see going either way. Their margins are going to be under tremendous pressure. Plus I doubt Meta are going to be cycling all their GPUs quarterly; there is more likely to be a rush of capital expenses followed by a settling.



Another implicit assumption is that LLMs will remain SoTA throughout that period, or that the successor architecture will have an equally insatiable appetite for compute, memory, and memory bandwidth; I'd like to believe that Nvidia is one research paper away from a steep drop in revenue.


Agreed with @roenxi, and I’d like to propose a variant of your comment:

All evidence is that “more is better”. Everyone involved professionally is of the mind that scaling up is the key.

However, like you said, just a single invention could cause the AI winds to blow the other way and instantly crash NVIDIA’s stock price.

Something I’ve been thinking about is that current systems rely on global communication, which requires expensive networking and high-bandwidth memory. What if someone invents an algorithm that can be trained on a “Beowulf cluster” of nodes with low communication requirements?

For example, the human brain uses local connectivity between neurons; there is no global update during “training”. If someone could emulate that in code, NVIDIA would be in trouble.
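As a toy illustration of the kind of purely local update I mean (the Hebbian-style rule, layer sizes, and learning rates below are all made-up assumptions, not a claim about how the brain or any real training system works):

    # Toy sketch of a purely local learning rule (Hebbian-style with decay).
    # Each layer updates its weights using only its own input and output
    # activity - no globally backpropagated gradient, so in principle no
    # all-to-all communication between nodes. Illustrative assumption only.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(w, pre, post, lr=0.01, decay=0.001):
        # Strengthen weights between co-active units, with mild decay;
        # everything needed lives on the node holding this layer.
        return w + lr * np.outer(post, pre) - decay * w

    # Two "nodes", each owning one layer; they only pass activations forward.
    w1 = rng.normal(scale=0.1, size=(64, 32))   # node A
    w2 = rng.normal(scale=0.1, size=(16, 64))   # node B

    for _ in range(100):
        x = rng.normal(size=32)
        h = np.tanh(w1 @ x)          # node A computes and sends h to node B
        y = np.tanh(w2 @ h)          # node B computes its output
        w1 = local_update(w1, x, h)  # each node updates with local info only
        w2 = local_update(w2, h, y)

The point of the sketch is that nothing requires a gradient to flow backwards through the whole network; each node only needs activations it already holds, which is exactly where the expensive networking and high-bandwidth memory come in today.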



