
Well, good luck to Apple then. Hopefully this attempt at killing Nvidia goes better than the first time they tried, or than when they made OpenCL and then gave up on it.

I just don't understand how they can compete on their own merits without purpose-built silicon; the M2 Ultra can't hold a candle to a single GB200. Once you consider how Nvidia's offerings are networked with Mellanox interconnects and CUDA unified memory, it feels like the only advantage Apple has in the space is setting their own prices. If they want to be competitive, I don't think they're going to be training Apple models on Apple Silicon.



S&P 500 average P/E - 20 to 25

NASDAQ average P/E - 31

Nvidia's P/E - 71

That's a market of 1 vendor. That's ripe for attack.


It's ripe for attack. But Nvidia is still in its growth phase, not some incumbent behemoth. The way Nvidia ruthlessly handled AMD tells us that they are ready for competition.


Let's check in with OpenCL and see how far it got disrupting CUDA.

You see, I want to live in a world where GPU manufacturers aren't perpetually hostile toward each other. Even Nvidia would, judging by their decorum with Khronos. Unfortunately, some manufacturers would rather watch the world burn than work together for the common good. Even if a perfect CUDA replacement existed, the way DXVK exists for DirectX, Apple would ignore and deny it while marketing something else to their customers. We've watched this happen for years, and it's why macOS perennially cannot run many games or reliably support open-source software. Apple is an unreasonably fickle OEM, and their users constantly pay the price for its arbitrary and unnecessary isolationism.

Apple thinks they can disrupt AI? It's going to be like watching Stalin try to disrupt Wal-Mart.


> Let's check in with OpenCL and see how far it got disrupting CUDA.

That's entirely the fault of AMD and Intel fumbling the ball in front of the other team's goal.

For ages the only accelerated backend supported by PyTorch and TF was CUDA. Whose fault was that? Then there was buggy support for a subset of operations for a while. Then everyone stopped caring.

Why I think it will go differently this time: Nvidia's competitors seem to have finally woken up and realized they need to support high-level ML frameworks. Apple Silicon is essentially fully supported by PyTorch these days (via the "mps" backend). I've heard OpenCL works well now too, but I have no hardware to test it on.
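
For the curious, here's a minimal sketch of what that looks like (assuming PyTorch 2.x on an Apple Silicon Mac; the model and tensor shapes are just placeholders):

  import torch

  # Prefer the Metal (mps) backend when it's available, otherwise fall back to CPU.
  device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

  # A tiny throwaway model and one forward/backward pass, all running on the GPU.
  model = torch.nn.Linear(128, 10).to(device)
  x = torch.randn(32, 128, device=device)
  loss = model(x).sum()
  loss.backward()
  print(device, loss.item())

Same code, different device string - which is exactly the kind of drop-in framework support that was missing for years.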


> That's a market of 1 vendor. That's ripe for attack.

it's just a monopoly [1], how hard can it be?

/s

- [1] practically, because of how widespread cuda is


cuda is x86. the only way from 100% market share is down.

…though it took two solid decades to even make a dent in x86.



nono - I don't mean cuda works on x86. I mean cuda is x86 - for gpgpu workloads - as in a de facto standard.



