Hacker News

From Geekbench: https://browser.geekbench.com/opencl-benchmarks

Apple M3: 29685

RTX 4090: 320220

When you line it up like that it's kinda surprising the 4090 is just $1800. They could sell it for $5,000 a pop and it would still be better value than the highest end Apple Silicon.
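The compute-per-dollar claim above can be checked with the thread's own numbers. A quick sketch; the scores and the $1,800 / $5,000 prices are the ones quoted in these comments:

```python
# Geekbench OpenCL scores quoted in the thread.
m3_score = 29_685
rtx4090_score = 320_220

# Raw compute ratio: how many times the 4090 outscores the base M3.
ratio = rtx4090_score / m3_score
print(f"4090 vs M3: {ratio:.1f}x")  # roughly 10.8x

# Score per dollar for the 4090 at the two prices mentioned above.
for price in (1_800, 5_000):
    print(f"4090 at ${price}: {rtx4090_score / price:.0f} points/$")
```

Even at the hypothetical $5,000, the 4090 delivers about 64 points per dollar versus roughly 178 at $1,800, which is the arithmetic behind the "still better value" remark.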



Comparing these directly like this is problematic.

The 4090 is highly specialized and not usable for general purpose computing.

Whether or not it's a better value than Apple Silicon will highly depend on what you intend to do with it. Especially if your goal is to have a device you can put in your backpack.


I'm not the one making the comparison; I'm just providing the compute numbers for the people who did. Decide for yourself what that means; the only conclusion I drew was about compute per dollar.


A bit off-topic, since it's not applicable to the iPad:

Adding the M3 Max as well: 86072

I wonder what the results would be if the test were run on Asahi Linux some day. Apple's implementation is fairly unoptimized AFAIK.


That's for OpenCL; Apple gets higher scores through Metal.


And Nvidia annihilates those scores with cuBLAS. I'm going to play nice and post the OpenCL scores, since both sides get a fair opportunity to optimize for it.


Actually, I'd like to see Nvidia's highest Geekbench scores. Feel free to link them.

It's stupid to look at OpenCL when that's not what's used in real workloads.


This is true, but... the RTX 4090 has only 24GB of VRAM, while an M3 machine can be configured with up to 192GB of unified memory... A game changer for the largest/best models...


CUDA features unified memory that is limited only by the bandwidth of your PCIe connection: https://developer.nvidia.com/blog/unified-memory-cuda-beginn...

People have been tiling 24GB+ models on a single 3090/4090 (or several) for a while now.
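The "tiling" described here amounts to partitioning a model's layers across whatever device memory is available and spilling the remainder to system RAM. A minimal sketch of that bookkeeping; the layer sizes, the 24GB budget, and the function name are illustrative assumptions, not any real model or library API:

```python
def partition_layers(layer_sizes_gb, gpu_budgets_gb):
    """Greedily assign consecutive layers to GPUs until each budget is full.

    Returns (assignments, cpu_offload): one list of layer indices per GPU,
    plus the indices of layers that fit nowhere and spill to system RAM.
    """
    assignments = [[] for _ in gpu_budgets_gb]
    free = list(gpu_budgets_gb)
    gpu = 0
    cpu_offload = []
    for i, size in enumerate(layer_sizes_gb):
        # Current GPU is full for this layer: move to the next one.
        while gpu < len(free) and size > free[gpu]:
            gpu += 1
        if gpu == len(free):
            cpu_offload.append(i)  # no GPU has room: spill to system RAM
        else:
            assignments[gpu].append(i)
            free[gpu] -= size
    return assignments, cpu_offload

# A hypothetical 40GB model (80 layers x 0.5GB) on a single 24GB card:
gpus, cpu = partition_layers([0.5] * 80, [24.0])
print(len(gpus[0]), "layers on GPU,", len(cpu), "offloaded")  # 48 on GPU, 32 offloaded
```

The trade-off is that the offloaded layers run at system-memory or PCIe speed rather than VRAM speed, which is why the 192GB unified-memory point upthread still matters for very large models.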


Shhh, don't correct the believers, they might learn something.


I think it would be simpler to compare cost/transistor.
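For what it's worth, that metric is easy to compute. The transistor counts and the Mac price below are assumptions taken from public spec sheets (AD102 is reported at roughly 76.3B transistors, the base M3 at roughly 25B); only the $1,800 4090 price comes from this thread:

```python
# Approximate, hedged figures: transistor counts and the Mac price are
# assumptions from public spec sheets; the 4090 price is from the thread.
# Note the mismatch: the 4090 price is a bare card, the Mac is a whole machine.
parts = {
    "RTX 4090 (AD102)": (1_800, 76.3e9),
    "M3 MacBook Pro":   (1_599, 25e9),
}
for name, (price_usd, transistors) in parts.items():
    print(f"{name}: {transistors / price_usd / 1e6:.0f}M transistors/$")
```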



