
There's no other way to get 256 GB/s of memory bandwidth for this cheap, and that's quite valuable in many workloads. I'm curious to get one for compiling code too.

You can get similar bandwidth with server boards that cost 5-10x as much, or with a Mac Studio that costs 2.5x as much.



Note that this is what CAMM[1] memory is intended to solve, although it remains to be seen to what extent it catches on.

[1] https://en.m.wikipedia.org/wiki/CAMM_(memory_module)


According to Framework, CAMM / LPCAMM is simply not compatible with this line of AMD chips, due to signal integrity reasons.


CAMM will be fine on laptops and other smaller form factor devices for CPU-class memory speeds, but it does not have the bus width or lanes to match solutions like Strix Halo, Grace, Apple M-series -- the memory bandwidth being a large part of their appeal. Increasing the bus width on CAMM modules is going to compromise many of the other advantages.

The problem is that these are integrated shared-memory systems with a single RAM pool. That's nice for a lot of reasons, but GPUs need many more memory channels and larger bus widths than CPUs do in order to do work and remain fed at a reasonable power draw. It's an inherent design trade off. I don't see a CAMM style solution for GPU memory coming anytime soon except on the low end.


> You can get similar bandwidth with server boards

Could be wrong, but I don't think you can. The bandwidth limit, AFAIK, is a problem with the DDR5 spec. These soldered solutions can go faster specifically because they aren't DDR5.


Desktop platforms only have 2 memory channels; AMD's latest Epyc servers have 12 channels per socket. Strix Halo has 4 channels.


Please don't misuse "channels". In DDR4, one channel was 64 bits; in DDR5, one channel is 32 bits. So a 128-bit-wide DDR4 system had 2 channels, but a 128-bit-wide DDR5 system has 4 channels.

The latest AMD server is 12 DIMMs wide, but has 24 channels of DDR5.
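Peak bandwidth then falls out of bus width times transfer rate. A rough back-of-the-envelope sketch (the transfer rates below are assumed typical figures, not exact spec-sheet numbers):

    # peak bandwidth ~ bus width (bytes) * transfer rate (MT/s)
    def peak_gb_per_s(bus_bits, mt_per_s):
        return bus_bits / 8 * mt_per_s / 1000  # GB/s

    peak_gb_per_s(128, 6000)  # dual-channel desktop DDR5-6000     ~  96 GB/s
    peak_gb_per_s(256, 8000)  # Strix Halo, 256-bit LPDDR5X-8000   ~ 256 GB/s
    peak_gb_per_s(768, 4800)  # 12-channel Epyc (Genoa), DDR5-4800 ~ 461 GB/s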


Hmm, I think a Threadripper 7965WX can get you there. Probably around $4-5k all in, so I guess similar pricing to a Mac Studio.


Nvidia Grace is DDR5, Hopper is HBM. Many servers, like Intel's and AMD's latest and greatest, all use DDR5.


Or the Nvidia Project DIGITS device at 1.5x the cost, but also Q2 2025 instead of Q3.


But no published memory bandwidth.


512 GB/s, according to insiders.


Sounds promising; I hope it's true. That would be much like a Mac Studio with the M2 Max. Otherwise, a 128GB AMD Strix Halo with 256 GB/s of memory bandwidth for $2k looks good.


A Mac mini with an M4 Pro and 64GB of memory has the same bandwidth and costs £1,999, compared to £1,750 for the Framework Desktop when factoring in the minimum costs for storage, tiles, and necessary expansion cards.


True, but less RAM.


One thing to note about the extra RAM: for the 128GB option, my understanding is that the GPU is limited to using only 96GB [1]. In contrast, on Macs you can safely raise the GPU's limit to, for example, 116GB using `sysctl`.

[1] https://www.tomshardware.com/pc-components/cpus/amds-beastly...
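For reference, the knob people usually point at on Apple Silicon is the iogpu wired-limit sysctl; the exact key name varies by macOS version, so treat this as a sketch rather than gospel: `sudo sysctl iogpu.wired_limit_mb=118784` on recent releases (older versions used a `debug.iogpu.*` variant). 118784 MB here is just 116 × 1024, and the setting reverts on reboot.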


On Linux, the GPU can go up to 110 GB.


It can actually go higher; it's just that when I set up my test devices I had an "ought to be enough for everyone" moment when typing `options amdgpu gttsize=110000`. I guess that number spread too far, heh.

See also:

[1] https://en.wikipedia.org/wiki/Graphics_address_remapping_tab...

[2] https://www.kernel.org/doc/html/v4.19/gpu/amdgpu.html#:~:tex...
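For anyone wanting to try it, that line normally goes in a modprobe config file. A minimal sketch, assuming a distro that reads /etc/modprobe.d/ (the file name and the 120000 value, roughly 120 GB, are just examples):

    # /etc/modprobe.d/amdgpu-gtt.conf  (hypothetical file name)
    # gttsize is in megabytes; leave headroom for the CPU side of the shared pool
    options amdgpu gttsize=120000

Then regenerate the initramfs and reboot for it to take effect.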


Apologies, I stand corrected. Do you have a reference for this? I'm genuinely curious why the 96GB "limit" is so frequently cited - I assumed it must be a hardware limitation.


It's mentioned in an LTT video: https://youtu.be/-lErGZZgUbY?t=126

(video also features Framework's founder/CEO)


That's a Windows limitation. On Linux it's 110GB.



