More RAM is always nice, but I'm secretly hoping we'll start to see more ECC support in the future. With these humongous modules, even a teeny tiny per-bit flip probability adds up to a non-negligible chance of corruption.
Oh yes, I understand that. I only wish that ECC support in general would start getting more traction in consumer electronics. Nowadays (unless you go for super noisy, super expensive server hardware), maybe with an AMD processor, maybe a motherboard manufacturer will have a 20-links-deep document that says these ECC modules may be supported, proceed at your own risk, might set your flat on fire, kill kittens, etc. When you had a couple of gigs of RAM it was probably irrelevant, but if you have multiple TB of RAM caching file access, ECC should become normalised.
Yea, the biggest downfall of ECC in computers was Intel intentionally disabling ECC in the dies used for consumer processors and leaving it only for Xeons, as a way of forcefully keeping the market segregated.
AMD, OTOH, has brought ECC to the table in Ryzens without the same shenanigans.
I know some people won’t be happy until every laptop has ECC RAM and is super cheap, but the reality is that the demand for ECC RAM is very low. The majority of users would choose the extra battery life and lower price if given the option.
I looked and it's hard. Had to resort to reddit recommendations.
Nice circular reasoning. But nothing will change unless we're vocal about ECC benefits and shady pricing. I assure you though, it's not about my happiness :)
There are lots of off-the-shelf laptops available with ECC memory, some even in slim form factors.
For desktops, the entire ThinkStation lineup has ECC available as an option or as standard.
For the higher-priced models you can't even order them with non-ECC memory.
I think inline ECC (the module performs the ECC) is mandatory with LPDDR4 (the error rates on current silicon are too high to leave it out), but link ECC (between the CPU and the module) is optional.
Note that link ECC + inline ECC don't give you end-to-end protection, since the controller in the memory module can still flip bits. DDR5 is moving to on-die ECC, which, unlike the side-band ECC of DDR4 and earlier, also isn't end-to-end.
I'd like to see side-band ECC continue to exist, but I think it is going to be phased out entirely.
This article defines all the terms, but it is very vague about what is mandatory or how reliable the error-correction schemes are. For instance, it carefully doesn't say that SECDED schemes detect all two-bit errors; instead it says they detect at least some:
> I'd like to see side-band ECC continue to exist, but I think it is going to be phased out entirely.
I doubt it will be phased out for servers. I haven't seen anyone report that on-die ECC in DDR5 has an error-reporting mechanism, and reporting RAM errors is important for server reliability.
I really wish we’d just get in-band ECC on normal consumer platforms. That way we’d need no special DIMMs: in applications where ECC was desired it could be enabled and the capacity penalty paid; in applications where it wasn't, it could be disabled and no capacity would be lost.
I like this idea. 64 GB of RAM non-ECC, 48 GB with ECC. Dynamic, succinct, and it enables more supply-chain crossover by not having two (three?) separate DIMM types.
On AMD, ECC support is pretty much standard on every chip they make, and always has been. Even my shitty 4-core Phenom from over ten years ago on an el-cheapo motherboard supported swapping its regular DIMMs for ECC ones. You're never going to get ECC "for free", but it would be totally possible for everyone to pay the cost once and just move to ECC-only for everything from now on.
Except Intel, the company that brought software-locked hardware features to x86, loves to price-differentiate.
Having physical memory segments be different logical sizes at runtime depending on the ECC setting does not sound fun.
Having your system’s available memory fluctuate up and down based on how many segments are currently set to ECC also doesn’t sound fun.
Having developers manually turn ECC off for regions where it’s unimportant sounds like a lot of complexity for a relatively rare use case.
There is in-band ECC in some newer Intel designs, but it’s all or nothing. Adding extreme complexity to memory management to selectively disable it sounds like a lot to ask.
I think your reading depends on thinking "application" means "process", while another reading would be that an application is a particular deployed system, where this setting can be altered e.g. at the BIOS level.
It does, but this particular implementation is local to the module, and cannot be used for secondary purposes in addition to error correction, such as storing tag bits.
As a consumer, does that matter? I understand server-grade hardware wants the extra monitoring/diagnostic gizmos, but will the memory be corrected as effectively as with DDR4 ECC, or is it an entirely neutered implementation?
I'm not sold on on-die DDR5 ECC providing protection.
On-die ECC allowed DDR5 to be competitive with DDR4. Is it really protecting your data at rest if the DDR5 die is running at such tolerances that it's correcting single-bit errors from internal signalling issues on every transaction? It's only single-bit ECC: if something outside the die (cosmic ray, sudden voltage change, sudden temperature change) induces a bit to flip while the internal circuitry causes a different bit to flip, your data is now corrupt.
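To make the single- vs double-flip distinction concrete, here's a toy extended-Hamming SECDED codec over a single byte in Python. It's only a sketch of the textbook scheme (nothing like any real on-die layout): one flip gets corrected, two flips are merely detected.

```python
# Toy SECDED (extended Hamming) over one byte -- a sketch of the textbook
# scheme, not how any real DRAM implements its on-die ECC.
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]    # data bit positions inside the codeword

def encode(byte):
    code = [0] * 13                        # code[1..12] = Hamming(12,8), code[0] = overall parity
    for i, pos in enumerate(DATA_POS):
        code[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):                 # each check bit covers positions containing bit p
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    code[0] = sum(code[1:]) % 2            # overall parity makes the whole word even-parity
    return code

def decode(code):
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome |= p
    parity_ok = sum(code) % 2 == 0
    if syndrome and parity_ok:             # two flips: detectable, not correctable
        return "double-bit error detected (uncorrectable)", None
    if syndrome:                           # one flip: the syndrome is its position
        code[syndrome] ^= 1
    elif not parity_ok:                    # the overall parity bit itself flipped
        code[0] ^= 1
    return "ok/corrected", sum(code[pos] << i for i, pos in enumerate(DATA_POS))

cw = encode(0b10110010)
cw[6] ^= 1                                 # one flip: corrected
print(decode(cw))                          # ('ok/corrected', 178)
cw = encode(0b10110010)
cw[6] ^= 1; cw[11] ^= 1                    # two flips: detected, data is gone
print(decode(cw))                          # ('double-bit error detected (uncorrectable)', None)
```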
Is there any intuition about how frequently data-at-rest errors occur vs data-in-flight? Would the native DDR5 ECC get me 90% of the way there or is it so minor as to be effectively meaningless?
I assume it is going to take another decade to fully unwind Intel's ECC market segmentation. Trying to get a sense of whether I should pay the ECC tax for my next build. Of course, noting that as a consumer, I will probably never notice a flipped bit.
Runtime asserts and invariant checks in software can also help a lot with isolating bitflip errors, with the nice bonus of also isolating the effects of software bugs.
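To be concrete about what I mean, a sketch (the record format here is made up): validate a length field and a checksum before trusting a buffer. It only ever detects a flip; it can't repair one.

```python
import zlib

def parse_record(buf: bytes) -> bytes:
    """Parse a hypothetical length-prefixed record: 4-byte length, 4-byte CRC32, payload."""
    assert len(buf) >= 8, "truncated header"
    length = int.from_bytes(buf[0:4], "little")
    crc = int.from_bytes(buf[4:8], "little")
    # Invariant: the length field must describe what we actually received.
    assert length == len(buf) - 8, f"length field {length} does not match buffer"
    payload = buf[8:]
    # Detection only: if a bit flipped in RAM after the CRC was computed,
    # we can refuse the data, but we cannot repair it.
    assert zlib.crc32(payload) == crc, "payload checksum mismatch (bitflip or bug?)"
    return payload
```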
I don't know if it is significant. Runtime checks tend to focus on a small but critical part of the data, like size fields. They usually don't check bulk data, like decompressed image data or code, and they may not be effective if the data is in cache. Furthermore, they will only detect errors, not correct them. Also, the performance cost is, I think, much higher than the extra RAM chip. Good coding practice for the critical path in software, but clearly it doesn't substitute for dedicated hardware.
I have had defective RAM, and I got quite a bit of corruption before the first crashes. It is hardly noticeable when it is just a pixel changing color in a picture, but it is still something you don't want. ECC would have prevented that.
I know there is software resilient to random bitflips, like for satellites exposed to cosmic rays, but it is a highly specialized field. It is also a field where they use special chips, typically with coarser (and therefore less efficient) dies that are more resistant to radiation. You leave a lot on the table for that.
ECC is better handled in hardware: most of the time it won’t happen, and the hardware can more easily interrupt the processor so the kernel can correct the problem or signal a fault if it’s not a correctable corruption.
Those only help isolate somewhat predictable errors. Which is rare for what ECC is designed to protect against.
If it’s a random, once-in-several-billion reads/writes issue, they can sometimes stop or at least identify the bad data before it propagates further. That data is still lost, though.
ECC does forward error correction, which is extremely rare for the type of data protection you’re talking about. And if the data is corrupted in RAM (say, when initially loaded/read) before the software can apply FEC, there is nothing the software can do.
I thought that the current wave of compiler correctness checking, zero-cost abstractions, JIT compilers and speculative processor behaviour was all about removing those "unnecessary" runtime asserts and invariant checks to get better performance.
But it does not have a means of reporting ECC triggers to the user from my understanding, which is really one of the most important parts.
When ECC starts tripping on a device at anything other than completely random intervals, that's when you should look into what's going wrong. You may have overheating or failing hardware.
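For comparison, with side-band ECC on Linux that kind of reporting is exposed through the EDAC counters; a rough sketch of polling them (this assumes an EDAC driver is loaded, and the sysfs paths can differ per platform):

```python
from pathlib import Path

# Read corrected/uncorrected error counters from Linux EDAC sysfs
# (assumes an EDAC driver is loaded; layout may vary per platform).
for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()   # corrected errors so far
    ue = (mc / "ue_count").read_text().strip()   # uncorrected errors so far
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```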
Wikipedia:
> Unlike DDR4, all DDR5 chips have on-die ECC, where errors are detected and corrected before sending data to the CPU. This, however, is not the same as true ECC memory with extra data correction chips on the memory module.
So I'm not sure how this works, because I'm not sure if "true" ECC is better/worse/same as on-die ECC. A casual googling shows on-die to have more advantages.
And the march continues ever onwards and upwards! What will we ever do with such tremendous quantities of insanely fast memory? (run Electron of course! </s>)
Only tangentially related… So nice that the chip shortages seem to have mostly worked their way out of the system.
Used server prices in particular have gotten almost ridiculously cheap lately. Just bought a 4 node EPYC server with 128 cores, 512GB DDR4, 16x NVMe slots with 15.3TB of P5500 storage populated for $2500.
Seems like almost insanity to get so much compute for that price, and yet in 5 years, as always, that machine will be considered slow and inefficient compared to the latest iteration.
It seems like progress continues unhindered by the death of Moore’s law. The hidden toiling of billions in R&D that make this possible is truly a modern marvel of capitalism.
High energy prices definitely play a huge role in the upgrade cycle.
The higher the energy cost, the sooner that upgrading to more efficient compute pays off. But it’s still amazing to me that just the CPUs for this system were selling for $10,800 in 2020.
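Back-of-the-envelope, with completely made-up wattage and electricity price (only the $2500 purchase price is from the comment above):

```python
# Toy payback estimate: replacing an older box with a more efficient one.
old_watts, new_watts = 800, 450        # assumed average draw, not measured
price_per_kwh = 0.40                   # assumed electricity price ($/kWh)
upgrade_cost = 2500                    # purchase price from the comment above

kwh_saved_per_year = (old_watts - new_watts) / 1000 * 24 * 365
savings_per_year = kwh_saved_per_year * price_per_kwh
print(f"~${savings_per_year:.0f}/year saved, payback in ~{upgrade_cost / savings_per_year:.1f} years")
```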
What will we do? Run local LLMs. Train and fine tune LLMs. Play games full of AI NPCs. Run private local AI home assistants.
AI is the biggest driver for power at the endpoint since 3D games came out. (I remember when Quake drove an entire generation to upgrade their PCs.)
Or you could just use ClosedAI APIs and all that will lead to…
>Just bought a 4 node EPYC server with 128 cores, 512GB DDR4, 16x NVMe slots with 15.3TB of P5500 storage populated for $2500.
That is a 32-core EPYC CPU, 128GB DDR4 and 4TB of NVMe SSD per node, for $625. I assume you mean physical cores rather than threads. But I am surprised you could get a 32-core EPYC for $625, unless it is not even a Gen 3 EPYC but something much older. The 128GB and 4TB together would have cost at least $200, meaning you would need to get the CPU for less than $350 to leave some budget for the PSU, fans and server case.
Either it is heavily discounted or this is a 2nd hand deal.
>Seems like almost insanity to get so much compute for that price, and yet in 5 years, as always, that machine will be considered slow and inefficient compared to the latest iteration.
Likely not.
That is because DRAM and NAND prices have fallen to close to or below BOM cost. Your 512GB of DDR4 and 15.3TB of SSD cost less than half, if not close to a quarter, of their price 18 months prior. The EPYC CPU also has a steeper discount due to slower hyperscaler expansion. Basically, unless we have two or more node cycles to bring down cost, the current purchase price won't last.
Can you point me in the general direction where I might be able to find some similar machines? eBay? I need to build out a small scrapyard server farm, and I have basically unlimited solar energy available, so I don’t mind using last-gen stuff.
Apart from eBay, which is one of the good sources, you can map those sellers to their standalone online presences, where prices are much more stable.
Companies decommission servers all the time. Look for the companies that buy them. Data protection laws, however, have made storage devices trickier to get.
GDDR6X was already so fast that it was practically melting the chips. I hope Samsung and whoever is putting them on GPUs and other boards can put a little more thought into heat dissipation.
Have you not noticed that SSD prices have dropped by almost half the last 6-8 months?
I can get the same SSD I bought in February for literally half the cost today.
You can get larger drives. 4TB has pretty high availability. If you want more, you can go server grade U.2 and get 8-32TB SSD - you will pay more though.
Next year. It's always next year. Price is dictated by how much one is willing to pay, which is somewhat tied to how much financial value one can extract from it.
Mine has 128 GB already, so I make liberal use of it, and it's rare for it to use more than 32 of that. Some games can use quite a bit, and I'd imagine that in a 1 TB world there would be a lot more preloading options. But mostly the RAM gets used by the Linux kernel as a file cache. With 1 TB I would expect a lot of read operations to be super fast after they are cached. It might make rebooting less desirable, because the system will run slower at first until everything is cached.
What was your motivation for 128GB? VMs are the only reason I sometimes struggle with 32GB, but that’s not been an issue often enough for me to do anything. If I really need some more memory it’s easy enough to spin up some cloud instance for a few hours instead.
> VMs are the only reason I sometimes struggle with 32GB,
It's always annoyed me that the memory allocation for VMs is so static. If I allocate 16GB to a VM, it will fill these 16GB with its own filesystem cache, even when the host could make a better use of that memory (like using it for other VMs). There's virtio-balloon, but it has to be adjusted manually.
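It can at least be scripted; a rough sketch with the libvirt Python bindings (the domain name and sizes here are made up), though it's still me deciding when to resize the balloon rather than the host doing it on demand:

```python
import libvirt

# Rough sketch: shrink a guest's balloon target so the host can reuse the memory.
# "devbox" and the sizes are made-up examples.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("devbox")

target_kib = 8 * 1024 * 1024   # ask the balloon driver to give memory back down to 8 GiB
dom.setMemoryFlags(target_kib, libvirt.VIR_DOMAIN_AFFECT_LIVE)

print(dom.memoryStats())        # 'actual' shows the current balloon target in KiB
conn.close()
```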
I use my setup as a 'pre-cloud' test environment where my costs are fixed.
I slap $400 worth of memory in my workstation and I can load large datasets or piles of VMs without worrying that some mistake on my part is going to generate a huge bill.
It's pretty common for me to run 60-80GB worth of VMs during the working day, so not using the cloud is a pretty massive saving for me.
Not parent, but I do data analysis where more memory is always appreciated. Essentially all tooling now works with bigger than memory datasets, but you eat a performance cost as data gets paged in/out to disk. When you can load a 50GB dataframe directly into pandas, it lets you do things the dumb way rather than having to spend extra brain cycles figuring out how to compartmentalize the problem to stay within your RAM budget.
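For anyone who hasn't felt this pain, the "dumb way" vs. the stay-within-RAM way looks roughly like this (file name, chunk size and columns are made up):

```python
import pandas as pd

# The "dumb way": just load the whole thing -- fine when it fits in RAM.
df = pd.read_csv("events.csv")            # hypothetical ~50 GB file
by_user = df.groupby("user_id")["amount"].sum()

# The RAM-budget way: stream it in chunks and merge partial aggregates.
partials = []
for chunk in pd.read_csv("events.csv", chunksize=5_000_000):
    partials.append(chunk.groupby("user_id")["amount"].sum())
by_user = pd.concat(partials).groupby(level=0).sum()
```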
Same for me. Back around 2015, I had 64 GB in a workstation because I had to develop and test image analysis tools. These would be working with 3D and 4D dense arrays and rather than paging in/out from disk it was a matter of loading chunks in/out from RAM to the GPU's 12 GB of VRAM and assembling larger results to eventually serialize back to disk.
When not working on these tools, it was all just buffer cache for Linux while the actual processes were easily living in 8 GB or so. Even today, I'd comfortably live in 16 GB on a laptop, except once in a while miss the option to just do sloppy large allocations for a single task instead of worrying about chunking, IO, and careful access order.
VMs were the (main) reason actually! I run Fedora on the host, but I write/maintain/package software for different distros and architectures, so having a VM for each target is really handy. I also like to have a VM dedicated to apps/tasks that I want to segregate from the main system, so I end up with quite a few VMs and several of which I want to have running concurrently.
Another reason was to enable me to hack on performance optimizations like mounting frequently read/written files into memory (tmpfs). Putting Chrome's cache there, for example, can give a noticeable increase in performance.
Another reason was just for the sheer badassness of having so much RAM
I would presume there’s a tipping point where storing media as a SQLite database becomes irresistible, since the hard drive could DMA a giant file of this sort into memory as a giant linear read.
> SQLite is not the perfect application file format for every situation. But in many cases, SQLite is a far better choice than either a custom file format, a pile-of-files, or a wrapped pile-of-files. SQLite is a high-level, stable, reliable, cross-platform, widely-deployed, extensible, performant, accessible, concurrent file format. It deserves your consideration as the standard file format on your next application design.
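A minimal sketch of the idea with Python's built-in sqlite3 module (the table layout and file names are made up):

```python
import sqlite3

# Toy media store: one table of blobs instead of a pile-of-files.
db = sqlite3.connect("media.db")
db.execute("CREATE TABLE IF NOT EXISTS media (name TEXT PRIMARY KEY, data BLOB)")

with open("photo.jpg", "rb") as f:
    db.execute("INSERT OR REPLACE INTO media VALUES (?, ?)", ("photo.jpg", f.read()))
db.commit()

# Reading it back is one query; with enough RAM the whole file ends up in the page cache anyway.
(blob,) = db.execute("SELECT data FROM media WHERE name = ?", ("photo.jpg",)).fetchone()
db.close()
```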
It's great for AI. Today's large language models are very hungry for RAM. Consumer electronics have been rocking just a few GB of RAM for the last decade because there was no real use case for 100+GB, but that has changed since GPT-3 came along. We'll likely see large language and image models integrated into all sorts of software stacks.
These would be 3DS-RDIMMs: The DRAM dies are stacked using TSVs on top of a per-stack buffer die [1] and those buffer dies are hanging off of the data bus and the module's C/A bus register. So my guess is these will be rather expensive per GB and mostly be bought for in-memory databases, specifically SAP HANA.
Most SAP stuff is just horribly inefficient, especially the stuff that people "customize"... it's like with the cloud in general: well-designed bespoke software beats the "golden standard" by far in resource usage.
That's what people used to say when we got 1GB of RAM :) Turns out, we just fill it with more files of higher resolution/fidelity, rather than optimize for speed/performance, in most use cases.
With 64GB, I mount /tmp and ~/.cache as tmpfs, which speeds up web browsing and code compilation, but with more capacity I would mount /var or even / into RAM.
Granted, NVMe drives are so fast these days that the improvement of using RAM for storage is likely not noticeable. Plus there's the problem of persistence. If you're OK with volatile storage for your use case, then this won't be an issue. Otherwise, you need some way to ensure data is persisted.
Perhaps these RAM advancements will make hybrid DRAM/NAND drives cheaper and more performant.
/var and / are odd choices (unless you are doing that for ephemeral containers), but in some setups I've done that with /var/log (mostly embedded SDCard machines with log shipping, where I'd prefer to avoid non-essential writes). When you do that it's good to set up some swap and start rotating logs as soon as you see swap usage.
On environments with log shipping, having /var/log as a tmpfs makes a lot of sense.