
That is what Intel does: they build up a market (Optane) and then do a rug pull (depth cameras). They keep doing this thing where they make a huge push into a new technology, don't see the uptake, and let it die, instead of building slowly and then doing a big push at the right time. Optane support was just getting mature in the Linux kernel when they pulled it. And they focused on some weird cost-cutting move when marketing it as a RAM replacement for semi-idle VMs, ok.

They keep repeating the same mistakes all the way back to https://en.wikipedia.org/wiki/Intel_iAPX_432



The rug pull on Optane was incredibly frustrating. Intel developed a technology which made really meaningful improvements to workloads in an industry that is full of sticky late adopters (RDBMSes). They kept investing right up until they had unequivocally made their point and the late adopters were just about getting it... and then killed it!

It's hard to understand how they could have played that particular hand any worse. Even a few years on, I'm missing Optane drives because there is still no functional alternative. If they had just held out a bit longer, they would have created a set of enterprise customers who would still be buying the things in 2040.


One could see the death of Optane coming from a mile away. It was only kept afloat by Intel, and its main issue was that, while it was really cool tech, it was a solution looking for a problem.

You need scratch space that's resilient to a power outage? An NVDIMM is faster and cheaper. You need fast storage? Flash keeps getting faster and cheaper. Optane was squeezed from both sides and could never hope to generate the volume needed to cut costs.

So now imagine that you are at Intel deciding what initiatives to fund. The company is in trouble and needs to show some movement out of the red, preferably quickly. It also lost momentum and lost ground to competitors, so it needs to focus. What do you do? You kill all the side projects that will never make much money. And of course you kill a lot of innovation in the process, but how would you justify the alternative?


Aren’t NVMe disks basically the same value as Optane? Comments saying Optane was amazing don’t make sense if NVMe is basically as good and there are other NVMe disk manufacturers.


They are not really the same.

First, NVMe is a protocol to access block storage. It can be used to access any kind of block device, Optane, SSD, NVDIMM, virtual storage on EC2, etc. So it's true that the protocol is the same (well, not quite - more on this in a bit), but that's like saying a server is the same as an iPhone because they can both speak TCP/IP.

What was the "more in a bit" bit? Persistent memory (PMEM) devices like NVDIMMs and Optane can usually speak two protocols. They can either act as storage, or as memory expansion. But this memory also happens to be non-volatile.

This was sold as a revolution, but it turned out that it's not easy for current operating systems and applications to deal with memory with vastly different latencies. Also it turns out that software is buggy, and being able to lose state by rebooting is useful. And so Optane in memory mode never really caught on, and these devices were mostly used as a storage tier. However: look up MemVerge.

So you are right that it turned out to be a faster SSD, but the original promise was a lot more. And here comes the big problem: because Optane was envisioned as a separate kind of product between RAM and SSD, the big price differential could be justified. If it's just a faster SSD - well, the market has spoken.


Thanks for the great explanation


It’s especially hard to understand because so much of their management had a degree which conferred a mastery of business administration. I mean, it’s almost like you could take any tenured software engineer at the company, and they would have been in a better position to manage the company more effectively. That’s very surprising, and might suggest that people with MBAs are total idiots who understand everything through GRE-friendly analogy rather than, well, actually understanding anything.


They are a weird company. Their marketing people showed up and invested a significant amount into a buy of Optane gear with our OEM a few months before they killed the product. They pulled the rug out from under themselves in addition to the customers.


Optane was incredible. It's insane that Intel dropped this.


> Even a few years on, I'm missing Optane drives because there is still no functional alternative. [...] they would have created a set of enterprise customers who would still be buying the things in 2040.

I guess many on HN are software developers looking at Optane.

In reality Optane was simply not cost effective. Optane came at a time when DRAM cost per GB was at its peak; the idea that developers could have slower DRAM that is non-volatile sounds great, until they realise slower DRAM causes CPU performance regressions. Optane Memory, even on its roadmap for future products, would always effectively be another layer between DRAM and NAND (storage). And they could barely make profits when DRAM was at its peak. I don't think people realise there is nearly a 4x price difference between the height of DRAM prices in ~2016 and ~2023.

In terms of Optane Storage, it again arrived at NAND's cost-per-GB peak, and it was barely competing or making profits. Most would immediately point out it has lower latency and better QD1 performance. But Samsung showed with Z-NAND, which is specifically tuned SLC NAND, that you can get close enough performance and far higher bandwidth and QD32 results, while using much lower power. And it had a reliable roadmap that moved alongside NAND development. Even Samsung stopped development of Z-NAND in 2023.

The truth is the market wasn't interested enough in Optane at the price, performance, and features it was offering. And as for Intel's execution on Optane: they either over-promised (as they did in that era) and failed to deliver on time, or were basically lying about the potential. And they failed to bring down the cost of fabbing it, which they blame on Micron, but in reality it is all on Intel.

The industry has also repeatedly stated it is not interested in a technology that is single-sourced by either Intel or Micron, unlike NAND and DRAM.

Intel was giving Optane away and pushing it to Facebook and other hyperscalers. But even then they couldn't fill the minimum order from Micron and had to pay hundreds of millions per year for empty fabs.


Executives. Who everyone on here claims fairly earn their multi-million-dollar salaries.


I made the mistake early in our startup of spending several months and quite a bit of cash building our first IoT product on the Intel Edison platform, only to get zero support on the bugs in the SPI chip and the non-existent (but advertised) microcontroller. We finally gave up and made our own boards based on another SOM (and eventually stopped building boards entirely), and they rather unceremoniously cancelled the Edison in 2017. I guess nobody else was surprised, but I had naively thought the platform did have potential and that a huge company like Intel would support the things they sold.


> this thing where they do a huge push into a new technology, then don't see the uptake and let it die.

Do we need a second "killed by google"?

To companies like Intel or Google anything below a few hundred million users is a failure. Had these projects been in a smaller company, or been spun out, they'd still be successful and would've created a whole new market.

Maybe I'm biased — a significant part of my career has been working for German Mittelstand "Hidden Champions" — but I believe you don't need a billion customers to change the world.


Intel's 5G radio department was formed in 2011 by buying another firm, and then it was bought by Apple in 2019. Apple announced a 5G modem this year (the C1). It took 14 years to get a viable 5G wireless modem, and it still doesn't have feature parity with the cellular modems in the other iPhones. So this happens pretty often with Intel.


To this day, I miss Optane. I work for a timeseries database company focused on finance, and the number of use cases I have that scream “faster than NVMe, slower than RAM” is insane. And these companies have money to throw at these problems.

Which raises the question: why isn’t anyone else stepping into this gap? Is the technology heavily patented?


Yes, and Intel got caught skirting them.


Indeed. Optane/3D XPoint was mind-blowing, futuristic stuff, but it was just gone after 5 years on the market? Talk about short-sighted.


They got caught is what happened.


Caught doing what? Can you provide some context or links to search?


When Energy Conversion Devices went bankrupt, it appears Intel pirated the technology, and never bothered to pay the royalties for the PCM memory in Optane.

Case No. 12-43166 is what killed Optane.

Or, in a manner of speaking, Intel being Intel killed Optane.


The legal risks were at most the last straw. If Optane had a promising future, Intel could have made the investments necessary to make the legal issues go away. If Optane had a promising future, Micron would have helped Intel secure that future. The long-term value of a persistent memory technology capable of taking a big chunk out of both the DRAM and NAND flash markets is huge.

Optane did not have a promising future. The $/GB gap between 3D XPoint memory and 3D NAND flash memory was only going to keep growing. Optane was doomed to only be appealing to the niche of workloads where flash memory is too slow and DRAM is too expensive. But even DRAM was increasing in density faster than 3D XPoint, and flash (especially the latency-optimized variants that are still cheaper than 3D XPoint) is fast enough for a lot of workloads. Optane needed a breakthrough improvement to secure a permanent place in the memory hierarchy, and Intel couldn't come up with one.


> They continue to do this thing where they do a huge push into a new technology, then don't see the uptake and let it die.

Except Intel deliberately made AVX-512 a feature exclusively available to Xeon and enterprise processors in future generations. This backward step artificially limits its availability, forcing enterprises to invest in more expensive hardware.

I wonder if Intel has taken a similar approach with Arc GPUs, which lack support for GPU virtualization (SR-IOV). They did somewhat add vGPU support for the integrated graphics in 12th-14th Gen chips through the i915 driver on Linux. It’s a pleasure to have graphics acceleration in multiple VMs simultaneously, through the same GPU.


They go out of their way to segment their markets: ECC, AVX, Optane support (only on specific server-class SKUs). I hate it. I hate it as a home PC user, I hate it as an enterprise customer, I hate it as a shareholder.


Every company does this. If your grandma only uses a web browser, a word processor, and Excel, does she really want to spend an additional $50 on a feature she'll never use? Same with NPUs. Different consumers want different features at different prices.


Except it hinders adoption, because not having a feature in entry-level products will mean less incentive (and ability) for software developers to use it. Compatibility is so valuable it makes everyone converge on the least common denominator, so when you price-gouge on a software-exposed feature, you might as well bury this feature altogether.
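
To make that concrete, here is a rough sketch (mine; the sum() function and its helpers are made up, the builtins are GCC/Clang): every software-exposed feature the cheap parts lack turns into a runtime check plus a fallback path, and the fallback is what developers end up shipping and tuning for everyone.

    /* Sketch of the least-common-denominator effect, assuming GCC/Clang
     * (__builtin_cpu_supports and the target attribute). The example sum()
     * and both helpers are hypothetical, not from any particular library. */
    #include <immintrin.h>
    #include <stddef.h>
    #include <stdio.h>

    __attribute__((target("avx512f")))
    static float sum_avx512(const float *a, size_t n) {
        __m512 acc = _mm512_setzero_ps();                  /* 16 floats per vector */
        size_t i = 0;
        for (; i + 16 <= n; i += 16)
            acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));

        float lanes[16];
        _mm512_storeu_ps(lanes, acc);                      /* horizontal sum of the lanes */
        float s = 0.0f;
        for (int k = 0; k < 16; k++)
            s += lanes[k];
        for (; i < n; i++)                                 /* leftover tail elements */
            s += a[i];
        return s;
    }

    static float sum_scalar(const float *a, size_t n) {    /* the baseline everyone must ship */
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    float sum(const float *a, size_t n) {
        /* Runtime dispatch: the fast path only exists on SKUs that kept AVX-512. */
        return __builtin_cpu_supports("avx512f") ? sum_avx512(a, n)
                                                 : sum_scalar(a, n);
    }

    int main(void) {
        float v[100];
        for (int i = 0; i < 100; i++) v[i] = 1.0f;
        printf("%.1f\n", sum(v, 100));                     /* prints 100.0 either way */
        return 0;
    }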


Three fallacies and you are OUT!


They've changed that decision. All upcoming cores (even E-cores) will have AVX10 with 512-bit support.

https://www.phoronix.com/news/Intel-AVX10-Drops-256-Bit


Well, Itanium might be a counterexample; they probably tried to make that work for far too long.


Itanium was more of an HP product than an Intel one.


Itanium worked as intended.


So far as killing HP PA-RISC, SGI MIPS, and DEC Alpha, and seriously hurting the chances of adoption for SPARC and POWER outside of their respective parents (did I miss any)?

Thing is, they could have killed it by 1998 without ever releasing anything, and that alone would have killed the other architectures it was trying to compete with. Instead they waited until 2020 to end support.

What the VLIW of Itanium needed and never really got was proper compiler support. Nvidia has this in spades with CUDA: it's easy to port to Nvidia, and you do get serious speedups. AVX-512 never offered enough of a speedup from what I could tell, even though it was well supported by at least ICC (and numpy/scipy when properly compiled).


> What the VLIW of Itanium needed and never really got was proper compiler support.

This is kinda under-selling it. The fundamental problem with statically-scheduled VLIW machines like Itanium is it puts all of the complexity in the compiler. Unfortunately it turns out it's just really hard to make a good static scheduler!

In contrast, dynamically-scheduled out-of-order superscalar machines work great but put all the complexity in silicon. The transistor overhead was expensive back in the day, so statically-scheduled VLIWs seemed like a good idea.

What happened was that static scheduling stayed really hard while the transistor overhead for dynamic scheduling became irrelevantly cheap. "Throw more hardware at it" won handily over "Make better software".


No, VLIW is even worse than this. Describing it as a compiler problem undersells the issue. VLIW is not tractable for a multitasking / multi-tenant system due to cache residency issues. The compiler cannot efficiently schedule instructions without knowing what is in cache, but it can’t know what’s going to be in cache if it doesn’t know what’s occupying the adjacent task time slices. Add virtualization and it’s a disaster.


It only works for fixed workloads, like accelerators, with no dynamic sharing.


Yeah, VLIW is still used for stuff like DSP and GPUs, but it doesn't make sense for general computing.


GPUs have long since moved away from VLIW as well


They are still mostly statically scheduled, which I think is the point the parent is making.


>What happened was that static scheduling stayed really hard while the transistor overhead for dynamic scheduling became irrelevantly cheap

Is the latter part true? AFAIK most of modern CPU die area and power consumption goes towards overhead as opposed to the actual ALU operations.


If it's pure TFLOPs you're after, you do want a more or less statically scheduled GPU. But for CPU workloads, even the low-power efficiency cores in phones these days are out of order, and the size of reorder buffers in high-performance CPU cores keeps growing. If you try to run a CPU workload on GPU-like hardware, you'll just get pitifully low utilization.

So it's clearly true that the transistor overhead of dynamic scheduling is cheap compared to the (as-yet unsurmounted) cost of doing static scheduling for software that doesn't lend itself to that approach. But it's probably also true that dynamic scheduling is expensive compared to ALUs, or else we'd see more GPU-like architectures using dynamic scheduling to broaden the range of workloads they can run with competitive performance. Instead, it appears the most successful GPU company largely just keeps throwing ALUs at the problem.


I think OP meant "transistor count overhead" and that's true. There are bazillions of transistors available now. It does take a lot of power, and returns are diminishing, but there are still returns, even more so than just increasing core count. Overall what matters is performance per watt, and that's still going up.


"they could have killed it by 1998, without ever releasing anything"

perhaps Intel really wanted it to work and killing other architectures was only a side effect?


> So far as killing HP PA-Risc, SGI MIPS, DEC Alpha, and seriously hurting the chance for adoption of Sparc, and POWER outside of their respective parents (did I miss any)?

I would argue that it was bound to happen one way or another eventually, and Itanium just happened to be a catalyst for the extinction of nearly all alternatives.

High to very high performance CPU manufacturing (NB: the emphasis is on the manufacturing) is a very expensive business, and back in the 1990s no one was able (or willing) to invest in the manufacturing and commit to the continuous investment needed to keep CPU manufacturing facilities up to date. For HP, SGI, Digital Equipment, Sun, and IBM, a high performance RISC CPU was the single most significant enabler, yet not their core business. It was a truly odd situation where they all had a critical dependency on CPUs, yet none of them could manufacture them themselves and they were all reliant on a third party[0].

Even Motorola, which was in some very serious semiconductor business, could not meet the market demands[1].

Look at how much it costs Apple to get what they want out of TSMC – it is tens of billions of dollars almost yearly, if not yearly. We can see very well today how expensive it is to manufacture a bleeding-edge, high-performing CPU – look no further than Samsung, GlobalFoundries, the beloved Intel, and many others. Remember the days when Texas Instruments used to make CPUs? Nope, they don't make them anymore.

[0] Yes, HP and IBM used to produce their own CPUs in-house for a while, but then that ceased as well.

[1] The actual reason why Motorola could not meet the market demand was, of course, an entirely different one – the company management did not consider CPUs to be their core business, as they primarily focused on other semiconductor products and on defence, which left CPU production in an underinvested state. Motorola could have become a TSMC if they could have seen the future through a silicon dust shroud.


Bad habits are hard to break!


Optane was cancelled because the manufacturer sold the fab.


Oh? Complete coincidence they got caught not paying ECDL royalties?


?

wdym


When Energy Conversion Devices went bankrupt, it appears Intel pirated the technology, and never bothered to pay the royalties for the PCM memory in Optane.

Case No. 12-43166 is what finally killed Optane.


Being right at the wrong time is the same as being wrong.


I am very disappointed about Optane drives. They were a perfect fit for a superfast, vertically scalable database. I was going to build a solution based on them, but suddenly they are gone for all practical intents and purposes.



