It's not going to happen anytime soon, because you simply cannot cheat physics.
A system that supports OLAP/ad-hoc queries is going to need a ton of IOPS and probably also a lot of CPU capacity to do your data transformations. If you want this to also scale beyond the capacity limits of a single node, then you're going to run into distributed joins, and the network becomes a huge factor.
Now, to support OLTP at the same time, your big, distributed system needs to support ACID, be highly fault-tolerant, etc.
What you end up with is a system that has to be scaled in every dimension. It needs to be provisioned for the maximum possible workload you can throw at it, or else a random, expensive reporting query is going to DoS it and take your customer-facing system down with it. It is sort of possible, but it's going to cost A LOT of money. You have to have tons and tons of "spare" capacity.
Which brings us to the core of engineering -- anyone can build a system that burns dump trucks full of venture capital dollars to create the one-system-to-rule-them-all. But businesses that want to succeed need to optimize their costs so their storage systems don't break the bank. This is why the current status quo of specialized systems that each do one task well isn't going to change. The current technology paradigm cannot be optimized for every task simultaneously; we have to make tradeoffs.
What I want is:
* a primary transactional DB that I can write to fast, with ACID guarantees and a read-after-write guarantee, and that supports failover
* one (or more) secondaries that are optimized for analytics and search, and that can tell me how caught up they are with the primary
If they can all speak the same language (SQL) and can replicate from the primary with no additional tools/technology (Postgres replication, for example), I will take it any day.
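Postgres already covers the "how caught up" part in the same SQL you use for everything else; a minimal sketch, assuming built-in streaming replication on Postgres 10 or later:

    -- On the primary: per-standby replication lag as the walsender sees it
    SELECT application_name, state, write_lag, flush_lag, replay_lag
    FROM pg_stat_replication;

    -- On a standby: how stale the most recently replayed transaction is
    SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;

(The standby-side number keeps growing while the primary is idle even though nothing is actually behind, so in practice you'd sanity-check it against the primary-side view.)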
It is about operational simplicity and not needing to know multiple technologies intimately. Granted, even if this is "just" PostgreSQL, it really is not: every customization will have its own tuning and whatnot. But the context is still all PostgreSQL.
Yes, this will not magically get around the CAP theorem, but in most cases we don't need to care too much.