I was surprised to see how few mainframes Big Tech companies that handle huge transaction volumes (Netflix, Meta, Google, ...) use relative to legacy industries (banking, retail, insurance, ...).
This makes me suspect that the real reason mainframes continue to exist is industry inertia, vendor lock-in, or legacy code rather than any performance/cost advantage.
I would rather be without Netflix and Google than banking and food ... but to each their own.
While some of it is inertia (mostly because rewriting truly large applications is hard and expensive), there is also the point that most of those industries cannot easily tolerate "eventually consistent" data.
Not all transactions are created equal; the hardest come with a set of requirements called ACID (atomicity, consistency, isolation, durability).
ACID in the classic RDBMS is not a random choice but is driven by the real requirements of its users (the database users, i.e. applications in the business sense - not people). The ACID properties are REALLY hard to deliver at scale in a distributed system with high throughput. Compare the transaction rate of the Bitcoin network (around 500k/day across many, many "servers") with Visa (500M+/day) - the latter, last I heard any technical details, was driven by basically two (!) large mainframes roughly 50 km apart.
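To make the "hardest" requirement concrete, here is a minimal sketch of why atomicity matters for something like a bank transfer: the debit and the credit must commit together or not at all. This uses SQLite purely for illustration (a real bank would not); the table, account names, and amounts are all invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # "with conn" opens a transaction: commit on success, rollback on error
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            # enforce a non-negative balance; raising aborts the whole transaction
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, "alice", "bob", 150)  # fails: the debit is rolled back too
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# -> [('alice', 100), ('bob', 0)]
```

Doing this on one box is easy; guaranteeing the same all-or-nothing, serialized behavior across many machines at Visa-level throughput is the hard part the comment is pointing at.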
None of the companies you mention need strict ACID, since nobody will complain if different users see slightly different truths - hence scaling writes is fairly easy.
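A toy sketch of what "slightly different truths" means under eventual consistency: a write is acknowledged by the primary immediately and only reaches a replica after an asynchronous replication step, so two readers can briefly disagree. All names and the in-memory "replicas" are invented for illustration.

```python
primary = {"likes": 0}
replica = {"likes": 0}
replication_log = []  # pending updates not yet applied to the replica

def write(key, value):
    primary[key] = value                  # acknowledged immediately: fast writes
    replication_log.append((key, value))  # shipped to the replica later

def replicate():
    # apply pending updates in order; until this runs, replica readers see stale data
    while replication_log:
        key, value = replication_log.pop(0)
        replica[key] = value

write("likes", 1)
print(primary["likes"], replica["likes"])  # -> 1 0  (readers disagree for a while)
replicate()
print(primary["likes"], replica["likes"])  # -> 1 1  (converged)
```

For a like counter or a video view count that window of disagreement is harmless, which is why such systems can scale writes by just adding replicas; a bank balance cannot afford it.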