
When I worked in a research lab with some finicky bespoke equipment, I remember the boss making a related point. If our experimental device is made up of N components that are each 95% reliable, that sounds pretty good... until you start thinking about how big N is, and how the overall reliability is only 0.95^N. It doesn’t take long before you’re only getting data one day a week or whatever.
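A quick back-of-the-envelope sketch in Python to show how fast the compounding bites (the 95% figure is from the comment above; the component counts are just illustrative):

    # Overall reliability of a chain of N components, each independently 95% reliable.
    per_component = 0.95              # illustrative per-component reliability
    for n in (5, 10, 20, 50):         # hypothetical component counts
        overall = per_component ** n
        print(f"N={n:3d}: overall reliability ~ {overall:.1%}")
    # N=20 is already down to ~36%, i.e. usable runs on roughly one day in three.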


This is a point to think about when combining 20 services with “5 9s” of uptime in your favorite cloud provider.


For those curious, 0.99999^20 is around 0.9998.


And that’s 1.7 hours of downtime per year.
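For anyone who wants to check the arithmetic, a minimal sketch (assuming 20 services with independent failures):

    # Combined availability of 20 independent services, each at five nines.
    per_service = 0.99999
    combined = per_service ** 20                      # ~0.9998
    hours_per_year = 365 * 24
    downtime_hours = (1 - combined) * hours_per_year
    print(f"combined availability: {combined:.6f}")
    print(f"expected downtime: {downtime_hours:.2f} hours/year")   # ~1.75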

For most users, WiFi/internet outages are a bigger downtime problem anyway.


Do any services like that actually exist?

To be clear, 5 9s is only 5 minutes of downtime each year.
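The quick check for that figure:

    # Five nines of uptime leaves a 0.001% downtime budget.
    minutes_per_year = 365 * 24 * 60
    print((1 - 0.99999) * minutes_per_year)   # ~5.26 minutes/year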


Sounds like the old p=0.05 problem!


OK, so I un-downvoted you to ask: how?


Imagine you have 20 experiments, each with a 5% chance of showing a positive result even when the null hypothesis is true. Then there is a good chance that at least one of them will come up positive even if the null is true for all of them. That experiment is bunk, but you report p=0.05, so all looks good!
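A small simulation sketch of that multiple-comparisons effect (20 tests, alpha = 0.05, all nulls true; the trial count is arbitrary):

    import random

    random.seed(0)
    alpha, n_tests, n_trials = 0.05, 20, 100_000
    # For each trial, run 20 "experiments" whose null is true (p-values uniform
    # on [0, 1]) and count trials where at least one is "significant" by chance.
    hits = sum(
        any(random.random() < alpha for _ in range(n_tests))
        for _ in range(n_trials)
    )
    print(f"P(at least one false positive) ~ {hits / n_trials:.2f}")
    # Analytically: 1 - 0.95**20 ~ 0.64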



