Floating point numbers can be justified by two criteria:
1) The distribution of typical numbers
2) The desired precision across a distribution
1: Floating-point numbers correspond to an exponential distribution of values, which comes up very often in science, engineering, etc. Real data is rarely packed neatly into a small [-a, a] range. (A small sketch of this spacing follows below.)
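To make the spacing concrete, here is a minimal sketch (not from the original text, just an illustration using IEEE 754 doubles): the gap between a value and the next representable value (the ULP) grows roughly in proportion to the magnitude, so representable numbers are densest near zero and thin out exponentially.

```python
import math

# Assumed example values; the point is that ulp(x) scales with |x|,
# while ulp(x)/x stays roughly constant (on the order of 1e-16 for doubles).
for x in [1e-6, 1e-3, 1.0, 1e3, 1e6, 1e12]:
    gap = math.ulp(x)  # distance from x to the next larger representable double
    print(f"x = {x:>10.0e}   ulp(x) = {gap:.3e}   ulp(x)/x = {gap / x:.3e}")
```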
2: Floating-point numbers satisfy the following error metric approximately uniformly: for any -max < x < max, the relative error |float(x) - x| / |x| is small. This again agrees with real-world requirements for data, where we tolerate larger absolute errors for larger numbers.
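A quick way to see this (a sketch with assumed example values, not from the original): rounding the same decimal fraction at very different magnitudes gives wildly different absolute errors, but the relative error stays roughly constant, bounded by about half the machine epsilon (~1.1e-16 for doubles).

```python
from decimal import Decimal

# Round 1.1 * 10^e to the nearest double at several magnitudes and
# compare absolute vs. relative rounding error.
for exponent in [-6, 0, 6, 12]:
    exact = Decimal("1.1") * Decimal(10) ** exponent  # exact decimal value
    x = float(exact)                                   # nearest representable double
    abs_err = abs(Decimal(x) - exact)
    rel_err = abs_err / exact
    print(f"10^{exponent:>3}: abs error = {abs_err:.3e}, rel error = {rel_err:.3e}")
```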