When the designers of the early programming languages FORTRAN and ALGOL named one of their numeric data types
REAL, was it simply for convenience, or were they being optimistic?
Just how close is Java's
float type to the real number system of mathematics? Or, for that matter, what about the
int type and the mathematical set of integers (the whole numbers)? We know there are gremlins such as overflow and roundoff error, but what other pitfalls might be lurking?
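Both gremlins are easy to provoke. As a minimal sketch (the class name `Gremlins` is my own, not from the text), the snippet below shows `int` arithmetic silently wrapping around at its maximum value, and ten additions of `0.1f` failing to sum to exactly `1.0f` because 0.1 has no finite binary representation:

```java
public class Gremlins {
    public static void main(String[] args) {
        // Overflow: Java int arithmetic wraps around modulo 2^32,
        // so one past the maximum lands at the minimum.
        int big = Integer.MAX_VALUE;        // 2147483647
        System.out.println(big + 1);        // prints -2147483648

        // Roundoff: 0.1 is a repeating fraction in binary, so each
        // float addition carries a tiny error that accumulates.
        float sum = 0.0f;
        for (int i = 0; i < 10; i++) {
            sum += 0.1f;
        }
        System.out.println(sum == 1.0f);    // prints false
    }
}
```

Neither behavior is a bug in the JVM; both follow directly from fixed-width two's-complement integers and IEEE 754 binary floating point, which the rest of this discussion examines.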
Consider the common fractions , , , , , and . In the decimal, or base 10, number system, ...