You often have to deal with very large numbers: the number of protons in the universe, for example, which needs around 79 decimal digits. Clearly there are lots of situations where you need more than the 10 decimal digits you get from a 4-byte binary number. Equally, there are lots of very small numbers: the amount of time in minutes, for example, that it takes the typical car salesman to accept your cash offer on his wonderful 1982 Ford LTD (and only driven 380,000 miles ...). A mechanism for handling both these kinds of numbers is—as you will have guessed from the title of this section—floating-point numbers.
A floating-point representation of a number is a decimal point followed by a fixed number of digits, multiplied by a power of 10 to get the number you want. It's easier to demonstrate than explain, so let's take some examples. The number 365 in normal decimal notation would be written in floating-point form as:

0.365E03
Here, the E stands for exponent and is the power of 10 by which the mantissa, 0.365, is multiplied to get the required value. That is:
0.365 × 10 × 10 × 10
which is clearly 365.
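Java accepts this same E notation directly in its floating-point literals (a literal with an exponent is of type double unless you add an f suffix). The following standalone snippet, which is an illustration rather than an example from the text, shows that the exponent form and the repeated multiplication produce the same value:

```java
public class ExponentDemo {
    public static void main(String[] args) {
        // 0.365E03 means 0.365 multiplied by 10 three times
        double viaExponent = 0.365E03;          // prints as 365.0
        double viaArithmetic = 0.365 * 10 * 10 * 10;

        System.out.println(viaExponent);
        System.out.println(viaArithmetic);
    }
}
```

Note that the compiler converts the whole literal 0.365E03 to the nearest representable value of 365 in one step, so no rounding error accumulates the way it can in a chain of multiplications.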
Now let's look at a smallish number:

0.365E-04
This is evaluated as 0.365 × 10⁻⁴, which is 0.0000365: exactly the time in minutes required by the car salesman to accept your money.
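A negative exponent works the same way in a Java literal, shifting the decimal point to the left instead of the right. Another small illustrative sketch:

```java
public class SmallNumberDemo {
    public static void main(String[] args) {
        // A negative exponent divides by 10 for each unit: 0.365 / 10^4
        double minutes = 0.365E-04;

        // Java chooses its own scientific notation when printing small values
        System.out.println(minutes);
    }
}
```

Both 0.365E-04 and 3.65E-5 are just different spellings of the same value, 0.0000365, and Java converts either literal to the same internal representation.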
The number of digits in the mantissa of a floating-point number depends on the type of the floating-point number that you are using. The Java type float provides the equivalent ...