In solving a numerical problem on a computer, we do not usually expect to get the exact answer. Some amount of error is inevitable. Rounding errors may occur initially when the data are represented in the finite number system of the computer. Further rounding errors may occur whenever arithmetic operations are used. In some cases, it is possible to have a catastrophic loss of digits of accuracy, or a more subtle growth of error, as the algorithm proceeds. In either of these cases, one could end up with a completely unreliable computed solution. To avoid this, we must understand how computational errors occur. To do that, we must be familiar with the types of numbers used by the computer.
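Both kinds of error are easy to observe in practice. The following sketch in Python (using standard IEEE double-precision arithmetic; the particular expressions are illustrative, not drawn from the text) shows a rounding error introduced when the data are first represented, and a catastrophic loss of digits when two nearly equal numbers are subtracted:

```python
import math

# Rounding error at input: 0.1 and 0.2 have no exact binary
# representation, so the computed sum differs from 0.3.
print(0.1 + 0.2 == 0.3)   # False on IEEE double-precision hardware

# Catastrophic cancellation: for small x, cos(x) is so close to 1
# that the subtraction 1 - cos(x) destroys all significant digits.
x = 1e-8
naive = (1 - math.cos(x)) / x**2             # true value is about 0.5

# An algebraically equivalent form that avoids the subtraction,
# using the identity 1 - cos(x) = 2 * sin(x/2)**2.
stable = 2 * (math.sin(x / 2) / x)**2

print(naive, stable)
```

Here `naive` evaluates to 0.0, because `cos(1e-8)` rounds to exactly 1.0 in double precision, while `stable` returns the correct value 0.5 to full accuracy. Both formulas are mathematically identical; only their behavior in finite-precision arithmetic differs.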