Computer Arithmetic 85

4.7.8 IEEE 754 Format

The IEEE 754 format of floating-point representation using the scientific notation scheme is adopted by most computers worldwide. This scheme offers a 32-bit format for single precision numbers [Figure 4.27(a)] and a 64-bit format for double precision numbers [Figure 4.27(b)]. The base is taken as 2 and the most significant bit is reserved as the sign bit.

Figure 4.27 IEEE 754 format (a) Single precision and (b) Double precision

For single precision numbers, 8 bits are allowed for storing the exponent in biased (excess-127) form. For double precision representation, three more bits are allowed for it, making the width of the biased exponent 11 bits (excess-1023). The mantissa part, in both cases of single and double precision, consists of the binary representation of the normalized fractional part, omitting the leading 1. Thus, the binary point is assumed to be at the beginning of the mantissa.
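The field layout just described can be illustrated with a short sketch. This is not part of the original text; it uses Python's struct module to expose the raw bits of a value, and the helper name decode_single is ours:

```python
import struct

def decode_single(value):
    """Split a float into its IEEE 754 single-precision fields:
    sign bit, 8-bit biased exponent, and 23-bit mantissa."""
    bits = struct.unpack('>I', struct.pack('>f', value))[0]
    sign = bits >> 31
    biased_exponent = (bits >> 23) & 0xFF   # excess-127 form
    mantissa = bits & 0x7FFFFF              # leading 1 is implicit, not stored
    return sign, biased_exponent, mantissa

# 1.0 = +1.0 x 2^0: sign 0, biased exponent 0 + 127 = 127, mantissa 0
print(decode_single(1.0))    # (0, 127, 0)
# -6.5 = -1.101 (binary) x 2^2: sign 1, exponent 2 + 127 = 129,
# fraction .101 = 0.625, stored as 0.625 * 2^23 = 5242880
print(decode_single(-6.5))   # (1, 129, 5242880)
```

Note how the mantissa of 1.0 is all zeros: the leading 1 of the normalized significand is assumed, so it never occupies a bit.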

One important feature of the IEEE 754 format is the representation of zero. We have already observed that the scientific notation scheme used by us is unable to cover a narrow range on either side of 0 (Figure 4.25). In IEEE 754 format, all zeros in the exponent as well as the mantissa field (for single as well as double precision schemes) would be taken as either +0 or −0, depending upon the value of the sign bit.
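The two signed zeros can be verified directly from the bit patterns. The following sketch (ours, not the book's; the helper name bits_of is an assumption) shows that +0 and −0 differ only in the sign bit:

```python
import struct

def bits_of(value):
    """Return the 32-bit single-precision pattern of a float as hex."""
    return f"0x{struct.unpack('>I', struct.pack('>f', value))[0]:08X}"

# Exponent and mantissa fields are all zeros; only the sign bit differs.
print(bits_of(+0.0))   # 0x00000000
print(bits_of(-0.0))   # 0x80000000
# Despite the different patterns, IEEE 754 treats the two zeros as equal.
print(+0.0 == -0.0)    # True
```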

4.8 FLOATING-POINT ARITHMETIC AND UNIT OPERATIONS

In Section 4.2 through Section 4.6, we have discussed how to perform basic arithmetic operations (addition, subtraction, multiplication and division) with signed integers expressed in the two's complement scheme. These are fundamental arithmetic operations, and in computer arithmetic they are known as unit operations. However, when these unit operations have to be performed on floating-point numbers represented through scientific notation, some amount of pre-processing and post-processing would be necessary.

This need arises from the fact that at the input and output stages, all real numbers would be using the standard decimal format. Just think of any computer program and the data set used there. Definitely, during the input stage, we do not convert the numbers to their scientific notation using the IEEE 754 format with biased exponent and mantissa. We simply express them in the standard decimal form abcd.pqrs or, at most, a.bcd × 10^n. Moreover, it would be difficult for us to interpret the results if they came out in the form of 01111010010110101001000010000100, a 32-bit representation. [The reader might have to spend 10 minutes or more to convert it to our familiar decimal system.]
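The conversion the reader is spared can be sketched in a few lines, following the field layout of Section 4.7.8 (this example is ours; it handles normal numbers only, not zeros, subnormals, infinities or NaNs):

```python
import struct

def decode_bit_string(s):
    """Decode a 32-bit IEEE 754 single-precision pattern by hand:
    sign bit, excess-127 exponent, 23-bit fraction with implicit leading 1.
    Normal (normalized) numbers only."""
    sign = -1.0 if s[0] == '1' else 1.0
    biased_exponent = int(s[1:9], 2)        # 8 bits, excess-127
    mantissa = int(s[9:], 2)                # 23 fraction bits
    significand = 1.0 + mantissa / 2**23    # restore the implicit leading 1
    return sign * significand * 2.0 ** (biased_exponent - 127)

value = decode_bit_string('01111010010110101001000010000100')
print(value)   # roughly 2.84e+35 -- hardly something one reads off the bits
```

The manual result agrees with what struct.unpack would give for the same pattern, which is a convenient way to check such a decoder.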
