Being a numeric type as well, decimal defines the same arithmetic operators as the integral and floating-point types. The main difference lies in how rounding is done, as you saw in Chapter 4, "Language Essentials." The scale of a result is determined from the scales of the operands. For example, if you're adding two decimal values with different scales, what does the resulting scale look like? For addition, the larger of the two scales is used: the operands are aligned to that scale, the numbers are added, and rounding is applied only if the exact result doesn't fit.
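This behavior is easy to observe, because decimal preserves its scale in the string representation. A minimal sketch:

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // 1.2m has scale 1; 3.45m has scale 2.
        decimal a = 1.2m;
        decimal b = 3.45m;

        // Addition uses the larger of the two scales (2 here),
        // so the result prints as "4.65".
        Console.WriteLine(a + b);          // 4.65

        // Trailing zeros are preserved for the same reason:
        // 1.0m + 2.00m keeps scale 2 and prints "3.00".
        Console.WriteLine(1.0m + 2.00m);   // 3.00
    }
}
```

Notice that 1.0m + 2.00m prints as 3.00 rather than 3: the scale is part of the decimal value's representation, not just a formatting choice.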
One more thing on the subject of rounding: In Chapter 4, you saw how the banker's rounding approach is used in the implementation of
System.Decimal. One characteristic of banker's rounding is that it minimizes cumulative rounding error: midpoint values are rounded to the nearest even digit, so round-ups and round-downs tend to average out over many operations. ...
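You can see banker's rounding in action through Math.Round, which defaults to MidpointRounding.ToEven for decimal values:

```csharp
using System;

class BankersRoundingDemo
{
    static void Main()
    {
        // Math.Round on decimal uses MidpointRounding.ToEven by default:
        // midpoint values round toward the nearest even digit.
        Console.WriteLine(Math.Round(2.5m));  // 2 (rounds down to even)
        Console.WriteLine(Math.Round(3.5m));  // 4 (rounds up to even)

        // Contrast with AwayFromZero, which always rounds midpoints
        // up in magnitude, as many people learn in school.
        Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero)); // 3
    }
}
```

Because 2.5 rounds down and 3.5 rounds up, a long series of midpoint roundings doesn't drift systematically in one direction, which is exactly the property that makes this mode popular in financial calculations.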