The decimal type is unique to C#. It is a 128-bit data type that can represent values from 1.0 × 10⁻²⁸ to approximately 7.9 × 10²⁸ with 28–29 significant digits, making it particularly suitable for financial and monetary calculations that require a high degree of precision.
According to the C# specification, for decimals with an absolute value smaller than 1.0M, the value represented is exact to the 28th decimal place. For decimals with an absolute value equal to or greater than 1.0M, the value is exact to 28 or 29 significant figures.
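A quick sketch of this precision in practice: dividing 1.0m by 3.0m yields a result carried to 28 significant digits, whereas the same division with double keeps only about 15–17. (The class and variable names here are illustrative, not from the original text.)

```csharp
using System;

class DecimalPrecision
{
    static void Main()
    {
        // decimal division is carried out to 28 significant digits
        decimal third = 1.0m / 3.0m;
        Console.WriteLine(third);       // 0.3333333333333333333333333333

        // double keeps only ~15-17 significant digits
        double thirdAsDouble = 1.0 / 3.0;
        Console.WriteLine(thirdAsDouble);
    }
}
```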
You append the suffix M or m to a numeric literal to indicate that the value should be interpreted as a decimal.
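For example (a minimal sketch; the names are illustrative), the suffix works on both integer-style and floating-point-style literals, including exponent notation, and without it a literal such as 19.99 is treated as a double:

```csharp
using System;

class DecimalLiterals
{
    static void Main()
    {
        decimal price = 19.99m;    // the m suffix makes this a decimal literal
        // decimal wrong = 19.99;  // compile error: 19.99 is a double by default
        decimal big = 7.9e28m;     // exponent notation is allowed with the suffix

        // decimal arithmetic preserves the scale (trailing zeros)
        Console.WriteLine(price + 0.01m); // 20.00
        Console.WriteLine(big);
    }
}
```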
At first sight, it may appear that you can implicitly cast a ...