C# has the following predefined numeric types:

C# type   System type     Suffix   Size      Range

Integral—signed
sbyte     System.SByte             8 bits    −2^7 to 2^7−1
short     System.Int16             16 bits   −2^15 to 2^15−1
int       System.Int32             32 bits   −2^31 to 2^31−1
long      System.Int64    L        64 bits   −2^63 to 2^63−1

Integral—unsigned
byte      System.Byte              8 bits    0 to 2^8−1
ushort    System.UInt16            16 bits   0 to 2^16−1
uint      System.UInt32   U        32 bits   0 to 2^32−1
ulong     System.UInt64   UL       64 bits   0 to 2^64−1

Real
float     System.Single   F        32 bits   ± (~10^−45 to 10^38)
double    System.Double   D        64 bits   ± (~10^−324 to 10^308)
decimal   System.Decimal  M        128 bits  ± (~10^−28 to 10^28)
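The sizes and ranges in the table can be verified directly, since each numeric type exposes MinValue and MaxValue constants, and sizeof reports a built-in type's size in bytes (a short sketch; nothing here is beyond the standard types above):

```csharp
using System;

class NumericRanges
{
    static void Main()
    {
        // MinValue/MaxValue match the ranges in the table.
        Console.WriteLine(int.MinValue);   // -2147483648  (−2^31)
        Console.WriteLine(int.MaxValue);   //  2147483647  (2^31 − 1)
        Console.WriteLine(byte.MaxValue);  //  255         (2^8 − 1)

        // sizeof reports size in bytes, matching the table's bit widths.
        Console.WriteLine(sizeof(short));  // 2 (16 bits)
        Console.WriteLine(sizeof(long));   // 8 (64 bits)
    }
}
```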
Of the integral types, int
and long
are first-class citizens and are favored by both C# and the runtime. The
other integral types are typically used for interoperability or when space
efficiency is paramount.
Of the real number types, float
and double
are called floating-point
types and are typically used for scientific calculations. The
decimal
type is typically used for
financial calculations, where base-10-accurate arithmetic and high
precision are required. (Technically, decimal
is a floatingpoint type too, although
it’s not generally referred to as such.)
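The difference matters in practice: base-10 fractions such as 0.1 have no exact binary representation, so binary floating-point arithmetic accumulates tiny errors that decimal avoids. A minimal illustration:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // 0.1 and 0.2 are stored approximately as binary doubles,
        // so their sum is not exactly 0.3:
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);   // False

        // decimal stores base-10 digits, so the same sum is exact
        // (the M suffix denotes a decimal literal):
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);  // True
    }
}
```

This is why decimal, despite being slower and having a smaller range, is the usual choice for monetary values.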
Integral literals can use decimal or hexadecimal notation; hexadecimal is denoted with the 0x
prefix (for example, 0x7f
is equivalent to 127
). Real
literals may use decimal or exponential notation, such as 1E06
.
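Both notations in action (a small sketch using the literals mentioned above):

```csharp
using System;

class NumericLiterals
{
    static void Main()
    {
        int hex = 0x7f;              // hexadecimal notation
        Console.WriteLine(hex);      // 127

        double million = 1E06;       // exponential notation
        Console.WriteLine(million);  // 1000000
    }
}
```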
By default, the compiler infers a numeric
literal to be either double
or an
integral type:
• If the literal contains a decimal point or the exponential symbol (E), it is a double.
• Otherwise, the ...