This is the first post in a new series called “Exploring C#”. The purpose of this series is to explore areas of the language, and the .NET framework in general, that are very useful, easily overlooked, or that I perceive are not well known.
Virtually all applications have literals in them. It is best practice to minimize them and to pull them out into named constants, but they are always around.
So what is a literal? MSDN defines it as:
A literal is a source code representation of a value.
An example is in the following line of code:
int a = 15;
The “15” is the literal; in this case it is an integer. What is not commonly known is how the compiler interprets that line. The following explanation is an extreme simplification of what actually happens. The compiler looks at the literal “15” and assumes it is an integer, then assigns it to the variable “a”. Since “a” is also an integer, there are no errors.
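As a quick illustration (the Console.WriteLine check is my addition, not part of the original example), you can confirm the inferred type at runtime:

int a = 15;                        // the literal 15 is an int, and “a” is an int
Console.WriteLine(a.GetType());    // prints System.Int32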
Let’s look at a more complex example:
double a = 15;
In this case, the compiler still looks at the literal “15” and assumes it is an integer. It then goes to assign it to “a”, which in this case is a double. Since there is an implicit conversion defined from integer to double, the conversion is done automatically and the value 15 is stored in memory as a double.
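A minimal sketch of that implicit conversion (the extra variables are mine, purely for illustration):

double a = 15;         // the int literal 15 is implicitly converted to double
int i = 15;
double d = i;          // the same implicit conversion applies between variables
Console.WriteLine(a);  // prints 15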
Next example:
double a = 15.0;
The compiler looks at “15.0”, and even though it is numerically equivalent to 15, it assumes that “15.0” is a double and assigns it to “a”, which is also a double.
Error:
float a = 15.0;
The above will generate a compiler error. The reason for this is that the compiler assumes “15.0” is a double, but when it attempts to assign the value to “a”, it sees that “a” is a float. There isn’t an implicit conversion from double to float, and because of that, the compiler generates an error.
Fixed:
float a = 15.0F;
The above will work just fine. The reason is the “F” (a lowercase “f” could have also been used) at the end of the literal. That tells the compiler that the literal should be treated as a float. The value 15.0 is stored in the variable “a” as a float. If the literal were outside the valid range of values for a float, the compiler would generate an error.
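Putting that together (the out-of-range line is commented out; as noted above, it would not compile):

float a = 15.0F;        // the F suffix makes the literal a float
float b = 15.0f;        // lowercase f works as well
// float c = 4.0E38F;   // outside the range of float, so the compiler rejects it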
Up until now, it’s hard to make a mistake. Either the variable declaration works as expected, or it doesn’t work at all. That changes with the use of the var keyword:
var age = 12;
Armed with the knowledge from above, you can safely say that the “age” variable is an integer. What if you didn’t want an integer? What if you wanted it to be a long (Int64)?
var age = 12L;
That will cause “age” to now be a long.
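If you want to double-check what var inferred, a quick (illustrative) way is to print the runtime type:

var age = 12L;
Console.WriteLine(age.GetType());   // prints System.Int64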
var total = 12.98M;
“M” is the suffix used to mean decimal, so in this case “total” would be of type decimal. Below are the common suffixes (the sketch after the list shows each one in use):
L = long (System.Int64)
F = float (System.Single)
D = double (System.Double)
M = decimal (System.Decimal)
U = uint (System.UInt32)
UL = ulong (System.UInt64)
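Here is the sketch promised above, showing each suffix with var (the variable names are mine):

var l = 12L;     // long (System.Int64)
var f = 12.5F;   // float (System.Single)
var d = 12.5D;   // double (System.Double)
var m = 12.5M;   // decimal (System.Decimal)
var u = 12U;     // uint (System.UInt32)
var ul = 12UL;   // ulong (System.UInt64)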
There are string and character literals too. There isn’t anything tricky about string literals. Character literals are defined using single quotes:
char x = 'x';
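A short illustration of the difference (these variables are mine): double quotes produce a string, single quotes a char.

string s = "x";      // string literal, double quotes
char c = 'x';        // character literal, single quotes
// char bad = "x";   // does not compile: "x" is a string, not a char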