C# is a strongly typed language. That means that every object you create or use in a C# program must have a specific type (e.g., you must declare the object to be an integer or a string or a Dog or a Button). The type tells the compiler how big the object is and what it can do.

Types come in two flavors: those that are built into the language (intrinsic types) and those you create (classes, structs, and interfaces).

Each type has a name (e.g., int) and a size (e.g., 4 bytes). The size tells you how many bytes each object of this type occupies in memory. (Programmers generally don't like to waste memory if they can avoid it, but with the cost of memory these days, you can afford to be mildly profligate if doing so simplifies your program.)
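For instance, you can ask the compiler for the size of each built-in numeric type with the sizeof operator. (A quick sketch; for the built-in types, sizeof is a compile-time constant and works in ordinary, safe code.)

```csharp
class Sizes
{
    static void Main( )
    {
        // sizeof on the built-in types is a compile-time constant
        System.Console.WriteLine("byte:  {0} byte",  sizeof(byte));   // 1
        System.Console.WriteLine("short: {0} bytes", sizeof(short));  // 2
        System.Console.WriteLine("int:   {0} bytes", sizeof(int));    // 4
        System.Console.WriteLine("long:  {0} bytes", sizeof(long));   // 8
    }
}
```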

Intrinsic types can't do much. You can use them to add two numbers together, and they can display their values as strings. User-defined types can do a lot more; their abilities are determined by the methods you create.

Objects of an intrinsic type are called variables.

Numeric Types:

Most of the intrinsic types are used for working with numeric values (byte, sbyte, short, ushort, int, uint, float, double, decimal, long, and ulong).

The numeric types can be broken into two sets: unsigned and signed. An unsigned type (byte, ushort, uint, ulong) can hold only positive values. A signed type (sbyte, short, int, long) can hold positive or negative values, but its highest value is only about half as large. That is, a ushort can hold any value from 0 through 65,535, but a short can hold only -32,768 through 32,767. Notice that 32,767 is roughly half of 65,535; the signed type spends the other half of its range on negative values. The reason a ushort tops out at 65,535 is that a ushort occupies 16 bits, and 16 bits can represent 2^16 (that is, 65,536) distinct values: 0 through 65,535.
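You don't have to memorize these limits; every numeric type reports them through its MinValue and MaxValue fields, as this small sketch shows:

```csharp
class Ranges
{
    static void Main( )
    {
        // Each numeric type exposes its limits as constants
        System.Console.WriteLine("ushort: {0} through {1}",
            ushort.MinValue, ushort.MaxValue);  // 0 through 65535
        System.Console.WriteLine("short:  {0} through {1}",
            short.MinValue, short.MaxValue);    // -32768 through 32767
    }
}
```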

Another way to divide the types is into those used for integer values (whole numbers) and those used for floating-point values (fractional or rational numbers). The byte, sbyte, ushort, uint, ulong, short, int, and long types all hold whole number values.

The byte and sbyte types are not used very often.

The double and float types hold fractional values. For most uses, float will suffice, unless you need to hold a really big (or more precise) fractional number, in which case you might use a double. The decimal type was added to the language to support accounting and financial applications, where exact decimal representation matters.

Typically you decide which size integer to use (short, int, or long) based on the magnitude of the value you want to store. For example, a ushort can only hold values from 0 through 65,535, while a uint can hold values from 0 through 4,294,967,295.

That said, memory is fairly cheap, and programmer time is increasingly expensive; most of the time you'll simply declare your variables to be of type int, unless there is a good reason to do otherwise.

Most programmers choose signed types unless they have a good reason to use an unsigned value. This is, in part, just a matter of tradition.

Suppose you need to keep track of inventory. You expect to house up to 40,000 or even 50,000 copies of each book. A signed short can hold values only up to 32,767. You might be tempted to use an unsigned short (which can hold values up to 65,535), but it is easier and preferable to just use a signed int (with a maximum value of 2,147,483,647). That way, if you have a runaway best seller, your program won't break (if you anticipate selling more than 2 billion copies of your book, perhaps you'll want to use a long!).
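A sketch of that reasoning in code (the variable names and quantities are invented for illustration):

```csharp
class Inventory
{
    static void Main( )
    {
        // short tops out at 32,767, so int is the safer choice here
        int copiesInStock = 50000;
        copiesInStock += 25000;  // a runaway best seller
        System.Console.WriteLine("Copies in stock: {0}", copiesInStock);
        // Copies in stock: 75000
    }
}
```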

Remember, the Y2K problem was caused by programmers who couldn't imagine needing a year later than 1999.

It is better to use an unsigned variable when the fact that the value must be positive is an inherent characteristic of the data. For example, if you had a variable to hold a person's age, you might use a uint, because an age cannot be negative.

The float, double, and decimal types offer varying degrees of size and precision. For most small fractional numbers, float is fine. Note that the compiler assumes that any number with a decimal point is a double unless you tell it otherwise (by appending a suffix such as f for float or m for decimal).
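A short sketch of those literal suffixes, and of why accounting code prefers decimal:

```csharp
class Fractions
{
    static void Main( )
    {
        float   f = 3.5f;  // f suffix: a float literal
        double  d = 3.5;   // no suffix: a double literal
        decimal m = 3.5m;  // m suffix: a decimal literal

        // double stores binary fractions, so some decimal values are inexact;
        // decimal stores decimal digits exactly, which matters for money
        System.Console.WriteLine(0.1 + 0.2 == 0.3);     // False
        System.Console.WriteLine(0.1m + 0.2m == 0.3m);  // True
    }
}
```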

Non-Numeric Types: char and bool:

In addition to the numeric types, the C# language offers two other types: char and bool.

The char type is used when you need to hold a single character. A char literal is enclosed in single quotation marks and can represent a simple character ('A'), a Unicode escape ('\u0041'), or an escape sequence ('\n').
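All three forms name the same kind of value, as this small sketch shows:

```csharp
class Chars
{
    static void Main( )
    {
        char simple  = 'A';       // a simple character
        char unicode = '\u0041';  // the same character as a Unicode escape
        char newline = '\n';      // an escape sequence

        System.Console.WriteLine(simple == unicode);  // True: both are 'A'
        System.Console.WriteLine((int)newline);       // 10: a char is a number underneath
    }
}
```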

The one remaining type of importance is bool, which holds a Boolean value, that is, a value that is either true or false. Boolean values are used frequently in C# programming. Virtually every comparison (e.g., is myDog bigger than yourDog?) results in a Boolean value.
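For example, comparing two values produces a bool you can store and test (the dog weights here are invented for illustration):

```csharp
class Bools
{
    static void Main( )
    {
        int myDogWeight = 30;
        int yourDogWeight = 20;

        // every comparison produces a Boolean value
        bool myDogIsBigger = myDogWeight > yourDogWeight;
        System.Console.WriteLine(myDogIsBigger);  // True
    }
}
```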

The bool type was named after George Boole (1815-1864), an English mathematician who published An Investigation into the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities and thus created the science of Boolean algebra.

Types and Compiler Errors:

The compiler will help you by complaining if you try to use a type improperly. The compiler complains in one of two ways: it issues a warning or it issues an error.

Programmers talk about design-time, compile-time, and runtime. Design-time is when you are designing the program, compile-time is when you compile the program, and runtime is (surprise!) when you run the program.

The earlier you unearth a bug, the better. It is better (and cheaper) to discover a bug in your logic at design-time than later. Likewise, it is better (and cheaper) to find bugs in your program at compile-time than at runtime. Not only is it better; it is more reliable. A compile-time bug will fail every time you run the compiler, but a runtime bug can hide. Runtime bugs slip through a crack in your logic and lurk there (sometimes for months), biding their time, waiting to come out when it will be most expensive (or most embarrassing) to you.

It will be a constant theme of this book that you want the compiler to find bugs. The compiler is your friend. The more bugs the compiler finds, the fewer bugs your users will find. A strongly typed language like C# helps the compiler find bugs in your code. Here's how: suppose you tell the compiler that Milo is of type Dog. Sometime later you try to use Milo to display text. Oops, Dogs don't display text. Your compiler will stop with an error:

'Dog' does not contain a definition for 'showText'
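A sketch of that situation (Dog, Bark, and showText are invented names for illustration):

```csharp
class Dog
{
    public void Bark( )
    {
        System.Console.WriteLine("Woof");
    }
}

class Program
{
    static void Main( )
    {
        Dog milo = new Dog( );
        milo.Bark( );            // fine: Dog defines Bark
        // milo.showText("hi");  // uncommenting this line reproduces the error above
    }
}
```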

Visual Studio .NET actually finds the error even before the compiler does. When you type the name of an object followed by a dot, IntelliSense pops up a list of its valid methods to help you.

When you try to call a method that does not exist, it won't be in the list. That is a pretty good clue that you are not using the object properly.

A variable is an object that can hold a value:

int myVariable = 15;

You initialize a variable by writing its type, its identifier, and then assigning a value to that variable. The previous section explained types. In this example, the variable's type is int (which is, as you've seen, an integer type).

An identifier is just an arbitrary name you assign to a variable, method, class, or other element. In this case, the variable's identifier is myVariable.

You can define variables without initializing them:

int myVariable;

You can then assign a value to myVariable later in your program:

int myVariable;

// some other code here

myVariable = 15; // assign 15 to myVariable

You can also change the value of a variable later in the program. That is why they're called variables; their values vary.

int myVariable;

// some other code here

myVariable = 15; // assign 15 to myVariable

// some other code here

myVariable = 12; // now it is 12

Technically, a variable is a named storage location (i.e., stored in memory) with a type. After the final line of code in the previous example, the value 12 is stored in the named location myVariable.

Using variables:

class Values
{
    static void Main( )
    {
        int myInt = 7;
        System.Console.WriteLine("Initialized, myInt: {0}", myInt);
        myInt = 5;
        System.Console.WriteLine("After assignment, myInt: {0}", myInt);
    }
}

The output looks like this:

Initialized, myInt: 7

After assignment, myInt: 5

This program initializes the variable myInt to the value 7, displays that value, reassigns the variable with the value 5, and displays it again.
Definite Assignment:

C# requires definite assignment; that is, variables must be initialized (or assigned to) before they are used. To test this rule, change the line that initializes myInt to a declaration with no initial value:

int myInt;

Uninitialized variable:

class Values
{
    static void Main( )
    {
        int myInt;
        System.Console.WriteLine("Uninitialized, myInt: {0}", myInt);
        myInt = 5;
        System.Console.WriteLine("Assigned, myInt: {0}", myInt);
    }
}
When you try to compile this, the C# compiler will display the following error message:

5.2.cs(6,55): error CS0165: Use of unassigned local variable 'myInt'

It is not legal to use an uninitialized variable in C#; doing so violates the rule of definite assignment. In this case, "using" the variable myInt means passing it to WriteLine( ).

So does this mean you must initialize every variable? No, but if you don't initialize your variable, then you must assign a value to it before you attempt to use it.

Definite assignment:

class Values
{
    static void Main( )
    {
        int myInt;
        myInt = 7;  // assign to myInt before using it
        System.Console.WriteLine("Assigned, myInt: {0}", myInt);
    }
}
