Unit Test Plan in Visual Studio (.NET)

Once you have completed the code for your application, you must test it to ensure that it functions according to specifications. Although syntax errors are detected by the compiler, run-time errors and logical errors might not be revealed without thorough testing of your program.

Most code contains errors when first written. Errors are inevitable, and even very experienced programmers still make mistakes from time to time. Thus, you should expect your code to have bugs when first written. However, a final application that is full of bugs or does not function according to spec is useless.

Thus, you must strive to deliver a bug-free final product. Testing is how you turn a bug-laden first draft into that bug-free final version.

Testing and debugging are separate but related activities. Debugging refers to the actual finding and correcting of code errors, whereas testing is the process by which errors are found. Testing is usually broken down by method. The individual methods are tested with a variety of inputs and execution parameters.

This approach is called unit testing because it tests the units of your application. Most applications are too complex to allow every possible variation of input and run conditions to be tested.
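
For example, a unit test for a single method might look like the following minimal sketch in C#, using the MSTest framework that ships with Visual Studio. The OrderCalculator class and its Discount method are hypothetical examples invented for illustration; the test simply runs the unit with one input and compares the observed result with the expected result.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical unit under test: quantities from 1 to 100 are valid,
// and orders of 10 or more earn a 5 percent discount.
public class OrderCalculator
{
    public decimal Discount(int quantity)
    {
        if (quantity < 1 || quantity > 100)
            throw new ArgumentOutOfRangeException(nameof(quantity));

        return quantity >= 10 ? 0.05m : 0.00m;
    }
}

[TestClass]
public class OrderCalculatorTests
{
    [TestMethod]
    public void Discount_SmallOrder_ReturnsZero()
    {
        var calculator = new OrderCalculator();

        decimal result = calculator.Discount(5);   // exercise the unit with one input

        Assert.AreEqual(0.00m, result);            // compare observed result with expected
    }
}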

Assume that the method that calls DisplayName limits the Name parameter to eight characters and further limits those characters to the letters of the standard U.S. English alphabet. This means that DisplayName could receive 26^8, or approximately 209 billion, possible strings as input. It is easy to see how testing every possible input value for an application would be an impossible task.

It is equally obvious that such rigorous testing is unnecessary. In the preceding example, it is unlikely that any one string will produce errors that will not be produced by other strings. Thus, you could logically narrow down your input strings to a few representative examples that test the method and provide assurance that it functions properly. These examples are called test cases.
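
As a sketch of this idea, the following C# test (again using MSTest) checks a few representative strings rather than all 26^8 possibilities. The DisplayName method here is a hypothetical stand-in; its real signature is not shown in this series, so the one below is an assumption.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DisplayNameTests
{
    // Hypothetical stand-in for the DisplayName method discussed above;
    // its actual signature is assumed, not taken from the series.
    private static string DisplayName(string name) => "Hello, " + name;

    // A handful of representative strings stands in for the 26^8 possible inputs.
    [DataTestMethod]
    [DataRow("A")]        // shortest permitted name
    [DataRow("ZACHARY")]  // typical name
    [DataRow("ABCDEFGH")] // longest permitted name (eight characters)
    public void DisplayName_RepresentativeInput_ReturnsGreeting(string name)
    {
        string result = DisplayName(name);

        Assert.AreEqual("Hello, " + name, result);
    }
}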

How you design your set of test cases is fundamental to your testing success. Designing too few test cases can lead to incomplete testing, which will inevitably cause errors to slip into your final product. Designing too many test cases wastes time, wastes money, and yields redundant results. You must decide on the appropriate test plan for your application based on completeness and coverage of likely scenarios.

Testing Data

Testing for expected functionality provides a good starting point for your test plan, but to make your application robust, you must also test how it handles different kinds of data. Your application must behave normally and give expected results when data within normal parameters is provided, and it should gracefully and appropriately handle data that falls outside of the specified bounds. Thus, to be complete, you must test your application with a variety of data inputs, both ordinary and extraordinary. Some of the specific types of data conditions you might use to create test cases are described in the following sections.

Normal Data

It is important to test data that is within the normal bounds of program execution. Although it is usually impossible to test the entire range of normal data, your test cases should contain several examples of data that is normal for the program to process. This data should span the normal range of operation and should include the normal minimum values and the normal maximum values.
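
Continuing with the hypothetical OrderCalculator from the earlier sketch (valid quantities 1 through 100, with a discount from 10 upward), normal-data test cases might span the working range like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderCalculatorNormalDataTests
{
    // Normal data spans the valid range, including the normal minimum and maximum.
    // (A decimal value cannot appear in an attribute, so the expected rate is passed as double.)
    [DataTestMethod]
    [DataRow(1, 0.00)]   // normal minimum
    [DataRow(50, 0.05)]  // typical mid-range value
    [DataRow(100, 0.05)] // normal maximum
    public void Discount_NormalQuantity_ReturnsExpectedRate(int quantity, double expected)
    {
        var calculator = new OrderCalculator(); // hypothetical class from the earlier sketch

        Assert.AreEqual((decimal)expected, calculator.Discount(quantity));
    }
}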

Boundary Conditions

Special consideration should be given to testing data on the boundaries of normal conditions. This includes the normal minimum and maximum values for program data, and should also include values that are "off by one." For example, to test the maximum boundary, you would include the maximum value, the maximum value minus one, and the maximum value plus one. This approach allows you to verify that simple mistakes (such as using a > operator where a >= was required) have not been made.
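
A boundary-focused sketch for the same hypothetical OrderCalculator might probe the discount threshold and the normal maximum with off-by-one values:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderCalculatorBoundaryTests
{
    // Values on either side of the discount threshold (10) catch a > versus >= mistake.
    [DataTestMethod]
    [DataRow(9, 0.00)]  // threshold minus one
    [DataRow(10, 0.05)] // threshold itself
    [DataRow(11, 0.05)] // threshold plus one
    public void Discount_AroundThreshold_ReturnsExpectedRate(int quantity, double expected)
    {
        var calculator = new OrderCalculator(); // hypothetical class from the earlier sketch

        Assert.AreEqual((decimal)expected, calculator.Discount(quantity));
    }

    [TestMethod]
    public void Discount_MaximumPlusOne_Throws()
    {
        var calculator = new OrderCalculator();

        // 100 is the normal maximum, so 101 must be rejected.
        Assert.ThrowsException<ArgumentOutOfRangeException>(() => calculator.Discount(101));
    }
}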

Bad Data

You should use a variety of bad data in your test cases to ensure that your program does not crash in response to bad data or, worse, appear to function normally while returning inappropriate results. You should test values that are well outside of the normal scope of operation, including zero for nonzero values, negative numbers for positive data, and so on.
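
For the same hypothetical OrderCalculator, bad-data cases can assert that clearly invalid quantities are rejected rather than silently producing a result:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderCalculatorBadDataTests
{
    // Values well outside the normal scope of operation: zero, negative, and absurdly large.
    [DataTestMethod]
    [DataRow(0)]
    [DataRow(-5)]
    [DataRow(int.MaxValue)]
    public void Discount_BadQuantity_ThrowsInsteadOfReturningAResult(int quantity)
    {
        var calculator = new OrderCalculator(); // hypothetical class from the earlier sketch

        Assert.ThrowsException<ArgumentOutOfRangeException>(() => calculator.Discount(quantity));
    }
}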

Data Combinations

Your test cases should include a variety of combinations of the types of data previously mentioned, because some errors do not become apparent until the right combination of data is used. For example, a method might function correctly when each of its parameters is tested at its normal maximum individually, yet fail when all of its parameters are supplied at their normal maximums at the same time. Similarly, a combination of known bad data can yield a seemingly normal result. Thus, you should design your test cases to test a variety of possible data combinations.
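
As a sketch of combination testing, assume a hypothetical ShippingCalculator whose Cost method takes a weight (1 to 50 kg) and a distance (1 to 1,000 km); the class and its limits are invented for illustration. The cases below exercise the parameters' normal maximums together and pair a bad value with a good one:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical unit under test with two parameters.
public class ShippingCalculator
{
    public int Cost(int weightKg, int distanceKm)
    {
        if (weightKg < 1 || weightKg > 50)
            throw new ArgumentOutOfRangeException(nameof(weightKg));
        if (distanceKm < 1 || distanceKm > 1000)
            throw new ArgumentOutOfRangeException(nameof(distanceKm));

        checked { return weightKg * distanceKm; } // checked guards against silent overflow
    }
}

[TestClass]
public class ShippingCalculatorCombinationTests
{
    // Each parameter is exercised at its maximum alone and then together,
    // because a method can pass the individual cases yet fail on the combination.
    [DataTestMethod]
    [DataRow(50, 1, 50)]       // weight at maximum, distance at minimum
    [DataRow(1, 1000, 1000)]   // weight at minimum, distance at maximum
    [DataRow(50, 1000, 50000)] // both parameters at their maximums together
    public void Cost_CombinedBoundaries_ReturnsExpectedValue(int weight, int distance, int expected)
    {
        var calculator = new ShippingCalculator();

        Assert.AreEqual(expected, calculator.Cost(weight, distance));
    }

    [TestMethod]
    public void Cost_BadWeightCombinedWithGoodDistance_Throws()
    {
        var calculator = new ShippingCalculator();

        // Bad data combined with good data must still be rejected,
        // not produce a normal-looking result.
        Assert.ThrowsException<ArgumentOutOfRangeException>(() => calculator.Cost(0, 500));
    }
}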

To create a unit test plan

1. Begin by creating test cases that cause every line in the unit being tested to be executed.

2. Add test cases until every possible path of data flow through the unit has been tested.

3. Add further cases to test variance in the data the unit will process. In addition to nominal data, test boundary conditions, bad data, and different combinations of good data, bad data, and boundary cases.

4. Determine the expected result of each test case in advance of the actual test.

5. Execute the tests and compare the observed results with the expected results.
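
Applied to the hypothetical OrderCalculator from the earlier sketches, the steps above yield a small plan: one case per execution path (below the discount threshold, at or above it, and the out-of-range guard), with the expected result recorded in each case before the tests are run.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderCalculatorTestPlan
{
    // One case per execution path, with the expected result recorded in the
    // DataRow before the tests are ever run.
    [DataTestMethod]
    [DataRow(5, 0.00)]  // path 1: valid quantity below the discount threshold
    [DataRow(25, 0.05)] // path 2: valid quantity at or above the threshold
    public void Discount_EachValidPath_MatchesExpectedResult(int quantity, double expected)
    {
        var calculator = new OrderCalculator(); // hypothetical class from the earlier sketch

        Assert.AreEqual((decimal)expected, calculator.Discount(quantity));
    }

    [TestMethod]
    public void Discount_RejectionPath_Throws()
    {
        var calculator = new OrderCalculator();

        // path 3: an out-of-range quantity takes the guard-clause path
        Assert.ThrowsException<ArgumentOutOfRangeException>(() => calculator.Discount(0));
    }
}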


