Testing presents an interesting anomaly for the software engineer. During earlier software engineering activities, the engineer attempts to build software from an abstract concept to a tangible product. Now comes testing.
The engineer creates a series of test cases that are intended to “demolish” the software that has been built. In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive.
Software engineers are by their nature constructive people. Testing requires that the developer discard preconceived notions of the “correctness” of software just developed and overcome a conflict of interest that occurs when errors are uncovered.
Beizer describes this situation effectively when he states:
There is a myth that if we were really good at programming, there would be no bugs to catch. If only we could really concentrate, if only everyone used structured programming, top-down design, decision tables, if programs were written in SQUISH, if we had the right silver bullets, then there would be no bugs. So goes the myth. There are bugs, the myth says, because we are bad at what we do; and if we are bad at it, we should feel guilty about it. Therefore, testing and test case design is an admission of failure, which instills a goodly dose of guilt.
Testing Objectives
What are the primary objectives when we test software?
In an excellent book on software testing, Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
These objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort.
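To make the second objective concrete, here is a minimal sketch (the routine and its tests are hypothetical, not drawn from any particular system) using Python's unittest module. The “happy path” test has little chance of revealing anything new; the boundary test probes exactly the kind of input where an as-yet-undiscovered error tends to hide:

import unittest

def classify_triangle(a, b, c):
    # Hypothetical routine under test: classify a triangle by its side lengths.
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a + b <= c or b + c <= a or a + c <= b:
        raise ValueError("sides violate the triangle inequality")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleTests(unittest.TestCase):
    def test_typical_input(self):
        # Low-yield test: almost any plausible implementation passes it.
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_degenerate_triangle_rejected(self):
        # High-yield test: 1 + 2 == 3 lies exactly on the boundary, where an
        # implementation that used '<' instead of '<=' would slip through.
        with self.assertRaises(ValueError):
            classify_triangle(1, 2, 3)

if __name__ == "__main__":
    unittest.main()

A suite built from such boundary-probing cases satisfies Myers' definition of a good test case far more often than one built from comfortable, typical values.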
If testing is conducted successfully (according to the objectives stated previously), it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification and that behavioral and performance requirements appear to have been met.
In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But testing cannot show the absence of errors and defects; it can show only that software errors and defects are present. It is important to keep this (rather gloomy) statement in mind as testing is being conducted.
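As a rough illustration of how such data can indicate reliability (a sketch only; the failure times below are invented for the example), one simple indicator is the mean time between failures observed during test execution, together with the trend in the inter-failure intervals:

# Hypothetical failure times (in hours of test execution) logged while testing.
failure_times = [2.0, 5.5, 9.0, 20.0, 46.0]

# Inter-failure intervals; intervals that grow over time hint at improving reliability.
intervals = [t2 - t1 for t1, t2 in zip([0.0] + failure_times, failure_times)]
mtbf = sum(intervals) / len(intervals)

print("inter-failure intervals:", intervals)             # [2.0, 3.5, 3.5, 11.0, 26.0]
print("mean time between failures: %.1f hours" % mtbf)   # 9.2 hours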
Testing Principles
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. Davis suggests a set of testing principles that have been adapted for use in this book:
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.
• The Pareto principle applies to software testing.
Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to test them thoroughly (a simple defect tally, sketched after this list, is one way to surface them).
• Testing should begin “in the small” and progress toward testing “in the large.”
The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
• Exhaustive testing is not possible.
The number of path permutations for even a moderately sized program is exceptionally large (a rough count is sketched after this list). For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
• To be most effective, testing should be conducted by an independent third party.
By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing).
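To illustrate how the suspect components behind the Pareto principle might be isolated in practice, the sketch below (the defect log and component names are invented for the example) simply tallies defects per component; the small set of components that dominates the tally is where additional, more thorough testing is best spent:

from collections import Counter

# Hypothetical defect log: each entry names the component a defect was traced to.
defect_log = ["parser", "parser", "ui", "parser", "io", "parser",
              "scheduler", "parser", "io", "parser", "parser", "ui"]

counts = Counter(defect_log)
total = sum(counts.values())

# Components ranked by defect count; the top one or two typically dominate.
for component, n in counts.most_common():
    print(f"{component:10s} {n:2d} defects ({100 * n / total:.0f}%)")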
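To see why exhaustive testing is out of reach, consider a back-of-the-envelope count (the figures are illustrative, not drawn from any real program): a routine with a modest number of independent two-way decisions, placed inside a loop, already has more distinct execution paths than could ever be run:

# Back-of-the-envelope path counting for a hypothetical routine.
decisions = 20                       # independent two-way branches in the routine
paths_per_pass = 2 ** decisions      # branch combinations in a single pass
print(f"{paths_per_pass:,} paths from the branches alone")   # 1,048,576

# Put those branches inside a loop that may iterate up to 10 times: a run of
# exactly k iterations can follow any branch pattern on each pass.
max_iterations = 10
total_paths = sum(paths_per_pass ** k for k in range(1, max_iterations + 1))
print(f"roughly {total_paths:.2e} distinct paths")           # on the order of 10^60

Even at a million test executions per second, running every one of those illustrative paths would take vastly longer than the age of the universe, which is why adequate coverage of program logic and conditions, rather than of every path, is the practical goal.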