The design of tests for software and other engineered products can be as challenging as the initial design of the product itself. Yet, for reasons that we have already discussed, software engineers often treat testing as an afterthought, developing test cases that may “feel right” but have little assurance of being complete. Recalling the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a minimum amount of time and effort.
A rich variety of test case design methods has evolved for software. These methods provide the developer with a systematic approach to testing. More important, these methods provide a mechanism that can help ensure the completeness of tests and offer the highest likelihood of uncovering errors in the software.
Any engineered product (and most other things) can be tested in one of two ways:
(1) Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function;
(2) Knowing the internal workings of a product, tests can be conducted to ensure that “all gears mesh,” that is, that internal operations are performed according to specifications and all internal components have been adequately exercised.
The first test approach is called black-box testing and the second, white-box testing.
When computer software is considered, black-box testing alludes to tests that are conducted at the software interface. Although they are designed to uncover errors, black-box tests are used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., a database) is maintained. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
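As a simple illustration, the sketch below shows what a black-box test might look like in C for a hypothetical function, parse_age(), which is invented here for the example rather than taken from the text. The test cases exercise the function purely through its interface, checking that valid input is accepted and invalid input is rejected, with no reference to its internal logic.

/* A minimal black-box test sketch in C, assuming a hypothetical
 * function parse_age() that converts a string to an age in years
 * and returns -1 for invalid input. The tests drive the function
 * only through its interface: inputs go in, outputs are checked,
 * and no knowledge of the internal structure is used. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical unit under test: not part of the original text. */
int parse_age(const char *s)
{
    char *end;
    long v = strtol(s, &end, 10);
    if (*s == '\0' || *end != '\0' || v < 0 || v > 150)
        return -1;                     /* reject non-numeric or out-of-range input */
    return (int)v;
}

int main(void)
{
    assert(parse_age("42") == 42);     /* nominal value is accepted   */
    assert(parse_age("0") == 0);       /* boundary: minimum valid age */
    assert(parse_age("150") == 150);   /* boundary: maximum valid age */
    assert(parse_age("-5") == -1);     /* invalid: negative number    */
    assert(parse_age("abc") == -1);    /* invalid: non-numeric text   */
    assert(parse_age("") == -1);       /* invalid: empty string       */
    puts("all black-box tests passed");
    return 0;
}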
White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. The “status of the program” may be examined at various points to determine if the expected or asserted status corresponds to the actual status.
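The hypothetical C sketch below (the classify() function and its tests are illustrative, not from the text) shows the white-box idea: the test cases are derived from the internal structure of the code so that each combination of the two decisions, and hence each logical path, is exercised at least once.

/* A minimal white-box test sketch in C. The function below is a
 * made-up example with two independent decisions; the test cases
 * are chosen from its internal structure so that every logical
 * path (true/true, true/false, false/true, false/false) is
 * exercised at least once. */
#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test with two independent decisions. */
int classify(int value, int threshold)
{
    int code = 0;
    if (value > threshold)      /* decision 1 */
        code += 2;
    if (value % 2 == 0)         /* decision 2 */
        code += 1;
    return code;
}

int main(void)
{
    /* Each case targets one of the four paths through the two decisions. */
    assert(classify(10, 5) == 3);   /* path: above threshold, even  */
    assert(classify(9, 5)  == 2);   /* path: above threshold, odd   */
    assert(classify(4, 5)  == 1);   /* path: at/below threshold, even */
    assert(classify(3, 5)  == 0);   /* path: at/below threshold, odd  */
    puts("all four logical paths exercised");
    return 0;
}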
Exhaustive white-box testing, however, presents certain logistical problems. For even small programs, the number of possible logical paths can be very large. For example, consider a 100-line program written in the language C. After some basic data declarations, the program contains two nested loops that execute from 1 to 20 times each, depending on conditions specified at input. Inside the interior loop, four if-then-else constructs are required. There are approximately 10^14 possible paths that may be executed in this program!
To put this number in perspective, we assume that a magic test processor (“magic” because no such processor exists) has been developed for exhaustive testing. The processor can develop a test case, execute it, and evaluate the results in one millisecond. Working 24 hours a day, 365 days a year, the processor would work for 3170 years to test the program. This would, undeniably, cause havoc in most development schedules. Exhaustive testing is impossible for large software systems.
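The arithmetic behind these figures is easy to verify. The short C sketch below simply multiplies the approximate path count by the one-millisecond cost per test case and converts the result to years; it is a check of the estimate in the text, not part of the original example.

/* A back-of-the-envelope check of the figures above: roughly 10^14
 * test cases at one millisecond each, run around the clock. The
 * constants are taken from the text; the arithmetic confirms the
 * "about 3170 years" estimate. */
#include <stdio.h>

int main(void)
{
    double paths          = 1e14;     /* approximate number of paths   */
    double seconds_per_tc = 1e-3;     /* one millisecond per test case */
    double seconds_total  = paths * seconds_per_tc;
    double seconds_per_yr = 3600.0 * 24.0 * 365.0;

    printf("total time: %.0f years\n", seconds_total / seconds_per_yr);
    /* prints roughly 3171 years, in line with the estimate in the text */
    return 0;
}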
White-box testing should not, however, be dismissed as impractical. A limited number of important logical paths can be selected and exercised. Important data structures can be probed for validity. The attributes of both black-box and white-box testing can be combined to provide an approach that validates the software interface and selectively ensures that the internal workings of the software are correct.