In ideal circumstances, a software engineer designs a computer program, a system, or a product with “testability” in mind. This enables the individuals charged with testing to design effective test cases more easily. But what is testability? James Bach describes testability in the following manner.
Software testability is simply how easily [a computer program] can be tested. Since testing is so profoundly difficult, it pays to know what can be done to streamline it. Sometimes programmers are willing to do things that will help the testing process and a checklist of possible design points, features, etc., can be useful in negotiating with them.
There are certainly metrics that could be used to measure testability in most of its aspects. Sometimes, testability is used to mean how adequately a particular set of tests will cover the product. It’s also used by the military to mean how easily a tool can be checked and repaired in the field. Those two meanings are not the same as software testability. The checklist that follows provides a set of characteristics that lead to testable software.
The paragraphs that follow are copyright 1994 by James Bach and have been adapted from an Internet posting that first appeared in the newsgroup comp.software-eng. This material is used with permission.
Operability. “The better it works, the more efficiently it can be tested.”
• The system has few bugs (bugs add analysis and reporting overhead to the test process).
• No bugs block the execution of tests.
• The product evolves in functional stages (allows simultaneous development and testing).
Observability. “What you see is what you test.”
• Distinct output is generated for each input.
• System states and variables are visible or queriable during execution.
• Past system states and variables are visible or queriable (e.g., transaction logs).
• All factors affecting the output are visible.
• Incorrect output is easily identified.
• Internal errors are automatically detected through self-testing mechanisms.
• Internal errors are automatically reported.
• Source code is accessible.
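The observability points above can be sketched in code. The following is a minimal illustration using a hypothetical counter class (the names are invented for this example, not drawn from any particular system): it produces a distinct output for each input, keeps its current state queriable, records past states in a transaction-log-style history, and detects internal errors through a self-check.

```python
# Hypothetical class illustrating observability: queriable state,
# a log of past states, and automatic internal error detection.

class ObservableCounter:
    """A counter that exposes its state and history for testing."""

    def __init__(self):
        self.value = 0      # current state, directly queriable
        self.history = [0]  # past states, like a transaction log

    def add(self, n):
        self.value += n
        self.history.append(self.value)
        self._self_check()  # internal errors detected automatically
        return self.value   # distinct output for each input

    def _self_check(self):
        # Automatically detect and report an inconsistent state.
        if self.history[-1] != self.value:
            raise RuntimeError(f"state mismatch: {self.value!r}")

counter = ObservableCounter()
counter.add(3)
counter.add(4)
```

Because both the current value and the full history are visible, a tester can identify incorrect output immediately rather than inferring it from later behavior.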
Controllability. “The better we can control the software, the more the testing can be automated and optimized.”
• All possible outputs can be generated through some combination of input.
• All code is executable through some combination of input.
• Software and hardware states and variables can be controlled directly by the test engineer.
• Input and output formats are consistent and structured.
• Tests can be conveniently specified, automated, and reproduced.
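One common way to achieve this kind of control is dependency injection. The sketch below (with hypothetical names, assuming a time-dependent feature) lets the test engineer control a "clock" state directly by passing a fake provider, so every code path is reachable, and the test is reproducible and easy to automate.

```python
# Controllability via dependency injection: the test, not the real
# environment, decides what "time" the code under test observes.

def greeting(hour_provider):
    """Return a greeting based on the current hour (0-23)."""
    hour = hour_provider()
    if hour < 12:
        return "good morning"
    return "good afternoon"

# In production a real clock would be injected; in a test we can
# drive the state to any value we need, deterministically.
morning = greeting(lambda: 9)
afternoon = greeting(lambda: 15)
```

Had the function read the system clock internally, the afternoon branch could only be tested by running the suite at the right time of day; injecting the state makes both branches testable on demand.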
Decomposability. “By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting.”
• The software system is built from independent modules.
• Software modules can be tested independently.
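As a small sketch of decomposability (the function and values here are invented for illustration): when a piece of business logic is an independent module, say a pure pricing rule with no hidden dependencies, it can be tested in complete isolation, without a database, user interface, or the rest of the system.

```python
# Hypothetical independent module: a pure pricing rule that can be
# tested on its own, so a failure here implicates only this code.

def apply_discount(price, percent):
    """Reduce price by the given percentage (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

discounted = apply_discount(200.0, 25)
```

Isolating a failure to one such module is exactly the quicker problem isolation and "smarter retesting" the checklist describes.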
Simplicity. “The less there is to test, the more quickly we can test it.”
• Functional simplicity (e.g., the feature set is the minimum necessary to meet requirements).
• Structural simplicity (e.g., architecture is modularized to limit the propagation of faults).
• Code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
Stability. “The fewer the changes, the fewer the disruptions to testing.”
• Changes to the software are infrequent.
• Changes to the software are controlled.
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
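The point that changes should not invalidate existing tests can be illustrated briefly (hypothetical names, a sketch rather than a prescription): tests that exercise a stable public interface survive internal rewrites, whereas tests coupled to implementation details do not.

```python
# A stable public contract: tests target total()'s behavior, so the
# internal implementation can change without disrupting them.

def total(items):
    """Sum a sequence of numbers (stable public interface)."""
    # The body could be rewritten as an explicit loop, accumulated
    # in a different order, etc.; the contract stays the same.
    return sum(items)

result = total([1, 2, 3, 4])
```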
Understandability. “The more information we have, the smarter we will test.”
• The design is well understood.
• Dependencies between internal, external, and shared components are well understood.
• Changes to the design are communicated.
• Technical documentation is instantly accessible.
• Technical documentation is well organized.
• Technical documentation is specific and detailed.