Verification and Validation:
Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. Boehm [BOE81] states this another way:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
The definition of V&V encompasses many of the activities that we have referred to as software quality assurance (SQA).
Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they say, “You can’t test in quality. If it’s not there before you begin testing, it won’t be there when you’re finished testing.” Quality is incorporated into software throughout the process of software engineering. Proper application of methods and tools, effective formal technical reviews, and solid management and measurement all lead to quality that is confirmed during testing.
STRATEGIC ISSUES:
What guidelines lead to a successful testing strategy?
Specify product requirements in a quantifiable manner long before testing commences.
Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability (Chapter 19). These should be specified in a measurable way so that testing results are unambiguous.
State testing objectives explicitly.
The specific objectives of testing should be stated in measurable terms. For example, test effectiveness, test coverage, mean time to failure, the cost to find and fix defects, remaining defect density or frequency of occurrence, and test work-hours per regression test should all be stated within the test plan.
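To make this concrete, the sketch below shows how two such objectives, defect density and defect removal efficiency, might be stated as numeric thresholds and checked automatically. The metric definitions are standard, but the function names, target values, and counts are hypothetical.

# A sketch of tracking measurable test objectives; all names and target
# values here are hypothetical illustrations, not a standard plan format.

def defect_density(defects_found: int, ksloc: float) -> float:
    """Defects per thousand lines of source code (KLOC)."""
    return defects_found / ksloc

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Fraction of all known defects caught before release."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

# Hypothetical objectives as they might be stated in a test plan.
objectives = {"max_defect_density": 0.5, "min_dre": 0.95}

density = defect_density(defects_found=12, ksloc=30.0)  # -> 0.4 per KLOC
dre = defect_removal_efficiency(found_before_release=95, found_after_release=5)

print(f"defect density: {density:.2f}/KLOC "
      f"({'meets' if density <= objectives['max_defect_density'] else 'misses'} objective)")
print(f"DRE: {dre:.2%} "
      f"({'meets' if dre >= objectives['min_dre'] else 'misses'} objective)")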
Understand the users of the software and develop a profile for each user category.
Use cases that describe the interaction scenario for each class of user can reduce overall testing effort by focusing testing on actual use of the product.
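One way to act on a user profile is to weight test effort by how often each user class actually exercises the product, in the spirit of an operational profile. Below is a minimal sketch; the user categories, usage fractions, and test budget are all invented for illustration.

# Allocate a fixed test budget across user categories in proportion to how
# often each category exercises the product. Categories, usage fractions,
# and the budget below are invented for illustration.

usage_profile = {
    "data-entry clerk": 0.60,   # fraction of real-world use
    "report consumer":  0.30,
    "administrator":    0.10,
}

TEST_BUDGET = 200  # total test cases we can afford to design and run

allocation = {user: round(TEST_BUDGET * share)
              for user, share in usage_profile.items()}

for user, n_tests in allocation.items():
    print(f"{user}: {n_tests} test cases")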
Build “robust” software that is designed to test itself.
Software should be designed in a manner that uses antibugging techniques. That is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.
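As an illustration of antibugging, the sketch below shows precondition and invariant checks that let a class diagnose certain errors itself at the point of failure, together with a regression test that such a design readily accommodates. The Account class is hypothetical, not from any particular system.

import unittest

class Account:
    """Hypothetical class written in an 'antibugging' style: it checks its
    own preconditions and invariants so that certain classes of errors
    diagnose themselves immediately instead of corrupting state silently."""

    def __init__(self, balance: float = 0.0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Built-in self-test: a negative balance signals a defect upstream.
        assert self._balance >= 0.0, f"invariant violated: balance={self._balance}"

    def withdraw(self, amount: float):
        # Precondition checks turn bad inputs into immediate diagnostics.
        if amount <= 0:
            raise ValueError(f"withdrawal must be positive, got {amount}")
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        self._check_invariant()  # state must still be consistent afterward

class AccountTest(unittest.TestCase):
    # The same checks make automated regression testing straightforward.
    def test_withdraw_rejects_overdraft(self):
        with self.assertRaises(ValueError):
            Account(10.0).withdraw(25.0)

if __name__ == "__main__":
    unittest.main()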
Use effective formal technical reviews as a filter prior to testing.
Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software. Formal technical reviews can also uncover inconsistencies, omissions, and outright errors in the testing approach itself. This saves time and also improves product quality.
Develop a continuous improvement approach for the testing process.
The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.
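As a hedged sketch of what statistical process control might look like here, the example below treats defects found per test cycle as a count process and flags cycles that fall outside c-chart control limits (mean plus or minus three times the square root of the mean). The cycle data are invented.

import math

# Hypothetical history: defects found in each of eight test cycles.
defects_per_cycle = [7, 9, 6, 8, 21, 7, 5, 8]

c_bar = sum(defects_per_cycle) / len(defects_per_cycle)  # process mean
ucl = c_bar + 3 * math.sqrt(c_bar)                       # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))             # lower control limit

print(f"mean={c_bar:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")
for cycle, count in enumerate(defects_per_cycle, start=1):
    if not lcl <= count <= ucl:
        print(f"cycle {cycle}: {count} defects is out of control")

An out-of-control point does not mean the testers failed; it is a signal to investigate the process itself, for example a risky change, a new component, or a gap in review coverage.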