Criteria for completion of testing

Using statistical modeling and software reliability theory, models of software failures (uncovered during testing) as a function of execution time can be developed.

One version of the failure model, called the logarithmic Poisson execution-time model, takes the form:

f(t) = (1/p) ln(l0 pt + 1) (1)

Where

f(t) = cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time, t

l0 = the initial software failure intensity (failures per unit time) at the beginning of testing

p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.

The instantaneous failure intensity, l(t), can be derived by taking the derivative of f(t):

l(t) = l0 / (l0 pt + 1) (2)

Using the relationship noted in Equation 2, testers can predict the drop-off of errors as testing progresses. If the actual data gathered during testing and the logarithmic Poisson execution-time model are reasonably close to one another over a number of data points, the model can be used to predict the total testing time required to achieve an acceptably low failure intensity.
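As a minimal sketch of how Equations 1 and 2 can be applied, the functions below compute the expected cumulative failures, the instantaneous failure intensity, and the execution time needed to drive the intensity down to a target level (obtained by solving Equation 2 for t). The parameter values shown are purely illustrative assumptions, not data from a real project:

```python
import math

def expected_failures(t, l0, p):
    """Equation 1: f(t) = (1/p) * ln(l0*p*t + 1)."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(t, l0, p):
    """Equation 2: l(t) = l0 / (l0*p*t + 1)."""
    return l0 / (l0 * p * t + 1.0)

def time_to_reach_intensity(l_target, l0, p):
    """Solve l0 / (l0*p*t + 1) = l_target for t."""
    return (l0 / l_target - 1.0) / (l0 * p)

# Illustrative parameters (assumed for the example):
l0 = 10.0  # initial failure intensity: 10 failures per hour of execution
p = 0.05   # exponential reduction in intensity as errors are repaired

# Execution time needed before intensity falls to 0.5 failures/hour.
t_needed = time_to_reach_intensity(0.5, l0, p)
```

Fitting l0 and p to the failure data actually gathered during testing, and checking the fit over a number of data points, is what justifies using the model to predict the remaining testing time.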

Strategic issues

The following issues must be addressed if a successful software testing strategy is to be implemented:


  1. Specify product requirements in a quantifiable manner long before testing commences. Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability. These should be specified in a way that is measurable so that testing results are unambiguous.
  2. State testing objectives explicitly. The specific objectives of testing should be stated in measurable terms. For example, test effectiveness, test coverage, mean time to failure, the cost to find and fix defects, remaining defect density or frequency of occurrence, and test work-hours per regression test should all be stated within the test plan.
  3. Understand the users of the software and develop a profile for each user category. Use cases, which describe the interaction scenario for each class of user, can reduce overall testing effort by focusing testing on actual use of the product.
  4. Develop a testing plan that emphasizes "rapid cycle testing". The feedback generated from these rapid cycle tests can be used to control quality levels and the corresponding test strategies.
  5. Build "robust" software that is designed to test itself. Software should be designed using antibugging techniques; that is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.
  6. Use effective formal technical reviews as a filter prior to testing. Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software.
  7. Conduct formal technical reviews to assess the test strategy and the test cases themselves. Formal technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and improves product quality.
  8. Develop a continuous improvement approach for the testing process. The test strategy should be measured, and the metrics collected during testing should be used as part of a statistical process control approach for software testing.
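As a minimal illustration of the antibugging idea described above, a routine can verify its own preconditions and report a diagnosable error rather than silently producing a wrong answer. The binary search below is a generic example chosen for this sketch, not a routine from any particular product:

```python
def binary_search(items, target):
    """Binary search with an antibugging check: the routine diagnoses
    a violated precondition (an unsorted list) instead of failing silently."""
    # Self-diagnosis: verify the invariant the algorithm depends on.
    if any(items[i] > items[i + 1] for i in range(len(items) - 1)):
        raise ValueError("binary_search requires a sorted list")
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```

Because the routine raises a specific, named error when its input is invalid, automated regression tests can assert on that behavior directly, which is what makes self-diagnosing code easier to test.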
