Developing Unit Test Specifications in Software Testing, Part Two

Step 3 - Negative Testing

Existing test cases should be enhanced and further test cases should be designed to show that the software does not do anything that it is not specified to do. This step depends primarily upon error guessing, relying upon the experience of the test designer to anticipate problem areas.

Suitable techniques:

- Error guessing

- Boundary value analysis

- Internal boundary value testing

- State-transition testing
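As a sketch of how boundary value analysis supports negative testing, consider a hypothetical unit `clamp_percentage`, specified to accept integers from 0 to 100 and to reject everything else. The function name and range are assumptions for illustration only; the point is that test cases sit on and just beyond each boundary, including inputs the unit is specified to reject.

```python
def clamp_percentage(value: int) -> int:
    """Hypothetical unit under test: accepts 0..100 inclusive,
    raises ValueError for anything outside that range."""
    if not 0 <= value <= 100:
        raise ValueError(f"out of range: {value}")
    return value

def test_boundaries():
    # Positive tests: values on and just inside each boundary.
    for v in (0, 1, 99, 100):
        assert clamp_percentage(v) == v
    # Negative tests: values just outside each boundary must be rejected.
    for v in (-1, 101):
        try:
            clamp_percentage(v)
            assert False, f"expected rejection of {v}"
        except ValueError:
            pass  # rejection is the specified behaviour

test_boundaries()
```

The negative cases (-1 and 101) are the ones this step adds: they demonstrate that the unit does not silently accept values it is not specified to handle.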

Step 4 - Special Considerations

Where appropriate, test cases should be designed to address issues such as performance, safety requirements and security requirements. Particularly for safety and security, it can be convenient to give these test cases special emphasis, to facilitate safety or security analysis and certification. Test cases already designed which address security issues or safety hazards should be identified in the unit test specification. Further test cases should then be added to ensure that all security issues and safety hazards applicable to the unit are fully addressed.

Suitable techniques:

- Specification derived tests
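One lightweight way to make safety and security test cases easy to identify in a unit test specification is to tag each test case with the hazard or security issue it addresses, and then check that no hazard is left uncovered. The identifiers and the helper below are invented for illustration and do not follow any particular standard's naming scheme.

```python
# Each test case in the specification is tagged with the safety hazards
# (HAZ-*) or security issues (SEC-*) it addresses. All identifiers here
# are hypothetical examples.
test_cases = {
    "TC-01": ["HAZ-001"],             # addresses an overflow hazard
    "TC-02": ["SEC-001", "HAZ-002"],  # addresses injection issue and range hazard
    "TC-03": [],                      # ordinary functional test, no tags
}

def uncovered(hazards, cases):
    """Return hazards/issues not addressed by any tagged test case."""
    covered = {h for tags in cases.values() for h in tags}
    return sorted(set(hazards) - covered)

all_hazards = ["HAZ-001", "HAZ-002", "SEC-001", "SEC-002"]
print(uncovered(all_hazards, test_cases))  # SEC-002 still needs a test case
```

A gap reported here corresponds to the step's requirement that further test cases be added until every applicable hazard and security issue is addressed.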

Step 5 - Coverage Tests

The test coverage likely to be achieved by the designed test cases should be visualised. Further test cases can then be added to the unit test specification to achieve specific test coverage objectives. Once coverage tests have been designed, the test procedure can be developed and the tests executed.

Suitable techniques:

- Branch testing

- Condition testing

- Data definition-use testing

- State-transition testing
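To illustrate the difference between two of the techniques above, take a hypothetical unit with one compound decision. Branch testing only needs the decision to evaluate both ways, while condition testing also exercises each individual condition as both true and false. The function and its inputs are assumptions for illustration.

```python
def access_allowed(is_admin: bool, is_owner: bool) -> bool:
    # A single decision containing two conditions.
    if is_admin or is_owner:
        return True
    return False

# Branch coverage: two cases are enough, one per branch outcome.
branch_cases = [(True, False), (False, False)]

# Condition coverage: each condition must evaluate both True and False,
# so a third case is needed (is_owner has never been True above).
condition_cases = [(True, False), (False, True), (False, False)]

for is_admin, is_owner in condition_cases:
    assert access_allowed(is_admin, is_owner) == (is_admin or is_owner)
```

Note that the branch-coverage cases never evaluate `is_owner` as True, which is exactly the kind of gap condition testing is designed to close.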

Test Execution

A test specification designed using the above five steps should in most cases provide a thorough test for a unit. At this point the test specification can be used to develop an actual test procedure, and the test procedure used to execute the tests. For users of AdaTEST or Cantata, the test procedure will be an AdaTEST or Cantata test script.

Execution of the test procedure will identify errors in the unit which can be corrected and the unit re-tested. Dynamic analysis during execution of the test procedure will yield a measure of test coverage, indicating whether coverage objectives have been achieved. There is therefore a further coverage completion step in the process of designing test specifications.
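The coverage measurement that dynamic analysis provides can be sketched with a minimal line tracer built on Python's `sys.settrace`. This stands in for a dedicated tool such as AdaTEST or Cantata; the traced unit is an assumed example.

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every executed line in traced frames.
    if event == "line":
        executed.add(frame.f_lineno)
    return tracer

def unit_under_test(x):
    if x > 0:
        return "positive"
    return "non-positive"  # never reached by the single test below

sys.settrace(tracer)       # start dynamic analysis
unit_under_test(5)         # exercises only the x > 0 branch
sys.settrace(None)         # stop dynamic analysis

# Compare executed lines against the unit's body lines (offsets from
# the def line: 1 = the if, 2 = the true branch, 3 = the false branch).
start = unit_under_test.__code__.co_firstlineno
missed = [ln - start for ln in range(start + 1, start + 4) if ln not in executed]
print("unexecuted body lines (offsets):", missed)  # the false branch is uncovered
```

The non-empty `missed` list is the signal that coverage objectives have not been achieved, which is what triggers the coverage completion step described next.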

Step 6 - Coverage Completion

Depending upon an organisation's standards for the specification of a unit, there may be no structural specification of processing within a unit other than the code itself. There are also likely to have been human errors made in the development of a test specification. Consequently, there may be complex decision conditions, loops and branches within the code for which coverage targets were not met when the tests were executed. Where coverage objectives are not achieved, analysis must be conducted to determine why. Failure to achieve a coverage objective may be due to:

- Infeasible paths or conditions: the corrective action should be to annotate the test specification with a detailed justification of why the path or condition is not tested. AdaTEST provides some facilities to help exclude infeasible conditions from Boolean coverage metrics.

- Unreachable or redundant code: the corrective action will probably be to delete the offending code. It is easy to make mistakes in this analysis, particularly where defensive programming techniques have been used. If there is any doubt, the defensive code should not be deleted.

- Insufficient test cases: test cases should be refined and further test cases added to the test specification to fill the gaps in test coverage.

Ideally, the coverage completion step should be conducted without looking at the actual code. In practice, however, some sight of the code may be necessary in order to achieve coverage targets. It is vital that all test designers recognise that use of the coverage completion step should be minimised. The most effective testing comes from analysis and specification, not from experimentation and over-reliance on the coverage completion step to compensate for sloppy test design.

Suitable techniques:

- Branch testing

- Condition testing

- Data definition-use testing

- State-transition testing
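State-transition testing, listed above for both the coverage and coverage completion steps, can be sketched against a hypothetical unit whose valid behaviour is an explicit transition table. The states and events below are assumptions for illustration; the tests exercise every valid transition and confirm that an invalid transition is rejected.

```python
# Hypothetical unit under test: a connection with three valid transitions.
# Any (state, event) pair not in the table is an invalid transition.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "send"): "opened",
    ("opened", "close"): "closed",
}

class Connection:
    def __init__(self):
        self.state = "closed"

    def event(self, name):
        key = (self.state, name)
        if key not in TRANSITIONS:
            raise RuntimeError(f"invalid event {name!r} in state {self.state!r}")
        self.state = TRANSITIONS[key]

# Positive tests: exercise every transition in the table at least once.
c = Connection()
for ev in ("open", "send", "close"):
    c.event(ev)
assert c.state == "closed"

# Negative test: an event that is not valid in the current state.
try:
    c.event("send")  # "send" is invalid while closed
    assert False, "expected rejection"
except RuntimeError:
    pass
```

Covering every row of the transition table gives the positive cases, while the invalid-event check ties this technique back to the negative testing of Step 3.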

