AUTOMATED TESTING PROCESS

The testing process has these four steps:

Creating a testplan (if you are using the testplan editor)

Recording a test frame

Creating testcases

Running testcases and interpreting their results

Creating a testplan

A testplan consists of:

Descriptions of individual tests and groups of tests; you can use as many levels of description as needed.

Statements that link the test descriptions in the plan to the 4Test routines, called testcases, that accomplish the actual work of testing.
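
For example, a fragment of a testplan might look like the following minimal sketch; the descriptions, the script name find.t, and the testcase names are hypothetical:

    Find dialog tests
        Test that Find locates existing text
            script: find.t
            testcase: FindExistingText
        Test that Find reports text that is not present
            script: find.t
            testcase: FindMissingText

The outline lines are the test descriptions; the script: and testcase: statements beneath them link each description to the 4Test code that performs the test.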

Recording a test frame

Next, record a test frame, which contains descriptions, called window declarations, of each of the GUI objects in your application. A window declaration specifies a logical, cross-platform name for a GUI object, called the identifier, and maps the identifier to the object’s actual name, called the tag. In addition, the declaration indicates the type of the object, called its class.
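
For example, a window declaration for the main window of a hypothetical Text Editor application might look like the following sketch; the identifier TextEditor, the tag "Text Editor", and the menu names are assumptions for illustration:

    window MainWin TextEditor
        tag "Text Editor"

        Menu File
            tag "File"
            MenuItem New
                tag "New"
            MenuItem Close
                tag "Close"

Here MainWin is the class, TextEditor is the identifier your testcases use, and "Text Editor" is the tag that maps the identifier to the actual caption of the window.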

Creating testcases

The 4Test commands in a testcase collectively perform three distinct actions (see the sketch after this list):

Drive the application to the state to be tested.

Verify the state (this is the heart of the testcase).

Return the application to its original state.
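
A minimal sketch of such a testcase, reusing the hypothetical Text Editor declarations above and assuming a declared child window named DocumentWindow, might look like this:

    testcase CreateNewDocument ()
        // Drive the application to the state to be tested
        TextEditor.File.New.Pick ()

        // Verify the state (the heart of the testcase)
        Verify (DocumentWindow.Exists (), TRUE)

        // Return the application to its original state
        TextEditor.File.Close.Pick ()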

You can use the powerful object-oriented recorder to capture these 4Test commands automatically as you interact with the application, or write the 4Test code by hand if you are comfortable with programming languages. For maximum ease and power, you can combine the two approaches: record the basic testcase, then extend it using 4Test’s flow-of-control features.
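
For instance, a recorded testcase that types a single string could be extended by hand with a 4Test list and loop; every name in this sketch is hypothetical:

    testcase TypeSeveralLines ()
        LIST OF STRING lsInputs = {"first line", "second line", "third line"}
        STRING sInput

        TextEditor.File.New.Pick ()
        for each sInput in lsInputs
            // Send each string, followed by a carriage return
            DocumentWindow.TypeKeys (sInput)
            DocumentWindow.TypeKeys ("<Enter>")
        TextEditor.File.Close.Pick ()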

Running testcases and interpreting results

Next, run one or more testcases, either by running a collection of scripts, called a suite, or, if you are using the testplan editor, by running specific portions of the testplan. As each testcase runs, statistics are written to a results file. The results file and its associated comparison tools allow you to quickly pinpoint the problems in your application.
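
A suite is simply a text file, conventionally with a .s extension, that lists the scripts (and, optionally, other suites) to run, one per line; a hypothetical suite for the Text Editor tests might contain:

    find.t
    goto.t
    replace.t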
