
AUTOMATED TESTING PROCESS

The testing process has these four steps:

Creating a testplan (if you are using the testplan editor)

Recording a test frame

Creating testcases

Running testcases and interpreting their results

Creating a testplan

First, create a testplan. It contains descriptions of individual tests and groups of tests; as many levels of description as needed can be used.

It also contains statements that link the test descriptions in the plan to the 4Test routines, called testcases, that do the actual work of testing.

Recording a test frame

Next, record a test frame, which contains descriptions, called window declarations, of each of the GUI objects in your application. A window declaration specifies a logical, cross-platform name for a GUI object, called the identifier, and maps the identifier to the object’s actual name, called the tag. In addition, the declaration indicates the type of the object, called its class.
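As a sketch, a window declaration in a 4Test test frame might look roughly like this (the window, identifiers and tags here are hypothetical names, and the exact syntax depends on the tool version in use):

    window MainWin MyApp
        tag "My Application"

        // identifier: OK, tag: "OK", class: PushButton
        PushButton OK
            tag "OK"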

Creating testcases

The 4Test commands in a testcase collectively perform three distinct actions:

Drive the application to the state to be tested.

Verify the state (this is the heart of the testcase).

Return the application to its original state.

The powerful object-oriented recorder can automatically capture these 4Test commands as you interact with the application, or you can write the 4Test code manually if you are comfortable with programming languages. For maximum ease and power, the two approaches can be combined: record the basic testcase, then extend it using 4Test's flow-of-control features.
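For instance, a recorded and hand-extended testcase might look roughly like this sketch (MyApp and AboutBox are hypothetical identifiers declared in a frame like the one above; Verify is 4Test's built-in comparison statement):

    testcase VerifyAboutBox ()
        // Drive the application to the state to be tested
        MyApp.Help.About.Pick ()
        // Verify the state (the heart of the testcase)
        Verify (AboutBox.Exists (), TRUE)
        // Return the application to its original state
        AboutBox.OK.Click ()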

Running testcases and interpreting results

Next, run one or more testcases, either by running a collection of scripts, called a suite, or, if you are using the testplan editor, by running specific portions of the testplan. As each testcase runs, statistics are written to a results file. The results file and its associated comparison tools allow you to quickly pinpoint the problems in your application.


Test Automation Check points and Control Points

Control Points

In any given automation tool, overall control of the application under test (AUT) rests on its object identification technique. Through this feature the tool recognizes the application as a medium it can interrogate with tester-supplied inputs, exercising the flow of the business logic.

Using this object identification technique, the tool provides control features that check the application at various points in time. Numerous criteria, object handlers and predefined conditions determine the object-based behaviour of functional checkpoints. Each tester has a different perspective on defining control points.

If … Else:

1. Before starting an "if else" construct, comment the nature of the control point alongside it.

For example (a sketch in TSL, WinRunner's scripting language; rc is a hypothetical variable holding the return code of the preceding launch step, and report_msg is used in place of the original print):

    # Home Page Validation
    if (rc == 0)
        report_msg ("Successfully Launched");
    else
        report_msg ("Operation Unsuccessful");

2. For all data table operations, handle the return code of the open function in an "if else" construct, as sketched below.
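A minimal TSL sketch, assuming the WinRunner data table functions (the table path is a hypothetical name):

    # Open the data table and check the return code before any row operations
    table = "C:\\qa\\data\\default.xls";
    rc = ddt_open (table, DDT_MODE_READ);
    if (rc == E_OK)
    {
        report_msg ("Data table opened");
        ddt_close (table);
    }
    else
        report_msg ("Could not open data table: " & table);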

Check Points

1. A checkpoint should never depend on X and Y co-ordinates. In practical terms, if a checkpoint is defined on X,Y parameters, it loses its usefulness as soon as the application's layout changes. The following criteria summarize the do's and don'ts of checkpoints.

S.No | Check Point  | Include                 | Exclude
1    | Text Check   | Captured text           | Position of the text, font and font size, text area
2    | Bitmap Check | Only the picture        | The window or screen that holds the picture; x-y co-ordinates
3    | Web Check    | URL check, orphan pages | Any text validation

2. As a case study, the WinRunner automation tool is used here as an example of creating checkpoints. Avoid OBJ_CHECK_INFO or WIN_CHECK_INFO and instead always create a GUI checkpoint with multiple properties. The advantage is that every small object is identified with its clause, its properties, and its relation to previous versions.

This not only enables regression comparisons but also gives you the flexibility of defining GUI checks for every physical state of the object.
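A minimal TSL sketch of such a multiple-property GUI checkpoint (the window name, object name, checklist and expected-results names are hypothetical, matching what WinRunner generates when the checkpoint is created):

    # Verify several properties of the OK button in one GUI checkpoint
    set_window ("Login", 5);
    obj_check_gui ("OK", "list1.ckl", "gui1", 1);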



Script Environment in Software Testing

The basic idea of setting up the test bed is that the test suite must be portable and readily able to run in any environment, given the initial conditions. To make this happen, the automation tool supports many functions from which a generic methodology can be evolved, wrapping up the built-ins to run before the test suite starts executing scripts. In other words, how the test scripts are organized rests with the developer, who should anticipate the issues and hurdles that can be avoided with little or no extra programming.

Common functions that go into the initialization script are listed below; a minimal sketch follows the list.

1. Use built-in commands to keep the test path dynamically loaded, ruling out hard-coded test path definitions.

2. Close all object files and data files in the initialization script.

3. Establish the database connections in the initialization script.

4. Always unload and then load the object library, and do it only in the initialization script.

5. Define all "public" variables in the initialization script.
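A minimal TSL initialization-script sketch (the paths, session name and connection string are hypothetical; db_connect assumes the database add-in is loaded and the usual E_OK return convention):

    # Reload the object library (GUI map) from a clean state
    GUI_unload_all ();
    GUI_load ("C:\\qa\\gui\\app.gui");

    # Public variables visible to every driver and driven script
    public test_path = "C:\\qa\\tests";

    # Establish the database connection once, before the suite runs
    rc = db_connect ("session1", "DSN=QA_DB;UID=tester;PWD=secret");
    if (rc != E_OK)
        report_msg ("Database connection failed");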

Test Script Elements:

Prior to developing test scripts, the arrangement of the test ware needs proper planning. Let's look at a few inputs on arranging it; a driver-script sketch follows the table.

Test Ware      | Test Repository Contents
Test Suite     | Sub-folders, exception handlers, global object files, test data files, driver scripts, initialization and termination scripts
Driver Script  | Object checks, bitmap checks, text checks, web checks, user-defined functions, global test report folder
Driven Script  | GUI/bitmap/text checks, external libraries, I/O handlers
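As a sketch, a driver script mostly sequences the driven scripts, with the initialization and termination scripts bracketing the run (the test paths are hypothetical; TSL's call statement invokes another test):

    # Driver script: run the driven scripts in order
    call "C:\\qa\\tests\\login" ();
    call "C:\\qa\\tests\\create_order" ();
    call "C:\\qa\\tests\\logout" ();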


TEST TOOL AUTOMATION BEST PRACTICES

Definition of Tests

As the prime entry point, defining the tests needs an idea of how to classify the scripts into finer functional elements, each contributing to a different aspect of the automation technique.

Looking at it from this perspective, the elements of an automation script involve record/playback techniques, the details of the application (better understood as objects in the tool), execution of the business logic using loop constructs, and test data access for batch processes or back-end operations. Ultimately we need all of these salient features to function at the right point in time with the right inputs. To satisfy these criteria, considerable planning is required before we start automating the test scripts.

Test Recorder

In automation tools the test recorder has two modes: object based and action (analog) based. Choosing which mode to use requires a meticulous yet simple approach. Although action mode cannot always be avoided, and is still used for many terminal-emulator (TE) based applications, object-based recording is the widely accepted, and effectively mandatory, mode of operation in test automation. To the extent possible, avoid action-based functions and stick to the object mode of operation.

Generic Test Environment Options

Some common settings we need to make in the General Options are listed below; a script-level sketch follows the list.

1. The default recording mode is object mode.

2. The synchronization point time is 10 seconds by default.

3. When test execution is in batch mode, ensure all interactive options are turned off so that the batch test runs uninterrupted.

4. In text recognition, if the application text is not recognizable, set the default font group: identify the text group with a user-defined name and then include it in the General Options.
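Where the tool allows it, such options can also be set from the script rather than through the dialog. A TSL sketch using setvar (the option names "timeout_msec" and "batch" are given from memory as assumptions; verify them against your WinRunner version's list of testing options):

    # Checkpoint/synchronization timeout of 10 seconds
    setvar ("timeout_msec", "10000");
    # Suppress interactive messages so a batch run is uninterrupted
    setvar ("batch", "on");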

Test Properties

1. Before recording any script, ensure that the test properties are set to Main Test with the defaults.

2. Do not add any parameters to the Main Test.

3. It is not good practice to load the object library from the test options (if any). Rather, load the object library from the script using the suitable tool commands (GUI_load and GUI_unload in WinRunner, for example). This avoids hidden settings, and loading and unloading the object library is better done dynamically in the test script than manually every time the test suite is run.

4. Ensure the add-ins are correct on the Add-ins tab.


Automating Testing Analysis

When planning a test, one has the opportunity to choose whether to perform the test manually or to automate it. We will discuss the benefits and problems of automating tests, but we will start with the possibilities that lie at hand.

The Possibilities with Tools:

  • Requirements testing: formal review; no automated tools are used.

  • Design testing: the major testing technique is the formal review, but there are some possibilities to use automated tools.

  • Testing in the small (unit testing):

Test data generators, to generate files and test data input

Program logic analyzers, to check code

Test coverage tools, to check which parts have been tested (run)

Test drivers, to execute the program being tested, simulating it and running it with accurate data and input

Test comparators, to compare test case outcome to expected outcome

  • Testing in the large (Integration testing, System testing, Acceptance testing):

1. Test data generators, to generate files and test data input

2. Instrumenters, to measure and report test coverage during execution

3. Comparators, to compare outputs, files and source programs (see the comparator sketch after this list)

4. Capture Playback systems, to capture on-line test data and play it back

5. Simulators and test beds, to simulate complex and real-time environments

6. Debugging aids, to aid in determining what is wrong and getting it repaired

7. Testing information systems, to maintain test cases and report on testing activities

8. File aids, to extract test data from existing files for use in testing

9. System charters and documents, to provide automatic system documentation
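As a small illustration of what a comparator does, WinRunner's TSL provides file_compare; here is a minimal sketch (the file paths are hypothetical, and we assume file_compare returns E_OK when the files match):

    # Compare this run's captured output against a saved baseline
    rc = file_compare ("C:\\qa\\out\\actual.txt", "C:\\qa\\out\\baseline.txt");
    if (rc == E_OK)
        tl_step ("compare output", 0, "Output matches the baseline");       # status 0 reports pass
    else
        tl_step ("compare output", 1, "Output differs from the baseline");  # non-zero reports fail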

  • Code-based: generates tests that check that the code does what the code does; does not check that it does what it should.

  • Interface-based: generates tests based on well-defined interfaces, such as a GUI; can create tests that visit every button, check-box or menu on a site.

  • Specification-based: generates both input and expected output from the specifications.

General Considerations:

The reason for automating testing is of course to make the test process more efficient. Not all parts of testing are performed more efficiently when automated. The more intellectual tasks, such as designing test cases, are often best performed manually, although even these sometimes qualify for automation.

Another characteristic that makes a test case suitable for automation is that it is performed repeatedly. Even when automating tests with one of the many available tools, most errors are often found manually, since the test case is run manually before it is automated and executed.

While on the subject of comparing actual outcome to expected outcome, it is important to point out that even though the actual outcome may be as expected, that does not mean the application passes the test, or that it is free of the errors tested for. The difference here is that a poorly defined manual test may still find many important deficiencies or errors that a likewise poorly defined automated test will not, since the automated test can only perform the actions specified and verify the given expected outcome.

That is the difference between human testers and comparators or test execution tools. While tools only do what they are specified to do and compare only the specified outcome, human testers performing the test manually are able to adjust the test case to unforeseen events, such as an error dialog box, and to check the outcome in many more ways than tools can. When a human tester performs a test, almost any test, he can simultaneously be said to be performing a usability test, since he has to navigate, wait, view and understand the application being tested. These positive side effects are never achieved when automating testing.

Automated testing, on the other hand, ensures that a test is always being run in the exact same way, which is often important to reproduce errors or when performing regression tests.

Benefits:

Among the most obvious benefits is the possibility to run the same test on new versions of a program. A program may then be tested to check for new bugs, on existing features, that may have been introduced when adding new features or correcting deficiencies. This is regression testing. The same applies to re-testing, i.e. testing the functionality of a debugged feature. If a test has been created previously it will most likely be very easy to run it again, which is not only a benefit, but as we stated earlier, it might also be a must for automation of some tests to be at all worth considering.

Load testing, for instance, is sometimes possible but not advisable to do manually. Applying a load of hundreds of users might be done manually but would require a massive administrative effort. Applying the same load by simulating the users with a tool decreases the required effort immensely.

Problems:

What we believe to be most important to remember is that the manual testing process must be well structured, with necessary and consistent documentation and consisting of tests good at finding errors and deficiencies, before considering automating. Without meeting these requirements, automation will most likely cause more problems than it solves.

Do not automate because you want to find additional errors that you did not find manually. Most tools are, as we stated earlier, re-test tools, meaning they execute a test that has in fact already been run. This means that most errors that can be found by this test have already been found. Despite this, there are tests that might still benefit from automation in this respect; we have already mentioned load testing for web applications.

Automated testing is not the same as automatically creating the test scripts. In order to receive long-term value from tools, tests and test scripts need to be maintainable. By automatically creating the scripts using capture tools, one builds in a certain amount of inflexibility. The actions taken by the user create a strict order in the script, and if the tested application is modified, the captured sequence may no longer be valid. This often generates unacceptable maintenance costs.

Here we point out again that the fact that a test passes does not mean the program is free of errors. Automating such a test means that any error it misses is preserved into future releases.

Evaluation of tools:

Benefits

  • Running the same tests on later versions of the application

  • Running tests on a more frequent basis

  • Shortened time to market for later releases

  • The tests are being run in the exact same manner every time

Problems

  • A well structured test organization on the manual level is needed before automating

  • In order to create and perform relevant tests, the tools need to be fully understood

  • When an automated test is run, most of the faults have already been found

  • It can be hard to distinguish whether the faults lay in the tool or the application

  • Automatically created scripts come hand in hand with low maintainability

Whether and which tools are of interest therefore depends on the following:

  • Size of the test project

  • Size of the application

  • Technology used in the application

  • Stage of the test process

  • Resources available

  • Types of test


Automated Testing Tools

Automation of testing is a state-of-the-art technique in which a number of tools help test a program automatically. Programmers can use any of these tools to test their programs and ensure quality. A number of tools are available in the market. Some of the tools that help the programmer are:

  1. Static analysers

  2. Code auditors

  3. Assertion processors

  4. Test file generators

  5. Test data generators

  6. Test verifiers

  7. Output comparators

Programmers can select any tool depending on the complexity of the program.
