Walkthroughs, Inspections of Software Testing Part Two

This post is a continuation of the previous post, Walkthroughs, Inspections of Software Testing Part One.

1. System Requirements Review

This review is an examination of the initial progress during the problem definition stage and of the convergence on a complete system configuration. Test planning and test documentation are begun at this review.

2. System Design Review

This review occurs when the system definition has reached a point where major system modules can be identified and completely specified along with the corresponding test requirements. The requirements for each major subsystem are examined along with the preliminary test plans. Tools required for verification support are identified at this stage.

3. Preliminary Design Review

This review is a formal technical review of the basic design approach for each major subsystem or module. The revised requirements and preliminary design specifications for each major subsystem and all test plans, procedures and documentation are reviewed at this stage. Development and verification tools are further identified at this stage. Changes in requirements will lead to an examination of the test requirements to maintain consistency.

4. Final Design Review

This review occurs just prior to the beginning of the construction stage. The complete and detailed design specifications for each module and all draft test plans and documentation are examined. Again, consistency with previous stages is reviewed, with particular attention given to determining if test plans and documentation reflect changes in the design specifications at all levels.

5. Final Review

This review determines through testing that the final coded subsystem conforms to the final system specifications and requirements. It is essentially the subsystem acceptance test.

The following rules should be followed for all reviews:

1. The product is reviewed, not the producer.
2. Defects and issues are identified, not corrected.
3. All members of the reviewing team are responsible for the results of the review.

Reviews are conducted to utilize the variety of perspectives and talents brought together in a team. The main goal is to identify defects within the stage or phase of the project where they originate, rather than in later test stages; this is referred to as “stage containment.”

As reviews are generally greater than 65 percent efficient in finding defects, and testing is often less than 30 percent efficient, the advantage is obvious. In addition, since defects identified in the
review process are found earlier in the life cycle, they are less expensive to correct.

Another advantage of holding reviews is not readily measurable. That is, reviews are an efficient method of educating a large number of people on a specific product/project in a relatively short period of time. Semiformal reviews are especially good for this, and indeed, are often held for just that purpose.

In addition to learning about a specific product/project, team members are exposed to a variety of approaches to technical issues, a cross-pollination effect. Finally, reviews provide training in and enforce the use of standards, as nonconformance to standards is considered a defect and reported as such.

Related Posts

Parallel testing technique

TESTING CONSTRAINTS PART TWO

LIFE CYCLE TESTING

TEST METRICS

Independent Software Testing

Test Process

Testing verification and validation

Functional and structural testing

Static and dynamic testing

V model testing

Eleven steps of V model testing

Structural testing

Execution testing technique

Recovery Testing technique

Operation testing technique

Compliance software testing technique

Security testing technique

Walkthroughs, Inspections of Software Testing

Walkthroughs and inspections are formal manual techniques that are a natural evolution of desk checking. Both procedures require a team, usually directed by a moderator. The team includes the developer, but the remaining members and the moderator should not be directly involved in the development effort.

Both techniques are based on a reading of the product (e.g., requirements, specifications, or code) in a formal meeting environment with specific rules for evaluation. The difference between inspection and walkthrough lies in the conduct of the meeting. Both methods require preparation and study by the team members, and scheduling and coordination by the team moderator.

Inspection involves a step-by-step reading of the product, with each step checked against a predetermined list of criteria. These criteria include checks for historically common errors. Guidance for developing the test criteria can be found elsewhere. The developer is usually required to narrate the reading of the product, and often finds many errors through the simple act of reading aloud.

Walkthroughs differ from inspections in that the developer does not narrate a reading of the product by the team, but provides test data and leads the team through a manual simulation of the system. The test data is walked through the system, with intermediate results kept on a blackboard or paper.

The test data should be kept simple given the constraints of human simulation. The purpose of the walkthrough is to encourage discussion, not just to complete the system simulation on the test data. Most errors are discovered through questioning the developer's decisions at various stages, rather than through the application of the test data.

At the problem definition stage, walkthroughs and inspections can be used to determine whether the requirements satisfy the testability and adequacy measures applicable to this stage in the development. If formal requirements are developed, formal methods, such as correctness techniques, may be applied to ensure adherence to the quality factors.

Walkthroughs and inspections should again be performed at the preliminary and detailed design stages. Design walkthroughs and inspections will be performed for each module and module interface. Adequacy and testability of the module interfaces are very important. Any changes that result from these analyses will cause at least a partial repetition of the verification at both stages and between the stages. A reexamination of the problem definition and requirements may also be required.

Finally, the walkthrough and inspection procedures should be performed on the code produced during the construction stage. Each module should be analyzed separately and as integrated parts of the finished software.


White- and Black-Box Testing Examples

White-Box Testing

White-box testing assumes that the path of logic in a unit or program is known. White-box testing consists of testing paths, branch by branch, to produce predictable results. The following are white-box testing techniques:
  1. Statement Coverage: Execute every statement at least once.
  2. Decision Coverage: Execute each decision direction (true and false) at least once.
  3. Condition Coverage: Execute each condition within a decision with all possible outcomes at least once.
  4. Decision/Condition Coverage: Execute each condition outcome and each decision direction at least once. Treat all loops as two-way decisions, exercising each loop zero times and at least one time.
  5. Multiple Condition Coverage: Execute all possible combinations of condition outcomes in each decision.
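As an illustration of these criteria, here is a minimal sketch of decision coverage applied to a two-decision routine. The function, discount rates, and values are hypothetical, not taken from any particular system:

```python
# Hypothetical function: compute the price of an order after discounts.
def discount(total, is_member):
    if total > 100:            # decision 1
        rate = 0.10
    else:
        rate = 0.0
    if is_member:              # decision 2
        rate += 0.05
    return round(total * (1 - rate), 2)

# Decision coverage requires each decision to take both the true and
# false direction at least once. These four cases exercise both
# decisions in both directions (and, since each decision here has a
# single condition, they satisfy condition coverage as well).
cases = [
    (150, True,  127.50),  # decision 1 true,  decision 2 true
    (150, False, 135.00),  # decision 1 true,  decision 2 false
    (50,  True,  47.50),   # decision 1 false, decision 2 true
    (50,  False, 50.00),   # decision 1 false, decision 2 false
]
for total, member, expected in cases:
    assert discount(total, member) == expected
```

With decisions containing compound conditions (e.g., `if a and b:`), condition coverage and multiple condition coverage would demand additional cases beyond those above.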
Black-Box Testing

Black-box testing focuses on testing the function of the program or application against its specification. Specifically, this technique determines whether combinations of inputs and operations produce expected results.

When creating black-box test cases, the input data used is critical. Three successful techniques for managing the amount of input data required are:

Equivalence Partitioning

An equivalence class is a subset of data that is representative of a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class.

Boundary Analysis

A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function.

Error Guessing

Test cases can be developed based upon the intuition and experience of the tester.
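A minimal sketch of the three techniques, using a hypothetical validator that accepts integer quantities from 1 to 100 (the function and its range are illustrative assumptions):

```python
# Hypothetical validator: accepts integer quantities from 1 to 100.
def valid_quantity(q):
    return isinstance(q, int) and 1 <= q <= 100

# Equivalence partitioning: one representative value stands in for each
# equivalence class -- below range, in range, above range.
assert valid_quantity(-5) is False   # class: below range
assert valid_quantity(50) is True    # class: in range
assert valid_quantity(500) is False  # class: above range

# Boundary analysis: test on and just beyond each edge of the range.
for q, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert valid_quantity(q) is expected

# Error guessing: cases an experienced tester suspects, such as a
# value of the wrong type.
assert valid_quantity("10") is False
```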


Parallel Testing Techniques

Parallel testing is used to determine that the results of the new application are consistent with the processing of the previous application or version of the application.

What are the objectives of parallel testing?
  1. Conduct redundant processing to ensure that the new version or application performs correctly.
  2. Demonstrate consistency between the two versions of the same application system, or expose any inconsistencies between them.
How to Use Parallel Testing?

Parallel testing requires that the same input data be run through two versions of the same application. Parallel testing can be done with the entire application or with a segment of the application.

Sometimes a particular segment, such as the day-to-day interest calculation on a savings account, is so complex and important that an effective method of testing is to run the new logic in parallel with the old logic.

If the new application changes data formats, then the input data will have to be modified before it can be run through the new application. This also makes it difficult to automatically check the results of processing through a tape or disk file compare.

The more difficulty encountered in verifying results or preparing common input, the less attractive the parallel testing technique becomes.

What are the parallel test examples?
  1. Operate a new and an old version of a payroll system to determine that the paychecks from both systems are reconcilable.
  2. Run the old version of the application system to ensure that the operational status of the old system has been maintained, in the event that problems are encountered in the new application.
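The day-to-day interest calculation mentioned above can be sketched as a parallel test; the two functions, rates, and balances are hypothetical stand-ins for the old and new versions of the same calculation:

```python
# Hypothetical old and new implementations of a daily interest
# calculation; names and figures are illustrative, not from the post.
def interest_old(balance, annual_rate):
    return round(balance * annual_rate / 365, 2)

def interest_new(balance, annual_rate):
    # Rewritten logic that should produce the same result.
    daily_rate = annual_rate / 365
    return round(balance * daily_rate, 2)

# Parallel test: run the same input data through both versions and
# reconcile the outputs.
inputs = [(1000.00, 0.05), (2500.50, 0.03), (0.00, 0.05)]
mismatches = [(b, r) for b, r in inputs
              if interest_old(b, r) != interest_new(b, r)]
assert mismatches == []
```

Any entries in `mismatches` would be the inconsistencies the technique is designed to expose.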
When to Use Parallel Testing?

Parallel testing should be used when there is uncertainty regarding the correctness of processing of the new application, and the old and new versions of the application are similar.

In applications like payroll, banking, and other heavily financial applications where the results of processing are similar, even though the methods may change significantly – for example, going from batch to online banking – parallel testing is one of the more effective methods of ensuring the integrity of the new application.


Control Software Testing Technique

Controls include data validation, file integrity, audit trail, backup and recovery, documentation, and the other aspects of systems related to integrity. Control testing techniques are designed to ensure that the mechanisms that oversee the proper functioning of an application system work.

What are the objectives of the control testing technique?

Control is a management tool to ensure that processing is performed in accordance with the intent of management. The objectives are to verify that:
  1. Data is accurate and complete.
  2. Transactions are authorized.
  3. An adequate audit trail of information is maintained.
  4. The process is efficient, effective, and economical.
  5. The process meets the needs of the user.
How to Use Control Testing?

Control can be considered a system within a system. The term “system of internal controls” is frequently used in accounting literature to describe the totality of the mechanisms that ensure the integrity of processing.

Controls are designed to reduce risks; therefore, in order to test controls the risks must be identified. The individual designing the test then creates the risk situations in order to determine whether the controls are effective in reducing them to a predetermined acceptable level of risk.

What are the control testing examples?

Control-oriented people frequently do control testing. Like error-handling testing, it requires a negative look at the application system to ensure that the "what-can-go-wrong" conditions are adequately protected against. Error handling is a subset of controls, oriented toward the detection and correction of erroneous information. Control in the broader sense looks at the totality of the system.

Examples of testing that might be performed to verify controls include:
  1. Determine that there is adequate assurance that the detailed records in a file equal the control total. This is normally done by running a special program that accumulates the detail and reconciles it to the total.
  2. Determine that the manual controls used to ensure that computer processing is correct are in place and working.
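Example 1 can be sketched as a small reconciliation program; the record values, control total, and tolerance are illustrative assumptions:

```python
# Hypothetical file of detail records with a stored control total, as
# in example 1 above: accumulate the detail and reconcile it to the
# control record.
detail_records = [120.00, 75.50, 300.25, 4.25]
control_total = 500.00

def reconcile(details, control, tolerance=0.005):
    # Accumulate the detail records and compare to the control total,
    # allowing a small rounding tolerance.
    accumulated = round(sum(details), 2)
    return abs(accumulated - control) <= tolerance

assert reconcile(detail_records, control_total)               # in balance
assert not reconcile(detail_records + [10.00], control_total) # out of balance
```

An out-of-balance result would indicate that detail records were lost, duplicated, or altered somewhere in processing.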
When to use control testing?

Control testing should be an integral part of system testing. Controls must be viewed as a system within a system, and tested in parallel with other system tests. Knowing that approximately 50 percent of the total development effort goes into controls, a proportionate part of testing should be allocated to evaluating the adequacy of controls.


Manual Support Testing

What are the objectives of manual support testing?

Manual support involves all the functions performed by people in preparing data for, and using data from, automated applications. The objectives of testing the manual support systems are to:
  1. Verify that the manual support procedures are documented and complete.
  2. Determine that manual support responsibility has been assigned.
  3. Determine that the manual support people are adequately trained.
  4. Determine that the manual support and the automated segment are properly interfaced.
How to Use Manual Support Testing?

Manual testing involves the evaluation of the adequacy of the process and the execution of the process. The process itself can be evaluated in all segments of the systems development life cycle.

Testing people processing requires testing the interface between people and the application system. That is, entering transactions, getting the results back from that processing, and taking additional actions based on the information received, until all aspects of the manual-computer interface have been adequately tested.

The manual support testing should occur without the assistance of the systems personnel. The manual support group should operate using the training and procedures provided them by the systems personnel. But the systems personnel should evaluate the results to determine if the tests have been adequately performed.

What are the manual support test examples?

The process of conducting manual support testing can include the following types of tests:
  1. Provide input personnel with the type of information they would normally receive from their customers, and then have them transcribe that information and enter it into the computer.
  2. Prepare output reports from the computer based on typical conditions, and ask the users to take the necessary action based on the information contained in those reports.
  3. Provide users with a series of test conditions and ask them to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answers from the procedures and manuals available to them.
When to use manual support testing?

Extensive manual support testing is best done during the installation phase so that the clerical people do not become involved with the new system until immediately prior to its entry into operation. This avoids the confusion of knowing two systems and not being certain which rules to follow.

During the maintenance and operation phases, manual support testing may only involve providing people with instructions on the changes and then verifying with them, through questioning, that they properly understand the new procedures.


Intersystem Software Testing Technique

Application systems are frequently interconnected to other application systems. The interconnection may be data coming into the system from another application, leaving for another application, or both. Frequently, multiple applications, sometimes called cycles or functions, are involved.

What are the objectives of the intersystem testing technique?

Many problems exist in intersystem testing. One is that it is difficult to find a single individual with jurisdiction over all of the systems below the level of senior management. In addition, the process is time-consuming and costly. The objectives of intersystem testing include:
  1. Determine that proper parameters and data are correctly passed between applications.
  2. Ensure that proper coordination and timing of functions exists between the application systems.
  3. Determine that documentation for the involved systems is accurate and complete.
How to Use Intersystem Testing?

Intersystem testing involves the operation of multiple systems in the test, so the cost may be high, especially if the systems have to be run through several iterations. The process is not difficult, in that files or data used by multiple systems are passed to one another to verify that they are acceptable and can be processed properly. However, the problem can be magnified during maintenance, when two or more of the systems are undergoing internal changes concurrently.

One of the best testing tools for intersystem testing is the integrated test facility. This permits testing to occur in a production environment, so the coupling of systems can be tested at minimal cost.

What are the intersystem test examples?
  1. Developing a representative set of test transactions in one application for passage to another application for processing verification.
  2. Entering test transactions in a live production environment using the integrated test facility, so that test conditions can be passed from application to application to verify that the processing is correct.
  3. Manually verifying that the documentation in the affected systems is updated based upon the new or changed parameters in the system being tested.
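Example 1 above can be sketched as follows; the record layout and field names are assumptions for illustration, not a real interface definition:

```python
# Sketch of intersystem example 1: one application produces a
# representative set of transactions, a second consumes them, and the
# test verifies that every passed record satisfies the receiving
# system's interface contract. Field names are illustrative.
def produce_transactions():
    # Sending application: emits records for the downstream system.
    return [{"account": "A-100", "amount": 250.00, "type": "credit"},
            {"account": "A-200", "amount": 75.25, "type": "debit"}]

def accepts(record):
    # Receiving application's interface check: exact field set,
    # known transaction type, non-negative amount.
    return (set(record) == {"account", "amount", "type"}
            and record["type"] in ("credit", "debit")
            and record["amount"] >= 0)

passed = produce_transactions()
assert all(accepts(r) for r in passed)
```

A record rejected here would indicate a mismatch in the parameters passed between the two applications.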
When to Use Intersystem Testing?

Intersystem testing should be conducted whenever there is a change in parameters between application systems. The extent and type of testing will depend on the risk associated with those parameters being erroneous. If the integrated test facility concept is used, the intersystem parameters can be verified after the changed or new application is placed into production.


What is Error-Handling Testing?

Manual systems can deal with problems as they occur, but automated systems must preprogram their error handling. In many instances the completeness of error handling affects the usability of the application. Error-handling testing determines the ability of the application system to process incorrect transactions properly.

What are its Objectives?

Errors encompass all unexpected conditions. In some systems, approximately 50 percent of the programming effort will be devoted to handling error conditions. Specific objectives of error-handling testing include:
  1. Determine that all reasonably expected error conditions are recognizable by the application system.
  2. Determine that the accountability for processing errors has been assigned and that the procedures provide a high probability that the error will be properly corrected.
  3. Determine that reasonable control is maintained over errors during the correction process.
How to Use Error-Handling Testing?

Error-handling testing requires a group of knowledgeable people to anticipate what can go wrong with the application system. The other forms of testing involve verifying that the application system conforms to requirements; error-handling testing uses exactly the opposite concept.

A successful method for developing test error conditions is to assemble, for a half-day or a day, people knowledgeable in information technology, the user area, and auditing or error tracking. These individuals are asked to brainstorm what might go wrong with the application.

The totality of their thinking must then be organized by application function so that a logical set of test transactions can be created. Without this type of synergistic interaction on errors, it is difficult to develop a realistic body of problems prior to production.

Error-handling testing should test the introduction of the error, the processing of the error, the control condition, and the reentry of the condition properly corrected. This requires error-handling testing to be an iterative process in which errors are first introduced into the system, then corrected, then reentered into another iteration of the system to satisfy the complete error-handling cycle.

What are Error-Handling Test Examples?
  1. Produce a representative set of transactions containing errors and enter them into the system to determine whether the application can identify the problems.
  2. Through iterative testing, enter errors that will result in corrections, and then reenter those transactions with errors that were not included in the original set of test transactions.
  3. Enter improper master data, such as prices or employee pay rates, to determine if errors that will occur repetitively are subjected to greater scrutiny than those causing single error results.
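Example 1 can be sketched as follows; the validation rules and transaction fields are illustrative assumptions, not requirements from any real system:

```python
# Sketch of error-handling example 1: enter a representative set of
# erroneous transactions and verify the application identifies each
# problem. The validation rules here are illustrative assumptions.
def process(txn):
    # Returns the list of errors detected in the transaction;
    # an empty list means the transaction is accepted.
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("non-positive amount")
    if not txn.get("account"):
        errors.append("missing account")
    return errors

error_txns = [
    {"account": "A-1", "amount": -50},   # bad amount
    {"account": "",    "amount": 100},   # missing account
    {"amount": 0},                       # both errors
]
for txn in error_txns:
    assert process(txn), "error not detected: %r" % txn

# A clean transaction should pass with no errors reported.
assert process({"account": "A-1", "amount": 100}) == []
```

The iterative part of the technique would then correct each rejected transaction and reenter it to confirm the full error-handling cycle.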

When to Use Error-Handling Testing?

Error testing should occur throughout the system development life cycle. At all points in the developmental process the impact from errors should be identified and appropriate action taken to reduce those errors to an acceptable level. Error-handling testing assists in the error management process of systems development and maintenance. Some organizations use auditors, quality assurance, or professional testing personnel to evaluate error processing.


What is Manual Support Testing?

Systems commence when transactions originate and conclude with the use of the results of processing. The manual part of the system requires the same attention to testing as does the automated segment. Although the timing and testing methods may be different, the objectives of manual testing remain the same as those for testing the automated segment of the application system.

What are the objectives of manual support testing?

Manual support involves all the functions performed by people in preparing data for, and using data from, automated applications. The objectives of testing the manual support systems are to:
  1. Verify that the manual support procedures are documented and complete.
  2. Determine that manual support responsibility has been assigned.
  3. Determine that the manual support people are adequately trained.
  4. Determine that the manual support and the automated segment are properly interfaced.
How to Use Manual Support Testing?

Manual testing involves evaluating, first, the adequacy of the process and, second, the execution of the process. The process itself can be evaluated in all segments of the systems development life cycle. The execution of the process can be done in conjunction with normal systems testing: rather than preparing and entering test transactions, the system can be tested by having the actual clerical and supervisory people prepare, enter, and use the results of processing from the application system.

Manual testing normally involves several iterations of the process. Testing people processing requires testing the interface between people and the application system. This means entering transactions, getting the results back from that processing, and taking additional actions based on the information received, until all aspects of the manual-computer interface have been adequately tested.

The manual support testing should occur without the assistance of the systems personnel. The manual support group should operate using the training and procedures provided them by the systems personnel. The systems personnel should evaluate the results to determine if the tests have been adequately performed.

What are the Manual Support Test Examples?

The process of conducting manual support testing can include the following types of tests:
  1. Provide input personnel with the type of information they would normally receive from their customers, and then have them transcribe that information and enter it into the computer.
  2. Prepare output reports from the computer based on typical conditions, and ask the users to take the necessary action based on the information contained in those reports.
  3. Provide users with a series of test conditions and ask them to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answers from the procedures and manuals available to them.
When to Use Manual Support Testing?

Verification that the manual systems function properly should be conducted throughout the systems development life cycle. This aspect of system testing should not be left to the latter stages of the life cycle.

Extensive manual support testing is best done during the installation phase so that the clerical people do not become involved with the new system until immediately prior to its entry into operation. This avoids the confusion of knowing two systems and not being certain which rules to follow. During the maintenance and operation phases, manual support testing may only involve providing people with instructions on the changes and then verifying with them through questioning that they properly understand the new procedures.
