Test Reports in Software Testing

The following reports are commonly produced during software testing:

  1. Functional Testing Status

  2. Functions Working Timeline

  3. Expected versus Actual Defects Detected Timeline

  4. Defects Detected versus Corrected Gap Timeline

  5. Average Age of Detected Defects by Type

  6. Defect Distribution

  7. Relative Defect Distribution

  8. Testing Actions

Functional Testing Status

This report will show the percentages of functions that have been (see the sketch after this list):

  1. Fully Tested

  2. Tested with Open Defects

  3. Not Tested
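
As a rough illustration of how these three percentages might be computed (the status labels and per-function data are assumptions, not part of the original report format):

    # Hedged sketch: Functional Testing Status percentages.
    # Status labels and data shape are illustrative assumptions.
    from collections import Counter

    statuses = ["fully_tested", "tested_open_defects", "not_tested",
                "fully_tested", "not_tested"]  # one entry per function

    counts = Counter(statuses)
    total = len(statuses)
    for status in ("fully_tested", "tested_open_defects", "not_tested"):
        print(f"{status}: {100.0 * counts[status] / total:.1f}%")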

Functions Working Timeline

This report will show the planned timeline for having all functions working versus the current status of working functions.

An ideal format could be a line graph.

Expected versus Actual Defects Detected

This report will compare the number of defects actually being detected against the number of defects expected from the planning stage.

Defects Detected versus Corrected Gap

This report, ideally in a line graph format, will show the number of defects uncovered versus the number of defects corrected and accepted by the testing group.

If the gap grows too large, the project may not be ready when originally planned.

Average Age of Detected Defects by Type

This report will show the average number of days that defects remain outstanding, broken down by severity (Sev 1, Sev 2, etc.). In the planning stage, it is beneficial to determine the acceptable number of open days by defect type.

Defect Distribution

This report will show the defect distribution by function or module. It can also show items such as the number of tests completed.

Relative Defect Distribution

This report will take the previous report (Defect Distribution) and normalize the defect counts. For example, one application might be more involved than another and would probably have a higher raw defect count; when normalized over the number of functions or lines of code, however, the comparison shows a more accurate defect level.
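
A minimal sketch of that normalization, assuming hypothetical module names and counts:

    # Hedged sketch: normalize raw defect counts by size (defects per KLOC).
    # Module names and numbers are hypothetical.
    modules = {
        "billing": {"defects": 120, "kloc": 40.0},
        "reports": {"defects": 30, "kloc": 5.0},
    }
    for name, m in modules.items():
        print(f"{name}: {m['defects'] / m['kloc']:.1f} defects/KLOC")
    # billing has more raw defects (120 vs 30) but fewer per KLOC
    # (3.0 vs 6.0), which is exactly what this report is meant to expose.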

Testing Actions

This report can show many different things, including possible shortfalls in testing. Examples of data to show might be the number of Sev 1 defects, tests that are behind schedule, and other information that presents an accurate picture of the testing effort.

Final Test Report

Objectives:

  1. Define Scope of Testing

  2. Present Test Results

  3. Draw Conclusions/Recommendations

Two Goals:

  1. Provide Data for Determination of Readiness by Customer

  2. Provide Data for Long-Term Detection of Problem Functions

Test Metrics in software testing

Metrics are the most important responsibility of the test team. They allow a deeper understanding of the performance and behavior of the application, and fine-tuning of the application can be guided only by metrics. In a typical QA process, there are many metrics that provide this information.

The following can be regarded as the fundamental metrics:

  1. Functional or Test Coverage Metrics.

  2. Software Release Metrics.

  3. Software Maturity Metrics.

  4. Reliability Metrics.

  5. Mean Time To First Failure (MTTFF).

  6. Mean Time Between Failures (MTBF).

  7. Mean Time To Repair (MTTR).

Functional or Test Coverage Metric

This metric can be used to measure test coverage prior to software delivery. It provides a measure of the percentage of the software tested at any point during testing.

It is calculated as follows:

Function Test Coverage = FE/FT

Where,

FE is the number of test requirements that are covered by test cases that were executed against the software

FT is the total number of test requirements
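
A minimal sketch of this calculation (the counts are illustrative):

    # Hedged sketch: Function Test Coverage = FE / FT.
    def function_test_coverage(fe: int, ft: int) -> float:
        """fe: test requirements covered by executed test cases; ft: total."""
        if ft <= 0:
            raise ValueError("FT (total test requirements) must be positive")
        return fe / ft

    # Example: 45 of 60 test requirements exercised so far.
    print(f"Coverage: {function_test_coverage(45, 60):.0%}")  # Coverage: 75%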

Software Release Metrics

The software is ready for release when:

1. It has been tested with a test suite that provides 100% functional coverage, 80% branch coverage, and 100% procedure coverage.

2. There are no level 1 or 2 severity defects.

3. The defect finding rate is less than 40 new defects per 1000 hours of testing.

4. Stress testing, configuration testing, installation testing, naïve user testing, usability testing, and sanity testing have been completed.

Software Maturity Metric

The Software Maturity Index (SMI) can be used to determine the readiness for release of a software system. This index is especially useful for assessing release readiness when changes, additions, or deletions are made to existing software systems. It also provides a historical index of the impact of changes. It is calculated as follows:

SMI = (Mt - (Fa + Fc + Fd)) / Mt

Where

SMI is the Software Maturity Index value

Mt is the number of software functions/modules in the current release

Fc is the number of functions/modules that contain changes from the previous release

Fa is the number of functions/modules that contain additions to the previous release

Fd is the number of functions/modules that are deleted from the previous release
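
A small sketch of the SMI calculation using the parenthesised formula above (the inputs are illustrative):

    # Hedged sketch: SMI = (Mt - (Fa + Fc + Fd)) / Mt.
    def software_maturity_index(mt: int, fa: int, fc: int, fd: int) -> float:
        """mt: modules in current release; fa/fc/fd: added/changed/deleted."""
        return (mt - (fa + fc + fd)) / mt

    # Example: 200 modules; 10 added, 24 changed, 6 deleted since last release.
    print(f"SMI = {software_maturity_index(200, 10, 24, 6):.2f}")  # SMI = 0.80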

Reliability Metrics

Reliability is calculated as follows:

Reliability = 1 - Number of errors (actual or predicted)/Total number of lines of executable code

This reliability value is calculated for the number of errors during a specified time interval.
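
As a one-line sketch (the counts are illustrative):

    # Hedged sketch: Reliability = 1 - errors / executable LOC.
    errors, executable_loc = 12, 10_000  # illustrative counts
    print(f"Reliability = {1 - errors / executable_loc:.4f}")  # 0.9988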

Three other metrics can be calculated during extended testing or after the system is in production. They are:

MTTFF (Mean Time to First Failure)

MTTFF = The number of time intervals the system is operable until its first failure (functional failure only).

MTBF (Mean Time Between Failures)

MTBF = Sum of the time intervals the system is operable / Number of failures during that period

MTTR (Mean Time To Repair)

MTTR = Sum of the time intervals required to repair the system / Number of repairs during the time period
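
These three could be tallied from interval logs as in the following sketch (the interval data is hypothetical):

    # Hedged sketch: production reliability metrics from interval logs.
    uptimes_h = [120.0, 80.0, 200.0]  # operable intervals between failures
    repairs_h = [4.0, 2.0, 6.0]       # repair interval after each failure

    mttff = uptimes_h[0]                    # time to first failure
    mtbf = sum(uptimes_h) / len(uptimes_h)  # mean time between failures
    mttr = sum(repairs_h) / len(repairs_h)  # mean time to repair

    print(f"MTTFF={mttff}h  MTBF={mtbf:.1f}h  MTTR={mttr:.1f}h")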

Defect Classification in software testing

The defects can be classified as follows:

Critical: There is a functionality block. The application is not able to proceed any further.

Major: The application is not working as desired. There are variations in the functionality.

Minor: There is no failure reported due to the defect, but it certainly needs to be rectified.

Cosmetic: Defects in the User Interface or Navigation.

Suggestion: A feature that could be added for betterment.

Defect Priority

The priority level describes how quickly the defect should be resolved. Priority levels can be classified as follows (a small encoding sketch follows these definitions):

Immediate: Resolve the defect with immediate effect.

At the Earliest: Resolve the defect as early as possible; it takes second-level priority.

Normal: Resolve the defect in the normal course of work.

Later: The defect can be resolved at a later stage.
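
These classifications often end up encoded in a tracking tool; a minimal sketch, with labels mirroring the definitions above:

    # Hedged sketch: severity and priority as enumerations.
    from enum import Enum

    class Severity(Enum):
        CRITICAL = "functionality block; application cannot proceed"
        MAJOR = "not working as desired; variations in functionality"
        MINOR = "no failure, but needs rectification"
        COSMETIC = "user interface or navigation defect"
        SUGGESTION = "feature that could be added for betterment"

    class Priority(Enum):
        IMMEDIATE = 1        # resolve with immediate effect
        AT_THE_EARLIEST = 2
        NORMAL = 3
        LATER = 4            # can be resolved at a later stage

    print(Severity.MAJOR.name, "->", Priority.IMMEDIATE.name)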


Defect Tracking in Software Testing

Defects

The purpose of testing is to find defects. A defect is a variance from a desired product attribute. Two categories of defects are:

1. Variance from product specifications

The product built varies from the product specified. For example, the specifications may say that x is to be added to y to produce z. If the algorithm in the built product varies from that specification, it is considered defective.

2. Variance from customer/user expectation

This variance is something the user wanted in the built product but that was not specified to be included in it. The missing piece may be a specification or requirement, or the method by which the requirement was implemented may be unsatisfactory.

Defects are recorded for four major purposes:

  1. To correct the defect

  2. To report the status of the application

  3. To gather statistics used to develop defect expectations in future applications

  4. To improve the software development process

For example, a defect log could include (see the record sketch after this list):

  • Defect ID number

  • Descriptive defect name and type

  • Source of defect (test case or other source)

  • Defect severity

  • Defect priority

  • Defect status (e.g. open, fixed, closed, user error, design, and so on) – more robust tools provide a status history for the defect

  • Date and time tracking for either the most recent status change, or for each change in the status history

  • Detailed description, including the steps necessary to reproduce the defect

  • Component or program where defect was found

  • Screen prints, logs, etc. that will aid the developer in resolution process

  • Stage of origination

  • Person assigned to research and/or correct the defect
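
As a rough sketch of such a record (the field names are illustrative, not a prescribed schema):

    # Hedged sketch: one row of a defect log. Field names are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class DefectRecord:
        defect_id: str
        name: str
        source: str                # test case or other source
        severity: int              # e.g. 1 (highest) to 5
        priority: int
        status: str = "open"       # open, fixed, closed, user error, ...
        status_history: list = field(default_factory=list)
        description: str = ""      # include steps to reproduce
        component: str = ""
        assigned_to: str = ""

    bug = DefectRecord("D-101", "Total mis-computed", "TC-17",
                       severity=2, priority=1,
                       description="1. Open invoice  2. Add line item ...")
    print(bug.defect_id, bug.status)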

Severity Versus Priority

The severity of a defect should be assigned objectively by the test team based on pre-defined severity descriptions. For example, a “severity one” defect may be defined as one that causes data corruption, a system crash, security violations, etc. In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed.

The priority assigned to a defect is usually more subjective based upon input from users regarding which defects are most important to them, and therefore should be fixed first.

It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes.

A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

· Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary

· Bug identifier (number, ID, etc.)

· Current bug status (e.g., 'Released for Retest', 'New', etc.)

· The application name or identifier and version

· The function, module, feature, object, screen, etc. where the bug occurred

· Environment specifics, system, platform, relevant hardware specifics

· Test case name/number/identifier

· One-line bug description

· Full bug description

· Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

· Names and/or descriptions of files/data/messages/etc. used in the test

· File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

· Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

· Was the bug reproducible?

· Tester name

· Test date

· Bug reporting date

· Name of developer/group/organization the problem is assigned to

· Description of problem cause

· Description of fix

· Code section/file/module/class/method that was fixed

· Date of fix

· Application version that contains the fix

· Tester responsible for retest

· Retest date

· Retest results

· Regression testing requirements

· Tester responsible for regression tests

· Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
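
One way to picture that flow is a small status machine with a notification hook at each transition; the states and role mapping below are illustrative, not a prescribed workflow:

    # Hedged sketch: bug status transitions driving notifications.
    # Status names echo the examples above ('New', 'Released for Retest').
    TRANSITIONS = {
        "New": ["Assigned"],
        "Assigned": ["Released for Retest"],
        "Released for Retest": ["Closed", "Assigned"],  # failed retest loops
    }
    NOTIFY = {"Assigned": "developer", "Released for Retest": "tester",
              "Closed": "manager"}

    def move(bug_id: str, current: str, new: str) -> str:
        if new not in TRANSITIONS.get(current, []):
            raise ValueError(f"illegal transition {current} -> {new}")
        print(f"{bug_id}: {current} -> {new}; notify {NOTIFY[new]}")
        return new

    state = move("BUG-42", "New", "Assigned")
    state = move("BUG-42", state, "Released for Retest")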


Software Test Design

Like test analysis, test design is a relatively large piece of work. Unlike test analysis, however, the focus of test design is not to assimilate information created by others, but rather to implement procedures, techniques, and data sets that achieve the test’s objective(s).

The outputs of the test analysis phase are the foundation for test design. Each requirement or design construct has had at least one technique (a measurement, demonstration, or analysis) identified during test analysis that will validate or verify that requirement. The tester must now implement the intended technique.

Software test design, as a discipline, is an exercise in the prevention, detection, and elimination of bugs in software. Preventing bugs is the primary goal of software testing.

Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premier weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.


What a test plan should contain

A test plan states what the items to be tested are, at what level they will be tested, in what sequence they are to be tested, and how the test strategy will be applied to the testing of each item, and it describes the test environment.

A test plan should ideally be organisation-wide, applicable to all of an organisation's software developments.

The objective of each test plan is to provide a plan for verifying, by testing the software, that the software produced fulfils the functional or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this generally means the Functional Specification.

The first consideration when preparing the test plan is who the intended audience is – for example, the audience for a Unit Test Plan would be different, and thus the content would have to be adjusted accordingly.

You should begin the test plan as soon as possible. Generally, it is desirable to begin the master test plan at the same time as the Requirements documents and the Project Plan are being developed.

Test planning can (and should) have an impact on the Project Plan. Plans that are written early will have to be changed during the course of the development and testing, but updating them is important because it records the progress of the testing and helps planners become more proficient.


What to consider for the Test Plan:


1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary



The Process of software testing part three

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product.

The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

  1. Title
  2. Identification of software including version/release numbers
  3. Revision history of document including authors, dates, approvals
  4. Table of Contents
  5. Purpose of document, intended audience
  6. Objective of testing effort
  7. Software product overview
  8. Relevant related document list, such as requirements, design documents, other test plans, etc.
  9. Relevant standards or legal requirements
  10. Traceability requirements
  11. Relevant naming conventions and identifier conventions
  12. Overall software project organization and personnel/contact-info/responsibilities
  13. Test organization and personnel/contact-info/responsibilities
  14. Assumptions and dependencies
  15. Project risk analysis
  16. Testing priorities and focus
  17. Scope and limitations of testing
  18. Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
  19. Outline of data input equivalence classes, boundary value analysis, error classes
  20. Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
  21. Test environment setup and configuration issues
  22. Test data setup requirements
  23. Database setup requirements
  24. Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
  25. Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
  26. Test automation - justification and overview
  27. Test tools to be used, including versions, patches, etc.
  28. Test script/test code maintenance processes and version control
  29. Problem tracking and resolution - tools and processes
  30. Project test metrics to be used
  31. Reporting requirements and testing deliverables
  32. Software entrance and exit criteria
  33. Initial sanity testing period and criteria
  34. Test suspension and restart criteria
  35. Personnel allocation
  36. Personnel pre-training needs
  37. Test site/location
  38. Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
  39. Relevant proprietary, classified, security, and licensing issues
  40. Open issues

The Process of software testing part two

Test Spec/Outline - Why?

● Avoids omissions and tunnel vision. Better test coverage.

● Concisely communicates testing effort: what can be, should be, and will be tested.

● Makes testing visible and respected.

● Efficiency: avoids redundancy. Organizes similar test case candidates so no overlaps.

● Knowledge capture: build in coverage elements as testers experience them over time.

Test Cases - What?

● Derived from Test Spec, Functional Spec., and Detailed Design.

● Detailed to lowest level of complexity.

● Standard info: objective, setup steps, repro path, expected results.

● Test Case execution logged as pass or fail.

● Every failed Test Case must have an associated bug reported.

Test Cases - Why?

● Makes program failure obvious. How does the tester know whether a bug exists? (Expected result.)

● Best of breed: the test case has been refined, with the highest likelihood of finding an error (vs. ad hoc testing).

● Test coverage: how many cases have passed, failed, or not yet been executed (see the tally sketch after this list).
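
A coverage tally like this can be produced from the execution log; a minimal sketch, assuming a simple result-per-case mapping:

    # Hedged sketch: summarize test case execution for coverage reporting.
    from collections import Counter

    results = {"TC-01": "pass", "TC-02": "fail",
               "TC-03": "pass", "TC-04": "not executed"}

    tally = Counter(results.values())
    for status in ("pass", "fail", "not executed"):
        print(f"{status}: {tally[status]}/{len(results)}")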

Bug Report - What?

● Derived from planned Test Cases, or Adhoc Testing.

● Every bug must have an associated Test Case.

● Standard Info: Test Case Info., plus actual results, problem, comments, etc.

Bug Report - Why?

● Communicate bug for reproducibility, resolution, and regression.

● Track bug status (open, resolved, closed).

● Ensure bug is not forgotten, lost or ignored.

● Used to back-create a test case where none existed before.

Weekly Status Reports - What?

● Test Case coverage (pass, fail, not yet executed).

● Bug status summary (severity, priority, etc.)

● Task status (schedule).

● Issue list of closed and open action items.

● Risk list identifying potential problems.

● Other project metrics.

Weekly Status Report - Why?

● Communicate project status.

● Visibility for new and existing issues.

● Visibility for new and existing risks.

● Record of events.

Exit Reports - What?

● Test Release Report: Certification as to extent of coverage, and assessment of project’s readiness for release.

● Milestone Exit Report: After each phase of testing completed, this report lists test cases executed, pass/fail status, & bug find status.

Exit Reports - Why?

● Consensus that all predecessor tasks are completed and milestone can be exited.

● Act of organizing report forces critical analysis of all project components.

● Reality check if on schedule, within budget, and within acceptable quality tolerances.

● Visibility of project status to senior management.

