The purpose of testing is to find defects. A defect is a variance from a desired product attribute. Two categories of defects are:
1. Variance from product specifications
The product built varies from the product specified. For example, the specification may say that x is to be added to y to produce z. If the algorithm in the built product varies from that specification, it is considered defective.
2. Variance from customer/user expectation
This variance is something the user wanted that is not in the built product, but that also was not specified to be included. The missing piece may be a specification or requirement, or the method by which the requirement was implemented may be unsatisfactory.
Defects are recorded for four major purposes:
● To correct the defect
● To report the status of the application
● To gather statistics used to develop defect expectations in future applications
● To improve the software development process
For example, a defect log could include:
● Defect ID number
● Descriptive defect name and type
● Source of defect – test case or other source
● Defect status (e.g. open, fixed, closed, user error, design, and so on) – more robust tools provide a status history for the defect
● Date and time tracking for either the most recent status change, or for each change in the status history
● Detailed description, including the steps necessary to reproduce the defect
● Component or program where the defect was found
● Screen prints, logs, etc. that will aid the developer in the resolution process
● Stage of origination
● Person assigned to research and/or correct the defect
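The defect-log fields above can be sketched as a simple record type. This is a minimal illustration, not the schema of any particular tracking tool; the class and field names are assumptions chosen to mirror the list.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectRecord:
    """Minimal sketch of one defect-log entry (illustrative fields only)."""
    defect_id: int
    name: str                    # descriptive defect name and type
    source: str                  # test case or other source of the defect
    status: str = "open"
    description: str = ""        # steps necessary to reproduce the defect
    component: str = ""          # component or program where found
    stage_of_origin: str = ""    # e.g. requirements, design, coding
    assignee: str = ""           # person researching/correcting the defect
    attachments: list = field(default_factory=list)    # screen prints, logs
    status_history: list = field(default_factory=list)

    def set_status(self, new_status: str) -> None:
        # Record date/time for each change in the status history.
        self.status_history.append((datetime.now(), self.status, new_status))
        self.status = new_status

d = DefectRecord(42, "Total field not updated", source="TC-017",
                 description="1. Enter x and y  2. Press Compute  3. z is stale",
                 component="calc", assignee="dev-team")
d.set_status("fixed")
d.set_status("closed")
print(d.status, len(d.status_history))   # closed 2
```

Keeping the full status history, rather than only the latest status, matches what the list describes as the behavior of more robust tools.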
Severity Versus Priority
The severity of a defect should be assigned objectively by the test team based on pre-defined severity descriptions. For example, a “severity one” defect may be defined as one that causes data corruption, a system crash, a security violation, etc. In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed.
The priority assigned to a defect is usually more subjective, based upon input from users regarding which defects are most important to them and therefore should be fixed first.
It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid common disagreements with development teams about the criticality of a defect.
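The severity/priority distinction can be made concrete with a small sketch: pre-defined severity levels, a separate user-driven priority, and a fix queue ordered by priority first. The level names, numbers, and sample defects below are illustrative assumptions.

```python
from enum import IntEnum

class Severity(IntEnum):
    """Pre-defined severity levels, assigned objectively by the test team."""
    CRITICAL = 1   # e.g. data corruption, system crash, security violation
    HIGH = 2
    MEDIUM = 3
    LOW = 4

class Priority(IntEnum):
    """User-driven priority: which defects should be fixed first."""
    FIX_FIRST = 1
    FIX_SOON = 2
    FIX_LATER = 3

defects = [
    ("login crash",     Severity.CRITICAL, Priority.FIX_FIRST),
    ("typo on banner",  Severity.LOW,      Priority.FIX_LATER),
    ("report off by 1", Severity.HIGH,     Priority.FIX_SOON),
]

# Priority determines the fix order; severity breaks ties.
fix_queue = sorted(defects, key=lambda d: (d[2], d[1]))
print([name for name, _, _ in fix_queue])
# ['login crash', 'report off by 1', 'typo on banner']
```

Defining the `Severity` levels once, at the start of the project, is what lets the whole team apply them consistently.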
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes.
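The fix-and-retest workflow can be viewed as a small state machine over bug statuses. A minimal sketch, assuming status names drawn from the examples in the text ('New', 'Released for Retest', etc.); real tracking tools define their own statuses and transitions.

```python
# Allowed status transitions for one bug (illustrative, not a tool's schema).
TRANSITIONS = {
    "new":                 {"assigned"},
    "assigned":            {"fixed", "user error"},
    "fixed":               {"released for retest"},
    "released for retest": {"closed", "reopened"},   # retest passed / failed
    "reopened":            {"assigned"},             # fix didn't hold
    "closed":              set(),
    "user error":          set(),
}

def advance(status: str, new_status: str) -> str:
    """Move a bug to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status!r} -> {new_status!r}")
    return new_status

s = "new"
for step in ("assigned", "fixed", "released for retest", "closed"):
    s = advance(s, step)
print(s)   # closed
```

Encoding the workflow this way makes the re-test step mandatory: a fixed bug cannot be closed without passing through "released for retest" first.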
A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
· Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
· Bug identifier (number, ID, etc.)
· Current bug status (e.g., 'Released for Retest', 'New', etc.)
· The application name or identifier and version
· The function, module, feature, object, screen, etc. where the bug occurred
· Environment specifics, system, platform, relevant hardware specifics
· Test case name/number/identifier
· One-line bug description
· Full bug description
· Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
· Names and/or descriptions of file/data/messages/etc. used in test
· File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
· Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
· Was the bug reproducible?
· Tester name
· Test date
· Bug reporting date
· Name of developer/group/organization the problem is assigned to
· Description of problem cause
· Description of fix
· Code section/file/module/class/method that was fixed
· Date of fix
· Application version that contains the fix
· Tester responsible for retest
· Retest date
· Retest results
· Regression testing requirements
· Tester responsible for regression tests
· Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
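The notification requirement can be sketched as a simple routing table mapping status changes to the roles named above. The routing rules and role names here are illustrative assumptions, not the behavior of any specific tool.

```python
# Which roles to notify on each status change (illustrative routing).
ROUTES = {
    "new":                 ["developer"],   # bug found -> developer needs details
    "released for retest": ["tester"],      # tester needs to know retest is due
    "closed":              ["manager"],     # feeds reporting/summary needs
}

def notify(bug_id: int, new_status: str) -> list:
    """Return the notification messages triggered by a status change."""
    return [f"{role}: bug {bug_id} is now {new_status!r}"
            for role in ROUTES.get(new_status, [])]

print(notify(7, "new"))   # ["developer: bug 7 is now 'new'"]
```

A real tracking system would attach the full bug record (description, severity, reproduction steps) to the developer's notification, so the needed information travels with the alert.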