Risk Analysis in Software Testing Part Two

For the sake of continuity, consider going through Risk Analysis in Software Testing Part One before reading this post.

Who performs the software risk analysis? Typically, everyone involved in the software development life cycle: the users, business analysts, developers, and software testers all take part in conducting risk analysis.

However, it is not always possible to have everyone's input, especially the users. In that case, the testers should conduct the software risk analysis as early as possible in the software development life cycle. Typically, risk analysis is done in the requirements stage of the software development life cycle.

Two indicators have been proposed as measures of risk: the expected impact of failure and the likelihood of failure. Let's talk about each in turn.

Expected Impact Indicator

The software team should ask the question, "What would be the impact on the user if this feature or attribute failed to operate correctly?" Impact is usually expressed in terms of money or the cost of failure. For each requirement or feature, it is possible to assign a value in terms of the expected impact of the failure of that requirement or feature. Assign a value of high, medium, or low for each requirement as a measure of the expected impact.

Concentrate only on those features and attributes that directly impact the user, not on the testing effort. If you run into a situation where every feature or requirement is ranked the same, limit the number of values each participant can assign. Let's look at the expected impact and likelihood of failure for a hypothetical Login system:

Table 1: Expected Impact and Likelihood of Failure for the Login Functionality

The requirement that the "UserId shall be 4 characters" has a low expected impact of failure because there is little impact on the user if the userID turns out to be longer or shorter than 4 characters. The same reasoning applies to the requirement that the "Password shall be 5 characters." However, the requirement that the "System shall validate each userID and password for uniqueness" has a high expected impact of failure: if the developer does not code for this, there could be multiple users with the same userID and password, and security is at risk.
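
To make the bookkeeping concrete, here is a minimal sketch of how these ratings might be captured. The requirement wording comes from the example above, while the dictionary itself is hypothetical:

```python
# Hypothetical capture of the Login example: each requirement is paired
# with its expected-impact rating (H = high, M = medium, L = low).
expected_impact = {
    "UserId shall be 4 characters": "L",    # little impact if the length differs
    "Password shall be 5 characters": "L",  # same reasoning as the userID length
    "System shall validate each userID and password for uniqueness": "H",  # security at risk
}

for requirement, impact in expected_impact.items():
    print(f"{impact}  {requirement}")
```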

Likelihood of Failure Indicator

As part of the risk analysis process, the software team should assign an indicator for the relative likelihood of failure of each requirement or feature. Assign H for a relatively high likelihood of failure, M for medium, and L for low.

When the software team assigns a value for each feature, they should be answering the question, "Based on our current knowledge of the system, what is the likelihood that this feature or attribute will fail or fail to operate correctly?" At this point, Craig and I differ in that he argues that complexity is a systemic characteristic and should be included as part of the likelihood indicator.

My argument is that complexity should be an indicator in its own right, and that severity should also be considered. Four indicators provide more granularity and detail than the two typical indicators alone. In Table 2, I have shown that if the prioritization is the same for two different requirements, it is not possible to discern which requirement is riskier. With three or more indicators, we are in a better position to evaluate risk, as the sketch below illustrates.
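
To see why the extra indicators matter, consider a minimal sketch in which two invented requirements tie on expected impact and likelihood alone. The ratings are hypothetical, and the 1-to-3 scoring used here is described later in this post:

```python
# Tuple order: (expected impact, likelihood, complexity, severity).
# On impact and likelihood alone, A and B receive identical ratings: a tie.
SCORE = {"H": 3, "M": 2, "L": 1}

four_indicators = {
    "Requirement A": ("H", "M", "L", "M"),
    "Requirement B": ("H", "M", "H", "H"),
}

for name, ratings in four_indicators.items():
    total = sum(SCORE[r] for r in ratings)
    print(name, total)  # Requirement A -> 8, Requirement B -> 11
```

On impact and likelihood alone, both requirements score 5, but the complexity and severity ratings expose Requirement B as the riskier of the two.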

Complexity Indicator

Something that is complex is intricate and complicated. The argument here is that the greater the complexity of a feature, the greater the risk. More interfaces mean more risk at each individual interface as well as in the overall system.


Thus, the analysis can be used for test planning: an excessively complex module will require a prohibitive number of test steps, but that number can be reduced to a practical size by breaking the module into smaller, less complex sub-modules. Other measures of complexity can also be used for risk analysis.
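
One well-known measure is McCabe's cyclomatic complexity, which for a single-entry, single-exit module equals the number of binary decision points plus one. The sketch below applies it to two hypothetical Login-system modules; the module names, decision counts, and the traditional threshold of 10 are all illustrative:

```python
def cyclomatic_complexity(decision_points: int) -> int:
    """McCabe's cyclomatic complexity for a single-entry, single-exit
    module: the number of binary decision points plus one. It equals the
    number of linearly independent paths through the module, and so
    suggests how many test cases basis-path coverage will need."""
    return decision_points + 1

# Hypothetical modules with invented decision-point counts.
modules = {"validate_uniqueness": 14, "check_userid_length": 1}
for name, decisions in modules.items():
    cc = cyclomatic_complexity(decisions)
    advice = "consider splitting into sub-modules" if cc > 10 else "acceptable"
    print(f"{name}: complexity {cc} -> {advice}")
```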


Severity Indicator

Severity indicates how much damage a failure will do to the user community, and it implies that the user will suffer in some way if the failure is realized. This suffering could take the form of lost money, emotional stress, poor health, or even death. Consider, for example, the Therac-25 radiation therapy machine, a well-known case of a software failure that resulted in deaths.

Severity is different from expected impact in that expected impact does not consider the suffering imposed on the user but merely considers the effect of the failure. Therefore, I argue that the greater the severity, the higher the risk. Assign a value of H for high, M for medium, or L for low for each requirement based on its severity.

The Method of Risk Analysis

At this point, the software team should assign a number to each high, medium, or low value for likelihood, expected impact, complexity, and severity indicators. It is possible to use a range of 1 to 3 with 3 being the highest or 1 to 5 with 5 being the highest. If you use the 1 to 5 range, there will be more detail. To keep the technique simple, let's use a range of 1 to 3 with 3 for high, 2 for medium, and 1 for low.

Next, the values assigned to likelihood of failure, expected impact, complexity, and severity should be added together. If a value of 3 for high, 2 for medium, and 1 for low is used, then nine risk priority levels are possible (i.e., totals of 12, 11, 10, 9, 8, 7, 6, 5, or 4).
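
Putting the pieces together, here is a minimal sketch of the scoring step. It reuses the Login requirements from earlier, but the H/M/L ratings assigned below are hypothetical:

```python
SCORE = {"H": 3, "M": 2, "L": 1}

# Tuple order: (likelihood, expected impact, complexity, severity).
ratings = {
    "System shall validate each userID and password for uniqueness": ("M", "H", "H", "H"),
    "UserId shall be 4 characters": ("L", "L", "L", "L"),
    "Password shall be 5 characters": ("L", "L", "M", "L"),
}

def risk_priority(hml_values):
    """Sum the four indicator scores; totals fall between 4 and 12."""
    return sum(SCORE[v] for v in hml_values)

# Rank from most to least risky so the riskiest requirements are tested first.
for req, vals in sorted(ratings.items(), key=lambda kv: risk_priority(kv[1]), reverse=True):
    print(f"{risk_priority(vals):2}  {req}")
```

The uniqueness requirement totals 11 and goes to the top of the test schedule, while the two length requirements, at 5 and 4, can safely be tested last.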

Conclusion

Risk analysis should be done early in the software development life cycle. While there are many indicators of risk, I propose that expected impact, likelihood of failure, complexity, and severity should all be considered good indicators of risk.

Risk analysis allows you to prioritize the requirements that should be tested first, and the process allows the test team to set expectations about what can be tested within the project deadline. The risk analysis method presented here is flexible and easy to adopt. Many different indicators can be used, and it is also possible to use rankings other than 1 through 3. The wider the scale, the more granular the analysis.
