
Managing Risk in a Software Project

Plan how to manage the project’s risks: The Risk Management Plan documents how risks will be managed. It is a subset of the project plan and is written before the project begins.
Identify risks: One simple approach is to get representatives of all the affected groups in a room and hold a workshop. Circulate a provisional list beforehand to get people thinking. Get their ideas down onto large sheets of paper you can blu-tack to the walls, and circulate a revised list after the meeting.

Repeat the process halfway through the project to identify how many of the predicted risks have not occurred, and how many unforeseen ones have.

Risk Alerts are the triggers used to identify when a risk is imminent. Typical test-related triggers are:
  1. Reduction in the number of lines of code per bug found.
  2. Finding an unacceptably high number of priority-1 and priority-2 bugs in a build.
  3. Finding an unacceptably high number of bugs in a component.
  4. Late arrival of signed-off specifications for use as a baseline.
  5. Failure of performance tests to achieve targets.
  6. Growing code complexity.
  7. Growing code turmoil.
Monitoring such risks is easier when an alerting system is in place. The existence of a risk log allows the test team to identify priorities and provides a good basis for deciding the mix of tests to be planned.
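
Monitoring becomes easier still if the alert conditions are written down executably. Here is a minimal sketch of how a few of the triggers above could be checked automatically; the metric names and thresholds are illustrative assumptions, not taken from any particular tool.

```python
# A minimal sketch of automated risk alerts; metric names and
# thresholds are illustrative assumptions, not from a real tool.

def check_risk_alerts(metrics: dict) -> list[str]:
    """Return a list of triggered risk alerts for a build."""
    alerts = []

    # Trigger 1: fewer lines of code per bug found means rising defect density.
    loc_per_bug = metrics["lines_of_code"] / max(metrics["bugs_found"], 1)
    if loc_per_bug < 500:  # assumed threshold
        alerts.append(f"Defect density rising: {loc_per_bug:.0f} LOC per bug")

    # Triggers 2 and 3: too many priority-1/2 bugs in a build or component.
    if metrics["p1_p2_bugs"] > 10:  # assumed threshold
        alerts.append(f"{metrics['p1_p2_bugs']} priority-1/2 bugs in this build")

    # Trigger 5: performance tests failing to meet targets.
    if metrics["response_ms"] > metrics["target_ms"]:
        alerts.append("Performance target missed: "
                      f"{metrics['response_ms']}ms vs {metrics['target_ms']}ms")
    return alerts

build_metrics = {"lines_of_code": 40_000, "bugs_found": 120,
                 "p1_p2_bugs": 14, "response_ms": 950, "target_ms": 800}
for alert in check_risk_alerts(build_metrics):
    print("RISK ALERT:", alert)
```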

Risks can be grouped by source and by kind. A risk kind is, for example, that something doesn't work, works too late, too slowly, at the wrong time, or has unintended side effects. These groups are sensitive to risk drivers, in that a single driver can change a whole group of risks.
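
As a sketch, a risk log that records a source and a kind for each entry makes such grouping mechanical; the field names and sample entries below are hypothetical, not a prescribed format.

```python
from collections import defaultdict
from dataclasses import dataclass

# A minimal risk-log sketch; field names and sample entries are
# illustrative assumptions only.

@dataclass
class Risk:
    description: str
    source: str   # where the risk comes from, e.g. "development method"
    kind: str     # how it bites, e.g. "doesn't work", "too late", "too slow"

log = [
    Risk("Login rejects valid users", source="requirements", kind="doesn't work"),
    Risk("Search response exceeds 2s", source="architecture", kind="too slow"),
    Risk("Nightly batch misses its window", source="scheduling", kind="too late"),
]

# Grouping by kind lets one driver (e.g. a method change) be traced
# to the whole group of risks it affects.
by_kind = defaultdict(list)
for risk in log:
    by_kind[risk.kind].append(risk.description)
for kind, items in by_kind.items():
    print(kind, "->", items)
```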

The failure of one project to use an appropriate development method had a knock-on effect throughout the whole project: it was both a source of risks and a major risk driver. Here are some more drivers:
  1. Use of an inappropriate method or process (one that does not address the project's actual risks).
  2. Lack of customer involvement: Apart from the obvious need for a sufficient set of requirements, there is the need for feedback to users on (fragments of) the proposed solution.
  3. Dissimilarity to previous projects: If "we've never done anything as (big/complex/different) as this before" is an issue, then beware.
  4. Project complexity: This is relative to the experience of an organization. What might exhaust some organizations will be run-of-the-mill to others.
  5. Requirements volatility: Requirements will change; if such changes aren't allowed for, the project will soon deteriorate.

Risk Analysis in Software Testing: Part Two

For the sake of continuity, you may want to go through Risk Analysis Part One before reading this post.

Who performs the software risk analysis? Typically, everyone involved in the software development lifecycle: users, business analysts, developers, and software testers all take part in conducting it.

However, it is not always possible to have everyone's input, especially the users. In that case, the testers should conduct the software risk analysis as early as possible in the software development life cycle. Typically, risk analysis is done in the requirements stage of the software development life cycle.

Two indicators of risk have been proposed: the expected impact of failure and the likelihood of failure. Let's talk about these in turn.

Expected Impact Indicator

The software team should ask the question, "What would be the impact on the user if this feature or attribute failed to operate correctly?" Impact is usually expressed in terms of money or the cost of failure. For each requirement or feature, it is possible to assign a value in terms of the expected impact of the failure of that requirement or feature. Assign a value of high, medium, or low for each requirement as a measure of the expected impact.

Focus only on those features and attributes that directly impact the user, not on the testing effort itself. If every feature or requirement ends up ranked the same, limit the number of high rankings each user can assign. Let's look at the expected impact and likelihood of failure for a hypothetical Login system:

Expected Impact and Likelihood of Failure for the Login Functionality

The requirement that the "UserId shall be 4 characters" has a low expected impact of failure, because there is not much impact on a user if the userID is more or less than 4 characters. The same reasoning applies to the requirement that the "Password shall be 5 characters." However, the requirement that the "System shall validate each userID and password for uniqueness" has a high impact of failure, because otherwise there could be multiple users with the same userID and password. If the developer does not code for this, security is at risk.
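
Captured in a structure, the assignments discussed above might look like this; the dictionary layout is an illustrative assumption, while the values come from the discussion.

```python
# Expected impact of failure per requirement, from the discussion
# above: H = high, M = medium, L = low.
expected_impact = {
    "UserId shall be 4 characters": "L",
    "Password shall be 5 characters": "L",
    "System shall validate each userID and password for uniqueness": "H",
}
```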

Likelihood of Failure Indicator

As part of the risk analysis process, the software team should assign an indicator for the relative likelihood of failure of each requirement or feature. Assign H for a relatively high likelihood of failure, M for medium, and L for low.

When the software team assigns a value for each feature, they should be answering the question, "Based on our current knowledge of the system, what is the likelihood that this feature or attribute will fail or fail to operate correctly?" At this point, Craig and I differ in that he argues that complexity is a systemic characteristic and should be included as part of the likelihood indicator.

My argument is that complexity should be an indicator of its own, and that severity should also be considered. Four indicators provide more granularity and detail than the two typical ones. In Table 2, I have shown that if two different requirements receive the same prioritization, it is not possible to discern which requirement is riskier; with three or more indicators we are in a better position to evaluate risk.

Complexity Indicator

Something that is complex is intricate and complicated. The argument here is that the greater the complexity of a feature, the greater the risk: more interfaces mean more risk at each interface as well as in the overall system.


Thus, the analysis can be used for test planning: an excessively complex module will require a prohibitive number of test steps, and that number can be reduced to a practical size by breaking the module into smaller, less complex sub-modules. There are other measures of complexity that can be used for risk analysis.
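
As a sketch, one crude way to turn such a measure into an H/M/L indicator is to count the interfaces a feature touches; the cut-off values here are assumptions for illustration only.

```python
# Map a crude complexity measure (number of interfaces a feature
# touches) to an H/M/L indicator; cut-offs are illustrative only.

def complexity_indicator(num_interfaces: int) -> str:
    if num_interfaces >= 5:
        return "H"
    if num_interfaces >= 2:
        return "M"
    return "L"

print(complexity_indicator(6))  # H
print(complexity_indicator(3))  # M
print(complexity_indicator(1))  # L
```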


Severity Indicator

The severity of a failure indicates how much damage there will be to the user community, and implies that there will be some suffering on the part of the user if the failure is realized. This suffering could come in the form of money, emotional stress, poor health, or even death; there are well-known cases of software failures that have resulted in deaths.

Severity is different from expected impact in that expected impact does not consider the suffering imposed on the user but merely considers the effect of the failure. Therefore, I argue that the greater the severity, the higher the risk. Assign a value of H for high, M for medium, or L for low for each requirement based on its severity.

The Method of Risk Analysis

At this point, the software team should assign a number to each high, medium, or low value for likelihood, expected impact, complexity, and severity indicators. It is possible to use a range of 1 to 3 with 3 being the highest or 1 to 5 with 5 being the highest. If you use the 1 to 5 range, there will be more detail. To keep the technique simple, let's use a range of 1 to 3 with 3 for high, 2 for medium, and 1 for low.

Next, the values assigned to likelihood of failure, expected impact, complexity, and severity should be added together. If a value of 3 for high, 2 for medium, and 1 for low is used, then nine risk priority levels are possible (i.e., sums of 12, 11, 10, 9, 8, 7, 6, 5, and 4).
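
A minimal sketch of the whole scoring step, using the 1-to-3 range described above; the requirements and indicator values in the example are hypothetical.

```python
# Sum the four indicators per requirement: H=3, M=2, L=1.
SCORE = {"H": 3, "M": 2, "L": 1}

# (likelihood, impact, complexity, severity) per requirement;
# the requirements and values below are hypothetical examples.
requirements = {
    "Validate userID/password uniqueness": ("M", "H", "M", "H"),
    "UserId shall be 4 characters":        ("L", "L", "L", "L"),
    "Password shall be 5 characters":      ("L", "L", "L", "M"),
}

def risk_priority(indicators: tuple[str, str, str, str]) -> int:
    """Sum of the four indicator scores; ranges from 4 to 12."""
    return sum(SCORE[i] for i in indicators)

for req, indicators in requirements.items():
    print(risk_priority(indicators), req)
```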

Conclusion

Risk analysis should be done early in the software development lifecycle. While there are many indicators of risk, I propose that expected impact, likelihood of failure, complexity, and severity should all be considered as good indicators of risk.

Risk analysis allows you to prioritize those requirements that should be tested first. The process allows the test team to set expectations about what can be tested within the project deadline. The risk analysis method presented here is flexible and easy to adopt. Many different indicators can be used. It is also possible to use different rankings rather than one through three. The higher the scale, the more granular the analysis.


Risk Analysis in Software Testing: Part One

Risk management is tricky because the process involves subjective thinking on the part of individuals in the organization. Identification of risks is generally based on an individual's experience and knowledge of the system. Since experience and knowledge are unique to each individual, it is important to employ a wide range of individuals on the risk management team.

Risk management also involves an assessment of the risk tolerance level in the organization. Companies that are more tolerant of risk will be less likely to develop a risk management approach. However, in some industries like the medical industry, there is little tolerance for risk.

While risk management can be applied to any type of industry, Yamini discusses a software risk management technique: risk analysis.

What is Risk Analysis?

Risk analysis is part of an organization's overall risk management strategy. One definition of risk analysis is the "process of exploring risks on the list, determining, and documenting their relative importance." It is a method used to assess the probability of a bad event, and it can be carried out by businesses as part of disaster recovery planning as well as part of the software development lifecycle.

The analysis usually involves assessing the expected impact of a bad event such as a hurricane or tornado, together with an assessment of the likelihood of that event occurring.


Proposed methods of risk analysis use different indicators. Since no single method will fit every project perfectly, I suggest pooling expert insight to see whether one method, or a combination of several, works well for you.


The method adopted here modifies Rick Craig and Stefan Jaskiel's work in Systematic Software Testing, presenting a way to perform software risk analysis using indicators other than just "expected impact" and "likelihood of failure." Before we do a risk analysis, however, we must understand what is meant by the term "risk."

Definitions of Risk

Risk is the probability that a loss will occur, "a weighted pattern of possible outcomes and their associated consequences." It indicates "the probability that a software project will experience undesirable events, such as schedule delays, cost overruns, or outright cancellation. Risk is proportional to size and inversely proportional to skill and technology levels." Thus, the larger the project the greater the risk.
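
Taken literally, that last definition can be sketched as a rough proportionality (my reading of the quote, not a formula given by the source):

    Risk ∝ Size / (Skill × Technology)

That is, doubling the size of a project roughly doubles its risk, while raising the team's skill or technology level reduces it.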

These definitions indicate that risk involves possible outcomes and the consequences of those outcomes. Potential outcomes can be both negative and positive. Negative outcomes such as undesirable events can occur, and when they do, there will be a loss to someone. The loss can be in terms of money, lives, or damage to property.

Risk reduction strategies differ based on the level of maturity of the organization. The more mature the organization, the less likely it will be to take risks. Thus, the more mature the organization, the more likely it is for a software team in that organization to do risk analysis of software. This leads us to the justification for doing a risk analysis.

Why Perform a Risk Analysis?

In the medical industry, risk analysis is done for the following reasons:

  1. Risk analysis is required by law.
  2. Identification of device design problems prior to distribution eliminates costs associated with recalls.
  3. Risk analysis offers a measure of protection from product liability damage awards.
  4. Regulatory submission checklists (PMA and 510(k)) used by the FDA now call for the inclusion of risk analysis.
  5. It is the right thing to do.
Some of these reasons also apply to software risk analysis and disaster recovery planning, in that risk analysis offers a measure of protection from product liability damages. Also, it is cheaper to fix a software defect found during development than one found by a customer.

The risk analysis process "provides the foundation for the entire recovery planning effort." Similarly, in software development, risk analysis provides the foundation for the entire test planning effort. It should be included as an integral part of the test plan as a method to guide the test team in determining the order of testing.

The argument here is that testing reduces risks associated with software development and the software risk analysis allows us to prioritize those features and requirements with the highest risk. Testing high-risk items first reduces the overall risk of the software release significantly. Risk analysis also allows the test team to set expectations about what can be tested in the given amount of time.
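
A minimal sketch of that prioritization step; the risk scores below are hypothetical, and computing them is covered in part two above.

```python
# Order requirements for testing by descending risk score so the
# highest-risk items are tested first; scores are hypothetical.
scored = [
    ("Validate userID/password uniqueness", 10),
    ("Password shall be 5 characters", 5),
    ("UserId shall be 4 characters", 4),
]

test_order = sorted(scored, key=lambda item: item[1], reverse=True)
for rank, (requirement, score) in enumerate(test_order, start=1):
    print(f"{rank}. [{score}] {requirement}")
```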

