Testing Life Cycle part two

This article is a continuation of Testing Life Cycle Part One.

The following activities should be performed at each phase of the life cycle:

1. Analyze the structures produced at this phase for internal testability and adequacy.

2. Generate test sets based on the structure at this phase.

In addition, the following should be performed during design and programming:

1. Determine that the structures are consistent with structures produced during previous phases.

2. Refine or redefine test sets generated earlier.

Throughout the entire life cycle, neither development nor verification is a straight-line activity. Modifications or corrections to a structure at one phase will require modifications or re-verification of structures produced during previous phases.

Requirements

The verification activities that accompany the problem definition and requirements analysis phase of software development are extremely significant. The adequacy of the requirements must be thoroughly analyzed and initial test cases generated with the expected (correct) responses.

Developing scenarios of expected system use helps to determine the test data and anticipated results. These tests form the core of the final test set. Generating these tests and the expected behavior of the system clarifies the requirements and helps guarantee that they are testable.
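To make this concrete, here is a minimal, hypothetical C# sketch of a usage scenario captured as an executable test with its expected (correct) response. The Account class, its Withdraw method, and the amounts are invented for illustration and are not from the original article.

```csharp
using System;

class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    // Expected (correct) response: a withdrawal above the balance
    // is rejected and the balance stays unchanged.
    public bool Withdraw(decimal amount)
    {
        if (amount > Balance) return false;
        Balance -= amount;
        return true;
    }
}

class RequirementsScenario
{
    static void Main()
    {
        var account = new Account(100m);

        bool accepted = account.Withdraw(150m);

        if (accepted || account.Balance != 100m)
            throw new Exception("Requirement violated: over-limit withdrawal must be rejected.");

        Console.WriteLine("Scenario passed: over-limit withdrawal rejected, balance unchanged.");
    }
}
```

Tests like this, written during requirements analysis, become part of the core of the final test set described above.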

Vague requirements or requirements that are not testable leave the validity of the delivered product in doubt. Late discovery of requirements inadequacy can be very costly. A determination of the criticality of software quality attributes and the importance of validation should be made at this stage. Both product requirements and validation requirements should be established.

Design

Organization of the verification effort and test management activities should be closely integrated with preliminary design. The general testing strategy – including test methods and test evaluation criteria – is formulated, and a test plan is produced. If the project size or criticality warrants, an independent test team is organized. In addition, a test schedule with observable milestones is constructed. At this same time, the framework for quality assurance and test documentation should be established.

During detailed design, validation support tools should be acquired or developed and the test procedures themselves should be produced. Test data to exercise the functions introduced during the design process, as well as test cases based upon the structure of the system, should be generated. Thus, as the software development proceeds, a more effective set of test cases is built.

In addition to test organization and the generation of test cases, the design itself should be analyzed and examined for errors. Simulation can be used to verify properties of the system structures and subsystem interaction; the developers should verify the flow and logical structure of the system, while the test team should perform design inspections using design walkthroughs.

Missing cases, faulty logic, module interface mismatches, data structure inconsistencies, erroneous I/O assumptions, and user interface inadequacies are items of concern. The detailed design must prove to be internally coherent, complete, and consistent with the preliminary design and requirements.


The previous post covers C# programming on Microsoft .NET, and you can have a glance at it here.

C Sharp Complete Course Part One

This is the C# programming language course; C# is widely used on the .NET platform.

This first part of the course consists of fifteen days; the next part will be published soon.

All your comments are welcome.

Here is the course list.

VISUAL STUDIO INTRODUCTION

C SHARP INTRODUCTION

C SHARP OUTLOOK

DOT NET AND C SHARP

C SHARP APPLICATION STRUCTURE

OOPS INTRODUCTION

OOPS AND C SHARP

IDE AND C SHARP

INSTANTIATING OBJECTS IN C SHARP

CLASSES AND OBJECTS IN C SHARP

OPERATORS IN C SHARP

SWITCH AND ITERATION IN C SHARP

BRANCHING IN C SHARP

CONSTANTS AND STRING

STATIC AND INSTANCE MEMBERS IN DOT NET

If you are interested in ASP programming, you can check here.

The previous post on this blog covers the software testing life cycle.

Testing Life Cycle part one

Often, testing after coding is the only method used to determine the adequacy of the system. When testing is constrained to a single phase and confined to the later stages of development, severe consequences can develop.

It is not unusual to hear of testing consuming 50 percent of the development budget. All errors are costly, but the later in the life cycle an error is discovered, the more costly it is.

An error discovered in the latter part of the life cycle must be paid for four different times.

The first cost is developing the program erroneously, which may include writing the wrong specifications, coding the system incorrectly, and documenting the system improperly.

Second, the system must be tested to detect the error.

Third, the wrong specifications and coding must be removed and the proper specifications, coding, and documentation added.

Fourth, the system must be retested to determine that it is now correct.

If lower cost and higher quality systems are the goals of information services, verification must not be isolated to a single phase of the development process but rather incorporated into each phase.

One of the most prevalent and costly mistakes on systems development projects today is to defer the activity of detecting and correcting problems until late in the project. A major justification for an early verification activity is that many costly errors are made before coding begins.

Studies have shown that the majority of system errors occur in the design phase: approximately two-thirds of all detected system errors can be attributed to errors made during design. This means that almost two-thirds of the errors are specified and coded into programs before they can be detected.

The recommended testing process is presented in Table 1-5 as a life cycle chart showing the verification activities for each phase. The success of conducting verification throughout the development cycle depends upon the existence of clearly defined and stated products at each development stage.

The more formal and precise the statement of the development product, the more amenable it is to the analysis required to support verification. Many of the new system development methodologies encourage firm products even in the early development stages.

The recommended test process involves testing in every phase of the life cycle. During the requirements phase, the emphasis is upon validation to determine that the defined requirements meet the needs of the organization. During the design and program phases, the emphasis is on verification to ensure that the design and programs accomplish the defined requirements.

During the test and installation phases, the emphasis is on inspection to determine that the implemented system meets the system specification. During the maintenance phase, the system is retested to determine that the changes work and that the unchanged portions continue to work.


ASP.NET Programming Complete part one

This is the list of ASP.NET and related posts where you can learn the concepts completely, topic by topic.

Here you can find everything regarding ASP.NET, right from the basics to the most advanced concepts.

You can also have a look at complete sets of interview questions and answers on ASP.NET and the .NET Framework here.

Here are the ASP.NET concepts.

ASP.NET INTRODUCTION

BUILDING FORMS WITH WEB CONTROLS PART ONE

BUILDING FORMS WITH WEB CONTROLS PART TWO

TABLE CONTROL AND EVENTS IN ASP.NET

JDBC AND ODBC

WEB CONTROLS IN ASP.NET

CUSTOM CONTROLS IN ASP.NET

VALIDATING USER INPUTS

DEBUGGING ASP.NET PAGES

DEBUGGING PART TWO

DATA BINDING IN ASP.NET PART ONE

DATA BINDING IN ASP.NET PART TWO

DATA BINDING IN ASP.NET AND XML PART ONE

DATA BINDING IN ASP.NET AND XML PART TWO

WORKING WITH DATA GRIDS IN ASP

USING TEMPLATES IN ASP.NET

DATA GRID CONTROL IN ASP

SQL SERVER AND ASP.NET

STORED PROCEDURES IN SQL SERVER

ASP.NET CONFIGURATION

DEVELOPMENTS OF BUSINESS OBJECTS FOR ASP

ADO.NET INTRODUCTION

ADO.NET OBJECT MODEL

COMMUNICATION WITH DATA BASE SOURCES


FRAMEWORK AND ASP.NET QUESTIONS AND ANSWERS

This list of posts gives a large number of ASP.NET interview questions and answers.

The basic interview questions and answers are here.

Going through them one by one will give the correct idea, as they were framed in order.

Further comments are welcome.

Here is the list of posts where you can read all the questions and answers regarding ASP.NET.


ASP.NET INTERVIEW QUESTIONS AND ANSWERS PART ONE

ASP.NET INTERVIEW QUESTIONS AND ANSWERS PART TWO

ASP.NET INTERVIEW QUESTIONS AND ANSWERS PART THREE

ASP.NET INTERVIEW QUESTIONS AND ANSWERS PART FOUR

ASP.NET INTERVIEW QUESTIONS AND ANSWERS PART FIVE

ADO.NET INTERVIEW QUESTIONS AND ANSWERS PART ONE

ADO.NET INTERVIEW QUESTIONS AND ANSWERS PART TWO

You can also learn the .NET Framework concepts in detail, with questions and answers, in the following posts.

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART ONE

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART TWO

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART THREE

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART FOUR

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART FIVE

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART SIX

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART SEVEN

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART EIGHT

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART NINE

MICROSOFT DOT NET FRAMEWORK QUESTIONS AND ANSWERS PART TEN


Questions and Answers dot net interview part one

This is the list of interview questions and answers for Microsoft .NET technology.

This series includes questions and detailed answers on basic concepts of dot net like the functioning and architecture of the new technology.

Here is the list.

DOT NET BASIC CONCEPTS PART ONE

DOT NET BASIC CONCEPTS PART TWO

DOT NET BASIC CONCEPTS PART THREE

C SHARP BASICS

ADO.NET BASICS

ADO.NET BASICS PART TWO

ADO.NET BASICS PART THREE

ADO.NET QUESTIONS AND ANSWERS PART FOUR

ADO.NET QUESTIONS AND ANSWERS PART FIVE

ADO.NET PART SIX

DOT NET REAL TIME QUESTIONS

ASP.NET BASIC QUESTIONS PART ONE

ASP.NET INTERVIEW QUESTIONS PART TWO

FAQ'S ON DATA SET

IIS

EVENT HANDLING QUESTIONS AND ANSWERS

BOX CONTROLS

WINFORM APPLICATIONS

DOT NET REAL TIME QUESTIONS AND ANSWERS PART ONE

DOT NET REAL TIME QUESTIONS AND ANSWERS PART TWO

DOT NET REAL TIME QUESTIONS AND ANSWERS PART THREE

DOTNET COMPLETE COURSE PART TWO

DOTNET FUNDAMENTALS PART ONE

Are you new to Microsoft .NET and looking for the subject covered in a systematic order?

Here is the list of Microsoft .NET lessons, which can be learned one by one.

If you have already learned the technology and are looking for clarity on a specific topic, browse for that particular topic and find it.

DAY 1 MICROSOFT DOT NET FRAMEWORK

DAY 2 MICROSOFT DOT NET BASE CLASS LIBRARY

DAY 3 MICROSOFT DOT NET CLASSES AND STRUCTURES

DAY 4 METHODS IN FRAMEWORK

DAY 5 INPUT VALIDATIONS IN DOT NET PART ONE

DAY 6 INPUT VALIDATIONS IN DOT NET PART TWO

DAY 7 DATA TYPES IN DOT NET

DAY 8 DATA TYPES IN DOT NET PART TWO

DAY 9 IMPLEMENTING PROPERTIES IN DOT NET

DAY 10 DELEGATES AND EVENTS

DAY 11 OOPS INTRODUCTION

DAY 12 POLYMORPHISM

DAY 13 INHERITANCE AND POLYMORPHISM

DAY 14 DEBUGGING TOOLS IN DOT NET

DAY 15 DEBUG AND TRACE IN CLASSES

DAY 16 UNIT TEST PLAN

DAY 17 EXCEPTIONS IN VISUAL STUDIO

DAY 19 ADO.NET INTRODUCTION

DAY 20 DATA ACCESSING IN DOT NET

DAY 21 DATA BASE OBJECTS


Scope of Software Testing

The scope of testing is the extensiveness of the test process. A narrow scope may be limited to determining whether or not the software specifications were correctly implemented. The scope broadens as more responsibilities are assigned to software testers.

Among the broader scope of software testing are these responsibilities:

1. Software testing can compensate for the fact that the software development process may not identify the true needs of the user, by testing to determine whether the user's needs have been met regardless of the specifications.

2. Finding defects early in the software development process, when they can be corrected at significantly less cost than when detected later.

3. Removing defects of all types prior to the software going into production, when correction is significantly cheaper than during operation.

4. Identifying weaknesses in the software development process so that those processes can be improved, thus maturing the software development process. Mature processes produce software more effectively and efficiently.

In defining the scope of software testing, each IT organization must answer the question, "Why are we testing?"


Factors Affecting Software Testing

People Relationships:

The word “testing” conjures up a variety of meanings depending upon an individual’s frame of reference. Some people view testing as a method or process by which they add value to the development cycle; they can even enjoy the challenges and creativity of testing.

Other people feel that testing tries a person's patience, fairness, ambition, credibility, and capability. Testing can actually affect a person's mental and emotional health if you consider the office politics and interpersonal conflicts that are oftentimes present.

Some attitudes that have shaped a negative view of testing and testers are:

1. Testers hold up implementation.

2. Giving testers less time to test will reduce the chance that they will find defects.

3. Letting the testers find problems is an appropriate way to debug.

4. Defects found in production are the fault of the testers.

5. Testers do not need training; only programmers need training.

Although testing is a process, it is very much a dynamic one, in that the process will change somewhat with each application under test. Several variables affect the testing process, including: the development process itself, software risk, customer/user participation, the testing process, the tester's skill set, use of tools, testing budget and resource constraints, management support, and the morale and motivation of the testers. It is obvious that the people side of software testing has long been ignored in favor of the more process-related issues of test planning, test tools, defect tracking, and so on.

In Surviving the Top Ten Challenges of Software Testing, the challenges are identified as:

1. Training in testing

2. Relationship building with developers

3. Using tools

4. Getting managers to understand testing

5. Communicating with users about testing

6. Making the necessary time for testing

7. Testing "over the wall" software

8. Trying to hit a moving target

9. Fighting a lose-lose situation

10. Having to say "no"

Testers should perform a self-assessment to identify their own strengths and weaknesses as they relate to people-oriented skills. They should also learn how to improve the identified weaknesses, and build a master plan of action for future improvement.

Essential testing skills include test planning, using test tools (automated and manual), executing tests, managing defects, risk analysis, test measurement, designing a test environment, and designing effective test cases. Additionally, a solid vocabulary of testing is essential. A tester needs to understand what to test, who performs what type of test, when testing should be performed, how to actually perform the test, and when to stop testing.


Defect Reduction in Software Testing

Level 1 Ad Hoc

Ad hoc means unstructured, inconsistent levels of performance. At the ad hoc level, tasks are not performed the same way by different people or different groups. For example, one system development group may use part of the system development methodology, but improvise other parts; another group may select different parts of the same system development methodology to use, and decide not to perform tasks done by a previous group.

At this level, management manages people and jobs. Management will establish goals or objectives for individuals and teams, and manage to those objectives and goals with minimal concern about the means used to achieve the goals. This level is normally heavily schedule driven, and those that meet the schedules are rewarded.

Since there are no standards against which to measure deliverables, people's performance is often dependent upon their ability to convince management that the job they have done is excellent. This causes the environment to be very political. Both management and staff become more concerned with their personal agendas than with meeting their organization's mission.

The emphasis needed to move from Level 1 to Level 2 is discipline and control. The emphasis is on getting the work processes defined, training the people in the work processes, implementing sufficient controls to assure compliance to the work processes, and producing products that meet predefined standards.

Level 2 Control

There are two major objectives to be achieved at Level 2. The first is to instill discipline in the culture of the information organization so that, through infrastructure, training, and the leadership of management, individuals will want to follow defined processes. The second objective is to reduce variability in the processes by defining them to a level that permits relatively constant outputs. At this level, processes are defined with minimal regard to the skills needed to perform the process and with minimal regard to the impact on other processes.

At Level 2, the work processes are defined; management manages those processes, and uses validation and verification techniques to check compliance to work procedures and product standards. Having the results predefined through a set of standards enables management to measure people’s performance against meeting those standards. Education and training are an important component of Level 2, as is building an infrastructure that involves the entire staff in building and improving work processes.

The emphasis that needs to be put into place to move to Level 3 is defining and building the information group’s core competencies.

Level 3 Core Competency

At this level, an information organization defines its core competencies and then builds an organization that is capable of performing those core competencies effectively and efficiently.

The more common core competencies for an information services organization include system development, maintenance, testing, training, outsourcing, and operation. The information group must decide if it wants core competencies in fields such as communication, hardware and software selection, contracting, and so forth.

Once the core competencies are determined, then the processes defined at Level 2 must be reengineered to drive the core competencies. In addition, the tasks are analyzed to determine what skills are needed to perform those processes. Next, a staff must be retrained, recruited, motivated, and supported to perform those core competencies in an effective and efficient manner.

It is the integration of people and processes, coupled with managers who have people management skills, that is needed to maintain and improve those core competencies. Lots of mentoring occurs at this level, with the more experienced people building skills in the less experienced. It is also a level that is truly customer focused – both the information organization and the customer know the information group's core competencies.

The managerial emphasis that is needed to move to Level 4 is quantitative measurement. Measurement is only a practical initiative when the processes are stabilized and focused on achieving management’s desired results.

Level 4 Predictable


This level has two objectives. The first is to develop quantitative standards for the work processes based on the performance of the Level 3 stabilized processes. The second objective is to provide management the dashboards and skill sets needed to manage quantitatively. The result is predictable work processes. Knowing the normal performance of a work process, management can easily identify problems through variation from the quantitative standards and address them quickly to keep projects on schedule and budget.

This level of predictability is one that uses measurement to manage as opposed to using measurement to evaluate individual performance. At this level, management can become coaches to help people address their day-to-day challenges in performing work processes in a predictable manner.

Management recognizes that obstacles and problems are normal in professional activities, and through early identification and resolution, professional work processes can be as predictable as manufacturing work processes. The management emphasis that is needed to move to Level 5 is one of desiring to be world class. World-class means doing the best that is possible, given today’s technology.


Level 5 Innovative

At Level 5, the information organization wants to be a true leader in the industry. At this level, the organization is looking to measure itself against the industry through benchmarking, and then define innovative ways to achieve higher levels of performance. Innovative approaches can be achieved through benchmarking other industries, applying new technology in an innovative way, reengineering the way work is done, and by constantly studying the literature and using experts to identify potential innovations. This level is one in which continuous learning occurs, both in individuals and the organization.




Defects in Software Products

Software design defects that most commonly cause bad decisions by automated decision making applications include:

1. Designing software with incomplete or erroneous decision-making criteria. Actions have been incorrect because the decision-making logic omitted factors that should have been included. In other cases, decision-making criteria included in the software were inappropriate, either at the time of design or later because of changed circumstances.

2. Failing to program the software as intended by the customer (user) or designer, resulting in logic errors often referred to as programming errors.

3. Omitting needed edit checks for determining the completeness of input data. Critical data elements have been left blank on many input documents, and because no checks were included, the applications processed the transactions with incomplete data.


Data Defects

Input data is frequently a problem. Since much of this data is an integral part of the decision-making process, its poor quality can adversely affect the computer-directed actions. Common problems are:

1. Incomplete data used by automated decision-making applications. Some input documents prepared by people omitted entries in data elements that were critical to the application but were processed anyway. The documents were not rejected even though incomplete data was being used. In other instances, data needed by the application that should have become part of IT files was not put into the system.

2. Incorrect data used in automated decision-making application processing. People have often unintentionally introduced incorrect data into the IT system.

3. Obsolete data used in automated decision-making application processing. Data in the IT files became obsolete due to new circumstances. The new data may have been available but was not put into the computer.

Finding Defects

All testing focuses on discovering and eliminating defects or variances from what is expected.

Testers need to identify these two types of defects:

1. Variance from Specifications – A defect from the perspective of the builder of the product.

2. Variance from What is Desired – A defect from a user (or customer) perspective.

Typical software system defects include:

1. IT improperly interprets requirements – IT staff misinterprets what the user wants, but correctly implements what the IT people believe is wanted.

2. Users specify the wrong requirements – The specifications given to IT are erroneous.

3. Requirements are incorrectly recorded – IT fails to record the specifications properly.

4. Design specifications are incorrect – The application system design does not achieve the system requirements, but the design as specified is implemented correctly.

5. Program specifications are incorrect – The design specifications are incorrectly interpreted, making the program specifications inaccurate; however, it is possible to properly code the program to achieve the specifications.

6. Errors in program coding – The program is not coded according to the program specifications.

7. Data entry errors – Data entry staff incorrectly enters information into the system.

8. Testing errors – Tests either falsely detect an error or fail to detect one.

9. Mistakes in error correction – The implementation team makes errors in implementing solutions.

10. The corrected condition causes another defect – In the process of correcting a defect, the correction process itself injects additional defects into the application system.


India Independence Day

Let's get together and remind our one billion countrymen and women of the sacrifices that went into creating what we today call OUR INDIA - one of the fastest growing nations in the world and the superpower of tomorrow.

Millions of our forefathers fought for India's freedom - many whose names no one remembers, millions of unsung heroes who gave their lives passionately to protect the soil you stand on today! This post is dedicated to those great Indians, whose blood still flows in you and me. It's because of these heroes that India sees a new dawn, an era of prosperity.

It's now time we let the world know that India has arrived. Let's tell everyone around us how much we love India. Let's go that extra mile and do that something special for India.

Jai Hind!
Saare Jahan Se Achha, Hindustan Hamara.



Thanks for these inspiring words from people'sforever.org.

What do you say, dear fellow Indians?

Stable Process Software Testing Part Two

This post is a continuation of Stable Process Software Testing Part One.

For eliminating special causes of variation:

1. Work to get very timely data so that special causes are signaled quickly; use early warning indicators throughout your operation.

2. Immediately search for the cause when the control chart gives a signal that a special cause has occurred. Find out what was different on that occasion from other occasions.

3. Do not make fundamental changes in that process.

4. Instead, seek ways to change some higher-level systems to prevent that special cause from recurring.

Common causes of variation are typically due to a large number of small random sources of variation. The sum of these sources of variation determines the magnitude of the process's inherent variation due to common causes; the process's control limits and current process capability can then be determined. The figure illustrates an out-of-control process.



Common causes of variation:

1. Process inputs and conditions that regularly contribute to the variability of process outputs.

2. Common causes contribute to output variability because they themselves vary.

3. Each common cause typically contributes a small portion to the total variation in process outputs.

4. The aggregate variability due to common causes has a "nonsystematic," random-looking appearance.

5. Because common causes are "regular contributors," the "process" or "system" variability is defined in terms of them.


For reducing common causes of variation:

1. Talk to lots of people, including local employees, other managers, and staff from various functions.

2. Improve measurement processes if measuring contributes too much to the observed variation.

3. Identify and rank categories of problems by Pareto analysis (a ranking from high to low of any occurrences by frequency); see the sketch after this list.

4. Stratify and disaggregate your observations to compare performance of sub-processes.

5. Investigate cause-and-effect relations. Run experiments (one factor and multifactor).
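As a minimal sketch of the Pareto analysis mentioned in item 3, the following C# fragment ranks defect categories by frequency and prints each category's cumulative share. The category names and counts are invented for illustration.

```csharp
// A minimal Pareto-analysis sketch: rank defect categories from high
// to low by frequency. All category names here are invented.
using System;
using System.Linq;

class ParetoSketch
{
    static void Main()
    {
        // One entry per observed defect, tagged with its category.
        string[] defects =
        {
            "interface", "logic", "interface", "data", "interface",
            "logic", "interface", "documentation", "data", "interface"
        };

        var ranked = defects
            .GroupBy(category => category)
            .Select(g => new { Category = g.Key, Count = g.Count() })
            .OrderByDescending(x => x.Count);

        double cumulative = 0;
        foreach (var item in ranked)
        {
            cumulative += 100.0 * item.Count / defects.Length;
            Console.WriteLine($"{item.Category,-14} {item.Count,3}   cumulative {cumulative,5:F1}%");
        }
    }
}
```

The categories at the top of the ranking are the natural first targets when working to reduce common-cause variation.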


C Sharp Edge

• The ability to build generic types and generic members. Using generics, you are able to build very efficient and type-safe code that defines numerous "placeholders" specified at the time you interact with the generic item.

• Support for anonymous methods, which allow you to supply an inline function anywhere a delegate type is required.

• Numerous simplifications to the delegate/event model, including covariance, contravariance, and method group conversion.

• The ability to define a single type across multiple code files (or, if necessary, as an in-memory representation) using the partial keyword.

• Support for strongly typed queries (a la LINQ, or Language Integrated Query) used to interact with various forms of data.

• Support for anonymous types, which allow you to model the "shape" of a type rather than its behavior.

• The ability to extend the functionality of an existing type using extension methods.

• Inclusion of a lambda operator (=>), which further simplifies working with .NET delegate types.

• A new object initialization syntax, which allows you to set property values at the time of object creation.
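The following short sketch exercises several of the features listed above: a generic collection, the object initializer syntax, the lambda operator (=>) in a LINQ query, an anonymous type, and an extension method. All type and member names are invented for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class StringExtensions
{
    // Extension method: adds Shout() to the existing string type.
    public static string Shout(this string s) => s.ToUpper() + "!";
}

class Album
{
    public string Title { get; set; }
    public int Year { get; set; }
}

class Program
{
    static void Main()
    {
        // Generic collection plus object initializer syntax.
        var albums = new List<Album>
        {
            new Album { Title = "First", Year = 1999 },
            new Album { Title = "Second", Year = 2004 }
        };

        // Lambda operator (=>) in a LINQ query, projecting into an
        // anonymous type that models shape rather than behavior.
        var recent = albums
            .Where(a => a.Year > 2000)
            .Select(a => new { a.Title, Age = DateTime.Now.Year - a.Year });

        foreach (var r in recent)
            Console.WriteLine(r.Title.Shout() + " is " + r.Age + " years old");
    }
}
```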


XML INTRODUCTION DAY 5

1. XML shall be straightforwardly usable over the Internet.

This does not mean that XML should only be used over the Internet, but rather that it should be lightweight and easily usable over the Internet.

2. XML shall support a wide variety of applications.

The idea here is that XML should not be application specific. It can be used over the Internet or in a traditional client/server application. There is no specific technology behind XML, so any technology should be able to use it.

3. It shall be easy to write programs that process XML documents.

Many technologies come and go, unable to gain wide acceptance for various reasons. A major barrier to wide acceptance is a high level of difficulty or complexity.

The designers of XML wanted to ensure that it would gain rapid acceptance by making it easy for programmers to write XML parsers.

4. XML documents should be human-legible and reasonably clear.

Because XML is text-based and follows a strict but simple formatting methodology, it is extremely easy for a human to get a true sense of what a document means. XML is designed to describe the structure of its contents.

5. XML documents shall be easy to create.

XML documents can be created in a simple text editor. Now that's easy! There are other XML guidelines, but since this is only an introduction to XML, these will do for now. The important thing to remember is that XML is simply a file format that can be used by two or more entities to exchange information.

XML documents are hierarchical: they have a single (root) element, which may contain other elements, which may in turn contain other elements, and so on. Documents typically look like a tree structure with branches growing out from the center and finally terminating at some point with content. Elements are often described as having parent and child relationships, in which the parent contains the child element.


XML documents must be properly structured and follow strict syntax rules in order to work correctly. If a document is lacking in either of these areas, the document can't be parsed.

There are two types of structures in every XML document: logical and physical. The logical structure is the framework for the document and the physical structure is the actual data.

An XML document may consist of three logical parts: a prolog (optional), a document element, and an epilog (optional). The prolog is used to instruct the parser how to interpret the document element. The purpose of the epilog is to provide information pertaining to the preceding data.
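As a small sketch of the structure described above, this C# fragment uses LINQ to XML (System.Xml.Linq) to build a document with an XML declaration in the prolog, a single root element, and nested children. The element names are invented for illustration.

```csharp
using System;
using System.Xml.Linq;

class XmlSketch
{
    static void Main()
    {
        var doc = new XDocument(
            new XDeclaration("1.0", "utf-8", "yes"),   // prolog
            new XElement("bands",                       // document (root) element
                new XElement("band",
                    new XElement("name", "Example Band"),
                    new XElement("albums",
                        new XElement("album", "First Album"),
                        new XElement("album", "Second Album")))));

        // XDocument.ToString() omits the declaration, so print it separately.
        Console.WriteLine(doc.Declaration + Environment.NewLine + doc);
    }
}
```

The printed output shows the declaration followed by the nested bands/band/albums tree, with each child element properly closed inside its parent.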


Stable Process Software Testing Part One

The amount of variation in a process is quantified with summary statistics; typically, the standard deviation is used. A process is defined as stable if its parameters (i.e., mean and standard deviation) remain constant over time; it is then said to be in a state of statistical control. The figure illustrates a stable process. Such a process is predictable, i.e., we can predict, within known limits and with a stated degree of belief, future process values.

Accepted practice uses a prediction interval extending three standard deviations on either side of the population mean (µ ± 3σ) in establishing the control limits.
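A minimal C# sketch of these control limits, using a handful of invented observations: it computes the mean and standard deviation, derives the µ ± 3σ limits, and flags any point that falls outside them as a special-cause signal.

```csharp
using System;
using System.Linq;

class ControlLimits
{
    static void Main()
    {
        // Invented sample values; a real control chart would use
        // rational subgroups collected over time.
        double[] observations = { 9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3 };

        double mean = observations.Average();
        double variance = observations.Sum(x => (x - mean) * (x - mean)) / observations.Length;
        double sigma = Math.Sqrt(variance);

        double ucl = mean + 3 * sigma;   // upper control limit
        double lcl = mean - 3 * sigma;   // lower control limit

        Console.WriteLine($"mean = {mean:F3}, sigma = {sigma:F3}");
        Console.WriteLine($"UCL = {ucl:F3}, LCL = {lcl:F3}");

        // A point outside the limits signals a special cause.
        foreach (double x in observations)
            if (x > ucl || x < lcl)
                Console.WriteLine($"special-cause signal at {x}");
    }
}
```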

Continuous process improvement through the use of quantitative methods and employee involvement sets quality management apart from other attempts to improve productivity.

Continuous process improvement is accomplished by activating teams and providing them with quantitative methods such as SPC techniques and supporting them as they apply these tools. We will further discuss the concept of variation, common and special causes of variation, and QAI’s Continuous Improvement Strategy.

The natural change occurring in organizational life moves systems and processes towards increasing variation. Statistical methods help us collect and present data in ways that facilitate the evaluation of current theories and the formation of new theories. These tools are the only methods available for quantifying variation. Since the key to quality is process consistency, variation (the lack of consistency) must be understood before any process can be improved.

Statistical methods are the only way to objectively measure variability. There is no other way!

Variation is present in all processes.

The cumulative effect of sources of variation in a production process is shown in the table.

One of the challenges in implementing quality management is to get those working in the process thinking in terms of sources of variation. How much of the observed variation can be attributed to measurements, material, machines, methods, people and the environment?

Consistency in all the processes from conception through delivery of a product or service is the cornerstone of quality. Paradoxically, the route to quality is not just the application of SPC and the resulting control charts. Managers must change the way they manage. They must use statistical methods in making improvements to management processes as well as all other processes in the organization.

Special causes of variation are not typically present in the process. They occur because of special or unique circumstances. If special causes of variation exist, the process is unstable or unpredictable. Special causes must be eliminated to bring a process into a state of statistical control. A state of statistical control is established when all special causes of variation have been eliminated.

SUMMARY:

Special causes are process inputs and conditions that sporadically contribute to the variability of process outputs.

1. Special causes contribute to output variability because they themselves vary.

2. Each special cause may contribute a "small" or "large" amount to the total variation in process outputs.

3. The variability due to one or more special causes can be identified by the use of control charts.

4. Because special causes are "sporadic contributors," due to some specific circumstances, the "process" or "system" variability is defined without them.


Software Quality Factors

In defining the scope of testing, the risk factors become the basis or objective of testing. The objectives for many tests are associated with testing software quality factors. The software quality factors are attributes of the software that, if they are wanted and not present, pose a risk to the success of the software, and thus constitute a business risk.

For example, if the software is not easy to use, the resulting processing may be incorrect. The definition of the software quality factors and determining their priority enables the test process to be logically constructed.

When software quality factors are considered in the development of the test strategy, the results of testing will successfully meet your objectives.

The primary purpose of applying software quality factors in a software development program is to improve the quality of the software product. Rather than simply measuring, the concepts are based on achieving a positive influence on the product, to improve its development.

Identify Important Software Quality Factors

The accompanying figure illustrates the diagram of software quality factors.

The basic tool used to identify the important software quality factors is the Software Quality Factor Survey form. The formal definitions of each of the eleven software quality factors are provided on that form.


Creating data base for dot net Day 4

The first step in building a database with SQL Server is to actually create the database. That's right: SQL Server is a piece of software that runs on a computer, or server. Once the SQL Server software is installed, you can create a database (or databases) that is then managed by that SQL Server software.

Many people refer to SQL Server as a database, which it is, sort of. SQL Server is actually an application, a Relational Database Management System (RDBMS), which can contain multiple databases.

Here we are going to create the Music database. You'll start by creating the database using Enterprise Manager, performing the following steps:

1. Expand the SQL Server Group item, if it isn’t already expanded, in the Enterprise Manager tree. Once expanded you should see a list of SQL Servers that are registered with Enterprise Manager.

2. Right-click the SQL Server in which you want to create the Music database.

3. Select New Database.

4. You see the Database Properties dialog box, shown in Figure 4-1. On the General tab, enter Music in the Name field. The Database Properties dialog box allows you to control other features of your database such as file growth, maximum database size, transaction log files, and so on. For the sake of brevity, accept the defaults.
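As an alternative to the Enterprise Manager steps above, the same database can be created in code. This is a sketch only; the connection string is an assumption and must be adapted to your own SQL Server instance.

```csharp
using System.Data.SqlClient;

class CreateMusicDatabase
{
    static void Main()
    {
        // Connect to the server's master database to issue the CREATE.
        // The server name and authentication mode are assumptions.
        string connectionString =
            "Server=localhost;Database=master;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("CREATE DATABASE Music", connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```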



Security Dot Net Data Base Access Day 3


Probably the most overlooked aspect of database design is security, when it should be a major consideration. By not securing your database and thereby your data, you are asking for trouble.

Not all data is intended for everyone’s eyes, and not everyone should have the ability to manipulate your data and definitely not your database’s structure. The majority of your database users will only need and should only be granted read (or select) access.

When designing your database, you should establish a list of policies and users for your database.

A database user is anyone who needs access to any part of your database. The highest level of user is the database administrator who will have access to all database objects and data.

Other users will only have access to certain objects and data. The average end user will only have access to certain objects, but should never have the ability to alter your database structure.

It never ceases to amaze us how many organizations have one “global” user that has complete control of the database. This is typically a bad scenario, not because people are intentionally malicious, but because they are people and no one is perfect.

The impact of a simple mistake can take hours and even days to reverse, if reversal is possible at all.

Policies are basically rules that define which actions a user can perform on your database.

Most RDBMSs enable you to assign a separate set of policies, or rights, for each object in your database. User rights generally fall into one of six different categories:

SELECT enables the user to view data.

INSERT enables the user to create new data.

UPDATE enables the user to modify existing data.

DELETE enables the user to delete data.

EXECUTE enables the user to execute a stored procedure.

ALTER enables the user to alter database structure.

Each user in a database should have a unique user name and password combination. This enables your RDBMS to enforce the security policies you have established for each user.
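A brief sketch of assigning one of these rights from C#, assuming SQL Server and a hypothetical read-only user named report_user; the table and user names are illustrative only.

```csharp
using System.Data.SqlClient;

class GrantRights
{
    static void Main()
    {
        // Assumed connection string; adapt to your own server.
        string connectionString =
            "Server=localhost;Database=Music;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // The average end user gets SELECT only; with no ALTER
            // right, the database structure stays out of reach.
            string sql = "GRANT SELECT ON dbo.t_bands TO report_user;";

            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```

Run as a database administrator, this gives report_user read access to the table while withholding INSERT, UPDATE, DELETE, EXECUTE, and ALTER.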


COST OF SOFTWARE QUALITY

When calculating the total costs associated with the development of a new application or system, three cost components must be considered. The Cost of Quality, as seen in the figure, is all the costs that occur beyond the cost of producing the product "right the first time." Cost of Quality is a term used to quantify the total cost of prevention, appraisal, and failure associated with the production of software.

The Cost of Quality includes the additional costs associated with assuring that the product delivered meets the quality goals established for the product. This cost component is called the Cost of Quality, and includes all costs associated with the prevention, identification, and correction of product defects.

The three categories of costs associated with producing quality products are:

Prevention Costs

Money required to prevent errors and to do the job right the first time. These normally require up-front costs for benefits that will be derived months or even years later. This category includes money spent on establishing methods and procedures, training workers, acquiring tools, and planning for quality. Prevention money is all spent before the product is actually built.

Appraisal Costs

Money spent to review completed products against requirements. Appraisal includes the cost of inspections, testing, and reviews. This money is spent after the product is built but before it is shipped to the user or moved into production.

Failure Costs

The costs associated with defective products that have been delivered to the user or moved into production. Some failure costs involve repairing products to make them meet requirements.

Others are costs generated by failures such as the cost of operating faulty products, damage incurred by using them, and the costs associated with operating a Help Desk.
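A minimal sketch rolling the three categories up into a total Cost of Quality; all of the amounts are invented for illustration.

```csharp
using System;

class CostOfQuality
{
    static void Main()
    {
        // Hypothetical annual figures for one project.
        decimal prevention = 40000m;   // methods, training, tools, planning
        decimal appraisal  = 60000m;   // inspections, testing, reviews
        decimal failure    = 150000m;  // rework, faulty operation, help desk

        decimal costOfQuality = prevention + appraisal + failure;

        Console.WriteLine("Cost of Quality: {0:C}", costOfQuality);
        Console.WriteLine("Failure share:   {0:P0}", failure / costOfQuality);
    }
}
```

With figures like these, the failure category dominates, which is why the text argues for shifting spending toward prevention.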

The Cost of Quality will vary from one organization to the next. The majority of costs associated with the Cost of Quality are associated with the identification and correction of defects. To minimize production costs, the project team must focus on defect prevention. The goal is to optimize the production process to the extent that rework is eliminated and inspection is built into the production process.

The IT quality assurance group must identify the costs within these three categories, quantify them, and then develop programs to minimize the totality of these three costs. Applying the concepts of continuous testing to the systems development process can reduce the cost of quality.

QUALITY ASSURANCE AND CONTROL

Quality Assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs. Quality assurance is a staff function, responsible for implementing the quality policy defined through the development and continuous improvement of software development processes.

It is an activity that establishes and evaluates the processes that produce products. If there is no need for a process, there is no role for quality assurance. For example, quality assurance activities in an IT environment would determine the need for, acquire, or help install:

System development methodologies

Estimation processes

System maintenance processes

Requirements definition processes

Testing processes and standards

Once installed, quality assurance would measure these processes to identify weaknesses, and then correct those weaknesses to continually improve the process.


Quality Control

Quality control activities focus on identifying defects in the actual products produced. These activities begin at the start of the software development process with reviews of requirements, and continue until all application testing is complete.

It is possible to have quality control without quality assurance. For example, a test team may be in place to conduct system testing at the end of development, regardless of whether that system is produced using a software development methodology.

Both quality assurance and quality control are separate and distinct from the internal audit function. Internal auditing is an independent appraisal activity within an organization for the review of operations, and is a service to management. It is a managerial control that works by measuring and evaluating the effectiveness of other controls.

The following statements help differentiate quality control from quality assurance:

Quality control relates to a specific product or service.

Quality control verifies whether specific attribute(s) are in, or are not in, a specific product or service.

Quality control identifies defects for the primary purpose of correcting defects.

Quality control is the responsibility of the team/worker.

Quality control is concerned with a specific product.


ASP.NET DATA NORMALIZATION day two

First Normal Form (FNF): This rule states that a column cannot contain multiple values. If you further inspect t_bands for FNF compliance, you should come to the conclusion that the albums and members fields, band_albums and band_members, should be broken down into smaller, discrete elements. The band_members and band_albums columns are currently defined such that if a band has multiple members or has released multiple albums, the band_members and band_albums columns will contain multiple values.

Second Normal Form (SNF): This rule states that every non-key column must depend on the entire primary key, not just part of it. Because you are using the single column band_id as your primary key, you are in good shape with respect to SNF.

Third Normal Form (TNF): This rule is very similar to the SNF rule and states that all non-key columns must not depend on any other non-key columns. A table must also comply with SNF to be in TNF. OK, you pass this test too!


There are three other normalization rules that aren’t covered here. Generally, if your tables are in Third Normal Form, they probably conform to the other rules.

To fully optimize your tables, you should take some additional measures. It’s a good idea to break your t_bands table into several tables and link them to t_bands via foreign keys.

Also, you should create a t_music_types table that holds all the possible music types. The t_bands table should have a foreign key to the primary key of the t_music_types table.

This is generally good practice for two reasons: (1) it ensures that your band’s music type falls into the music type domain and (2) it is easier to maintain. For example, if you change your mind and want to refer to “R&B” as “Rhythm & Blues,” you won’t have to change every instance of “R&B” in the band_music_type_title column—you only need to change the music type title in the t_music_types table. You could also do the same thing for the band_record_company_title and contact_business_state fields.

At this point, your database contains three tables: (1) t_bands, (2) t_music_types, and (3) t_record_companies. Figure 3-2 shows a diagram of our new database design:

In the diagram, t_bands is linked to t_music_types via a foreign key to music_type_id and linked to t_record_companies via a foreign key to record_company_id.

This new relationship between the tables is called one-to-many. In a one-to-many relationship, each entry in the referenced table (here, a music type or record company) may be referenced by one or many bands.

You now have three tables and have met your current requirements. However, what about bands and albums? Currently, you are storing all of a band's albums and members in single columns, band_albums and band_members, respectively. If you wanted to retrieve a list of a band's members or albums, you would need to retrieve the data in the band_members or band_albums column and parse it. This is not the optimal approach.


The best approach for this situation is to further normalize your database by creating two new tables. The first is a table to store all albums (for example, t_albums) and a second that stores all band members (for example, t_band_members). The tables t_albums and t_band_members will have foreign keys to the t_bands table. Figure shows the new database diagram.
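The following sketch expresses the normalized design described above as T-SQL DDL executed from C#. The column names follow the article's conventions, the data types are assumptions, and t_record_companies is omitted for brevity.

```csharp
using System.Data.SqlClient;

class CreateNormalizedTables
{
    static void Main()
    {
        // Assumed schema; adjust types and lengths to your needs.
        string ddl = @"
            CREATE TABLE t_music_types (
                music_type_id    INT IDENTITY PRIMARY KEY,
                music_type_title VARCHAR(50) NOT NULL
            );

            CREATE TABLE t_bands (
                band_id       INT IDENTITY PRIMARY KEY,
                band_title    VARCHAR(100) NOT NULL,
                music_type_id INT NOT NULL
                    REFERENCES t_music_types (music_type_id)
            );

            -- One-to-many: each band may have many albums and members.
            CREATE TABLE t_albums (
                album_id    INT IDENTITY PRIMARY KEY,
                band_id     INT NOT NULL REFERENCES t_bands (band_id),
                album_title VARCHAR(100) NOT NULL
            );

            CREATE TABLE t_band_members (
                band_member_id INT IDENTITY PRIMARY KEY,
                band_id        INT NOT NULL REFERENCES t_bands (band_id),
                member_name    VARCHAR(100) NOT NULL
            );";

        // Assumed connection string; adapt to your own server.
        string connectionString =
            "Server=localhost;Database=Music;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            new SqlCommand(ddl, connection).ExecuteNonQuery();
        }
    }
}
```

The foreign keys enforce that every album and member row points at an existing band, and that every band's music type falls within the music type domain.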

You could certainly modify your table design further. But at some point you need to start considering performance. Performance can be adversely impacted if, on a regular basis, you need to join multiple tables with a lot of data. We recommend that you keep the number of tables in your database to a minimum while following the normalization rules as closely as possible. You will soon learn that database design is as much art as it is science.

ASP.NET DATA BASE PROGRAMMING day one

The Microsoft .NET Architecture is split into three essential areas:


The .NET platform, which includes the .NET infrastructure and tools to build and operate a new generation of Web services and applications. The core of the .NET platform is the .NET Framework, which includes C#, VB .NET, ASP.NET, and ADO.NET.

.NET products and services, which include Microsoft Windows, MSN.NET, personal subscription services, Microsoft Office .NET, Microsoft Visual Studio .NET, and Microsoft bCentral for .NET.

Third-party .NET services, which are services created by a vast range of partners and developers who now have the opportunity to produce corporate and vertical services built on the .NET platform.

The .NET platform contains all of the building blocks for creating .NET products and services and integrating third-party .NET solutions. Microsoft is using components of the .NET platform to extend the platform itself and to build additional .NET products. For example, as a developer you will be very impressed or possibly amazed that the entire ASP.NET platform is actually built on C#, which is a new .NET language! Additionally, large portions of the Visual Studio .NET code base are built on a combination of C++, C#, and VB .NET.

One of the most common themes heard throughout the development community concerns the stability of the .NET products and services. Compared with prior shifts in technology, such as when Microsoft moved from a 16-bit architecture to a 32-bit architecture or from DOS to Windows, this round is much more bearable.

Next-generation Web Services

Microsoft’s core piece of the .NET solution is Web services. Web services are small, specific, reusable chunks of application logic that can be easily shared across the Internet using open standards such as XML and HTTP. Solution providers, application developers, and end users will be able to rent, lease, or purchase the use of these solutions as needed and integrate them to solve specific problems. Examples of Web services include calendars, notifications, currency conversions, and user authentication and identity services.

Microsoft’s first entry into this space is the use of the Microsoft Passport User Identity Service, which provides a single authentication mechanism for any Web site or application. A user can register with the Passport service and then be seamlessly validated from any participant Passport site without the need for an additional login procedure. This service can be embedded for use as an authentication mechanism by any Web-connected application.


Web services enable you to outsource the generic portions of application development that today are commonly developed over and over each time a new application is built. Some people have compared it to building with Legos. From a relatively generic set of components, in a very short period you can build a complex, robust product that is great fun to use!
