Test approaches, or test techniques, differ throughout the process. The scope and manner of testing change depending on the stage of the test process, the type of test, and similar factors.
Not all test approaches or techniques are meant to exercise a running application. Walkthroughs and inspections are techniques used to decrease the risk of future errors occurring; their purpose is to find problems and see what is missing. A walkthrough is a review process in which a designer leads one or more participants through a segment of design or code he or she has written. An inspection is a formal evaluation technique involving detailed examination by a person or group other than the author to detect faults and problems.
White-box testing (fig 2.3) focuses on finding faults in the code and architecture of the application. The code and architecture are known to the tester, and the tests should be designed and executed in a manner that guarantees full coverage, even of areas believed to be less important or less frequently executed when the application actually runs.
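The idea of full branch coverage can be sketched as follows. This is a hypothetical illustration: the function, its rules, and its values are invented for the example, not taken from the text. Because the tester can see the code, one test case is written per branch, including the branches expected to be rare in production.

```python
# Hypothetical white-box example: a small function with four branches
# plus an error path, and one test per branch so every path is covered.

def discount(total, is_member):
    """Return the discount rate for an order (invented rules)."""
    if total <= 0:
        raise ValueError("total must be positive")
    if is_member:
        if total >= 100:
            return 0.15   # member, large order
        return 0.10       # member, small order
    if total >= 100:
        return 0.05       # non-member, large order
    return 0.0            # non-member, small order

# Full branch coverage: every path through the code is executed,
# even those believed to be less important in actual running.
assert discount(150, True) == 0.15
assert discount(50, True) == 0.10
assert discount(150, False) == 0.05
assert discount(50, False) == 0.0
try:
    discount(0, False)          # error branch
except ValueError:
    pass
```

A coverage tool would confirm that no branch of `discount` remains unexecuted by this suite.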
Black-box testing, on the other hand, may be done without knowing how things are done, concentrating instead on what should be done. This approach is often used during functionality testing. Cases are based on the specifications and requirements of the application or function under test. Valid and invalid inputs are tested, and the actual outcome is compared to the expected outcome derived from the requirements.
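A black-box case table can be sketched like this. The requirement, the validator, and all names are assumptions made for illustration: the tester knows only the specification (here, that a username must be 3 to 12 alphanumeric characters), not the implementation, and compares actual to expected outcome.

```python
# Hypothetical black-box sketch: cases derived purely from a
# specification, covering both valid and invalid inputs.
import re

def is_valid_username(name):
    # Implementation is opaque to the black-box tester.
    return re.fullmatch(r"[A-Za-z0-9]{3,12}", name) is not None

# (input, expected outcome per the requirement)
cases = [
    ("alice",     True),   # valid: alphanumeric, within length
    ("ab",        False),  # invalid: too short
    ("a" * 13,    False),  # invalid: too long
    ("bad name!", False),  # invalid: disallowed characters
]
for value, expected in cases:
    assert is_valid_username(value) == expected
```

Note that the cases say nothing about how the check is performed; only the observable input/output behaviour is judged against the requirement.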
The white-box approach is used to a greater extent early in the development and testing process, before there are any visible functions to test, while the black-box approach is used later, when functions are visible and can be tested.
To completely test a web application one needs to combine the two approaches: white-box and black-box testing. The gray-box testing approach takes into account all components making up a complete web application, including the environment on which it resides. Gray-box testing is integral to web application testing because of the numerous components, both software and hardware, that make up the complete application.
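A minimal sketch of the gray-box idea, with all class and field names invented for illustration: the test drives the application through its public interface, as a black-box test would, but also uses knowledge of the internals to inspect the underlying data store directly.

```python
# Hypothetical gray-box sketch: black-box stimulus through the public
# interface, white-box verification of the internal state it implies.

class SignupApp:
    def __init__(self):
        self._db = {}          # internal storage, known to the tester

    def signup(self, email):   # public interface
        if "@" not in email:
            return "error"
        self._db[email] = {"confirmed": False}
        return "ok"

app = SignupApp()

# Black-box side: exercise only the visible behaviour.
assert app.signup("user@example.com") == "ok"
assert app.signup("not-an-email") == "error"

# White-box side: check the internal state behind those responses.
assert "user@example.com" in app._db
assert app._db["user@example.com"]["confirmed"] is False
assert "not-an-email" not in app._db
```

In a real web application the "internal" side would typically be a database query or a server log check, while the "external" side would be an HTTP request.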
· Prioritize testing features that are necessary parts of the product.
· Prioritize testing features that affect the largest number of users.
· Prioritize testing features that are chosen frequently by users.
What these features are differs from application to application, and they are not always obvious. Different purposes present different prioritization needs. A site for business transactions, for instance an Internet banking service, has security requirements that must be fulfilled for users to feel confident in the application; otherwise they will not use it. A promotional site, on the other hand, has no apparent need of high security in that sense.
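The three prioritization rules above can be sketched as a simple weighted score. The features, weights, and numbers here are invented for illustration; the point is only that necessity, user reach, and use frequency combine into a testing order.

```python
# Hypothetical prioritization sketch: score each feature by whether it
# is a necessary part of the product, how many users it affects, and
# how frequently it is chosen. Weights and data are invented.

def priority(necessary, users_affected, use_frequency):
    # necessary: 0/1 flag; the other two: fractions in 0..1
    return 2.0 * necessary + users_affected + use_frequency

features = {
    "login":      priority(1, 0.95, 0.90),  # necessary, used by almost all
    "transfer":   priority(1, 0.60, 0.40),  # necessary, used less often
    "help pages": priority(0, 0.30, 0.05),  # optional, rarely chosen
}

# Test the highest-scoring features first.
ordered = sorted(features, key=features.get, reverse=True)
```

With these invented numbers, `login` outranks `transfer`, which outranks `help pages`; a real project would derive the inputs from requirements and usage data.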
When we see an error on the client side, we are seeing the symptom of an error—not the error itself.
Errors may be environment-dependent and may not appear in every environment.
Errors may be in the code or in the configuration.
Errors may reside in any of several layers (Client/Server/Network).
Examining the two classes of operating environments, static versus dynamic, demands different approaches.
The goal was to achieve an easy-to-follow methodology with quantifiable measures, for any tester to use when testing any type of web site or application.
Within the process of software development, testing has become an area where more and more resources are spent. But time is often short, creating a need to prioritize. To make this prioritization possible, the factors that form the foundation for deciding what to test must be identified.
Static Web Sites
Static with Form-Based interactivity
Sites with Dynamic Data Access
Dynamically Generated Sites
Web-Based Software Applications
The categories are:
Internal: for instance an intranet, used only within the responsible company.
Public: open to anyone, including possible future customers.
UNIT TESTING PART ONE
UNIT TESTING PART TWO
UNIT TESTING PART THREE
WINDOWS COMPLIANCE GUI TESTING PART ONE
WINDOWS COMPLIANCE GUI TESTING PART TWO
WINDOWS COMPLIANCE GUI TESTING PART THREE
WINDOWS COMPLIANCE GUI TESTING PART FOUR VALIDATION TESTING
WINDOWS COMPLIANCE GUI TESTING PART FIVE CONDITION TESTING
WINDOWS COMPLIANCE GUI TESTING PART SIX GENERAL CONDITION TESTING
TESTING CONDITIONS PART ONE
TESTING CONDITIONS PART TWO
TESTING CONDITIONS PART THREE
TESTING CONDITIONS PART FOUR
SPECIFIC FIELD TESTING
INTEGRATION TESTING PART ONE
INTEGRATION TESTING PART TWO
INTEGRATION TESTING PART THREE
INTEGRATION TESTING PART FOUR
INTEGRATION TESTING PART FIVE
INTEGRATION TEST STANDARDS
INTEGRATION TEST STANDARDS PART TWO