When planning a test, one has the opportunity to choose whether to perform the test manually or to automate it. We will discuss the benefits and problems of automating tests, but we will start with the possibilities at hand.
· Requirements testing: formal review; no automated tools are used.
· Design testing: the major testing technique is the formal review, but there are some possibilities to use automated tools.
· Testing in the small (unit testing):
1. Test data generators, to generate files and test data input
2. Program logic analyzers, to check code
3. Test coverage tools, to check what parts have been tested (run)
4. Test drivers, to execute the program to be tested, simulate its environment and run it with accurate data and input
5. Test comparators, to compare test case outcomes to expected outcomes
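To make the roles of a test driver and a test comparator concrete, the following is a minimal sketch in Python. The function under test (`add`) and the test data are hypothetical examples, not taken from any particular tool.

```python
def add(a, b):
    """Hypothetical unit under test."""
    return a + b

def run_tests(cases):
    """Test driver: feeds prepared test data to the unit under test,
    then acts as a comparator by checking each actual result against
    the expected outcome. Returns the list of failing cases."""
    failures = []
    for args, expected in cases:
        actual = add(*args)
        if actual != expected:  # the comparator step
            failures.append((args, expected, actual))
    return failures

# Test data: (input, expected outcome) pairs.
cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
print(run_tests(cases))  # an empty list means all cases passed
```

In a real tool the driver would also handle setup, teardown and reporting, but the execute-then-compare loop is the core of it.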
· Testing in the large (Integration testing, System testing, Acceptance testing):
1. Test data generators, to generate files and test data input
2. Instrumenters, to measure and report test coverage during execution
3. Comparators, to compare outputs, files and source programs
4. Capture Playback systems, to capture on-line test data and play it back
5. Simulators and test beds, to simulate complex and real-time environments
6. Debugging aids, to aid in determining what is wrong and getting it repaired
7. Testing information systems, to maintain test cases and report on testing activities
8. File aids, to extract test data from existing files for use in testing
9. System charters and documents, to provide automatic system documentation
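A comparator for testing in the large typically compares whole output files rather than single values. A small sketch using Python's standard difflib module, with illustrative file contents, could look like this:

```python
import difflib

# Stored reference output from an earlier, approved test run,
# and the output produced by the current run (illustrative data).
reference = ["order 1: shipped\n", "order 2: pending\n"]
actual    = ["order 1: shipped\n", "order 2: cancelled\n"]

# The comparator reports any deviation between expected and actual output.
diff = list(difflib.unified_diff(reference, actual,
                                 fromfile="expected", tofile="actual"))
print("".join(diff) or "outputs match")
```

A non-empty diff flags the test run for investigation; whether the deviation is a fault or an intended change is still a human decision.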
· Code-based: generates tests that check that the code does what the code does. It does not check that the code does what it should.
· Interface-based: generates tests based on well-defined interfaces, such as a GUI. Can create tests that visit every button, check box or menu on a site.
· Specification-based: generates both input and expected output from the specification.
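As a sketch of the specification-based approach, the "specification" below is a hypothetical table of input ranges and the result each should produce (for an assumed grading function); test inputs and expected outputs are derived from that table, not from any code:

```python
# Hypothetical specification: (lower bound, upper bound, expected result).
spec = [
    (0, 49, "fail"),
    (50, 79, "pass"),
    (80, 100, "distinction"),
]

def generate_tests(spec):
    """Derive (input, expected output) pairs from the specification:
    for each range, test both boundaries and one interior value."""
    tests = []
    for lo, hi, expected in spec:
        for value in (lo, (lo + hi) // 2, hi):
            tests.append((value, expected))
    return tests

for inp, expected in generate_tests(spec):
    print(inp, expected)
```

Because the expected outputs come from the specification, such tests can catch code that runs consistently but does the wrong thing, which purely code-based generation cannot.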
The reason for automating testing is of course to make the test process more efficient. However, not all parts of testing are performed more efficiently when automated. The more intellectual tasks, such as designing test cases, are often best performed manually, although even these sometimes qualify for automation.
Another characteristic that makes a test case suitable for automation is that it is performed repeatedly. Even when automating tests, using one of the many available automation tools, most errors are often found manually, since a test case is usually run manually before it is automated.
While on the subject of comparing actual outcome to expected outcome, it is important to point out that even if the actual outcome is as expected, this does not mean that the application passes the test, or that it is free from the errors tested for. The difference here is that a poorly defined manual test may very well find many important deficiencies or errors that an equally poorly defined automated test will not, since the automated test can only perform the actions specified and verify the given expected outcome.
That is the difference between human testers and comparators or test execution tools. While tools only do what they are specified to do and compare only the specified outcome, human testers performing the test manually are able to adjust their test case to unforeseen events, such as an error dialog box, and to check the outcome in many more ways than tools can. When a human tester performs a test, almost any test, he can simultaneously be said to be performing a usability test, since he has to navigate, wait, view and understand the application being tested. These positive side effects are never achieved when automating testing.
Automated testing, on the other hand, ensures that a test is always being run in the exact same way, which is often important to reproduce errors or when performing regression tests.
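A regression suite written with Python's standard unittest module illustrates this repeatability; the `discount` function here is a hypothetical existing feature that must keep working in later releases:

```python
import unittest

def discount(price, percent):
    """Hypothetical existing feature under regression test."""
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTest(unittest.TestCase):
    """Exactly the same checks, in the same order, on every release."""
    def test_existing_behaviour(self):
        self.assertEqual(discount(100, 10), 90.0)
        self.assertEqual(discount(80, 25), 60.0)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed:", result.wasSuccessful())
```

Because the checks and their order never vary between runs, a failure on a new version points directly at a change in behaviour, not at a change in how the test was performed.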
Among the most obvious benefits is the possibility to run the same test on new versions of a program. A program may then be checked for new bugs in existing features, bugs that may have been introduced when adding new features or correcting deficiencies. This is regression testing. The same applies to re-testing, i.e. testing the functionality of a debugged feature. If a test has been created previously it will most likely be very easy to run it again, which is not only a benefit but, as we stated earlier, may also be a prerequisite for automation of some tests to be at all worth considering.
Load testing, for instance, is sometimes possible but not very recommendable to do manually. Applying a load of hundreds of users might be done manually but would require a massive administrative effort. Applying the same load by simulating the users using a tool decreases the required effort immensely.
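Simulating the users instead of recruiting them can be sketched with standard threads; `do_request` below is a placeholder for a real call against the application under test:

```python
import threading
import time

def do_request(user_id, results):
    """Stand-in for one simulated user issuing a request and
    recording the response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real request to the application
    results[user_id] = time.perf_counter() - start

results = {}
# One thread per simulated user; 100 users started near-simultaneously.
threads = [threading.Thread(target=do_request, args=(i, results))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} simulated users, "
      f"max response time {max(results.values()):.3f}s")
```

Dedicated load-testing tools add ramp-up schedules, realistic think times and richer reporting, but the principle is the same: the tool multiplies one scripted user into hundreds.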
What we believe to be most important to remember is that the manual testing process must be well structured, with necessary and consistent documentation and consisting of tests good at finding errors and deficiencies, before considering automating. Without meeting these requirements, automation will most likely cause more problems than it solves.
Do not automate because you want to find additional errors that you did not find manually. Most tools are, as we stated earlier, re-test tools, meaning they execute a test that has in fact already been run. This means that most errors that can be found by this test have already been found. Despite this, there are tests that might still benefit from automation in this respect; load testing, for instance for web applications, has already been mentioned.
Automated testing is not the same as automatically creating the test scripts. In order to receive long-term value from using tools, tests and test scripts need to be maintainable. By automatically creating the scripts, using capture tools, one builds in a certain amount of inflexibility. The actions taken by the user create a strict order in the script, and if the tested application is modified, the captured sequence may no longer be valid. This often generates unacceptable maintenance costs.
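The contrast between a captured script and a maintainable one can be sketched as follows; the action strings and the `execute` dispatcher are hypothetical, not taken from any specific capture/playback tool:

```python
# Captured style: a literal, fixed-order recording of user actions.
# Any change to the login dialog invalidates every script like this one.
captured = ["click Login", "type user alice", "type password x", "click OK"]

# Keyword-driven style: one function per logical action. If the UI
# changes, only this function changes, not every script that logs in.
def login(user, password):
    return ["click Login", f"type user {user}",
            f"type password {password}", "click OK"]

def execute(steps):
    """Stand-in for a tool that replays UI actions."""
    for step in steps:
        print("executing:", step)

execute(login("alice", "x"))
```

Both scripts perform the same actions today, but only the keyword-driven one has a single place to update when the application is modified.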
Again, it must be pointed out that the fact that a test is passed does not mean that the program is free of errors. Automating such a test means that any error it misses is preserved into future releases.
· Running the same tests on later versions of the application
· Running tests on a more frequent basis
· Shortened time to market for later releases
· The tests are run in the exact same manner every time
· A well-structured test organization on the manual level is needed before automating
· In order to create and perform relevant tests, the tools need to be fully understood
· When an automated test is run, most of the faults have already been found
· It can be hard to distinguish whether a fault lies in the tool or in the application
· Automatically created scripts go hand in hand with low maintainability
Whether tools are of interest, and which ones, therefore depends on the following:
· Size of the test project
· Size of the application
· Technology used in the application
· Stage of the test process
· Types of test