
Levels of Software Testing


There are various testing levels. In unit testing, the smallest testable part of an application is tested for correctness. In integration testing, we check the system while linking the various modules together.

In system testing we check the system as a whole from the customer’s viewpoint. Acceptance testing checks whether the system is acceptable to its users. Alpha testing is carried out at the developer’s site and beta testing at the customer’s site. A tester’s workbench is a virtual environment used to verify the correctness or soundness of a design or model. The 11-step testing process is an experience-based, practical approach to carrying out a test assignment.

UNIT TESTING -

Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated but it can also be done manually. Unit testing involves only those characteristics that are vital to the performance of the unit under test.

This encourages developers to modify the source code without immediate concerns about how such changes might affect the functioning of other units or the program as a whole. Once all of the units in a program have been found to be working in the most efficient and error-free manner possible, larger components of the program can be evaluated by means of integration testing. Unit testing can be time-consuming and tedious. 

It demands patience and thoroughness on the part of the development team. Rigorous documentation must be maintained. Unit testing must be done with an awareness that it may not be possible to test a unit for every input scenario that will occur when the program is run in a real-world environment.
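As a minimal sketch of this idea, the following tests a single function in isolation using Python's standard `unittest` module (the `add` function and its test cases are hypothetical, chosen only for illustration):

```python
import unittest

def add(a, b):
    # Hypothetical unit under test: returns the sum of two numbers.
    return a + b

class TestAdd(unittest.TestCase):
    # Each test scrutinizes the unit independently for one input scenario.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

    def test_zero(self):
        self.assertEqual(add(0, 7), 7)

# Run the test case and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Note that even three cases cannot cover every input the unit will see in a real-world environment, which is exactly the caveat above.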

INTEGRATION TESTING -

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure that has been dictated by the design. Different integration strategies exist.

There are two approaches to integration testing: top-down and bottom-up. Top-down integration is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. The integration process is performed in a series of five steps:

1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.

3. Tests are conducted as each module is integrated.

4. On completion of each set of tests, another stub is replaced with a real module.

5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built. The top-down strategy sounds relatively uncomplicated, but in practice logistical problems arise. The most common of these occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.
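The steps above can be sketched in miniature; in this illustration (all module names are hypothetical) the main control module is exercised first with stubs, then a stub is swapped for a real module and the tests are re-run:

```python
# Step 1: stubs stand in for modules subordinate to the main control module.
def get_order_stub(order_id):
    # Returns canned data instead of querying a real order module.
    return {"id": order_id, "total": 100.0}

def apply_discount_stub(total):
    # Pretends the real discount module applied a 10% discount.
    return total * 0.9

def process_order(order_id, get_order=get_order_stub,
                  apply_discount=apply_discount_stub):
    # Main control module under test; collaborators are injected so that
    # stubs can be replaced one at a time with actual modules (step 2).
    order = get_order(order_id)
    return apply_discount(order["total"])

# Step 3: tests are conducted with the stubs in place.
assert process_order(42) == 90.0

# Step 4: one stub is replaced with a "real" module, and the same test is
# repeated as regression testing (step 5).
def apply_discount_real(total):
    return total * 0.9 if total >= 100 else total

assert process_order(42, apply_discount=apply_discount_real) == 90.0
```

The sketch also shows the limitation noted above: until real low-level modules replace the stubs, only canned data flows upward.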


Bottom-Up Integration

Modules are integrated from the bottom to the top. In this approach, processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated. A bottom-up integration strategy may be implemented with the following steps:

1. Low-level modules are combined into clusters that perform a specific software sub-function.

2. A driver is written to coordinate test-case input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined, moving upward in the program structure.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.
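A minimal sketch of these steps, with hypothetical module names: two low-level modules are combined into a cluster, and a driver coordinates the test-case input and output:

```python
# Low-level module 1 (hypothetical): parse a currency string.
def parse_amount(text):
    return float(text.strip().lstrip("$"))

# Low-level module 2 (hypothetical): apply sales tax.
def add_tax(amount, rate=0.08):
    return round(amount * (1 + rate), 2)

# Step 1: the low-level modules are combined into a cluster that performs
# a specific sub-function (pricing).
def price_cluster(text):
    return add_tax(parse_amount(text))

# Step 2: a driver coordinates test-case input and output.
def driver():
    cases = {"$10.00": 10.8, " $25.50 ": 27.54}
    for raw, expected in cases.items():
        # Step 3: the cluster is tested.
        assert price_cluster(raw) == expected

driver()
# Step 4 would remove the driver and combine this cluster with others
# while moving upward in the program structure.
```

Because the cluster's subordinates are real modules, no stubs are needed; only the driver is throwaway code.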

SYSTEM TESTING -

Once the entire system has been built, it has to be tested against the “System Specification” to check whether it delivers the required features. It is still developer focused, although specialist developers known as systems testers are normally employed to do it. In essence, system testing is not about checking the individual parts of the design, but about checking the system as a whole. In effect, it is one giant component.

System testing can involve a number of specialist types of tests to see if all the functional and non- functional requirements have been met. In addition to functional requirements these may include the following types of testing for the non-functional requirements:

Performance - Are the performance criteria met?

Volume - Can large volumes of information be handled?

Stress - Can peak volumes of information be handled?

Documentation - Is the documentation usable for the system?

Robustness - Does the system remain stable under adverse circumstances?

There are many others, the needs for which are dictated by how the system is supposed to perform. 
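As a rough sketch of a combined volume/performance check (the workload, the processing routine, and the five-second time budget are all assumptions for illustration, not real criteria):

```python
import time

def handle_record(record):
    # Hypothetical unit of work standing in for real system processing.
    return sum(record)

# Volume/stress: a peak-sized batch of records (size is an assumption).
records = [[i, i + 1, i + 2] for i in range(100_000)]

start = time.perf_counter()
results = [handle_record(r) for r in records]
elapsed = time.perf_counter() - start

# Volume: every record was handled.
assert len(results) == len(records)
# Performance: the assumed response-time criterion is met.
assert elapsed < 5.0
```

A real system test would, of course, drive the deployed system rather than a single function, but the structure (load, measure, compare against a stated criterion) is the same.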

ACCEPTANCE TESTING  -

Acceptance Testing checks the system against the “Requirements”. It is similar to systems testing in that the whole system is checked but the important difference is the change in focus. Systems Testing checks that the system that was specified has been delivered. Acceptance Testing checks that the system delivers what was requested. 

The customer and not the developer should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgement. 

ALPHA TESTING & BETA TESTING 

The alpha test is conducted at the developer’s site by a customer. The software is used in a natural setting with the developer “looking over the shoulder” of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment. The beta test is conducted at one or more customer sites by the end user(s) of the software. Unlike alpha testing, the developer is generally not present; therefore the beta test is a “live” application of the software in an environment that cannot be controlled by the developer.

The customer records all problems (real or imagined) encountered during beta testing and reports these to the developer at regular intervals. Because of the problems reported during the beta test, the software developer makes modifications and then prepares for release of the software product to the entire customer base.


STATIC VS. DYNAMIC TESTING 

Software can be tested either by running the programs and verifying each step of their execution against expected results, or by statically examining the code or the document against its stated requirement or objective. In general, software testing can therefore be divided into two categories: static and dynamic testing.

Static testing is non-execution-based and is carried out mostly by human effort. In static testing, we examine the design, code or any other document through inspections, walkthroughs and reviews. Many studies show that the single most cost-effective defect-reduction process is the classic structural test: the code inspection or walkthrough.

Code inspection is like proofreading; it helps developers identify typographical errors, logic errors and deviations from the styles and standards normally followed. Dynamic testing is an execution-based testing technique: the program must be executed to find possible errors. Here, the program, module or the entire system is executed (run) and the output is verified against the expected result. Dynamic execution of tests is based on specifications of the program, code and methodology.
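Static checks can also be automated. The sketch below examines source code without ever running it, flagging functions that lack a docstring (treating that as an assumed local coding standard), in the spirit of an automated code inspection:

```python
import ast

# Source code to inspect (hypothetical example program).
SOURCE = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

# Parse the source into a syntax tree; the code is examined, never executed.
tree = ast.parse(SOURCE)
missing = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
]
print(missing)  # names of functions violating the assumed standard
```

A dynamic test of the same program would instead import and call `documented()` and `undocumented()` and compare their return values against expected results.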


TESTER’S WORKBENCH

A tester’s workbench is a virtual environment used to verify the correctness or soundness of a design or model (e.g., a software product). The term has its roots in the testing of electronic devices, where an engineer would sit at a lab bench with tools of measurement and manipulation, such as oscilloscopes, multimeters, soldering irons, wire cutters, and so on, and manually verify the correctness of the device under test.

In the context of software or firmware or hardware engineering, a test bench refers to an environment in which the product under development is tested with the aid of a collection of testing tools. Often, though not always, the suite of testing tools is designed specifically for the product under test. A test bench or testing workbench has four components. 

1. INPUT: The entrance criteria or deliverables needed to perform the work.

2. PROCEDURES TO DO: The tasks or processes that will transform the input into the output.

3. PROCEDURES TO CHECK: The processes that determine that the output meets the standards.

4. OUTPUT: The exit criteria or deliverables produced from the workbench.
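The four components can be modeled as a small data structure; this is purely illustrative (a real workbench wraps actual tools and deliverables, not lambdas):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Workbench:
    entrance_criteria: list        # 1. INPUT: deliverables needed to work
    do_procedure: Callable         # 2. PROCEDURES TO DO: transform input
    check_procedure: Callable      # 3. PROCEDURES TO CHECK: verify standards

    def run(self):
        output = [self.do_procedure(item) for item in self.entrance_criteria]
        # Every produced item must meet the standard before it may exit.
        assert all(self.check_procedure(o) for o in output)
        return output              # 4. OUTPUT: exit deliverables

bench = Workbench(
    entrance_criteria=[1, 2, 3],
    do_procedure=lambda x: x * 2,          # hypothetical transformation
    check_procedure=lambda y: y % 2 == 0,  # hypothetical standard
)
print(bench.run())  # [2, 4, 6]
```

The point of the structure is the separation: the check procedure gates the output independently of the procedure that produced it.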

11-STEP TESTING PROCESS -

We introduce an 11-step testing process that will take you through identifying, testing and solving your test assignment. This process is based on experience.

Step 1: Assess Development Plan and Status

This first step is a prerequisite to building the Verification, Validation, and Testing (VV&T) Plan used to evaluate the implemented software solution. During this step, testers challenge the completeness and correctness of the development plan. Based on the extensiveness and completeness of the project plan, the testers can estimate the amount of resources they will need to test the implemented software solution.

Step 2: Develop the Test Plan

Forming the plan for testing will follow the same pattern as any software planning process. The structure of all plans should be the same, but the content will vary based on the degree of risk the testers perceive as associated with the software being developed.

Step 3: Test Software Requirements

Incomplete, inaccurate, or inconsistent requirements lead to most software failures. The inability to get requirements right during the requirements-gathering phase can also increase the cost of implementation significantly. Testers, through verification, must determine that the requirements are accurate and complete and that they do not conflict with one another.

Step 4: Test Software Design

This step tests both external and internal design, primarily through verification techniques. The testers are concerned that the design will achieve the objectives of the requirements, as well as being effective and efficient on the designated hardware.

Step 5: Program (Build) Phase Testing

The method chosen to build the software from the internal design document will determine the type and extensiveness of the testing needed. As construction becomes more automated, less testing will be required during this phase. However, if software is constructed using the waterfall process, it is subject to error and should be verified. Experience has shown that it is significantly cheaper to identify defects during the construction phase than through dynamic testing during the test execution step.

Step 6: Execute and Record Results

This involves the testing of code in a dynamic state. The approach, methods, and tools specified in the test plan will be used to validate that the executable code in fact meets the stated software requirements and the structural specifications of the design.

Step 7: Acceptance Test

Acceptance testing enables users to evaluate the applicability and usability of the software in performing their day-to-day job functions. This tests what the user believes the software should perform, as opposed to what the documented requirements state the software should perform.

Step 8: Report Test Results

Test reporting is a continuous process. It may be both oral and written. It is important that defects and concerns be reported to the appropriate parties as early as possible, so that corrections can be made at the lowest possible cost.

Step 9: The Software Installation

Once the test team has confirmed that the software is ready for production use, the ability to execute that software in a production environment should be tested. This tests the interface to operating software, related software, and operating procedures.

Step 10: Test Software Changes

While this is shown as Step 10, in the context of performing maintenance after the software is implemented, the concept is also applicable to changes throughout the implementation process. Whenever requirements change, the test plan must change, and the impact of that change on software systems must be tested and evaluated.

Step 11: Evaluate Test Effectiveness

Testing improvement can best be achieved by evaluating the effectiveness of testing at the end of each software test assignment. While this assessment is primarily performed by the testers, it should involve the developers, users of the software, and quality assurance professionals if that function exists in the IT organization.

