
An integrated method for improving testing effectiveness and efficiency

Date

2000

Authors

Stringfellow, Catherine V., author
Von Mayrhauser, Anneliese, advisor
Bieman, James M., committee member
Zimmerman, Donald E., committee member
France, Robert B., committee member

Abstract

The aim of testing is to find errors and to find them as early as possible. Specifically, system testing should uncover more errors before release in order to reduce the number of errors found after release. System testing should also prevent the release of products that would result in many post-release errors. Studies indicate that post-release errors cost more to fix than errors found earlier in the life cycle. The effectiveness and efficiency of system testing depend on many factors, not only on the expertise of the testers and the techniques they employ. This dissertation develops an integrated method that uses a variety of techniques to improve testing effectiveness and efficiency. Some of these techniques already exist, but they are applied in a new or different way.

The integrated method enables root cause analysis of post-release problems by tracing them to one or more factors that influence system testing efficiency. Development defect data help to identify which parts of the software should be tested earlier and more intensely because they were fault-prone during development. Based on the assessment results, testers can develop testing guidelines to make system testing more effective. A case study applies this evaluation instrument to existing project data from a large software product (a medical record system). Successive releases of the product validate the method.

During system testing, testers may need to determine quantitatively whether to continue testing or to stop and recommend release. Stopping early can decrease the cost of testing, but it carries the risk of missing problems that would have been detected had system testing continued. Testers also need to evaluate the efficiency of the methods currently in use and to improve the efficiency of future testing efforts. This dissertation develops empirical techniques to determine stopping points during testing. It proposes a new selection method for software reliability growth models that can be used to make release decisions. The case study compares and evaluates these techniques on actual test result data from industry.

Quality assessment of multiple releases of the same product forms the basis of longitudinal decisions, such as re-engineering. Techniques that use data from prior releases help to identify parts of the system that are consistently problematic, and this information aids in developing additional testing guidelines for future releases of the product. This dissertation extends a study that adapted a reverse architecting technique to identify fault relationships among system components based on whether they are involved in the same defect fix. The case study applies this technique to identify the parts of the software that need to be tested more.

The results of the case study demonstrate that the integrated method can improve the effectiveness and efficiency of system testing. The method identified problematic software components using data from development and prior releases. The prioritization results show that fault-prone components tested earlier reveal more defects earlier, giving development more time to fix those defects before release. The method was also able to estimate the remaining defect content, and these estimates were used to make release decisions. Based on post-release data and interviews with the test manager, the method recommended the right release decisions.
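
To illustrate the kind of release-decision support the abstract describes, the following is a minimal sketch, not the dissertation's actual implementation, of fitting one common software reliability growth model (the Goel-Okumoto model, m(t) = a(1 - e^(-bt))) to cumulative defect counts and using the estimated remaining defect content as a stopping heuristic. The defect data, threshold, and names are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of defects found by test time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical weekly cumulative defect counts from system test.
weeks = np.arange(1, 13)
cum_defects = np.array([5, 12, 21, 27, 33, 37, 40, 43, 45, 46, 47, 48])

# Fit the model; a_hat estimates the total defect content, b_hat the detection rate.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=[60.0, 0.1])

remaining = a_hat - cum_defects[-1]  # estimated defects still latent in the product
print(f"Estimated total defects: {a_hat:.1f}, estimated remaining: {remaining:.1f}")

# Illustrative stopping rule: recommend release once the estimated remaining
# defect content falls below a project-specific threshold (assumed here).
RELEASE_THRESHOLD = 5
print("Recommend release" if remaining < RELEASE_THRESHOLD else "Continue testing")

In practice, the choice among candidate growth models and the threshold for an acceptable remaining defect count would come from the selection method and project data discussed in the dissertation; this sketch only shows the general shape of such a calculation.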
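The reverse architecting idea mentioned in the abstract, relating components that are repaired together in the same defect fix, can likewise be sketched in a few lines. The fix records and component names below are hypothetical and serve only to show how co-fix counts could flag consistently problematic parts of the system.

from collections import Counter
from itertools import combinations

# Hypothetical defect-fix records: defect id -> components touched by the fix.
fixes = {
    "D-101": {"billing", "db_layer"},
    "D-102": {"ui", "billing"},
    "D-103": {"billing", "db_layer", "reports"},
    "D-104": {"db_layer", "reports"},
}

# Count how often each pair of components is changed in the same defect fix.
coupling = Counter()
for components in fixes.values():
    for pair in combinations(sorted(components), 2):
        coupling[pair] += 1

# Pairs that are co-fixed most often are candidates for earlier, more intense testing.
for (c1, c2), count in coupling.most_common():
    print(f"{c1} <-> {c2}: co-fixed in {count} defect(s)")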

Description

Department Head: Stephen B. Seidman.
