Evaluating predictive quality models derived from software measures: Lessons learned

Filippo Lanubile, Giuseppe Visaggio
1997 Journal of Systems and Software  
This paper describes an empirical comparison of several modeling techniques for predicting the quality of software components early in the software life cycle. Using software product measures, we built models that classify components as high-risk, i.e., likely to contain faults, or low-risk, i.e., likely to be free of faults. The modeling techniques evaluated in this study include principal component analysis, discriminant analysis, logistic regression, logical classification models, layered neural networks, and holographic networks. These techniques provide a good coverage of the main problem-solving paradigms: statistical analysis, machine learning, and neural networks. Using the results of independent testing, we determined the absolute worth of the predictive models and compared their performance in terms of misclassification errors, achieved quality, and verification cost. Data came from 27 software systems, developed and tested during three years of project-intensive academic courses. A surprising result is that no model was able to effectively discriminate between components with faults and components without faults.
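The evaluation scheme summarized above scores each model by its misclassification errors: faulty components predicted low-risk, and fault-free components predicted high-risk (which drives verification cost). A minimal sketch of that scoring, using an invented threshold classifier on a single size measure and fabricated sample data purely for illustration (none of this is the paper's actual models or data set):

```python
# Hedged illustration only: a toy threshold classifier over a size measure,
# scored with the two misclassification rates compared in the study.
# The threshold and the sample data below are invented for this sketch.

def classify(size, threshold=100):
    """Predict high-risk (1) when the size measure exceeds the threshold."""
    return 1 if size > threshold else 0

def misclassification_rates(measures, actual, threshold=100):
    predicted = [classify(m, threshold) for m in measures]
    # Missed fault: faulty component (actual=1) predicted low-risk (0)
    missed = sum(1 for p, a in zip(predicted, actual) if a == 1 and p == 0)
    # False alarm: fault-free component (actual=0) predicted high-risk (1),
    # which inflates verification cost
    false_alarm = sum(1 for p, a in zip(predicted, actual) if a == 0 and p == 1)
    faulty = sum(actual)
    clean = len(actual) - faulty
    return missed / faulty, false_alarm / clean

# Invented sample: a size measure per component, and whether a fault was found.
sizes = [40, 250, 90, 310, 120, 60, 180, 20]
faults = [0, 1, 0, 1, 1, 0, 0, 0]
miss_rate, alarm_rate = misclassification_rates(sizes, faults)
# → miss_rate 0.0, alarm_rate 0.2 on this toy sample
```

The same two rates can score any of the listed techniques once each emits a binary high-risk/low-risk prediction, which is what makes the cross-paradigm comparison possible.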
doi:10.1016/s0164-1212(96)00153-7