
Correction of "A Comparative Study to Benchmark Cross-project Defect Prediction Approaches" [article]

Steffen Herbold, Alexander Trautsch, Jens Grabowski
2017 arXiv   pre-print
Unfortunately, the article "A Comparative Study to Benchmark Cross-project Defect Prediction Approaches" has a problem in the statistical analysis, which was pointed out almost immediately after the pre-print  ...  While the problem does not negate the contribution of the article and all key findings remain the same, it does alter some rankings of approaches used in the study.  ...  ACKNOWLEDGEMENTS We want to thank Yuming Zhou from Nanjing University for pointing out the inconsistencies in the results to us so fast, as well as the editors of the journal who helped to determine  ... 
arXiv:1707.09281v1 fatcat:qu7ynbc45fgf3mp2asdc57pr7a

Search Based Training Data Selection For Cross Project Defect Prediction

Seyedrebvar Hosseini, Burak Turhan, Mika Mäntylä
2016 Proceedings of the 12th International Conference on Predictive Models and Data Analytics in Software Engineering - PROMISE 2016  
Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP).  ...  We use 13 datasets from the PROMISE repository in order to compare the performance of GIS with benchmark CPDP methods, namely (NN)-filter and naive CPDP, as well as with within project defect prediction (WPDP)  ...  RQ1: How does the performance of GIS compare with benchmark cross project defect prediction approaches?  ... 
doi:10.1145/2972958.2972964 dblp:conf/promise/HosseiniTM16 fatcat:r7lal6ueknd57g4ckdtx2vtt4e
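The (NN)-filter benchmark named in the abstract above selects cross-project training data by nearest-neighbour similarity to the target instances. A minimal sketch of that idea (the function name, the default `k`, and the use of Euclidean distance are illustrative assumptions, not the paper's exact setup):

```python
def nn_filter(train_X, train_y, test_X, k=10):
    # Illustrative (NN)-filter sketch: for every target (test) instance,
    # keep its k nearest cross-project training rows (Euclidean distance);
    # the union of all selected rows forms the filtered training set.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    selected = set()
    for t in test_X:
        order = sorted(range(len(train_X)), key=lambda i: dist(train_X[i], t))
        selected.update(order[:k])
    idx = sorted(selected)
    return [train_X[i] for i in idx], [train_y[i] for i in idx]
```

Because the union is taken over all target instances, the filtered set grows with the size of the target data but never exceeds the original cross-project pool.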

A Ranking-Oriented Approach to Cross-Project Software Defect Prediction: An Empirical Study

Guoan You, Yutao Ma
2016 Proceedings of the 28th International Conference on Software Engineering and Knowledge Engineering  
In recent years, cross-project defect prediction (CPDP) has become very popular in the field of software defect prediction.  ...  Inspired by the idea of the Point-wise approach to Learning to Rank, we propose a ranking-oriented CPDP approach called ROCPDP.  ...  Because some prior studies [2, 10, 13] have shown that defect prediction performs well within projects and cross projects when there is a sufficient amount of training data, in this paper we also want  ... 
doi:10.18293/seke2016-047 dblp:conf/seke/YouM16 fatcat:lml2jrtnofhfrer7kenjrlrzhu

A benchmark study on the effectiveness of search-based data selection and feature selection for cross project defect prediction

Seyedrebvar Hosseini, Burak Turhan, Mika Mäntylä
2018 Information and Software Technology  
Overall, the performance of GIS is comparable to that of within project defect prediction (WPDP) benchmarks, i.e. CV and PR.  ...  Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP).  ...  Positive effect sizes point to an effect size in favor of GIS. Table 5 presents the results of GIS and cross project benchmarks.  ... 
doi:10.1016/j.infsof.2017.06.004 fatcat:udnnpksahbcohomqqogcnq4t64

Benchmarking cross-project defect prediction approaches with costs metrics [article]

Steffen Herbold
2018 arXiv   pre-print
In recent years, many researchers focused on the problem of Cross-Project Defect Prediction (CPDP), i.e., the creation of prediction models based on training data from other projects.  ...  Defect prediction can be a powerful tool to guide the use of quality assurance resources.  ...  ACKNOWLEDGMENTS The authors would like to thank GWDG for the access to the scientific compute cluster used for the training and evaluation of thousands of defect prediction models.  ... 
arXiv:1801.04107v1 fatcat:7e74bef3wjditgc6zyoddxvheu

A Systematic Review of Unsupervised Learning Techniques for Software Defect Prediction [article]

Ning Li, Martin Shepperd, Yuchen Guo
2020 arXiv   pre-print
Results: Our meta-analysis shows that unsupervised models are comparable with supervised models for both within-project and cross-project prediction.  ...  In order to compare prediction performance across these studies in a consistent way, we (re-)computed the confusion matrices and employed the Matthews Correlation Coefficient (MCC) as our main performance  ...  We also wish to acknowledge the use of the DConfusion tool developed by David Bowes and  ... 
arXiv:1907.12027v4 fatcat:q2o5ew5zhra5lauyebd3hl65uy
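The Matthews Correlation Coefficient used as the main performance measure in the review above can be computed directly from a binary confusion matrix. A self-contained sketch (the function name and the zero-denominator convention are mine, not taken from the paper):

```python
import math

def mcc(tp, fp, tn, fn):
    # Matthews Correlation Coefficient: +1 = perfect prediction,
    # 0 = no better than chance, -1 = total disagreement.
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0 when any row/column of the matrix is empty.
    return num / den if den else 0.0
```

Unlike accuracy or F-measure, MCC uses all four cells of the confusion matrix, which is why it is often preferred for the imbalanced class distributions typical of defect data.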

Evaluating software defect prediction performance: an updated benchmarking study [article]

Libo Li, Stefan Lessmann, Bart Baesens
2019 arXiv   pre-print
Prior studies use machine-learning models to detect faulty software code. We revisit past studies and point out potential improvements. Our new study proposes a revised benchmarking configuration.  ...  However, predictive power is heavily influenced by the evaluation metrics and testing procedure (frequentist or Bayesian approach). The classifier results depend on the software project.  ...  It is critical to take advantage of new research findings to continue to improve defect prediction results. Benchmarking study results depend heavily on the choice of statistical procedures.  ... 
arXiv:1901.01726v1 fatcat:72htkewgdjhptn3mzllqerzsoq

Introduction to the Special Issue on Mining Software Repositories in 2010

Jim Whitehead, Thomas Zimmermann
2012 Empirical Software Engineering  
Acknowledgments We are grateful for the continuous support and encouragement offered by the Editorial Board of the Journal of Empirical Software Engineering and by the Editor-in-Chief Lionel Briand.  ...  This issue (Empir Software Eng (2012) 17:500-502) is the result of a great deal of effort by the reviewers, authors, and attendees of MSR 2010.  ...  However, the absence of benchmarks made it difficult to compare approaches.  ... 
doi:10.1007/s10664-012-9206-z fatcat:og3a3i3xcvcivcgjfedyij2e5q

Evaluating Software Defect Prediction Performance: An Updated Benchmarking Study

Libo Li, Stefan Lessmann, Bart Baesens
2019 Social Science Research Network  
Prior studies use machine-learning models to detect faulty software code. We revisit past studies and point out potential improvements. Our new study proposes a revised benchmarking configuration.  ...  However, predictive power is heavily influenced by the evaluation metrics and testing procedure (frequentist or Bayesian approach). The classifier results depend on the software project.  ...  It is critical to take advantage of new research findings to continue to improve defect prediction results. Benchmarking study results depend heavily on the choice of statistical procedures.  ... 
doi:10.2139/ssrn.3312070 fatcat:f6cbmif7kvghfkbw5o4skyhcja

Transfer defect learning

Jaechang Nam, Sinno Jialin Pan, Sunghun Kim
2013 2013 35th International Conference on Software Engineering (ICSE)  
some target projects (cross-project defect prediction).  ...  Many software defect prediction approaches have been proposed and most are effective in within-project prediction settings.  ...  This work is supported in part by the project COLAB, funded by Macau Science and Technology Development Fund.  ... 
doi:10.1109/icse.2013.6606584 dblp:conf/icse/NamPK13 fatcat:tfh2nz7lhfdklob3fmftkqdtee

A Systematic Literature Review and Meta-Analysis on Cross Project Defect Prediction

Seyedrebvar Hosseini, Burak Turhan, Dimuthu Gunarathna
2017 IEEE Transactions on Software Engineering  
Cross project defect prediction (CPDP) recently gained considerable attention, yet there are no systematic efforts to analyse existing empirical evidence.  ...  We identified a set of 46 primary studies related to CPDP published until 2015. The community can use this set as a starting point to conduct further research on CPDP.  ...  Olli-Pekka Pakanen from M3S Oulu, who contributed to the initial phase of this literature review by evaluating the primary studies, and the anonymous reviewers, whose invaluable feedback resulted in significant  ... 
doi:10.1109/tse.2017.2770124 fatcat:puodxjkkdjglpdxynaktjiycpu

On the Time-Based Conclusion Stability of Cross-Project Defect Prediction Models [article]

Abdul Ali Bangash, Hareem Sahar, Abram Hindle, Karim Ali
2020 arXiv   pre-print
Will the researcher's conclusions hold a year from now for the same software projects? Perhaps not.  ...  The next release of a product, which is significantly different from its prior release, may drastically change defect prediction performance.  ...  To summarize, the main contributions of this paper are: -A methodology for time-aware evaluation of defect prediction approaches; -A case study of conclusion stability in cross-project defect prediction  ... 
arXiv:1911.06348v3 fatcat:qz5wtmuv7fgtjhleusfavljzc4

Simplification of Training Data for Cross-Project Defect Prediction [article]

Peng He, Bing Li, Deguang Zhang, Yutao Ma
2014 arXiv   pre-print
Cross-project defect prediction (CPDP) plays an important role in estimating the most likely defect-prone software components, especially for new or inactive projects.  ...  Based on an empirical study on 34 releases of 10 open-source projects, we have elaborately compared the prediction performance of different defect predictors built with five well-known classifiers using  ...  Before performing a cross-project defect prediction, we need to select a target data set and its appropriate TDS.  ... 
arXiv:1405.0773v2 fatcat:aokawo22rzh4fhdkaqogro44lm

Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings

S. Lessmann, B. Baesens, C. Mues, S. Pietsch
2008 IEEE Transactions on Software Engineering  
...  defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings.  ...  To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets  ...  We argue that the size of the study, the way predictive performance is measured, as well as the type of statistical test applied to secure conclusions have a major impact on cross-study comparability and  ... 
doi:10.1109/tse.2008.35 fatcat:q2u3jizdqnhirhnttfstovnbpa
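The statistical testing Lessmann et al. call for when comparing many classifiers over many data sets typically starts with a Friedman test over per-dataset ranks. A minimal sketch of the rank statistic (ties are ignored for simplicity, and the variable names are illustrative, not the paper's notation):

```python
def friedman_statistic(scores):
    # scores[j][d] = performance of classifier j on dataset d
    # (higher = better). Returns the Friedman chi-square statistic
    # computed from mean ranks; ties are ignored for simplicity.
    k = len(scores)           # number of classifiers
    n = len(scores[0])        # number of datasets
    mean_ranks = [0.0] * k
    for d in range(n):
        # Rank classifiers within this dataset: rank 1 = best score.
        order = sorted(range(k), key=lambda j: -scores[j][d])
        for rank, j in enumerate(order, start=1):
            mean_ranks[j] += rank / n
    # chi^2_F = 12n / (k(k+1)) * (sum of squared mean ranks - k(k+1)^2 / 4)
    return (12 * n / (k * (k + 1))) * (
        sum(r * r for r in mean_ranks) - k * (k + 1) ** 2 / 4
    )
```

A large statistic rejects the hypothesis that all classifiers perform equally, after which pairwise post-hoc comparisons can be applied.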

Effort and Cost in Software Engineering

Hennie Huijgens, Arie van Deursen, Leandro L. Minku, Chris Lokan
2017 Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering - EASE'17  
Method: We compare two established repositories (ISBSG and EBSPM) comprising almost 700 projects from industry.  ...  Context: The research literature on software development projects usually assumes that effort is a good proxy for cost.  ...  ACKNOWLEDGMENTS Our thanks to Tableau for allowing us to use their BI solution to build the EBSPM-tool and ISBSG for allowing us to use their repository for research purposes.  ... 
doi:10.1145/3084226.3084249 dblp:conf/ease/HuijgensDML17 fatcat:tqadwdy4xrdbpp7a4kitoh3s5a