
Benchmarking cross-project defect prediction approaches with costs metrics [article]

Steffen Herbold
2018 arXiv   pre-print
In recent years, many researchers have focused on the problem of Cross-Project Defect Prediction (CPDP), i.e., the creation of prediction models based on training data from other projects.  ...  Within this paper, we provide a benchmark of 26 CPDP approaches based on cost metrics.  ...  ACKNOWLEDGMENTS The authors would like to thank GWDG for the access to the scientific compute cluster used for the training and evaluation of thousands of defect prediction models.  ... 
arXiv:1801.04107v1 fatcat:7e74bef3wjditgc6zyoddxvheu

A Ranking-Oriented Approach to Cross-Project Software Defect Prediction: An Empirical Study

Guoan You, Yutao Ma
2016 Proceedings of the 28th International Conference on Software Engineering and Knowledge Engineering  
In recent years, cross-project defect prediction (CPDP) has become very popular in the field of software defect prediction.  ...  methods in both CPDP and WPDP (within-project defect prediction) scenarios in terms of two common evaluation metrics for rank correlation.  ...  Because some prior studies [2, 10, 13] have shown that defect prediction performs well within projects and cross projects when there is a sufficient amount of training data, in this paper we also want  ... 
doi:10.18293/seke2016-047 dblp:conf/seke/YouM16 fatcat:lml2jrtnofhfrer7kenjrlrzhu

Search Based Training Data Selection For Cross Project Defect Prediction

Seyedrebvar Hosseini, Burak Turhan, Mika Mäntylä
2016 Proceedings of the The 12th International Conference on Predictive Models and Data Analytics in Software Engineering - PROMISE 2016  
We use 13 datasets from the PROMISE repository in order to compare the performance of GIS with benchmark CPDP methods, namely (NN)-filter and naive CPDP, as well as with within project defect prediction (WPDP)  ...  Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP).  ...  RQ1: How is the performance of GIS compared with benchmark cross project defect prediction approaches?  ... 
doi:10.1145/2972958.2972964 dblp:conf/promise/HosseiniTM16 fatcat:r7lal6ueknd57g4ckdtx2vtt4e

Evaluating software defect prediction performance: an updated benchmarking study [article]

Libo Li, Stefan Lessmann, Bart Baesens
2019 arXiv   pre-print
Our findings suggest that predictive accuracy is generally good. However, predictive power is heavily influenced by the evaluation metrics and testing procedure (frequentist or Bayesian approach).  ...  Our new study proposes a revised benchmarking configuration. The configuration considers many new dimensions, such as class distribution sampling, evaluation metrics, and testing procedures.  ...  Our benchmarking study shows that software defect prediction should be assessed using extensive evaluation metrics and statistical tests.  ... 
arXiv:1901.01726v1 fatcat:72htkewgdjhptn3mzllqerzsoq

The use of cross-company fault data for the software fault prediction problem

Çağatay Çatal
2016 Turkish Journal of Electrical Engineering and Computer Sciences  
We investigated how to use cross-company (CC) data in software fault prediction and in predicting the fault labels of software modules when there is not enough fault data.  ...  This paper involves case studies of NASA projects that can be accessed from the PROMISE repository.  ...  Benchmarking Our benchmarking used two techniques, naive Bayes with logNum and the threshold-based fault prediction approach, on six NASA datasets.  ... 
doi:10.3906/elk-1409-137 fatcat:xfgy5nt2bfcmjltap7fo2m6ws4

Evaluating Software Defect Prediction Performance: An Updated Benchmarking Study

Libo Li, Stefan Lessmann, Bart Baesens
2019 Social Science Research Network  
Our findings suggest that predictive accuracy is generally good. However, predictive power is heavily influenced by the evaluation metrics and testing procedure (frequentist or Bayesian approach).  ...  Our new study proposes a revised benchmarking configuration. The configuration considers many new dimensions, such as class distribution sampling, evaluation metrics, and testing procedures.  ...  Our benchmarking study shows that software defect prediction should be assessed using extensive evaluation metrics and statistical tests.  ... 
doi:10.2139/ssrn.3312070 fatcat:f6cbmif7kvghfkbw5o4skyhcja

Effort and Cost in Software Engineering

Hennie Huijgens, Arie van Deursen, Leandro L. Minku, Chris Lokan
2017 Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering - EASE'17  
Objectives: We determine similarities and differences between size, effort, cost, duration, and number of defects of software projects.  ...  Context: The research literature on software development projects usually assumes that effort is a good proxy for cost.  ...  In particular, we observe that the two repositories had projects with similar duration and number of defects (independent variables), but different costs: relevancy filtering and locality-based approaches  ... 
doi:10.1145/3084226.3084249 dblp:conf/ease/HuijgensDML17 fatcat:tqadwdy4xrdbpp7a4kitoh3s5a

A Systematic Literature Review and Meta-Analysis on Cross Project Defect Prediction

Seyedrebvar Hosseini, Burak Turhan, Dimuthu Gunarathna
2017 IEEE Transactions on Software Engineering  
Cross project defect prediction (CPDP) has recently gained considerable attention, yet there have been no systematic efforts to analyse existing empirical evidence.  ...  Objective: To synthesise the literature to understand the state-of-the-art in CPDP with respect to metrics, models, data approaches, datasets and associated performances.  ...  The training data can be from the same project, i.e., within project defect prediction (WPDP), or (in the majority or the entirety) from other projects, i.e., cross project defect prediction (CPDP).  ... 
doi:10.1109/tse.2017.2770124 fatcat:puodxjkkdjglpdxynaktjiycpu

Transfer defect learning

Jaechang Nam, Sinno Jialin Pan, Sunghun Kim
2013 2013 35th International Conference on Software Engineering (ICSE)  
some target projects (cross-project defect prediction).  ...  Many software defect prediction approaches have been proposed and most are effective in within-project prediction settings.  ...  This work is supported in part by the project COLAB, funded by Macau Science and Technology Development Fund.  ... 
doi:10.1109/icse.2013.6606584 dblp:conf/icse/NamPK13 fatcat:tfh2nz7lhfdklob3fmftkqdtee

A benchmark study on the effectiveness of search-based data selection and feature selection for cross project defect prediction

Seyedrebvar Hosseini, Burak Turhan, Mika Mäntylä
2018 Information and Software Technology  
Overall, the performance of GIS is comparable to that of within project defect prediction (WPDP) benchmarks, i.e. CV and PR.  ...  Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP).  ...  Positive effect sizes point to an effect size in favor of GIS.  ...  Table 5 presents the results of GIS and cross project benchmarks.  ... 
doi:10.1016/j.infsof.2017.06.004 fatcat:udnnpksahbcohomqqogcnq4t64

Towards building a universal defect prediction model

Feng Zhang, Audris Mockus, Iman Keivanloo, Ying Zou
2014 Proceedings of the 11th Working Conference on Mining Software Repositories - MSR 2014  
To predict files with defects, a suitable prediction model must be built for a software project from either itself (within-project) or other projects (cross-project).  ...  A universal model could also be interpreted as a basic relationship between software metrics and defects.  ...  The aforementioned approaches are able to improve the performance of cross-project defect prediction models. However, they use only a partial dataset and end up with multiple models.  ... 
doi:10.1145/2597073.2597078 dblp:conf/msr/0001MKZ14 fatcat:xspbyv43urfg7eg4hbhtda22wu

Unsupervised Deep Domain Adaptation for Heterogeneous Defect Prediction

Lina Gong, Shujuan Jiang, Qiao Yu, Li Jiang
2019 IEICE transactions on information and systems  
Heterogeneous defect prediction (HDP) aims to detect the largest number of defective software modules in one project by using historical data collected from other projects with different metrics.  ...  Extensive experiments on 18 public projects from four datasets indicate that the proposed approach can build an effective prediction model for heterogeneous defect prediction (HDP) and outperforms the  ...  benchmark projects.  ... 
doi:10.1587/transinf.2018edp7289 fatcat:uzx4clv4gbdfzmzjrc47uctun4

Evaluating defect prediction approaches: a benchmark and an extensive comparison

Marco D'Ambros, Michele Lanza, Romain Robbes
2011 Empirical Software Engineering  
We present a benchmark for defect prediction, in the form of a publicly available dataset consisting of several software systems, and provide an extensive comparison of well-known bug prediction approaches, together with novel approaches we devised.  ...  We acknowledge the financial support of the Swiss National Science foundation for the project "SOSYA" (SNF Project No. 132175).  ... 
doi:10.1007/s10664-011-9173-9 fatcat:6qezbnnfkbawdajfscdzjwvrhi

Insights on Research Techniques towards Cost Estimation in Software Design

Praveen Naik, Shantaram Nayak
2017 International Journal of Electrical and Computer Engineering (IJECE)  
Software cost estimation is one of the most challenging tasks in project management, as it is key to ensuring smooth development operations and target achievement.  ...  Various standard tools and techniques for cost estimation have evolved and are practiced in industry at present.  ...  A database from NASA is used with multiple metrics in order to perform cross-validation followed by accuracy testing.  ... 
doi:10.11591/ijece.v7i5.pp2883-2894 fatcat:ous4kahpgfd6vd3llhcvx6wptm

An empirical study on software defect prediction with a simplified metric set

Peng He, Bing Li, Xiao Liu, Jun Chen, Yutao Ma
2015 Information and Software Technology  
However, the rules for making an appropriate decision between within- and cross-project defect prediction when available historical data are insufficient remain unclear.  ...  cases, the minimum metric subset can be identified to facilitate the procedure of general defect prediction with acceptable loss of prediction precision in practice.  ...  Utilizing data across projects to build defect prediction models is commonly referred to as Cross-Project Defect Prediction (CPDP).  ... 
doi:10.1016/j.infsof.2014.11.006 fatcat:wq6dgwcnjnantn5it62zvyltfa
Showing results 1 — 15 of 6,040 results