Training cost-sensitive neural networks with methods addressing the class imbalance problem
IEEE Transactions on Knowledge and Data Engineering
This paper empirically studies the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both over-sampling and under-sampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving moves the output threshold toward inexpensive classes so that examples with higher costs become harder to misclassify. Moreover, hard-ensemble and soft-ensemble, i.e., combinations of the above techniques via hard or soft voting schemes, are also tested. Twenty-one UCI data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multi-class tasks is more difficult than with two-class tasks, and that a higher degree of class imbalance may increase the difficulty. They also reveal that almost all the techniques are effective on two-class tasks, while most are ineffective and may even have a negative effect on multi-class tasks. Overall, threshold-moving and soft-ensemble are relatively good choices for training cost-sensitive neural networks. The empirical study also suggests that some methods believed to be effective in addressing the class imbalance problem may in fact only be effective when learning with imbalanced two-class data sets.
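The threshold-moving idea described above can be sketched as follows: rescale a trained network's class probabilities by per-class misclassification costs before taking the argmax, so that costly classes win ties they would otherwise lose. This is a minimal illustration under assumed names, not the paper's exact procedure; the cost vector `costs` and the normalization step are assumptions for the sketch.

```python
import numpy as np

def threshold_moving(probs, costs):
    """Rescale predicted class probabilities by misclassification
    costs and take the argmax, so examples of costly classes become
    harder to misclassify (a sketch of threshold-moving).

    probs: (n_samples, n_classes) network outputs (rows sum to 1)
    costs: (n_classes,) cost of misclassifying each class
    """
    scaled = probs * costs                      # broadcast over rows
    scaled /= scaled.sum(axis=1, keepdims=True) # renormalize (optional)
    return scaled.argmax(axis=1)

# Hypothetical two-class example: class 1 is 3x costlier to miss.
probs = np.array([[0.7, 0.3],
                  [0.6, 0.4]])
costs = np.array([1.0, 3.0])
preds = threshold_moving(probs, costs)  # both flip to class 1
```

With uniform costs the function reduces to a plain argmax; here the cost-weighted scores (0.7 vs. 0.9, and 0.6 vs. 1.2) move both predictions to the expensive class.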