Universal Target Learning: An Efficient and Effective Technique for Semi-Naive Bayesian Learning

SiQi Gao, Hua Lou, LiMin Wang, Yang Liu, Tiehu Fan
Entropy, 2019
To mitigate the negative effect of the classification bias caused by overfitting, semi-naive Bayesian techniques seek to mine the implicit dependency relationships in unlabeled testing instances. By redefining some criteria from information theory, Target Learning (TL) proposes to build, for each unlabeled testing instance P, a Bayesian network classifier BNC_P that is independent of and complementary to the classifier BNC_T learned from the training data T. In this paper, we extend TL to Universal Target Learning (UTL), which identifies redundant correlations between attribute values and maximizes the bits encoded in the Bayesian network in terms of log-likelihood. We take the k-dependence Bayesian classifier as an example to investigate the effect of UTL on BNC_P and BNC_T. Extensive experimental results on 40 UCI datasets show that UTL helps BNC improve its generalization performance.
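To make the baseline concrete, the k-dependence Bayesian classifier referred to in the abstract can be sketched in a minimal form: attributes are ranked by mutual information with the class, and each attribute is allowed up to k attribute parents chosen by conditional mutual information. The sketch below (our own simplified illustration with k = 1, discrete attributes, and Laplace smoothing, not the paper's UTL implementation; class and function names are hypothetical) shows the structure-learning step and the log-likelihood scoring used at prediction time.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information I(X; Y) between two discrete sequences."""
    n = len(x)
    mi = 0.0
    for xv, yv in set(zip(x, y)):
        pxy = sum(1 for a, b in zip(x, y) if a == xv and b == yv) / n
        px = sum(1 for a in x if a == xv) / n
        py = sum(1 for b in y if b == yv) / n
        mi += pxy * np.log(pxy / (px * py))
    return mi

def cond_mutual_info(x, y, z):
    """Empirical conditional mutual information I(X; Y | Z)."""
    n = len(x)
    cmi = 0.0
    for zv in set(z):
        idx = [i for i in range(n) if z[i] == zv]
        xs = [x[i] for i in idx]
        ys = [y[i] for i in idx]
        cmi += (len(idx) / n) * mutual_info(xs, ys)
    return cmi

class KDB1:
    """k-dependence Bayesian classifier with k = 1 (each attribute gets the
    class plus at most one attribute parent)."""

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        n, d = self.X.shape
        self.classes = sorted(set(y))
        # Rank attributes by mutual information with the class variable.
        order = sorted(range(d), key=lambda i: -mutual_info(self.X[:, i], self.y))
        # Pick each attribute's parent among higher-ranked attributes by CMI.
        self.parent = {}
        for rank, i in enumerate(order):
            if rank == 0:
                self.parent[i] = None  # highest-ranked attribute: class only
            else:
                self.parent[i] = max(
                    order[:rank],
                    key=lambda j: cond_mutual_info(self.X[:, i], self.X[:, j], self.y))
        self.vals = [sorted(set(self.X[:, i])) for i in range(d)]
        return self

    def _cond_prob(self, i, xi, c, xp):
        # P(X_i = xi | C = c [, X_parent = xp]) with Laplace smoothing.
        mask = self.y == c
        if self.parent[i] is not None:
            mask &= self.X[:, self.parent[i]] == xp
        num = np.sum(mask & (self.X[:, i] == xi)) + 1
        den = np.sum(mask) + len(self.vals[i])
        return num / den

    def predict(self, x):
        # Score each class by the log-likelihood of the instance and
        # return the maximizer.
        best, best_ll = None, -np.inf
        for c in self.classes:
            ll = np.log((np.sum(self.y == c) + 1)
                        / (len(self.y) + len(self.classes)))
            for i in range(len(x)):
                xp = x[self.parent[i]] if self.parent[i] is not None else None
                ll += np.log(self._cond_prob(i, x[i], c, xp))
            if ll > best_ll:
                best, best_ll = c, ll
        return best
```

TL and UTL build on this by learning an additional, instance-specific network for each testing instance and combining its log-likelihood with that of the network learned from the training data; the sketch covers only the shared k-dependence baseline.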
doi:10.3390/e21080729 pmid:33267443 pmcid:PMC7515258