A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
Bridging the Gap between Naive Bayes and Maximum Entropy Text Classification
English
2007
Proceedings of the 7th International Workshop on Pattern Recognition in Information Systems
unpublished
The naive Bayes and maximum entropy approaches to text classification are typically discussed as completely unrelated techniques. In this paper, however, we show that both approaches are simply two different ways of doing parameter estimation for a common log-linear model of class posteriors. In particular, we show how to map the solution given by maximum entropy into an optimal solution for naive Bayes according to the conditional maximum likelihood criterion.
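The abstract's central claim, that naive Bayes is itself a log-linear model of class posteriors, can be illustrated with a minimal sketch. The toy corpus, label names, and feature choices below are hypothetical and not from the paper; the sketch computes the same posterior two ways: directly as P(c)·∏P(w|c), and in log-linear form exp(λ_c + Σ n_w·λ_{w,c}) with λ_c = log P(c) and λ_{w,c} = log P(w|c), confirming they agree.

```python
import math
from collections import Counter

# Hypothetical toy corpus: (tokens, label) pairs for illustration only.
docs = [(["cheap", "pills"], "spam"),
        (["cheap", "offer"], "spam"),
        (["meeting", "notes"], "ham"),
        (["project", "notes"], "ham")]

labels = ["spam", "ham"]
vocab = sorted({w for toks, _ in docs for w in toks})

# Naive Bayes parameter estimates with add-one (Laplace) smoothing.
prior = {c: sum(1 for _, y in docs if y == c) / len(docs) for c in labels}
counts = {c: Counter(w for toks, y in docs if y == c for w in toks)
          for c in labels}
total = {c: sum(counts[c].values()) for c in labels}

def p_word(w, c):
    """Smoothed class-conditional word probability P(w|c)."""
    return (counts[c][w] + 1) / (total[c] + len(vocab))

def nb_posterior(tokens):
    """Direct naive Bayes: P(c|d) proportional to P(c) * prod_w P(w|c)."""
    score = {c: prior[c] * math.prod(p_word(w, c) for w in tokens)
             for c in labels}
    z = sum(score.values())
    return {c: s / z for c, s in score.items()}

def loglinear_posterior(tokens):
    """Same model in log-linear form:
    P(c|d) proportional to exp(lambda_c + sum_w n_w * lambda_{w,c}),
    where lambda_c = log P(c) and lambda_{w,c} = log P(w|c)."""
    n = Counter(tokens)
    score = {c: math.exp(math.log(prior[c])
                         + sum(cnt * math.log(p_word(w, c))
                               for w, cnt in n.items()))
             for c in labels}
    z = sum(score.values())
    return {c: s / z for c, s in score.items()}

doc = ["cheap", "notes"]
a, b = nb_posterior(doc), loglinear_posterior(doc)
assert all(abs(a[c] - b[c]) < 1e-12 for c in labels)
```

In this reading, the difference between the two approaches lies only in how the weights λ are estimated: naive Bayes sets them from smoothed relative frequencies, while maximum entropy fits them by maximizing conditional likelihood.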
doi:10.5220/0002425700590065
fatcat:p3z663bdxzgr3hivaqlkc2zer4