Automatic Evaluation of Local Topic Quality

Jeffrey Lund, Piper Armstrong, Wilson Fearn, Stephen Cowley, Courtni Byun, Jordan Boyd-Graber, Kevin Seppi
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Recent models, which claim to improve token-level topic assignments, are only validated on global metrics. We elicit human judgments of token-level topic assignments: over a variety of topic model types and parameters,
global metrics agree poorly with human assignments. Since human evaluation is expensive, we propose automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments: an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. This new metric, which we call consistency, should be adopted alongside global metrics such as topic coherence.
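As an illustrative sketch (the paper's exact formulation may differ in details such as windowing or normalization), a switch-based consistency score can be computed as one minus the fraction of adjacent token pairs whose topic assignments differ:

```python
def consistency(topic_assignments):
    """Sketch of a switch-based consistency metric: 1 minus the
    fraction of adjacent token pairs whose topic assignments differ.
    `topic_assignments` is a list of documents, each a list of
    per-token topic ids. (Illustrative; not the authors' code.)"""
    switches = 0
    pairs = 0
    for doc in topic_assignments:
        for a, b in zip(doc, doc[1:]):
            pairs += 1
            if a != b:
                switches += 1
    # A corpus with no adjacent pairs is trivially consistent.
    return 1.0 - switches / pairs if pairs else 1.0
```

For example, a corpus with one topic switch across five adjacent pairs scores 0.8; fewer switches mean higher local topic quality.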
doi:10.18653/v1/p19-1076 dblp:conf/acl/LundAFCBBS19