Re-Ranking Words to Improve Interpretability of Automatically Generated Topics

Areej Alokaili, Nikolaos Aletras, Mark Stevenson
2019 Proceedings of the 13th International Conference on Computational Semantics - Long Papers  
orcid.org/0000-0002-9483-6006

Abstract

Topic models, such as LDA, are widely used in Natural Language Processing. Making their output interpretable is an important area of research, with applications such as the enhancement of exploratory search interfaces and the development of interpretable machine learning models. Conventionally, topics are represented by their n most probable words; however, these representations are often difficult for humans to interpret. This paper explores the re-ranking of topic words to generate more interpretable topic representations. A range of approaches is compared and evaluated in two experiments. The first uses crowdworkers to associate topics represented by different word rankings with related documents. The second is an automatic approach based on a document retrieval task applied across multiple domains. Results of both experiments demonstrate that re-ranking words improves topic interpretability and that the most effective re-ranking schemes are those which combine information about the importance of words within topics with their relative frequency in the entire corpus. In addition, the close correlation between the results of the two evaluation approaches suggests that the automatic method proposed here could be used to evaluate re-ranking methods without the need for human judgements.
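The abstract's finding — that the best re-ranking schemes combine a word's within-topic importance with its relative frequency in the whole corpus — can be illustrated with a relevance-style weighted score. This is a minimal sketch, not the paper's exact schemes: the function name, the toy probabilities, and the specific combination `lam * log p(w|t) + (1 - lam) * log(p(w|t)/p(w))` are illustrative assumptions.

```python
import math

def rerank_topic_words(topic_word_probs, corpus_word_probs, lam=0.6, top_n=10):
    """Re-rank one topic's words by mixing within-topic probability
    with "lift" over the word's overall corpus frequency.

    topic_word_probs:  dict word -> p(w | topic)
    corpus_word_probs: dict word -> p(w) across the entire corpus
    lam: interpolation weight; lam=1 ranks purely by p(w | topic),
         smaller lam down-weights words that are common everywhere.
    """
    scored = []
    for w, p_wt in topic_word_probs.items():
        p_w = corpus_word_probs.get(w, 1e-12)  # smoothing for unseen words
        score = lam * math.log(p_wt) + (1 - lam) * math.log(p_wt / p_w)
        scored.append((w, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in scored[:top_n]]

# Toy example: "the" and "system" have high topic probability but are
# frequent corpus-wide, so re-ranking promotes the topic-specific words.
topic = {"system": 0.08, "neural": 0.05, "network": 0.05, "the": 0.10}
corpus = {"system": 0.01, "neural": 0.001, "network": 0.002, "the": 0.05}
print(rerank_topic_words(topic, corpus, lam=0.5, top_n=3))
# -> ['neural', 'network', 'system']
```

With `lam=0.5` the score reduces to `log p(w|t) - 0.5 * log p(w)`, so "neural" (rare corpus-wide) overtakes "the" despite the latter's higher topic probability, mirroring the interpretability gains the abstract reports.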
doi:10.18653/v1/w19-0404 dblp:conf/iwcs/AlokailiAS19 fatcat:lewikfpkhvgzrc2x65y34qh5la