Learning Interpretable and Discrete Representations with Adversarial Training for Unsupervised Text Classification [article]

Yau-Shian Wang and Hung-Yi Lee and Yun-Nung Chen
2020, arXiv pre-print
Learning continuous representations from unlabeled textual data has been increasingly studied for the benefit of semi-supervised learning. Although discrete representations are easier to interpret, learning them from unlabeled textual data has not been widely explored because such models are difficult to train. This work proposes TIGAN, which learns to encode texts into two disentangled representations: a discrete code and a continuous noise vector, where the discrete code represents interpretable topics and the noise controls the variance within those topics. The discrete code learned by TIGAN can be used for unsupervised text classification. Compared with other unsupervised baselines, TIGAN achieves superior performance on six different corpora, and its performance is on par with a recently proposed weakly supervised text classification method. The topical words extracted to represent the latent topics show that TIGAN learns coherent and highly interpretable topics.
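The disentangled latent structure described above can be illustrated with a minimal sketch: the generator's input is the concatenation of a one-hot discrete topic code and a continuous Gaussian noise vector. The dimensions (`num_topics`, `noise_dim`) and the sampling scheme here are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def sample_latent(batch_size, num_topics=10, noise_dim=50, seed=0):
    """Sample a TIGAN-style disentangled latent: a one-hot discrete
    topic code concatenated with continuous Gaussian noise.
    All dimensions here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    topics = rng.integers(0, num_topics, size=batch_size)
    code = np.eye(num_topics)[topics]              # discrete one-hot topic code
    noise = rng.standard_normal((batch_size, noise_dim))  # intra-topic variation
    return np.concatenate([code, noise], axis=1)   # generator input

z = sample_latent(4)
print(z.shape)  # (4, 60): 10 code dims + 50 noise dims
```

At inference time, the inverse mapping (text to discrete code) is what enables unsupervised classification: each text is assigned the topic whose code the encoder recovers.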
arXiv:2004.13255v1