A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Domain Adaptation for Sequence Labeling Tasks with a Probabilistic Language Adaptation Model
2013
International Conference on Machine Learning
In this paper, we propose to address domain adaptation for sequence labeling tasks via distributed representation learning, using a log-bilinear language adaptation model. The proposed neural probabilistic language model simultaneously models two different but related data distributions in the source and target domains, based on induced distributed representations that encode both generalizable and domain-specific latent features. We then use the learned dense real-valued
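The abstract describes a log-bilinear language model whose word representations capture both cross-domain and domain-specific features. The sketch below illustrates the general log-bilinear prediction step with one shared and one per-domain embedding table; the specific factorization (`R_shared` plus `R_domain`), the toy sizes, and all variable names are assumptions for illustration, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, ctx = 20, 8, 3  # toy vocab size, embedding dim, context length

# Assumed factorization: each word vector is a shared (generalizable) part
# plus a domain-specific part, one table per domain.
R_shared = rng.normal(scale=0.1, size=(V, d))
R_domain = {dom: rng.normal(scale=0.1, size=(V, d)) for dom in ("source", "target")}
C = rng.normal(scale=0.1, size=(ctx, d, d))  # per-position context weight matrices
b = np.zeros(V)                              # per-word bias

def word_vec(w, dom):
    """Embedding of word w in the given domain: shared + domain-specific."""
    return R_shared[w] + R_domain[dom][w]

def next_word_probs(context, dom):
    """Log-bilinear step: linearly combine context embeddings into a
    predicted vector q, then score each vocabulary word by dot product."""
    q = sum(C[i] @ word_vec(w, dom) for i, w in enumerate(context))
    scores = np.array([word_vec(w, dom) @ q for w in range(V)]) + b
    e = np.exp(scores - scores.max())  # softmax over the vocabulary
    return e / e.sum()

p = next_word_probs([1, 5, 7], "source")  # a valid distribution over V words
```

Training such a model on both domains jointly would push shared structure into `R_shared` while the per-domain tables absorb domain-specific variation, matching the abstract's description at a high level.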
dblp:conf/icml/XiaoG13
fatcat:e5ayddancbe4vb2jemwmgbqqja