Incorporating non-local information into information extraction systems by Gibbs sampling

Jenny Rose Finkel, Trond Grenager, Christopher Manning
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), 2005
Current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using Gibbs sampling in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 7% over state-of-the-art systems on two established information extraction tasks.
doi:10.3115/1219840.1219885 dblp:conf/acl/FinkelGM05
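
As a rough illustration of the idea summarized in the abstract, the sketch below runs Gibbs sampling over a label sequence: each position is resampled in turn from a conditional that combines a local sequence-model score with a non-local label-consistency term. This is a minimal sketch under stated assumptions, not the authors' implementation (which plugs a trained CRF's conditional distributions into the sampler); the names local_score, consistency_score, LABELS and the toy scores are all illustrative.

# Minimal, assumption-laden sketch of Gibbs sampling over a label sequence.
# Not the paper's code: the scoring functions are toy stand-ins.
import math
import random

LABELS = ["O", "PER", "LOC", "ORG"]   # toy NER label set (assumption)

def local_score(tokens, labels, i, label):
    # Stand-in for a local CRF/CMM potential; a real system would use
    # trained emission and transition features here.
    prev = labels[i - 1] if i > 0 else "O"
    return (0.5 if label != "O" else 1.0) + (0.2 if label == prev else 0.0)

def consistency_score(tokens, labels, i, label):
    # Hypothetical non-local term: reward labeling repeated tokens consistently.
    same = [labels[j] for j, t in enumerate(tokens) if t == tokens[i] and j != i]
    return sum(1.0 if other == label else -1.0 for other in same)

def gibbs_decode(tokens, n_sweeps=200, temperature=1.0):
    labels = ["O"] * len(tokens)          # could also start from Viterbi output
    for _ in range(n_sweeps):
        for i in range(len(tokens)):
            # Resample position i conditioned on all the other current labels.
            weights = [math.exp((local_score(tokens, labels, i, y)
                                 + consistency_score(tokens, labels, i, y))
                                / temperature)
                       for y in LABELS]
            labels[i] = random.choices(LABELS, weights=weights)[0]
    return labels

print(gibbs_decode("John met John in Paris".split()))

Because each resampling step conditions on the full current labeling, the non-local consistency term can look at arbitrarily distant positions (here, other occurrences of the same token) without breaking inference, which is the property that exact Viterbi decoding over local features cannot provide.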