A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2005.
Current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long-distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using Gibbs sampling in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference.

doi:10.3115/1219840.1219885 dblp:conf/acl/FinkelGM05 fatcat:cktkw2haxzbz5klcv2elfr4d3i
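To illustrate the core idea, here is a minimal sketch of Gibbs sampling over a chain-structured sequence model: each position's label is repeatedly resampled from its conditional distribution given its neighbors and its local (emission) evidence, and long-run visit frequencies estimate the marginals. The states, transition potentials, and emission scores below are hypothetical toy numbers, not taken from the paper.

```python
import random

# Toy chain model with two hidden states. All potentials are hypothetical.
STATES = [0, 1]
trans = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.7}
# One emission score per (position, state), standing in for observed evidence.
emit = [{0: 0.9, 1: 0.1}, {0: 0.2, 1: 0.8}, {0: 0.6, 1: 0.4}]

def conditional(pos, labels):
    """P(label at pos | all other labels): proportional to the product of
    the factors touching pos (left edge, right edge, emission)."""
    weights = []
    for s in STATES:
        w = emit[pos][s]
        if pos > 0:
            w *= trans[(labels[pos - 1], s)]
        if pos < len(labels) - 1:
            w *= trans[(s, labels[pos + 1])]
        weights.append(w)
    z = sum(weights)
    return [w / z for w in weights]

def gibbs(n_positions, sweeps=2000, seed=0):
    """Run Gibbs sweeps; return estimated per-position state marginals."""
    rng = random.Random(seed)
    labels = [rng.choice(STATES) for _ in range(n_positions)]
    counts = [[0, 0] for _ in range(n_positions)]
    for _ in range(sweeps):
        for pos in range(n_positions):  # one sweep resamples every position
            probs = conditional(pos, labels)
            labels[pos] = 0 if rng.random() < probs[0] else 1
        for pos, s in enumerate(labels):
            counts[pos][s] += 1
    return [[c / sweeps for c in row] for row in counts]

marginals = gibbs(3)
```

Because each resampling step only needs the factors that touch one position, the same loop still works when extra non-local factors are multiplied into `conditional`, which is exactly what exact Viterbi decoding cannot accommodate.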