Depth-bounding is effective: Improvements and evaluation of unsupervised PCFG induction

Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, Lane Schwartz
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
There have been several recent attempts to improve the accuracy of grammar induction systems by bounding the recursive complexity of the induction model (Ponvert et al., 2011; Noji and Johnson, 2016; Shain et al., 2016; Jin et al., 2018). Modern depth-bounded grammar inducers have been shown to be more accurate than early unbounded PCFG inducers, but this technique has never been compared against unbounded induction within the same system, in part because most previous depth-bounding models are built around sequence models, the complexity of which grows exponentially with the maximum allowed depth. The present work instead applies depth bounds within a chart-based Bayesian PCFG inducer (Johnson et al., 2007b), where bounding can be switched on and off, and samples trees with and without bounding. Results show that depth-bounding is indeed significantly effective in limiting the search space of the inducer and thereby increasing the accuracy of the resulting parsing model. Moreover, parsing results on English, Chinese and German show that this bounded model, with a new inference technique, produces parse trees more accurately than, or competitively with, state-of-the-art constituency-based grammar induction models.
doi:10.18653/v1/d18-1292 dblp:conf/emnlp/JinDMSS18
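To make the notion of a depth bound concrete, the sketch below (not from the paper) filters candidate binary trees by an approximate center-embedding depth and allows the bound to be switched off, mirroring the bounded-vs.-unbounded comparison described in the abstract. The functions `embedding_depth` and `within_bound`, and the simplified depth convention they implement, are illustrative assumptions rather than the authors' actual inference procedure.

```python
# Hypothetical sketch: bounding candidate parse trees by an approximate
# center-embedding (left-corner memory) depth. This is NOT the paper's
# implementation; it only illustrates switching a depth bound on or off
# when accepting trees from an inducer.

from typing import Optional, Tuple, Union

# A binary tree is either a terminal string or a (left, right) pair.
Tree = Union[str, Tuple["Tree", "Tree"]]

def embedding_depth(tree: Tree, reached_as_right_child: bool = False) -> int:
    """Approximate center-embedding depth of a binary tree.

    Under the simplified convention assumed here, depth increases by one
    each time we descend into a left subtree whose parent was itself
    reached as a right child (a center embedding).
    """
    if isinstance(tree, str):  # terminal: no additional memory needed
        return 0
    left, right = tree
    # Opening a new left constituent under an incomplete (right-descended)
    # constituent costs one unit of memory in this approximation.
    left_depth = embedding_depth(left, False) + (1 if reached_as_right_child else 0)
    right_depth = embedding_depth(right, True)
    return max(left_depth, right_depth)

def within_bound(tree: Tree, max_depth: Optional[int]) -> bool:
    """True if the tree respects the depth bound.

    Passing max_depth=None disables bounding, analogous to running the
    inducer with the depth bound switched off.
    """
    return max_depth is None or embedding_depth(tree) <= max_depth

if __name__ == "__main__":
    flat = (("a", "b"), "c")            # no center embedding
    embedded = ("a", (("b", "c"), "d"))  # one center embedding
    print(within_bound(flat, 0))        # True
    print(within_bound(embedded, 0))    # False
    print(within_bound(embedded, None)) # True (unbounded)
```

In the actual model, such a bound is imposed inside the chart rather than as a post-hoc filter, so that trees exceeding the maximum depth never enter the sampler's search space.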