Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model

Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, Vadim Sheinin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency or constituent trees. In this paper, we first propose to use the syntactic graph to represent three types of syntactic information, i.e., word order, dependency and constituency features; then employ a graph-to-sequence model to encode the syntactic graph and decode a logical form. Experimental results on benchmark datasets show that our model is comparable to the state-of-the-art on Jobs640, ATIS, and Geo880. Experimental results on adversarial examples demonstrate that the robustness of the model is also improved by encoding more syntactic information.
doi:10.18653/v1/d18-1110 dblp:conf/emnlp/XuWWYCS18
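
A minimal sketch, assuming nothing beyond the abstract: the Python snippet below illustrates how the three kinds of syntactic information the paper names (word-order links, dependency edges, constituency edges) could be merged into a single graph over word and constituent nodes before being fed to a graph encoder. The toy sentence, its parses, and the function build_syntactic_graph are hypothetical, not the authors' code; a real system would obtain the parses from a syntactic parser.

# Hedged sketch of the "syntactic graph" idea: one adjacency list that
# combines word-order, dependency, and constituency edges. All inputs
# here are hard-coded illustrations, not the paper's actual pipeline.
from collections import defaultdict

def build_syntactic_graph(words, dep_edges, const_edges):
    """Return an adjacency list over word nodes ('w', i) and
    constituent nodes ('c', label).

    words       -- sentence tokens; word-order edges link neighbors
    dep_edges   -- (head_index, dependent_index, relation) triples
    const_edges -- (parent_node, child_node) pairs from a constituency
                   tree, where leaves are word nodes
    """
    graph = defaultdict(list)

    # 1. Word-order edges: connect each token to its right neighbor.
    for i in range(len(words) - 1):
        graph[('w', i)].append((('w', i + 1), 'next'))

    # 2. Dependency edges: head -> dependent, labeled with the relation.
    for head, dep, rel in dep_edges:
        graph[('w', head)].append((('w', dep), rel))

    # 3. Constituency edges: nonterminal -> child (nonterminal or word).
    for parent, child in const_edges:
        graph[parent].append((child, 'const'))

    return graph

# Toy example (hypothetical parses): "list flights to boston"
words = ['list', 'flights', 'to', 'boston']
dep_edges = [(0, 1, 'dobj'), (1, 2, 'prep'), (2, 3, 'pobj')]
const_edges = [
    (('c', 'VP'), ('w', 0)), (('c', 'VP'), ('c', 'NP')),
    (('c', 'NP'), ('w', 1)), (('c', 'NP'), ('c', 'PP')),
    (('c', 'PP'), ('w', 2)), (('c', 'PP'), ('w', 3)),
]

graph = build_syntactic_graph(words, dep_edges, const_edges)
for node, edges in graph.items():
    print(node, '->', edges)

In a graph-to-sequence model of the kind the abstract describes, a graph neural network would compute node embeddings over such a structure and an attention-based decoder would emit the logical form token by token.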