Refining Raw Sentence Representations for Textual Entailment Recognition via Attention

Jorge Balazs, Edison Marrese-Taylor, Pablo Loyola, Yutaka Matsuo
2017 Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP  
In this paper we present the model used by the team Rivercorners for the 2017 RepEval shared task. First, our model separately encodes a pair of sentences into variable-length representations by using a bidirectional LSTM. Later, it creates fixed-length raw representations by means of simple aggregation functions, which are then refined using an attention mechanism. Finally, it combines the refined representations of both sentences into a single vector to be used for classification. With this model we obtained test accuracies of 72.057% and 72.055% in the matched and mismatched evaluation tracks respectively, outperforming the LSTM baseline and obtaining performance similar to a model that relies on shared information between sentences (ESIM). When using an ensemble, both accuracies increased to 72.247% and 72.827% respectively.
doi:10.18653/v1/w17-5310 dblp:conf/repeval/BalazsMLM17
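
The abstract describes a four-step pipeline: BiLSTM encoding of each sentence, aggregation into a fixed-length raw vector, attention-based refinement of that vector, and combination of the two refined vectors for classification. Below is a minimal PyTorch sketch of such a pipeline, assuming hypothetical choices not specified here: mean pooling as the aggregation function, an additive scorer that compares each timestep with the raw vector, and a concatenation/difference/product combination layer. It illustrates the general architecture, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionRefinedEncoder(nn.Module):
    """BiLSTM -> pooled raw sentence vector -> attention-refined vector.
    Hidden sizes, mean pooling, and the scorer are illustrative assumptions."""

    def __init__(self, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        # Scores each timestep against the raw (pooled) sentence vector.
        self.attn = nn.Linear(4 * hidden_dim, 1)

    def forward(self, embedded):                 # (batch, seq_len, embed_dim)
        states, _ = self.bilstm(embedded)        # (batch, seq_len, 2*hidden_dim)
        raw = states.mean(dim=1)                 # fixed-length raw representation
        expanded = raw.unsqueeze(1).expand_as(states)
        scores = self.attn(torch.cat([states, expanded], dim=-1))
        weights = F.softmax(scores, dim=1)       # attention over timesteps
        refined = (weights * states).sum(dim=1)  # refined representation
        return refined


class EntailmentClassifier(nn.Module):
    """Combines the refined premise/hypothesis vectors into a single vector
    for 3-way NLI classification (entailment / neutral / contradiction)."""

    def __init__(self, embed_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.encoder = AttentionRefinedEncoder(embed_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, premise_emb, hypothesis_emb):
        u = self.encoder(premise_emb)
        v = self.encoder(hypothesis_emb)
        # Concatenation, absolute difference, and element-wise product:
        # a common combination scheme, assumed here for illustration.
        pair = torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)
        return self.classifier(pair)


if __name__ == "__main__":
    # Toy usage with random embeddings in place of pre-trained word vectors.
    model = EntailmentClassifier()
    premise = torch.randn(2, 12, 300)     # (batch, seq_len, embed_dim)
    hypothesis = torch.randn(2, 9, 300)
    logits = model(premise, hypothesis)
    print(logits.shape)                   # torch.Size([2, 3])
```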