QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
[article]
2018
arXiv pre-print
Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions.
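The abstract describes an encoder built only from convolutions (local context) and self-attention (global context). As a rough illustration only, the sketch below shows one QANet-style encoder block in PyTorch; the layer counts, dimensions, and the omission of positional encodings are simplifying assumptions, not the paper's exact configuration.

# Minimal sketch (not the authors' implementation) of a QANet-style encoder
# block: depthwise-separable 1-D convolutions for local interactions,
# followed by multi-head self-attention for global interactions.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise 1-D convolution over the sequence dimension."""

    def __init__(self, dim: int, kernel_size: int = 7):
        super().__init__()
        self.depthwise = nn.Conv1d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.pointwise = nn.Conv1d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); convolve along seq_len, keep the shape.
        y = self.pointwise(self.depthwise(x.transpose(1, 2)))
        return y.transpose(1, 2)


class EncoderBlock(nn.Module):
    """Convolutions (local) then self-attention (global), each sublayer
    wrapped with layer normalization and a residual connection."""

    def __init__(self, dim: int = 128, num_convs: int = 2, heads: int = 8):
        super().__init__()
        self.conv_norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_convs)])
        self.convs = nn.ModuleList([DepthwiseSeparableConv(dim) for _ in range(num_convs)])
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn_norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for norm, conv in zip(self.conv_norms, self.convs):
            x = x + conv(norm(x))                           # local interactions
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]   # global interactions
        return x + self.ffn(self.ffn_norm(x))


# Example: encode a batch of 2 sequences of 32 token embeddings of width 128.
# out = EncoderBlock()(torch.randn(2, 32, 128))  # -> shape (2, 32, 128)

Because nothing in the block is recurrent, every position is processed in parallel, which is the source of the training and inference speedups the abstract claims over RNN-based readers.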
arXiv:1804.09541v1
fatcat:clcqk45vm5dddo5tjjs7hkxggy