A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
Separating Answers from Queries for Neural Reading Comprehension
[article]
2016
arXiv
pre-print
We present a novel neural architecture for answering queries, designed to optimally leverage explicit support in the form of query-answer memories. ...
On recent benchmark datasets for reading comprehension, our model achieves state-of-the-art results. ...
Acknowledgments We thank Thomas Demeester, Thomas Werkmeister, Sebastian Krause, Tim Rocktäschel and Sebastian Riedel for their comments on an early draft of this work. ...
arXiv:1607.03316v3
fatcat:lmyfzbkmcfg2zj34fc5wmn5owu
Consensus Attention-based Neural Networks for Chinese Reading Comprehension
[article]
2018
arXiv
pre-print
Also, we propose a consensus attention-based neural network architecture to tackle the Cloze-style reading comprehension problem, which aims to induce a consensus attention over every word in the query ...
Furthermore, we set up a baseline for the Chinese reading comprehension task, which we hope will speed up future research. ...
Acknowledgements We would like to thank the anonymous reviewers for their thorough reviews and thoughtful comments that improved our paper. ...
arXiv:1607.02250v3
fatcat:og73ebwwhffzvkggzzx3xtuhne
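The consensus attention idea in the entry above can be summarised as: compute one attention distribution over the document for each query word, then merge those per-word distributions into a single consensus distribution. Below is a minimal numpy sketch of that general idea, assuming pre-computed token encodings and simple avg/sum/max merging heuristics; the function and variable names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consensus_attention(doc, query, mode="avg"):
    """doc: (doc_len, d) document token encodings; query: (q_len, d) query token encodings.
    Returns one attention distribution over document positions."""
    scores = query @ doc.T                      # (q_len, doc_len): one score row per query word
    per_word_att = softmax(scores, axis=1)      # one document attention per query word
    if mode == "avg":                           # merge the q_len distributions into a consensus
        merged = per_word_att.mean(axis=0)
    elif mode == "sum":
        merged = per_word_att.sum(axis=0)
    else:                                       # "max"
        merged = per_word_att.max(axis=0)
    return merged / merged.sum()                # renormalise

# toy usage
rng = np.random.default_rng(0)
doc, query = rng.normal(size=(50, 8)), rng.normal(size=(6, 8))
att = consensus_attention(doc, query)
print(att.shape, round(att.sum(), 3))           # (50,) 1.0
```

The merged distribution can then be read off the cloze candidates in attention-sum style, i.e. by summing the attention mass over each candidate's occurrences in the document.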
Reasoning with Memory Augmented Neural Networks for Language Comprehension
[article]
2017
arXiv
pre-print
We apply the proposed approach to language comprehension task by using Neural Semantic Encoders (NSE). ...
In this paper, we introduce a computational hypothesis testing approach based on memory augmented neural networks. ...
ACKNOWLEDGMENTS We would like to thank Abhyuday Jagannatha, Jesse Lingeman and the reviewers for their insightful comments and suggestions. ...
arXiv:1610.06454v2
fatcat:oqrccebymzetfcs3c554qn3duq
Dataset for the First Evaluation on Chinese Machine Reading Comprehension
[article]
2018
arXiv
pre-print
To add diversity to reading comprehension datasets, in this paper we propose a new Chinese reading comprehension dataset to accelerate related research in the community. ...
The proposed dataset contains two different types of task: cloze-style reading comprehension and user query reading comprehension, associated with large-scale training data as well as human-annotated validation ...
We thank the Sixteenth China National Conference on Computational Linguistics (CCL 2017) and Nanjing Normal University for providing space for the evaluation workshop. ...
arXiv:1709.08299v2
fatcat:wwfwkc7725akxkuv6blogjdtsi
Teaching Machines to Read and Comprehend
[article]
2015
arXiv
pre-print
This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure. ...
for this type of evaluation. ...
Our first neural model for reading comprehension tests the ability of Deep LSTM encoders to handle significantly longer sequences. ...
arXiv:1506.03340v3
fatcat:6qdst7ekxrcflobqjpfhscn7iq
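The Deep LSTM encoder mentioned in the entry above reads the document and the query as one long delimited sequence and predicts the answer from the final hidden state. A hedged PyTorch sketch of that setup, assuming a fixed candidate vocabulary; hyperparameters and layer sizes are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

class DeepLSTMReader(nn.Module):
    """Sketch of a deep LSTM reader: encode (document ||| delimiter ||| query)
    with a multi-layer LSTM and classify the answer from the last hidden state."""
    def __init__(self, vocab_size, emb_dim=128, hidden=256, layers=2, n_candidates=500):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, n_candidates)

    def forward(self, doc_query_ids):
        # doc_query_ids: (batch, seq_len) token ids of the concatenated sequence
        x = self.emb(doc_query_ids)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])                    # logits over answer candidates

# toy usage
reader = DeepLSTMReader(vocab_size=10_000)
ids = torch.randint(0, 10_000, (2, 300))          # two delimited document+query sequences
print(reader(ids).shape)                          # torch.Size([2, 500])
```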
Iterative Alternating Neural Attention for Machine Reading
[article]
2016
arXiv
pre-print
We propose a novel neural attention architecture to tackle machine comprehension tasks, such as answering Cloze-style queries with respect to a document. ...
Unlike previous models, we do not collapse the query into a single vector; instead, we deploy an iterative alternating attention mechanism that allows a fine-grained exploration of both the query and the ...
Our architecture deploys a novel alternating attention mechanism, and tightly integrates successful ideas from past works in machine reading comprehension to obtain state-of-the-art results on three datasets ...
arXiv:1606.02245v4
fatcat:j4qxnjewbfawpog4l5xdvtkgwy
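The alternating attention loop described in this entry keeps a small search state and, at each step, first glimpses the query and then, conditioned on that glimpse, attends over the document. A toy numpy sketch under the simplifying assumption of a tanh state update (the paper uses a gated recurrent update); the projection matrices are random placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def alternating_attention(doc, query, steps=3, seed=0):
    """doc: (n, d), query: (m, d) token encodings.
    Alternately attend over query then document for a few steps and
    return the final document attention distribution."""
    rng = np.random.default_rng(seed)
    d = doc.shape[1]
    state = np.zeros(d)
    Wq = rng.normal(scale=0.1, size=(d, d))       # illustrative projections
    Wd = rng.normal(scale=0.1, size=(d, d))
    doc_att = None
    for _ in range(steps):
        q_att = softmax(query @ (Wq @ state))     # 1) query glimpse given current state
        q_glimpse = q_att @ query
        doc_att = softmax(doc @ (Wd @ q_glimpse)) # 2) document attention given the glimpse
        d_glimpse = doc_att @ doc
        state = np.tanh(q_glimpse + d_glimpse)    # 3) simple state update (GRU in the paper)
    return doc_att

# toy usage
rng = np.random.default_rng(2)
doc, query = rng.normal(size=(40, 16)), rng.normal(size=(5, 16))
print(alternating_attention(doc, query).shape)    # (40,)
```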
CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension
[article]
2018
arXiv
pre-print
We present a new dataset for machine comprehension in the medical domain. Our dataset uses clinical case reports with around 100,000 gap-filling queries about these cases. ...
We analyze the skills required for successful answering and show how reader performance varies depending on the applicable skills. ...
Acknowledgments We would like to thank Madhumita Sushil and the anonymous reviewers for useful comments. We are also grateful to BMJ Case Reports for allowing the collection of case reports. ...
arXiv:1803.09720v1
fatcat:vbgvcjt4lrbqrpzxfomamumuky
CliCR: a Dataset of Clinical Case Reports for Machine Reading Comprehension
2018
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
We present a new dataset for machine comprehension in the medical domain. Our dataset uses clinical case reports with around 100,000 gap-filling queries about these cases. ...
We analyze the skills required for successful answering and show how reader performance varies depending on the applicable skills. ...
Acknowledgments We would like to thank Madhumita Sushil and the anonymous reviewers for useful comments. We are also grateful to BMJ Case Reports for allowing the collection of case reports. ...
doi:10.18653/v1/n18-1140
dblp:conf/naacl/SusterD18
fatcat:ve32i5ptyrdzzfztshhypqi2jq
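CliCR's queries are gap-filling (cloze) queries formed by blanking a ground-truth entity out of a sentence from a case report. Below is a small illustration of that construction, assuming a plain string match and an "@placeholder" token; it is not the dataset's actual pipeline, which works on BMJ Case Reports learning points with medical entity annotations.

```python
import re

def make_gap_filling_query(sentence, answer_entity, placeholder="@placeholder"):
    """Turn a sentence into a cloze-style query by blanking out a known entity."""
    pattern = re.compile(re.escape(answer_entity), flags=re.IGNORECASE)
    query = pattern.sub(placeholder, sentence, count=1)
    return {"query": query, "answer": answer_entity}

example = make_gap_filling_query(
    "Treatment with intravenous immunoglobulin led to rapid improvement.",
    "intravenous immunoglobulin")
print(example)
# {'query': 'Treatment with @placeholder led to rapid improvement.',
#  'answer': 'intravenous immunoglobulin'}
```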
WikiPassageQA: A Benchmark Collection for Research on Non-factoid Answer Passage Retrieval
[article]
2018
arXiv
pre-print
With the rise in mobile and voice search, answer passage retrieval acts as a critical component of an effective information retrieval system for open domain question answering. ...
In this paper, we introduce a new Wikipedia based collection specific for non-factoid answer passage retrieval containing thousands of questions with annotated answers and show benchmark results on a variety ...
ACKNOWLEDGEMENTS This work was supported in part by the Center for Intelligent Information Retrieval, in part by NSF #IIS-1160894, and in part by NSF grant #IIS-1419693. ...
arXiv:1805.03797v1
fatcat:q3a6yszqbnc7tgtjbtr6nrgvnq
Enhancing Machine Reading Comprehension with Position Information
2019
IEEE Access
Therefore, position information may be helpful for finding the answer quickly and is useful for reading comprehension. ...
When people do reading comprehension, they often first try to find the words in the passage that are similar to the question words. ...
Benefiting from the introduction of many large datasets, neural models for machine reading comprehension have made rapid progress recently. ...
doi:10.1109/access.2019.2930407
fatcat:fw2cabd6evamxcotnsdsmwi6ja
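One simple way to make the position intuition above concrete is to give the reader, for every passage token, its distance to the nearest passage word that also appears in the question. A toy numpy sketch of such a feature, purely illustrative and not the feature set used in the paper.

```python
import numpy as np

def distance_to_question_match(passage_tokens, question_tokens):
    """For each passage token, token distance to the nearest exact match
    of any question word; a large constant if there is no match."""
    q = {w.lower() for w in question_tokens}
    match_pos = [i for i, w in enumerate(passage_tokens) if w.lower() in q]
    if not match_pos:
        return np.full(len(passage_tokens), len(passage_tokens))
    match_pos = np.array(match_pos)
    idx = np.arange(len(passage_tokens))[:, None]
    return np.abs(idx - match_pos[None, :]).min(axis=1)

passage = "The treaty was signed in Paris in 1898 after long negotiations".split()
question = "Where was the treaty signed".split()
print(distance_to_question_match(passage, question))
# [0 0 0 0 1 2 3 4 5 6 7]
```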
The NarrativeQA Reading Comprehension Challenge
[article]
2017
arXiv
pre-print
To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. ...
Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. ...
A common strategy for assessing the language understanding capabilities of comprehension models is to demonstrate that they can answer questions about documents they read, akin to how reading comprehension ...
arXiv:1712.07040v1
fatcat:4vjf4yx6f5gadjpynnxb2mrp5y
The NarrativeQA Reading Comprehension Challenge
2018
Transactions of the Association for Computational Linguistics
To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. ...
Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. ...
A common strategy for assessing the language understanding capabilities of comprehension models is to demonstrate that they can answer questions about documents they read, akin to how reading comprehension ...
doi:10.1162/tacl_a_00023
fatcat:nuieq445d5ao5ndvm5hburluty
Dependent Gated Reading for Cloze-Style Question Answering
[article]
2018
arXiv
pre-print
Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. ...
In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. ...
Attention-over-attention neural networks for reading comprehension. ...
arXiv:1805.10528v2
fatcat:cg6kv3h6tzbajfxtefnp35deyu
Entity Tracking Improves Cloze-style Reading Comprehension
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Reading comprehension tasks test the ability of models to process long-term context and remember salient information. ...
Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. ...
Related Work The first popular neural network reading comprehension models were the Attentive Reader and its variant Impatient Reader (Hermann et al., 2015). ...
doi:10.18653/v1/d18-1130
dblp:conf/emnlp/HoangWR18
fatcat:qy7zp2htwvh4fcxtingaubmcee
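The Attention Sum Reader referenced in this entry scores each candidate answer by attending over document positions with the query representation and summing the attention mass over all occurrences of that candidate. A compact numpy sketch of that pointer-sum step, assuming encodings produced by some trained encoder; the helper names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_sum(doc_enc, query_vec, doc_tokens, candidates):
    """doc_enc: (doc_len, d) token encodings, query_vec: (d,) query encoding,
    doc_tokens: list of document tokens, candidates: list of candidate answers.
    Returns the best candidate and all candidate scores."""
    att = softmax(doc_enc @ query_vec)                        # attention over positions
    scores = {c: sum(att[i] for i, t in enumerate(doc_tokens) if t == c)
              for c in candidates}                            # sum mass per candidate
    return max(scores, key=scores.get), scores

# toy usage
rng = np.random.default_rng(1)
doc_tokens = "mary went to the store and mary bought milk".split()
doc_enc, query_vec = rng.normal(size=(len(doc_tokens), 4)), rng.normal(size=4)
best, _ = attention_sum(doc_enc, query_vec, doc_tokens, ["mary", "store", "milk"])
print(best)
```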
Entity Tracking Improves Cloze-style Reading Comprehension
[article]
2018
arXiv
pre-print
Reading comprehension tasks test the ability of models to process long-term context and remember salient information. ...
Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. ...
Related Work The first popular neural network reading comprehension models were the Attentive Reader and its variant Impatient Reader (Hermann et al., 2015). ...
arXiv:1810.02891v1
fatcat:gwl7seembvd3hmofrnmtllga3q
Showing results 1–15 of 10,152 results