9,759 Hits in 4.3 sec

Taking a Closed-Book Examination: Decoupling KB-Based Inference by Virtual Hypothesis for Answering Real-World Questions

Xiao Zhang, Guorui Zhao, Qiangqiang Yuan
2021 Computational Intelligence and Neuroscience  
In addition, we create a specialized question answering dataset only for inference, and experiments on both the AI2 Science Questions dataset and ours show our method to be effective.  ...  Complex question answering in the real world is a comprehensive and challenging task due to its demand for deeper question understanding and deeper inference.  ...  Aristo [21] is a QA system for science questions that combines five solvers, including IR, MLN [22], and other inference methods.  ...
doi:10.1155/2021/6689740 pmid:33688337 pmcid:PMC7920719 fatcat:p53kpk772nfqzm4sgjfssqje4y

Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?

Peter Jansen
2018 Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12)  
This is a major barrier to current inference models, as even elementary science questions require an average of 4 to 6 facts to answer and explain.  ...  Question Answering for complex questions is often modelled as a graph construction or traversal task, where a solver must build or traverse a graph of facts that answer and explain a given question.  ...  Introduction Question answering (QA) is a task where models must find answers to natural language questions, either by retrieving these answers from a corpus, or inferring them by some inference process  ... 
doi:10.18653/v1/w18-1703 dblp:conf/textgraphs/Jansen18 fatcat:affa552ayfauvhkjxl3mhaxvt4

WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference [article]

Peter A. Jansen, Elizabeth Wainwright, Steven Marmorstein, Clayton T. Morrison
2018 arXiv pre-print
as "explanation graphs" -- sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge.  ...  Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science  ...  retrieval methods that work to identify passages of text likely to contain the answer in large corpora using statistical methods.  ... 
arXiv:1802.03052v1 fatcat:sn74to3szvg2nedluexrbibr4a

Automatic recognition of reading levels from user queries

Xiaoyong Liu, W. Bruce Croft, Paul Oh, David Hart
2004 Proceedings of the 27th annual international conference on Research and development in information retrieval - SIGIR '04  
The proposed method can be used directly in an information retrieval or question-answering system to determine the level of answers appropriate for the users.  ...  A local school teacher who is familiar with both elementary and middle school science classes is presented with the queries and retrieved Web pages and asked to judge whether each of the Web pages is topically  ... 
doi:10.1145/1008992.1009114 dblp:conf/sigir/LiuCOH04 fatcat:ibgr3wnvuzhhxjhphnuxlfcvmy

Answering Complex Questions Using Open Information Extraction

Tushar Khot, Ashish Sabharwal, Peter Clark
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)  
While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques  ...  Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods.  ...  Acknowledgments The authors would like to thank Oren Etzioni for valuable feedback on an early draft of this paper, and Colin Arenz and Michal Guerquin for helping us develop this system.  ...
doi:10.18653/v1/p17-2049 dblp:conf/acl/KhotSC17 fatcat:pa7o4kqbmjgwpke5r6reu7lgy4

Framing QA as Building and Ranking Intersentence Answer Justifications

Peter Jansen, Rebecca Sharp, Mihai Surdeanu, Peter Clark
2017 Computational Linguistics  
We propose a question answering (QA) approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct  ...  We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines, including neural network  ...  This interest has been disclosed to the University of Arizona Institutional Review Committee and is being managed in accordance with its conflict of interest policies.  ... 
doi:10.1162/coli_a_00287 fatcat:kqssixxjpbavdc6nimqknvtn74

Answering Complex Questions Using Open Information Extraction [article]

Tushar Khot and Ashish Sabharwal and Peter Clark
2017 arXiv pre-print
While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques  ...  Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods.  ...  Science QA: Elementary-level science QA tasks require reasoning to handle complex questions.  ... 
arXiv:1704.05572v1 fatcat:u66l6miwtzh3jgpv3yvtctdtxa

Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering? [article]

Peter Jansen
2018 arXiv pre-print
This is a major barrier to current inference models, as even elementary science questions require an average of 4 to 6 facts to answer and explain.  ...  Question Answering for complex questions is often modeled as a graph construction or traversal task, where a solver must build or traverse a graph of facts that answer and explain a given question.  ...  Introduction Question answering (QA) is a task where models must find answers to natural language questions, either by retrieving these answers from a corpus, or inferring them by some inference process  ... 
arXiv:1805.11267v1 fatcat:mujrhpg6drdpbngz3e6cuuliiy

QASC: A Dataset for Question Answering via Sentence Composition [article]

Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, Ashish Sabharwal
2020 arXiv pre-print
We present a multi-hop reasoning dataset, Question Answering via Sentence Composition(QASC), that requires retrieving facts from a large corpus and composing them to answer a multiple-choice question.  ...  Composing knowledge from multiple pieces of texts is a key challenge in multi-hop question answering.  ...  We thank the Amazon Mechanical Turk workers for their effort in creating and annotating QASC questions. Computations on beaker.org were supported in part by credits from Google Cloud.  ... 
arXiv:1910.11473v2 fatcat:fivtavvtgba6bgk5p5ashvalfq

My Computer Is an Honor Student — but How Intelligent Is It? Standardized Tests as a Measure of AI

Peter Clark, Oren Etzioni
2016 The AI Magazine  
Here we propose this task as a challenge problem for the community, summarize our state-of-the-art results on math and science tests, and provide supporting datasets  ...  Given the well-known limitations of the Turing Test, there is a need for objective tests to both focus attention on, and measure progress towards, the goals of AI.  ...  Simple Inference Many questions are unlikely to have answers explicitly written down anywhere, from questions requiring a relatively simple leap from what might be already known to questions requiring  ... 
doi:10.1609/aimag.v37i1.2636 fatcat:wph6iuncknfkpeqzajndi6l3rm

Question Answering via Integer Programming over Semi-Structured Knowledge [article]

Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, Dan Roth
2016 arXiv pre-print
Answering science questions posed in natural language is an important AI challenge. Answering such questions often requires non-trivial inference and knowledge that goes beyond factoid retrieval.  ...  including questions requiring multi-step inference and a combination of multiple facts.  ...  The authors would like to thank Christos Christodoulopoulos, Sujay Jauhar, Sam Skjonsberg, and the Aristo Team at AI2 for invaluable discussions and insights.  ... 
arXiv:1604.06076v1 fatcat:zcushksgzfbavpchwdjz2e647q

Gender differences in inference generation by fourth-grade students

Virginia Clinton, Ben Seipel, Paul van den Broek, Kristen L. McMaster, Panayiota Kendeou, Sarah E. Carlson, David N. Rapp
2012 Journal of Research in Reading  
We wish to thank the students and teachers who participated in the study and the many undergraduate and graduate research assistants who provided classroom support, developed materials, and collected and  ...  Acknowledgements This data set was originally collected as part of a study supported with funding from the Institute of Education Sciences (Grant R305G040021).  ...  We did not seek to answer this question in our study.  ... 
doi:10.1111/j.1467-9817.2012.01531.x fatcat:km2isdcq7jeevmzxubr7gncpwi

Answering Elementary Science Questions by Constructing Coherent Scenes using Background Knowledge

Yang Li, Peter Clark
2015 Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing  
The scene constitutes an understanding of the text, and can be used to answer questions that go beyond the text.  ...  Rather, the reader uses his/her knowledge to fill in gaps and create a coherent, mental picture or "scene" depicting what text appears to convey.  ...  We thank the Aristo team at AI2 for invaluable discussions, and the anonymous reviewers for helpful comments.  ... 
doi:10.18653/v1/d15-1236 dblp:conf/emnlp/LiC15 fatcat:emh6iclb3bcn7bf6bi2522oiai

Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering [article]

Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal
2018 arXiv pre-print
The open book that comes with our questions is a set of 1329 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations.  ...  We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance.  ...  We change the Question Match model to a classic BiLSTM Max-Out (Conneau et al., 2017) for textual entailment, by replacing the question q and a choice c i with the premise p and the hypothesis h, resp  ... 
arXiv:1809.02789v1 fatcat:4xzfjb4aobbelnw6n2nymewl4q

Autoregressive Reasoning over Chains of Facts with Transformers [article]

Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens
2020 arXiv pre-print
This paper proposes an iterative inference algorithm for multi-hop explanation regeneration, that retrieves relevant factual evidence in the form of text snippets, given a natural language question and  ...  Combining multiple sources of evidence or facts for multi-hop reasoning becomes increasingly hard when the number of sources needed to make an inference grows.  ...  G078618N and from the European Research Council (ERC) under Grant Agreement No. 788506. The Flemish Supercomputer Center (VSC) provided hardware and GPUs.  ... 
arXiv:2012.11321v1 fatcat:2s2sbxrhyjdspkaeem7ois5zbu