
Pretrained Transformers for Simple Question Answering over Knowledge Graphs [article]

D. Lukovnikov, A. Fischer, J. Lehmann
2020 arXiv   pre-print
Answering simple questions over knowledge graphs is a well-studied problem in question answering.  ...  Previous approaches for this task built on recurrent and convolutional neural network based architectures that use pretrained word embeddings.  ...  The main focus of our work is to investigate transfer learning for question answering over knowledge graphs (KGQA) using models pretrained for language modeling.  ... 
arXiv:2001.11985v1 fatcat:r5dz2n47bzefrd5skjkimgwtym
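As a rough illustration of the transfer-learning setup described in this entry, the sketch below (not the authors' code) treats the relation-prediction step of simple KGQA as sequence classification with a pretrained transformer; the model name and the toy relation inventory are assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical relation inventory; a real system would use the KG's relation set.
RELATIONS = ["place_of_birth", "author_of", "capital_of"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(RELATIONS))

def predict_relation(question: str) -> str:
    # Map a simple question to the KG relation it asks about.
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return RELATIONS[int(logits.argmax(dim=-1))]

# After fine-tuning on (question, relation) pairs, the predicted relation is
# paired with a linked subject entity to look the answer up in the KG.
print(predict_relation("who wrote the hobbit"))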

Vision–Language–Knowledge Co-Embedding for Visual Commonsense Reasoning

JaeYun Lee, Incheol Kim
2021 Sensors  
Visual commonsense reasoning is the task of deciding the most appropriate answer to a question, while providing the rationale or reason for that answer, when an image, a natural language  ...  with the input image to answer the question.  ...  Therefore, in this study, pretraining is performed over two stages for the co-embedder.  ... 
doi:10.3390/s21092911 pmid:33919196 fatcat:ffoa2x6cindp3masddynz2qdmu

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering [article]

Antoine Bosselut, Ronan Le Bras, Yejin Choi
2020 arXiv   pre-print
In this paper, we present initial studies toward zero-shot commonsense question answering by formulating the task as inference over dynamically generated commonsense knowledge graphs.  ...  Our approach achieves significant performance boosts over pretrained language models and vanilla knowledge models, all while providing interpretable reasoning paths for its predictions.  ...  Acknowledgments We thank Maarten Sap and Hannah Rashkin for helpful feedback.  ... 
arXiv:1911.03876v2 fatcat:whj7yqj4tzapvmwhhblirxujgq
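The scoring step underlying this zero-shot formulation can be sketched as conditional log-likelihood of each answer choice under a generative model; the snippet below is a simplified stand-in (GPT-2 instead of a commonsense knowledge model, and without the dynamically generated inference paths the paper adds).

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def answer_logprob(context: str, answer: str) -> float:
    # Sum of token log-probabilities of `answer` given `context`.
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    ans_ids = tokenizer(" " + answer, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, ans_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Each answer token is predicted from the position just before it.
    logprobs = torch.log_softmax(logits[0, ctx_ids.size(1) - 1:-1], dim=-1)
    return float(logprobs.gather(1, ans_ids[0].unsqueeze(1)).sum())

question = "Alex spilled coffee on the report. What does Alex do next?"
choices = ["reprint the report", "go to sleep"]
print(max(choices, key=lambda c: answer_logprob(question, c)))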

Recent Advances in Automated Question Answering In Biomedical Domain [article]

Krishanu Das Baksi
2021 arXiv   pre-print
The objective of automated Question Answering (QA) systems is to provide answers to user queries in a time-efficient manner.  ...  The answers are usually found in either databases (or knowledge bases) or a collection of documents commonly referred to as the corpus.  ...  Template based Methods for Question Answering over Knowledge Graphs Template matching is one of the earliest methods for question answering using knowledge graphs.  ... 
arXiv:2111.05937v1 fatcat:5474jk6ozbalvmfjrgatu4tsna
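The template-matching idea mentioned in this snippet can be illustrated with a toy example: a hand-written regular-expression template mapped onto a SPARQL skeleton. The pattern and property URI below are invented for illustration, not drawn from the survey.

import re
from typing import Optional

TEMPLATES = [
    (re.compile(r"what (?:drugs|medications) treat (?P<disease>.+)\?", re.I),
     'SELECT ?drug WHERE {{ ?drug <http://example.org/treats> "{disease}" . }}'),
]

def question_to_sparql(question: str) -> Optional[str]:
    # Return the first template whose pattern matches, filled with the captured slots.
    for pattern, query in TEMPLATES:
        match = pattern.match(question.strip())
        if match:
            return query.format(**match.groupdict())
    return None  # no template matched; fall back to another QA strategy

print(question_to_sparql("What drugs treat hypertension?"))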

A Two-Stage Approach towards Generalization in Knowledge Base Question Answering [article]

Srinivas Ravishankar, June Thai, Ibrahim Abdelaziz, Nandana Mihidukulasooriya, Tahira Naseem, Pavan Kapanipathi, Gaetano Rossiello, Achille Fokoue
2021 arXiv   pre-print
Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a  ...  across datasets and knowledge graphs.  ...  Introduction Knowledge Base Question Answering (KBQA) has gained significant popularity in recent times due to its real-world applications, facilitating access to rich Knowledge Graphs (KGs) without the  ... 
arXiv:2111.05825v2 fatcat:izdfi5e5gnccfpvd2aeame45zm

[Re] Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings

Jishnu Jaykumar P, Ashish Sardana
2021 Zenodo  
Exploring the effect of various knowledge graph embedding models in the Knowledge Graph Embedding module. 3. Exploring the effect of various transformer models in the Question Embedding module. 4.  ...  Question-Answering models were trained from scratch as no pre-trained models were available for our particular dataset.  ... 
doi:10.5281/zenodo.4834941 fatcat:uu33s5olqjckfk4brnpfqbcmy4
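For context, the KG-embedding-based scoring this reproduction studies (EmbedKGQA-style) can be sketched as a ComplEx score in which the question embedding plays the role of the relation; the dimensions, projection layer, and random embeddings below are placeholders, not the reproduced code.

import torch

dim = 200                                    # complex dimension (real/imag halves)
entity_emb = torch.randn(10_000, 2 * dim)    # stands in for pretrained ComplEx entity embeddings
project = torch.nn.Linear(768, 2 * dim)      # maps a transformer question vector into the same space

def score_answers(head: torch.Tensor, question_vec: torch.Tensor) -> torch.Tensor:
    # ComplEx score Re(<h, q, conj(a)>) with the question embedding acting as the
    # relation, evaluated against every entity as a candidate answer.
    q = project(question_vec)
    h_re, h_im = head[:dim], head[dim:]
    q_re, q_im = q[:dim], q[dim:]
    a_re, a_im = entity_emb[:, :dim], entity_emb[:, dim:]
    return ((h_re * q_re - h_im * q_im) @ a_re.T
            + (h_re * q_im + h_im * q_re) @ a_im.T)

scores = score_answers(entity_emb[42], torch.randn(768))
print(scores.topk(5).indices)                # top-5 candidate answer entities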

Contextualized Representations Using Textual Encyclopedic Knowledge [article]

Mandar Joshi, Kenton Lee, Yi Luan, Kristina Toutanova
2021 arXiv   pre-print
We show that integrating background knowledge from text is effective for tasks focusing on factual reasoning and allows direct reuse of powerful pretrained BERT-style encoders.  ...  Moreover, knowledge integration can be further improved with suitable pretraining via a self-supervised masked language model objective over words in background-augmented input text.  ...  Our approach records considerable improvements over state of the art base (12-layer) and large (24-layer) Transformer models for in-domain and out-of-domain document-level extractive question answering  ... 
arXiv:2004.12006v2 fatcat:5dm7mpxombcujou6n6ge6gf54a
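A hedged illustration of the background-augmented input described here: retrieved encyclopedic sentences are placed in the second segment of a standard BERT sentence-pair input before encoding. The retriever is stubbed out and the example sentences are invented.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_with_background(passage: str, background: list, max_length: int = 512):
    # Background sentences fill the second segment of the sentence-pair input.
    augmented = " ".join(background)
    inputs = tokenizer(passage, augmented, truncation=True,
                       max_length=max_length, return_tensors="pt")
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state

# In the actual setting, `background` would come from a retriever over Wikipedia.
states = encode_with_background(
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    ["Marie Curie was a Polish and naturalised-French physicist and chemist."])
print(states.shape)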

Bridging the Knowledge Gap: Enhancing Question Answering with World and Domain Knowledge [article]

Travis R. Goodwin, Dina Demner-Fushman
2019 arXiv   pre-print
We evaluated the impact of including OSCAR when pretraining BERT with Wikipedia articles by measuring the performance when fine-tuning on two question answering tasks involving world knowledge and causal  ...  In this paper we present OSCAR (Ontology-based Semantic Composition Augmented Regularization), a method for injecting task-agnostic knowledge from an Ontology or knowledge graph into a neural network during  ...  We show that incorporating OSCAR into BERT's pretraining injects sufficient world knowledge to improve fine-tuned performance in three question answering datasets.  ... 
arXiv:1910.07429v1 fatcat:ahgn72ca4vgtvp55vdlxp5g5sq

Knowledge-Aware Language Model Pretraining [article]

Corby Rosset, Chenyan Xiong, Minh Phan, Xia Song, Paul Bennett, Saurabh Tiwary
2021 arXiv   pre-print
for GPT-2 models, significantly improving downstream tasks like zero-shot question-answering with no task-related training.  ...  How much knowledge do pretrained language models hold?  ...  With a sufficiently large number of parameters, i.e., several billion, and enough task-specific supervision, pretrained language models can even directly generate answers to natural language questions  ... 
arXiv:2007.00655v2 fatcat:lmdngvj4i5c45nwzc7ni6lbec4
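The zero-shot answer generation noted in this snippet can be demonstrated, in much reduced form, with an off-the-shelf GPT-2 and a simple prompt; the prompt format is my assumption, not the paper's setup.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Q: What is the capital of France?\nA:"
# Greedy decoding of a short continuation serves as the "answer".
print(generator(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"])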

GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [article]

Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec
2022 arXiv   pre-print
Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it.  ...  While knowledge graphs (KG) are often used to augment LMs with structured representations of world knowledge, it remains an open question how to effectively fuse and reason over the KG representations  ...  ACKNOWLEDGMENT We thank Rok Sosic, Maria Brbic, Jordan Troutman, Rajas Bansal, and our anonymous reviewers for discussions and for providing feedback on our manuscript.  ... 
arXiv:2201.08860v1 fatcat:2idgswwqknhnflhrc4a3tnulla
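A minimal sketch of the fusion idea (my simplification, not the released GreaseLM code): a special interaction token on the language-model side and an interaction node on the graph side exchange information through a shared two-layer MLP at each fused layer.

import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    def __init__(self, lm_dim: int = 768, kg_dim: int = 200):
        super().__init__()
        joint_dim = lm_dim + kg_dim
        self.mix = nn.Sequential(
            nn.Linear(joint_dim, joint_dim),
            nn.GELU(),
            nn.Linear(joint_dim, joint_dim),
        )
        self.lm_dim = lm_dim

    def forward(self, interaction_token, interaction_node):
        # Concatenate the two interaction representations, mix, and split back.
        joint = self.mix(torch.cat([interaction_token, interaction_node], dim=-1))
        return joint[..., :self.lm_dim], joint[..., self.lm_dim:]

layer = FusionLayer()
tok, node = layer(torch.randn(1, 768), torch.randn(1, 200))
print(tok.shape, node.shape)   # updated LM-side and KG-side interaction vectors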

Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey [article]

Xiaokai Wei, Shen Wang, Dejiao Zhang, Parminder Bhatia, Andrew Arnold
2021 arXiv   pre-print
In this paper, we provide a comprehensive survey of the literature on this emerging and fast-growing field: Knowledge Enhanced Pretrained Language Models (KE-PLMs).  ...  In addition, we survey the various NLU and NLG applications on which KE-PLMs have demonstrated superior performance over vanilla PLMs.  ...  to entity typing [100], question answering [101][45], story generation [22], and knowledge graph completion [102].  ... 
arXiv:2110.08455v1 fatcat:b2nw5jdu7neo3brveddmah6mra

KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [article]

Yinquan Lu, Haonan Lu, Guirong Fu, Qun Liu
2022 arXiv   pre-print
Re-pretraining these models is usually resource-consuming, and difficult to adapt to another domain with a different knowledge graph (KG).  ...  sub-graphs extracted from KG.  ...  Lin for insightful comments on the manuscript. We also thank Dr. Y. Guo for helpful suggestions in parallel training settings.  ... 
arXiv:2109.04223v2 fatcat:rszt2tcnhjb7ni4ng2ityk3ap4
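As a hedged illustration of the graph-side computation over extracted sub-graphs, the snippet below implements one generic relational message-passing step; it is not the KELM architecture, and the dimensions and triples are invented.

import torch
import torch.nn as nn

class RelationalMessagePassing(nn.Module):
    def __init__(self, num_relations: int, dim: int = 128):
        super().__init__()
        # One weight matrix per relation type.
        self.rel_weights = nn.Parameter(torch.randn(num_relations, dim, dim) / dim ** 0.5)

    def forward(self, node_feats: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # edges: (num_edges, 3) rows of (source, relation, target) indices.
        src, rel, dst = edges.T
        messages = torch.einsum("ed,edh->eh", node_feats[src], self.rel_weights[rel])
        out = torch.zeros_like(node_feats)
        out.index_add_(0, dst, messages)     # sum incoming messages at each target node
        return torch.relu(out)

feats = torch.randn(5, 128)
edges = torch.tensor([[0, 1, 2], [3, 0, 2], [4, 1, 0]])
print(RelationalMessagePassing(num_relations=2)(feats, edges).shape)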

Question Classification for Efficient QA System

K. P. Moholkar, et al.
2021 Turkish Journal of Computer and Mathematics Education  
For an efficient QA system, understanding the category of a question plays a pivotal role in extracting a suitable answer.  ...  Identifying the intent of the question helps to extract the expected answer from a given passage. Pretrained language models (LMs) have demonstrated excellent results on many language tasks.  ...  The following graph illustrates the efficiency of our model over BERT and ALBERT models. Table 3 demonstrates the performance of the model for different question intent tasks.  ... 
doi:10.17762/turcomat.v12i2.1526 fatcat:ckfu67rchvamhahgib2zwq2miq

A General Method for Transferring Explicit Knowledge into Language Model Pretraining

Ruiqing Yan, Lanchang Sun, Fang Wang, Xiaoming Zhang, Feiran Huang
2021 Security and Communication Networks  
Different from recent research that optimizes pretraining models by knowledge masking strategies, we propose a simple but general method to transfer explicit knowledge with pretraining.  ...  To be specific, we first match knowledge facts from a knowledge base (KB) and then add a knowledge injection layer to a transformer directly without changing its architecture.  ...  such as question answering [25] and text classification [26].  ... 
doi:10.1155/2021/7115167 fatcat:rce6q7m27jacvlmyb7inhadozu
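In the spirit of the knowledge-injection layer described here (an illustrative sketch, not the authors' method), embeddings of KB entities matched in the input can simply be added onto the corresponding token representations.

import torch
import torch.nn as nn

class KnowledgeInjection(nn.Module):
    def __init__(self, num_entities: int, hidden: int = 768):
        super().__init__()
        # Index 0 is reserved for "no matched entity" and maps to a zero vector.
        self.entity_emb = nn.Embedding(num_entities + 1, hidden, padding_idx=0)

    def forward(self, token_states: torch.Tensor, entity_ids: torch.Tensor) -> torch.Tensor:
        # entity_ids holds, per token, the id of the KB entity matched at that position.
        return token_states + self.entity_emb(entity_ids)

inject = KnowledgeInjection(num_entities=50_000)
states = torch.randn(1, 6, 768)                   # transformer hidden states
entity_ids = torch.tensor([[0, 0, 17, 17, 0, 0]]) # tokens 2-3 matched KB entity 17
print(inject(states, entity_ids).shape)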

Video Question Answering: Datasets, Algorithms and Challenges [article]

Yaoyao Zhong, Wei Ji, Junbin Xiao, Yicong Li, Weihong Deng, Tat-Seng Chua
2022 arXiv   pre-print
Video Question Answering (VideoQA) aims to answer natural language questions according to the given videos.  ...  We then point out the research trend of studying beyond factoid QA to inference QA towards the cognition of video contents. Finally, we conclude with some promising directions for future exploration.  ...  , with 69M video-question-answer triplets, using contrastive learning between a multi-modal video-question Transformer and an answer Transformer.  ... 
arXiv:2203.01225v1 fatcat:dn4sz5pomnfb7igvmxofangzsa
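The contrastive objective mentioned at the end of this snippet can be sketched as an InfoNCE-style loss that aligns video-question embeddings with embeddings of their correct answers; the encoders are omitted and the dimensions are placeholders, not the cited system.

import torch
import torch.nn.functional as F

def contrastive_loss(vq_emb: torch.Tensor, ans_emb: torch.Tensor, temperature: float = 0.07):
    # vq_emb, ans_emb: (batch, dim); row i of each tensor forms a matching pair.
    vq = F.normalize(vq_emb, dim=-1)
    ans = F.normalize(ans_emb, dim=-1)
    logits = vq @ ans.T / temperature        # cosine similarity of every cross pair
    targets = torch.arange(vq.size(0))       # the diagonal holds the true pairs
    return F.cross_entropy(logits, targets)

print(contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)))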
Showing results 1 — 15 out of 3,382 results