3,515 Hits in 4.2 sec

Neural Entity Summarization with Joint Encoding and Weak Supervision [article]

Junyou Li, Gong Cheng, Qingxia Liu, Wen Zhang, Evgeny Kharlamov, Kalpa Gunaratna, Huajun Chen
2020 arXiv   pre-print
In this paper, we present a supervised approach NEST that is based on our novel neural model to jointly encode graph structure and text in KGs and generate high-quality diversified summaries.  ...  Since it is costly to obtain manually labeled summaries for training, our supervision is weak as we train with programmatically labeled data which may contain noise but is free of manual work.  ...  Acknowledgements This work was supported partially by the National Key R&D Program of China (2018YFB1005100) and the Six Talent Peaks Program of Jiangsu Province (RJFW-011).  ... 
arXiv:2005.00152v2
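The abstract above describes training on "programmatically labeled data." A minimal sketch of one such heuristic (illustrative only, not the authors' actual labeling procedure): keep a triple in the silver summary if its value is mentioned in the entity's textual description.

```python
# Hedged sketch of programmatic (weak) labeling for entity summarization:
# a triple counts as a silver-summary candidate when its value string
# appears in the entity's description text. Names and heuristic are
# illustrative, not taken from NEST.

def silver_summary(triples, description, k=2):
    """triples: list of (property, value) pairs for one entity.
    Returns up to k triples whose value occurs in the description."""
    desc = description.lower()
    hits = [t for t in triples if str(t[1]).lower() in desc]
    return hits[:k]

triples = [("birthPlace", "Ulm"), ("field", "physics"), ("knownFor", "relativity")]
desc = "A physicist born in Ulm, best known for the theory of relativity."
print(silver_summary(triples, desc, k=2))
# → [('birthPlace', 'Ulm'), ('knownFor', 'relativity')]
```

Such labels are noisy (string matching misses paraphrases), which is exactly why the abstract calls the supervision "weak."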

Neural Entity Summarization with Joint Encoding and Weak Supervision

Junyou Li, Gong Cheng, Qingxia Liu, Wen Zhang, Evgeny Kharlamov, Kalpa Gunaratna, Huajun Chen
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
In this paper, we present a supervised approach NEST that is based on our novel neural model to jointly encode graph structure and text in KGs and generate high-quality diversified summaries.  ...  Since it is costly to obtain manually labeled summaries for training, our supervision is weak as we train with programmatically labeled data which may contain noise but is free of manual work.  ...
doi:10.24963/ijcai.2020/224

Synthesizing strategies under expected and exceptional environment behaviors

Benjamin Aminof, Giuseppe De Giacomo, Alessio Lomuscio, Aniello Murano, Sasha Rubin
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
We consider an agent that operates with two models of the environment: one that captures expected behaviors and one that captures additional exceptional behaviors.  ...  We formalize these concepts in the context of linear-temporal logic, and give an algorithm for solving this problem.  ...
doi:10.24963/ijcai.2020/228

Neural Entity Linking: A Survey of Models Based on Deep Learning [article]

Ozge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, Chris Biemann
2021 arXiv   pre-print
We distill generic architectural components of a neural EL system, like candidate generation and entity ranking, and summarize prominent methods for each of them.  ...  The vast variety of modifications of this general neural entity linking architecture is grouped by several common themes: joint entity recognition and linking, models for global linking, domain-independent  ...  The work of Artem Shelmanov in the current study (preparation of sections related to application of entity linking to neural language models, entity ranking, context-mention encoding, and overall harmonization  ...
arXiv:2006.00575v3

Neural entity linking: A survey of models based on deep learning

Özge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, Chris Biemann, Mehwish Alam, Davide Buscaldi, Michael Cochez, Francesco Osborne, Diego Reforgiato Recupero, Harald Sack
2022 Semantic Web Journal  
This work distills a generic architecture of a neural EL system and discusses its components, such as candidate generation, mention-context encoding, and entity ranking, summarizing prominent methods for  ...  including zero-shot and distant supervision methods, and cross-lingual approaches.  ...  The work of Artem Shelmanov in the current study (preparation of sections related to application of entity linking to neural language models, entity ranking, context-mention encoding, and overall harmonization  ... 
doi:10.3233/sw-222986

A Brief Review of Relation Extraction Based on Pre-Trained Language Models [chapter]

Tiange Xu, Fu Zhang
2020 Frontiers in Artificial Intelligence and Applications  
This review mainly summarizes the research progress of pre-trained language models such as BERT in supervised and distantly supervised relation extraction.  ...  Relation extraction extracts the semantic relation between entity pairs in text and is a key step in building knowledge graphs and in information extraction.  ...  Acknowledgments The authors thank the anonymous referees for their valuable comments and suggestions. The work is supported by the National Natural Science Foundation of China (61672139).  ...
doi:10.3233/faia200755
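When fine-tuning a pre-trained language model such as BERT for relation extraction, a common input convention is to wrap the two entity mentions in special marker tokens so the encoder can locate them. A minimal, framework-free sketch of that preprocessing step (the marker names are the conventional ones, not specific to this review):

```python
# Hedged sketch: "entity marker" input formatting often used with BERT-style
# relation extractors. Inserts [E1]..[/E1] around the head entity and
# [E2]..[/E2] around the tail entity.

def mark_entities(tokens, head_span, tail_span):
    """Spans are (start, end) token indices, end exclusive; head precedes tail."""
    h0, h1 = head_span
    t0, t1 = tail_span
    out = tokens[:h0] + ["[E1]"] + tokens[h0:h1] + ["[/E1]"]
    out += tokens[h1:t0] + ["[E2]"] + tokens[t0:t1] + ["[/E2]"] + tokens[t1:]
    return out

tokens = "Marie Curie was born in Warsaw".split()
print(mark_entities(tokens, (0, 2), (5, 6)))
# → ['[E1]', 'Marie', 'Curie', '[/E1]', 'was', 'born', 'in', '[E2]', 'Warsaw', '[/E2]']
```

In a real pipeline the marker tokens would be registered with the tokenizer as special tokens before fine-tuning.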

TextCube

Yu Meng, Jiaxin Huang, Jingbo Shang, Jiawei Han
2019 Proceedings of the VLDB Endowment  
We focus on new TextCube construction methods that are scalable, weakly-supervised, domain-independent, language-agnostic, and effective (i.e., generating quality TextCubes from large corpora of various  ...  We will demonstrate with real datasets (including news articles, scientific publications, and product reviews) on how TextCubes can be constructed to assist multidimensional analysis of massive text corpora  ...  The neural models are typically trained to first model latent semantics of sentences via an encoder, and then generate summarizations by decoding hidden states of the latent space [16, 17] .  ... 
doi:10.14778/3352063.3352113

Knowledge Efficient Deep Learning for Natural Language Processing [article]

Hai Wang
2020 arXiv   pre-print
In particular, we apply KRDL built on Markov logic networks to denoise weak supervision.  ...  There are various classical approaches to making the models more knowledge efficient such as multi-task learning, transfer learning, weakly supervised and unsupervised learning etc.  ...  Entity linking can incorporate more weak supervision. Joint entity linking and relation extraction can be improved by feeding back extraction results to linking.  ... 
arXiv:2008.12878v1

Neural relation extraction: a survey [article]

Mehmet Aydar, Ozge Bozal, Furkan Ozbay
2020 arXiv   pre-print
Neural relation extraction discovers semantic relations between entities from unstructured text using deep learning methods.  ...  We discuss the strengths and weaknesses of existing studies and investigate additional research directions and improvement ideas in this field.  ...  Conclusion In this survey, we summarized neural relation extraction methods in terms of their approaches, data supervision, and datasets for this task.  ...
arXiv:2007.04247v1

Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation

Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Zhao, Dawei Yin
2018 Proceedings of the 27th ACM International Conference on Information and Knowledge Management - CIKM '18  
In this paper, we propose the semi-supervised explicit dialogue state tracker (SEDST) for neural dialogue generation.  ...  Specifically, we propose an encoder-decoder architecture, named CopyFlowNet, to represent an explicit dialogue state with a probabilistic distribution over the vocabulary space.  ...  To sum up, our main contributions can be summarized as follows: • We focus on tracking explicit dialogue states with semi-supervision for neural dialogue generation. • We propose a semi-supervised neural  ...
doi:10.1145/3269206.3271683 dblp:conf/cikm/JinLRCLZY18
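The copy-augmented decoders that architectures like CopyFlowNet build on typically form the output distribution as a gate-weighted blend of a generation distribution over the vocabulary and a copy distribution over source tokens. A generic sketch of that blend (not SEDST's exact formulation):

```python
# Hedged sketch of a copy-mechanism mixture distribution:
#   p(w) = gate * p_gen(w) + (1 - gate) * p_copy(w)
# where gate in [0, 1] is the learned generate-vs-copy probability.

def copy_mixture(p_gen, p_copy, gate):
    """p_gen, p_copy: dicts token -> probability (each sums to 1)."""
    vocab = set(p_gen) | set(p_copy)
    return {w: gate * p_gen.get(w, 0.0) + (1 - gate) * p_copy.get(w, 0.0)
            for w in vocab}

p = copy_mixture({"yes": 0.7, "no": 0.3}, {"tokyo": 1.0}, gate=0.8)
print(round(sum(p.values()), 6))  # the mixture is still a valid distribution
```

The copy path lets the tracker emit out-of-vocabulary slot values (e.g. "tokyo") that appear in the dialogue history but would get negligible mass from the generation head alone.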

A Text-Generated Method to Joint Extraction of Entities and Relations

Haihong E, Siqi Xiao, Meina Song
2019 Applied Sciences  
using a unified decoding process, and entities can be repeatedly presented in multiple triples to solve the overlapped-relation problem.  ...  Entity-relation extraction is a basic task in natural language processing, and recently, the use of deep-learning methods, especially the Long Short-Term Memory (LSTM) network, has achieved remarkable  ...  Abbreviations The following abbreviations are used in this manuscript: RC, Relation Classification; NER, Named Entity Recognition; LSTM, Long Short-Term Memory; RNN, Recurrent Neural Network; CNN, Convolutional  ...
doi:10.3390/app9183795

Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction [article]

Qinyuan Ye, Liyuan Liu, Maosen Zhang, Xiang Ren
2019 arXiv   pre-print
Experiments demonstrate that bias adjustment achieves consistent performance gains on DS-trained models, especially on neural models, with improvements of up to 23  ...  Code and data can be found at  ...  In this paper, we study the problem of what limits the performance of DS-trained neural models, conduct thorough analyses, and identify a factor that can greatly influence performance: shifted label distribution  ...  , Snapchat Gift and JP Morgan AI Research Award.  ...
arXiv:1904.09331v2
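The general recipe behind this kind of bias adjustment for a shifted label distribution is prior correction: reweight the model's class probabilities by the ratio of the target (clean) label prior to the training (distantly supervised) prior, then renormalize. A sketch of that generic recipe, not necessarily the paper's exact procedure:

```python
# Hedged sketch of label-shift bias adjustment:
#   p_adj(y|x) ∝ p_model(y|x) * p_target(y) / p_train(y)
# DS data typically over-represents relation labels relative to clean data.

def adjust_for_label_shift(probs, train_prior, target_prior):
    scores = {c: probs[c] * target_prior[c] / train_prior[c] for c in probs}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

probs = {"born_in": 0.6, "no_relation": 0.4}          # raw model output
train_prior = {"born_in": 0.5, "no_relation": 0.5}    # prior in DS training data
target_prior = {"born_in": 0.2, "no_relation": 0.8}   # prior in clean eval data
adjusted = adjust_for_label_shift(probs, train_prior, target_prior)
print(max(adjusted, key=adjusted.get))
# → no_relation
```

Note how the adjustment flips the prediction: the raw model favored "born_in", but after correcting for the inflated relation prior in the DS training data, "no_relation" wins.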

Learning Dual Retrieval Module for Semi-supervised Relation Extraction [article]

Hongtao Lin, Jun Yan, Meng Qu, Xiang Ren
2019 arXiv   pre-print
Relation extraction is an important task in structuring content of text data, and becomes especially challenging when learning with weak supervision---where only a limited number of labeled sentences are  ...  given and a large number of unlabeled sentences are available.  ...  ACKNOWLEDGEMENT This work has been supported in part by National Science Foundation SMA 18-29268, Amazon Faculty Award, and JP Morgan AI Research Award.  ... 
arXiv:1902.07814v2
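The semi-supervised setting above (few labeled sentences, many unlabeled ones) is often attacked with some form of self-training: confidently predicted unlabeled examples are promoted into the labeled set. A toy sketch of that loop, with a keyword scorer standing in for a real model; the paper's dual retrieval machinery is not reproduced here:

```python
# Hedged sketch: one round of self-training for semi-supervised relation
# extraction. `predict` is a stand-in model returning (label, confidence).

def self_train(labeled, unlabeled, predict, threshold=0.9):
    """Move confidently-predicted unlabeled examples into the labeled set."""
    new_labeled = list(labeled)
    still_unlabeled = []
    for x in unlabeled:
        label, conf = predict(x)
        if conf >= threshold:
            new_labeled.append((x, label))
        else:
            still_unlabeled.append(x)
    return new_labeled, still_unlabeled

predict = lambda s: (("born_in", 0.95) if "born in" in s
                     else ("no_relation", 0.5))
labeled = [("Ada was born in London", "born_in")]
unlabeled = ["Turing was born in London", "Turing studied mathematics"]
lab, unl = self_train(labeled, unlabeled, predict)
print(len(lab), len(unl))
# → 2 1
```

In practice the loop is iterated, and the threshold guards against the confirmation bias that plain self-training is prone to.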

Weak Supervision helps Emergence of Word-Object Alignment and improves Vision-Language Tasks [article]

Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf
2019 arXiv   pre-print
(2) that adding weak supervision for alignment between visual objects and words improves the quality of the learned models on tasks requiring reasoning.  ...  In particular, this new learning signal allows obtaining SOTA-level performances on GQA dataset (VQA task) with pre-trained models without finetuning on the task, and a new SOTA on NLVR2 dataset (Language-driven  ...  Conclusion In this work, we design a vision-language encoder and train it with a novel object-word alignment weak supervision.  ... 
arXiv:1912.03063v1

PARE: A Simple and Strong Baseline for Monolingual and Multilingual Distantly Supervised Relation Extraction [article]

Vipul Rathore, Kartikeya Badola, Mausam, Parag Singla
2022 arXiv   pre-print
Neural models for distantly supervised relation extraction (DS-RE) encode each sentence in an entity-pair bag separately. These are then aggregated for bag-level relation prediction.  ...  In response, we explore a simple baseline approach (PARE) in which all sentences of a bag are concatenated into a passage of sentences, and encoded jointly using BERT.  ...  We thank Abhyuday Bhartiya for helping in reproducing results from the DiS-ReX paper, and Keshav Kolluru for helpful comments on an earlier draft of the paper.  ... 
arXiv:2110.07415v2
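The core move the PARE abstract describes is simple enough to sketch directly: instead of encoding each sentence of an entity-pair bag separately, join all bag sentences into one passage and encode that passage once. The concatenation step below is faithful to the abstract's description; the encoder itself is a stand-in, not the actual BERT pipeline:

```python
# Hedged sketch of the PARE-style bag-to-passage step: concatenate all
# sentences of an entity-pair bag so a single encoder pass can attend
# across sentences.

def bag_to_passage(bag, sep=" "):
    """Concatenate all sentences in a bag into a single passage string."""
    return sep.join(s.strip() for s in bag)

bag = ["Obama was born in Hawaii.",
       "Hawaii is where Obama spent his childhood."]
passage = bag_to_passage(bag)
print(passage)
# A real pipeline would now run: encoder(tokenizer(passage)) -> bag-level relation logits.
```

The appeal of the baseline is that self-attention over the joint passage aggregates evidence across sentences for free, replacing the separate per-sentence encoding plus bag-level aggregation of earlier DS-RE models.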
Showing results 1 — 15 out of 3,515 results