3,735 Hits in 6.9 sec

Chinese Relation Extraction with Multi-Grained Information and External Linguistic Knowledge

Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, Ying Shen
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
To address the issues, we propose a multi-grained lattice framework (MG lattice) for Chinese relation extraction to take advantage of multi-grained language information and external linguistic knowledge  ...  Chinese relation extraction is conducted using neural networks with either character-based or word-based inputs, and most existing methods typically suffer from segmentation errors and ambiguity of polysemy  ...  In this paper, we proposed the multi-granularity lattice framework (MG lattice), a unified model that comprehensively utilizes both internal information and external knowledge to conduct the Chinese RE task  ... 
doi:10.18653/v1/p19-1430 dblp:conf/acl/LiDLZS19 fatcat:jxp5llwmknbx7f67jovaf6ebne
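The MG lattice entry above hinges on feeding the model both character-level input and lexicon-matched words. A minimal sketch of that intuition, assuming PyTorch and a toy lexicon; the CharWordLatticeEncoder class and the mean-pooled fusion are illustrative simplifications, not the authors' lattice LSTM.

```python
import torch
import torch.nn as nn

class CharWordLatticeEncoder(nn.Module):
    """Toy lattice-style encoder: each character embedding is augmented with
    the embeddings of lexicon words that end at that character position."""
    def __init__(self, n_chars, n_words, dim):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.word_emb = nn.Embedding(n_words, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, char_ids, word_matches):
        # char_ids: (1, seq_len); word_matches: {end_position: [word_id, ...]}
        x = self.char_emb(char_ids)
        fused = []
        for pos in range(char_ids.size(1)):
            h = x[:, pos]                                     # (1, dim)
            if pos in word_matches:
                w = self.word_emb(torch.tensor(word_matches[pos]))
                h = h + w.mean(dim=0)                         # add lexicon-word information
            fused.append(h)
        out, _ = self.rnn(torch.stack(fused, dim=1))          # (1, seq_len, dim)
        return out

# usage: four characters, with lexicon words ending at positions 1 and 3
enc = CharWordLatticeEncoder(n_chars=100, n_words=50, dim=32)
states = enc(torch.tensor([[5, 9, 13, 27]]), {1: [3], 3: [7, 11]})
print(states.shape)  # torch.Size([1, 4, 32])
```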

Learning Fine-grained Fact-Article Correspondence in Legal Cases [article]

Jidong Ge, Yunyun Huang, Xiaoyu Shen, Chuanyi Li, Wei Hu
2021 arXiv   pre-print
We treat the learning as a text matching task and propose a multi-level matching network to address it.  ...  Furthermore, we compare with previous research and find that establishing fine-grained fact-article correspondences can improve the recommendation accuracy by a large margin.  ...  and the external knowledge of law articles Q.  ... 
arXiv:2104.10726v3 fatcat:o2ht6u6bifezffvy6nbkxyctrm
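The abstract above frames fact-article correspondence as text matching with a multi-level network. A hedged sketch of a two-level matcher (token-level alignment plus sentence-level similarity), assuming PyTorch; the scoring function and its equal weighting are illustrative, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def multilevel_match_score(fact_vecs, article_vecs):
    """Toy two-level matcher over (length, dim) embedding matrices:
    a token-level alignment score plus a sentence-level score, averaged."""
    # token level: each fact token's best-aligned article token
    sim = F.cosine_similarity(fact_vecs.unsqueeze(1), article_vecs.unsqueeze(0), dim=-1)
    token_level = sim.max(dim=1).values.mean()
    # sentence level: cosine between mean-pooled representations
    sent_level = F.cosine_similarity(fact_vecs.mean(dim=0, keepdim=True),
                                     article_vecs.mean(dim=0, keepdim=True)).squeeze()
    return 0.5 * (token_level + sent_level)

fact = torch.randn(12, 64)     # 12 fact-description tokens, 64-dim embeddings
article = torch.randn(30, 64)  # 30 law-article tokens
print(float(multilevel_match_score(fact, article)))
```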

Lexicon-Enhanced Multi-Task Convolutional Neural Network for Emotion Distribution Learning

Yuchang Dong, Xueqiang Zeng
2022 Axioms  
The LMT-CNN model is an end-to-end multi-module deep neural network that utilizes both semantic information and linguistic knowledge.  ...  Specifically, the architecture of the LMT-CNN model consists of a semantic information module, an emotion knowledge module based on affective words, and a multi-task prediction module to predict emotion  ...  In contrast, deep learning algorithms can automate feature extraction, which allows researchers to extract features with minimal domain knowledge and manpower.  ... 
doi:10.3390/axioms11040181 fatcat:slpbpbkvizctdjw5zqhstogsi4
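The snippet above names three parts: a semantic information module, an affective-lexicon (emotion knowledge) module, and a multi-task prediction module. A minimal PyTorch sketch of that layout; the layer sizes and the LexiconMultiTaskCNN name are assumptions for illustration, not the published model.

```python
import torch
import torch.nn as nn

class LexiconMultiTaskCNN(nn.Module):
    """Illustrative three-part model: text-CNN features plus affective-lexicon
    features feed a shared layer with two task-specific output heads."""
    def __init__(self, vocab, dim, lex_dim, n_emotions):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, 64, kernel_size=3, padding=1)   # semantic module
        self.shared = nn.Linear(64 + lex_dim, 64)                  # fuse with lexicon features
        self.dist_head = nn.Linear(64, n_emotions)                 # emotion distribution task
        self.cls_head = nn.Linear(64, n_emotions)                  # dominant-emotion task

    def forward(self, token_ids, lexicon_feats):
        x = self.emb(token_ids).transpose(1, 2)                    # (B, dim, L)
        x = torch.relu(self.conv(x)).max(dim=2).values             # (B, 64) max-pooled
        h = torch.relu(self.shared(torch.cat([x, lexicon_feats], dim=1)))
        return torch.softmax(self.dist_head(h), dim=1), self.cls_head(h)

model = LexiconMultiTaskCNN(vocab=5000, dim=50, lex_dim=8, n_emotions=6)
dist, logits = model(torch.randint(0, 5000, (2, 20)), torch.rand(2, 8))
print(dist.shape, logits.shape)  # torch.Size([2, 6]) torch.Size([2, 6])
```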

Toward Fast and Accurate Neural Chinese Word Segmentation with Multi-Criteria Learning [article]

Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, Wei Chu
2020 arXiv   pre-print
Multi-criteria Chinese word segmentation aims to capture various annotation criteria among datasets and leverage their common underlying knowledge.  ...  Private and shared projection layers are proposed to capture domain-specific knowledge and common knowledge, respectively.  ...  Prior work pointed out that exploiting external knowledge can improve the CWS accuracy.  ... 
arXiv:1903.04190v2 fatcat:g5hhis3vajdyznv4wphhrqgaju
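The private/shared projection idea above can be sketched as a shared encoder whose output passes through both a shared projection and a criterion-specific one before tagging. A minimal sketch, assuming PyTorch and BMES-style labels; names and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiCriteriaSegmenter(nn.Module):
    """Sketch of private + shared projections over one shared encoder:
    every criterion (dataset) adds its own projection to a shared one
    before each character is tagged with a segmentation label (BMES)."""
    def __init__(self, vocab, dim, n_criteria, n_labels=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.shared_proj = nn.Linear(2 * dim, dim)
        self.private_proj = nn.ModuleList([nn.Linear(2 * dim, dim) for _ in range(n_criteria)])
        self.tagger = nn.Linear(dim, n_labels)

    def forward(self, char_ids, criterion):
        h, _ = self.encoder(self.emb(char_ids))                  # (B, L, 2*dim)
        h = self.shared_proj(h) + self.private_proj[criterion](h)
        return self.tagger(torch.relu(h))                        # (B, L, n_labels)

model = MultiCriteriaSegmenter(vocab=6000, dim=64, n_criteria=3)
logits = model(torch.randint(0, 6000, (1, 10)), criterion=1)
print(logits.shape)  # torch.Size([1, 10, 4])
```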

Background Knowledge Based Multi-Stream Neural Network for Text Classification

Fuji Ren, Jiawen Deng
2018 Applied Sciences  
Background knowledge is composed of keywords and co-occurred words which are extracted from an external corpus.  ...  The multi-stream network mainly consists of the basal stream, which retains the original sequence information, and background knowledge based streams.  ...  The background knowledge is extracted from an external corpus and is composed of keywords and co-occurred words.  ... 
doi:10.3390/app8122472 fatcat:3zt2mcwomvgx7bmxisyyuvy7uy
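The abstract above says the background knowledge consists of keywords and co-occurred words extracted from an external corpus. A toy plain-Python sketch of that extraction step; the frequency-based keyword choice and document-level co-occurrence window are assumptions, not the paper's exact procedure.

```python
from collections import Counter, defaultdict

def build_background_knowledge(corpus, top_k=3):
    """Toy background-knowledge construction from an external corpus:
    keywords = most frequent words, co-occurred words = words that appear
    in the same document as a keyword."""
    freq = Counter(w for doc in corpus for w in doc)
    keywords = [w for w, _ in freq.most_common(top_k)]
    cooccur = defaultdict(Counter)
    for doc in corpus:
        doc_set = set(doc)
        for kw in keywords:
            if kw in doc_set:
                cooccur[kw].update(doc_set - {kw})
    return keywords, {kw: [w for w, _ in c.most_common(3)] for kw, c in cooccur.items()}

corpus = [["stock", "market", "price", "rise"],
          ["market", "trade", "price", "fall"],
          ["team", "match", "score", "win"]]
keywords, cooccurred = build_background_knowledge(corpus)
print(keywords)     # e.g. ['market', 'price', 'stock']
print(cooccurred)   # keyword -> frequently co-occurring words
```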

Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [article]

Mingjie Li, Fuyu Wang, Xiaojun Chang, Xiaodan Liang
2020 arXiv   pre-print
In more detail, ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.  ...  Beyond the common difficulties faced in natural image captioning, medical report generation specifically requires the model to describe a medical image with a fine-grained and semantically coherent paragraph  ...  model to bridge visual and linguistic information.  ... 
arXiv:2006.03744v1 fatcat:3nwchs5irffyvlxyba4vkiajmu
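The snippet above describes fusing internal visual features with external medical linguistic information. A hedged sketch of one simple way to do such a fusion (a learned gate), assuming PyTorch; this is not the paper's ASGK architecture, only an illustration of the fusion idea.

```python
import torch
import torch.nn as nn

class GatedKnowledgeFusion(nn.Module):
    """Illustrative fusion of visual features with an external linguistic
    (medical-knowledge) embedding: a learned gate decides how much of each
    source flows into the representation handed to a report decoder."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, visual, linguistic):
        g = torch.sigmoid(self.gate(torch.cat([visual, linguistic], dim=-1)))
        return g * visual + (1 - g) * linguistic

fuse = GatedKnowledgeFusion(dim=128)
visual = torch.randn(1, 128)       # pooled image features
linguistic = torch.randn(1, 128)   # embedded auxiliary medical text/tags
print(fuse(visual, linguistic).shape)  # torch.Size([1, 128])
```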

Cross-lingual Name Tagging and Linking for 282 Languages

Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, Heng Ji
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB)  ...  Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.  ...  FA8750-13-2-0041 and FA8750-13-2-0045, and NSF CAREER No. IIS-1523198.  ... 
doi:10.18653/v1/p17-1178 dblp:conf/acl/PanZMNKJ17 fatcat:sdo4vpvxk5haxkql3554v4alqa
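The framework above performs three steps: identify name mentions, assign a type, and link each mention to an English KB. A plain-Python sketch of that pipeline, with toy dictionaries standing in for the paper's Wikipedia-derived gazetteer and knowledge base.

```python
# Toy mention detection, coarse-grained typing, and KB linking.
GAZETTEER = {"berlin": "GPE", "siemens": "ORG", "angela merkel": "PER"}
ENGLISH_KB = {"berlin": "Berlin", "siemens": "Siemens_AG", "angela merkel": "Angela_Merkel"}

def tag_and_link(tokens, max_len=3):
    """Greedy longest-match tagging, then lookup of the English KB entry."""
    results, i = [], 0
    while i < len(tokens):
        matched = False
        for j in range(min(len(tokens), i + max_len), i, -1):
            mention = " ".join(tokens[i:j]).lower()
            if mention in GAZETTEER:
                results.append({"mention": " ".join(tokens[i:j]),
                                "type": GAZETTEER[mention],
                                "kb_entry": ENGLISH_KB.get(mention)})
                i, matched = j, True
                break
        if not matched:
            i += 1
    return results

# works on a non-English sentence because matching is surface-form based here
print(tag_and_link("Angela Merkel besuchte Siemens in Berlin".split()))
```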

Extracting events and their relations from texts: A survey on recent research progress and challenges

Kang Liu, Yubo Chen, Jian Liu, Xinyu Zuo, Jun Zhao
2020 AI Open  
This paper summarizes some constructed event-centric knowledge graphs and the recent typical approaches for event and event relation extraction, besides task descriptions and widely used evaluation datasets  ...  In event relation extraction, we focus on the extraction approaches for three typical event relation types: coreference, causal, and temporal relations.  ...  Priority Research Program of Chinese Academy of Sciences (Grant No.  ... 
doi:10.1016/j.aiopen.2021.02.004 fatcat:qxbcmk55vzcb5nznhgfgwrbe4u

TravelBERT: Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation [article]

Hongyin Zhu, Hao Peng, Zhiheng Lyu, Lei Hou, Juanzi Li, Jinghui Xiao
2021 arXiv   pre-print
To capture the corresponding relations among these multi-format knowledge sources, our approach uses a masked language model objective to learn word knowledge, and uses a triple classification objective and a title matching objective to learn entity knowledge and topic knowledge, respectively.  ...  For the TravelOIE dataset, the data annotation relies on the information extraction mechanism of dependency parsing (Qiu and Zhang, 2014), but we did not specifically add linguistic knowledge during  ... 
arXiv:2109.01048v2 fatcat:eaegcialtrbtdjuqxl46nuwedm
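The snippet above lists three pretraining objectives: masked language modeling for word knowledge, triple classification for entity knowledge, and title matching for topic knowledge. A hedged PyTorch sketch of how such heads could be combined into one loss; the head shapes, the equal weighting, and the absence of label masking are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HeterogeneousKnowledgeHeads(nn.Module):
    """Illustrative combination of three objectives: masked-LM loss (word
    knowledge), triple-classification loss (entity knowledge), and
    title-matching loss (topic knowledge)."""
    def __init__(self, hidden, vocab):
        super().__init__()
        self.mlm_head = nn.Linear(hidden, vocab)
        self.triple_head = nn.Linear(hidden, 2)   # triple is correct / corrupted
        self.title_head = nn.Linear(hidden, 2)    # title matches passage or not
        self.ce = nn.CrossEntropyLoss()

    def forward(self, token_states, mlm_labels, pooled, triple_label, title_label):
        mlm_loss = self.ce(self.mlm_head(token_states).view(-1, self.mlm_head.out_features),
                           mlm_labels.view(-1))
        triple_loss = self.ce(self.triple_head(pooled), triple_label)
        title_loss = self.ce(self.title_head(pooled), title_label)
        return mlm_loss + triple_loss + title_loss   # equal weights, for illustration

heads = HeterogeneousKnowledgeHeads(hidden=256, vocab=1000)
loss = heads(torch.randn(2, 16, 256), torch.randint(0, 1000, (2, 16)),
             torch.randn(2, 256), torch.tensor([1, 0]), torch.tensor([0, 1]))
print(float(loss))
```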

LOME: Large Ontology Multilingual Extraction [article]

Patrick Xia, Guanghui Qin, Siddharth Vashishtha, Yunmo Chen, Tongfei Chen, Chandler May, Craig Harman, Kyle Rawlins, Aaron Steven White, Benjamin Van Durme
2021 arXiv   pre-print
By doing so, the system constructs an event and entity focused knowledge graph. We can further apply third-party modules for other types of annotation, like relation extraction.  ...  It subsequently performs coreference resolution, fine-grained entity typing, and temporal relation prediction between events.  ...  Acknowledgments We thank Kenton Murray, Manling Li, Varun Iyer, and Zhuowan Li for helpful discussions and feedback.  ... 
arXiv:2101.12175v2 fatcat:aviegtknb5hb5kf5eysd5qdili
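The pipeline above ends by constructing an event- and entity-focused knowledge graph. A small plain-Python sketch of such a graph container, filled with hand-written example nodes and edges; the node/edge schema is illustrative, not LOME's output format.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    """Minimal event- and entity-focused graph: typed nodes plus labeled edges."""
    nodes: dict = field(default_factory=dict)   # node id -> {"text", "kind", "type"}
    edges: list = field(default_factory=list)   # (source, label, target)

    def add_node(self, nid, text, kind, fine_type):
        self.nodes[nid] = {"text": text, "kind": kind, "type": fine_type}

    def add_edge(self, src, label, tgt):
        self.edges.append((src, label, tgt))

kg = KnowledgeGraph()
# the kind of output a staged pipeline could produce for a short news sentence
kg.add_node("e1", "protesters", "entity", "per.group")
kg.add_node("v1", "marched", "event", "movement")
kg.add_node("v2", "arrested", "event", "justice.arrest")
kg.add_edge("v1", "participant", "e1")
kg.add_edge("v1", "before", "v2")        # temporal relation between events
print(len(kg.nodes), "nodes,", len(kg.edges), "edges")
```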

Neural relation extraction: a review

2020 Turkish Journal of Electrical Engineering and Computer Sciences  
external knowledge bases are used to enhance the weakly labeled training set.  ...  A triple (h, r, t) implies that entity h has relation r with another entity t. Knowledge graphs (KG) such as FreeBase [4] and DBpedia [2] are examples of such representations.  ...  ., and Li, P. (2018b). Hierarchical relation extraction with coarse-to-fine grained attention.  ... 
doi:10.3906/elk-2005-119 fatcat:o36duadbunhmbesuyayc5jfmxe
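The review snippet above defines a triple (h, r, t) and notes that external knowledge bases are used to enhance the weakly labeled training set. A toy sketch of both ideas: a triple record and the distant-supervision heuristic that labels a sentence with any KB relation whose head and tail both appear in it; the mini KB is a stand-in for resources like FreeBase or DBpedia.

```python
from collections import namedtuple

# A triple (h, r, t): head entity h has relation r with tail entity t.
Triple = namedtuple("Triple", ["head", "relation", "tail"])

KB = [
    Triple("Paris", "capital_of", "France"),
    Triple("Seine", "flows_through", "Paris"),
]

def distant_label(sentence, kb):
    """Weakly label a sentence with every KB relation whose head and tail
    entities both appear in it -- the distant-supervision heuristic."""
    return [t for t in kb if t.head in sentence and t.tail in sentence]

print(distant_label("Paris is the capital and largest city of France.", KB))
# [Triple(head='Paris', relation='capital_of', tail='France')]
```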

Linguistically Annotated Reordering: Evaluation and Analysis

Deyi Xiong, Min Zhang, Aiti Aw, Haizhou Li
2010 Computational Linguistics  
When combined with BWR, LAR provides complementary information for phrase reordering, which collectively improves the BLEU score significantly.  ...  In LAR, we build hard hierarchical skeletons and inject soft linguistic knowledge from source parse trees into the nodes of the hard skeletons during translation.  ...  Acknowledgments We would like to thank the three anonymous reviewers for their helpful comments and suggestions.  ... 
doi:10.1162/coli_a_00009 fatcat:kzn6ydkakzecparatgjgtcnyfq

A Survey of Implicit Discourse Relation Recognition [article]

Wei Xiang, Bang Wang
2022 arXiv   pre-print
The task of implicit discourse relation recognition (IDRR) is to detect an implicit relation and classify its sense between two text segments without a connective.  ...  We also present performance comparisons for those solutions evaluated on a public corpus with standard data processing procedures.  ...  Ji and Eisenstein [41] proposed to further augment a tree-structured RNN with external entity mentions and some other linguistically informed features, like word-pair features, to enrich input words'  ... 
arXiv:2203.02982v1 fatcat:ubublxw2fnfdpexgw4jslj76tm

Enhancing Topic-to-Essay Generation with External Commonsense Knowledge

Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, Xu Sun
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Experiments show that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent than those of existing methods in terms of both automatic and  ...  However, this commonsense knowledge provides additional background information, which can help to generate essays that are more novel and diverse.  ...  This shows that with the help of external commonsense knowledge, the source information can be enriched, leading to outputs that are more novel and diverse.  ... 
doi:10.18653/v1/p19-1193 dblp:conf/acl/YangLLLS19 fatcat:pa3msr4qezgm3or7ocegzcl44u
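The abstract above argues that commonsense knowledge enriches the source topics with background information. A toy sketch of that enrichment step; the small concept dictionary is a stand-in for a commonsense resource such as ConceptNet, and the simple lookup is illustrative, not the paper's memory mechanism.

```python
# Toy commonsense graph: topic word -> related background concepts.
COMMONSENSE = {
    "winter": ["snow", "cold", "skiing"],
    "family": ["home", "parents", "dinner"],
}

def enrich_topics(topics, graph, per_topic=2):
    """Expand the topic words with commonsense neighbors, giving the
    essay generator extra background information to condition on."""
    enriched = list(topics)
    for t in topics:
        enriched.extend(graph.get(t, [])[:per_topic])
    return enriched

print(enrich_topics(["winter", "family"], COMMONSENSE))
# ['winter', 'family', 'snow', 'cold', 'home', 'parents']
```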

BERT Based Chinese Relation Extraction for Public Security

Jiaqi Hou, Xin Li, Haipeng Yao, Haichun Sun, Tianle Mai, Rongchen Zhu
2020 IEEE Access  
With the advent of the big data era, effectively extracting public security information from the internet has become of great significance.  ...  Therefore, in this paper, we propose a Bidirectional Encoder Representations from Transformers (BERT) based Chinese relation extraction algorithm for public security, which can effectively mine security  ...  ACKNOWLEDGMENT The authors would like to thank the anonymous reviewers for their helpful feedback and suggestions.  ... 
doi:10.1109/access.2020.3002863 fatcat:6kolwgcalzar3insvgn43e2heu
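A hedged sketch of a generic BERT-based Chinese relation classifier, assuming the Hugging Face transformers library and the public bert-base-chinese checkpoint: encode the sentence, take the [CLS] state, and classify the relation. This is the standard setup, not necessarily the exact model in the paper above.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertRelationClassifier(nn.Module):
    """Generic BERT-based relation classifier: encode the sentence,
    take the [CLS] representation, and classify the relation."""
    def __init__(self, n_relations, model_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, n_relations)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] token state
        return self.classifier(cls)                # relation logits

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertRelationClassifier(n_relations=5)
batch = tokenizer(["张三 毕业 于 北京大学"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 5])
```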
Showing results 1 — 15 out of 3,735 results