303 Hits in 4.3 sec

Improving Generalization in Coreference Resolution via Adversarial Training [article]

Sanjay Subramanian, Dan Roth
2019 arXiv   pre-print
In order for coreference resolution systems to be useful in practice, they must be able to generalize to new text.  ...  training set.  ...  This work was supported in part by contract HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA).  ... 
arXiv:1908.04728v1 fatcat:bngypk5al5fj5emdk47somhen4

Improving Generalization in Coreference Resolution via Adversarial Training

Sanjay Subramanian, Dan Roth
2019 Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
In order for coreference resolution systems to be useful in practice, they must be able to generalize to new text.  ...  training set.  ...  This work was supported in part by contract HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA).  ... 
doi:10.18653/v1/s19-1021 dblp:conf/starsem/SubramanianR19 fatcat:hx5zapgqzrc6xli6kdjzycrrdy
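
The snippets above say only that adversarial training is used to improve generalization; the sketch below illustrates one way such examples could be built, assuming the perturbation simply swaps mention names for unseen ones. The `substitute_names` helper, the `NAME_POOL` list, and the single-token-name heuristic are illustrative assumptions, not the authors' actual procedure.

```python
import random

# Hypothetical pool of replacement names; the paper's actual perturbation
# scheme may differ.
NAME_POOL = ["Riya", "Tomás", "Keiko", "Abebe", "Saoirse"]

def substitute_names(tokens, mention_spans, seed=0):
    """Return a copy of `tokens` in which every single-token, capitalized
    mention is replaced by a name sampled from NAME_POOL. Coreference
    labels stay valid, so the perturbed document can simply be added to
    the training set."""
    rng = random.Random(seed)
    new_tokens = list(tokens)
    for start, end in mention_spans:
        if end - start == 1 and tokens[start][0].isupper():
            new_tokens[start] = rng.choice(NAME_POOL)
    return new_tokens

if __name__ == "__main__":
    tokens = "John met Mary and he thanked her".split()
    mentions = [(0, 1), (2, 3), (4, 5), (6, 7)]   # John, Mary, he, her
    print(substitute_names(tokens, mentions))
```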

On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning [article]

Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren
2021 arXiv   pre-print
Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution.  ...  We find, in extensive experiments across hate speech detection, toxicity detection, occupation prediction, and coreference resolution tasks over various bias factors, that the effects of UBM are indeed  ...  , and coreference resolution in English corpora.  ... 
arXiv:2010.12864v2 fatcat:kqydvgnh5bdwfp22iz4vtfd2oi
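
As a rough illustration of the transfer setup this entry describes (mitigate bias while fine-tuning on an upstream task, then fine-tune the resulting checkpoint on a downstream task), here is a minimal sketch using the Hugging Face `transformers` library. The `ubm-checkpoint/` path is a hypothetical location for such an upstream model; nothing here reproduces the paper's actual UBM procedure.

```python
# Requires the `transformers` library. "ubm-checkpoint/" is a hypothetical
# directory holding an upstream model fine-tuned with bias mitigation.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def load_for_downstream(checkpoint="ubm-checkpoint/", num_labels=2):
    """Initialize a downstream classifier from the bias-mitigated upstream
    checkpoint instead of the vanilla pre-trained weights; a classification
    head sized for the new task is re-initialized when the upstream head
    does not match."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=num_labels, ignore_mismatched_sizes=True)
    return tokenizer, model
```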

Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, Matt Gardner
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
Machine comprehension of texts longer than a single sentence often requires coreference resolution.  ...  We deal with this issue by using a strong baseline model as an adversary in the crowdsourcing loop, which helps crowdworkers avoid writing questions with exploitable surface cues.  ...  Requirement of coreference resolution We found that 78% of the manually analyzed questions cannot be answered without coreference resolution.  ... 
doi:10.18653/v1/d19-1606 dblp:conf/emnlp/DasigiLMSG19 fatcat:mc43ix73djfhnc4dg54hbdbix4

Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning [article]

Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, Matt Gardner
2019 arXiv   pre-print
Machine comprehension of texts longer than a single sentence often requires coreference resolution.  ...  We deal with this issue by using a strong baseline model as an adversary in the crowdsourcing loop, which helps crowdworkers avoid writing questions with exploitable surface cues.  ...  Requirement of coreference resolution We found that 78% of the manually analyzed questions cannot be answered without coreference resolution.  ... 
arXiv:1908.05803v2 fatcat:7cb25dknwjb67pbwmhbarwryzq
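
The Quoref entries above describe using a strong baseline reader as an adversary in the crowdsourcing loop. The sketch below shows the core filtering idea implied by that description: keep a candidate question only if the baseline gets it wrong. `baseline_answer` is a placeholder for any QA model, and the answer normalization is deliberately simplistic.

```python
def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def accept_question(question: str, passage: str, gold_answer: str,
                    baseline_answer) -> bool:
    """Reject questions the baseline already gets right, which pushes
    crowdworkers away from questions answerable by surface cues alone."""
    prediction = baseline_answer(question, passage)
    return normalize(prediction) != normalize(gold_answer)

if __name__ == "__main__":
    # Trivial stand-in baseline: always returns the first passage token.
    dummy_baseline = lambda q, p: p.split()[0]
    print(accept_question("Who slept?", "Mary slept while John read.",
                          "Mary", dummy_baseline))   # False -> rejected
```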

A Brief Survey and Comparative Study of Recent Development of Pronoun Coreference Resolution [article]

Hongming Zhang, Xinran Zhao, Yangqiu Song
2020 arXiv   pre-print
Compared with the general coreference resolution task, the main challenge of PCR is the coreference relation prediction rather than the mention detection.  ...  In this survey, we first introduce representative datasets and models for the ordinary pronoun coreference resolution task.  ...  In general, all models perform better on frequent objects because they appear more in the training data.  ... 
arXiv:2009.12721v1 fatcat:ndvcpk35h5c2xeeim5a3obh7ta

Coreferential Reasoning Learning for Language Representation [article]

Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Peng Li, Maosong Sun, Zhiyuan Liu
2020 arXiv   pre-print
However, most existing language representation models cannot explicitly handle coreference, which is essential to the coherent understanding of the whole discourse.  ...  The experimental results show that, compared with existing baseline models, CorefBERT can achieve significant improvements consistently on various downstream NLP tasks that require coreferential reasoning  ...  F.7 Resolving the Coreference in the Corpus In our preliminary experiment, we resolve the coreference of training corpus via the StanfordNLP tool and apply our copy-based objective on this training  ...
arXiv:2004.06870v2 fatcat:gdxj7yucdzb6ll4cbsgf3j72fy
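
The CorefBERT snippet mentions a copy-based objective applied after resolving coreference in the training corpus. Below is a minimal PyTorch sketch of a copy-style loss in that spirit: a masked mention is trained to point at its coreferent occurrence in the same sequence. This is an illustrative reconstruction, not the paper's exact training objective; `copy_loss` and its arguments are assumptions.

```python
import torch
import torch.nn.functional as F

def copy_loss(hidden, masked_pos, antecedent_pos, candidate_mask):
    """hidden: (seq_len, dim) encoder states for one sequence.
    masked_pos: index of the masked mention token.
    antecedent_pos: index of the coreferent token to be copied.
    candidate_mask: (seq_len,) bool tensor marking valid copy candidates."""
    scores = hidden @ hidden[masked_pos]               # (seq_len,)
    scores = scores.masked_fill(~candidate_mask, float("-inf"))
    log_probs = F.log_softmax(scores, dim=-1)
    return -log_probs[antecedent_pos]                  # copy the antecedent

if __name__ == "__main__":
    torch.manual_seed(0)
    hidden = torch.randn(10, 16, requires_grad=True)
    mask = torch.zeros(10, dtype=torch.bool)
    mask[[2, 7]] = True                                # candidate mentions
    loss = copy_loss(hidden, masked_pos=5, antecedent_pos=7,
                     candidate_mask=mask)
    loss.backward()
    print(float(loss))
```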

What's in a Name? Are BERT Named Entity Representations just as Good for any other Name? [article]

Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, Sunita Sarawagi
2020 arXiv   pre-print
We highlight that on several tasks while such perturbations are natural, state of the art trained models are surprisingly brittle.  ...  Experiments on three NLP tasks show that our method enhances robustness and increases accuracy on both natural and adversarial datasets.  ...  Coreference Resolution (CoRef) Task Coreference resolution refers to the problem of finding all expressions that refer to the same entity in a text.  ... 
arXiv:2007.06897v1 fatcat:risou5ezp5fuvftprgbhqxdu6u
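
The entry above probes whether trained models are brittle to natural name perturbations. A minimal sketch of such a probe: replace one entity name and check whether the prediction flips. `predict`, the names, and the toy model are illustrative stand-ins, not the paper's setup.

```python
def perturb(sentence: str, original: str, replacement: str) -> str:
    return sentence.replace(original, replacement)

def is_brittle(sentence, original, replacement, predict) -> bool:
    """True if merely renaming an entity flips the model's prediction."""
    return predict(sentence) != predict(perturb(sentence, original, replacement))

if __name__ == "__main__":
    # Toy "model" that keys on the surface form of the name -- exactly the
    # kind of brittleness the probe is meant to expose.
    toy_predict = lambda s: "positive" if "Alice" in s else "negative"
    print(is_brittle("Alice fixed the bug quickly.", "Alice", "Priya",
                     toy_predict))   # True
```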

A Neural Entity Coreference Resolution Review [article]

Nikolaos Stylianou, Ioannis Vlahavas
2019 arXiv   pre-print
Emphasis is given to Pronoun Resolution, a subtask of Coreference Resolution, which has seen various improvements in recent years.  ...  Entity Coreference Resolution is the task of resolving all the mentions in a document that refer to the same real-world entity and is considered one of the most difficult tasks in natural language understanding  ...  the context of the project "Strengthening Human Resources Research Potential via Doctorate Research" (MIS-5000432), implemented by the State Scholarships Foundation (ΙΚΥ).  ...
arXiv:1910.09329v1 fatcat:mmzwofjl2zajdkmarg2y52mfkm
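
For readers new to the task this review surveys, here is a tiny illustration of the expected output: mentions of the same real-world entity grouped into a cluster of token spans. The example sentence and the exclusive-end offsets are our own, not drawn from the paper.

```python
tokens = "Marie Curie won the prize because she discovered radium".split()
# One cluster: the spans "Marie Curie" and "she" refer to the same entity.
clusters = [[(0, 2), (6, 7)]]
for cluster in clusters:
    print([" ".join(tokens[s:e]) for s, e in cluster])
```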

A general framework for information extraction using dynamic span graphs

Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, Hannaneh Hajishirzi
2019 Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)
This is unlike previous multitask frameworks for information extraction in which the only interaction between tasks is in the shared first-layer LSTM.  ...  We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs.  ...  and coreference resolution.  ... 
doi:10.18653/v1/n19-1308 dblp:conf/naacl/LuanWHSOH19 fatcat:fwvmu7ifz5d6fb36xcy5ho6igu

A General Framework for Information Extraction using Dynamic Span Graphs [article]

Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, Hannaneh Hajishirzi
2019 arXiv   pre-print
This is unlike previous multi-task frameworks for information extraction in which the only interaction between tasks is in the shared first-layer LSTM.  ...  We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs.  ...  and coreference resolution.  ... 
arXiv:1904.03296v1 fatcat:mq6dvql3qfhfnmfqgu5kqcjhsq
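
Both span-graph entries above hinge on span representations shared across information extraction tasks. The sketch below shows only that shared step under simple assumptions (enumerate spans up to a maximum width, represent each by its endpoint states); the dynamic graph propagation over coreference and relation links is omitted, and the endpoint-concatenation choice is ours, not necessarily the paper's.

```python
import torch

def enumerate_spans(seq_len: int, max_width: int):
    # (start, end) with inclusive end index, widths 1..max_width.
    return [(i, j) for i in range(seq_len)
            for j in range(i, min(i + max_width, seq_len))]

def span_embeddings(token_states: torch.Tensor, spans):
    """token_states: (seq_len, dim). Returns (num_spans, 2 * dim), the
    concatenation of each span's start and end token states."""
    starts = torch.tensor([s for s, _ in spans])
    ends = torch.tensor([e for _, e in spans])
    return torch.cat([token_states[starts], token_states[ends]], dim=-1)

if __name__ == "__main__":
    states = torch.randn(6, 8)           # e.g. BiLSTM outputs for 6 tokens
    spans = enumerate_spans(6, max_width=3)
    print(len(spans), span_embeddings(states, spans).shape)
```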

Extracting events and their relations from texts: A survey on recent research progress and challenges

Kang Liu, Yubo Chen, Jian Liu, Xinyu Zuo, Jun Zhao
2020 AI Open  
...  2) how to extract relations across sentences or at the document level; 3) how to acquire or augment labeled instances for model training.  ...  In event relation extraction, we focus on the extraction approaches for three typical event relation types, including coreference, causal and temporal relations, respectively.  ...  The statistics of commonly used datasets for event coreference resolution are listed in Table 7.  ...
doi:10.1016/j.aiopen.2021.02.004 fatcat:qxbcmk55vzcb5nznhgfgwrbe4u

Predicting Simplified Thematic Progression Pattern for Discourse Analysis

Xuefeng Xi, Victor S. Sheng, Shuhui Yang, Baochuan Fu, Zhiming Cui
2020 Computers, Materials & Continua
Furthermore, these features are used in a hybrid approach to a major discourse analysis task, Chinese coreference resolution.  ...  This novel approach is built up via heuristic sieves and a machine learning method that comprehensively utilizes both the top-down STPP features and the bottom-up semantic features.  ...  progression pattern with a generative adversarial network.  ...
doi:10.32604/cmc.2020.06992 fatcat:gdrwsmseszcylnwjbzttwkdt4q

Measuring and Reducing Gendered Correlations in Pre-trained Models [article]

Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, Slav Petrov
2021 arXiv   pre-print
We explore such gendered correlations as a case study for how to address unintended correlations in pre-trained models.  ...  general mitigations.  ...  Coreference Resolution We measure gendered correlations in coreference resolution using the WinoGender evaluation dataset trained on OntoNotes (Hovy et al., 2006) .  ... 
arXiv:2010.06032v2 fatcat:nypjdkct2bg4jaw3scmcaph2ca
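
This entry measures gendered correlations in coreference with WinoGender-style templates. Below is a minimal sketch of such a probe: minimal pairs differing only in the pronoun, scored by how often the predicted antecedent changes. The template, the `resolve` placeholder, and the scoring are illustrative, not the paper's actual metric.

```python
TEMPLATES = [
    ("The nurse notified the patient that {} shift would end soon.",
     "nurse"),
]

def gender_delta(resolve, pronouns=("her", "his")) -> float:
    """Fraction of templates whose predicted antecedent depends on the
    pronoun used -- a rough gendered-correlation score."""
    changed = 0
    for template, _occupation in TEMPLATES:
        answers = {resolve(template.format(p), p) for p in pronouns}
        changed += len(answers) > 1
    return changed / len(TEMPLATES)

if __name__ == "__main__":
    # Toy resolver that stereotypes "her" to the nurse -- scores 1.0 here.
    toy = lambda sent, pron: "nurse" if pron == "her" else "patient"
    print(gender_delta(toy))
```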

Multi-Task Learning in Natural Language Processing: An Overview [article]

Shijie Chen, Yu Zhang, Qiang Yang
2021 arXiv   pre-print
We first review MTL architectures used in NLP tasks and categorize them into four classes, including the parallel architecture, hierarchical architecture, modular architecture, and generative adversarial  ...  In recent years, Multi-Task Learning (MTL), which can leverage useful information of related tasks to achieve simultaneous performance improvement on multiple related tasks, has been used to handle these  ...  are mapped into a unified feature space via generative adversarial training.  ... 
arXiv:2109.09138v1 fatcat:hlgzjykuvzczzmsgnl32w5qo5q
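
Of the four MTL architecture classes this overview lists, the parallel one is the simplest to show in code: a shared encoder feeding task-specific heads. The sketch below is a generic PyTorch example with illustrative sizes and tasks, not an architecture taken from the survey.

```python
import torch
import torch.nn as nn

class ParallelMTL(nn.Module):
    """Hard parameter sharing: one encoder, one output head per task."""
    def __init__(self, vocab_size=1000, dim=64, task_classes=(2, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)    # shared layers
        self.heads = nn.ModuleList(
            [nn.Linear(dim, n) for n in task_classes])        # per-task heads

    def forward(self, token_ids, task_id):
        _, h = self.encoder(self.embed(token_ids))
        return self.heads[task_id](h[-1])                     # (batch, n_classes)

if __name__ == "__main__":
    model = ParallelMTL()
    batch = torch.randint(0, 1000, (4, 12))
    print(model(batch, task_id=0).shape, model(batch, task_id=1).shape)
```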
Showing results 1 — 15 out of 303 results