Enhanced Meta-Learning for Cross-lingual Named Entity Recognition with Minimal Resources
[article] · 2020 · arXiv pre-print
We conduct extensive experiments on cross-lingual named entity recognition with minimal resources over five target languages. ...
For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). ...
Enhanced Meta-Learning for Cross-Lingual NER with Minimal Resources In this section, we elaborate on the proposed approach. ...
arXiv:1911.06161v2 · fatcat:zzwpoo5rz5g4tbmldwvuxiovtu
Enhanced Meta-Learning for Cross-Lingual Named Entity Recognition with Minimal Resources
2020 · Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
We conduct extensive experiments on cross-lingual named entity recognition with minimal resources over five target languages. ...
For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). ...
Enhanced Meta-Learning for Cross-Lingual NER with Minimal Resources In this section, we elaborate on the proposed approach. ...
doi:10.1609/aaai.v34i05.6466 · fatcat:aq4zussw7ncyvc7dzo2olnv4xu
Zero-Resource Cross-Domain Named Entity Recognition
[article] · 2020 · arXiv pre-print
Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. ...
We first introduce Multi-Task Learning (MTL), adding a new objective function that detects whether tokens are named entities or not. ...
Winata et al. (2020) introduced the cross-accent speech recognition task and utilized meta-learning to cope with the data scarcity issue in target accents. ...
arXiv:2002.05923v2 · fatcat:waj3ssskzbgidl74uowht3jroq
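The snippet above describes an auxiliary multi-task objective that predicts whether each token belongs to a named entity. A minimal sketch of how such binary auxiliary labels can be derived from standard BIO tags (function names and the loss weighting are illustrative, not the paper's code):

```python
# Illustrative sketch: derive the auxiliary "is this token part of an
# entity?" labels from standard BIO tags, so a second objective can be
# trained alongside the usual NER tagging loss.

def entity_detection_labels(bio_tags):
    """Map BIO tags to binary labels: 1 = entity token, 0 = outside."""
    return [0 if tag == "O" else 1 for tag in bio_tags]

def joint_loss(ner_loss, aux_loss, weight=0.5):
    """Combine the two objectives; the weight is a tunable hyperparameter."""
    return ner_loss + weight * aux_loss

print(entity_detection_labels(["B-PER", "I-PER", "O", "B-LOC", "O"]))
# [1, 1, 0, 1, 0]
```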
Knowledge Extraction in Low-Resource Scenarios: Survey and Perspective
[article] · 2022 · arXiv pre-print
In addition, we describe promising applications and outline some potential directions for future research. ...
... low-resource scenarios. Many neural approaches to low-resource KE have been widely investigated and achieved impressive performance. ...
For instance, given a sentence "Jack is married to the Iraqi microbiologist known as Dr. Germ.": Named Entity Recognition should identify the types of entities, e.g., 'Jack', 'Dr. ...
arXiv:2202.08063v1 · fatcat:2q64tx2mzne53gt24adi6ymj7a
X-METRA-ADA: Cross-lingual Meta-Transfer Learning Adaptation to Natural Language Understanding and Question Answering
[article] · 2021 · arXiv pre-print
Recently, meta-learning has garnered attention as a promising technique for enhancing transfer learning under low-resource scenarios: particularly for cross-lingual transfer in Natural Language Understanding ...
In this work, we propose X-METRA-ADA, a cross-lingual MEta-TRAnsfer learning ADAptation approach for NLU. ...
Enhanced meta-learning for cross-lingual named entity recognition with minimal resources. ...
arXiv:2104.09696v2 · fatcat:yt6um3pmbrf5zh7bdclsu5duoi
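The snippet above frames cross-lingual transfer as a meta-learning problem. As a toy illustration of the MAML-style inner/outer loop that such approaches build on (a scalar first-order sketch on invented task losses, not the X-METRA-ADA implementation):

```python
# Toy first-order MAML sketch. Each "task" is fitting a scalar target t
# with loss (w - t)^2; the meta-update moves the shared initialization w
# toward a value that adapts quickly to every task.

def adapt(w, t, inner_lr=0.1, steps=3):
    """Inner loop: a few gradient steps on one task's loss (w - t)^2."""
    for _ in range(steps):
        w = w - inner_lr * 2 * (w - t)
    return w

def meta_step(w, tasks, outer_lr=0.05):
    """Outer loop (first-order): apply each task's post-adaptation
    gradient to the shared initialization w."""
    grad = 0.0
    for t in tasks:
        w_task = adapt(w, t)
        grad += 2 * (w_task - t)   # task-loss gradient at adapted params
    return w - outer_lr * grad / len(tasks)

w = 0.0
for _ in range(100):
    w = meta_step(w, tasks=[1.0, 3.0])
# w approaches 2.0, the initialization equidistant from both tasks
```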
UniTrans: Unifying Model Transfer and Data Transfer for Cross-Lingual Named Entity Recognition with Unlabeled Data
[article] · 2020 · arXiv pre-print
Prior works in cross-lingual named entity recognition (NER) with no/little labeled data fall into two primary categories: model transfer based and data transfer based methods. ...
To handle both problems, we propose a novel approach termed UniTrans to Unify both model and data Transfer for cross-lingual NER, and furthermore, to leverage the available information from unlabeled target-language ...
Introduction Named entity recognition (NER) is a fundamental task in natural language processing, which seeks to locate and classify named entities, like locations, organizations, etc., in unstructured ...
arXiv:2007.07683v1 · fatcat:hcbecmliv5hr3mg3h6mrrlp47a
Low-Resource Adaptation of Neural NLP Models
[article] · 2020 · arXiv pre-print
The objective of this thesis is to investigate methods for dealing with such low-resource scenarios in information extraction and natural language understanding. ...
We develop and adapt neural NLP models to explore a number of research questions concerning NLP tasks with minimal or no training data. ...
"Neural Architectures for Named Entity Recognition". ...
arXiv:2011.04372v1 · fatcat:626mbe5ba5bkdflv755o35u5pq
A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios
[article] · 2021 · arXiv pre-print
A goal of our survey is to explain how these methods differ in their requirements as understanding them is essential for choosing a technique suited for a specific low-resource setting. ...
As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings. ...
"Cross-lingual text classification with minimal ..." In Workshop on Life-long Learning for Spoken Language Systems, pages 18–26, Suzhou, China, 2020. ...
arXiv:2010.12309v3 · fatcat:26dwmlkmn5auha2ob2qdlrvla4
Reinforced Iterative Knowledge Distillation for Cross-Lingual Named Entity Recognition
[article] · 2021 · arXiv pre-print
Named entity recognition (NER) is a fundamental component in many applications, such as Web Search and Voice Assistants. ...
To tackle this challenge, cross-lingual NER transfers knowledge from a rich-resource language to languages with low resources through pre-trained multilingual language models. ...
Problem Definition and Preliminaries We model cross-lingual named entity recognition as a sequence labeling problem. ...
arXiv:2106.00241v1 · fatcat:mvayp27cy5hc5gwefgan3ynr3e
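The problem definition above treats cross-lingual NER as sequence labeling. A minimal sketch of that view: each token receives a BIO tag, and entity mentions are recovered by decoding contiguous tag spans (a simplified decoder that does not handle malformed tag sequences):

```python
# Sketch of the sequence-labeling view of NER: tokens carry BIO tags,
# and entity mentions are read off as (text, type) spans.

def decode_bio(tokens, tags):
    """Recover (entity_text, entity_type) spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):       # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((" ".join(tokens[start:i]), etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

print(decode_bio(["Barack", "Obama", "visited", "Paris"],
                 ["B-PER", "I-PER", "O", "B-LOC"]))
# [('Barack Obama', 'PER'), ('Paris', 'LOC')]
```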
A Common Semantic Space for Monolingual and Cross-Lingual Meta-Embeddings
[article] · 2021 · arXiv pre-print
The resulting cross-lingual meta-embeddings also exhibit excellent cross-lingual transfer learning capabilities. ...
This paper presents a new technique for creating monolingual and cross-lingual meta-embeddings. ...
We also acknowledge the support of the NVIDIA Corporation with the donation of a Titan V GPU used for this research. ...
arXiv:2001.06381v2 · fatcat:37n6qk7shfakjjaejno2fyojnu
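The paper above learns a common semantic space for meta-embeddings. For intuition, a much simpler classic baseline combines several source spaces by averaging a word's vectors (this sketch assumes equal dimensionality across sources; the learned-projection technique the paper proposes is more involved):

```python
# Illustrative baseline, not the paper's method: build a meta-embedding
# by averaging a word's vectors across the source spaces that contain it.

def meta_embed(word, sources):
    """Average the word's vectors over all source embeddings that know it."""
    vecs = [s[word] for s in sources if word in s]
    if not vecs:
        raise KeyError(word)
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

src_a = {"cat": [1.0, 0.0]}
src_b = {"cat": [0.0, 1.0], "dog": [1.0, 1.0]}
print(meta_embed("cat", [src_a, src_b]))  # [0.5, 0.5]
print(meta_embed("dog", [src_a, src_b]))  # [1.0, 1.0]
```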
Exploring Bilingual Parallel Corpora for Syntactically Controllable Paraphrase Generation
2020 · Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020)
In order to train one model over the two languages of parallel corpora, we embed sentences of them into the same content and style spaces with shared content and style encoders using cross-lingual word ...
Additionally, we introduce cycle and masking learning schemes to efficiently train the model. ...
doi:10.24963/ijcai.2020/543 · dblp:conf/ijcai/WuLKHL20 · fatcat:dyibs6hi7rc47ly75wglfvqhk4
Multi-Task Learning in Natural Language Processing: An Overview
[article] · 2021 · arXiv pre-print
Deep learning approaches have achieved great success in the field of Natural Language Processing (NLP). ...
In recent years, Multi-Task Learning (MTL), which leverages useful information from related tasks to improve performance on several tasks simultaneously, has been used to handle these ...
This dataset contains 51,164 questions in 9 categories, 3361 logical form patterns, and 23,144 entities. • ECSA [33] is a dataset for slot filling, named entity recognition, and segmentation to evaluate ...
arXiv:2109.09138v1 · fatcat:hlgzjykuvzczzmsgnl32w5qo5q
Unsupervised Cross-lingual Adaptation for Sequence Tagging and Beyond
[article] · 2021 · arXiv pre-print
Cross-lingual adaptation with multilingual pre-trained language models (mPTLMs) mainly consists of two lines of works: zero-shot approach and translation-based approach, which have been studied extensively ...
Then, the adaptation approach is applied to the refined parameters and the cross-lingual transfer is performed in a warm-start way. ...
... it still remains unknown which one is better on more fine-grained XLU tasks, such as named entity recognition. ...
arXiv:2010.12405v3 · fatcat:vi7stnnkfbdbvc6krap436tu5u
Information Extraction: Past, Present and Future
[chapter] · 2012 · Multi-source, Multilingual Information Extraction and Summarization
Such a record may capture a real-world entity with its attributes mentioned in text, or a real-world event, occurrence, or state, with its arguments or actors: who did what to whom, where and when. ...
Searching for specific, targeted factual information constitutes a large proportion of all searching activity on the part of information consumers. ...
• Named-entity extraction: Detection of domain-independent named entities, such as temporal expressions, numbers and currency, geographical references, etc.
• Phrase recognition: Recognition of small-scale ...
doi:10.1007/978-3-642-28569-1_2 · dblp:series/tanlp/PiskorskiY13 · fatcat:aoc7stoinzf6jc2dengl5ltwte
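The chapter's domain-independent named entities (temporal expressions, numbers, currency) are often handled with patterns rather than learned models. A toy pattern-based sketch; the regular expressions are illustrative and far from complete:

```python
# Toy pattern-based detection of two "domain-independent" entity types:
# currency amounts and four-digit years.
import re

CURRENCY = re.compile(r"[$€£]\s?\d+(?:,\d{3})*(?:\.\d+)?")
YEAR = re.compile(r"\b(?:19|20)\d{2}\b")

text = "The company raised $1,200,000 in 2019."
print(CURRENCY.findall(text))  # ['$1,200,000']
print(YEAR.findall(text))      # ['2019']
```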
Neural entity linking: A survey of models based on deep learning
2022 · Semantic Web Journal
including zero-shot and distant supervision methods, and cross-lingual approaches. ...
This survey presents a comprehensive description of recent neural entity linking (EL) systems developed since 2015 as a result of the "deep learning revolution" in natural language processing. ...
The work of Artem Shelmanov in the current study (preparation of sections related to application of entity linking to neural language models, entity ranking, context-mention encoding, and overall harmonization ...
doi:10.3233/sw-222986 · fatcat:6gwmbtev7ngbliovf6cpf5hyde
Showing results 1–15 of 987 results