70 Hits in 10.1 sec

A Two-stage Model for Slot Filling in Low-resource Settings: Domain-agnostic Non-slot Reduction and Pretrained Contextual Embeddings

Cennet Oguz, Ngoc Thang Vu
2020 Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing   unpublished
The second step identifies slot names only for slot tokens by using state-of-the-art pretrained contextual embeddings such as ELMo and BERT.  ...  In this paper, we propose a novel two-stage model architecture that can be trained with only a few in-domain hand-labeled examples.  ...  Conclusion and Future Work We propose a novel two-stage model for slot filling in low-resource domains.  ... 
doi:10.18653/v1/2020.sustainlp-1.10 fatcat:wb3kctcpg5blzcqa2um4lmnyxa

A survey of joint intent detection and slot-filling models in natural language understanding [article]

H. Weld, X. Huang, S. Long, J. Poon, S. C. Han
2021 arXiv   pre-print
However, more recently, joint models for intent classification and slot filling have achieved state-of-the-art performance, and have proved that there exists a strong relationship between the two tasks.  ...  In this article, we describe trends, approaches, issues, data sets, and evaluation metrics in intent classification and slot filling.  ...  surrounding the slots. Low-resource data sets.  ... 
arXiv:2101.08091v3 fatcat:ai6w2imilrfupf4m5fm2rjtzxi

Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey [article]

Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, Erik Cambria
2022 arXiv   pre-print
We comprehensively review state-of-the-art research outcomes in dialogue systems and analyze them from two angles: model type and system type.  ...  From the angle of system type, we discuss task-oriented and open-domain dialogue systems as two streams of research, providing insight into related hot topics.  ...  Acknowledgements This research/project is supported by A*STAR under its Industry Alignment Fund (LOA Award I1901E0046).  ... 
arXiv:2105.04387v5 fatcat:yd3gqg45rjgzxbiwfdlcvf3pye

Few-Shot Semantic Parsing for New Predicates [article]

Zhuang Li, Lizhen Qu, Shuo Huang, Gholamreza Haffari
2021 arXiv   pre-print
As a result, our method consistently outperforms all the baselines in both one and two-shot settings.  ...  In this work, we investigate the problems of semantic parsing in a few-shot learning setting. In this setting, we are provided with utterance-logical form pairs per new predicate.  ...  The initialization of those embeddings will be explained in the following section. Slot Filling A tree node in a semantic tree may contain more than one slot variable due to template normalization.  ... 
arXiv:2101.10708v1 fatcat:mzij6vgxq5eojnnilyucosjw7m

nocaps: novel object captioning at scale [article]

Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, Peter Anderson
2019 arXiv   pre-print
However, if these models are to ever function in the wild, a much larger variety of visual concepts must be learned, ideally from less supervision.  ...  Dubbed 'nocaps', for novel object captioning at scale, our benchmark consists of 166,100 human-generated captions describing 15,100 images from the OpenImages validation and test sets.  ...  Keeping the third criterion intact in the nocaps setting would suppress such region proposals and result in less visual grounding, which is not desirable for NBT.  ... 
arXiv:1812.08658v3 fatcat:3zgog46y7nblxazbx4swdpigay

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey [article]

Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth
2021 arXiv   pre-print
We conclude with discussions on limitations and suggested directions for future research.  ...  We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches.  ...  Acknowledgments We would like to thank Paul Cummer for his insightful comments on this work.  ... 
arXiv:2111.01243v1 fatcat:4xfjkkby2bfnhdrhmrdlliy76m

A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios [article]

Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow
2021 arXiv   pre-print
A goal of our survey is to explain how these methods differ in their requirements as understanding them is essential for choosing a technique suited for a specific low-resource setting.  ...  As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings.  ...  A two-stage model for low resource table-to-text generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.  ... 
arXiv:2010.12309v3 fatcat:26dwmlkmn5auha2ob2qdlrvla4

Pretrained Transformers for Text Ranking: BERT and Beyond

Andrew Yates, Rodrigo Nogueira, Jimmy Lin
2021 Proceedings of the 14th ACM International Conference on Web Search and Data Mining  
In the context of text ranking, these models produce high quality results across many domains, tasks, and settings.  ...  The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query for a particular task.  ...  However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this survey also attempts to prognosticate where the  ... 
doi:10.1145/3437963.3441667 fatcat:6teqmlndtrgfvk5mneq5l7ecvq

Natural Language Processing Advancements By Deep Learning: A Survey [article]

Amirsina Torfi, Rouzbeh A. Shirvani, Yaser Keneshloo, Nader Tavaf, Edward A. Fox
2021 arXiv   pre-print
Recent developments in computational power and the advent of large amounts of linguistic data have heightened the need and demand for automating semantic analysis using data-driven approaches.  ...  Natural Language Processing (NLP) helps empower intelligent machines by enhancing a better understanding of the human language for linguistic-based human-computer communication.  ...  The main pipeline in NLU is to classify the user query domain and user intent, and fill a set of slots to create a semantic frame.  ... 
arXiv:2003.01200v4 fatcat:riw6vvl24nfvboy56v2zfcidpu

Extracting and Learning a Dependency-Enhanced Type Lexicon for Dutch [article]

Konstantinos Kogkalidis
2019 arXiv   pre-print
In order to overcome difficulties arising as a result of that variability, the thesis explores and expands upon a type grammar based on Multiplicative Intuitionistic Linear Logic, agnostic to word order  ...  Two experiments are run on the resulting grammar instantiation. The first pertains to the learnability of the type-assignment process by a neural architecture.  ...  Considering the above, we refrain from pretraining the decoder in the current setting.  ... 
arXiv:1909.02955v2 fatcat:qjdwpjelcjgdnewx5wy2xofii4

On the Opportunities and Risks of Foundation Models [article]

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch (+102 others)
2022 arXiv   pre-print
Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization  ...  AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.  ...  Foundation Models (CRFM), a center at Stanford University borne out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).  ... 
arXiv:2108.07258v3 fatcat:kohwrwk2ybf7fd7wsuz2gp65ki

Bridge to Target Domain by Prototypical Contrastive Learning and Label Confusion: Re-explore Zero-Shot Learning for Slot Filling

Liwen Wang, Xuefeng Li, Jiachi Liu, Keqing He, Yuanmeng Yan, Weiran Xu
2021 Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing   unpublished
A two-stage model for slot filling in low-resource settings: Domain-agnostic non-slot reduction and pretrained contextual embeddings. In Proceedings of SustaiNLP.  ...  pages 753–757, New Orleans, Louisiana. Association for Computational Linguistics.  ...  Keqing He, Yuanmeng Yan, Si hong Liu, Z.  ... 
doi:10.18653/v1/2021.emnlp-main.746 fatcat:i6izsrbwmfh4pocyeawkz2jsa4

Proceedings of the BioCreative V.5 Challenge Evaluation Workshop

Martin Krallinger, Alfonso Valencia
2022 Zenodo  
The Spanish National Bioinformatics Institute (INB) unit at the Spanish National Cancer Research Centre (CNIO) is a member of the INB, PRB2-ISCIII and is supported by grant PT13/0001/0030, of the PE I+  ...  Acknowledgment We acknowledge the OpenMinted (654021) and the ELIXIR-EXCELERATE (676559) H2020 projects, and the Encomienda MINETAD-CNIO as part of the Plan for the Advancement of Language Technology for  ...  word embeddings trained with the word2vec tool as pretrained word embeddings.  ... 
doi:10.5281/zenodo.6519885 fatcat:gzzr6ogkwvfe3eglv6anrzt5s4

No Language Left Behind: Scaling Human-Centered Machine Translation [article]

NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun (+27 others)
2022 arXiv   pre-print
In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers.  ...  Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages.  ...  We thank Javier Ferrando and Carlos Escolano for their invaluable help in using the ALTI+ method. We thank Brian O'Horo and Justine Kao for their insights and guidance.  ... 
arXiv:2207.04672v2 fatcat:gsbt3imt4bgodpmubpaq53onnm

Knowledge Augmented Machine Learning with Applications in Autonomous Driving: A Survey [article]

Julian Wörmann, Daniel Bogdoll, Etienne Bührle, Han Chen, Evaristus Fuh Chuo, Kostadin Cvejoski, Ludger van Elst, Tobias Gleißner, Philip Gottschall, Stefan Griesche, Christian Hellert, Christian Hesels (+34 others)
2022 arXiv   pre-print
As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge.  ...  The existence of representative datasets is a prerequisite of many successful artificial intelligence and machine learning models.  ...  Generally, the first stage in these two-stage object detection models consists of a Region Proposal Network (RPN), while in the second stage the candidate region proposals are classified based on the feature  ... 
arXiv:2205.04712v1 fatcat:u2bgxr2ctnfdjcdbruzrtjwot4