174 Hits in 6.0 sec

An Ensemble of Epoch-wise Empirical Bayes for Few-shot Learning [article]

Yaoyao Liu, Bernt Schiele, Qianru Sun
2020 arXiv   pre-print
In this paper, we propose to meta-learn the ensemble of epoch-wise empirical Bayes models (E3BM) to achieve robust predictions.  ...  "Empirical" means that the hyperparameters, e.g., those used for learning and ensembling the epoch-wise models, are generated by hyperprior learners conditional on task-specific data.  ...
arXiv:1904.08479v6 fatcat:bdvnjuj3jva6potmga6ksjb2vi
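
The mechanism the snippet describes — combining predictions from models snapshotted at each training epoch, with combination weights generated per task — can be sketched in a few lines. A minimal PyTorch sketch; `per_epoch_logits` and `combine_weights` are illustrative names, and in E3BM the weights would come from a hyperprior learner rather than being a free parameter:

    import torch

    def epoch_wise_ensemble(per_epoch_logits, combine_weights):
        """Combine predictions from models snapshotted at each epoch.

        per_epoch_logits: list of [batch, classes] tensors, one per epoch.
        combine_weights:  [n_epochs] tensor of ensembling scores.
        """
        stacked = torch.stack(per_epoch_logits)         # [E, B, C]
        w = torch.softmax(combine_weights, dim=0)       # normalize over epochs
        return (w[:, None, None] * stacked).sum(dim=0)  # [B, C]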

Bayesian Model-Agnostic Meta-Learning [article]

Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, Sungjin Ahn
2018 arXiv   pre-print
Learning to infer Bayesian posterior from a few-shot dataset is an important step towards robust meta-learning due to the model uncertainty inherent in the problem.  ...  Experiment results show the accuracy and robustness of the proposed method in various tasks: sinusoidal regression, image classification, active learning, and reinforcement learning.  ...  Acknowledgments JY thanks SAP and Kakao Brain for their support. TK thanks NSERC, MILA and Kakao Brain for their support.  ... 
arXiv:1806.03836v4 fatcat:qe2scrfmzzahfbgfpn5rtwugzq
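
The robustness here comes from predicting with a set of posterior samples ("particles") rather than a single adapted model; BMAML adapts the particles with Stein variational gradient descent, which is omitted below. A hedged sketch of the model-averaging step only, with illustrative names:

    import torch

    def posterior_predictive(particles, x):
        """particles: list of models adapted from the approximate posterior."""
        probs = [torch.softmax(model(x), dim=-1) for model in particles]
        return torch.stack(probs).mean(dim=0)  # Bayesian model average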

Interventional Few-Shot Learning [article]

Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
2020 arXiv   pre-print
We uncover an ever-overlooked deficiency in the prevailing Few-Shot Learning (FSL) methods: the pre-trained knowledge is indeed a confounder that limits the performance.  ...  Thanks to it, we propose a novel FSL paradigm: Interventional Few-Shot Learning (IFSL).  ...  We also want to thank Alibaba City Brain Group for the donations of GPUs. Broader Impact The proposed method aims to improve the Few-Shot Learning task.  ... 
arXiv:2009.13000v2 fatcat:atfbrjpz3zhmzj2aow7opnv3su
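
Treating the pre-trained knowledge as a confounder suggests replacing P(y | x) with the backdoor-adjusted P(y | do(x)). A minimal numerical sketch of that adjustment, assuming the confounder has already been stratified (e.g., into clusters of pre-trained features); the function name and inputs are illustrative, not the paper's code:

    import numpy as np

    def backdoor_predict(p_y_given_x_and_d, p_d):
        """Backdoor adjustment: P(y | do(x)) = sum_d P(y | x, d) P(d).

        p_y_given_x_and_d: [n_strata, n_classes]; p_d: [n_strata] prior.
        """
        return np.asarray(p_d) @ np.asarray(p_y_given_x_and_d)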

Massively Multilingual Transfer for NER [article]

Afshin Rahimi, Yuan Li, Trevor Cohn
2019 arXiv   pre-print
We propose two techniques for modulating the transfer, suitable for zero-shot or few-shot learning, respectively.  ...  Evaluating on named entity recognition, we show that our techniques are much more effective than strong baselines, including standard ensembling, and our unsupervised method rivals oracle selection of  ...  epochs of training.  ... 
arXiv:1902.00193v4 fatcat:y2rjryqrjnbs3asksdi3t2ardy
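
The "standard ensembling" baseline the snippet mentions amounts to mixing the tag distributions of many source-language models; the paper's contribution is modulating that mixture with per-source reliabilities. A sketch of the weighted mixture, assuming the weights are given (the paper estimates them with or without target annotation); names are illustrative:

    import numpy as np

    def ensemble_tag_probs(source_probs, weights):
        """source_probs: [n_sources, n_tokens, n_tags] per-model tag
        distributions; weights: [n_sources] nonnegative reliabilities."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return np.einsum("s,stk->tk", w, np.asarray(source_probs))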

MetaNODE: Prototype Optimization as a Neural ODE for Few-Shot Learning [article]

Baoquan Zhang, Xutao Li, Shanshan Feng, Yunming Ye, Rui Ye
2021 arXiv   pre-print
Few-Shot Learning (FSL) is a challenging task: how can novel classes be recognized from only a few examples?  ...  A gradient flow inference network is carefully designed to learn to estimate the continuous gradient flow for prototype dynamics.  ...
arXiv:2103.14341v2 fatcat:a6xxy2gdtbgidhtdfbqd3uskum
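
Casting prototype optimization as a neural ODE means the refined prototype is the solution of dp/dt = f(p, t), with f given by the learned gradient flow network. A fixed-step Euler sketch; `flow_net` is a placeholder for that network, and the solver choice is illustrative (a proper ODE solver could be substituted):

    def refine_prototypes(protos, flow_net, t0=0.0, t1=1.0, steps=10):
        """Euler integration of dp/dt = flow_net(p, t) from t0 to t1.

        protos: tensor-like class prototypes; flow_net: callable (p, t) -> dp/dt.
        """
        dt = (t1 - t0) / steps
        t = t0
        for _ in range(steps):
            protos = protos + dt * flow_net(protos, t)
            t += dt
        return protos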

Learning to Discover Novel Visual Categories via Deep Transfer Clustering [article]

Kai Han, Andrea Vedaldi, Andrew Zisserman
2019 arXiv   pre-print
The first contribution is to extend Deep Embedded Clustering to a transfer learning setting; we also improve the algorithm by introducing a representation bottleneck, temporal ensembling, and consistency  ...  We consider the problem of discovering novel object categories in an image collection. While these images are unlabelled, we also assume prior knowledge of related but different image classes.  ...  We are grateful to EPSRC Programme Grant Seebibyte EP/M013774/1 and ERC StG IDIU-638009 for support.  ... 
arXiv:1908.09884v1 fatcat:wxluchmvizg2bcvscz25meuscu
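
"Temporal ensembling" here is in the spirit of Laine and Aila's construction: keep an exponential moving average of each sample's predictions across epochs and use the bias-corrected average as a consistency target. A minimal sketch with illustrative names, assuming array-like prediction matrices:

    def update_ensemble(ensemble_preds, current_preds, epoch, alpha=0.6):
        """EMA of per-sample predictions, bias-corrected for early epochs.

        ensemble_preds, current_preds: arrays of shape [n_samples, n_classes].
        """
        ensemble_preds = alpha * ensemble_preds + (1 - alpha) * current_preds
        targets = ensemble_preds / (1 - alpha ** (epoch + 1))
        return ensemble_preds, targets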

Bayesian Meta-Prior Learning Using Empirical Bayes [article]

Sareh Nabi, Houssam Nassif, Joseph Hong, Hamed Mamani, Guido Imbens
2021 arXiv   pre-print
Our method learns empirical meta-priors from the data itself and uses them to decouple the learning rates of first-order and second-order features (or any other given feature grouping) in a Generalized  ...  Our Empirical Bayes method clamps features in each group together and uses the deployed model's observed data to empirically compute a hierarchical prior in hindsight.  ...  Acknowledgments The authors thank Gregory Duncan, Sham Kakade, Milos Curcic, Lalit Jain, Omid Rafieian, Devin Didericksen, Yi Liu, Karthik Mohan, Emily Ikeda-Flowers, Calvin Kwok, Zachary Austin, and Jason Yang for  ... 
arXiv:2002.01129v3 fatcat:namlexrtdbbdtazytqwd65pewm
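
The practical effect of the learned meta-priors is to decouple learning rates across feature groups (e.g., first-order vs. second-order features). A toy sketch of one grouped update step, assuming group assignments and per-group rates have already been derived; all names are illustrative:

    import numpy as np

    def grouped_sgd_step(w, grad, group_ids, group_lrs):
        """w, grad: [d] arrays; group_ids: [d] ints indexing group_lrs;
        group_lrs: one learning rate per feature group."""
        return w - np.asarray(group_lrs)[np.asarray(group_ids)] * grad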

Multidimensional Belief Quantification for Label-Efficient Meta-Learning [article]

Deep Pandey, Qi Yu
2022 arXiv   pre-print
Optimization-based meta-learning offers a promising direction for few-shot learning that is essential for many real-world computer vision applications.  ...  However, learning from few samples introduces uncertainty, and quantifying model confidence for few-shot predictions is essential for many critical domains.  ...  We would also like to thank the anonymous reviewers for their constructive comments.  ... 
arXiv:2203.12768v1 fatcat:spzkdqejhfbwnijstgyachbybe
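
Quantifying confidence for few-shot predictions is commonly done in the evidential (subjective logic) framework, where nonnegative class evidence yields both per-class beliefs and a vacuity (uncertainty) mass. A hedged sketch of that standard construction, not necessarily the paper's exact multidimensional variant:

    import numpy as np

    def belief_and_uncertainty(evidence):
        """evidence: [n_classes] nonnegative; Dirichlet alpha = evidence + 1.

        Returns (belief per class, vacuity): b_k = e_k / S, u = K / S,
        where S = sum(alpha) is the Dirichlet strength.
        """
        e = np.asarray(evidence, dtype=float)
        strength = e.sum() + e.size
        return e / strength, e.size / strength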

Seeing into Darkness: Scotopic Visual Recognition [article]

Bo Chen, Pietro Perona
2016 arXiv   pre-print
When photons are few and far between, the concept of 'image' breaks down, and it is best to consider directly the flow of photons.  ...  Here we develop a framework that allows a machine to classify objects with as few photons as possible, while maintaining the error rate below an acceptable threshold.  ...  We found empirically that a learning rate of 0.004 works best for WaldNet, and 0.001 works best for the other architectures.  ...
arXiv:1610.00405v1 fatcat:lfyioirfnngwnm3k2mjh4sw47e
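
"As few photons as possible, below an error threshold" is the sequential-testing trade-off behind the paper's WaldNet (named after Wald's sequential probability ratio test): accumulate evidence photon by photon and stop once one class is sufficiently ahead. A toy sketch with an illustrative stopping rule, not the paper's model:

    import numpy as np

    def sequential_classify(photon_logliks, threshold=5.0):
        """photon_logliks: iterable of [n_classes] log-likelihood arrays,
        one per observed photon; stop when the top-two margin clears
        the threshold."""
        total = None
        for ll in photon_logliks:
            ll = np.asarray(ll, dtype=float)
            total = ll if total is None else total + ll
            top2 = np.sort(total)[-2:]
            if top2[1] - top2[0] > threshold:
                break
        return int(np.argmax(total)), total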

Towards Robust Pattern Recognition: A Review [article]

Xu-Yao Zhang, Cheng-Lin Liu, Ching Y. Suen
2020 arXiv   pre-print
Actually, our brain is robust at learning concepts continually and incrementally, in complex, open and changing environments, with different contexts, modalities and tasks, when shown only a few examples  ...  directions for robust pattern recognition.  ...  Therefore, the few-shot [152] or even zero-shot [154] learning abilities of pattern recognition systems are of great value for real applications.  ...
arXiv:2006.06976v1 fatcat:mn35i7bmhngl5hxr3vukdcmmde

Zero-Shot Clinical Acronym Expansion via Latent Meaning Cells [article]

Griffin Adams, Mert Ketenci, Shreyas Bhave, Adler Perotte, Noémie Elhadad
2020 arXiv   pre-print
We evaluate the LMC model on the task of zero-shot clinical acronym expansion across three datasets.  ...  We demonstrate that not only is metadata itself very helpful for the task, but that the LMC inference algorithm provides an additional large benefit.  ...  Acknowledgments We thank Arthur Bražinskas, Rajesh Ranganath, and the reviewers for their constructive, thoughtful feedback.  ... 
arXiv:2010.02010v2 fatcat:as45nhkdhzhnbif4a2ygztw36y

Learning from Multiple Noisy Partial Labelers [article]

Peilin Yu, Tiffany Ding, Stephen H. Bach
2022 arXiv   pre-print
We show how to scale up learning, for example learning on 100k examples in one minute, a 300x speedup compared to a naive implementation.  ...  On these tasks, our framework has accuracy comparable to recent embedding-based zero-shot learning methods, while using only pre-trained attribute detectors.  ...
arXiv:2106.04530v2 fatcat:jzdljaqc5nhatnjix6nhjdkqwy
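
A partial labeler votes for a set of candidate classes (e.g., an attribute detector rules some classes in and others out). The paper fits a probabilistic label model over such votes; the unweighted tally below is only a stand-in to show the data flow, with illustrative names throughout:

    import numpy as np

    def aggregate_partial_votes(votes, n_classes):
        """votes: list of lists of candidate class ids, one per labeler."""
        tally = np.zeros(n_classes)
        for candidates in votes:
            for c in candidates:
                tally[c] += 1.0 / max(len(candidates), 1)  # split the vote
        return int(np.argmax(tally))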

A Comprehensive Review on Summarizing Financial News Using Deep Learning [article]

Saurabh Kamal, Sahil Sharma
2021 arXiv   pre-print
In this research, the embedding techniques used are BoW, TF-IDF, Word2Vec, BERT, GloVe, and FastText; their outputs are then fed to deep learning models such as RNNs and LSTMs.  ...  Natural Language Processing techniques are typically used to deal with such a large amount of data and get valuable information out of it.  ...  Among the machine learning techniques used in this paper, random forest performs best with an accuracy of 88.95%, and Naïve Bayes performs worst with an accuracy of 54.77%.  ...
arXiv:2109.10118v1 fatcat:hjvzbfguvbaxph7iqhliwadhvu
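
The random forest vs. naive Bayes comparison in the snippet is straightforward to reproduce in outline. A hedged scikit-learn sketch on TF-IDF features; the toy texts and any resulting scores are placeholders, not the paper's data or numbers:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["stocks rally on strong earnings", "bonds slip as yields rise"] * 50
    labels = [1, 0] * 50  # toy sentiment labels

    for clf in (RandomForestClassifier(n_estimators=100), MultinomialNB()):
        pipe = make_pipeline(TfidfVectorizer(), clf)
        score = cross_val_score(pipe, texts, labels, cv=5).mean()
        print(type(clf).__name__, round(score, 3))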

Siamese Networks for Large-Scale Author Identification [article]

Chakaveh Saedi, Mark Dras
2021 arXiv   pre-print
Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP.  ...  Deep learning methods, which blur the boundaries between classification-based and similarity-based approaches, are promising in terms of their ability to learn a notion of similarity, but have previously only  ...  For the most part, systems in these tasks use conventional machine learning (i.e. not deep learning): the 2018 winner used an ensemble classifier (Custódio and Paraboni, 2018) and the runner-up a linear  ...
arXiv:1912.10616v3 fatcat:has2zgpd2fawrp6b2yk5hhewqa
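
The siamese idea is a single shared encoder applied to both documents, trained so same-author pairs land close in embedding space. A minimal PyTorch sketch with an illustrative bag-of-words encoder and the standard contrastive loss; none of this is the paper's architecture:

    import torch
    import torch.nn as nn

    class Siamese(nn.Module):
        def __init__(self, vocab_size=5000, dim=64):
            super().__init__()
            # One encoder, shared by both branches.
            self.encoder = nn.Sequential(
                nn.Linear(vocab_size, 128), nn.ReLU(), nn.Linear(128, dim))

        def forward(self, a, b):
            ea, eb = self.encoder(a), self.encoder(b)
            return torch.norm(ea - eb, dim=-1)  # small => likely same author

    def contrastive_loss(dist, same, margin=1.0):
        """same: 1.0 for same-author pairs, 0.0 otherwise."""
        return (same * dist.pow(2)
                + (1 - same) * torch.clamp(margin - dist, min=0).pow(2)).mean()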

An Overview of Neural Network Compression [article]

James O'Neill
2020 arXiv   pre-print
Pushing the state of the art on salient tasks within these domains corresponds to these models becoming larger and more difficult for machine learning practitioners to use, given the increasing memory and storage  ...  Most of the papers discussed are proposed in the context of at least one of these DNN architectures.  ...  Tarvainen and Valpola (2017) find that averaging the model weights of an ensemble at each epoch is more effective than averaging label predictions for semi-supervised learning.  ...
arXiv:2006.03669v2 fatcat:u2p6gvwhobh53hfjxawzclw7fq
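
The weight-averaging finding attributed to Tarvainen and Valpola (2017) is usually realized as an exponential moving average of the student's parameters (the "mean teacher"). A minimal PyTorch sketch of that update; the module and parameter handling are generic, not the survey's code:

    import torch

    @torch.no_grad()
    def ema_update(teacher, student, decay=0.999):
        """Move each teacher parameter toward the student's: an EMA of
        weights, rather than an average of label predictions."""
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(decay).add_(ps, alpha=1 - decay)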
Showing results 1 — 15 out of 174 results