34,251 Hits in 7.9 sec

Sample Efficient Linear Meta-Learning by Alternating Minimization [article]

Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
2021 arXiv   pre-print
In this work, we study a simple alternating minimization method (MLLAM), which alternately learns the low-dimensional subspace and the regressors.  ...  Meta-learning of linear regression tasks, where the regressors lie in a low-dimensional subspace, is an extensively-studied fundamental problem in this domain.  ...  Conclusion In this paper, we analyzed an alternating minimization method for the problem of linear meta-learning, a simple but canonical problem in meta-learning.  ... 
arXiv:2105.08306v1 fatcat:z3nycpqciven3kw3za4osq5rc4
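To make the alternating scheme concrete, here is a minimal numpy sketch of alternating minimization for linear meta-learning, where each task's regressor lies in a shared low-dimensional subspace. This illustrates the general scheme only, not the paper's exact MLLAM updates; the function name and the plain least-squares steps are assumptions.

```python
import numpy as np

def alt_min_meta(Xs, ys, k, n_iters=50, seed=0):
    """Alternating minimization for linear meta-learning (sketch).

    Each task t has data (Xs[t], ys[t]) and regressor beta_t = U @ w_t,
    where U is a shared d x k low-dimensional subspace.
    """
    rng = np.random.default_rng(seed)
    d = Xs[0].shape[1]
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))      # random orthonormal init
    for _ in range(n_iters):
        # Fix U: solve each task's k-dimensional regressor by least squares.
        ws = [np.linalg.lstsq(X @ U, y, rcond=None)[0] for X, y in zip(Xs, ys)]
        # Fix the w_t's: update U via one stacked least-squares solve, using
        # X U w = (w^T kron X) vec(U) with column-major vec.
        A = np.vstack([np.kron(w[None, :], X) for X, w in zip(Xs, ws)])
        b = np.concatenate(ys)
        vecU = np.linalg.lstsq(A, b, rcond=None)[0]
        U, _ = np.linalg.qr(vecU.reshape(d, k, order="F"))  # re-orthonormalize
    return U
```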

Meta-Learning with Differentiable Convex Optimization [article]

Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto
2019 arXiv   pre-print
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers.  ...  To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization  ...  Meta-learning approaches for few-shot learning aim to minimize the generalization error across a distribution of tasks sampled from a task distribution.  ... 
arXiv:1904.03758v2 fatcat:lgpeaofwhnf7bjdf6h2d7qpnd4
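The key property the abstract points to is that a convex base learner's solution can be differentiated through. A minimal sketch, using closed-form ridge regression as a stand-in for the paper's SVM objective (the names and the ridge choice are assumptions): the dual form only needs an n x n solve, where n is the support-set size, and in an autodiff framework the gradient would flow through that solve back to the embeddings.

```python
import numpy as np

def ridge_head(Z_support, Y_onehot, lam=1.0):
    """Closed-form ridge-regression base learner (illustrative sketch).

    Dual form: W = Z^T (Z Z^T + lam I)^{-1} Y, which inverts only an
    n x n Gram matrix (n = number of support examples), not d x d.
    """
    n = Z_support.shape[0]
    G = Z_support @ Z_support.T                     # n x n Gram matrix
    alpha = np.linalg.solve(G + lam * np.eye(n), Y_onehot)
    return Z_support.T @ alpha                      # d x C weight matrix

def predict(W, Z_query):
    return Z_query @ W                              # query logits
```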

Few-Shot Classification By Few-Iteration Meta-Learning [article]

Ardhendu Shekhar Tripathi, Martin Danelljan, Luc Van Gool, Radu Timofte
2021 arXiv   pre-print
By employing an efficient initialization module and a Steepest Descent based optimization algorithm, our base learner predicts a powerful classifier within only a few iterations.  ...  The latter learns a linear classifier during inference through an unrolled optimization procedure.  ...  To ensure practical meta-learning, the adaptation process must be both efficient and differentiable.  ... 
arXiv:2010.00511v2 fatcat:62ahbj4pezhn3o3wvhnluujnle
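As a rough illustration of an unrolled, differentiable base learner of the kind this abstract describes, the sketch below runs a few steepest-descent steps with exact line search on a ridge objective. The objective and step rule are assumptions for illustration, not the paper's exact module.

```python
import numpy as np

def unrolled_steepest_descent(Z, y, lam=0.1, n_steps=3):
    """Unrolled steepest-descent base learner (sketch).

    Minimizes ||Z w - y||^2 + lam ||w||^2 (y: vector of +/-1 labels) for a
    few iterations with the exact line-search step size, so the whole loop
    is a short, differentiable computation graph.
    """
    d = Z.shape[1]
    A = Z.T @ Z + lam * np.eye(d)                   # quadratic curvature matrix
    b = Z.T @ y
    w = np.zeros_like(b)
    for _ in range(n_steps):
        r = b - A @ w                               # negative gradient (residual)
        step = (r @ r) / (r @ (A @ r))              # exact line search
        w = w + step * r
    return w
```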

Meta-Learning With Differentiable Convex Optimization

Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers.  ...  To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization  ...  Meta-learning approaches for few-shot learning aim to minimize the generalization error across a distribution of tasks sampled from a task distribution.  ... 
doi:10.1109/cvpr.2019.01091 dblp:conf/cvpr/LeeMRS19 fatcat:gcrmqlejsrdgdnrnrtp5zhcq6a

Meta-learning based Alternating Minimization Algorithm for Non-convex Optimization [article]

Jingyuan Xia, Shengxi Li, Jun-Jie Huang, Imad Jaimoukha, Deniz Gunduz
2021 arXiv   pre-print
To tackle these issues, we propose a meta-learning based alternating minimization (MLAM) method, which aims to minimize part of the global losses over iterations instead of carrying out the minimization on  ...  In this paper, we propose a novel solution for non-convex problems of multiple variables, especially for those typically solved by an alternating minimization (AM) strategy that splits the original optimization  ...  The overall structure of meta-learning for the alternating minimization algorithm. The parameters of the applied meta network are denoted by θ.  ... 
arXiv:2009.04899v5 fatcat:xoxjvlws6vcaxcwpm4moo7p6c4
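The loop structure the abstract describes can be sketched as a skeleton; `meta_net`, `grad_u`, `grad_v`, and `loss` are hypothetical callables standing in for the paper's components, and the trajectory-level loss is the quantity a meta-optimizer would train `meta_net` against.

```python
def mlam_loop(u0, v0, loss, grad_u, grad_v, meta_net, n_outer=20):
    """Skeleton of a meta-learning based alternating-minimization loop
    (a sketch of the structure, not the authors' implementation).

    Instead of fully minimizing over each variable block, each block takes
    a learned partial update, and the meta network is trained (elsewhere)
    so that the global loss accumulated over the trajectory decreases.
    """
    u, v = u0, v0
    trajectory_loss = 0.0
    for _ in range(n_outer):
        u = meta_net(u, grad_u(u, v))   # learned partial update for block u
        v = meta_net(v, grad_v(u, v))   # learned partial update for block v
        trajectory_loss += loss(u, v)   # global loss over iterations
    return u, v, trajectory_loss       # trajectory_loss is the meta-objective
```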

Meta-Learning Priors for Efficient Online Bayesian Regression [article]

James Harrison, Apoorva Sharma, Marco Pavone
2018 arXiv   pre-print
These features are produced by a neural network, which is trained via a meta-learning (or "learning-to-learn") approach.  ...  Furthermore, by operating in the weight space, it substantially reduces sample complexity.  ...  Acknowledgments This work was supported by the Office of Naval Research YIP program (Grant N00014-17-1-2433), by DARPA under the Assured Autonomy program, and by the Toyota Research Institute (TRI).  ... 
arXiv:1807.08912v2 fatcat:l2m2oohlyzdrxbcsk7a2t5piii
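A minimal sketch of Bayesian linear regression on top of meta-learned features: the class below performs conjugate online updates in weight space. In the paper's setting the prior and the feature map come from meta-training; here they are set by hand, which is an assumption for illustration.

```python
import numpy as np

class BayesianLastLayer:
    """Online Bayesian linear regression on fixed features phi(x) (sketch)."""

    def __init__(self, dim, noise_var=0.1, prior_precision=1.0):
        self.Lambda = prior_precision * np.eye(dim)   # posterior precision
        self.q = np.zeros(dim)                        # precision-weighted mean
        self.noise_var = noise_var

    def update(self, phi, y):
        # Rank-one conjugate update after observing (phi(x), y).
        self.Lambda += np.outer(phi, phi) / self.noise_var
        self.q += phi * y / self.noise_var

    def predict(self, phi):
        mean_w = np.linalg.solve(self.Lambda, self.q)
        var = phi @ np.linalg.solve(self.Lambda, phi) + self.noise_var
        return phi @ mean_w, var                      # predictive mean, variance
```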

Towards Better Understanding Meta-learning Methods through Multi-task Representation Learning Theory [article]

Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Yevhenii Zotkin, Amaury Habrard
2021 arXiv   pre-print
We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework.  ...  Finally, we use the derived insights to improve the generalization capacity of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies  ...  Such a requirement is reflected by the fact that the excess target risk is bounded by a quantity that involves the number of samples in both the meta-train and meta-test sets and the number of available  ... 
arXiv:2010.01992v2 fatcat:4ivssruqgfcftdo6h5uyab7sji
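As one plausible instantiation of a spectral-based regularizer (an assumption, not necessarily the paper's exact term), the sketch below penalizes the condition number of the matrix whose rows are per-task linear predictors, encouraging tasks to cover the representation space evenly.

```python
import numpy as np

def spectral_penalty(W_tasks):
    """Spectral-based regularizer on the matrix of per-task linear
    predictors (rows = tasks); penalizes an uneven singular spectrum.
    """
    s = np.linalg.svd(W_tasks, compute_uv=False)   # singular values, descending
    return s[0] / max(s[-1], 1e-12)                # condition number
```

In an autodiff framework the same quantity would be computed on the live weight matrix and added to the meta-training loss with a small coefficient.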

Continual Learning with Deep Artificial Neurons [article]

Blake Camp, Jaya Krishna Mandivarapu, Rolando Estrada
2020 arXiv   pre-print
We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network, which facilitates a meta-objective during deployment.  ...  Here, we isolate continual learning as our meta-objective, and we show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting  ...  Figure 7: Vectorizing the connections between pairs of neurons has a clear impact on the efficiency with which the network learns to minimize the Memory Loss during Meta-Training.  ... 
arXiv:2011.07035v1 fatcat:zln4gdiuvra63hmjc3isut3awq
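A toy sketch of the shared-phenotype idea: every neuron in the network is the same tiny sub-network, parameterized by one shared vector. The sizes and the tanh nonlinearity below are assumptions, not the paper's specification.

```python
import numpy as np

def dan_neuron(inputs, phenotype, hidden=4):
    """One 'deep artificial neuron' as a tiny two-layer MLP whose weights
    all come from a single shared phenotype vector (illustrative sketch).
    """
    k = inputs.shape[0]                            # fan-in of this neuron
    W1 = phenotype[: k * hidden].reshape(hidden, k)
    W2 = phenotype[k * hidden : k * hidden + hidden]
    return np.tanh(W2 @ np.tanh(W1 @ inputs))      # scalar neuron output
```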

Learning a Weighted Meta-Sample Based Parameter Free Sparse Representation Classification for Microarray Data

Bo Liao, Yan Jiang, Guanqun Yuan, Wen Zhu, Lijun Cai, Zhi Cao, Enrique Hernandez-Lemus
2014 PLoS ONE  
Second, sparse representation coefficients can be obtained by ℓ1 regularization of underdetermined linear equations. Thus, data-dependent sparsity can be adaptively tuned.  ...  First, we extract the weighted meta-samples for each sub class from raw data, and the rationality of the weighting strategy is proven mathematically.  ...  To capture more alternative information from gene expression data, the so-called meta-samples are proposed by [8] [9] [10] [11] .  ... 
doi:10.1371/journal.pone.0104314 pmid:25115965 pmcid:PMC4130588 fatcat:5me3awlxubae5hrfcdkx2quzbi
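A sketch of the sparse-representation classification step the abstract refers to, using scikit-learn's Lasso for the ℓ1-regularized coding. Note the paper's method tunes sparsity adaptively and is parameter-free, whereas this sketch fixes `alpha` for simplicity.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, class_ids, x, alpha=0.01):
    """Sparse-representation classification (sketch).

    D's columns are the (weighted meta-)samples, class_ids[j] is the class
    of column j. The test sample x is coded with an l1 penalty, then
    assigned to the class whose columns best reconstruct it.
    """
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, x).coef_
    residuals = {}
    for c in set(class_ids):
        mask = np.array([cid == c for cid in class_ids])
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)       # smallest reconstruction error
```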

Energy Aware Wireless Sensor Network Through Meta Learning Prediction Technique

Niyati Patel, Darshan Patel
2016 International Journal of Computer Applications  
Here, we propose meta learning prediction technique for energy awareness in wireless sensor network.  ...  In this paper, we highlight the recent research efforts like energy efficient routing protocols, dynamic power management, behavior model of wireless sensor node using metadata for saving the allocated  ...  An alternative way of obtaining quick performance estimates is to run the algorithms whose performance we wish to estimate on a sample of the data, obtaining the so-called subsampling landmarkers [11]  ... 
doi:10.5120/ijca2016908817 fatcat:ftlvthrvkbgzxolxw623j45jaq
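The subsampling-landmarker idea cited in the snippet is easy to sketch: estimate an algorithm's quality by fitting it on a small random subsample rather than the full data. `fit_fn` and `score_fn` are hypothetical callables.

```python
import numpy as np

def subsampling_landmarker(fit_fn, score_fn, X, y, frac=0.1, seed=0):
    """Quick performance estimate ('subsampling landmarker', sketch):
    fit the candidate algorithm on a small random subsample and score it,
    instead of paying for a fit on the full data.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(1, int(frac * len(X))), replace=False)
    model = fit_fn(X[idx], y[idx])                 # cheap fit on the subsample
    return score_fn(model, X, y)                   # landmark score as meta-feature
```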

Resource Management and Model Personalization for Federated Learning over Wireless Edge Networks

Ravikumar Balakrishnan, Mustafa Akdeniz, Sagar Dhakal, Arjun Anand, Ariela Zeira, Nageen Himayat
2021 Journal of Sensor and Actuator Networks  
We also develop a federated meta-learning solution, based on task similarity, that serves as a sample efficient initialization for federated learning, as well as improves model personalization and generalization  ...  Client and Internet of Things devices are increasingly equipped with the ability to sense, process, and communicate data with high efficiency.  ...  Alternatively, clients can apply a (linear or non-linear) transformation to map their data to a different dimension and share the distribution over their examples with the server.  ... 
doi:10.3390/jsan10010017 fatcat:2j635xalqjgenkgxny5k7fvp7m

Model-based Adversarial Meta-Reinforcement Learning [article]

Zichuan Lin, Garrett Thomas, Guangwen Yang, Tengyu Ma
2021 arXiv   pre-print
Meta-reinforcement learning (meta-RL) aims to learn from multiple training tasks the ability to adapt efficiently to unseen test tasks.  ...  We propose a minimax objective and optimize it by alternating between learning the dynamics model on a fixed task and finding the adversarial task for the current model -- the task for which the policy  ...  The work is also in part supported by SDSI and SAIL.  ... 
arXiv:2006.08875v2 fatcat:kutnxcgprffhzo222dl7fl2lv4
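The minimax alternation in this abstract can be written as a short skeleton; `fit_dynamics`, `plan_policy`, and `evaluate` are hypothetical callables standing in for the paper's components.

```python
def adversarial_meta_rl(tasks, fit_dynamics, plan_policy, evaluate, n_rounds=10):
    """Skeleton of the minimax alternation (sketch, not the authors' code).

    Alternates between (1) fitting the dynamics model on the current fixed
    task and (2) picking the adversarial task on which the model-induced
    policy performs worst.
    """
    task = tasks[0]
    model = None
    for _ in range(n_rounds):
        model = fit_dynamics(task, warm_start=model)   # learn dynamics on fixed task
        policy = plan_policy(model)                    # policy from current model
        # Adversarial step: the task where this policy's return is lowest.
        task = min(tasks, key=lambda t: evaluate(policy, t))
    return model
```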

On Hyper-parameter Tuning for Stochastic Optimization Algorithms [article]

Haotian Zhang, Jianyong Sun, Zongben Xu
2020 arXiv   pre-print
This paper proposes the first-ever algorithmic framework for tuning hyper-parameters of stochastic optimization algorithm based on reinforcement learning.  ...  Hyper-parameters impose significant influences on the performance of stochastic optimization algorithms, such as evolutionary algorithms (EAs) and meta-heuristics.  ...  Given a data set D, a model M(w) and a loss function L(·, ·), MAML learns the initialization w* by minimizing L(M(w − α∇L(M(w), D)), D).  ... 
arXiv:2003.02038v2 fatcat:ogioxwsojrdnrjzz2g5tyucveu
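For a quadratic per-task loss the MAML update quoted above can be written out exactly, since the inner-step Hessian is constant. A sketch for linear-regression tasks follows; the regression setup is an assumption chosen so the meta-gradient has a closed form.

```python
import numpy as np

def maml_step(w, tasks, alpha=0.01, beta=0.001):
    """One MAML meta-update for linear-regression tasks (sketch).

    Each task is (X, y) with loss L(w) = ||X w - y||^2 / n, matching the
    inner update w' = w - alpha * grad L(w) from the formula above.
    """
    grad = lambda w, X, y: 2 * X.T @ (X @ w - y) / len(y)
    meta_grad = np.zeros_like(w)
    for X, y in tasks:
        w_adapted = w - alpha * grad(w, X, y)          # inner adaptation step
        # Outer gradient through the inner step: (I - alpha * H) g_adapted,
        # where H = 2 X^T X / n is the (constant) Hessian of the task loss.
        H = 2 * X.T @ X / len(y)
        meta_grad += (np.eye(len(w)) - alpha * H) @ grad(w_adapted, X, y)
    return w - beta * meta_grad / len(tasks)           # meta (outer) update
```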

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [article]

Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein
2020 arXiv   pre-print
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models which are trained classically.  ...  In doing so, we introduce and verify several hypotheses for why meta-learned models perform better.  ...  Acknowledgements This work was supported by the ONR MURI program, the DARPA YFA program, DARPA GARD, the JHU HLTCOE, and the National Science Foundation DMS division.  ... 
arXiv:2002.06753v3 fatcat:iis3k64tnjbfbjpi6oikkjzz4a

Curious Meta-Controller: Adaptive Alternation between Model-Based and Model-Free Control in Deep Reinforcement Learning [article]

Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter
2019 arXiv   pre-print
We propose to combine the benefits of the two approaches by presenting an integrated approach called Curious Meta-Controller.  ...  We demonstrate that our approach can significantly improve the sample efficiency and achieve near-optimal performance on learning robotic reaching and grasping tasks from raw-pixel input in both dense  ...  ACKNOWLEDGEMENT This work was supported by the DAAD German Academic Exchange Service (Funding Programme No. 57214224) with partial support from the German Research Foundation DFG under project CML (TRR  ... 
arXiv:1905.01718v1 fatcat:24y65ncfcffrrmmu53f2vkcbvm
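A toy sketch of the adaptive alternation: track the world model's prediction error and prefer the model-based controller only while that error is still improving. The windowed learning-progress rule below is an assumption, not the paper's exact criterion.

```python
import numpy as np

class CuriousMetaController:
    """Switch between model-based and model-free control based on the
    learning progress of the world model (illustrative sketch).
    """
    def __init__(self, window=20):
        self.errors = []
        self.window = window

    def record(self, prediction_error):
        self.errors.append(prediction_error)

    def use_model_based(self):
        if len(self.errors) < 2 * self.window:
            return True                      # lean on the model early on
        recent = np.mean(self.errors[-self.window:])
        earlier = np.mean(self.errors[-2 * self.window : -self.window])
        return (earlier - recent) > 0.0      # positive learning progress
```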
Showing results 1 — 15 out of 34,251 results