Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition
[article]
2021
arXiv
pre-print
We propose an approach to incremental meta-learning (IML), which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning ...
Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. ...
Conclusions: In this paper, we proposed Episodic Replay Distillation, an approach to incremental few-shot recognition that uses episodic meta-learning over episodes split into cross-task and exemplar sub-episodes ...
arXiv:2111.04993v2
fatcat:5c43uppfxbfybnuxlbcqniva4u
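The episode-mixing idea this abstract describes can be sketched in a few lines. The sketch below assumes class-keyed dicts of examples and an exemplar store for past tasks; the function name and the two-exemplar-class split are illustrative, not taken from the paper.

```python
import random

def sample_mixed_episode(current_data, exemplar_store,
                         n_way=5, k_shot=5, n_old=2):
    """Sample an n-way, k-shot episode that mixes classes from the
    current task with exemplar classes replayed from previous tasks."""
    old_classes = random.sample(list(exemplar_store), n_old)
    new_classes = random.sample(list(current_data), n_way - n_old)
    episode = {c: random.sample(exemplar_store[c], k_shot) for c in old_classes}
    episode.update({c: random.sample(current_data[c], k_shot) for c in new_classes})
    return episode  # maps class label -> k_shot training examples
```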
Reviewing continual learning from the perspective of human-level intelligence
[article]
2021
arXiv
pre-print
Humans' continual learning (CL) ability is closely related to the stability versus plasticity dilemma, which describes how humans achieve ongoing learning capacity while preserving learned information. ...
Analogous to their biological counterparts, "smart" AI agents are supposed to i) remember previously learned information (information retrospection); ii) infer on new information continuously (information prospection ...
Through repetitive episodic training, the learner can gradually generalize over few-shot tasks. This training method has been widely incorporated in incremental few-shot learning. ...
arXiv:2111.11964v1
fatcat:je5lyidbongfxj4v67zxs2a3bi
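A common concrete instance of the episodic training this snippet refers to is the prototypical-network episode: build class prototypes from a support set, classify queries by distance. A minimal PyTorch sketch (function name illustrative):

```python
import torch
import torch.nn.functional as F

def episodic_step(encoder, support_x, support_y, query_x, query_y, n_way):
    """One meta-training episode, prototypical-network style."""
    z_support, z_query = encoder(support_x), encoder(query_x)
    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)])
    logits = -torch.cdist(z_query, prototypes) ** 2  # nearer = higher score
    return F.cross_entropy(logits, query_y)
```

Repeating this step over many randomly sampled episodes is what lets the learner gradually generalize across few-shot tasks.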
Recent Advances of Continual Learning in Computer Vision: An Overview
[article]
2021
arXiv
pre-print
In particular, the works are grouped by their representative techniques, including regularization, knowledge distillation, memory, generative replay, parameter isolation, and a combination of the above ...
For each category of these techniques, both its characteristics and applications in computer vision are presented. ...
[31] focused on incremental few-shot learning. ...
arXiv:2109.11369v2
fatcat:c7cptaycjvcxjkyi6qvwivx4qi
Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting
[article]
2021
arXiv
pre-print
We evaluate the proposed framework on four challenging benchmark datasets for image and video few-shot classification and obtain state-of-the-art results for both generalized and incremental few-shot learning ...
Both generalized and incremental few-shot learning have to deal with three major challenges: learning novel classes from only few samples per class, preventing catastrophic forgetting of base classes, ...
To this end, following ... for incremental few-shot learning. ...
arXiv:2108.08165v1
fatcat:jcwmbksz3vd7feu2unhrtib45q
Small Sample Learning in Big Data Era
[article]
2018
arXiv
pre-print
The first category of SSL approaches can be called "concept learning", which emphasizes learning new concepts from only a few related observations. ...
The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. ...
... the few-shot regime as well as the episodic training idea. ...
arXiv:1808.04572v3
fatcat:lqqzzrmgfnfb3izctvdzgopuny
Meta-Learning in Neural Networks: A Survey
2021
IEEE Transactions on Pattern Analysis and Machine Intelligence
We survey promising applications and successes of meta-learning including few-shot learning, reinforcement learning and architecture search. ...
... multiple learning episodes. ...
In dataset distillation [151]–[153], the support images themselves are learned such that a few steps on them allow for good generalization on real query images. ...
doi:10.1109/tpami.2021.3079209
pmid:33974543
fatcat:wkzeodki4fbcnjlcczn4mr6kry
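The dataset-distillation loop mentioned in this snippet is easiest to see with a functional linear model: one differentiable inner SGD step on the learned synthetic support set, then a query loss that is backpropagated into the synthetic images themselves. A minimal sketch (the linear model and all names are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def distillation_loss(syn_x, syn_y, query_x, query_y, w, inner_lr=0.1):
    """Query loss after one inner step on the synthetic support set.
    Backpropagating this loss into syn_x is what 'learns' the images."""
    inner = F.cross_entropy(syn_x @ w, syn_y)
    (grad_w,) = torch.autograd.grad(inner, w, create_graph=True)
    w_adapted = w - inner_lr * grad_w  # one differentiable SGD step
    return F.cross_entropy(query_x @ w_adapted, query_y)

# Outer loop sketch: optimize the synthetic images themselves.
# syn_x = torch.randn(10, 64, requires_grad=True)
# opt = torch.optim.Adam([syn_x], lr=1e-2)
# opt.zero_grad(); distillation_loss(...).backward(); opt.step()
```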
Meta-Learning in Neural Networks: A Survey
[article]
2020
arXiv
pre-print
We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning. ...
... learning episodes. ...
In dataset distillation [156], [157], the support images themselves are learned such that a few steps on them allow for good generalization on real query images. ...
arXiv:2004.05439v2
fatcat:3r23tsxxkfbgzamow5miglkrye
Lifelong Adaptive Machine Learning for Sensor-based Human Activity Recognition Using Prototypical Networks
[article]
2022
arXiv
pre-print
... sensor-based data streams in a task-free, data-incremental fashion and mitigates catastrophic forgetting using experience replay and continual prototype adaptation. ...
Moreover, analysis has so far focused on task-incremental or class-incremental learning paradigms where task boundaries are known. ...
While this has inspired several meta-learning algorithms, this setup does not allow classes learnt in one episode to be carried forward to the next, which is critical for continual incremental learning ...
arXiv:2203.05692v1
fatcat:xgtaxzixsrdhhd7xcs7h4y7o7u
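The "continual prototype adaptation" this entry mentions can be sketched as a running-mean prototype per class. This is a minimal NumPy illustration of the idea, not the paper's full LAPNet procedure:

```python
import numpy as np

class PrototypeMemory:
    """Streaming class prototypes: each prototype is the running mean of
    its class's embeddings, so classes can be adapted continually."""
    def __init__(self):
        self.protos, self.counts = {}, {}

    def update(self, label, embedding):
        if label not in self.protos:
            self.protos[label] = embedding.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Incremental mean: p += (x - p) / n
            self.protos[label] += (embedding - self.protos[label]) / self.counts[label]

    def predict(self, embedding):
        # Nearest prototype in embedding space.
        return min(self.protos, key=lambda c: np.linalg.norm(embedding - self.protos[c]))
```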
Personalizing Pre-trained Models
[article]
2021
arXiv
pre-print
We developed a technique, called Multi-label Weight Imprinting (MWI), for multi-label, continual, and few-shot learning, and CLIPPER uses MWI with image representations from CLIP. ...
We consider how upstream pre-trained models can be leveraged for downstream few-shot, multi-label, and continual learning tasks. ...
However, commonly used transfer techniques, e.g., fine-tuning or distillation, do not currently support few-shot, multi-label, and continual learning. ...
arXiv:2106.01499v1
fatcat:7wu5e533crf3padwftwtnx2f3y
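Weight imprinting, the mechanism MWI builds on, admits a compact sketch: the new class's classifier weights are set to the normalized mean embedding of its support examples. Illustrative code, not CLIPPER's exact procedure:

```python
import torch
import torch.nn.functional as F

def imprint_class(classifier_w, support_embeddings):
    """Append a new class row: the L2-normalized mean support embedding."""
    proto = F.normalize(support_embeddings.mean(dim=0), dim=0)
    return torch.cat([classifier_w, proto.unsqueeze(0)], dim=0)
```

Scores are then cosine similarities between normalized input embeddings and the rows of `classifier_w`; for the multi-label setting described here, a per-class sigmoid threshold would replace the usual softmax.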
Imbalanced Continual Learning with Partitioning Reservoir Sampling
[article]
2020
arXiv
pre-print
Continual learning from a sequential stream of data is a crucial challenge for machine learning research. ...
Lastly, we propose a new sampling strategy for replay-based approach named Partitioning Reservoir Sampling (PRS), which allows the model to maintain a balanced knowledge of both head and tail classes. ...
Snell, J., Swersky, K., Zemel, R.S.: Prototypical networks for few-shot learning. In: NIPS (2017). ...
arXiv:2009.03632v1
fatcat:7if3wpjr5razdl7kz6lcewaoti
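The balanced replay memory described above can be approximated with one reservoir per class; the paper's actual partition sizing accounts for class imbalance and is more involved than the equal split sketched here:

```python
import random

class PartitionedReservoir:
    """Replay memory split into per-class partitions, each maintained
    with reservoir sampling so head and tail classes stay balanced."""
    def __init__(self, capacity, n_classes):
        self.per_class = capacity // n_classes
        self.buffers, self.seen = {}, {}

    def add(self, x, y):
        buf = self.buffers.setdefault(y, [])
        self.seen[y] = self.seen.get(y, 0) + 1
        if len(buf) < self.per_class:
            buf.append(x)
        else:
            # Classic reservoir step: keep with prob per_class / seen.
            j = random.randrange(self.seen[y])
            if j < self.per_class:
                buf[j] = x
```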
Federated Reconnaissance: Efficient, Distributed, Class-Incremental Learning
[article]
2021
arXiv
pre-print
... for distributed, continual learning. ...
... increasing the accuracy by over 22% after learning 600 Omniglot classes and over 33% after learning 20 mini-ImageNet classes incrementally. ...
... Leach for his excellent and discerning feedback on an earlier version of this manuscript. ...
arXiv:2109.00150v1
fatcat:sw5fhcl5wzbl7ivku6k3ra27pe
Memory-based Parameter Adaptation
[article]
2018
arXiv
pre-print
Much higher learning rates can be used for this local adaptation, obviating the need for many iterations over similar data before good predictions can be made. ...
We demonstrate this on a range of supervised tasks: large-scale image classification and language modelling. ...
ACKNOWLEDGMENTS We would like to thank Gabor Melis for providing the LSTM baselines on the language tasks. ...
arXiv:1802.10542v1
fatcat:56h6yirgufabvixwp7cw5apgem
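The local adaptation this abstract describes can be sketched as: retrieve the nearest memory entries to the query, then take a few high-learning-rate steps on them with a throwaway copy of the model. The sketch assumes the memory stores embedding/label pairs and that `model` maps embeddings to logits; all names are illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def locally_adapt(model, mem_emb, mem_y, query_emb, k=16, lr=0.1, steps=3):
    """Fine-tune a temporary copy of the model on the k memory entries
    nearest to the query, using a high local learning rate."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    dists = torch.cdist(query_emb.unsqueeze(0), mem_emb).squeeze(0)
    idx = dists.topk(k, largest=False).indices  # k nearest neighbours
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(local(mem_emb[idx]), mem_y[idx]).backward()
        opt.step()
    return local  # use local(query_emb) for the adapted prediction
```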
Generalized Knowledge Distillation via Relationship Matching
[article]
2022
arXiv
pre-print
It also achieves state-of-the-art performance on standard knowledge distillation, one-step incremental learning, and few-shot learning tasks. ...
The knowledge of a well-trained deep neural network (a.k.a. the "teacher") is valuable for learning similar tasks. ...
REFILLED also outperforms recent methods in one-step incremental learning, few-shot learning, and middle-shot learning problems. ...
arXiv:2205.01915v1
fatcat:t6trevhwvvfpvgmaaptz4yz6tq
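For reference, the standard knowledge-distillation objective that REFILLED is benchmarked against is the temperature-scaled KL divergence between teacher and student outputs; the relationship-matching part of REFILLED additionally aligns pairwise instance relationships and is not shown here:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard temperature-scaled knowledge-distillation loss."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales gradients to match the hard-label loss magnitude.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```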
Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes
[article]
2020
arXiv
pre-print
We show that they are very promising methods for continual learning. ...
In this thesis, we explore continual learning algorithms with replay processes. Replay processes encompass rehearsal methods and generative replay methods. ...
Few-shot Learning: Few-shot learning (Lake et al., 2011; Fei-Fei et al., 2006) is the ability to learn to recognize new concepts from only a few samples of them. ...
arXiv:2007.00487v3
fatcat:fwdjynkclbchvgo73qhs6biice
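Of the two replay families named in this entry, rehearsal is the simpler: mix stored past examples into each training batch. A minimal sketch, assuming the buffer holds (x, y) tensor pairs:

```python
import random
import torch
import torch.nn.functional as F

def rehearsal_step(model, opt, batch_x, batch_y, buffer, replay_size=32):
    """One rehearsal step: train jointly on the current batch and a
    random batch replayed from stored past examples."""
    if buffer:
        rx, ry = zip(*random.sample(buffer, min(replay_size, len(buffer))))
        batch_x = torch.cat([batch_x, torch.stack(rx)])
        batch_y = torch.cat([batch_y, torch.stack(ry)])
    opt.zero_grad()
    F.cross_entropy(model(batch_x), batch_y).backward()
    opt.step()
```

Generative replay replaces the stored buffer with samples drawn from a generative model trained on past tasks; the training step itself looks the same.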
Continual Lifelong Learning with Neural Networks: A Review
[article]
2019
arXiv
pre-print
However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data ...
We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic ...
The authors would like to thank Sascha Griffiths, Vincenzo Lomonaco, Sebastian Risi, and Jun Tani for valuable feedback and suggestions. ...
arXiv:1802.07569v3
fatcat:6zn2hqi2djbu3lx5mbr75nvipq
Showing results 1–15 of 79