Learning to Generalize to Unseen Tasks with Bilevel Optimization

Hayeon Lee, Donghyun Na, Hae Beom Lee, Sung Ju Hwang
2019 arXiv pre-print
Recent metric-based meta-learning approaches, which learn a metric space that generalizes well over a combinatorial number of different classification tasks sampled from a task distribution, have been shown to be effective for few-shot classification of unseen classes. They are typically trained with episodic training, in which they iteratively learn a common metric space that reduces the distance between the class representatives and the instances belonging to each class, over a large number of episodes with randomly sampled classes. However, this training is limited in that, while the main target is generalization to the classification of unseen classes, there is no explicit consideration of generalization during the meta-training phase. To tackle this issue, we propose a simple yet effective meta-learning framework for metric-based approaches, which we refer to as learning to generalize (L2G), that explicitly constrains the learning on a sampled classification task to reduce the classification error on a randomly sampled unseen classification task, using a bilevel optimization scheme. This explicit learning aimed at generalization allows the model to obtain a metric that separates unseen classes well. We validate our L2G framework on the mini-ImageNet and tiered-ImageNet datasets with two base meta-learning few-shot classification models, Prototypical Networks and Relation Networks. The results show that L2G significantly improves the performance of both methods over episodic training. Further visualization shows that L2G obtains a metric space that clusters and separates unseen classes well.
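To make the bilevel scheme described above concrete, the sketch below shows one possible meta-training step: an inner gradient update on a sampled "seen" episode, followed by an outer loss on a disjoint "unseen" episode evaluated at the inner-updated parameters. This is a minimal illustration of the idea, not the authors' released code; the names (ConvEmbedder, proto_loss, l2g_step, inner_lr) and the first-order inner update are assumptions of this sketch, and a Prototypical-Network loss is used as the base model.

```python
# Minimal sketch of a bilevel L2G-style meta-training step (PyTorch >= 2.0).
# All class/function names here are hypothetical, not from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class ConvEmbedder(nn.Module):
    """Tiny embedding network standing in for a Conv-4 few-shot backbone."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim))

    def forward(self, x):
        return self.net(x)


def proto_loss(model, params, support_x, support_y, query_x, query_y, n_way):
    """Prototypical-Network loss computed with an explicit parameter dict."""
    z_s = functional_call(model, params, (support_x,))
    z_q = functional_call(model, params, (query_x,))
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(z_q, protos)  # negative Euclidean distance to prototypes
    return F.cross_entropy(logits, query_y)


def l2g_step(model, meta_opt, train_ep, unseen_ep, n_way, inner_lr=0.1):
    """One bilevel step: inner update on a sampled task, outer loss on an
    unseen task evaluated at the inner-updated parameters."""
    params = dict(model.named_parameters())
    inner_loss = proto_loss(model, params, *train_ep, n_way=n_way)
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    # Inner update kept in the graph so the outer loss can differentiate
    # through it (theta' = theta - inner_lr * grad).
    fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer objective: error on the disjoint unseen episode under theta'.
    outer_loss = proto_loss(model, fast, *unseen_ep, n_way=n_way)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
    return inner_loss.item(), outer_loss.item()
```

In a training loop, train_ep and unseen_ep would each be a (support_x, support_y, query_x, query_y) tuple sampled from disjoint sets of classes, so that the outer objective explicitly measures generalization to classes not used in the inner update.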
arXiv:1908.01457v1