12/07/2020

Laplacian Regularized Few-Shot Learning

Imtiaz Ziko, Jose Dolz, Eric Granger, Ismail Ben Ayed

Keywords: Unsupervised and Semi-Supervised Learning

Abstract: Few-shot learning attempts to generalize to unlabeled query samples of new classes, which are unseen during training, given just a few labeled examples of those classes. It has received substantial research interest recently, with a large body of work based on complex meta-learning strategies and architecture choices. We propose a Laplacian-regularization objective for few-shot tasks, which integrates two types of potentials: (1) unary potentials assigning query samples to the nearest class prototype and (2) pairwise Laplacian potentials encouraging nearby query samples to have consistent predictions. We optimize a tight upper bound of a concave-convex relaxation of our objective, thereby guaranteeing convergence, while computing independent updates for each query sample. Following the standard experimental setting for few-shot learning, our LaplacianShot technique outperforms state-of-the-art methods significantly, while using simple cross-entropy training on the base classes. In the 1-shot setting on the standard miniImageNet and tieredImageNet benchmarks, and on the recent meta-iNat benchmark, across various networks, LaplacianShot consistently provides a 3-4% improvement in accuracy over the best-performing state-of-the-art method.
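The transductive inference the abstract describes can be illustrated with a short sketch: unary potentials from prototype distances, pairwise Laplacian potentials from a k-nearest-neighbor affinity graph over the queries, and iterative per-sample updates on the relaxed objective. All function and parameter names below (`laplacian_shot`, `knn`, `lam`, `n_iter`) are illustrative choices, not the authors' implementation, and the update rule is a simplified rendering of the bound optimization, assuming squared-Euclidean distances and a binary kNN affinity.

```python
import numpy as np

def laplacian_shot(queries, prototypes, knn=3, lam=1.0, n_iter=20):
    """Hypothetical sketch of Laplacian-regularized few-shot inference.

    queries:    (n, d) array of query features.
    prototypes: (k, d) array of class prototypes (e.g. support means).
    Returns soft assignments of shape (n, k).
    """
    # Unary potentials: squared distance from each query to each prototype.
    d = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)

    # Pairwise potentials: binary affinity linking each query to its
    # knn nearest neighbors among the other queries.
    pd = ((queries[:, None, :] - queries[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(pd, np.inf)
    W = np.zeros_like(pd)
    for i, nb in enumerate(np.argsort(pd, axis=1)[:, :knn]):
        W[i, nb] = 1.0

    # Initialize assignments from the unary potentials alone.
    y = np.exp(-d)
    y /= y.sum(1, keepdims=True)

    # Independent per-query updates on the concave-convex relaxation:
    # each sample's assignment is pulled toward its nearest prototype
    # and toward the current predictions of its graph neighbors.
    for _ in range(n_iter):
        logits = -d + lam * (W @ y)
        logits -= logits.max(1, keepdims=True)  # numerical stability
        y = np.exp(logits)
        y /= y.sum(1, keepdims=True)
    return y
```

With two well-separated prototypes and queries clustered around them, the soft assignments converge to the correct classes; the `lam` weight controls how strongly neighboring queries are encouraged to agree.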

This talk and the respective paper were published at the ICML 2020 virtual conference.
