22/11/2021

Meta-learning the Learning Trends Shared Across Tasks

Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah

Keywords: Meta-learning, Few-shot learning

Abstract: Meta-learning stands for ‘learning to learn’, i.e., acquiring knowledge that generalizes to new tasks. Gradient-based meta-learning algorithms are a sub-class of these methods that excel at quick adaptation to new tasks with limited data, demonstrating an ability to acquire transferable knowledge that is central to human learning. However, existing meta-learning approaches rely only on the current task's information during adaptation and do not share the meta-knowledge of how similar tasks have been adapted before. To address this gap, we propose a ‘Path-aware’ model-agnostic meta-learning approach. Specifically, our approach not only learns a good initialization (meta-parameters) for adaptation, it also learns an optimal way to adapt these parameters to a set of task-specific parameters, with learnable update directions, learning rates and, most importantly, the way updates evolve over different time-steps. Our approach is simple to implement and converges faster than competing methods. We report significant performance improvements on a number of few-shot classification and regression benchmarks.
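
To make the idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a MAML-style inner loop in which each adaptation step has its own learnable, per-parameter learning rates, so the outer loop can meta-learn how the update trajectory evolves over time-steps, one of the ingredients the abstract describes. The toy regression task, the model, and all names (adapt, linear_model, lrs) are hypothetical.

import torch
import torch.nn.functional as F

def adapt(params, lrs, x_s, y_s, model_fn, steps=3):
    # Inner loop: each time-step t has its own learnable learning-rate
    # tensors, letting the meta-learner shape how updates evolve per step.
    for t in range(steps):
        loss = F.mse_loss(model_fn(params, x_s), y_s)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, lr, g in zip(params, lrs[t], grads)]
    return params

def linear_model(params, x):
    w, b = params
    return x @ w + b

# Meta-parameters: the initialization plus one learning-rate tensor per
# parameter per inner step (all of them updated by the outer loop).
w0 = torch.randn(1, 1, requires_grad=True)
b0 = torch.zeros(1, requires_grad=True)
steps = 3
lrs = [[torch.full_like(p, 0.01).requires_grad_() for p in (w0, b0)]
       for _ in range(steps)]
meta_opt = torch.optim.Adam([w0, b0] + [lr for step in lrs for lr in step], lr=1e-3)

# One meta-training iteration on a toy regression task (y = 2x).
x_s = torch.randn(10, 1); y_s = 2 * x_s   # support set
x_q = torch.randn(10, 1); y_q = 2 * x_q   # query set
fast = adapt([w0, b0], lrs, x_s, y_s, linear_model, steps)
meta_loss = F.mse_loss(linear_model(fast, x_q), y_q)
meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()

Because the inner updates are built with create_graph=True, the outer (meta) gradient flows back through the whole adaptation path, updating the initialization and the per-step learning rates jointly.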

The talk and the respective paper were published at the BMVC 2021 virtual conference.
