14/09/2020

Inductive Generalized Zero-shot Learning with Adversarial Relation Network

Guanyu Yang, Kaizhu Huang, Rui Zhang, John Goulermas, Amir Hussain

Keywords: zero-shot learning, adversarial examples, gradient penalty

Abstract: We consider the inductive Generalized Zero-Shot Learning (GZSL) problem, where test information is assumed unavailable during training. Lacking training samples and attributes for unseen classes, most existing GZSL methods tend to classify target samples as seen classes. To alleviate this problem, we design an adversarial Relation Network that favors assigning target samples to unseen classes while retaining robust recognition of seen classes. Specifically, through the adversarial framework, we attain a robust recognizer in which a small gradient adjustment to an instance barely affects the classification of seen classes but substantially increases the classification accuracy on unseen classes. We conduct extensive experiments on four benchmarks, i.e., AwA1, AwA2, aPY, and CUB. Experimental results show that our proposed method attains encouraging performance, exceeding the best state-of-the-art models by 10.8%, 14.0%, 6.9%, and 1.9% on the four benchmark datasets, respectively, in the inductive GZSL scenario. (The code is available at https://github.com/ygyvsys/AdvRN-with-SR)
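
The abstract describes the mechanism only at a high level. As a rough illustration of the general idea of pairing a relation network with gradient-based adversarial perturbations and a gradient penalty, the following PyTorch snippet is a minimal sketch. It assumes an FGSM-style perturbation of the visual features and a simple squared-gradient penalty; all names (RelationNet, adversarial_step), dimensions, and hyper-parameters are illustrative assumptions and do not reproduce the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch only: relation-network scoring with an FGSM-style
# adversarial perturbation and a gradient penalty. Names, dimensions, and
# weights are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationNet(nn.Module):
    """Scores compatibility between a visual feature and a class attribute vector."""
    def __init__(self, feat_dim=2048, attr_dim=85, hidden=1200):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim + attr_dim, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, feats, attrs):
        # feats: (B, feat_dim); attrs: (C, attr_dim) -> relation scores (B, C)
        B, C = feats.size(0), attrs.size(0)
        pairs = torch.cat([feats.unsqueeze(1).expand(B, C, -1),
                           attrs.unsqueeze(0).expand(B, C, -1)], dim=-1)
        return self.fc2(F.relu(self.fc1(pairs))).squeeze(-1)

def adversarial_step(model, feats, attrs, labels, eps=0.1, gp_weight=10.0):
    """One training loss combining a clean term, a term on FGSM-perturbed
    features, and a gradient penalty on the input gradient (hypothetical setup)."""
    feats = feats.clone().requires_grad_(True)
    clean_loss = F.cross_entropy(model(feats, attrs), labels)

    # Gradient of the clean loss w.r.t. the input features.
    grad = torch.autograd.grad(clean_loss, feats, create_graph=True)[0]

    # Small adversarial step along the gradient sign (FGSM-style).
    adv_feats = feats + eps * grad.sign()
    adv_loss = F.cross_entropy(model(adv_feats, attrs), labels)

    # Gradient penalty: keep input gradients small so seen-class predictions
    # stay stable under small feature perturbations.
    gp = grad.pow(2).sum(dim=1).mean()

    return clean_loss + adv_loss + gp_weight * gp
```

Under these assumptions, the penalty term discourages the recognizer from being sensitive to small feature shifts (protecting seen-class accuracy), while training on the perturbed features pushes the decision boundaries in a direction that leaves more room for unseen classes.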

The talk and the respective paper were published at the ECML PKDD 2020 virtual conference.
