07/09/2020

MagnifierNet: Towards Semantic Adversary and Fusion for Person Re-identification

Yushi Lan, Yuan Liu, Xinchi Zhou, Maoqing Tian, Xuesen Zhang, Shuai Yi, Hongsheng Li

Keywords: person re-identification, adversarial samples, metric learning, multi-task learning, image retrieval

Abstract: Although person re-identification (ReID) has improved significantly in recent years by enforcing part alignment, it remains a challenging task when distinguishing visually similar identities or identifying occluded persons. In these scenarios, magnifying the details in each part feature and selectively fusing them together may provide a feasible solution. In this work, we propose MagnifierNet, a triple-branch network that accurately mines details from the whole image down to its parts. First, holistic salient features are encoded by a global branch. Second, to enhance the detailed representation of each semantic region, the "Semantic Adversarial Branch" is designed to learn from semantic-occluded samples generated dynamically during training. Meanwhile, we introduce the "Semantic Fusion Branch" to filter out irrelevant noise by selectively fusing semantic region information in sequence. To further improve feature diversity, we introduce a novel loss function, the "Semantic Diversity Loss", which removes redundant overlaps across the learned semantic representations. State-of-the-art performance is achieved on three benchmarks by large margins. Specifically, the mAP score is improved by 6% and 5% on the most challenging CUHK03-L and CUHK03-D benchmarks, respectively.
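The abstract does not give the exact formulation of the Semantic Diversity Loss, but the idea of "removing redundant overlaps across learned semantic representations" can be illustrated with a minimal sketch: penalize pairwise similarity between the per-region embeddings of the same image. The function name, tensor shapes, and cosine-similarity penalty below are assumptions for illustration, not the authors' published formulation.

import torch
import torch.nn.functional as F

def semantic_diversity_loss(part_features: torch.Tensor) -> torch.Tensor:
    # part_features: (batch, num_parts, dim), one embedding per semantic region.
    # Hypothetical diversity penalty: discourage overlap (high cosine similarity)
    # between different semantic part embeddings of the same image.
    feats = F.normalize(part_features, dim=-1)          # unit-length embeddings
    sim = torch.bmm(feats, feats.transpose(1, 2))        # (batch, P, P) cosine similarities
    num_parts = sim.size(1)
    # Remove the diagonal (self-similarity) and average the off-diagonal entries.
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    per_sample = off_diag.abs().sum(dim=(1, 2)) / (num_parts * (num_parts - 1))
    return per_sample.mean()

In practice such a term would be added, with a small weight, to the usual ReID objectives (identification and metric-learning losses) so that each semantic branch is pushed toward capturing distinct, non-redundant details.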

The talk and the respective paper are published at the BMVC 2020 virtual conference.

