14/06/2020

Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment

Po-Hsiang Huang, Fu-En Yang, Yu-Chiang Frank Wang

Keywords: face reenactment, video retargeting, representation learning, video generation, adversarial learning, self-supervised learning

Abstract: Human face reenactment aims at transferring motion patterns from one face (from a source-domain video) to another (in the target domain with the identity of interest). While recent works report impressive results, they are not able to handle multiple identities in a unified model. In this paper, we propose a unique network, CrossID-GAN, to perform multi-ID face reenactment. Given a source-domain video with extracted facial landmarks and a target-domain image, our CrossID-GAN learns identity-invariant motion patterns from the extracted landmarks and utilizes such information to produce videos whose ID matches that of the target domain. Both supervised and unsupervised settings are introduced to guide our model during training. Our qualitative/quantitative results confirm the robustness and effectiveness of our model, and ablation studies validate our network design.
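For intuition, the following is a minimal PyTorch sketch of the data flow the abstract describes: a motion encoder maps source-domain landmark sequences to identity-invariant motion codes, an identity encoder maps the target-domain image to an identity code, and a generator combines the two to render the reenacted frames. All module names, layer choices, and dimensions (MotionEncoder, IdentityEncoder, Generator, motion_dim, id_dim, etc.) are illustrative assumptions and not the paper's actual architecture; the adversarial and self-supervised training objectives are omitted here.

```python
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Encodes per-frame facial landmarks into identity-invariant motion codes."""
    def __init__(self, num_landmarks=68, motion_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_landmarks * 2, 256), nn.ReLU(),
            nn.Linear(256, motion_dim),
        )

    def forward(self, landmarks):               # (B, T, num_landmarks, 2)
        return self.net(landmarks.flatten(2))   # (B, T, motion_dim)

class IdentityEncoder(nn.Module):
    """Encodes the target-domain image into an identity code."""
    def __init__(self, id_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, id_dim),
        )

    def forward(self, image):                   # (B, 3, H, W)
        return self.net(image)                  # (B, id_dim)

class Generator(nn.Module):
    """Decodes (identity code, per-frame motion codes) into output frames."""
    def __init__(self, motion_dim=128, id_dim=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(motion_dim + id_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, id_code, motion_codes):   # (B, id_dim), (B, T, motion_dim)
        b, t, _ = motion_codes.shape
        # Broadcast the single identity code across all T frames of motion.
        z = torch.cat([id_code.unsqueeze(1).expand(-1, t, -1), motion_codes], dim=-1)
        frames = self.net(z)                    # (B, T, 3*H*W)
        return frames.view(b, t, 3, self.img_size, self.img_size)

# Hypothetical usage: reenact a target identity with source-domain motion.
motion_enc, id_enc, gen = MotionEncoder(), IdentityEncoder(), Generator()
landmarks = torch.randn(2, 8, 68, 2)            # source-domain landmark sequence
target = torch.randn(2, 3, 64, 64)              # target-domain identity image
video = gen(id_enc(target), motion_enc(landmarks))  # (2, 8, 3, 64, 64)
```

Decoupling the motion and identity pathways in this way is what allows a single model to handle multiple identities: only the identity code changes across targets, while the landmark-derived motion codes stay identity-invariant. The adversarial and self-supervised losses named in the keywords would be applied on top of this forward pass.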

The talk and the corresponding paper were presented at the CVPR 2020 virtual conference.
