22/11/2021

Separating Content and Style for Unsupervised Image-to-Image Translation

Yunfei Liu, Haofei Wang, Yang Yue, Feng Lu

Keywords: Image-to-Image Translation, unsupervised learning, CNN Interpretation

Abstract: Unsupervised image-to-image translation aims to learn the mapping between two visual domains from unpaired samples. Existing works usually focus on disentangling the domain-invariant content code and the domain-specific style code individually, for multi-modal translation. However, interpreting and manipulating the translated image has not been well explored. In this paper, we propose to separate the content code and style code simultaneously in a unified framework. Based on the correlation between the latent features and high-level domain-invariant tasks, the proposed framework exhibits desirable properties such as multi-modal translation, good interpretability, and ease of manipulation. Experimental results also demonstrate that the proposed approach outperforms existing unsupervised image translation methods in terms of visual quality and diversity.
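The content/style separation described in the abstract can be illustrated with a toy sketch. This is not the paper's method: simple linear maps stand in for the CNN encoders and decoder, and all names and dimensions below are hypothetical, chosen only to show the idea of translating an image by swapping its style code while keeping its content code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper).
IMG_DIM, CONTENT_DIM, STYLE_DIM = 16, 8, 4

# Toy "encoders" and "decoder": random linear maps in place of trained CNNs.
W_content = rng.standard_normal((CONTENT_DIM, IMG_DIM))
W_style = rng.standard_normal((STYLE_DIM, IMG_DIM))
W_decode = rng.standard_normal((IMG_DIM, CONTENT_DIM + STYLE_DIM))

def encode(image):
    """Split an image into a domain-invariant content code and a
    domain-specific style code (toy linear version)."""
    return W_content @ image, W_style @ image

def decode(content, style):
    """Recombine a content code with a (possibly swapped) style code."""
    return W_decode @ np.concatenate([content, style])

# Translation by style swapping: keep the content of x_a, apply the style of x_b.
x_a = rng.standard_normal(IMG_DIM)  # image from domain A (flattened toy vector)
x_b = rng.standard_normal(IMG_DIM)  # image from domain B
c_a, _ = encode(x_a)
_, s_b = encode(x_b)
x_ab = decode(c_a, s_b)  # x_a rendered with domain B's style
```

Feeding different style codes into `decode` with the same content code is what yields the multi-modal outputs the abstract mentions; in the real framework the encoders and decoder are trained networks and the codes are learned jointly.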

The talk and the respective paper are published at the BMVC 2021 virtual conference.
