19/08/2021

You Get What You Sow: High Fidelity Image Synthesis with a Single Pretrained Network

Kefeng Zhu, Peilin Tong, Hongwei Kan, Rengang Li

Keywords: Machine Learning, Classification, Deep Learning, Explainable/Interpretable Machine Learning

Abstract: State-of-the-art image synthesis methods are mostly based on generative adversarial networks and require large datasets and extensive training. Although the model-inversion-oriented branch of methods eliminates the training requirement, the quality of the resulting images tends to be limited due to the lack of sufficient natural and class-specific information. In this paper, we introduce a novel strategy for high fidelity image synthesis with a single pretrained classification network. The strategy includes a class-conditional natural regularization design and a corresponding metadata collecting procedure for different scenarios. We show that our method can synthesize high quality natural images that closely follow the features of one or more given seed images. Moreover, our method achieves surprisingly decent results in the task of sketch-based image synthesis without training. Finally, our method further improves performance in terms of accuracy and efficiency in the data-free knowledge distillation task.
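For context, below is a minimal sketch of generic model-inversion image synthesis, the family of methods the abstract refers to, not the authors' specific technique: pixels are optimized so that a frozen pretrained classifier assigns a chosen target class, with a simple total-variation prior standing in, purely for illustration, for the paper's class-conditional natural regularization. The target class index and loss weights are assumptions.

```python
# Hypothetical sketch of model-inversion image synthesis with a pretrained
# classifier (illustrative only; not the paper's method).
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(pretrained=True).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image is optimized, not the network

target_class = 207  # assumed ImageNet class index, chosen arbitrarily
x = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

def total_variation(img):
    # Encourages locally smooth, more natural-looking images.
    tv_h = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    tv_w = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return tv_h + tv_w

for step in range(500):
    optimizer.zero_grad()
    logits = model(x)
    # Cross-entropy drives the image toward the target class;
    # the TV term is a stand-in natural-image regularizer.
    loss = F.cross_entropy(logits, torch.tensor([target_class], device=device))
    loss = loss + 1e-2 * total_variation(x)
    loss.backward()
    optimizer.step()
```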

The talk and the paper are published at the IJCAI 2021 virtual conference.

