Abstract:
State-of-the-art image synthesis methods are mostly based on generative adversarial networks and require large datasets and extensive training. Although the model-inversion branch of methods eliminates the training requirement, the quality of the resulting images tends to be limited by the lack of sufficient natural and class-specific information. In this paper, we introduce a novel strategy for high-fidelity image synthesis using a single pretrained classification network. The strategy combines a class-conditional natural regularization design with a corresponding metadata collection procedure for different scenarios. We show that our method can synthesize high-quality natural images that closely follow the features of one or more given seed images. Moreover, our method achieves surprisingly strong results on sketch-based image synthesis without any training. Finally, our method further improves both accuracy and efficiency in the data-free knowledge distillation task.