Abstract:
Super-Resolution Generative Adversarial Networks (SRGAN) and follow-up perceptual single image super-resolution (SISR) methods have demonstrated impressive texture generation capability. However, these models do not fully exploit the difference between the reconstructed image and the original image. In this paper, we propose a Self-Interpolation Ranker (SI-Ranker) to take advantage of this difference. SI-Ranker ranks images interpolated between the reconstructed image and the original image, and uses this ranking to guide image reconstruction during training. This allows the generator to focus on the difference between the reconstructed image and the original image, improving reconstruction quality and producing results closer to the original. In addition, we propose a Patch Distance Loss (PDL), which constrains the reconstruction by cutting the reconstructed and original images into small patches and computing the cosine similarity between corresponding patches. Experiments show that SIR-SRGAN improves consistency with the original image at both the pixel and feature levels, achieving performance comparable to state-of-the-art methods.
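
To make the patch-based constraint concrete, below is a minimal sketch of a patch-wise cosine-similarity loss in PyTorch. The patch size, the non-overlapping unfolding, and the `1 - cosine` penalty form are illustrative assumptions, not the paper's exact formulation of PDL.

```python
import torch
import torch.nn.functional as F

def patch_distance_loss(sr: torch.Tensor, hr: torch.Tensor, patch_size: int = 8) -> torch.Tensor:
    """Illustrative patch-wise cosine-similarity loss.

    sr, hr: tensors of shape (N, C, H, W); patch_size is an assumed
    hyperparameter rather than a value taken from the paper.
    """
    # Cut both images into non-overlapping patches, giving tensors of shape
    # (N, C * patch_size * patch_size, num_patches).
    sr_patches = F.unfold(sr, kernel_size=patch_size, stride=patch_size)
    hr_patches = F.unfold(hr, kernel_size=patch_size, stride=patch_size)
    # Cosine similarity between corresponding patch vectors.
    cos = F.cosine_similarity(sr_patches, hr_patches, dim=1)
    # Penalize dissimilar patches; the loss is zero when every patch pair aligns.
    return (1.0 - cos).mean()
```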