IRGUN : Improved Residue Based Gradual Up-Scaling Network for Single Image Super Resolution
TL;DR: A novel Improved Residual based Gradual Up-Scaling Network (IRGUN) that improves the quality of the super-resolved image at large magnification factors and recovers fine details effectively at 8X magnification.
Abstract: Convolutional neural network based architectures have achieved decent perceptual quality super-resolution on natural images for small scaling factors (2X and 4X). However, image super-resolution at large magnification factors (8X) remains an extremely challenging problem for the computer vision community. In this paper, we propose a novel Improved Residual based Gradual Up-Scaling Network (IRGUN) to improve the quality of the super-resolved image at a large magnification factor. IRGUN has a Gradual Upsampling and Residue-based Enhancement Network (GUREN), which comprises a series of Up-scaling and Enhancement Blocks (UEB) connected end-to-end and fine-tuned together to give gradual magnification and enhancement. Owing to the perceptual importance of luminance in super-resolution, the model is trained on the luminance (Y) channel of the YCbCr image, whereas the chrominance channels (Cb and Cr) are up-scaled using bicubic interpolation, combined with the super-resolved Y channel, and then converted to RGB. A cascaded 3D-RED architecture trained on RGB images is utilized to incorporate inter-channel correlation. The training methodology is also presented in the paper: the weights of each UEB are used to initialize the next immediate UEB for faster and better convergence, and each UEB is trained on its respective scale, taking the output image of the previous UEB as input and the HR image of the same scale as ground truth. All the UEBs are then connected end-to-end and fine-tuned. IRGUN recovers fine details effectively at large (8X) magnification factors. The efficiency of IRGUN is demonstrated on various benchmark datasets and at different magnification scales.
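Below is a minimal, PyTorch-style sketch of the gradual up-scaling idea described in the abstract: a chain of 2X Up-scaling and Enhancement Blocks applied to the luminance channel, with each block adding a learned residual to its up-scaled input. The class names, layer widths, and depths (UEB, GradualSR, channels=64, depth=4) are illustrative assumptions, not the exact IRGUN configuration; the Cb/Cr handling and the cascaded 3D-RED stage are omitted.

    # Illustrative sketch only; block sizes and names are assumptions, not the paper's exact design.
    import torch
    import torch.nn as nn

    class UEB(nn.Module):
        """One Up-scaling and Enhancement Block: 2X upsampling followed by a
        residual enhancement stage (illustrative depth/width)."""
        def __init__(self, channels=64, depth=4):
            super().__init__()
            self.upsample = nn.Upsample(scale_factor=2, mode='bicubic', align_corners=False)
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth):
                layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.enhance = nn.Sequential(*layers)

        def forward(self, y):
            up = self.upsample(y)
            return up + self.enhance(up)  # residual enhancement of the up-scaled Y channel

    class GradualSR(nn.Module):
        """Chain of UEBs: 8X magnification realized as three gradual 2X stages."""
        def __init__(self, num_stages=3):
            super().__init__()
            self.stages = nn.ModuleList([UEB() for _ in range(num_stages)])

        def forward(self, y_lr):
            out = y_lr
            intermediates = []
            for stage in self.stages:
                out = stage(out)  # each stage is trained at its own scale, then all are fine-tuned end-to-end
                intermediates.append(out)
            return out, intermediates

    # Usage: only the luminance (Y) channel is super-resolved here; Cb/Cr would be
    # bicubic-upscaled and merged back before converting YCbCr to RGB (not shown).
    model = GradualSR()
    y_lr = torch.randn(1, 1, 32, 32)   # low-resolution Y channel
    y_sr, _ = model(y_lr)              # 8X output: 256x256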
Citations
222 citations
Cites background from "IRGUN : Improved Residue Based Grad..."
...CEERI team proposed an improved residual based gradual upscaling network (IRGUN) [29]....
[...]
...The IRGUN has a series of up-scaling and enhancement blocks (UEB) connected end-to-end and fine-tuned together to give a gradual magnification and enhancement....
[...]
...Title: Improved residual based gradual upscaling network (IRGUN). Members: Manoj Sharma, Rudrabha Mukhopadhyay, Avinash Upadhyay, Sriharsha Koundinya, Ankit Shukla, Santanu Chaudhury. Affiliation: CSIR-CEERI, India...
[...]
7 citations
6 citations
Cites background from "IRGUN : Improved Residue Based Grad..."
...For instance, [58, 32, 87, 33, 36, 58, 15, 1, 64, 43, 26, 16, 12, 70, 92, 40, 53, 79, 22, 57, 59, 4, 60, 78, 66] are some deep networks for super-resolution....
[...]
References
78,539 citations
"IRGUN : Improved Residue Based Grad..." refers methods in this paper
...We trained our model with Adam optimizer [13]....
[...]
6,077 citations
4,680 citations
4,397 citations
"IRGUN : Improved Residue Based Grad..." refers background or methods or result in this paper
...Red shows highest while blue shows second highest. (Table columns: Dataset, Scale, Bicubic, SRGAN [15], VDSR [11], GUN [38], RDN [27], EDSR [28], LapSRN [14], IRGUN)...
[...]
...They are, SRGAN [15], VDSR [11], GUN [36], RDN [40], EDSR [26] and LapSRN [1]....
[...]
...However, in some of the models the increment in scale is also done using convolutional layers [1, 15, 18, 25, 28, 30]....
[...]
...Among the above mentioned frameworks, SRGAN gives good perceptual quality while its PSNR and SSIM metrics are poor in comparison to other methods....
[...]
...We compare results with SRGAN [15], VDSR [11], GUN [36], RDN [40], EDSR [26] and LapSRN [1]....
[...]
4,389 citations
"IRGUN : Improved Residue Based Grad..." refers methods in this paper
...Reconstruction of missing information with known LR/HR example pair uses learning based methods such as neighbor embedding based methods [2], local self-exemplar methods and sparse representation based methods [34, 36]....
[...]