Open Access · Posted Content

Efficient Image Super-Resolution Using Pixel Attention

TLDR
This work designs a lightweight convolutional neural network for image super-resolution with a newly proposed pixel attention scheme that achieves performance comparable to the lightweight networks SRResNet and CARN, but with only 272K parameters.
Abstract
This work aims at designing a lightweight convolutional neural network for image super-resolution (SR). With simplicity borne in mind, we construct a concise and effective network with a newly proposed pixel attention scheme. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. The difference is that PA produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. On the basis of PA, we propose two building blocks for the main branch and the reconstruction branch, respectively. The first one, the SC-PA block, has the same structure as the Self-Calibrated convolution but with our PA layer. This block is much more efficient than conventional residual/dense blocks, owing to its two-branch architecture and attention scheme. The second one, the UPA block, combines nearest-neighbor upsampling, convolution and PA layers. It improves the final reconstruction quality at little parameter cost. Our final model, PAN, achieves performance comparable to the lightweight networks SRResNet and CARN, but with only 272K parameters (17.92% of SRResNet and 17.09% of CARN). The effectiveness of each proposed component is also validated by an ablation study. The code is available at this https URL.
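In code, the PA scheme described above amounts to a 1x1 convolution followed by a sigmoid, whose output rescales every feature value individually (hence a 3D, C x H x W attention map). Below is a minimal PyTorch sketch of such a layer; the class name and the 40-channel example are illustrative assumptions, not taken from the released code.

```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Pixel attention sketch: a 1x1 conv + sigmoid produces a 3D
    (C x H x W) attention map that rescales every feature value."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        attn = self.sigmoid(self.conv(x))  # same shape as x: (N, C, H, W)
        return x * attn                    # element-wise recalibration

# Usage: attend over a batch of 40-channel feature maps (channel count assumed).
features = torch.randn(1, 40, 64, 64)
pa = PixelAttention(40)
out = pa(features)  # (1, 40, 64, 64)
```

Because the only learned weights are a single 1x1 convolution, the layer adds few parameters relative to the channel- and spatial-attention designs it is compared against.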


Citations
Proceedings ArticleDOI

ClassSR: A General Framework to Accelerate Super-Resolution Networks by Data Characteristic

TL;DR: Wang et al. propose ClassSR, a new solution pipeline that combines classification and SR in a unified framework; it can help most existing methods (e.g., FSRCNN, CARN, SRResNet, RCAN) save up to 50% FLOPs on the DIV8K dataset.
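The pipeline this summary describes can be pictured as a small classifier routing each patch to one of several SR branches of increasing capacity, so that easy (e.g. smooth) patches take the cheap path. The sketch below illustrates that routing idea only; all names and the toy branches are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ClassSRSketch(nn.Module):
    """Sketch of the ClassSR idea: classify each patch by restoration
    difficulty, then run only the matching-capacity SR branch."""
    def __init__(self, branches, classifier):
        super().__init__()
        self.branches = nn.ModuleList(branches)  # e.g. small/medium/large SR nets
        self.classifier = classifier             # predicts a difficulty class

    def forward(self, patch):
        logits = self.classifier(patch)          # (N, num_classes)
        idx = int(logits.argmax(dim=1)[0])       # hard routing at test time
        return self.branches[idx](patch)         # cheap patches use cheap branches

# Toy usage: three hypothetical "branches" (here just convolutions, not real SR nets).
branches = [nn.Conv2d(3, 3, 3, padding=1) for _ in range(3)]
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 3))
router = ClassSRSketch(branches, classifier)
out = router(torch.randn(1, 3, 32, 32))
```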
Proceedings ArticleDOI

NTIRE 2021 Challenge on Image Deblurring

TL;DR: The NTIRE 2021 Challenge on Image Deblurring comprises two tracks, both of which aim to recover a high-quality clean image from a blurry input in which different artifacts are jointly involved.
Posted Content

Interpreting Super-Resolution Networks with Local Attribution Maps

TL;DR: This work proposes a novel attribution approach called the local attribution map (LAM), which inherits the integrated gradient method but with two unique features: one is to use a blurred image as the baseline input, and the other is to adopt a progressive blurring function as the path function.
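The recipe this summary describes is an integrated-gradients variant: accumulate input gradients along a path from a blurred baseline to the sharp image, then weight them by the input-baseline difference. The sketch below illustrates that idea with plain linear interpolation standing in for the paper's progressive blurring path function; the function name and the externally supplied blur operator are assumptions.

```python
import torch

def lam_sketch(model, img, blur, steps=20):
    """Integrated-gradients-style attribution: accumulate input gradients
    along a path from a blurred baseline to the sharp image. Linear
    interpolation is used here instead of LAM's progressive blurring."""
    baseline = blur(img)                      # blurred copy as the baseline input
    grads = torch.zeros_like(img)
    for t in torch.linspace(0.0, 1.0, steps):
        x = (baseline + t * (img - baseline)).detach().requires_grad_(True)
        score = model(x).sum()                # scalar proxy for an SR output region
        score.backward()
        grads += x.grad
    return (img - baseline) * grads / steps   # attribution map, same shape as img
```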
Journal ArticleDOI

NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results

TL;DR: The NTIRE 2022 challenge was to super-resolve an input image with a magnification factor of ×4 based on pairs of low-resolution and corresponding high-resolution images; the aim was to design a network for single-image super-resolution that improves efficiency as measured by several metrics.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound whose convergence rate is comparable to the best known results under the online convex optimization framework.
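The "adaptive estimates of lower-order moments" are exponential moving averages of the gradient and of its square, with bias correction for early steps. A compact sketch of a single Adam update, using the paper's default hyper-parameters:

```python
import torch

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step count."""
    m = b1 * m + (1 - b1) * grad          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias correction for warm-up steps
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (v_hat.sqrt() + eps)
    return param, m, v
```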
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and is validated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
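The index compares local luminance (means), contrast (variances) and structure (covariance) between the two images. A sketch of the single-window form with the standard stabilizing constants follows; practical implementations average this quantity over local Gaussian windows rather than computing it globally.

```python
import torch

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM over whole images with values in [0, 255]."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x = x.var(unbiased=False)
    var_y = y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```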
Journal ArticleDOI

Squeeze-and-Excitation Networks

TL;DR: This work proposes a novel architectural unit, termed the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and finds that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost.
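In contrast with the 3D maps of pixel attention above, the SE block produces a 1D per-channel attention vector: global average pooling squeezes each channel to a scalar, and a two-layer bottleneck with a sigmoid produces per-channel scales. A minimal sketch, assuming the paper's reduction ratio of 16:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: per-channel (1D) attention, unlike the
    per-pixel (3D) attention maps of PA."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))            # squeeze: global average pool -> (N, C)
        w = self.fc(s).view(n, c, 1, 1)   # excitation: per-channel weights
        return x * w                      # recalibrate channel responses
```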

Automatic differentiation in PyTorch

TL;DR: The automatic differentiation module of PyTorch is described: a library designed to enable rapid research on machine learning models, focusing on differentiation of purely imperative programs with an emphasis on extensibility and low overhead.
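A two-line example of what "differentiation of purely imperative programs" means in practice: the computation graph is recorded while ordinary Python code runs, and gradients are retrieved with backward().

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x   # ordinary imperative code; the graph is taped as it runs
y.backward()         # reverse-mode automatic differentiation
print(x.grad)        # dy/dx = 2x + 2 = 8
```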