Open Access · Posted Content
Efficient Image Super-Resolution Using Pixel Attention
TL;DR
This work designs a lightweight convolutional neural network for image super-resolution with a newly proposed pixel attention scheme that achieves performance comparable to the lightweight networks SRResNet and CARN, but with only 272K parameters.
Abstract
This work aims at designing a lightweight convolutional neural network for image super-resolution (SR). With simplicity borne in mind, we construct a concise and effective network with a newly proposed pixel attention scheme. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. The difference is that PA produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. On the basis of PA, we propose two building blocks for the main branch and the reconstruction branch, respectively. The first, the SC-PA block, has the same structure as the Self-Calibrated convolution but with our PA layer. This block is much more efficient than conventional residual/dense blocks, owing to its two-branch architecture and attention scheme. The second, the UPA block, combines nearest-neighbor upsampling, convolution and PA layers. It improves the final reconstruction quality at little parameter cost. Our final model, PAN, achieves performance comparable to the lightweight networks SRResNet and CARN, but with only 272K parameters (17.92% of SRResNet and 17.09% of CARN). The effectiveness of each proposed component is also validated by an ablation study. The code is available at this https URL.
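The abstract contrasts PA's 3D attention maps with channel attention's 1D vector and spatial attention's 2D map. A minimal NumPy sketch of that shape difference follows; the attention values here come from toy statistics and a plain sigmoid, not the paper's learned 1×1 convolutions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical feature map of shape (C, H, W).
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((C, H, W))

# Channel attention: a 1D vector of length C (one weight per channel).
channel_attn = sigmoid(x.mean(axis=(1, 2)))   # shape (C,)
# Spatial attention: a 2D map of shape (H, W) (one weight per position).
spatial_attn = sigmoid(x.mean(axis=0))        # shape (H, W)
# Pixel attention: a full 3D map of shape (C, H, W) (one weight per value).
pixel_attn = sigmoid(x)                       # shape (C, H, W)

out = x * pixel_attn   # every feature value is rescaled individually
```

The 3D map lets each individual feature value be reweighted, at the cost of only the parameters of a single 1×1 convolution in the actual PA layer.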
Citations
Proceedings Article
ClassSR: A General Framework to Accelerate Super-Resolution Networks by Data Characteristic
TL;DR: This paper proposes a new solution pipeline, ClassSR, that combines classification and SR in a unified framework and can help most existing methods (e.g., FSRCNN, CARN, SRResNet, RCAN) save up to 50% FLOPs on DIV8K datasets.
Proceedings Article
NTIRE 2021 Challenge on Image Deblurring
Seungjun Nah, Sanghyun Son, Suyoung Lee, Radu Timofte, Kyoung Mu Lee, Liangyu Chen, Jie Zhang, Xin Lu, Xiaojie Chu, Chengpeng Chen, Zhiwei Xiong, Ruikang Xu, Zeyu Xiao, Jie Huang, Yueyi Zhang, Si Xi, Jia Wei, Haoran Bai, Songsheng Cheng, Hao Wei, Long Sun, Jinhui Tang, Jinshan Pan, Donghyeon Lee, Chulhee Lee, Taesung Kim, Xiaobing Wang, Dafeng Zhang, Zhihong Pan, Tianwei Lin, Wenhao Wu, Dongliang He, Baopu Li, Boyun Li, Teng Xi, Gang Zhang, Jingtuo Liu, Junyu Han, Errui Ding, Guangpin Tao, Wenqing Chu, Yun Cao, Donghao Luo, Ying Tai, Tong Lu, Chengjie Wang, Jilin Li, Feiyue Huang, Hanting Chen, Shuaijun Chen, Tianyu Guo, Yunhe Wang, Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ling Shao, Yushen Zuo, Yimin Ou, Yuanjun Chai, Lei Shi, Shuai Liu, Lei Lei, Chaoyu Feng, Kai Zeng, Yuying Yao, Xinran Liu, Zhizhou Zhang, Huacheng Huang, Yunchen Zhang, Mingchao Jiang, Wenbin Zou, Si Miao, Yangwoo Kim, Yuejin Sun, Senyou Deng, Wenqi Ren, Xiaochun Cao, Tao Wang, Maitreya Suin, A.N. Rajagopalan, Vinh Van Duong, Thuc Huu Nguyen, Jonghoon Yim, Byeungwoo Jeon, Ru Li, Junwei Xie, Jong-Wook Han, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Jiaxin Zhang, Fan Peng, David Svitov, Dmitry Pakulich, Jaeyeob Kim, Jechang Jeong +97 more
TL;DR: The NTIRE 2021 Challenge on Image Deblurring comprised two tracks, both aiming to recover a high-quality clean image from a blurry input in which different artifacts are jointly involved.
Posted Content
AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results
Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Xiaotong Luo, Liang Chen, Jiangtao Zhang, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Yanyun Qu, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan D. Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P S, Densen Puthussery, C. V. Jiji, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni +77 more
TL;DR: The goal of the AIM 2020 challenge on efficient single image super-resolution was to super-resolve an input image with a magnification factor of ×4 based on a set of prior examples of low- and corresponding high-resolution images, with a focus on the proposed solutions and results.
Posted Content
Interpreting Super-Resolution Networks with Local Attribution Maps
Jinjin Gu, Chao Dong +1 more
TL;DR: This work proposes a novel attribution approach called local attribution map (LAM), which inherits the integral gradient method yet with two unique features: one is to use the blurred image as the baseline input, and the other is to adopt the progressive blurring function as the path function.
Journal Article
NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results
Yawei Li, Kai Zhang, Radu Timofte, Luc Van Gool, Fang Kong, Ming-Xing Li, Songwei Liu, Zongcai Du, Di Liu, Chenhui Zhou, Jing Chen, Qingrui Han, Zheyuan Li, Ying-Chieh Liu, Xiangyu Chen, Haoming Cai, Yuanjiao Qiao, Chao Dong, Long Sun, Jinshan Pan, Yingchun Zhu, Zhikai Zong, Xiaoxiao Liu, Zheng Hui, Tao Yang, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Yanbo J. Wang, Xiaozhong Ji, Chuming Lin, Donghao Luo, Ying Tai, Chengjie Wang, Zhizhong Zhang, Yuan Xie, Shen Cheng, Ziwei Luo, Lei Yu, Zhi-hong Wen, Qi Wu, Youwei Li, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Yuanfei Huang, Meiguang Jin, Huan Huang, Jing Liu, Xinjian Zhang, Yan Wang, Ling Yun Long, Gen Li, Zuo-yuan Cao, Lei Sun, Panaetov Alexander, Yucong Wang, Mi Cai, Li Fa Wang, Lu Tian, Zheyuan Wang, Hong-mei Ma, Jie Liu, Chao Chen, Yiyu Cai, Jie Tang, Gang Wu, Weiran Wang, Shi-Cai Huang, Honglei Lu, Huan Liu, Keyan Wang, Shi Chen, Yu-Hsuan Miao, Zimo Huang, Li Zhang, Mustafa Ayazouglu, Wei Xiong, Chengyi Xiong, Fei Wang, Hao Li, Rui Wen, Zhihao Yang, Wen Wu Zou, Wei Zheng, Tian-Chun Ye, Yuncheng Zhang, Xiangzhen Kong, Aditya Arora, Syed Waqas Zamir, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Dan Ning, Jing Tang, Han Huang, Yufei Wang, Z Peng, Hao Li, Wenxue Guan, Shengrong Gong, Xin Li, Jun Liu, Wan Jun Wang, Deng-Guang Zhou, Kun Zeng, Han-Yuan Lin, Xinyu Chen, Jin-Tao Fang +108 more
TL;DR: The task of the NTIRE 2022 challenge was to super-resolve an input image with a magnification factor of ×4 based on pairs of low- and corresponding high-resolution images; the aim was to design a single image super-resolution network with improved efficiency, measured according to several metrics.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba +1 more
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
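The moment-based update this summary describes can be sketched in a few lines of NumPy; this is a minimal illustration using the paper's default decay rates, not a production optimizer:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: maintain exponential moving averages of the gradient
    (first moment m) and squared gradient (second moment v), bias-correct
    them, and take an adaptively scaled step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta**2, whose gradient is 2 * theta.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
```

Because the step is normalized by the second-moment estimate, early updates have magnitude close to the learning rate regardless of the gradient scale.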
Journal Article
Image quality assessment: from error visibility to structural similarity
TL;DR: A structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and is validated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
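A rough single-window sketch of the index follows; the practical SSIM averages this statistic over local sliding windows, and the constants below use the common k1 = 0.01, k2 = 0.03 defaults:

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two images with values in [0, L].
    Combines luminance (means), contrast (variances), and structure
    (covariance) into one similarity score; 1.0 means identical."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.linspace(0, 1, 64).reshape(8, 8)
score_same = ssim_global(img, img)        # identical images score 1.0
score_diff = ssim_global(img, 1 - img)    # inverted image scores lower
```

Unlike mean squared error, the covariance term makes the index sensitive to structural degradation rather than raw pixel differences.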
Journal Article
Squeeze-and-Excitation Networks
TL;DR: This work proposes a novel architectural unit, termed the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and finds that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost.
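The channel recalibration this summary describes can be sketched with plain NumPy; random matrices stand in for the learned fully connected layers, so this shows only the data flow, not a trained block:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Sketch of a Squeeze-and-Excitation block on a (C, H, W) feature map.
    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights,
    where r is the bottleneck reduction ratio."""
    s = x.mean(axis=(1, 2))        # squeeze: global average pool -> (C,)
    e = np.maximum(w1 @ s, 0.0)    # excitation: dimensionality reduction + ReLU
    a = sigmoid(w2 @ e)            # per-channel attention weights in (0, 1)
    return x * a[:, None, None]    # recalibrate each channel

C, H, W, r = 8, 4, 4, 2
rng = np.random.default_rng(1)
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Note the contrast with the pixel attention scheme above: SE produces a 1D vector of C weights, one per channel, rather than a full 3D map.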
Automatic differentiation in PyTorch
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Z. Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, Adam Lerer +9 more
TL;DR: The automatic differentiation module of PyTorch is described: a library designed to enable rapid research on machine learning models by differentiating purely imperative programs, with an emphasis on extensibility and low overhead.