Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices
Mustafa Ayazoglu
pp. 2472–2479
TL;DR
XLSR, as presented in this paper, is a hardware-limitation-aware, extremely lightweight, quantization-robust real-time super-resolution network built from the root modules introduced for image classification.
Abstract:
Single-Image Super Resolution (SISR) is a classical computer vision problem that has been studied for decades. With the recent success of deep learning, work on SISR has focused on deep-learning-based solutions and achieved state-of-the-art results. However, most state-of-the-art SISR methods contain millions of parameters and many layers, which limits their practical applications. In this paper, we propose a hardware-limitation-aware (Synaptics Dolphin NPU), extremely lightweight, quantization-robust real-time super-resolution network (XLSR). The proposed model's building block is inspired by the root modules introduced in [15] for image classification. We successfully applied root modules to the SISR problem; furthermore, to make the model robust to uint8 quantization, we used Clipped ReLU at the last layer of the network and achieved a good balance between reconstruction quality and runtime. Although the proposed network contains 30x fewer parameters than VDSR [16], its performance surpasses it on the Div2K validation set. The network proved itself by winning the Mobile AI 2021 Real-Time Single Image Super Resolution Challenge.
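The quantization trick mentioned in the abstract can be sketched in plain Python: clipping the final activations to a fixed range lets uint8 quantization use a known, fixed scale rather than one estimated from activation outliers. This is an illustrative sketch under the assumption of a [0, 1] output range, not the paper's actual implementation.

```python
def clipped_relu(x, max_value=1.0):
    # Clipped ReLU: bounds the activation to [0, max_value], so the
    # uint8 quantization range is known ahead of time.
    return max(0.0, min(x, max_value))

def quantize_uint8(x, max_value=1.0):
    # Affine uint8 quantization with a fixed scale; a fixed scale is
    # safe only because clipped ReLU guarantees x is in [0, max_value].
    scale = max_value / 255.0
    return round(x / scale)

def dequantize_uint8(q, max_value=1.0):
    # Map the integer code back to a float in [0, max_value].
    return q * (max_value / 255.0)
```

With a fixed scale, the round-trip error is bounded by one quantization step (1/255 of the range), regardless of the input distribution.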
Citations
Proceedings ArticleDOI
Real-Time Quantized Image Super-Resolution on Mobile NPUs, Mobile AI 2021 Challenge: Report
Andrey Ignatov,Radu Timofte,Maurizio Denna,Abdel Younes,Andrew Lek,Mustafa Ayazoglu,Jie Liu,Zongcai Du,Jiaming Guo,Xueyi Zhou,Hao Jia,Youliang Yan,Zexin Zhang,Yixin Chen,Yunbo Peng,Yue Lin,Xindong Zhang,Hui Zeng,Kun Zeng,Peirong Li,Zhihuang Liu,Shiqi Xue,Shengpeng Wang +22 more
TL;DR: This paper introduces the first Mobile AI challenge, whose target is to develop end-to-end deep-learning-based image super-resolution solutions that demonstrate real-time performance on mobile or edge NPUs.
Journal ArticleDOI
NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results
Yawei Li,Kai Zhang,Radu Timofte,Luc Van Gool,Fang Kong,Ming-Xing Li,Songwei Liu,Zongcai Du,Di Liu,Chenhui Zhou,Jing Chen,Qingrui Han,Zheyuan Li,Ying-Chieh Liu,Xiangyu Chen,Haoming Cai,Yuanjiao Qiao,Chao Dong,Long Sun,Jinshan Pan,Yingchun Zhu,Zhikai Zong,Xiaoxiao Liu,Zheng Hui,Tao Yang,Peiran Ren,Xuansong Xie,Xian-Sheng Hua,Yanbo J. Wang,Xiaozhong Ji,Chuming Lin,Donghao Luo,Ying Tai,Chengjie Wang,Zhizhong Zhang,Yuan Xie,Shen Cheng,Ziwei Luo,Lei Yu,Zhi-hong Wen,Qi Wu1,Youwei Li,Haoqiang Fan,Jian Sun,Shuaicheng Liu,Yuanfei Huang,Meiguang Jin,Huan Huang,Jing Liu,Xinjian Zhang,Yan Wang,Ling Yun Long,Gen Li,Zuo-yuan Cao,Lei Sun,Panaetov Alexander,Yucong Wang,Mi Cai,Li Fa Wang,Lu Tian,Zheyuan Wang,Hong-mei Ma,Jie Liu,Chao Chen,Yiyu Cai,Jie Tang,Gang Wu,Weiran Wang,Shi-Cai Huang,Honglei Lu,Huan Liu,Keyan Wang,Shi Chen,Yu-Hsuan Miao,Zimo Huang,Li Zhang,Mustafa Ayazouglu,Wei Xiong,Chengyi Xiong,Fei Wang,Hao Li,Rui Wen,Zhihao Yang,Wen Wu Zou,Wei Zheng,Tian-Chun Ye,Yuncheng Zhang,Xiangzhen Kong,Aditya Arora,Syed Waqas Zamir,Salman Khan,Munawar Hayat,Fahad Shahbaz Khan,Dan Ning,Jing Tang,Han Huang,Yufei Wang,Z Peng,Hao Li,Wenxue Guan,Shengrong Gong,Xin Li,Jun Liu,Wan Jun Wang,Deng-Guang Zhou,Kun Zeng,Han-Yuan Lin,Xinyu Chen,Jin-Tao Fang +108 more
TL;DR: The NTIRE 2022 challenge asked participants to super-resolve an input image by a magnification factor of ×4 using pairs of low- and corresponding high-resolution images; the aim was to design a single-image super-resolution network that improves efficiency as measured by several metrics.
Proceedings ArticleDOI
NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results
TL;DR: This article presents the NTIRE 2022 challenge on efficient single-image super-resolution, with a focus on the proposed solutions and results; the aim was to design a single-image SR network that improves efficiency, measured by runtime, parameters, FLOPs, activations, and memory consumption, while at least maintaining a PSNR of 29.00 dB on the DIV2K validation set.
Journal ArticleDOI
Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 challenge: Report
Andrey Ignatov,Radu Timofte,Ma. Cristine Faye J. Denna,Abdelbadie Younes,G. Gankhuyag,Jin Huh,Myeong Kyun Kim,Kihwan Yoon,Hyeongjun Moon,Seungho Lee,Yoonsik Choe,Jinwoo Jeong,Sungjei Kim,M Smyl,Tomasz Latkowski,Pawel Kubik,Michał Sokolski,Yu Ma,Jiahao Chao,Zhou Zhou,Hong-Xin Gao,Zhen Yang,Zhenbing Zeng,Zhen-bing Zhuge,Chenghua Li,Dan Zhu,Mengdi Sun,Ran Duan,Yanping Gao,Lingshun Kong,Long Sun,Xing Jian Zhang,Jiawei Zhang,Yaqi Wu,Jinshan Pan,Gao-Xiang Yu,Jin Zhang,Feng Zhang,Zhe Ma,Hongbin Wang,Hojin Cho,Steve Kim,Hua Li,Yan Ma,Ziwei Luo,Youwei Li,Lei Yu,Zhihong Wen,Qi Wu,Haoqiang Fan,Shuaicheng Liu,Lize Zhang,Zhikai Zong,J. Kwon,Junxi Zhang,Meng-Ying Li,N Fu,Guanchen Ding,Han Zhu,Zhen Chen,Gen Li,Li Sun,Dafeng Zhang,Neo Karl Yang,Jerry X. Zhao,Mustafa Ayazoglu,Bahri Batuhan Bilecen,Shota Hirose,Kasidis Arunruangsirilert,Luo Ao,Ho Chun Leung,Andrew Wei,Jie Liu,Qiang Li,Dahai Yu,Ao Li,Lei Luo,Ce Zhu,Seongmin Hong,Dong-Chun Park,Joonhee Lee,Byeong-Hyun Lee,Seunggyu Lee,Sengsub Chun,Ruiyuan He,Xuhao Jiang,Haihang Ruan,Xinjian Zhang,Jing Liu,Garas Gendy,Nabil Sabor,Jin-Long Hou,Guanghui He +92 more
TL;DR: This article proposes efficient quantized image super-resolution solutions that demonstrate real-time performance on mobile NPUs; the solutions are fully compatible with the target NPU, reaching up to 60 FPS when reconstructing Full HD resolution images.
Proceedings ArticleDOI
CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
TL;DR: CADyQ proposes a trainable bit-selector module that determines the proper bit-width and quantization level for each layer and a given local image patch, governed by a quantization sensitivity estimated from both the average magnitude of the image gradient of the patch and the standard deviation of the layer's input feature.
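As a rough illustration of the CADyQ idea (not the paper's actual formulation), a sensitivity signal can be computed from a patch's average gradient magnitude and a feature's standard deviation, then thresholded into a bit-width. The weighting `alpha` and the thresholds below are hypothetical placeholders.

```python
import statistics

def patch_gradient_magnitude(patch):
    # Average absolute horizontal/vertical difference: a simple
    # proxy for the image-gradient magnitude of a 2-D patch.
    h, w = len(patch), len(patch[0])
    total, n = 0.0, 0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                total += abs(patch[i][j + 1] - patch[i][j]); n += 1
            if i + 1 < h:
                total += abs(patch[i + 1][j] - patch[i][j]); n += 1
    return total / n

def quantization_sensitivity(patch, feature, alpha=0.5):
    # Hypothetical combination of the two signals named in the TL;DR:
    # patch gradient magnitude and the std of the layer's input feature.
    return (alpha * patch_gradient_magnitude(patch)
            + (1 - alpha) * statistics.pstdev(feature))

def select_bit_width(sensitivity, thresholds=(0.1, 0.3)):
    # Map sensitivity to a bit-width from a small candidate set:
    # low-sensitivity (flat) content tolerates coarser quantization.
    if sensitivity < thresholds[0]:
        return 4
    if sensitivity < thresholds[1]:
        return 6
    return 8
```

A flat patch with a near-constant feature yields a low sensitivity and a small bit-width, while a textured patch with a high-variance feature is assigned more bits.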
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A large, deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Posted Content
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew Howard,Menglong Zhu,Bo Chen,Dmitry Kalenichenko,Weijun Wang,Tobias Weyand,M. Andreetto,Hartwig Adam +7 more
TL;DR: This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy, and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
Posted Content
Distilling the Knowledge in a Neural Network
TL;DR: This work shows that distilling the knowledge in an ensemble of models into a single model can significantly improve the acoustic model of a heavily used commercial system, and introduces a new type of ensemble composed of one or more full models and many specialist models that learn to distinguish fine-grained classes that the full models confuse.
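The soft-target idea behind distillation can be sketched as follows: the student is trained to match the teacher's temperature-softened output distribution via cross-entropy. The temperature value and this minimal loss form are illustrative, not the paper's exact training recipe.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature produces a
    # softer distribution that exposes inter-class similarities.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and
    # the student's softened distribution (the "soft targets").
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

By Gibbs' inequality, the loss is minimized (equal to the teacher's entropy) exactly when the student's softened distribution matches the teacher's.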
Proceedings ArticleDOI
Aggregated Residual Transformations for Deep Neural Networks
TL;DR: ResNeXt as discussed by the authors is a simple, highly modularized network architecture for image classification, which is constructed by repeating a building block that aggregates a set of transformations with the same topology.
Proceedings ArticleDOI
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Christian Ledig,Lucas Theis,Ferenc Huszar,Jose Caballero,Andrew Cunningham,Alejandro Acosta,Andrew Peter Aitken,Alykhan Tejani,Johannes Totz,Zehan Wang,Wenzhe Shi +10 more
TL;DR: SRGAN as mentioned in this paper proposes a perceptual loss function which consists of an adversarial loss and a content loss, which pushes the solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images.