AIM 2020 Challenge on Learned Image Signal Processing Pipeline
Andrey Ignatov, Radu Timofte, Zhilu Zhang, Ming Liu, Haolin Wang, Wangmeng Zuo, Jiawei Zhang, Ruimao Zhang, Zhanglin Peng, Sijie Ren, Linhui Dai, Xiaohong Liu, Chengqi Li, Jun Chen, Yuichi Ito, Bhavya Vasudeva, Puneesh Deora, Umapada Pal, Zhenyu Guo, Yu Zhu, Tian Liang, Chenghua Li, Cong Leng, Zhihong Pan, Baopu Li, Byung-Hoon Kim, Joonyoung Song, Jong Chul Ye, JaeHyun Baek, Magauiya Zhussip, Yeskendir Koishekenov, Hwechul Cho Ye, Xin Liu, Xueying Hu, Jun Jiang, Jinwei Gu, Kai Li, Pengliang Tan, Bingxin Hou +38 more
pp. 152–170
TLDR
The second AIM learned ISP challenge focused on the real-world RAW-to-RGB mapping problem, where the goal was to map the original low-quality RAW images captured by the Huawei P20 device to the same photos obtained with the Canon 5D DSLR camera.
Abstract
This paper reviews the second AIM learned ISP challenge and provides a description of the proposed solutions and results. The participating teams were solving a real-world RAW-to-RGB mapping problem, where the goal was to map the original low-quality RAW images captured by the Huawei P20 device to the same photos obtained with the Canon 5D DSLR camera. The considered task embraced a number of complex computer vision subtasks, such as image demosaicing, denoising, white balancing, color and contrast correction, demoireing, etc. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual quality as measured in a user study. The proposed solutions significantly improved the baseline results, defining the state of the art for practical image signal processing pipeline modeling.
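As a concrete illustration of the fidelity half of this metric, here is a minimal sketch of computing PSNR and SSIM with scikit-image, assuming 8-bit RGB outputs already aligned with the DSLR targets; the challenge's actual evaluation script and its weighting against the user-study scores are not reproduced here.

```python
# Minimal sketch of the fidelity terms (PSNR, SSIM) of the challenge metric.
# Assumes aligned HxWx3 uint8 RGB arrays; how these scores were weighted
# against the user study is not specified in this summary.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fidelity_scores(output: np.ndarray, target: np.ndarray):
    psnr = peak_signal_noise_ratio(target, output, data_range=255)
    # channel_axis=-1 treats the last axis as color channels
    ssim = structural_similarity(target, output, data_range=255, channel_axis=-1)
    return psnr, ssim
```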
Citations
Posted Content
Efficient Image Super-Resolution Using Pixel Attention
TL;DR: This work designs a lightweight convolutional neural network for image super-resolution with a newly proposed pixel attention scheme; it achieves performance similar to the lightweight networks SRResNet and CARN with only 272K parameters.
Book Chapter
Efficient Image Super-Resolution Using Pixel Attention
TL;DR: Zhao et al. designed a lightweight convolutional neural network with a pixel attention scheme that produces 3D attention maps instead of a 1D attention vector or a 2D map (see the sketch below).
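To make the 1D/2D/3D distinction concrete, here is a minimal PyTorch sketch of a pixel attention unit in the spirit described above; the class name and channel count are illustrative, not the paper's exact code.

```python
# Minimal sketch of pixel attention: a 1x1 convolution plus sigmoid yields
# a full C x H x W (3D) attention map that rescales every feature value
# individually, unlike channel attention (1D) or spatial attention (2D).
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.conv(x))  # (N, C, H, W): one weight per pixel per channel
        return x * attn

# usage on a feature map of 64 channels
pa = PixelAttention(64)
y = pa(torch.randn(1, 64, 32, 32))
```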
Proceedings Article
Real-Time Quantized Image Super-Resolution on Mobile NPUs, Mobile AI 2021 Challenge: Report
Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Andrew Lek, Mustafa Ayazoglu, Jie Liu, Zongcai Du, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan, Zexin Zhang, Yixin Chen, Yunbo Peng, Yue Lin, Xindong Zhang, Hui Zeng, Kun Zeng, Peirong Li, Zhihuang Liu, Shiqi Xue, Shengpeng Wang +22 more
TL;DR: In this paper, the authors introduced the first Mobile AI challenge, where the target is to develop end-to-end deep learning-based image super-resolution solutions that demonstrate real-time performance on mobile or edge NPUs.
Proceedings Article
Real-Time Video Super-Resolution on Smartphones with Deep Learning, Mobile AI 2021 Challenge: Report
Andrey Ignatov, Andrés Romero, Heewon Kim, Radu Timofte, Chiu Man Ho, Zibo Meng, Kyoung Mu Lee, Yuxiang Chen, Yutong Wang, Zeyu Long, Chenhao Wang, Yifei Chen, Boshen Xu, Shuhang Gu, Lixin Duan, Wen Li, Wang Bofei, Zhang Diankai, Zheng Chengjian, Liu Shaoli, Gao Si, Zhang Xiaofeng, Lu Kaidi, Xu Tianyu, Zheng Hui, Xinbo Gao, Xiumei Wang, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan +30 more
TL;DR: In this paper, the first Mobile AI challenge is introduced, where the target is to develop end-to-end deep learning-based video super-resolution solutions that achieve real-time performance on mobile GPUs.
Proceedings Article
Learned Smartphone ISP on Mobile NPUs with Deep Learning, Mobile AI 2021 Challenge: Report
Andrey Ignatov, Cheng-Ming Chiang, Hsien-Kai Kuo, Anastasia Sycheva, Radu Timofte, Min-Hung Chen, Man-Yu Lee, Yu-Syuan Xu, Yu Tseng, Shusong Xu, Jin Guo, Chao-Hung Chen, Ming-Chun Hsyu, Wen-Chia Tsai, Chao-Wei Chen, Grigory Malivenko, Minsu Kwon, Myungje Lee, Jaeyoon Yoo, Changbeom Kang, Shinjo Wang, Zheng Shaolong, Hao Dejun, Xie Fen, Feng Zhuang, Yipeng Ma, Jingyang Peng, Tao Wang, Fenglong Song, Chih-Chung Hsu, Kwan-Lin Chen, Mei-Hsuang Wu, Vishal Chudasama, Kalpesh Prajapati, Heena Patel, Anjali Sarvaiya, Kishor P. Upla, Kiran B. Raja, Raghavendra Ramachandra, Christoph Busch, Etienne de Stoutz +40 more
TL;DR: In this article, the authors developed an end-to-end deep learning-based image signal processing (ISP) pipeline that can replace classical hand-crafted ISPs and achieve nearly real-time performance on smartphone NPUs.
References
Proceedings Article
Least Squares Generative Adversarial Networks
TL;DR: The Least Squares Generative Adversarial Network (LSGAN) adopts a least-squares loss function for the discriminator to address the vanishing-gradient problem in regular GANs.
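To make this concrete, here is a minimal PyTorch sketch of the LSGAN objectives with the common 0/1 label coding (real = 1, fake = 0, generator target = 1); function and variable names are illustrative, and the discriminator is assumed to output raw scores rather than sigmoid probabilities.

```python
# Minimal sketch of the least-squares GAN losses. Squared error keeps
# gradients alive even for samples the discriminator classifies with
# high confidence, unlike the saturating cross-entropy loss.
import torch

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Push real scores toward 1 and fake scores toward 0.
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # The generator pulls the discriminator's fake scores toward 1.
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```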
Book Chapter
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
TL;DR: The very deep residual channel attention network (RCAN) proposes a residual-in-residual (RIR) structure to form a very deep network: several residual groups with long skip connections, where each residual group contains residual blocks with short skip connections.
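For reference, below is a minimal PyTorch sketch of the channel attention unit used inside RCAN-style residual blocks: global average pooling squeezes each channel to a scalar, and a small bottleneck produces a 1D vector of per-channel weights. The reduction factor and layer names are illustrative, not the paper's exact code.

```python
# Minimal sketch of channel attention: per-channel statistics -> bottleneck
# -> sigmoid gate -> rescaled feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # (N, C, 1, 1) channel statistics
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # rescale each channel by its weight
```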
Proceedings Article
GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond
TL;DR: The authors create a simplified network based on a query-independent formulation that maintains the accuracy of NLNet with significantly less computation. This simplified design shares a similar structure with the Squeeze-and-Excitation Network (SENet), and the resulting block generally outperforms both the simplified NLNet and SENet on major benchmarks for various recognition tasks.
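Here is a minimal PyTorch sketch of a query-independent global context block in the spirit of GCNet: a single softmax-weighted pooling step produces one context vector shared by all positions, which is transformed and broadcast-added back. Dimensions and the reduction factor are illustrative.

```python
# Minimal sketch of a global context (GC) block: query-independent
# attention pooling + bottleneck transform + residual fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # position-wise attention logits
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # One attention map shared by all queries: softmax over H*W positions.
        weights = F.softmax(self.attn(x).view(n, 1, h * w), dim=-1)          # (N, 1, HW)
        context = torch.bmm(x.view(n, c, h * w), weights.transpose(1, 2))    # (N, C, 1)
        context = context.view(n, c, 1, 1)
        return x + self.transform(context)  # broadcast-add the transformed context
```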
Proceedings Article
Scale-Recurrent Network for Deep Image Deblurring
TL;DR: A scale-recurrent network (SRN-DeblurNet) is proposed and shown to produce better-quality results than the state of the art, both quantitatively and qualitatively, on single-image deblurring.
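Below is a minimal sketch of the coarse-to-fine idea behind this approach: the same network is applied at increasing resolutions, with each scale initialized from the upsampled estimate of the previous one. The `net` placeholder and the scale schedule are assumptions; the actual model also shares weights across scales and carries recurrent hidden state between them, which is omitted here.

```python
# Minimal sketch of a coarse-to-fine, scale-recurrent deblurring loop.
# `net` is a placeholder encoder-decoder taking the current-scale input
# concatenated with the upsampled previous estimate (2x the channels).
import torch
import torch.nn.functional as F

def multiscale_deblur(net, blurry: torch.Tensor, scales=(0.25, 0.5, 1.0)):
    pred = None
    for s in scales:
        inp = F.interpolate(blurry, scale_factor=s, mode="bilinear", align_corners=False)
        if pred is None:
            pred_up = inp  # the coarsest scale starts from the blurry input itself
        else:
            pred_up = F.interpolate(pred, size=inp.shape[-2:], mode="bilinear", align_corners=False)
        pred = net(torch.cat([inp, pred_up], dim=1))  # refine at this scale
    return pred
```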
Proceedings Article
Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates
Leslie N. Smith, Nicholay Topin +1 more
TL;DR: Super-convergence is a phenomenon where residual networks can be trained using an order of magnitude fewer iterations than standard training methods require, and it is relevant to understanding why deep networks generalize well.
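Super-convergence is commonly obtained with a one-cycle learning-rate policy, where the learning rate ramps up to a large maximum and back down over a single cycle; PyTorch ships this as OneCycleLR. Below is a minimal sketch; the model, max_lr, and step counts are illustrative, not values from the paper.

```python
# Minimal sketch of the 1cycle learning-rate policy behind super-convergence.
import torch

model = torch.nn.Linear(10, 1)  # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1.0, total_steps=1000)

for step in range(1000):
    opt.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy objective
    loss.backward()
    opt.step()
    sched.step()  # advance the one-cycle schedule each iteration
```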