Open Access · Journal Article · DOI

You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network

TLDR
The authors propose a self-supervised image dehazing method called You Only Look Yourself (YOLY), which employs three joint subnetworks to separate the observed hazy image into several latent layers, i.e., a scene radiance layer, a transmission map layer, and an atmospheric light layer.
Abstract
In this paper, we study two challenging and less-touched problems in single image dehazing, namely, how to make deep learning achieve image dehazing without training on ground-truth clean images (unsupervised) and without training on an image collection (untrained). An unsupervised model avoids the intensive labor of collecting hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach that removes haze based on the observed hazy image alone, with no extra images used. Motivated by layer disentanglement, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing. In brief, YOLY employs three joint subnetworks to separate the observed hazy image into several latent layers, i.e., a scene radiance layer, a transmission map layer, and an atmospheric light layer. After that, the three layers are recomposed into the hazy image in a self-supervised manner. Thanks to its unsupervised and untrained characteristics, YOLY bypasses the conventional paradigm of training deep models on hazy-clean pairs or a large-scale dataset, thus avoiding labor-intensive data collection and the domain-shift issue. In addition, our method provides an effective learning-based haze-transfer solution thanks to its layer disentanglement mechanism. Extensive experiments show the promising performance of our method in image dehazing compared with 14 methods on six databases. The code can be accessed at www.pengxi.me.
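To make the decomposition concrete, below is a minimal PyTorch sketch of the self-supervised recomposition described above, assuming the standard atmospheric scattering model I = J·t + A·(1−t). The tiny subnetworks, loss, and hyperparameters are illustrative stand-ins, not the paper's actual design.

```python
# Minimal sketch of YOLY-style layer disentanglement. Assumption: the
# standard atmospheric scattering model I = J*t + A*(1 - t); the tiny
# conv stacks below are illustrative, not the paper's architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

class YOLYSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Three joint subnetworks, one per latent layer.
        self.j_net = nn.Sequential(conv_block(3, 16),
                                   nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())  # scene radiance J
        self.t_net = nn.Sequential(conv_block(3, 16),
                                   nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())  # transmission map t
        self.a_net = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(3, 3, 1), nn.Sigmoid())              # atmospheric light A

    def forward(self, hazy):
        j, t, a = self.j_net(hazy), self.t_net(hazy), self.a_net(hazy)
        recon = j * t + a * (1 - t)  # recompose the hazy image from the layers
        return j, recon

# Untrained and unsupervised: optimize on the single observed hazy image only.
hazy = torch.rand(1, 3, 256, 256)  # stand-in for a real hazy image
model = YOLYSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    j, recon = model(hazy)
    loss = F.mse_loss(recon, hazy)  # self-supervised reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
# j is the estimated haze-free image (scene radiance layer).
```

Note that plain reconstruction alone is underdetermined; the paper regularizes each layer with additional losses, so this sketch only illustrates the composition mechanics.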


Citations
Journal Article · DOI

Zero-Shot Image Dehazing

TL;DR: A novel method is proposed based on the idea of layer disentanglement, viewing a hazy image as the entanglement of several "simpler" layers, i.e., a haze-free image layer, a transmission map layer, and an atmospheric light layer.
Proceedings Article

CLEARER: Multi-Scale Neural Architecture Search for Image Restoration

TL;DR: A differentiable strategy can be employed to search when to fuse or extract multi-resolution features, while alleviating the discretization issue faced by gradient-based NAS.
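As a rough illustration of the differentiable search idea, the DARTS-style relaxation below blends candidate operations with softmax weights over learnable architecture parameters, so the discrete "which operation" choice becomes gradient-trainable. This is a generic sketch; CLEARER's actual multi-scale search space differs.

```python
# Generic differentiable-NAS cell (DARTS-style): candidate operations are
# blended with softmax weights over architecture parameters alpha, making
# the discrete choice continuous and trainable by gradient descent.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),              # candidate: 3x3 conv
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2),  # candidate: dilated 3x3 conv
            nn.Identity(),                                # candidate: skip connection
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        # Weighted sum of all candidates; discretize after search (e.g., argmax).
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

x = torch.rand(1, 8, 32, 32)
print(MixedOp(8)(x).shape)  # torch.Size([1, 8, 32, 32])
```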
Journal Article · DOI

Face Editing Based on Facial Recognition Features

TL;DR: IricGAN as mentioned in this paper proposes a hierarchical feature combination (HFC) function to construct a sample's source space through multiscale feature mixing, which can guarantee the integrity of the source space while significantly compressing the network.
Journal Article · DOI

Learning lightweight super-resolution networks with weight pruning

TL;DR: Zhang et al. as discussed by the authors proposed an information multi-slicing network that extracts and integrates multi-scale features at a granular level to obtain a more lightweight and accurate SR network.
Journal Article · DOI

RELAXNet: Residual efficient learning and attention expected fusion network for real-time semantic segmentation

TL;DR: Wang et al. as discussed by the authors proposed a lightweight semantic segmentation method based on an attention mechanism for dense prediction, adopting a well-designed combination of depthwise convolution, dilated convolution, and factorized convolution with channel shuffle to boost information interaction.
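For illustration, the sketch below wires together the ingredients the summary names, i.e., depthwise, dilated, and factorized convolutions plus a channel shuffle, in a generic residual block; it is not RELAXNet's exact module.

```python
# Generic efficient block combining the ingredients the summary names
# (depthwise, dilated, and factorized convolutions plus channel shuffle);
# an illustrative sketch, not RELAXNet's exact module.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # Interleave channel groups so information crosses group boundaries.
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class EfficientBlock(nn.Module):
    def __init__(self, ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)  # depthwise 3x3
        self.dilated = nn.Conv2d(ch, ch, 3, padding=dilation,
                                 dilation=dilation, groups=ch)       # dilated depthwise 3x3
        self.factorized = nn.Sequential(                             # 3x3 factorized into 3x1 and 1x3
            nn.Conv2d(ch, ch, (3, 1), padding=(1, 0)),
            nn.Conv2d(ch, ch, (1, 3), padding=(0, 1)),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.depthwise(x))
        y = self.relu(self.dilated(y))
        y = self.factorized(y)
        y = channel_shuffle(y, groups=4)
        return self.relu(x + y)  # residual connection

x = torch.rand(1, 16, 64, 64)
print(EfficientBlock(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```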
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
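The update itself is compact enough to restate; below is a plain NumPy sketch of one Adam step with bias-corrected first and second moment estimates, using the paper's default hyperparameters.

```python
# One Adam step with bias-corrected moment estimates (NumPy sketch).
# Defaults follow the paper: lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction; t starts at 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```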
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
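The normalization itself is a two-line transform; a minimal NumPy sketch of the training-time forward pass, normalizing each feature over the mini-batch and applying a learnable scale and shift:

```python
# Training-time batch normalization (NumPy sketch): normalize each feature
# over the mini-batch, then apply a learnable scale (gamma) and shift (beta).
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta
```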
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
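The key device that makes the variational bound differentiable is the reparameterization of the latent sample; a minimal PyTorch sketch (variable names are illustrative):

```python
# Reparameterization trick: write the latent sample as z = mu + sigma * eps
# with eps ~ N(0, I), so gradients flow through mu and log_var.
import torch

def reparameterize(mu, log_var):
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL between the diagonal Gaussian q(z|x) and N(0, I).
    return -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
```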
Proceedings Article · DOI

Visibility in bad weather from a single image

TL;DR: A cost function in the framework of Markov random fields is developed, which can be efficiently optimized by techniques such as graph-cuts or belief propagation, and is applicable to both color and gray images.