Proceedings ArticleDOI

Non-local Image Dehazing

TLDR
This work proposes an algorithm for the single-image dehazing problem that is linear in the size of the image, deterministic, and requires no training; it performs well on a wide variety of images and is competitive with other state-of-the-art methods.
Abstract
Haze limits visibility and reduces image contrast in outdoor images. The degradation is different for every pixel and depends on the distance of the scene point from the camera. This dependency is expressed in the transmission coefficients, which control the scene attenuation and the amount of haze in every pixel. Previous methods solve the single image dehazing problem using various patch-based priors. We, on the other hand, propose an algorithm based on a new, non-local prior. The algorithm relies on the assumption that colors of a haze-free image are well approximated by a few hundred distinct colors that form tight clusters in RGB space. Our key observation is that pixels in a given cluster are often non-local, i.e., they are spread over the entire image plane and are located at different distances from the camera. In the presence of haze these varying distances translate to different transmission coefficients. Therefore, each color cluster in the clear image becomes a line in RGB space, which we term a haze-line. Using these haze-lines, our algorithm recovers both the distance map and the haze-free image. The algorithm is linear in the size of the image, deterministic, and requires no training. It performs well on a wide variety of images and is competitive with other state-of-the-art methods.
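To make the haze-line idea concrete, the sketch below is a minimal, unoptimized rendering of the per-pixel estimate it implies: translate the image so the atmospheric light A sits at the origin, group pixels that share a direction (a haze-line), and take t(x) = r(x) / r_max within each line. It assumes A is already known, replaces the paper's uniform sphere tessellation with a coarse angular quantization, and omits the regularization step, so it is an illustration rather than the authors' implementation.

```python
import numpy as np

def dehaze_haze_lines(img, A, n_bins=24, t_min=0.1):
    """Minimal sketch of the haze-lines estimate (illustration only).

    img : H x W x 3 float array in [0, 1], the hazy image.
    A   : length-3 array, the atmospheric light -- assumed known here;
          the paper estimates it separately.
    """
    eps = 1e-6
    I_A = img.reshape(-1, 3) - A                 # translate so A is the origin
    r = np.linalg.norm(I_A, axis=1)              # per-pixel distance from A
    dirs = I_A / (r[:, None] + eps)              # unit direction = haze-line identity

    # Coarse haze-line clustering by quantizing the direction angles.
    # (The paper instead matches directions to a uniform tessellation of the sphere.)
    theta = np.arccos(np.clip(dirs[:, 2], -1.0, 1.0))
    phi = np.arctan2(dirs[:, 1], dirs[:, 0])
    bins = (np.floor(theta / np.pi * n_bins) * 2 * n_bins
            + np.floor((phi + np.pi) / (2 * np.pi) * 2 * n_bins)).astype(int)

    # Within each haze-line, the largest radius is assumed to come from a
    # haze-free pixel, giving the per-pixel estimate t(x) = r(x) / r_max.
    r_max = np.zeros(bins.max() + 1)
    np.maximum.at(r_max, bins, r)
    t = np.clip(r / (r_max[bins] + eps), t_min, 1.0)

    # Invert the haze model I = t * J + (1 - t) * A (regularization omitted).
    J = I_A / t[:, None] + A
    return np.clip(J, 0.0, 1.0).reshape(img.shape), t.reshape(img.shape[:2])
```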


Citations
Posted Content

DR-Net: Transmission Steered Single Image Dehazing Network with Weakly Supervised Refinement.

TL;DR: A new deep network architecture for single image dehazing, DR-Net, which consists of a transmission prediction network that predicts the transmission map for the input image, a haze removal network that reconstructs the latent image steered by the transmission map, and a refinement network that enhances the details and color properties of the dehazed result via weakly supervised learning.
Journal ArticleDOI

Haze Relevant Feature Attention Network for Single Image Dehazing

TL;DR: In this paper, an end-to-end haze-relevant feature attention network is proposed for single image dehazing; it does not require paired training images and embeds an attention module into a novel dehazing generator that combines an encoder-decoder structure with dense blocks.
Journal ArticleDOI

Aerial image dehazing using a deep convolutional autoencoder

TL;DR: A new end-to-end aerial image dehazing method using a deep convolutional autoencoder is proposed; it outputs a dehazed image without requiring any other information such as a transmission map or an atmospheric light value.
Proceedings ArticleDOI

ABC-NET: Avoiding Blocking Effect & Color Shift Network for Single Image Dehazing Via Restraining Transmission Bias

TL;DR: A new loss function (TransLoss) and a new natural activation function (NAF) are proposed to restrain the negative bias of the transmission and to prevent tiny transmission values from being activated, respectively, within an end-to-end CNN dehazing network that avoids color shift and blocking effects, termed ABC-Net.
Posted Content

A GAN-Based Input-Size Flexibility Model for Single Image Dehazing.

Abstract: Image-to-image translation based on generative adversarial networks (GANs) has achieved state-of-the-art performance in various image restoration applications. Single image dehazing is a typical example, which aims to recover the haze-free image from a hazy one. This paper concentrates on the challenging task of single image dehazing. Based on the atmospheric scattering model, we design a novel model that directly generates the haze-free image. The main challenge of image dehazing is that the atmospheric scattering model has two parameters, i.e., the transmission map and the atmospheric light. When they are estimated separately, the errors accumulate and compromise dehazing quality. Considering this, as well as the variety of image sizes, we propose a novel input-size-flexible conditional generative adversarial network (cGAN) for single image dehazing, which is input-size flexible at both training and test stages for image-to-image translation within the cGAN framework. We propose a simple and effective U-type residual network (UR-Net) to build the generator and adopt spatial pyramid pooling (SPP) to design the discriminator. Moreover, the model is trained with a multi-loss function, in which the consistency loss is a newly designed loss in this paper. We finally build a multi-scale cGAN fusion model to realize state-of-the-art single image dehazing performance. The proposed models receive a hazy image as input and directly output a haze-free one. Experimental results demonstrate the effectiveness and efficiency of the proposed models.
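As a rough illustration of the input-size flexibility this abstract attributes to spatial pyramid pooling (and not of the paper's actual UR-Net or cGAN architecture), the following sketch shows how an SPP layer maps feature maps of arbitrary spatial size to a fixed-length vector, which is what lets a discriminator accept images of varying sizes:

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Pool a B x C x H x W feature map into a fixed-length vector per sample,
    regardless of H and W, by adaptive max-pooling on several grid sizes."""
    pooled = [F.adaptive_max_pool2d(feat, output_size=k).flatten(1)  # B x (C*k*k)
              for k in levels]
    return torch.cat(pooled, dim=1)   # B x (C * sum of k*k over levels)

# Feature maps of different spatial sizes yield vectors of the same length,
# so a discriminator head built on top of this can accept arbitrary input sizes.
v1 = spatial_pyramid_pool(torch.randn(2, 64, 30, 45))
v2 = spatial_pyramid_pool(torch.randn(2, 64, 17, 21))
assert v1.shape == v2.shape == (2, 64 * (1 + 4 + 16))
```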
References
Proceedings ArticleDOI

Visibility in bad weather from a single image

TL;DR: A cost function in the framework of Markov random fields is developed, which can be efficiently optimized by various techniques, such as graph cuts or belief propagation, and is applicable to both color and gray images.
Journal ArticleDOI

Single image dehazing

TL;DR: Results demonstrate the new method's ability to remove the haze layer as well as to provide a reliable transmission estimate, which can be used for additional applications such as image refocusing and novel view synthesis.
Proceedings ArticleDOI

Fast visibility restoration from a single color or gray level image

TL;DR: A novel algorithm and its variants for visibility restoration from a single image, which allow visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking, and obstacle detection from an in-vehicle camera.
Proceedings ArticleDOI

Single image haze removal using dark channel prior

TL;DR: A simple but effective image prior, the dark channel prior, is proposed to remove haze from a single input image; it is based on a key observation: most local patches in haze-free outdoor images contain some pixels that have very low intensities in at least one color channel.
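For contrast with the non-local prior above, the dark channel prior reduces to a few lines. The sketch below is a simplified illustration only (per-pixel RGB minimum, patch-wise minimum filter, and the coarse transmission formula t = 1 - omega * dark(I / A)); the atmospheric-light estimation and the soft-matting or guided-filter refinement of the original method are omitted.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an H x W x 3 image in [0, 1]: the per-pixel minimum
    over RGB, followed by a patch-wise minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission_dcp(img, A, omega=0.95):
    """Coarse transmission estimate t = 1 - omega * dark_channel(I / A),
    where A is the (assumed known) atmospheric light."""
    return 1.0 - omega * dark_channel(img / A)
```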
Journal ArticleDOI

Dehazing Using Color-Lines

TL;DR: A new method for single-image dehazing is described that relies on a generic regularity in natural images: pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines.
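The color-lines regularity itself is easy to illustrate: fit the pixels of a small patch with a 1D line in RGB via PCA and check how much variance the first direction explains. The sketch below shows only this fit; the dehazing method additionally intersects such lines with the airlight direction to recover transmission, which is not reproduced here.

```python
import numpy as np

def fit_color_line(patch):
    """Fit a 1D line in RGB to the pixels of a small patch via PCA.

    patch : array reshaping to N x 3 (e.g. a 7 x 7 x 3 window).
    Returns the line's mean point, its direction, and the fraction of variance
    explained by that direction (close to 1 when the patch forms a color-line).
    """
    pts = patch.reshape(-1, 3).astype(float)
    mean = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - mean, full_matrices=False)
    explained = s[0] ** 2 / max((s ** 2).sum(), 1e-12)
    return mean, vt[0], explained
```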