Proceedings ArticleDOI

Non-local Image Dehazing

27 Jun 2016-pp 1674-1682
TL;DR: This work proposes an algorithm that is linear in the size of the image, deterministic, and requires no training; it performs well on a wide variety of images and is competitive with other state-of-the-art methods on the single-image dehazing problem.
Abstract: Haze limits visibility and reduces image contrast in outdoor images. The degradation is different for every pixel and depends on the distance of the scene point from the camera. This dependency is expressed in the transmission coefficients, that control the scene attenuation and amount of haze in every pixel. Previous methods solve the single image dehazing problem using various patch-based priors. We, on the other hand, propose an algorithm based on a new, non-local prior. The algorithm relies on the assumption that colors of a haze-free image are well approximated by a few hundred distinct colors, that form tight clusters in RGB space. Our key observation is that pixels in a given cluster are often non-local, i.e., they are spread over the entire image plane and are located at different distances from the camera. In the presence of haze these varying distances translate to different transmission coefficients. Therefore, each color cluster in the clear image becomes a line in RGB space, that we term a haze-line. Using these haze-lines, our algorithm recovers both the distance map and the haze-free image. The algorithm is linear in the size of the image, deterministic and requires no training. It performs well on a wide variety of images and is competitive with other state-of-the-art methods.
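
Below is a simplified Python sketch of the haze-lines idea described in the abstract, assuming the atmospheric light A is already known and the image is an RGB float array in [0, 1]. The function and parameter names (dehaze_haze_lines, n_bins, t_min) are illustrative; the paper clusters pixels by a uniform tessellation of the unit sphere (via a KD-tree) and adds a regularization step, both of which are approximated by coarse angular binning or omitted here.

```python
import numpy as np

def dehaze_haze_lines(I, A, n_bins=60, t_min=0.1):
    """I: H x W x 3 RGB float image in [0, 1]; A: atmospheric light (3,)."""
    H, W, _ = I.shape
    IA = (I - A).reshape(-1, 3)                 # translate so A sits at the origin
    r = np.linalg.norm(IA, axis=1)              # radius in spherical coordinates
    theta = np.arctan2(IA[:, 1], IA[:, 0])      # azimuth angle
    phi = np.arccos(np.clip(IA[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))  # inclination
    # Pixels pointing in a similar direction from A belong to the same haze-line.
    bins = (np.digitize(theta, np.linspace(-np.pi, np.pi, n_bins)) * (n_bins + 1)
            + np.digitize(phi, np.linspace(0.0, np.pi, n_bins)))
    t = np.ones_like(r)
    for b in np.unique(bins):
        idx = bins == b
        r_max = r[idx].max()                    # assumed to come from a haze-free pixel
        if r_max > 0:
            t[idx] = r[idx] / r_max             # since I - A = t * (J - A), r = t * r_max
    t = np.clip(t, t_min, 1.0).reshape(H, W, 1)
    return np.clip((I - A) / t + A, 0.0, 1.0)   # invert the haze model per pixel
```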


Citations
Proceedings ArticleDOI
01 Oct 2017
TL;DR: An image dehazing model built with a convolutional neural network (CNN) based on a re-formulated atmospheric scattering model, called All-in-One Dehazing Network (AOD-Net), which demonstrates superior performance over the state-of-the-art in terms of PSNR, SSIM, and subjective visual quality.
Abstract: This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.
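
As a rough illustration of the re-formulated scattering model, the sketch below folds the transmission and atmospheric light into a single map K(x) so that J(x) = K(x)·I(x) − K(x) + b. The K-estimator here is a two-layer placeholder, not the actual light-weight AOD-Net architecture, and all class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class KEstimator(nn.Module):
    """Placeholder K(x) estimator (NOT the actual AOD-Net architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(8, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ReformulatedDehazer(nn.Module):
    """J(x) = K(x) * I(x) - K(x) + b: transmission and atmospheric light are
    absorbed into the single map K(x), so the clean image is produced end-to-end."""
    def __init__(self, b=1.0):
        super().__init__()
        self.k_net = KEstimator()
        self.b = b

    def forward(self, hazy):
        k = self.k_net(hazy)
        return torch.clamp(k * hazy - k + self.b, 0.0, 1.0)

# shape check only; the module is untrained
x = torch.rand(1, 3, 240, 320)
print(ReformulatedDehazer()(x).shape)   # torch.Size([1, 3, 240, 320])
```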

1,185 citations


Cites background or methods from "Non-local Image Dehazing"

  • ...(a) Inputs (b) FVR (c) DCP (d) BCCR (e) ATM (f) CAP (g) NLD [1] (h) DehazeNet [3] (i) MSCNN [17] (j) AOD-Net...


  • ...We compared the proposed model with several state-of-the-art dehazing methods: Fast Visibility Restoration (FVR) [25], Dark-Channel Prior (DCP) [8], Boundary Constrained Context Regularization (BCCR) [12], Automatic Atmospheric Light Recovery (ATM) [22], Color Attenuation Prior (CAP) [32], Non-local Image Dehazing (NLD) [1], DehazeNet [3], and MSCNN [17]....


  • ...Metrics ATM [22] BCCR [12] FVR [25] NLD [1] DCP [8] MSCNN [17] DehazeNet [3] CAP [32] AOD-Net...


  • ...DCP, BCCR, ATM, NLD, and MSCNN produce unrealistic color tones on one or several images, such as DCP, BCCR and ATM results on the second row (notice the sky color), or BCCR, NLD and MSCNN results on the fourth row (notice the stone color)....


  • ...Despite the challenge of estimating many physical parameters from a single image, many recent works have made significant progress towards this goal [1, 3, 17]....


Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive study and evaluation of existing single-image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single-Image DEhazing (RESIDE).
Abstract: We present a comprehensive study and evaluation of existing single-image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single-Image DEhazing (RESIDE). RESIDE highlights diverse data sources and image contents, and is divided into five subsets, each serving different training or evaluation purposes. We further provide a rich variety of criteria for dehazing algorithm evaluation, ranging from full-reference metrics to no-reference metrics and to subjective evaluation, and the novel task-driven evaluation. Experiments on RESIDE shed light on the comparisons and limitations of the state-of-the-art dehazing algorithms, and suggest promising future directions.
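
For the full-reference part of such an evaluation, a minimal sketch using scikit-image (assuming a recent version with channel_axis support and float images in [0, 1]) might look like this; the no-reference, subjective, and task-driven criteria mentioned above are not covered.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed: np.ndarray, ground_truth: np.ndarray):
    """Return (PSNR, SSIM) for one dehazed result against its haze-free reference."""
    psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=1.0)
    ssim = structural_similarity(ground_truth, dehazed, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```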

922 citations

Proceedings ArticleDOI
18 Jun 2018
TL;DR: Zhang et al. propose a Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together.
Abstract: We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https://github.com/hezhangsprinter/DCPDN
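
The sketch below shows, under the standard atmospheric scattering model I = J·t + A·(1 − t), how such a physics layer can be embedded as a differentiable operation once t and A have been estimated; the densely connected encoder-decoder, pyramid pooling, and joint discriminator of the actual method are omitted, and the function name is illustrative.

```python
import torch

def scattering_model_layer(hazy, t, A, t_min=0.05):
    """Invert I = J * t + A * (1 - t) to recover J, given estimated t and A.
    hazy: B x 3 x H x W, t: B x 1 x H x W, A: B x 3 x 1 x 1 (all in [0, 1])."""
    t = torch.clamp(t, min=t_min)               # avoid division blow-up where t -> 0
    return torch.clamp((hazy - A * (1.0 - t)) / t, 0.0, 1.0)
```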

708 citations

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The proposed Enhanced Pix2pix Dehazing Network (EPDN) generates a haze-free image without relying on the physical scattering model; it embeds a generative adversarial network followed by a well-designed enhancer.
Abstract: In this paper, we reduce the image dehazing problem to an image-to-image translation problem, and propose Enhanced Pix2pix Dehazing Network (EPDN), which generates a haze-free image without relying on the physical scattering model. EPDN is embedded by a generative adversarial network, which is followed by a well-designed enhancer. Inspired by visual perception global-first theory, the discriminator guides the generator to create a pseudo realistic image on a coarse scale, while the enhancer following the generator is required to produce a realistic dehazing image on the fine scale. The enhancer contains two enhancing blocks based on the receptive field model, which reinforces the dehazing effect in both color and details. The embedded GAN is jointly trained with the enhancer. Extensive experiment results on synthetic datasets and real-world datasets show that the proposed EPDN is superior to the state-of-the-art methods in terms of PSNR, SSIM, PI, and subjective visual effect.

449 citations


Cites background from "Non-local Image Dehazing"

  • ...[1], which assumes that colors of a haze-free image are well approximated by a few hundred distinct colors....


Proceedings ArticleDOI
24 Sep 2018
TL;DR: The O-HAZE dataset contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters; it is used to compare state-of-the-art dehazing techniques with traditional image quality metrics such as PSNR, SSIM and CIEDE2000.
Abstract: Haze removal or dehazing is a challenging ill-posed problem that has drawn a significant attention in the last few years. Despite this growing interest, the scientific community is still lacking a reference dataset to evaluate objectively and quantitatively the performance of proposed dehazing methods. The few datasets that are currently considered, both for assessment and training of learning-based dehazing techniques, exclusively rely on synthetic hazy images. To address this limitation, we introduce the first outdoor scenes database (named O-HAZE) composed of pairs of real hazy and corresponding haze-free images. In practice, hazy images have been captured in presence of real haze, generated by professional haze machines, and O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. To illustrate its usefulness, O-HAZE is used to compare a representative set of state-of-the-art dehazing techniques, using traditional image quality metrics such as PSNR, SSIM and CIEDE2000. This reveals the limitations of current techniques, and questions some of their underlying assumptions.
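
A minimal sketch of the CIEDE2000 criterion mentioned above, using scikit-image's rgb2lab and deltaE_ciede2000 on float RGB images in [0, 1] and reporting the mean per-pixel color difference (lower is better); the function name is illustrative.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(dehazed: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean per-pixel CIEDE2000 color difference between two RGB images in [0, 1]."""
    return float(np.mean(deltaE_ciede2000(rgb2lab(dehazed), rgb2lab(ground_truth))))
```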

424 citations

References
Proceedings ArticleDOI
23 Jun 2008
TL;DR: A cost function in the framework of Markov random fields is developed, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation, and is applicable to both color and gray images.
Abstract: Bad weather, such as fog and haze, can significantly degrade the visibility of a scene. Optically, this is due to the substantial presence of particles in the atmosphere that absorb and scatter light. In computer vision, the absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the airlight. Based on this model, a few methods have been proposed, and most of them require multiple input images of a scene, which have either different degrees of polarization or different atmospheric conditions. This requirement is the main drawback of these methods, since in many situations, it is difficult to be fulfilled. To resolve the problem, we introduce an automated method that only requires a single input image. This method is based on two basic observations: first, images with enhanced visibility (or clear-day images) have more contrast than images plagued by bad weather; second, airlight whose variation mainly depends on the distance of objects to the viewer, tends to be smooth. Relying on these two observations, we develop a cost function in the framework of Markov random fields, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation. The method does not require the geometrical information of the input image, and is applicable for both color and gray images.
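
As a toy illustration of the contrast-maximization data term (recast here as a search over candidate transmissions for a known atmospheric light A), the sketch below restores a patch for each candidate and keeps the one with the highest contrast among candidates that stay mostly in range; the MRF smoothness term and the graph-cut/belief-propagation optimization of the full method are omitted, and all names are illustrative.

```python
import numpy as np

def best_transmission_for_patch(patch, A, t_grid=np.linspace(0.1, 1.0, 19), max_clipped=0.05):
    """Pick the transmission that maximizes the contrast of the restored patch
    J = (I - (1 - t) * A) / t, keeping most restored values inside [0, 1]."""
    best_t, best_contrast = 1.0, -np.inf
    for t in t_grid:
        J = (patch - (1.0 - t) * A) / t
        if np.mean((J < 0.0) | (J > 1.0)) > max_clipped:
            continue                                  # too many out-of-range pixels
        gy, gx = np.gradient(J.mean(axis=2))          # luminance gradients
        contrast = np.abs(gx).sum() + np.abs(gy).sum()
        if contrast > best_contrast:
            best_t, best_contrast = t, contrast
    return best_t
```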

2,048 citations


"Non-local Image Dehazing" refers background or methods in this paper

  • ...The transmission map should be smooth, except for depth discontinuities [3, 12, 18, 20]....


  • ...Tan [18] maximizes the contrast per patch, while maintaining a global coherent image....


  • ...Finding Haze-Lines: We estimate A using one of the previous methods [2, 5, 18]....


Journal ArticleDOI
01 Aug 2008
TL;DR: Results demonstrate the new method's ability to remove the haze layer as well as provide a reliable transmission estimate, which can be used for additional applications such as image refocusing and novel view synthesis.
Abstract: In this paper we present a new method for estimating the optical transmission in hazy scenes given a single input image. Based on this estimation, the scattered light is eliminated to increase scene visibility and recover haze-free scene contrasts. In this new approach we formulate a refined image formation model that accounts for surface shading in addition to the transmission function. This allows us to resolve ambiguities in the data by searching for a solution in which the resulting shading and transmission functions are locally statistically uncorrelated. A similar principle is used to estimate the color of the haze. Results demonstrate the new method abilities to remove the haze layer as well as provide a reliable transmission estimate which can be used for additional applications such as image refocusing and novel view synthesis.

1,866 citations

Proceedings ArticleDOI
01 Sep 2009
TL;DR: A novel algorithm and variants for visibility restoration from a single image are introduced; their linear complexity allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera.
Abstract: One source of difficulties when processing outdoor images is the presence of haze, fog or smoke which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with other is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color images or gray level images since the ambiguity between the presence of fog and the objects with low color saturation is solved by assuming only small objects can have colors with low saturation. The algorithm is controlled only by a few parameters and consists in: atmospheric veil inference, image restoration and smoothing, tone mapping. A comparative study and quantitative evaluation is proposed with a few other state of the art algorithms which demonstrates that similar or better quality results are obtained. Finally, an application is presented to lane-marking extraction in gray level images, illustrating the interest of the approach.
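
A simplified sketch of the veil-inference and restoration steps described above, assuming a white atmospheric light normalized to 1; the smoothing and tone-mapping stages are omitted and the parameter names (p, s, t_min) are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def fast_visibility_restoration(I, p=0.95, s=41, t_min=0.05):
    """I: H x W x 3 float image in [0, 1]; returns a restored image."""
    W = I.min(axis=2)                               # per-pixel minimum over channels
    B = median_filter(W, size=s)                    # local estimate of the veil
    B = B - median_filter(np.abs(W - B), size=s)    # keep strong depth discontinuities
    V = np.clip(np.minimum(p * B, W), 0.0, None)    # atmospheric veil, bounded by W
    t = np.clip(1.0 - V, t_min, 1.0)
    return np.clip((I - V[..., None]) / t[..., None], 0.0, 1.0)
```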

1,219 citations


"Non-local Image Dehazing" refers background or methods in this paper

  • ...The transmission map should be smooth, except for depth discontinuities [3, 12, 18, 20]....


  • ...A smoothness prior on the airlight is used in [20], assuming it is smooth except for depth discontinuities....


Proceedings ArticleDOI
20 Jun 2009
TL;DR: A simple but effective image prior, the dark channel prior, is proposed to remove haze from a single input image, based on a key observation: most local patches in haze-free outdoor images contain some pixels which have very low intensities in at least one color channel.
Abstract: In this paper, we propose a simple but effective image prior - dark channel prior to remove haze from a single input image. The dark channel prior is a kind of statistics of the haze-free outdoor images. It is based on a key observation - most local patches in haze-free outdoor images contain some pixels which have very low intensities in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high quality haze-free image. Results on a variety of outdoor haze images demonstrate the power of the proposed prior. Moreover, a high quality depth map can also be obtained as a by-product of haze removal.
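
A minimal sketch of the dark-channel-prior pipeline, using the commonly quoted choices of a 15x15 patch and omega = 0.95 and assuming the atmospheric light A is known; the soft-matting/guided-filter refinement of the transmission map is omitted, and the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Per-pixel minimum over color channels, then a minimum filter over the patch."""
    return minimum_filter(I.min(axis=2), size=patch)

def dehaze_dcp(I, A, omega=0.95, t_min=0.1, patch=15):
    t = 1.0 - omega * dark_channel(I / A, patch)    # transmission from the dark channel
    t = np.maximum(t, t_min)
    return np.clip((I - A) / t[..., None] + A, 0.0, 1.0)
```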

847 citations

Journal ArticleDOI
TL;DR: A new method for single-image dehazing is described that relies on a generic regularity in natural images: pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines.
Abstract: Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible. In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information. An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances.
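
A simplified geometric sketch of the color-lines idea: fit the principal line of a patch's RGB samples and read (1 − t) from where that line comes closest to the ray through the origin along the airlight direction. The validity tests and the long-range MRF regularization of the full method are omitted, A is assumed known, and the function name is illustrative.

```python
import numpy as np

def patch_transmission_from_color_line(patch, A):
    """patch: N x 3 RGB samples from a small window; A: airlight vector (3,)."""
    O = patch.mean(axis=0)
    D = np.linalg.svd(patch - O, full_matrices=False)[2][0]   # principal (color-line) direction
    a_hat = A / np.linalg.norm(A)
    # Closest point of the line O + u * D to the ray s * a_hat; under the haze model
    # the color line passes near (1 - t) * A, so s / |A| estimates (1 - t).
    M = np.array([[1.0, -(a_hat @ D)], [a_hat @ D, -1.0]])
    rhs = np.array([a_hat @ O, D @ O])
    s, _u = np.linalg.solve(M, rhs)
    return 1.0 - np.clip(s / np.linalg.norm(A), 0.0, 0.99)
```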

842 citations


"Non-local Image Dehazing" refers background or methods in this paper

  • ...This is a local phenomenon that does not always hold and indeed, in [3] care is taken to ensure only patches where the assumption holds are considered....


  • ...As shown in [3], the color lines in hazy images do not pass through the origin anymore, due to the additive haze component....


  • ...In [3, 17], color lines are fitted in RGB space per-patch, looking for small patches with a constant transmission....


  • ...Patch-based methods take great care to avoid artifacts by either using multiple patch sizes [19] or taking into consideration patch overlap and regularization using connections between distant pixels [3]....


  • ...Several methods assume that transmission and radiance are piece-wise constant, and employ a prior on a patch basis [3, 4, 5]....
