Proceedings ArticleDOI

Efficient Image Dehazing with Boundary Constraint and Contextual Regularization

01 Dec 2013 - pp 617-624
TL;DR: An efficient regularization method is proposed to remove haze from a single input image, restoring a high-quality haze-free image with faithful colors and fine image details.
Abstract: Images captured in foggy weather conditions often suffer from bad visibility. In this paper, we propose an efficient regularization method to remove haze from a single input image. Our method benefits much from an exploration of the inherent boundary constraint on the transmission function. This constraint, combined with a weighted L1-norm based contextual regularization, is modeled into an optimization problem to estimate the unknown scene transmission. An efficient algorithm based on variable splitting is also presented to solve the problem. The proposed method requires only a few general assumptions and can restore a high-quality haze-free image with faithful colors and fine image details. Experimental results on a variety of haze images demonstrate the effectiveness and efficiency of the proposed method.
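For intuition, the boundary constraint the abstract refers to can be sketched in a few lines of NumPy: bounding the true scene radiance between two extremes C0 and C1 yields a per-pixel lower bound on the transmission. The bounds, the epsilon guard, and the omission of the contextual-regularization step are all illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

def boundary_transmission(I, A, C0=20 / 255, C1=300 / 255, eps=1e-6):
    """Per-pixel lower bound on transmission t(x) implied by assuming the
    scene radiance satisfies C0 <= J(x) <= C1 (illustrative defaults)."""
    A = np.asarray(A, dtype=np.float64).reshape(1, 1, 3)
    # Two ratios from the two radiance bounds; the larger one is the
    # binding constraint at each pixel and channel.
    r0 = (A - I) / (A - C0 + eps)
    r1 = (A - I) / (A - C1 + eps)
    t_b = np.maximum(r0, r1).max(axis=2)  # tightest bound over channels
    return np.clip(t_b, 0.0, 1.0)
```

Because the bound is derived channel-wise and then maximized, it never exceeds the true transmission when the radiance assumption holds.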


Citations
Journal ArticleDOI
TL;DR: DehazeNet, as discussed by the authors, adopts a convolutional neural network-based deep architecture whose layers are specially designed to embody the established assumptions/priors in image dehazing.
Abstract: Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.
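The bilateral rectified linear unit named in the abstract is a two-sided clip; a minimal sketch, with the thresholds t_min and t_max set to 0 and 1 purely for illustration:

```python
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral ReLU: clips activations to [t_min, t_max], keeping the
    predicted medium transmission in a valid range while staying
    piecewise linear (thresholds here are illustrative)."""
    return np.minimum(np.maximum(x, t_min), t_max)
```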

1,880 citations

Journal ArticleDOI
TL;DR: A simple but powerful color attenuation prior for haze removal from a single input hazy image is proposed; the resulting method outperforms state-of-the-art haze removal algorithms in terms of both efficiency and dehazing effect.
Abstract: Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
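The color attenuation prior described above models scene depth as a linear function of brightness and saturation, after which transmission follows the scattering model. A rough sketch; the coefficients theta and the scattering coefficient beta below are illustrative stand-ins, not the values learned by the paper's supervised method:

```python
import numpy as np

def cap_transmission(I, theta=(0.12, 0.96, -0.78), beta=1.0, t_min=0.05):
    """Color attenuation prior sketch: depth d = theta0 + theta1*v + theta2*s
    from HSV value v and saturation s, then t = exp(-beta * d).
    theta, beta, and t_min are illustrative choices."""
    v = I.max(axis=2)                              # HSV value (brightness)
    s = (v - I.min(axis=2)) / np.maximum(v, 1e-6)  # HSV saturation
    d = theta[0] + theta[1] * v + theta[2] * s     # estimated scene depth
    t = np.exp(-beta * np.maximum(d, 0.0))         # scattering model
    return np.clip(t, t_min, 1.0)
```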

1,495 citations

Journal ArticleDOI
TL;DR: Experiments on a number of challenging low-light images are presented to demonstrate the efficacy of the proposed LIME and its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
Abstract: When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in the R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, yielding the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
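The initial illumination estimate described above (per-pixel maximum over R, G, B) is simple to sketch. The structure-prior refinement is omitted, and the gamma and epsilon values in the enhancement step are illustrative assumptions rather than settings from the paper:

```python
import numpy as np

def lime_initial_illumination(L):
    """Initial illumination map: per-pixel maximum over R, G and B
    (the structure-prior refinement from the paper is omitted)."""
    return L.max(axis=2)

def enhance(L, T, eps=1e-3, gamma=0.8):
    """Brighten by dividing each channel by the gamma-corrected
    illumination; gamma and eps are illustrative choices."""
    Tg = np.maximum(T, eps) ** gamma
    return np.clip(L / Tg[..., None], 0.0, 1.0)
```

Since the gamma-corrected illumination never exceeds 1, the enhanced image is pixelwise at least as bright as the input.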

1,364 citations


Cites methods from "Efficient Image Dehazing with Bound..."

  • ...As mentioned, another widely used model is based on the observation that inverted low-light images 1−L look similar to haze images, which is thus expressed as [20], [21], [22]:...


Book ChapterDOI
08 Oct 2016
TL;DR: A multi-scale deep neural network for single-image dehazing that learns the mapping between hazy images and their corresponding transmission maps, combining a coarse-scale net, which predicts a holistic transmission map from the entire image, with a fine-scale net, which refines the results locally.
Abstract: The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.
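The training-pair synthesis described above can be sketched with the scattering model: given a clean image and a depth map, compute the transmission and mix in airlight. The scattering coefficient beta and atmospheric light A are illustrative choices, and no actual NYU Depth data is assumed:

```python
import numpy as np

def synthesize_hazy(J, depth, beta=1.0, A=1.0):
    """Build a (hazy image, transmission map) training pair from a clean
    image J and depth map via t = exp(-beta * d) and
    I = J * t + A * (1 - t). beta and A are illustrative."""
    t = np.exp(-beta * depth)
    I = J * t[..., None] + A * (1.0 - t[..., None])
    return I, t
```

Because t is known exactly here, the synthesis is invertible, which is what makes such pairs usable as supervised training targets.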

1,230 citations


Cites background or methods from "Efficient Image Dehazing with Bound..."

  • ...Figure 4 shows that the proposed algorithm performs well on each image against the state-of-the-art dehazing methods [1,2,27,28] in terms of PSNR and SSIM....


  • ...The proposed algorithm is more efficient than the state-of-the-art image dehazing methods [1,11,23,25,27] in terms of run time....


  • ...(a) Input (b) [1] (c) [28] (d) [27] (e) [2] (f) Ours (g) GT...


  • ...The results by Meng et al. [27] have some remaining haze as shown in the first line in Fig....


  • ...We compare the proposed algorithm with the state-of-the-art dehazing methods [1,2,27,28] using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) metrics....


Proceedings ArticleDOI
01 Oct 2017
TL;DR: An image dehazing model built with a convolutional neural network (CNN) based on a re-formulated atmospheric scattering model, called All-in-One Dehazing Network (AOD-Net), which demonstrates superior performance over the state-of-the-art in terms of PSNR, SSIM and subjective visual quality.
Abstract: This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate the superior performance of AOD-Net over the state-of-the-art in terms of PSNR, SSIM and subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we observe a large improvement in object detection performance on hazy images.
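The re-formulation mentioned above folds the transmission and atmospheric light into a single variable K, so that J(x) = K(x) * I(x) - K(x) + b. Written out from the scattering model J = (I - A)/t + A, this is a sketch of the algebra only, not of the learned network:

```python
import numpy as np

def K_from_physics(I, t, A, b=1.0):
    """The joint variable K(x) in J = K * I - K + b, derived from the
    scattering model J = (I - A)/t + A. Assumes I != 1 pointwise so the
    denominator is nonzero; b is the constant bias of the reformulation."""
    return ((I - A) / t + (A - b)) / (I - 1.0)
```

Recovering J either directly from the scattering model or via K gives the same image, which is exactly why estimating K alone suffices.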

1,185 citations


Cites background or methods from "Efficient Image Dehazing with Bound..."

  • ...We compared the proposed model with several stateof-the-art dehazing methods: Fast Visibility Restoration (FVR) [25], Dark-Channel Prior (DCP) [8], Boundary Constrained Context Regularization (BCCR) [12], Automatic Atmospheric Light Recovery (ATM) [22], Color Attenuation Prior (CAP) [32], Non-local Image Dehazing (NLD) [1], DehazeNet [3], and MSCNN [17]....


  • ...Metrics ATM [22] BCCR [12] FVR [25] NLD [1] DCP [8] MSCNN [17] DehazeNet [3] CAP [32] AOD-Net...


  • ...DCP, BCCR, ATM, NLD, and MSCNN produce un- realistic color tones on one or several images, such as DCP, BCCR and ATM results on the second row (notice the sky color), or BCCR, NLD and MSCNN results on the fourth row (notice the stone color)....


  • ...[12] further enforced the boundary constraint and contextual regularization for sharper restored images....


  • ...Metrics ATM [22] BCCR [12] FVR [25] NLD [1, 2] DCP [8] MSCNN [17] DehazeNet [3] CAP [32] AOD-Net...


References
Proceedings ArticleDOI
23 Jun 2008
TL;DR: A cost function in the framework of Markov random fields is developed, which can be efficiently optimized by various techniques such as graph cuts or belief propagation; the method is applicable to both color and gray images.
Abstract: Bad weather, such as fog and haze, can significantly degrade the visibility of a scene. Optically, this is due to the substantial presence of particles in the atmosphere that absorb and scatter light. In computer vision, the absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the airlight. Based on this model, a few methods have been proposed, and most of them require multiple input images of a scene, which have either different degrees of polarization or different atmospheric conditions. This requirement is the main drawback of these methods, since in many situations, it is difficult to be fulfilled. To resolve the problem, we introduce an automated method that only requires a single input image. This method is based on two basic observations: first, images with enhanced visibility (or clear-day images) have more contrast than images plagued by bad weather; second, airlight whose variation mainly depends on the distance of objects to the viewer, tends to be smooth. Relying on these two observations, we develop a cost function in the framework of Markov random fields, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation. The method does not require the geometrical information of the input image, and is applicable for both color and gray images.

2,048 citations


"Efficient Image Dehazing with Bound..." refers background or methods in this paper

  • ...Tan [13] proposes to enhance the visibility of a haze image by maximizing its local contrast....


  • ...Recently, some significant advances have also been achieved [4], [13], [5], [14], [7], [8]....


  • ...Figure 7 illustrates the comparisons of our method with Tan’s work [13]....


  • ...The following linear interpolation model is widely used to explain the formation of a haze image [10], [4], [5], [13], [7]: I(x) = t(x)J(x) + (1− t(x))A, (1)...

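The linear interpolation model I(x) = t(x)J(x) + (1 - t(x))A quoted in the excerpt above inverts directly for the scene radiance once t(x) and A are estimated. A minimal sketch; the lower bound t0 on the transmission is a common stabilizing choice, not a value taken from any of these papers:

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Invert I(x) = t(x) * J(x) + (1 - t(x)) * A for the radiance J.
    t0 bounds the transmission away from zero to keep the division
    stable in dense haze (illustrative value)."""
    t = np.maximum(t, t0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```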

Journal ArticleDOI
01 Aug 2008
TL;DR: Results demonstrate the new method's ability to remove the haze layer and to provide a reliable transmission estimate, which can be used for additional applications such as image refocusing and novel view synthesis.
Abstract: In this paper we present a new method for estimating the optical transmission in hazy scenes given a single input image. Based on this estimation, the scattered light is eliminated to increase scene visibility and recover haze-free scene contrasts. In this new approach we formulate a refined image formation model that accounts for surface shading in addition to the transmission function. This allows us to resolve ambiguities in the data by searching for a solution in which the resulting shading and transmission functions are locally statistically uncorrelated. A similar principle is used to estimate the color of the haze. Results demonstrate the new method's ability to remove the haze layer as well as to provide a reliable transmission estimate, which can be used for additional applications such as image refocusing and novel view synthesis.

1,866 citations

Journal ArticleDOI
TL;DR: A physics-based model that describes the appearance of scenes in uniform bad weather conditions is presented, along with a fast algorithm to restore scene contrast that is effective under a wide range of weather conditions including haze, mist, fog, and conditions arising due to other aerosols.
Abstract: Images of outdoor scenes captured in bad weather suffer from poor contrast. Under bad weather conditions, the light reaching a camera is severely scattered by the atmosphere. The resulting decay in contrast varies across the scene and is exponential in the depths of scene points. Therefore, traditional space invariant image processing techniques are not sufficient to remove weather effects from images. We present a physics-based model that describes the appearances of scenes in uniform bad weather conditions. Changes in intensities of scene points under different weather conditions provide simple constraints to detect depth discontinuities in the scene and also to compute scene structure. Then, a fast algorithm to restore scene contrast is presented. In contrast to previous techniques, our weather removal algorithm does not require any a priori scene structure, distributions of scene reflectances, or detailed knowledge about the particular weather condition. All the methods described in this paper are effective under a wide range of weather conditions including haze, mist, fog, and conditions arising due to other aerosols. Further, our methods can be applied to gray scale, RGB color, multispectral and even IR images. We also extend our techniques to restore contrast of scenes with moving objects, captured using a video camera.

1,393 citations


"Efficient Image Dehazing with Bound..." refers methods in this paper

  • ...propose a physics-based scattering model [9], [10]....


  • ...Representative works include [11], [9], [10], [12]....


  • ...The following linear interpolation model is widely used to explain the formation of a haze image [10], [4], [5], [13], [7]: I(x) = t(x)J(x) + (1− t(x))A, (1)...


Journal ArticleDOI
01 Aug 2008
TL;DR: This paper advocates the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction.
Abstract: Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large scale variations in intensity, and a residual detail layer capturing the smaller scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts. In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current base-detail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.

1,381 citations

Journal ArticleDOI
TL;DR: This work studies the visual manifestations of different weather conditions, models the chromatic effects of atmospheric scattering (verified for fog and haze), and derives several geometric constraints on scene color changes caused by varying atmospheric conditions.
Abstract: Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application, there is no escape from "bad" weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics, and identify effects caused by bad weather that can be turned to our advantage. Since the atmosphere modulates the information carried from a scene point to the observer, it can be viewed as a mechanism of visual information coding. We exploit two fundamental scattering models and develop methods for recovering pertinent scene properties, such as three-dimensional structure, from one or two images taken under poor weather conditions. Next, we model the chromatic effects of atmospheric scattering and verify the model for fog and haze. Based on this chromatic model we derive several geometric constraints on scene color changes caused by varying atmospheric conditions. Finally, using these constraints we develop algorithms for computing fog or haze color, depth segmentation, extracting three-dimensional structure, and recovering "clear day" scene colors, from two or more images taken under different but unknown weather conditions.

1,325 citations


Additional excerpts

  • ...propose a physics-based scattering model [9], [10]....


  • ...Representative works include [11], [9], [10], [12]....
