Proceedings ArticleDOI

A Generalized Low-Rank Appearance Model for Spatio-temporally Correlated Rain Streaks

01 Dec 2013, pp. 1968–1975
TL;DR: This work proposes a low-rank appearance model, generalized from matrix to tensor structure, to capture spatio-temporally correlated rain streaks and remove them from images and video in a unified way.
Abstract: In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor a time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on the imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image/video (and also other high-order image structure) in a unified way. Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with the state of the art.
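The core assumption can be illustrated with a toy sketch (an illustrative example only, not the authors' tensor formulation): collect rain-dominant patches as columns of a matrix and approximate them with a truncated SVD, exploiting the fact that repeated streak patterns span a low-rank subspace.

```python
import numpy as np

def low_rank_rain_estimate(patches, rank=3):
    """Approximate a stack of rain-dominant patches by a low-rank matrix.

    patches : (pixels_per_patch, num_patches) array whose columns are
              vectorized patches taken from a rainy image or video.
    rank    : assumed rank of the rain appearance; streaks repeat similar
              patterns, so a small value is used.
    """
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s[rank:] = 0.0                      # keep only the leading components
    return (U * s) @ Vt                 # low-rank estimate of the rain layer

# Toy usage: 8x8 patches (64 pixels), 500 of them, sharing one streak pattern.
rng = np.random.default_rng(0)
streak = rng.standard_normal(64)
patches = np.outer(streak, rng.standard_normal(500))   # rank-1 rain component
patches += 0.05 * rng.standard_normal(patches.shape)   # imaging noise
rain_layer = low_rank_rain_estimate(patches, rank=1)
```

The paper generalizes this matrix-level idea to a tensor structure so that correlations across space and time are captured jointly.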

Citations
Proceedings ArticleDOI
01 Jul 2017
TL;DR: A deep detail network is proposed to directly reduce the mapping range from input to output, which makes the learning process easier; the method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures.
Abstract: We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.
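The detail-layer idea can be sketched as follows (an illustrative stand-in: the base/detail split, network size, and loss below are simplifications, not the paper's exact architecture). The rainy input is separated into a low-frequency base and a high-frequency detail layer, and a small residual CNN is trained on the detail layer alone, which keeps the mapping range of the regression small.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetailResidualNet(nn.Module):
    """Toy detail-layer residual network (not the paper's exact model)."""
    def __init__(self, channels=3, width=16, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, rainy):
        # Base/detail split with a simple blur; the paper uses an
        # edge-preserving filter, a box blur stands in here.
        base = F.avg_pool2d(rainy, kernel_size=5, stride=1, padding=2)
        detail = rainy - base
        residual = self.body(detail)     # regress only the high-frequency residual
        return rainy + residual          # de-rained estimate

# One illustrative training step against a clean target.
net = DetailResidualNet()
rainy, clean = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
loss = F.mse_loss(net(rainy), clean)
loss.backward()
```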

853 citations


Cites background from "A Generalized Low-Rank Appearance M..."

  • ...In [6], the authors propose a generalized model in which additive rain is assumed to be low rank....

Journal ArticleDOI
TL;DR: This work attempts to leverage the powerful generative modeling capabilities of the recently introduced conditional generative adversarial networks (CGAN) by enforcing an additional constraint that the de-rained image must be indistinguishable from its corresponding ground-truth clean image.
Abstract: Severe weather conditions, such as rain and snow, adversely affect the visual quality of images captured under such conditions, thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to address the problem of single image de-raining. However, the inherent ill-posed nature of the problem presents several challenges. We attempt to leverage the powerful generative modeling capabilities of the recently introduced conditional generative adversarial networks (CGAN) by enforcing an additional constraint that the de-rained image must be indistinguishable from its corresponding ground-truth clean image. The adversarial loss from the GAN provides additional regularization and helps to achieve superior results. In addition to presenting a new approach to de-rain images, we introduce a new refined loss function and architectural novelties in the generator–discriminator pair for achieving improved results. The loss function is aimed at reducing artifacts introduced by GANs and ensuring better visual quality. The generator sub-network is constructed using the recently introduced densely connected networks, whereas the discriminator is designed to leverage global and local information to decide if an image is real/fake. Based on this, we propose a novel single image de-raining method called image de-raining conditional generative adversarial network (ID-CGAN) that incorporates quantitative, visual, and discriminative performance into the objective function. The experiments evaluated on synthetic and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance. Furthermore, the experimental results evaluated on object detection datasets using Faster-RCNN also demonstrate the effectiveness of the proposed method in improving detection performance on images degraded by rain.
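The extra constraint can be written down as a generic conditional-GAN objective (a hedged sketch only: the refined perceptual loss, densely connected generator, and global/local discriminator of ID-CGAN are not reproduced, and `gen`/`disc` are assumed to be user-supplied networks):

```python
import torch
import torch.nn.functional as F

def cgan_derain_losses(disc, gen, rainy, clean, lam=100.0):
    """Generic conditional-GAN losses for de-raining (illustrative only).

    gen(rainy) -> de-rained image; disc(rainy, image) -> realness logits.
    The generator is pushed to fool the discriminator while staying close
    to the ground-truth clean image.
    """
    fake = gen(rainy)

    # Discriminator: real (rainy, clean) pairs vs. fake (rainy, de-rained) pairs.
    d_real = disc(rainy, clean)
    d_fake = disc(rainy, fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Generator: adversarial term plus a pixel-wise fidelity term.
    g_adv = F.binary_cross_entropy_with_logits(disc(rainy, fake),
                                               torch.ones_like(d_fake))
    g_loss = g_adv + lam * F.l1_loss(fake, clean)
    return d_loss, g_loss
```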

747 citations


Cites background or methods from "A Generalized Low-Rank Appearance M..."

  • ...Input PRM [15] DSC [11] CNN [17] GMM [14]...

  • ...SPM [10] PRM [15] DSC [11] CNN [17] GMM [14] CCR [27] DDN [16] JORDER [18] PAN [68] ID-CGAN...

  • ...• GMM: GMM-based method [15] (ICCV ’13)...

  • ...While PRM [15] is able to remove the rain-streaks, it produces blurred results which are not visually appealing....

  • ...In such cases, previous works have designed appropriate prior in solving (1) such as sparsity prior [10]–[13], Gaussian Mixture Model (GMM) prior [14] and patch-rank prior [15]....

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes an effective method that uses simple patch-based priors for both the background and rain layers, and removes rain streaks better than existing methods both qualitatively and quantitatively.
Abstract: This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples.
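The patch-GMM ingredient can be sketched with scikit-learn (a minimal sketch under simplifying assumptions; the paper's alternating optimization of the background and rain layers is not shown): fit one GMM on clean background patches and one on rain patches, then score a candidate layer by its average patch log-likelihood under the corresponding prior.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.feature_extraction.image import extract_patches_2d

def fit_patch_gmm(image, patch_size=(8, 8), n_components=10, seed=0):
    """Fit a GMM over vectorized, mean-removed patches of a grayscale image."""
    patches = extract_patches_2d(image, patch_size, max_patches=2000,
                                 random_state=seed)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)   # model patch structure, not brightness
    return GaussianMixture(n_components=n_components, covariance_type='full',
                           random_state=seed).fit(X)

def patch_log_likelihood(gmm, layer, patch_size=(8, 8)):
    """Average log-likelihood of a layer's patches under a patch GMM prior."""
    patches = extract_patches_2d(layer, patch_size, max_patches=2000,
                                 random_state=0)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)
    return gmm.score_samples(X).mean()

# Toy usage: fit a "background" prior and score a candidate background layer.
rng = np.random.default_rng(0)
background_gmm = fit_patch_gmm(rng.random((64, 64)))
score = patch_log_likelihood(background_gmm, rng.random((64, 64)))
```

In a decomposition O = B + R, one such prior for B and another for R lets the mixture components absorb multiple orientations and scales of the rain streaks, as the abstract describes.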

718 citations


Cites background or methods from "A Generalized Low-Rank Appearance M..."

  • ...To obtain GR, existing methods attempt to extract the internal properties of rain streaks within the input image itself, like [2, 11]....

  • ...Unlike existing methods in single-image rain streak removal ([2, 11]), our method is easy to implement and generates considerably better results qualitatively and quantitatively....

  • ...In other words, SR [11] again over-smooths the image content and cannot capture the rain streak in highly textured regions, while LRA [2] fails to remove the rain streaks in these two examples....

  • ...Comparisons Here we compare our method with LRA [2] and SR [11] methods....

  • ...Unlike [2, 11] that work on the entire image, we found that GR only requires small regions....

Journal ArticleDOI
TL;DR: A deep network architecture called DerainNet is introduced for removing rain streaks from an image; it directly learns the mapping between rainy and clean image detail layers from data.
Abstract: We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.

701 citations

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively is proposed, and a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection.
Abstract: In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture.
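A simplified sketch of the layered model with a binary streak-location map and the associated multi-task loss (the recurrent stages, the contextualized dilated network, and the rain-accumulation component are omitted; tensor names are illustrative):

```python
import torch
import torch.nn.functional as F

def synthesize_rainy(background, streaks, mask):
    """Simplified rain image model with a binary location map: O = B + M * S."""
    return background + mask * streaks

def multitask_loss(pred_mask_logits, pred_streaks, pred_background,
                   gt_mask, gt_streaks, gt_background):
    """Joint loss over the binary map, the streak appearance, and the clean
    background; the map term supplies the extra supervision noted above."""
    mask_loss = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_mask)
    streak_loss = F.mse_loss(pred_streaks, gt_streaks)
    background_loss = F.mse_loss(pred_background, gt_background)
    return mask_loss + streak_loss + background_loss

# Toy tensors just to show the shapes involved in the model.
B, S = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
M = (torch.rand(1, 1, 32, 32) > 0.9).float()
O = synthesize_rainy(B, S, M)
```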

640 citations


Cites background or methods from "A Generalized Low-Rank Appearance M..."

  • ...Others focus on rain removal from the single image, by regarding the rain streak removal problem as a signal separation problem [23, 18, 33, 9, 28], or by relying on nonlocal mean smoothing [24]....

  • ...In [9], a generalized low rank model is proposed, where the rain streak layer is assumed to be low rank....

  • ...• The degradation of rain is complex, and the existing rain model widely used in previous methods [23, 9]...

  • ...Some focus on rain image recovery from video sequences [3, 4, 5, 9, 13, 14, 15, 16, 44]....

References
Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.
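For reference, the constrained formulation referred to here is the standard ROF model (written in common notation, which may differ from the paper's exact symbols):

```latex
\min_{u} \int_{\Omega} |\nabla u| \, dx
\quad \text{s.t.} \quad
\int_{\Omega} (u - f)\, dx = 0,
\qquad
\frac{1}{|\Omega|} \int_{\Omega} (u - f)^2 \, dx = \sigma^2 ,
```

where f is the noisy image, u the denoised estimate, and σ² the noise variance; in practice the constraints are handled with a Lagrange multiplier, giving the unconstrained form min_u ∫|∇u| dx + (λ/2)∫(u − f)² dx.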

15,225 citations


"A Generalized Low-Rank Appearance M..." refers methods in this paper

  • ...Note that, although TV has been used to generate piecewise-smooth “cartoon” layer [14] of images, it tends to characterize edge-aware structure regardless of small-scale features....

Book
01 Jan 1965
TL;DR: In this book, the authors provide a broad overview of the Fourier transform and its relationship to the FFT and the Hartley transform, as well as the Laplace transform.
Abstract: Contents: 1. Introduction; 2. Groundwork; 3. Convolution; 4. Notation for Some Useful Functions; 5. The Impulse Symbol; 6. The Basic Theorems; 7. Obtaining Transforms; 8. The Two Domains; 9. Waveforms, Spectra, Filters and Linearity; 10. Sampling and Series; 11. The Discrete Fourier Transform and the FFT; 12. The Discrete Hartley Transform; 13. Relatives of the Fourier Transform; 14. The Laplace Transform; 15. Antennas and Optics; 16. Applications in Statistics; 17. Random Waveforms and Noise; 18. Heat Conduction and Diffusion; 19. Dynamic Power Spectra; 20. Tables of sinc x, sinc² x, and exp(−πx²); 21. Solutions to Selected Problems; 22. Pictorial Dictionary of Fourier Transforms; 23. The Life of Joseph Fourier.

5,714 citations

Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
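The local linear model translates into a short implementation; below is a minimal single-channel version built on box filtering (a sketch only: the color-guide and fast O(1) variants discussed in the paper are not included).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Single-channel guided filter: within each window the output follows the
    local linear model q = a * guide + b fitted to src by ridge regression."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)

    mean_I, mean_p = mean(guide), mean(src)
    corr_I, corr_Ip = mean(guide * guide), mean(guide * src)

    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)      # eps trades off edge preservation vs. smoothing
    b = mean_p - a * mean_I

    # Average the per-window coefficients before applying the linear model.
    return mean(a) * guide + mean(b)

# Edge-preserving smoothing: use the image as its own guide.
img = np.random.rand(128, 128)
smoothed = guided_filter(img, img, radius=8, eps=0.01)
```

Using a different guidance image transfers its structure to the output, which is what enables the feathering and dehazing applications mentioned in the abstract.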

4,730 citations


"A Generalized Low-Rank Appearance M..." refers methods in this paper

  • ...Recently, a guided-image-filter (GIF) based method [10] was proposed for rain removal on one single color image....

  • ...As to the case of single color image, we pick one frame from “heavy rain” to conduct rain removal, and also implement the GIF-based method [10] for comparison....

  • ...To fully utilize such information, we conduct guided image filtering (GIF) [23] to enhance the image contrast....

  • ...Inspired by [10], we also propose a GIF-based detail enhancement by reusing the estimated imaging noise....

  • ...Although we have fine-tuned the GIF parameters, [10] still obtains over-smooth result with inaccurate rain streaks because their proposed guidance image only estimates rough image content....

Journal ArticleDOI
TL;DR: This paper proposes a “split Bregman” method, which can solve a very broad class of L1-regularized problems, and applies this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.
Abstract: The class of L1-regularized optimization problems has received much attention recently because of the introduction of “compressed sensing,” which allows images and signals to be reconstructed from small amounts of data. Despite this recent attention, many L1-regularized problems still remain difficult to solve, or require techniques that are very problem-specific. In this paper, we show that Bregman iteration can be used to solve a wide variety of constrained optimization problems. Using this technique, we propose a “split Bregman” method, which can solve a very broad class of L1-regularized problems. We apply this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.
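A minimal 1D illustration of the split Bregman recipe applied to the Rudin-Osher-Fatemi functional (dense matrices and fixed parameters for clarity; this is a sketch, not the paper's general formulation or its MRI application): split d = Du, then alternate a quadratic u-update, a soft-thresholding d-update, and a Bregman variable update.

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: the closed-form minimizer of the l1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_split_bregman(f, mu=20.0, lam=10.0, iters=100):
    """1D TV denoising, min_u (mu/2)||u - f||^2 + ||Du||_1, via split Bregman."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference matrix, (n-1) x n
    A = mu * np.eye(n) + lam * D.T @ D      # normal equations for the u-update
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))   # quadratic update
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)                          # l1 update
        b = b + Du - d                                         # Bregman update
    return u

# Noisy step signal: TV denoising recovers the piecewise-constant structure.
t = np.linspace(0.0, 1.0, 200)
clean = (t > 0.5).astype(float)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(200)
denoised = tv_denoise_split_bregman(noisy)
```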

4,255 citations


"A Generalized Low-Rank Appearance M..." refers methods in this paper

  • ...Thanks to the success of split Bregman iteration [16], we use a similar variable splitting technique and obtain R and S as the minimizers of the resulting objective....

Journal ArticleDOI

3,156 citations


"A Generalized Low-Rank Appearance M..." refers methods in this paper

  • ...According to Plancherel’s theorem [20], we derive the closed-form optimum by FFT....

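The quoted context above mentions deriving a closed-form optimum by FFT via Plancherel's theorem. A generic illustration of that trick (not the paper's actual subproblem): for a circular-convolution least-squares objective ||k ⊛ S − R||² + λ||S||², the theorem lets the objective separate per frequency, giving a one-shot solution.

```python
import numpy as np

def fft_closed_form(R, kernel, lam=0.1):
    """Closed-form minimizer of ||k (*) S - R||^2 + lam * ||S||^2 under circular
    convolution: per frequency, S_hat = conj(K_hat) * R_hat / (|K_hat|^2 + lam)."""
    K_hat = np.fft.fft2(kernel, s=R.shape)   # zero-pad the kernel to image size
    R_hat = np.fft.fft2(R)
    S_hat = np.conj(K_hat) * R_hat / (np.abs(K_hat) ** 2 + lam)
    return np.real(np.fft.ifft2(S_hat))

# Toy usage: one-shot ridge-regularized deconvolution of a random image.
rng = np.random.default_rng(0)
R = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
S = fft_closed_form(R, kernel, lam=0.05)
```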