
Showing papers on "Noise reduction published in 2020"


Journal ArticleDOI
TL;DR: This work develops a simple data‐driven method for removing outliers and reducing noise in unordered point clouds using a deep learning architecture adapted from PCPNet, which was recently proposed for estimating local 3D shape properties in point clouds.
Abstract: Point clouds obtained with 3D scanners or by image-based reconstruction techniques are often corrupted with significant amounts of noise and outliers. Traditional methods for point cloud denoising largely rely on local surface fitting (e.g. jets or MLS surfaces), local or non-local averaging, or on statistical assumptions about the underlying noise model. In contrast, we develop a simple data-driven method for removing outliers and reducing noise in unordered point clouds. We base our approach on a deep learning architecture adapted from PCPNet, which was recently proposed for estimating local 3D shape properties in point clouds. Our method first classifies and discards outlier samples, and then estimates correction vectors that project noisy points onto the original clean surfaces. The approach is efficient and robust to varying amounts of noise and outliers, while being able to handle large densely sampled point clouds. In our extensive evaluation, both on synthetic and real data, we show an increased robustness to strong noise levels compared to various state-of-the-art methods, enabling accurate surface reconstruction from extremely noisy real data obtained by range scans. Finally, the simplicity and universality of our approach make it very easy to integrate into any existing geometry processing pipeline. Both the code and pre-trained networks can be found on the project page (https://github.com/mrakotosaon/pointcleannet).
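
The two-stage pipeline described above (outlier classification followed by displacement regression) can be illustrated with a minimal PyTorch sketch. The per-point MLP below is a hypothetical placeholder, not the authors' PCPNet-derived patch-based architecture, and both heads are untrained.

```python
import torch
import torch.nn as nn

class PointwiseMLP(nn.Module):
    """Toy per-point MLP standing in for the PCPNet-style patch encoder."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, pts):              # pts: (N, 3) point coordinates
        return self.net(pts)

outlier_head = PointwiseMLP(1)           # stage 1: per-point outlier logit
denoise_head = PointwiseMLP(3)           # stage 2: per-point correction vector

def clean_point_cloud(noisy_pts, threshold=0.5):
    with torch.no_grad():
        keep = torch.sigmoid(outlier_head(noisy_pts)).squeeze(-1) < threshold
        inliers = noisy_pts[keep]                      # discard likely outliers
        return inliers + denoise_head(inliers)         # project points toward the clean surface

denoised = clean_point_cloud(torch.randn(1024, 3))
```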

186 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A frequency-based decomposition-and-enhancement model that first learns to recover image objects in the low-frequency layer and then enhances high-frequency details based on the recovered image objects, and outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
Abstract: Low-light images typically suffer from two problems. First, they have low visibility (i.e., small pixel values). Second, noise becomes significant and disrupts the image content due to the low signal-to-noise ratio. Most existing low-light image enhancement methods, however, learn from noise-negligible datasets. They rely on users having good photographic skills when taking images with low noise. Unfortunately, this is not the case for the majority of low-light images. While concurrently enhancing a low-light image and removing its noise is ill-posed, we observe that noise exhibits different levels of contrast in different frequency layers, and it is much easier to detect noise in the low-frequency layer than in the high-frequency one. Inspired by this observation, we propose a frequency-based decomposition-and-enhancement model for low-light image enhancement. Based on this model, we present a novel network that first learns to recover image objects in the low-frequency layer and then enhances high-frequency details based on the recovered image objects. In addition, we have prepared a new low-light image dataset with real noise to facilitate learning. Finally, we have conducted extensive experiments to show that the proposed method outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
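
A minimal sketch of the decomposition-and-enhancement idea, with a fixed Gaussian low-pass standing in for the learned frequency decomposition and lambda functions standing in for the two learned stages (both are illustrative assumptions, not the paper's network):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=3.0):
    low = gaussian_filter(img, sigma)            # low-frequency layer (coarse content)
    high = img - low                             # high-frequency layer (fine details)
    return low, high

def enhance(img, stage1, stage2):
    low, high = decompose(img)
    recovered_low = stage1(low)                  # stage 1: recover/denoise image objects
    enhanced_high = stage2(high, recovered_low)  # stage 2: detail enhancement guided by stage 1
    return np.clip(recovered_low + enhanced_high, 0.0, 1.0)

# Dummy stand-ins for the two learned sub-networks:
img = (np.random.rand(128, 128) * 0.1).astype(np.float32)   # a dark, noisy input
out = enhance(img,
              stage1=lambda l: np.clip(l * 4.0, 0.0, 1.0),
              stage2=lambda h, l: gaussian_filter(h, 1.0) * 2.0)
```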

167 citations


Journal ArticleDOI
TL;DR: This paper introduces a robust low-light enhancement approach, aiming at jointly enhancing low-light images/videos and suppressing intensive noise, based on the proposed Low-Rank Regularized Retinex Model (LR3M), which is the first to inject a low-rank prior into a Retinex decomposition process to suppress noise in the reflectance map.
Abstract: Noise causes unpleasant visual effects in low-light image/video enhancement. In this paper, we aim to make the enhancement model and method aware of noise throughout the whole process. To deal with heavy noise, which is not handled in previous methods, we introduce a robust low-light enhancement approach, aiming at jointly enhancing low-light images/videos and suppressing intensive noise. Our method is based on the proposed Low-Rank Regularized Retinex Model (LR3M), which is the first to inject a low-rank prior into a Retinex decomposition process to suppress noise in the reflectance map. Our method estimates a piece-wise smoothed illumination and a noise-suppressed reflectance sequentially, avoiding the residual noise in the illumination and reflectance maps that is usually present in alternative decomposition methods. After obtaining the estimated illumination and reflectance, we adjust the illumination layer and generate our enhancement result. Furthermore, we apply our LR3M to video low-light enhancement. We consider the inter-frame coherence of illumination maps and find similar patches through the reflectance maps of successive frames to form the low-rank prior, making use of temporal correspondence. Our method performs well for a wide variety of images and videos, and achieves better quality in both enhancement and denoising compared with state-of-the-art methods.

165 citations


Book ChapterDOI
23 Aug 2020
TL;DR: Zhang et al. propose a unified framework that simultaneously handles the real-world image noise removal and noise generation tasks by learning the joint distribution of clean-noisy image pairs.
Abstract: Real-world image noise removal is a long-standing yet very challenging task in computer vision. The success of deep neural networks in denoising has stimulated research on noise generation, which aims at synthesizing more clean-noisy image pairs to facilitate the training of deep denoisers. In this work, we propose a novel unified framework to simultaneously deal with the noise removal and noise generation tasks. Instead of only inferring the posterior distribution of the latent clean image conditioned on the observed noisy image, as in the traditional MAP framework, our proposed method learns the joint distribution of the clean-noisy image pairs. Specifically, we approximate the joint distribution with two different factorized forms, which can be formulated as a denoiser mapping the noisy image to the clean one and a generator mapping the clean image to the noisy one. The learned joint distribution implicitly contains all the information between the noisy and clean images, avoiding the need to manually design image priors and noise assumptions as in traditional methods. Besides, the performance of our denoiser can be further improved by augmenting the original training dataset with the learned generator. Moreover, we propose two metrics to assess the quality of the generated noisy images which, to the best of our knowledge, are the first such metrics proposed along this research line. Extensive experiments have been conducted to demonstrate the superiority of our method over the state of the art in both the real noise removal and generation tasks. The training and testing code is available at https://github.com/zsyOAOA/DANet.

137 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work presents a method for training a neural network to perform image denoising without access to clean training examples or paired noisy training examples; it produces results that are competitive with other learned methods requiring richer training data, and outperforms traditional non-learned denoising methods.
Abstract: We present a method for training a neural network to perform image denoising without access to clean training examples or access to paired noisy training examples. Our method requires only a single noisy realization of each training example and a statistical model of the noise distribution, and is applicable to a wide variety of noise models, including spatially structured noise. Our model produces results which are competitive with other learned methods which require richer training data, and outperforms traditional non-learned denoising methods. We present derivations of our method for arbitrary additive noise, an improvement specific to Gaussian additive noise, and an extension to multiplicative Bernoulli noise.
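
A hedged sketch of the additive-Gaussian variant suggested by the abstract: add a second synthetic noise draw to the already-noisy input, train the network to predict the singly-noisy image, and apply a correction at test time. The tiny network, the assumed known noise level, and the training details are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
sigma = 0.1                                       # assumed known additive-Gaussian noise level

def train_step(noisy):                            # noisy = clean + N(0, sigma^2); clean is never seen
    noisier = noisy + sigma * torch.randn_like(noisy)
    loss = ((net(noisier) - noisy) ** 2).mean()   # regress the singly-noisy image
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def denoise(noisy):
    with torch.no_grad():
        noisier = noisy + sigma * torch.randn_like(noisy)
        return 2 * net(noisier) - noisier         # correction step for the additive-Gaussian case

loss = train_step(torch.rand(4, 1, 32, 32))
```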

136 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A highly accurate noise formation model based on the characteristics of CMOS photosensors is presented, enabling the synthesis of realistic samples that better match the physics of the image formation process.
Abstract: Lacking rich and realistic data, learned single-image denoising algorithms generalize poorly to real raw images that do not resemble the data used for training. Although the problem can be alleviated by the heteroscedastic Gaussian noise model, the noise sources caused by digital camera electronics are still largely overlooked, despite their significant effect on raw measurements, especially under extremely low-light conditions. To address this issue, we present a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling us to synthesize realistic samples that better match the physics of the image formation process. Given the proposed noise model, we additionally propose a method to calibrate the noise parameters for available modern digital cameras, which is simple and reproducible for any new device. We systematically study the generalizability of a neural network trained with existing schemes by introducing a new low-light denoising dataset that covers many modern digital cameras from diverse brands. Extensive empirical results collectively show that by utilizing our proposed noise formation model, a network can reach the capability it would have if trained with rich real data, which demonstrates the effectiveness of our noise formation model.
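
A simplified, hedged sketch of physics-based raw-noise synthesis in the spirit of the abstract, combining shot, read, row (banding) and quantization noise; the parameter values are invented for illustration and the paper's calibrated model is considerably more detailed.

```python
import numpy as np

def synthesize_raw_noise(photo_electrons, gain=2.0, read_std=1.5,
                         row_std=0.5, black_level=64, white_level=1023):
    shot = np.random.poisson(photo_electrons)                             # photon shot noise
    read = np.random.normal(0.0, read_std, photo_electrons.shape)         # Gaussian read noise
    rows = np.random.normal(0.0, row_std, (photo_electrons.shape[0], 1))  # row/banding noise
    adu = gain * (shot + read + rows) + black_level                       # analog gain + black level
    return np.clip(np.round(adu), 0, white_level)                         # quantization and clipping

clean = np.full((4, 6), 20.0)              # mean photoelectron counts for a dim, flat patch
noisy_raw = synthesize_raw_noise(clean)
```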

129 citations


Journal ArticleDOI
TL;DR: A novel “Noisy-As-Clean” (NAC) strategy for training self-supervised denoising networks, in which the corrupted test image is directly taken as the “clean” target, while the inputs are synthetic images consisting of this corrupted image and a second, similar corruption.
Abstract: Supervised deep networks have achieved promising performance on image denoising by learning image priors and noise statistics on plenty of pairs of noisy and clean images. Unsupervised denoising networks are trained with only noisy images. However, for an unseen corrupted image, both supervised and unsupervised networks ignore either its particular image prior, its noise statistics, or both. That is, networks learned from external images inherently suffer from a domain gap problem: the image priors and noise statistics are very different between the training and test images. This problem becomes clearer when dealing with signal-dependent realistic noise. To circumvent this problem, in this work, we propose a novel “Noisy-As-Clean” (NAC) strategy for training self-supervised denoising networks. Specifically, the corrupted test image is directly taken as the “clean” target, while the inputs are synthetic images consisting of this corrupted image and a second, similar corruption. A simple but useful observation on our NAC is: as long as the noise is weak, it is feasible to learn a self-supervised network only with the corrupted image, approximating the optimal parameters of a supervised network learned with pairs of noisy and clean images. Experiments on synthetic and realistic noise removal demonstrate that the DnCNN and ResNet networks trained with our self-supervised NAC strategy achieve performance comparable to or better than the original ones and previous supervised/unsupervised/self-supervised networks. The code is publicly available at https://github.com/csjunxu/Noisy-As-Clean.

109 citations


Book ChapterDOI
23 Aug 2020
TL;DR: In this article, a spatial-adaptive denoising network (SADNet) is proposed to adapt to changes in spatial textures and edges, and a residual spatial-adaptive block is introduced to sample the spatially related features for weighting.
Abstract: Previous works have shown that convolutional neural networks can achieve good performance in image denoising tasks. However, limited by the local rigid convolutional operation, these methods lead to oversmoothing artifacts. A deeper network structure could alleviate these problems, but at the cost of additional computational overhead. In this paper, we propose a novel spatial-adaptive denoising network (SADNet) for efficient single image blind noise removal. To adapt to changes in spatial textures and edges, we design a residual spatial-adaptive block. Deformable convolution is introduced to sample the spatially related features for weighting. An encoder-decoder structure with a context block is introduced to capture multiscale information. By conducting noise removal from coarse to fine, a high-quality noise-free image is obtained. We apply our method to both synthetic and real noisy image datasets. The experimental results demonstrate that our method outperforms the state-of-the-art denoising methods both quantitatively and visually.
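
A rough sketch of a residual block that samples spatially related features with deformable convolution, as the abstract describes; the channel widths and layer layout are assumptions, and the paper's residual spatial-adaptive block and context block contain more components.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ResSpatialAdaptiveBlock(nn.Module):
    """Residual block that samples spatially related features via deformable convolution."""
    def __init__(self, ch):
        super().__init__()
        self.offset = nn.Conv2d(ch, 2 * 3 * 3, kernel_size=3, padding=1)  # per-pixel 3x3 sampling offsets
        self.deform = DeformConv2d(ch, ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.act(self.deform(x, self.offset(x)))               # residual connection

block = ResSpatialAdaptiveBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)    # torch.Size([1, 32, 64, 64])
```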

90 citations


Proceedings ArticleDOI
06 Jul 2020
TL;DR: A novel three-branch convolution neural network, namely RRDNet (short for Robust Retinex Decomposition Network), is proposed to decompose the input image into three components, illumination, reflectance and noise.
Abstract: Underexposed images often suffer from serious quality degradation such as poor visibility and latent noise in the dark. Most previous methods for underexposed image restoration ignore the noise and amplify it when stretching the contrast. We predict the noise explicitly to achieve the goal of denoising while restoring the underexposed image. Specifically, a novel three-branch convolutional neural network, namely RRDNet (short for Robust Retinex Decomposition Network), is proposed to decompose the input image into three components: illumination, reflectance, and noise. As an image-specific network, RRDNet doesn’t need any prior image examples or prior training. Instead, the weights of RRDNet are updated by a zero-shot scheme that iteratively minimizes a specially designed loss function. This loss function is devised to evaluate the current decomposition of the test image and guide noise estimation. Experiments demonstrate that RRDNet can achieve robust correction with overall naturalness and pleasing visual quality. To make the results reproducible, the source code has been made publicly available at https://aaaaangel.github.io/RRDNet-Homepage.
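
A minimal sketch of the zero-shot, image-specific optimization scheme, assuming a Retinex-style model I ≈ R · L + N; raw tensors stand in for the three network branches and the loss is simplified (the actual RRDNet loss adds illumination-smoothness and reflectance-guidance terms).

```python
import torch

def zero_shot_decompose(img, steps=500, lr=0.05):
    """img: (1, 3, H, W) underexposed test image in [0, 1]."""
    L = torch.full_like(img[:, :1], 0.5).requires_grad_()   # illumination (single channel)
    R = img.clone().requires_grad_()                        # reflectance
    N = torch.zeros_like(img).requires_grad_()              # noise
    opt = torch.optim.Adam([L, R, N], lr=lr)
    for _ in range(steps):
        recon = R * L + N                                   # Retinex-style decomposition
        loss = ((recon - img) ** 2).mean() + 0.1 * N.abs().mean()   # fidelity + sparse-noise term
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        restored = R * L.clamp(1e-3, 1.0).pow(0.4)          # brighten by gamma-adjusting illumination
    return restored.clamp(0.0, 1.0)

out = zero_shot_decompose(torch.rand(1, 3, 64, 64) * 0.2)
```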

88 citations


Journal ArticleDOI
TL;DR: A content-adaptive algorithm for the automatic correction of sCMOS-related noise (ACsN) for fluorescence microscopy that improves the camera performance, enabling fast, low-light and quantitative optical microscopy with video-rate denoising for a broad range of imaging conditions and modalities.
Abstract: The rapid development of scientific CMOS (sCMOS) technology has greatly advanced optical microscopy for biomedical research with superior sensitivity, resolution, field-of-view, and frame rates. However, for sCMOS sensors, the parallel charge-voltage conversion and the different responsivity at each pixel induce extra readout and pattern noise compared to charge-coupled device (CCD) and electron-multiplying CCD (EM-CCD) sensors. This can produce artifacts, deteriorate imaging capability, and hinder quantification of fluorescent signals, thereby compromising strategies to reduce photo-damage to live samples. Here, we propose a content-adaptive algorithm for the automatic correction of sCMOS-related noise (ACsN) for fluorescence microscopy. ACsN combines camera physics and layered sparse filtering to significantly reduce the most relevant noise sources in an sCMOS sensor while preserving the fine details of the signal. The method improves the camera performance, enabling fast, low-light and quantitative optical microscopy with video-rate denoising for a broad range of imaging conditions and modalities. Scientific complementary metal-oxide semiconductor (sCMOS) cameras have advanced the imaging field, but they often suffer from additional noise compared to CCD sensors. Here the authors present a content-adaptive algorithm for the automatic correction of sCMOS-related noise for fluorescence microscopy.

84 citations


Journal ArticleDOI
TL;DR: This work proposes a theoretically-grounded blind and universal deep learning image denoiser for additive Gaussian noise removal, based on an optimal denoising solution, which it is derived theoretically with a Gaussian image prior assumption.
Abstract: Blind and universal image denoising consists of using a single model that denoises images with any level of noise. It is especially practical as noise levels do not need to be known when the model is developed or at test time. We propose a theoretically grounded blind and universal deep learning image denoiser for additive Gaussian noise removal. Our network is based on an optimal denoising solution, which we call fusion denoising. It is derived theoretically under a Gaussian image prior assumption. Synthetic experiments show our network’s generalization strength to unseen additive noise levels. We also adapt the fusion denoising network architecture for image denoising on real images. Our approach improves real-world grayscale additive image denoising PSNR results for noise levels seen during training and, further, for noise levels not seen during training. It also improves state-of-the-art color image denoising performance on every single noise level, by an average of 0.1 dB, whether trained on that level or not.

Book ChapterDOI
Xiaohe Wu, Ming Liu, Yue Cao, Dongwei Ren, Wangmeng Zuo
23 Aug 2020
TL;DR: A two-stage scheme to facilitate unpaired learning of denoising network by incorporating self-supervised learning and knowledge distillation is presented, which performs favorably on both synthetic noisy images and real-world noisy photographs.
Abstract: We investigate the task of learning blind image denoising networks from an unpaired set of clean and noisy images. Such a problem setting is generally practical and valuable, considering that it is feasible to collect unpaired noisy and clean images in most real-world applications. We further assume that the noise can be signal-dependent but is spatially uncorrelated. In order to facilitate unpaired learning of a denoising network, this paper presents a two-stage scheme incorporating self-supervised learning and knowledge distillation. For self-supervised learning, we suggest a dilated blind-spot network (D-BSN) to learn denoising solely from real noisy images. Due to the spatial independence of the noise, we adopt a network built by stacking $1\times 1$ convolution layers to estimate the noise level map for each image. Both the D-BSN and the image-specific noise model ($\text{CNN}_{\text{est}}$) can be jointly trained by maximizing the constrained log-likelihood. Given the output of the D-BSN and the estimated noise level map, improved denoising performance can be further obtained based on Bayes’ rule. As for knowledge distillation, we first apply the learned noise models to clean images to synthesize a paired set of training images, and use the real noisy images and the corresponding denoising results from the first stage to form another paired set. Then, the ultimate denoising model can be distilled by training an existing denoising network on these two paired sets. Experiments show that our unpaired learning method performs favorably on both synthetic noisy images and real-world noisy photographs in terms of quantitative and qualitative evaluation. Code is available at https://github.com/XHWXD/DBSN.
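
A small sketch of the pixel-wise noise-level estimator idea: stacking only 1×1 convolutions keeps a 1×1 receptive field, so the predicted noise level at each pixel depends on that pixel alone, matching the spatially uncorrelated noise assumption. The layer widths are placeholders and the D-BSN itself is not shown.

```python
import torch
import torch.nn as nn

# A 1x1-convolution stack never mixes neighboring pixels, so the sigma map is
# a purely per-pixel function of the noisy observation.
cnn_est = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1), nn.Softplus(),   # positive per-pixel noise level
)

noisy = torch.rand(1, 3, 64, 64)
sigma_map = cnn_est(noisy)                            # (1, 1, 64, 64) noise level map
```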

Journal ArticleDOI
TL;DR: Extensive experiments support the proposed method over earlier crude approximations used by image denoising filters such as Block-Matching and 3D-filtering, demonstrating dramatic improvement in many challenging conditions.
Abstract: Collaborative filters perform denoising through transform-domain shrinkage of a group of similar patches extracted from an image. Existing collaborative filters for stationary correlated noise have all used simple approximations of the transform noise power spectrum adopted from methods that do not employ patch grouping and instead operate on a single patch. We note the inaccuracies of these approximations and introduce a method for the exact computation of the noise power spectrum. Unlike earlier methods, the calculated noise variances are exact even when noise in one patch is correlated with noise in any of the other patches. We discuss the adoption of the exact noise power spectrum within shrinkage, in similarity testing (patch matching), and in aggregation. We also introduce effective approximations of the spectrum for faster computation. Extensive experiments support the proposed method over the earlier crude approximations used by image denoising filters such as Block-Matching and 3D-filtering (BM3D), demonstrating dramatic improvement in many challenging conditions.
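
A simplified sketch of how a per-coefficient noise power spectrum enters transform-domain shrinkage of a patch group; it applies the PSD only within each patch's 2D transform and does not reproduce the paper's exact computation across the full grouped transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shrink_group(group, noise_psd, k=2.7):
    """group: (n_patches, p, p) stack of similar patches; noise_psd: (p, p) per-coefficient variances."""
    coeffs = dctn(group, axes=(1, 2), norm="ortho")    # 2D transform of each patch
    sigma = np.sqrt(noise_psd)[None]                   # per-coefficient noise standard deviation
    coeffs[np.abs(coeffs) < k * sigma] = 0.0           # hard thresholding scaled by the PSD
    return idctn(coeffs, axes=(1, 2), norm="ortho")

patches = np.random.randn(16, 8, 8)                    # a group of 16 similar 8x8 patches
psd = np.full((8, 8), 0.01)                            # stand-in for the exact noise PSD
denoised = shrink_group(patches, psd)
```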

Journal ArticleDOI
TL;DR: An adaptive wavelet packet denoising algorithm applicable to numerous SHM technologies including acoustics, vibrations, and acoustic emission is outlined, which incorporates a blend of non-traditional approaches for noise estimation, threshold selection, and threshold application to augment the denoising performance of real-time structural health monitoring measurements.

Journal ArticleDOI
TL;DR: An efficient method based on a generative adversarial network is proposed to reduce speckle noise and preserve texture details in OCT images, achieving better denoising effectiveness.

Proceedings ArticleDOI
Shitong Luo, Wei Hu
12 Oct 2020
TL;DR: An autoencoder-like neural network is presented, aiming to capture intrinsic structures in point clouds and significantly outperforms state-of-the-art denoising methods under both synthetic noise and real world noise.
Abstract: 3D point clouds are often perturbed by noise due to the inherent limitations of acquisition equipment, which obstructs downstream tasks such as surface reconstruction and rendering. Previous works mostly infer the displacement of noisy points from the underlying surface, which, however, are not designed to recover the surface explicitly and may lead to sub-optimal denoising results. To this end, we propose to learn the underlying manifold of a noisy point cloud from differentiably subsampled points with trivial noise perturbation and their embedded neighborhood features, aiming to capture intrinsic structures in point clouds. Specifically, we present an autoencoder-like neural network. The encoder learns both local and non-local feature representations of each point, and then samples points with low noise via an adaptive differentiable pooling operation. Afterwards, the decoder infers the underlying manifold by transforming each sampled point along with the embedded feature of its neighborhood to a local surface centered around the point. By resampling on the reconstructed manifold, we obtain a denoised point cloud. Further, we design an unsupervised training loss, so that our network can be trained in either an unsupervised or supervised fashion. Experiments show that our method significantly outperforms state-of-the-art denoising methods under both synthetic noise and real-world noise. The code and data are available at https://github.com/luost26/DMRDenoise

Journal ArticleDOI
TL;DR: A new hyperspectral image denoising method is introduced that is able to cope with additive mixed noise, i.e., mixture of Gaussian noise, impulse noise, and stripes, and fully exploits a compact and sparse HSI representation based on its low-rank and self-similarity characteristics.
Abstract: This article introduces a new hyperspectral image (HSI) denoising method that is able to cope with additive mixed noise, i.e., mixture of Gaussian noise, impulse noise, and stripes, which usually corrupt hyperspectral images in the acquisition process. The proposed method fully exploits a compact and sparse HSI representation based on its low-rank and self-similarity characteristics. In order to deal with mixed noise having a complex statistical distribution, we propose to use the robust $\ell _1$ data fidelity instead of using the $\ell _2$ data fidelity, which is commonly employed for Gaussian noise removal. In a series of experiments with simulated and real datasets, the proposed method competes with state-of-the-art methods, yielding better results for mixed noise removal.

Journal ArticleDOI
Ting Xie, Shutao Li, Bin Sun
TL;DR: A nonconvex regularized low-rank and sparse matrix decomposition (NonRLRS) method is proposed for HSI denoising, which can simultaneously remove the Gaussian noise, impulse noise, dead lines, and stripes.
Abstract: Hyperspectral images (HSIs) are often degraded by a mixture of various types of noise during the imaging process, including Gaussian noise, impulse noise, and stripes. Such complex noise can plague subsequent HSI processing. Generally, most HSI denoising methods formulate sparsity optimization problems with convex norm constraints, which over-penalize large entries of vectors and may result in a biased solution. In this paper, a nonconvex regularized low-rank and sparse matrix decomposition (NonRLRS) method is proposed for HSI denoising, which can simultaneously remove Gaussian noise, impulse noise, dead lines, and stripes. The NonRLRS aims to decompose the degraded HSI, expressed in matrix form, into low-rank and sparse components with a robust formulation. To enhance the sparsity in both the intrinsic low-rank structure and the sparse corruptions, a novel nonconvex regularizer, named the normalized $\varepsilon$-penalty, is presented, which can adaptively shrink each entry. In addition, an effective algorithm based on majorization minimization (MM) is developed to solve the resulting nonconvex optimization problem. Specifically, the MM algorithm first substitutes the nonconvex objective function with a surrogate upper bound in each iteration, and then minimizes the constructed surrogate function, which enables the nonconvex problem to be solved within the framework of the reweighted technique. Experimental results on both simulated and real data demonstrate the effectiveness of the proposed method.
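
For intuition, a convex stand-in for the low-rank plus sparse decomposition, alternating a singular-value-thresholding step with an entrywise soft-thresholding step; the paper instead uses the nonconvex normalized ε-penalty solved by majorization minimization, which is not reproduced here.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def low_rank_plus_sparse(Y, lam=None, mu=1.0, iters=100):
    """Split Y (pixels x bands HSI matrix) into a low-rank part L and a sparse part S."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(Y.shape))
    L, S = np.zeros_like(Y), np.zeros_like(Y)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt      # singular-value thresholding (nuclear-norm prox)
        S = soft(Y - L, lam / mu)               # entrywise shrinkage (l1 prox) for sparse noise
    return L, S

Y = np.random.randn(200, 50)                    # toy flattened HSI: 200 pixels x 50 bands
L, S = low_rank_plus_sparse(Y)
```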

Journal ArticleDOI
Gao Fan, Jun Li, Hong Hao
TL;DR: The developed ResNet extracts high-level features from the vibration signal and learns the modal information of structures automatically; therefore, it can preserve the most important vibration characteristics in vibration signals and can assist in distinguishing physical modes from spurious modes in structural modal identification.

Journal ArticleDOI
TL;DR: Analysis of simulated signals and real underwater acoustic signals shows that CEEMDAN-MMSVC-LMSAF achieves a better noise reduction effect than other noise reduction techniques and has practical application value.

Journal ArticleDOI
TL;DR: This paper presents a two-stage deep convolutional neural network that models both the noise and the medical image simultaneously, and introduces both short-term and long-term connections in the network to efficiently promote information propagation between different layers.
Abstract: Most of the existing medical image denoising methods focus on estimating either the image or the residual noise. Moreover, they are usually designed for one specific noise type under a strong assumption about the noise distribution. However, not only random independent Gaussian or speckle noise but also structurally correlated ring or stripe noise is ubiquitous in various medical imaging instruments. Explicitly modeling the distributions of these complex noises in medical images is extremely hard; they cannot be accurately captured by a Gaussian or mixture-of-Gaussians model. To overcome these two drawbacks, in this paper, we propose to treat the image and noise components equally and naturally convert the image denoising task into an image decomposition problem. More precisely, we present a two-stage deep convolutional neural network (CNN) to model both the noise and the medical image simultaneously. On the one hand, we utilize both the image and the noise to separate them better. On the other hand, the noise subnetwork serves as a noise estimator that guides the image subnetwork with sufficient information about the noise, so we can easily handle different noise distributions and noise levels. To better cope with the gradient vanishing problem in this very deep network, we introduce both short-term and long-term connections, which efficiently promote information propagation between different layers. Extensive experiments have been performed on several kinds of noisy medical images, such as computed tomography and ultrasound images, and the proposed method has consistently outperformed state-of-the-art denoising methods.

Journal ArticleDOI
TL;DR: A noise-free maximum correntropy criterion (NFMCC) algorithm is proposed for system identification in non-Gaussian environments; it substantially reduces the detrimental effects of outliers and impulsive noise for different input signals.
Abstract: In this brief, a noise-free maximum correntropy criterion (NFMCC) algorithm is proposed for system identification in non-Gaussian environments. The proposed algorithm utilizes correntropy theory to construct a cost function that is realized based on a normalized Gaussian kernel. In addition, a new dynamic step size scheme is proposed to enhance the performance of the proposed algorithm; it is implemented by minimizing the noise-free a posteriori error signal, and the mean square deviation (MSD) is greatly decreased. The proposed NFMCC algorithm substantially reduces the detrimental effects of outliers and impulsive noise for different input signals. Moreover, Student's t-distributed noise is employed to evaluate the effectiveness of the proposed algorithm in terms of the MSD and convergence in heavy-tailed noise environments. The parameter effects on the NFMCC algorithm are also presented, and its performance is investigated on a real-life channel measured underwater. Simulation results prove the effectiveness of the proposed algorithm, which offers reasonable computational complexity and an acceptable running time.
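
A plain maximum-correntropy adaptive filter with a fixed step size, sketched to show how the Gaussian-kernel weighting suppresses impulsive errors; the NFMCC's noise-free a posteriori error and dynamic step size are not reproduced, and all parameter values below are illustrative.

```python
import numpy as np

def mcc_identify(x, d, taps=8, mu=0.05, sigma=1.0):
    """Identify an FIR system from input x and desired signal d corrupted by impulsive noise."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]              # regressor [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u                             # a priori error
        kernel = np.exp(-e ** 2 / (2 * sigma ** 2))  # small weight when e is an outlier
        w += mu * kernel * e * u                     # correntropy-weighted LMS-style update
    return w

true_w = np.array([0.7, -0.3, 0.2, 0.1, 0.0, 0.05, -0.1, 0.02])
x = np.random.randn(5000)
d = np.convolve(x, true_w)[:len(x)] + 0.01 * np.random.standard_t(2, size=len(x))
print(np.round(mcc_identify(x, d), 2))
```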

Journal ArticleDOI
TL;DR: The results validate that the optimal trilateral filtering approach outperforms other conventional methods in terms of Mean-Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
Abstract: In this paper, a novel denoising approach based on optimal trilateral filtering using Grey Wolf Optimization (GWO) is proposed. First, a database of noisy images is generated by adding Gaussian noise, salt-and-pepper noise, and random noise to the captured image. The filtering of the noisy images is performed by the Block-Matching and 3D filtering (BM3D) algorithm over the image components obtained through the moving frame approach. Then, using optimal trilateral filtering, the denoised images are reconstructed. The noisy images are thus processed by a two-level filtering approach consisting of moving frame-based BM3D and optimal trilateral filtering. The proposed optimal trilateral filter employs the Grey Wolf Optimization algorithm to select the parameters optimally, improving the efficiency of the filtering method and reducing the time required for manual tuning. The performance of the proposed image denoising algorithm is analyzed using multiple datasets, and the results are compared with existing conventional approaches. The results validate that the optimal trilateral filtering approach outperforms other conventional methods in terms of Mean-Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).

Proceedings ArticleDOI
14 Jun 2020
TL;DR: Wavelet integrated convolutional neural networks (WaveCNets) are proposed for image classification, in which feature maps are decomposed into low-frequency and high-frequency components during down-sampling.
Abstract: Convolutional Neural Networks (CNNs) are generally prone to noise interruptions, i.e., small image noise can cause drastic changes in the output. To suppress the effect of noise on the final prediction, we enhance CNNs by replacing max-pooling, strided convolution, and average pooling with the Discrete Wavelet Transform (DWT). We present general DWT and Inverse DWT (IDWT) layers applicable to various wavelets such as Haar, Daubechies, and Cohen wavelets, and design wavelet integrated CNNs (WaveCNets) using these layers for image classification. In WaveCNets, feature maps are decomposed into low-frequency and high-frequency components during down-sampling. The low-frequency component stores the main information, including the basic object structures, and is transmitted into the subsequent layers to extract robust high-level features. The high-frequency components, which contain most of the data noise, are dropped during inference to improve the noise-robustness of the WaveCNets. Our experimental results on ImageNet and ImageNet-C (the noisy version of ImageNet) show that WaveCNets, the wavelet integrated versions of VGG, ResNets, and DenseNet, achieve higher accuracy and better noise-robustness than their vanilla versions.
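
A minimal sketch of the wavelet-integrated down-sampling idea for the Haar case: a depthwise convolution with the Haar low-pass filter and stride 2 keeps only the LL band. WaveCNets also provide IDWT layers and support other wavelets, which are not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarLowPass(nn.Module):
    """Downsample by a 2D Haar DWT and keep only the low-frequency (LL) band."""
    def forward(self, x):                          # x: (N, C, H, W) with even H, W
        ll = 0.5 * torch.ones(1, 1, 2, 2, device=x.device, dtype=x.dtype)
        weight = ll.repeat(x.size(1), 1, 1, 1)     # one depthwise Haar LL filter per channel
        return F.conv2d(x, weight, stride=2, groups=x.size(1))

pool = HaarLowPass()
feat = torch.randn(1, 16, 32, 32)
print(pool(feat).shape)                            # torch.Size([1, 16, 16, 16])
```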

Journal ArticleDOI
TL;DR: The proposed HSI denoising framework is modeled as a convolutional neural network (CNN) constrained non-negative matrix factorization problem, which has a relatively good performance on the removal of the Gaussian and mixed Gaussian impulse noises.
Abstract: Deep learning has been successfully introduced for 2D-image denoising, but it is still unsatisfactory for hyperspectral image (HSI) denoising due to the unacceptable computational complexity of the end-to-end training process and the difficulty of building a universal 3D-image training dataset. In this paper, instead of developing an end-to-end deep learning denoising network, we propose an HSI denoising framework for the removal of mixed Gaussian impulse noise, in which the denoising problem is modeled as a convolutional neural network (CNN) constrained non-negative matrix factorization problem. Using proximal alternating linearized minimization, the optimization can be divided into three steps: the update of the spectral matrix, the update of the abundance matrix, and the estimation of the sparse noise. We then design the CNN architecture and propose two training schemes that allow the CNN to be trained with a 2D-image dataset. Compared with state-of-the-art denoising methods, the proposed method performs relatively well on the removal of Gaussian and mixed Gaussian impulse noise. More importantly, the proposed model needs to be trained only once on a 2D-image dataset, yet it can be used to denoise HSIs with different numbers of spectral bands.

Journal ArticleDOI
TL;DR: Noise2Inverse is proposed, a deep CNN-based denoising method for linear image reconstruction algorithms that does not require any additional clean or noisy data; it demonstrates an improvement in peak signal-to-noise ratio and structural similarity index compared to state-of-the-art image denoising methods and conventional reconstruction methods such as Total-Variation Minimization.
Abstract: Recovering a high-quality image from noisy indirect measurements is an important problem with many applications. For such inverse problems, supervised deep convolutional neural network (CNN)-based denoising methods have shown strong results, but the success of these supervised methods critically depends on the availability of a high-quality training dataset of similar measurements. For image denoising, methods are available that enable training without a separate training dataset by assuming that the noise in two different pixels is uncorrelated. However, this assumption does not hold for inverse problems, resulting in artifacts in the denoised images produced by existing methods. Here, we propose Noise2Inverse, a deep CNN-based denoising method for linear image reconstruction algorithms that does not require any additional clean or noisy data. Training a CNN-based denoiser is enabled by exploiting the noise model to compute multiple statistically independent reconstructions. We develop a theoretical framework which shows that such training indeed obtains a denoising CNN, assuming the measured noise is element-wise independent, and zero-mean. On simulated CT datasets, Noise2Inverse demonstrates an improvement in peak signal-to-noise ratio and structural similarity index compared to state-of-the-art image denoising methods, and conventional reconstruction methods, such as Total-Variation Minimization. We also demonstrate that the method is able to significantly reduce noise in challenging real-world experimental datasets.
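
A hedged sketch of the Noise2Inverse training idea: split the noisy measurements into statistically independent subsets, reconstruct each separately, and train a CNN to map one sub-reconstruction to the other. The `reconstruct` argument is an assumed placeholder for any linear reconstruction routine (e.g. filtered back-projection) returning a (1, 1, H, W) tensor; it is not provided here.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def training_step(sinogram, angles, reconstruct):
    """sinogram: (n_angles, n_detectors); `reconstruct` is any linear algorithm, e.g. FBP."""
    rec_a = reconstruct(sinogram[0::2], angles[0::2])   # reconstruction from even-indexed angles
    rec_b = reconstruct(sinogram[1::2], angles[1::2])   # statistically independent counterpart
    loss = ((net(rec_a) - rec_b) ** 2).mean()           # map one sub-reconstruction to the other
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```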

Journal ArticleDOI
TL;DR: A filtered-x generalized maximum correntropy criterion (FxGMCC) algorithm is proposed, which adopts the generalized Gaussian density (GGD) function as its kernel and is superior to most existing robust adaptive algorithms.
Abstract: As a robust nonlinear similarity measure, the maximum correntropy criterion (MCC) has been successfully applied to active noise control (ANC) for impulsive noise. The default kernel function of the filtered-x maximum correntropy criterion (FxMCC) algorithm is the Gaussian kernel, which is desirable in many cases for its smoothness and strict positive-definiteness. However, it is not always the best choice. In this study, a filtered-x generalized maximum correntropy criterion (FxGMCC) algorithm is proposed, which adopts the generalized Gaussian density (GGD) function as its kernel. The FxGMCC algorithm is more robust in non-Gaussian environments, but it still adopts a single error norm, which limits its convergence rate and noise reduction performance. To surmount this problem, an improved FxGMCC (IFxGMCC) algorithm with a continuous mixed Lp-norm is proposed. Moreover, to trade off fast convergence against low steady-state misalignment, a convexly combined IFxGMCC (C-IFxGMCC) algorithm is further developed. The stability mechanism and computational complexity of the proposed algorithms are analyzed. Simulation results in the context of different impulsive noises as well as real noise signals verify that the proposed algorithms are superior to most existing robust adaptive algorithms.

Journal ArticleDOI
TL;DR: A novel Convolutional Neural Network, viz.

Journal ArticleDOI
01 Apr 2020 - Optik
TL;DR: An adaptive TV denoising method is developed based on the general regularized image restoration model with an L1 fidelity term for handling the salt-and-pepper noise model; results indicate artifact-free, edge-preserving restorations.
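
A gradient-descent sketch of a smoothed TV model with L1 data fidelity (the robust fidelity term mentioned above); the regularization weight is fixed here, whereas the paper's method adapts its parameters.

```python
import torch

def tv_l1_denoise(f, lam=1.0, steps=300, lr=0.05, eps=1e-6):
    """f: (H, W) noisy image; minimizes smoothed TV(u) + lam * |u - f|_1 by gradient descent."""
    u = f.clone().requires_grad_()
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        dx = u[:, 1:] - u[:, :-1]
        dy = u[1:, :] - u[:-1, :]
        tv = torch.sqrt(dx[:-1, :] ** 2 + dy[:, :-1] ** 2 + eps).sum()   # smoothed total variation
        fid = torch.sqrt((u - f) ** 2 + eps).sum()                       # L1 fidelity, robust to salt & pepper
        loss = tv + lam * fid
        opt.zero_grad(); loss.backward(); opt.step()
    return u.detach()

noisy = torch.rand(64, 64)
restored = tv_l1_denoise(noisy)
```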

Journal ArticleDOI
TL;DR: A new multi-patch collaborative method for point cloud denoising, which is solved as a low-rank matrix recovery problem and outperforms state-of-the-art methods in both noise removal and feature preservation.
Abstract: Point clouds are the primary output of 3D scanners and depth cameras. They usually contain more raw geometric features, as well as higher levels of noise, than the reconstructed mesh. Although many mesh denoising methods have proven to be effective in noise removal, they hardly work well on noisy point clouds. We propose a new multi-patch collaborative method for point cloud denoising, which is solved as a low-rank matrix recovery problem. Unlike traditional single-patch based denoising approaches, our approach is inspired by the geometric statistics which indicate that a number of surface patches sharing approximate geometric properties always exist within a 3D model. Based on this observation, we define a rotation-invariant height-map patch (HMP) for each point by robust Bi-PCA encoding bilaterally filtered normal information, and group its non-local similar patches together. Within each group, all patches are geometrically similar while suffering from noise. We pack the height maps of each group into an HMP matrix, whose initial rank is high but can be significantly reduced. We design an improved low-rank recovery model, by imposing a graph constraint to filter noise. Experiments on synthetic and raw datasets demonstrate that our method outperforms state-of-the-art methods in both noise removal and feature preservation.