scispace - formally typeset

Showing papers on "Noise reduction published in 2019"


Proceedings ArticleDOI
15 Jun 2019
TL;DR: CBDNet as discussed by the authors proposes to train a convolutional blind denoising network with a more realistic noise model and real-world noisy-clean image pairs to improve the generalization ability of deep CNN denoisers.
Abstract: While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models easily overfit the simplified AWGN model, which deviates severely from the complicated real-world noise model. In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with a more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and the in-camera signal processing pipeline are considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy for conveniently rectifying the denoising result, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of the noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over the state-of-the-art in terms of quantitative metrics and visual quality. The code has been made available at https://github.com/GuoShi28/CBDNet.
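The realistic noise synthesis CBDNet relies on can be illustrated with a small sketch. This is a generic heteroscedastic (signal-dependent) Gaussian approximation of Poisson-Gaussian sensor noise, not the authors' exact pipeline, which additionally models the in-camera ISP; the parameter names `sigma_s` and `sigma_c` are illustrative.

```python
import numpy as np

def add_signal_dependent_noise(img, sigma_s=0.08, sigma_c=0.02, seed=0):
    """Heteroscedastic Gaussian approximation of sensor noise:
    variance(x) = x * sigma_s**2 + sigma_c**2 (signal-dependent + constant)."""
    rng = np.random.default_rng(seed)
    std = np.sqrt(img * sigma_s ** 2 + sigma_c ** 2)
    return np.clip(img + rng.normal(0.0, 1.0, img.shape) * std, 0.0, 1.0)

clean = np.full((64, 64), 0.5)             # flat mid-gray test image in [0, 1]
noisy = add_signal_dependent_noise(clean)  # noise std here is about 0.06
```

Training on pairs generated this way exposes the network to noise whose strength varies with image intensity, unlike a fixed-sigma AWGN model.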

745 citations


Proceedings ArticleDOI
01 Oct 2019
TL;DR: In this paper, a single-stage blind real image denoising network (RIDNet) was proposed by employing a modular architecture, which uses residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies.
Abstract: Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real noisy photographs and requires multiple-stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual-on-the-residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrates the superiority of our RIDNet.

285 citations


Posted Content
TL;DR: A general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data is proposed, which allows us to calibrate $\mathcal{J}$-invariant versions of any parameterised denoising algorithm, from the single hyperparameter of a median filter to the millions of weights of a deep neural network.
Abstract: We propose a general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data. The only assumption is that the noise exhibits statistical independence across different dimensions of the measurement, while the true signal exhibits some correlation. For a broad class of functions ("$\mathcal{J}$-invariant"), it is then possible to estimate the performance of a denoiser from noisy data alone. This allows us to calibrate $\mathcal{J}$-invariant versions of any parameterised denoising algorithm, from the single hyperparameter of a median filter to the millions of weights of a deep neural network. We demonstrate this on natural image and microscopy data, where we exploit noise independence between pixels, and on single-cell gene expression data, where we exploit independence between detections of individual molecules. This framework generalizes recent work on training neural nets from noisy images and on cross-validation for matrix factorization.
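The key $\mathcal{J}$-invariance idea can be sketched with a toy denoiser whose output at each pixel never depends on that pixel's own input; for such a function, the self-supervised loss against the noisy data equals the true MSE plus the noise variance, so denoisers can be ranked without clean data. A minimal numpy sketch, where the four-neighbour mean is an illustrative stand-in for the paper's masked denoisers:

```python
import numpy as np

def donut_mean(img):
    """J-invariant predictor: each output pixel is the mean of its 4
    neighbours, so output[i, j] never depends on input[i, j]."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

rng = np.random.default_rng(0)
sigma = 0.1
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
clean = np.sin(4 * x) * np.cos(4 * y)            # smooth "true" signal
noisy = clean + rng.normal(0, sigma, clean.shape)

pred = donut_mean(noisy)
self_loss = np.mean((pred - noisy) ** 2)  # computable from noisy data alone
true_mse = np.mean((pred - clean) ** 2)   # needs ground truth
# For any J-invariant denoiser: E[self_loss] = E[true_mse] + sigma^2,
# so self_loss ranks denoisers without clean images.
```

Sweeping a hyperparameter (e.g., a filter radius) and picking the value minimising `self_loss` is exactly the calibration procedure the abstract describes.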

267 citations


Posted Content
TL;DR: A novel single-stage blind real image denoising network (RIDNet) is proposed by employing a modular architecture that uses residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies.
Abstract: Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real noisy photographs and requires multiple-stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual-on-the-residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrates the superiority of our RIDNet.

243 citations


Journal ArticleDOI
TL;DR: The deep convolutional neural network (CNN) is introduced to achieve the HSI denoising method (HSI-DeNet), which can be regarded as a tensor-based method by directly learning the filters in each layer without damaging the spectral-spatial structures.
Abstract: The spectral and the spatial information in hyperspectral images (HSIs) are the two sides of the same coin. How to jointly model them is the key issue for HSIs' noise removal, including random noise, structural stripe noise, and dead pixels/lines. In this paper, we introduce the deep convolutional neural network (CNN) to achieve this goal. The learned filters can well extract the spatial information within their local receptive field. Meanwhile, the spectral correlation can be depicted by the multiple channels of the learned 2-D filters, namely, the number of filters in each layer. The consequent advantages of our CNN-based HSI denoising method (HSI-DeNet) over previous methods are threefold. First, the proposed HSI-DeNet can be regarded as a tensor-based method by directly learning the filters in each layer without damaging the spectral-spatial structures. Second, the HSI-DeNet can simultaneously accommodate various kinds of noise in HSIs. Moreover, our method is flexible for both single image and multiple images by slightly modifying the channels of the filters in the first and last layers. Last but not least, our method is extremely fast in the testing phase, which makes it more practical for real applications. The proposed HSI-DeNet is extensively evaluated on several HSIs, and outperforms state-of-the-art methods in terms of both speed and performance.

219 citations


Proceedings Article
24 May 2019
TL;DR: In this article, the authors propose a general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data.
Abstract: We propose a general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data. The only assumption is that the noise exhibits statistical independence across different dimensions of the measurement, while the true signal exhibits some correlation. For a broad class of functions ("$\mathcal{J}$-invariant"), it is then possible to estimate the performance of a denoiser from noisy data alone. This allows us to calibrate $\mathcal{J}$-invariant versions of any parameterised denoising algorithm, from the single hyperparameter of a median filter to the millions of weights of a deep neural network. We demonstrate this on natural image and microscopy data, where we exploit noise independence between pixels, and on single-cell gene expression data, where we exploit independence between detections of individual molecules. This framework generalizes recent work on training neural nets from noisy images and on cross-validation for matrix factorization.

158 citations


Journal ArticleDOI
TL;DR: The article preprocesses the composite fault signal with ensemble empirical mode decomposition (EEMD), reconstructs the intrinsic mode functions with the same time scale, proposes kurtosis spectral entropy as the objective function, and uses the proposed method to search for complex fault pulse signals in a strong noise environment.

146 citations


Journal ArticleDOI
TL;DR: Fractional Fourier entropy (FrFE)-based hyperspectral anomaly detection method can significantly distinguish signal from background and noise, and is implemented in the optimal fractional domain.
Abstract: Anomaly detection is an important task in hyperspectral remote sensing. Most widely used detectors, such as Reed–Xiaoli (RX), have been developed only using original spectral signatures, which may lack the capability of signal enhancement and noise suppression. In this article, an effective alternative approach, fractional Fourier entropy (FrFE)-based hyperspectral anomaly detection method, is proposed. First, fractional Fourier transform (FrFT) is employed as preprocessing, which obtains features in an intermediate domain between the original reflectance spectrum and its Fourier transform with complementary strengths by space-frequency representations. It is desirable for noise removal so as to enhance the discrimination between anomalies and background. Furthermore, an FrFE-based step is developed to automatically determine an optimal fractional transform order. With a more flexible constraint, i.e., Shannon entropy uncertainty principle on FrFT, the proposed method can significantly distinguish signal from background and noise. Finally, the proposed FrFE-based anomaly detection method is implemented in the optimal fractional domain. Experimental results obtained on real hyperspectral datasets demonstrate that the proposed method is quite competitive.

142 citations


Posted Content
TL;DR: This work proposes a new variational inference method for blind image denoising, which integrates both noise estimation and image denoising into a unique Bayesian framework, and presents an approximate posterior, parameterized by deep neural networks, that takes the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image.
Abstract: Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for blind image denoising. Specifically, an approximate posterior, parameterized by deep neural networks, is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be easily implemented for blind image denoising with automatic noise estimation for the test noisy image. On one hand, as other data-driven deep learning methods, our method, namely variational denoising network (VDN), can perform denoising efficiently due to its explicit form of posterior expression. On the other hand, VDN inherits the advantages of traditional model-driven approaches, especially the good generalization capability of generative models. VDN has good interpretability and can be flexibly utilized to estimate and remove complicated non-i.i.d. noise collected in real scenarios. Comprehensive experiments are performed to substantiate the superiority of our method in blind image denoising.

141 citations


Journal ArticleDOI
TL;DR: A bio-inspired optimization-based filtering system is considered for the medical image (MI) denoising process: a bilateral filter (BF) whose Gaussian and spatial weights are chosen using swarm-based optimization, namely the Dragonfly (DF) and Modified Firefly (MFF) algorithms.

129 citations


Journal ArticleDOI
TL;DR: In this article, a spatial-spectral gradient network (SSGN) is proposed for mixed noise removal in hyperspectral images (HSIs). The proposed method employs a spatial-spectral gradient learning strategy, in consideration of the unique spatial structure directionality of sparse noise and the spectral differences, with additional complementary information for effectively extracting intrinsic and deep features of HSIs.
Abstract: The existence of hybrid noise in hyperspectral images (HSIs) severely degrades the data quality, reduces the interpretation accuracy of HSIs, and restricts the subsequent HSI applications. In this paper, the spatial–spectral gradient network (SSGN) is presented for mixed noise removal in HSIs. The proposed method employs a spatial–spectral gradient learning strategy, in consideration of the unique spatial structure directionality of sparse noise and spectral differences with additional complementary information for effectively extracting intrinsic and deep features of HSIs. Based on a fully cascaded multiscale convolutional network, SSGN can simultaneously deal with different types of noise in different HSIs or spectra by the use of the same model. The simulated and real-data experiments undertaken in this study confirmed that the proposed SSGN outperforms at mixed noise removal compared with the other state-of-the-art HSI denoising algorithms, in evaluation indices, visual assessments, and time consumption.

Proceedings ArticleDOI
16 Jun 2019
TL;DR: A deep iterative down-up convolutional neural network (DIDN) for image denoising, which repeatedly decreases and increases the resolution of the feature maps.
Abstract: Networks using down-scaling and up-scaling of feature maps have been studied extensively in low-level vision research owing to efficient GPU memory usage and their capacity to yield large receptive fields. In this paper, we propose a deep iterative down-up convolutional neural network (DIDN) for image denoising, which repeatedly decreases and increases the resolution of the feature maps. The basic structure of the network is inspired by U-Net, which was originally developed for semantic segmentation. We modify the down-scaling and up-scaling layers for the image denoising task. Conventional denoising networks are trained to work with a single noise level, or alternatively use noise information as an input to address multi-level noise with a single model. Conversely, because the efficient memory usage of our network enables it to handle multiple parameters, it is capable of processing a wide range of noise levels with a single model without requiring noise-information inputs as a work-around. Consequently, our DIDN exhibits state-of-the-art performance on the benchmark dataset and also demonstrates its superiority in the NTIRE 2019 real image denoising challenge.

Proceedings ArticleDOI
15 Jun 2019
TL;DR: This paper constructs a dataset - the Fluorescence Microscopy Denoising (FMD) dataset - that is dedicated to Poisson-Gaussian denoising, uses it to benchmark 10 representative denoising algorithms, and finds that deep learning methods have the best performance.
Abstract: Fluorescence microscopy has enabled a dramatic development in modern biology. Due to its inherently weak signal, fluorescence microscopy is not only much noisier than photography, but also presented with Poisson-Gaussian noise where Poisson noise, or shot noise, is the dominating noise source. To get clean fluorescence microscopy images, it is highly desirable to have effective denoising algorithms and datasets that are specifically designed to denoise fluorescence microscopy images. While such algorithms exist, no such datasets are available. In this paper, we fill this gap by constructing a dataset - the Fluorescence Microscopy Denoising (FMD) dataset - that is dedicated to Poisson-Gaussian denoising. The dataset consists of 12,000 real fluorescence microscopy images obtained with commercial confocal, two-photon, and wide-field microscopes and representative biological samples such as cells, zebrafish, and mouse brain tissues. We use image averaging to effectively obtain ground truth images and 60,000 noisy images with different noise levels. We use this dataset to benchmark 10 representative denoising algorithms and find that deep learning methods have the best performance. To our knowledge, this is the first real microscopy image dataset for Poisson-Gaussian denoising purposes and it could be an important tool for high-quality, real-time denoising applications in biomedical research.
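The image-averaging strategy used to build ground truth rests on the fact that averaging N independent noisy captures shrinks the noise roughly by a factor of sqrt(N). A small sketch under an assumed Poisson-Gaussian capture model; the photon count and read-noise values are illustrative, not the FMD acquisition settings:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.uniform(0.2, 0.8, size=(32, 32))      # stand-in for a true image

def capture(img, photons=50, read_sigma=0.01):
    """One noisy acquisition: Poisson shot noise dominates at low photon
    counts, plus additive Gaussian read noise."""
    shot = rng.poisson(img * photons) / photons
    return shot + rng.normal(0, read_sigma, img.shape)

frames = np.stack([capture(clean) for _ in range(50)])
ground_truth = frames.mean(axis=0)                # noise shrinks ~ 1/sqrt(N)

err_single = np.abs(frames[0] - clean).mean()
err_avg = np.abs(ground_truth - clean).mean()
```

With 50 frames the average is roughly 7x closer to the true image than any single capture, which is why averaged stacks can serve as ground truth.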

Journal ArticleDOI
TL;DR: In this article, a new method for fault feature extraction of rolling bearings based on singular value decomposition (SVD) and frequency band entropy (OFBE) was proposed, which is based on the principle of maximum kurtosis.

Proceedings ArticleDOI
16 Jun 2019
TL;DR: This study introduces a densely connected hierarchical image denoising network (DHDN), which exceeds the performance of state-of-the-art image denoising solutions; experiments establish that the proposed network outperforms conventional methods.
Abstract: Recently, deep convolutional neural networks have been applied in numerous image processing studies and have exhibited drastically improved performance. In this study, we introduce a densely connected hierarchical image denoising network (DHDN), which exceeds the performance of state-of-the-art image denoising solutions. Our proposed network improves image denoising performance by applying the hierarchical architecture of a modified U-Net; this allows our network to use a larger number of parameters than other methods. In addition, we induce feature reuse and address the vanishing-gradient problem by applying dense connectivity and residual learning to our convolution blocks and network. Finally, we successfully apply model ensemble and self-ensemble methods, which enable us to further improve the performance of the proposed network. The performance of the proposed network is validated by winning second place in the NTIRE 2019 real image denoising challenge sRGB track and third place in the raw-RGB track. Additional experimental results on additive white Gaussian noise removal also establish that the proposed network outperforms conventional methods, notwithstanding the fact that it handles a wide range of noise levels with a single set of trained parameters.

Journal ArticleDOI
TL;DR: With this extensive review, researchers in image processing will be able to ascertain which of these denoising methods is best applicable to their research needs and the application domain where such methods are contemplated for implementation.

Journal ArticleDOI
TL;DR: The goal of this paper is to provide an apt choice of denoising method that suits CT and X-ray images, by way of a review of the following important Poisson noise removal methods.
Abstract: In medical imaging systems, denoising is one of the important image processing tasks. Automatic noise removal will improve the quality of diagnosis and requires careful treatment of the obtained imagery. Computed tomography (CT) and X-ray imaging systems use X radiation to capture images, and the images are usually corrupted by noise following a Poisson distribution. Due to the importance of Poisson noise removal in medical imaging, there are many state-of-the-art methods that have been studied in the image processing literature. These include methods based on total variation (TV) regularization, wavelets, principal component analysis, machine learning, etc. In this work, we will provide a review of the following important Poisson removal methods: the method based on the modified TV model, the adaptive TV method, the adaptive non-local total variation method, the method based on the higher-order natural image prior model, the Poisson-reducing bilateral filter, the PURE-LET method, and the variance-stabilizing-transform-based methods. Our review focuses on methodology overview, accuracy, execution time, and advantage/disadvantage assessments. The goal of this paper is to provide an apt choice of denoising method that suits CT and X-ray images. The integration of several high-quality denoising methods in image processing software for medical imaging systems will always be an excellent option and will help further image analysis for computer-aided diagnosis.
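The variance-stabilizing-transform family mentioned in this review hinges on the Anscombe transform, which maps Poisson data to approximately unit-variance Gaussian data so that any Gaussian denoiser can then be applied before transforming back. A minimal sketch, using the simple algebraic inverse; exact unbiased inverses are preferred in practice:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson(lam) -> approx N(., 1)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; unbiased inverses are more involved.
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
lam = np.full(100_000, 20.0)        # constant "photon flux"
counts = rng.poisson(lam)           # raw Poisson data: variance = lam = 20
stabilized = anscombe(counts)       # variance is approx 1, independent of lam
```

After stabilization, a denoiser designed for additive unit-variance Gaussian noise can be applied, and `inverse_anscombe` maps the result back to the intensity domain.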

Journal ArticleDOI
TL;DR: An improved feed-forward denoising convolution neural network (DnCNN) is proposed to suppress random noise in desert seismic data and can open a new direction in the area of seismic data processing.
Abstract: High-quality seismic data are the basis for stratigraphic imaging and interpretation, but the existence of random noise can greatly affect the quality of seismic data. At present, most understanding and processing of random noise still remains at the level of Gaussian white noise. As resources diminish, the acquired seismic data have a lower signal-to-noise ratio and more complex noise characteristics. In particular, the random noise in desert areas is low-frequency, non-Gaussian, nonstationary, and high-energy, with serious aliasing between the effective signal and random noise in the frequency domain, which has brought great difficulties to the recovery of seismic events by conventional denoising methods. To solve this problem, an improved feed-forward denoising convolutional neural network (DnCNN) is proposed to suppress random noise in desert seismic data. DnCNN has the characteristics of automatic feature extraction and blind denoising. According to the characteristics of desert noise, we modify the original DnCNN in terms of patch size, convolution kernel size, network depth, and training set to make it suitable for low-frequency and non-Gaussian desert noise suppression. Both simulated and practical experiments prove that the improved DnCNN has obvious advantages in terms of desert noise and surface wave suppression as well as effective signal amplitude preservation. In addition, the improved DnCNN, in contrast to existing methods, has considerable potential to benefit from large data sets. Therefore, we believe that it can open a new direction in the area of seismic data processing.

Journal ArticleDOI
TL;DR: DeepDenoiser as discussed by the authors uses a deep neural network to simultaneously learn a sparse representation of data in the time-frequency domain and a non-linear function that maps this representation into masks that decompose input data into a signal of interest and noise.
Abstract: Frequency filtering is widely used in routine processing of seismic data to improve the signal-to-noise ratio (SNR) of recorded signals and by doing so to improve subsequent analyses. In this paper, we develop a new denoising/decomposition method, DeepDenoiser, based on a deep neural network. This network is able to simultaneously learn a sparse representation of data in the time–frequency domain and a non-linear function that maps this representation into masks that decompose input data into a signal of interest and noise (defined as any non-seismic signal). We show that DeepDenoiser achieves impressive denoising of seismic signals even when the signal and noise share a common frequency band. Because the noise statistics are automatically learned from data and require no assumptions, our method properly handles white noise, a variety of colored noise, and non-earthquake signals. DeepDenoiser can significantly improve the SNR with minimal changes in the waveform shape of interest, even in the presence of high noise levels. We demonstrate the effect of our method on improving earthquake detection. There are clear applications of DeepDenoiser to seismic imaging, micro-seismic monitoring, and preprocessing of ambient noise data. We also note that the potential applications of our approach are not limited to these applications or even to earthquake data and that our approach can be adapted to diverse signals and applications in other settings.
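The mask-based decomposition DeepDenoiser performs can be illustrated with a fixed frequency-domain mask; the network instead learns its masks in the time-frequency (STFT) domain, but the mechanics of masking and reconstruction are the same. A toy sketch with a hand-picked band and an assumed 2 Hz signal:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 2.0 * t)        # 2 Hz stand-in for a seismic signal
recorded = signal + rng.normal(0, 0.5, t.size)

# Decompose via a frequency-domain mask: 1 on the signal band, 0 elsewhere.
spec = np.fft.rfft(recorded)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
mask = (freqs > 1.0) & (freqs < 3.0)        # hand-picked; DeepDenoiser learns this
denoised = np.fft.irfft(spec * mask, n=t.size)
residual_noise = recorded - denoised        # the "noise" output of the decomposition

mse_before = np.mean((recorded - signal) ** 2)
mse_after = np.mean((denoised - signal) ** 2)
```

A learned mask removes the need to hand-pick the band and, unlike this fixed filter, can separate signal and noise even when their spectra overlap.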

Proceedings ArticleDOI
15 Oct 2019
TL;DR: A novel progressive Retinex framework is presented, in which the illumination and noise of a low-light image are perceived in a mutually reinforced manner, leading to low-light enhancement results with reduced noise.
Abstract: Contrast enhancement and noise removal are coupled problems for low-light image enhancement. The existing Retinex-based methods do not take this coupling into consideration, resulting in under- or over-smoothing of the enhanced images. To address this issue, this paper presents a novel progressive Retinex framework, in which the illumination and noise of a low-light image are perceived in a mutually reinforced manner, leading to low-light enhancement results with reduced noise. Specifically, two fully pointwise convolutional neural networks are devised to model the statistical regularities of ambient light and image noise respectively, and to leverage them as constraints to facilitate the mutual learning process. The proposed method not only suppresses the interference caused by the ambiguity between tiny textures and image noise, but also greatly improves the computational efficiency. Moreover, to solve the problem of insufficient training data, we propose an image synthesis strategy based on a camera imaging model, which generates color images corrupted by illumination-dependent noise. Experimental results on both synthetic and real low-light images demonstrate the superiority of our proposed approaches against the State-Of-The-Art (SOTA) low-light enhancement methods.

Proceedings Article
29 Aug 2019
TL;DR: In this paper, a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, is proposed for blind image denoing, where an approximate posterior, parameterized by deep neural networks, is presented by taking the intrinsic clean image and noise variance as latent variables conditioned on the input noisy image.
Abstract: Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for blind image denoising. Specifically, an approximate posterior, parameterized by deep neural networks, is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be easily implemented for blind image denoising with automatic noise estimation for the test noisy image. On one hand, as other data-driven deep learning methods, our method, namely variational denoising network (VDN), can perform denoising efficiently due to its explicit form of posterior expression. On the other hand, VDN inherits the advantages of traditional model-driven approaches, especially the good generalization capability of generative models. VDN has good interpretability and can be flexibly utilized to estimate and remove complicated non-i.i.d. noise collected in real scenarios. Comprehensive experiments are performed to substantiate the superiority of our method in blind image denoising.

Journal ArticleDOI
TL;DR: In this article, an expansion-chamber muffler is designed and its noise-reduction effect analyzed; the results show that after the muffler is installed, noise reduction in the low-frequency range reaches up to 37.5 dB, which limits the maximum noise to around 82 dB.
Abstract: Manifolds play a role of pressure balance, buffering, and rectification for different branch pipelines, and the flow noise of manifolds has long been a serious problem in natural gas transmission stations. By changing the number of outlet pipes of the manifold and the positions of the intake pipes, the distribution of the Sound Pressure Level (SPL) of the manifold flow noise is analyzed based on the Ffowcs Williams-Hawkings (FW-H) acoustic analogy theory and Large Eddy Simulations (LESs). The three-dimensional simulation analysis of the flow field shows that pressure pulsation is the main source of manifold noise. As the number of outlet pipes increases, the SPLs of fluid dynamic noise at the end of the inlet pipes are significantly reduced, by about 10 dB on average; when the inlet and outlet piping are oppositely connected, the SPL is 2 dB~3 dB lower than in staggered connections. An expansion-chamber muffler is designed and its noise-reduction effect analyzed; the results show that after the muffler is installed, the noise reduction in the low-frequency range reaches up to 37.5 dB, which limits the maximum noise to around 82 dB.
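The decibel arithmetic behind the reported figures: a 37.5 dB drop in SPL corresponds to cutting RMS sound pressure by a factor of about 75. A quick sketch; the 119.5 dB unmuffled level is hypothetical, chosen simply as 82 + 37.5:

```python
import math

P_REF = 20e-6  # reference sound pressure in air, Pa

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(p_rms / P_REF)

def pressure_from_spl(db):
    """Inverse: RMS pressure (Pa) from an SPL in dB."""
    return P_REF * 10.0 ** (db / 20.0)

before = pressure_from_spl(119.5)   # hypothetical unmuffled level
after = pressure_from_spl(82.0)     # level reported after the muffler
ratio = before / after              # pressure reduction factor, about 75x
```

Because SPL is logarithmic, the same 37.5 dB reduction represents the same pressure ratio regardless of the starting level.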

Proceedings ArticleDOI
16 Jun 2019
TL;DR: This work proposes ViDeNN: a CNN for Video Denoising without prior knowledge on the noise distribution (blind denoising), and demonstrates the importance of the data used for CNNs training, creating for this purpose a specific dataset for low-light conditions.
Abstract: We propose ViDeNN: a CNN for Video Denoising without prior knowledge on the noise distribution (blind denoising). The CNN architecture uses a combination of spatial and temporal filtering, learning to spatially denoise the frames first and at the same time how to combine their temporal information, handling objects motion, brightness changes, low-light conditions and temporal inconsistencies. We demonstrate the importance of the data used for CNNs training, creating for this purpose a specific dataset for low-light conditions. We test ViDeNN on common benchmarks and on self-collected data, achieving good results comparable with the state-of-the-art.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed scheme increases visibility in extreme weather conditions without amplifying the noise, verifying the effectiveness of the algorithm.
Abstract: This paper presents a joint dehazing and denoising scheme for images taken in hazy conditions. Conventional image dehazing methods may amplify the noise depending on the distance and the density of the haze. To suppress the noise and improve the dehazing performance, the imaging model is modified by adding a process that amplifies the noise in hazy conditions. This model offers depth-chromaticity compensation regularization for the transmission map and chromaticity-depth compensation regularization for dehazing the image. The proposed iterative image dehazing method with polarization uses these two joint regularization schemes and the relationship between the transmission map and the dehazed image. The transmission map and the irradiance image are used to promote each other. To verify the effectiveness of the algorithm, polarization images of different scenes on different days are collected, and different algorithms are applied to the original images. Experimental results demonstrate that the proposed scheme increases visibility in extreme weather conditions without amplifying the noise.
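The imaging model most dehazing work starts from is the Koschmieder model I = J·t + A(1 - t); this paper modifies it by adding a noise term. A minimal sketch of the noise-free model and its inversion, with synthetic values; the clamp on t mirrors why noise is amplified where transmission is small:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.uniform(0.1, 0.9, size=(16, 16))  # true scene radiance
A = 0.95                                  # atmospheric light (airlight)
t = 0.4 + 0.4 * rng.random((16, 16))      # transmission map in (0.4, 0.8)

# Koschmieder haze model: attenuated radiance plus scattered airlight.
I = J * t + A * (1.0 - t)

# With known A and t the model inverts exactly; real methods must estimate
# both, and any noise in I is divided by t, hence amplified where t is small.
J_hat = (I - A) / np.maximum(t, 0.1) + A
```

The division by `t` is the crux of the paper's motivation: for distant, dense haze (small t) even modest sensor noise in `I` becomes large in the dehazed `J_hat`, which is why dehazing and denoising need to be treated jointly.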

Journal ArticleDOI
TL;DR: A deep convolutional neural network with residual learning for seismic data denoising adaptively and effectively suppresses noise of different levels and exhibits a competitive performance in comparison with the traditional transform-based methods.
Abstract: Over the last few decades, seismic random noise attenuation has been dominated by transform-based denoising methods. However, these methods usually need to estimate the noise level and select an optimal transformation in advance, and they may generate artifacts in the denoising result (e.g., nonsmooth edges and pseudo-Gibbs phenomena). To overcome these disadvantages, we trained a deep convolutional neural network (CNN) with residual learning for seismic data denoising. We used synthetic seismic data for network training rather than seismic images, and we adopted a method to preprocess the seismic data before it was input to the network to help network training. We demonstrate the performance of the deep CNN in seismic random noise attenuation based on the synthetic seismic data. Results of numerical experiments show that our network adaptively and effectively suppresses noise of different levels and exhibits a competitive performance in comparison with the traditional transform-based methods.
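The residual-learning formulation trains the network to predict the noise rather than the clean data, so the denoised output is simply the input minus the predicted residual. A minimal numpy sketch; the moving-average residual "predictor" is a crude stand-in for the CNN:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 512))  # synthetic seismic trace
noisy = clean + rng.normal(0, 0.2, clean.shape)

# Residual learning: the network is trained to predict the NOISE, so the
# training pairs are (noisy, noisy - clean) rather than (noisy, clean).
residual_target = noisy - clean

# A perfect residual predictor recovers the clean trace by subtraction.
denoised = noisy - residual_target

# Crude stand-in for the CNN: the high-frequency part left after smoothing
# acts as a predicted residual.
kernel = np.ones(9) / 9.0
predicted_residual = noisy - np.convolve(noisy, kernel, mode="same")
denoised_approx = noisy - predicted_residual
```

Predicting the residual is often easier to learn than predicting the clean signal directly, because the noise has simpler statistics than the full waveform.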

Journal ArticleDOI
TL;DR: A novel subspace-based nonlocal low-rank and sparse factorization (SNLRSF) method is proposed to remove the mixture of several types of noise in HSI and outperforms the related state-of-the-art methods in terms of visual quality and quantitative evaluation.
Abstract: Hyperspectral images (HSIs) are unavoidably contaminated by different types of noise during data acquisition and transmission, e.g., Gaussian noise, impulse noise, stripes, and deadlines. A variety of mixed noise reduction approaches have been developed for HSI, among which subspace-based methods have achieved competitive performance. In this paper, a novel subspace-based nonlocal low-rank and sparse factorization (SNLRSF) method is proposed to remove a mixture of several types of noise. The SNLRSF method exploits the spectral low-rank property, based on the fact that the spectral signatures of pixels lie in a low-dimensional subspace, and employs nonlocal low-rank factorization to take spatial nonlocal self-similarity into account. At the same time, a successive singular value decomposition (SVD) low-rank factorization algorithm is used to estimate the three-dimensional (3-D) tensors formed by grouping nonlocally similar 3-D patches. Moreover, the well-known augmented Lagrangian method is adopted to solve the final denoising model efficiently. Experimental results on simulated and real datasets demonstrate that the proposed approach outperforms related state-of-the-art methods in terms of visual quality and quantitative evaluation.
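The nonlocal low-rank idea can be sketched in isolation: stack vectorized similar patches as the columns of a matrix and truncate its SVD (a generic low-rank approximation, not the paper's successive-SVD factorization; the rank, sizes, and names are illustrative):

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank):
    # Truncated SVD of a matrix whose columns are vectorized, mutually
    # similar patches; nonlocal self-similarity makes the clean matrix
    # approximately low rank, so small singular values are mostly noise.
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

rng = np.random.default_rng(1)
# Synthetic rank-2 "clean" group: 64-pixel patches, 40 similar patches.
clean = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 40))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

denoised = low_rank_denoise(noisy, rank=2)
```

Discarding the trailing singular values removes most of the noise energy while the low-dimensional signal subspace survives; SNLRSF applies this principle in the spectral subspace and on grouped 3-D patches.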

Journal ArticleDOI
TL;DR: A deep neural network is presented to reduce coherent noise in three-dimensional quantitative phase imaging and is applied to reduce the temporally changing noise emerging from focal drift in time-lapse imaging of biological cells.
Abstract: We present a deep neural network that reduces coherent noise in three-dimensional quantitative phase imaging. Inspired by the cycle-consistent generative adversarial network, the denoising network was trained to learn a transform between two image domains: clean and noisy refractive index tomograms. The unique feature of this network, distinguishing it from previous machine learning approaches to optical imaging problems, is that it uses unpaired images. The trained network demonstrated its performance and generalization capability quantitatively through denoising experiments on various samples. Finally, we applied our technique to reduce the temporally varying noise arising from focal drift in time-lapse imaging of biological cells, a reduction that cannot be achieved with other optical denoising methods.
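Unpaired training of this kind typically rests on a cycle-consistency term alongside two adversarial losses; a standard form of the objective (with generators G: noisy → clean and F: clean → noisy, and discriminators D_c, D_n; the notation is assumed, not taken from the paper) is:

```latex
\mathcal{L}(G, F, D_c, D_n)
  = \mathcal{L}_{\mathrm{GAN}}(G, D_c)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_n)
  + \lambda\,\Bigl( \mathbb{E}_{x}\bigl[\lVert F(G(x)) - x \rVert_1\bigr]
  + \mathbb{E}_{y}\bigl[\lVert G(F(y)) - y \rVert_1\bigr] \Bigr)
```

The cycle term forces G and F to be approximate inverses of each other, which is what removes the need for pixel-aligned noisy/clean image pairs.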

Journal ArticleDOI
TL;DR: A novel adaptive ensemble empirical mode decomposition (NAEEMD) method is proposed for partial discharge (PD) noise reduction, providing a new strategy for pre-processing the PD signals of switchgear.
Abstract: Eliminating the variety of noises, such as narrow-band interference, encountered when detecting partial discharge (PD) signals in switchgear is an intractable issue, and existing denoising processes adapt poorly to the signal. A novel adaptive ensemble empirical mode decomposition (NAEEMD) method is proposed in this paper for PD noise reduction. First, the signal is decomposed using EEMD: at each stage only the first-order intrinsic mode is extracted and removed, noise is added to the residual signal, and the remaining components are decomposed in the next stage, until the residue reaches the EEMD termination condition. Finally, the intrinsic mode functions (IMFs) used for the noise-reduced reconstruction are selected adaptively by combining the energy density and average period of each IMF with the correlation coefficient method. The proposed method thus provides a new strategy for pre-processing the PD signals of switchgear. For validation, the NAEEMD denoising results were compared with those of a conventional wavelet denoising algorithm (WDA) and EMD-based threshold denoising; the simulation results showed a good denoising effect and confirmed the effectiveness of the proposed method. Furthermore, an experiment using an actual switchgear PD signal verified its noise reduction performance.
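The adaptive IMF selection at the end of the pipeline can be illustrated with the correlation coefficient criterion alone (a simplification: the paper also weighs energy density and average period, and the stand-in "IMFs" below are hand-made components rather than an actual EEMD output):

```python
import numpy as np

def select_imfs(signal, imfs, threshold=0.3):
    # Keep the decomposition components whose correlation with the
    # original signal exceeds a threshold, then sum them to reconstruct.
    kept = [imf for imf in imfs
            if abs(np.corrcoef(signal, imf)[0, 1]) >= threshold]
    return np.sum(kept, axis=0)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
tone = np.sin(2 * np.pi * 8 * t)           # signal-bearing component
noise = 0.1 * rng.standard_normal(t.size)  # noise-like component
signal = tone + noise

# Hand-made stand-ins for EEMD output: the true components themselves.
imfs = [noise, tone]
reconstructed = select_imfs(signal, imfs)
```

The noise-like component correlates weakly with the composite signal and is dropped, while the tone correlates strongly and is kept; the threshold plays the role of the paper's adaptive selection rule.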

Journal ArticleDOI
TL;DR: A deep pixel-to-pixel network model for underwater image enhancement, built on an encoding-decoding framework, is proposed; it outperforms state-of-the-art image restoration methods in underwater image defogging, denoising and colour enhancement.
Abstract: Turbid underwater environments pose great difficulties for the application of vision technologies. One of the biggest challenges is the complicated noise distribution of underwater images, caused by severe scattering and absorption. To alleviate this problem, this work proposes a deep pixel-to-pixel network model for underwater image enhancement based on an encoding-decoding framework. It employs convolution layers as the encoder to filter the noise, and deconvolution layers as the decoder to recover the missing details and refine the image pixel by pixel. Moreover, skip connections are introduced into the model to avoid losing low-level features while accelerating the training process. The model achieves image enhancement in a self-adaptive, data-driven way rather than by modelling the physical environment. Comparison experiments on several datasets show that it outperforms state-of-the-art image restoration methods in underwater image defogging, denoising and colour enhancement.
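The encoding-decoding-with-skip idea can be sketched with toy stand-ins: average pooling for the encoder, nearest-neighbour upsampling for the decoder, and an additive skip path (purely illustrative operators and shapes, not the paper's network):

```python
import numpy as np

def encode(x):
    # 2x2 average pooling as a stand-in for strided convolutions.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(z):
    # Nearest-neighbour upsampling as a stand-in for deconvolutions.
    return np.repeat(np.repeat(z, 2, axis=0), 2, axis=1)

def enhance(x, alpha=0.5):
    # Skip connection: blend the decoded reconstruction with the raw
    # input, so low-level detail destroyed by pooling still reaches
    # the output at full resolution.
    return alpha * decode(encode(x)) + (1.0 - alpha) * x

rng = np.random.default_rng(3)
img = rng.random((8, 8))
out = enhance(img)
```

The skip path is the key design choice: without it, everything the encoder discards would have to be re-synthesized by the decoder, which slows training and blurs fine structure.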

Journal ArticleDOI
TL;DR: A speckle suppression algorithm based on weighted nuclear norm minimization (WNNM) and Grey theory is proposed; it effectively improves the visual quality of the denoised image, better preserves local image structure, and also improves the objective index values of the denoised image.
Abstract: Coherent imaging systems are strongly affected by speckle noise, which makes visual analysis and feature extraction difficult. In this paper, we propose a speckle suppression algorithm based on weighted nuclear norm minimization (WNNM) and Grey theory. First, we apply a logarithmic transformation to the noisy image so that the multiplicative speckle noise becomes additive. Second, by matching local blocks using Grey theory, we group the blocks similar to each reference patch into approximately low-rank matrices. We then estimate the noise variance of the noisy image with the wavelet transform. Finally, we use the WNNM method to denoise the image. The results show that our algorithm not only improves the visual quality of the denoised image and better preserves its local structure, but also improves its objective index values.
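The core WNNM step can be sketched as weighted soft-thresholding of singular values after the log transform. The weight rule w_i = c / (sigma_i + eps) follows the common WNNM formulation; the constant c and the toy image are illustrative, and the block-matching and variance-estimation stages are omitted:

```python
import numpy as np

def wnnm_shrink(Y, c=5.0, eps=1e-8):
    # Weighted soft-thresholding of singular values: weights are
    # inversely proportional to the singular values, so large
    # (signal-bearing) components are penalized less than small
    # (noise-dominated) ones.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_hat = np.maximum(s - c / (s + eps), 0.0)
    return (U * s_hat) @ Vt

rng = np.random.default_rng(4)
# Rank-1 toy "image" contaminated by multiplicative speckle.
clean = np.outer(np.linspace(1.0, 2.0, 32), np.linspace(1.0, 2.0, 32))
speckled = clean * np.exp(0.1 * rng.standard_normal(clean.shape))

# Log transform turns the multiplicative speckle into additive noise,
# WNNM suppresses it, and the exponential maps back to the image domain.
denoised = np.exp(wnnm_shrink(np.log(speckled)))
```

The inverse-magnitude weighting is what distinguishes WNNM from plain nuclear norm minimization, which would shrink every singular value by the same amount and over-smooth the dominant structure.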