
Showing papers on "Noise reduction published in 2021"


Proceedings ArticleDOI
20 Jun 2021
TL;DR: Neighbor2Neighbor as mentioned in this paper uses a random neighbor sub-sampler to generate training image pairs, satisfying the requirement that paired pixels of paired images are neighbors and have a very similar appearance to each other.
Abstract: In the last few years, image denoising has benefited a lot from the fast development of neural networks. However, the requirement of large amounts of noisy-clean image pairs for supervision limits the wide use of these models. Although there have been a few attempts in training an image denoising model with only single noisy images, existing self-supervised denoising approaches suffer from inefficient network training, loss of useful information, or dependence on noise modeling. In this paper, we present a very simple yet effective method named Neighbor2Neighbor to train an effective image denoising model with only noisy images. Firstly, a random neighbor sub-sampler is proposed for the generation of training image pairs. In detail, input and target used to train a network are images sub-sampled from the same noisy image, satisfying the requirement that paired pixels of paired images are neighbors and have very similar appearance with each other. Secondly, a denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance. The proposed Neighbor2Neighbor framework is able to enjoy the progress of state-of-the-art supervised denoising networks in network architecture design. Moreover, it avoids heavy dependence on the assumption of the noise distribution. We explain our approach from a theoretical perspective and further validate it through extensive experiments, including synthetic experiments with different noise distributions in sRGB space and real-world experiments on a denoising benchmark dataset in raw-RGB space.
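
Below is a minimal sketch of the random neighbor sub-sampling idea described above: each non-overlapping 2x2 cell of a single noisy image contributes one pixel to each of two half-resolution images, which then serve as the noisy input/target pair. The function name, array layout and NumPy implementation are illustrative assumptions rather than the authors' reference code, and the paper's additional regularizer is omitted.

```python
import numpy as np

def neighbor_subsample(noisy, rng=None):
    """Split one noisy image into two half-resolution images whose corresponding
    pixels are distinct neighbors drawn from the same 2x2 cell (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = noisy.shape[:2]
    img1 = np.empty((H // 2, W // 2) + noisy.shape[2:], dtype=noisy.dtype)
    img2 = np.empty_like(img1)
    for i in range(0, H - 1, 2):
        for j in range(0, W - 1, 2):
            cell = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            a, b = rng.choice(4, size=2, replace=False)   # two distinct neighbors per cell
            img1[i // 2, j // 2] = noisy[cell[a]]
            img2[i // 2, j // 2] = noisy[cell[b]]
    return img1, img2   # used as (input, target) for training the denoiser
```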

144 citations


Journal ArticleDOI
TL;DR: A novel denoising framework with deep convolutional neural networks (CNNs) that transforms the TEM signal denoising task into an image denoising task (namely, TEMDnet) is proposed in this article and can achieve much better performance compared with other state-of-the-art approaches on both simulated signals and real-world signals from a landfill leachate treatment plant in Chengdu, Sichuan, China.
Abstract: The considerable prospecting depth and accurate subsurface characteristics can be obtained by the transient electromagnetic method (TEM) in geophysics. Nevertheless, the time-domain TEM signal received by the coil is easily disturbed by environmental background noise, artificial noise, and electronic noise of the equipment. Recently, deep neural networks (DNNs) have been used to solve the TEM denoising problem and have achieved better performance than traditional methods. However, the existing denoising method with DNN adopts fully connected neural networks and is therefore not flexible enough to deal with various signal scales. To address these issues, a novel denoising framework with deep convolutional neural networks (CNNs) of transforming the TEM signal denoising task into an image denoising task (namely, TEMDnet) is proposed in this article. Specifically, a novel signal-to-image transformation method is developed first to preserve the structural features of TEM signals. Then, a novel deep CNN-based denoiser is proposed to further perform feature learning, in which the residual learning mechanism is adopted to model the noise estimation image for different signal features. Extensive experiments demonstrate that the proposed framework can achieve much better performance compared with other state-of-the-art approaches on both simulated signals and real-world signals from a landfill leachate treatment plant in Chengdu, Sichuan, China. Models and code are available at https://github.com/tonyckc/TEMDnet_demo.

143 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: NBNet as mentioned in this paper proposes a non-local attention module to explicitly learn the basis generation as well as subspace projection, which achieves state-of-the-art performance on PSNR and SSIM with significantly less computational cost.
Abstract: In this paper, we introduce NBNet, a novel framework for image denoising. Unlike previous works, we propose to tackle this challenging problem from a new perspective: noise reduction by image-adaptive projection. Specifically, we propose to train a network that can separate signal and noise by learning a set of reconstruction bases in the feature space. Subsequently, image denoising can be achieved by selecting the corresponding bases of the signal subspace and projecting the input into that space. Our key insight is that projection can naturally maintain the local structure of the input signal, especially for areas with low light or weak textures. Towards this end, we propose SSA, a non-local attention module designed to explicitly learn the basis generation as well as the subspace projection. We further incorporate SSA into NBNet, a UNet-structured network designed for end-to-end image denoising. We conduct evaluations on benchmarks, including SIDD and DND, and NBNet achieves state-of-the-art performance on PSNR and SSIM with significantly less computational cost.
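
As a rough illustration of the projection step described above, the sketch below orthogonally projects flattened feature vectors onto the span of a small set of learned basis vectors; the tensor shapes and the closed-form projection are assumptions for clarity, and in the actual network the basis would come from the SSA module.

```python
import torch

def subspace_project(features, basis, eps=1e-6):
    """Project per-pixel feature vectors onto a learned signal subspace.
    features: (B, N, C) flattened spatial features (N = H*W).
    basis:    (B, C, K) K basis vectors per image (K << C).
    Returns projected features of shape (B, N, C). Illustrative sketch only."""
    k = basis.shape[-1]
    vtv = torch.bmm(basis.transpose(1, 2), basis)                        # (B, K, K)
    vtv_inv = torch.linalg.inv(vtv + eps * torch.eye(k, device=basis.device))
    proj = torch.bmm(basis, torch.bmm(vtv_inv, basis.transpose(1, 2)))   # (B, C, C), symmetric
    return torch.bmm(features, proj)                                     # row-wise projection
```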

110 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper designed an end-to-end signal prior-guided layer separation and data-driven mapping network with layer-specified constraints for single-image low-light enhancement.
Abstract: Due to the absence of a desirable objective for low-light image enhancement, previous data-driven methods may provide undesirable enhanced results including amplified noise, degraded contrast and biased colors. In this work, inspired by Retinex theory, we design an end-to-end signal prior-guided layer separation and data-driven mapping network with layer-specified constraints for single-image low-light enhancement. A Sparse Gradient Minimization sub-Network (SGM-Net) is constructed to remove the low-amplitude structures and preserve major edge information, which facilitates extracting paired illumination maps of low/normal-light images. After the learned decomposition, two sub-networks (Enhance-Net and Restore-Net) are utilized to predict the enhanced illumination and reflectance maps, respectively, which helps stretch the contrast of the illumination map and remove intensive noise in the reflectance map. The effects of all these configured constraints, including the signal structure regularization and losses, combine reciprocally, which leads to good reconstruction results in overall visual quality. The evaluation on both synthetic and real images, particularly on those containing intensive noise, compression artifacts and their interleaved artifacts, shows the effectiveness of our novel models, which significantly outperform the state-of-the-art methods.

101 citations


Proceedings ArticleDOI
19 Jun 2021
TL;DR: InvDN as mentioned in this paper replaces the noisy latent representation with another one sampled from a prior distribution during reversion to discard noise and restore the clean image, achieving a new state-of-the-art result for the SIDD dataset.
Abstract: Invertible networks have various benefits for image de-noising since they are lightweight, information-lossless, and memory-saving during back-propagation. However, applying invertible models to remove noise is challenging because the input is noisy, and the reversed output is clean, following two different distributions. We propose an invertible denoising network, InvDN, to address this challenge. InvDN transforms the noisy input into a low-resolution clean image and a latent representation containing noise. To discard noise and restore the clean image, InvDN replaces the noisy latent representation with another one sampled from a prior distribution during reversion. The de-noising performance of InvDN is better than all the existing competitive models, achieving a new state-of-the-art result for the SIDD dataset while enjoying less run time. Moreover, the size of InvDN is far smaller, only having 4.2% of the number of parameters compared to the most recently proposed DANet. Further, via manipulating the noisy latent representation, InvDN is also able to generate noise more similar to the original one. Our code is available at: https://github.com/Yang-Liu1082/InvDN.git.
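
The inference flow described above can be summarized in a few lines; the sketch below uses a hypothetical invertible model exposing forward()/inverse() methods and is not the published InvDN interface.

```python
import torch

def invdn_style_denoise(inv_net, noisy):
    """Conceptual sketch: split the noisy input into a clean low-resolution image
    and a noisy latent, then invert with a latent resampled from the prior.
    `inv_net` is a hypothetical invertible network, not the authors' API."""
    low_res, z_noisy = inv_net.forward(noisy)     # forward pass separates content and noise
    z_clean = torch.randn_like(z_noisy)           # replace the noisy latent with a prior sample
    return inv_net.inverse(low_res, z_clean)      # reversed pass reconstructs a clean image
```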

99 citations


Journal ArticleDOI
Yuxing Zhao1, Yue Li1, Ning Wu1
TL;DR: The denoising results show that the proposed method can effectively suppress a variety of common noise in DAS VSP data and the effective signal has almost no energy attenuation.
Abstract: Distributed acoustic sensing (DAS) is a novel technology, which has the advantages of full well coverage, high sampling density, and strong tolerance to harsh environments. However, compared with conventional geophones, the signal-to-noise ratio (SNR) of vertical seismic profile (VSP) data obtained using DAS is low, and there are many types of noise (such as random noise, coupled noise, fading noise, background abnormal interference, horizontal noise, and checkerboard noise). These noises bring great difficulties to the interpretation of seismic data. Existing DAS VSP data denoising methods generally can only suppress one type of noise. Faced with DAS VSP data with many types of noise, the denoising process is extremely complicated. To solve the above problems, we propose a DAS VSP data denoiser based on the convolutional neural network (CNN), which can suppress a variety of common noise at one time, and the denoising process is more convenient and efficient. In addition, since there is currently no publicly available training set for DAS VSP data, we also use field data and synthetic data to construct a training set for the denoiser. The denoising results show that the proposed method can effectively suppress a variety of common noise in DAS VSP data and the effective signal has almost no energy attenuation. Both the shallow layer signal affected by strong noise and the deep layer signal with weak energy are well recovered.

89 citations


Journal ArticleDOI
TL;DR: A deep spatial-spectral global reasoning network to consider both the local and global information for HSI noise removal and can help tackle complex noise by exploiting multiple representations, e.g., hierarchical local feature, global spatial coherence, cross-channel correlation, and multi-scale abstract representation.
Abstract: Although deep neural networks (DNNs) have been widely applied to hyperspectral image (HSI) denoising, most DNN-based HSI denoising methods are designed by stacking convolution layer, which can only model and reason local relations, and thus ignore the global contextual information. To address this issue, we propose a deep spatial-spectral global reasoning network to consider both the local and global information for HSI noise removal. Specifically, two novel modules are proposed to model and reason global relational information. The first one aims to model global spatial relations between pixels in feature maps, and the second one models the global relations across the channels. Compared to traditional convolution operations, the two proposed modules enable the network to extract representations from new dimensions. For the HSI denoising task, the two modules, as well as the densely connected structures, are embedded into the U-Net architecture. Thus, the new-designed global reasoning network can help tackle complex noise by exploiting multiple representations, e.g., hierarchical local feature, global spatial coherence, cross-channel correlation, and multi-scale abstract representation. Experiments on both synthetic and real HSI data demonstrate that our proposed network can obtain comparable or even better denoising results than other state-of-the-art methods.

67 citations


Journal ArticleDOI
TL;DR: A novel denoising method based on ensemble empirical mode decomposition (EEMD) and grey theory, named EEMD-Grey, is proposed and can effectively remove noise and retain useful information.

65 citations


Journal ArticleDOI
TL;DR: In this article, a residual dense neural network (RDUNet) was proposed for image denoising based on the densely connected hierarchical network, where the encoding and decoding layers consist of densely connected convolutional layers to reuse the feature maps and local residual learning to avoid the vanishing gradient problem and speed up the learning process.
Abstract: In recent years, convolutional neural networks have achieved considerable success in different computer vision tasks, including image denoising. In this work, we present a residual dense neural network (RDUNet) for image denoising based on the densely connected hierarchical network. The encoding and decoding layers of the RDUNet consist of densely connected convolutional layers to reuse the feature maps and local residual learning to avoid the vanishing gradient problem and speed up the learning process. Moreover, global residual learning is adopted such that, instead of directly predicting the denoised image, the model predicts the residual noise of the corrupted image. The algorithm was trained for the case of additive white Gaussian noise using a wide range of noise levels. Hence, one advantage of the proposal is that the denoising process does not require prior knowledge about the noise level. In order to evaluate the model, we conducted several experiments with natural image databases available online, achieving competitive results compared with state-of-the-art networks for image denoising. For comparison purposes, we use additive Gaussian noise with levels 10, 30, 50. In the case of grayscale images, we achieved PSNR of 34.39, 29.11, 26.99, and SSIM of 0.9297, 0.8193, 0.7491. For color images, we obtained PSNR of 36.68, 31.43, 29.12, and SSIM of 0.9600, 0.8961, 0.8465.
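
The blind-Gaussian training setup and global residual learning described above can be condensed into a single training step, sketched below; the noise-level range, the assumption that images are scaled to [0, 1] and the MSE loss are illustrative choices, and `model` stands for any network that predicts the residual noise.

```python
import torch
import torch.nn.functional as F

def residual_denoising_step(model, clean_batch, optimizer, sigma_range=(0.0, 50.0)):
    """One training step for blind AWGN denoising with global residual learning.
    clean_batch: (B, C, H, W) tensor with values in [0, 1] (assumed)."""
    sigma = torch.empty(clean_batch.size(0), 1, 1, 1,
                        device=clean_batch.device).uniform_(*sigma_range) / 255.0
    noise = sigma * torch.randn_like(clean_batch)      # per-image random noise level
    noisy = clean_batch + noise
    optimizer.zero_grad()
    pred_noise = model(noisy)                          # the network predicts the residual noise
    loss = F.mse_loss(pred_noise, noise)
    loss.backward()
    optimizer.step()
    return (noisy - pred_noise).detach()               # denoised image = noisy - predicted residual
```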

65 citations


Posted Content
Chitwan Saharia1, Jonathan Ho1, William Chan1, Tim Salimans1, David J. Fleet1, Mohammad Norouzi1 
TL;DR: SR3 as discussed by the authors adapts denoising diffusion probabilistic models to conditional image generation and performs super-resolution through a stochastic denoising process, which achieves a fool rate close to 50%, suggesting photo-realistic outputs.
Abstract: We present SR3, an approach to image Super-Resolution via Repeated Refinement. SR3 adapts denoising diffusion probabilistic models to conditional image generation and performs super-resolution through a stochastic denoising process. Inference starts with pure Gaussian noise and iteratively refines the noisy output using a U-Net model trained on denoising at various noise levels. SR3 exhibits strong performance on super-resolution tasks at different magnification factors, on faces and natural images. We conduct human evaluation on a standard 8X face super-resolution task on CelebA-HQ, comparing with SOTA GAN methods. SR3 achieves a fool rate close to 50%, suggesting photo-realistic outputs, while GANs do not exceed a fool rate of 34%. We further show the effectiveness of SR3 in cascaded image generation, where generative models are chained with super-resolution models, yielding a competitive FID score of 11.3 on ImageNet.
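
The iterative refinement described above follows the usual DDPM ancestral-sampling recipe. The sketch below assumes the low-resolution input has already been upsampled to the target size and that the model takes (x_t, condition, t) and predicts the noise; the schedule handling and model signature are illustrative assumptions, not the released SR3 code.

```python
import torch

@torch.no_grad()
def sr3_style_sample(model, lr_upsampled, betas):
    """Start from pure Gaussian noise and iteratively refine it with a U-Net
    conditioned on the (upsampled) low-resolution image. Sketch only.
    betas: 1-D tensor noise schedule of length T."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(lr_upsampled)
    for t in reversed(range(len(betas))):
        eps = model(x, lr_upsampled, torch.tensor([t]))          # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise                  # ancestral sampling step
    return x
```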

57 citations


Journal ArticleDOI
TL;DR: The results indicate the ability of the proposed algorithm in attenuating the random noise and preserving the seismic signal effectively despite the existence of a large amount of random noise, for example, when the input signal‐to‐noise ratio is as low as −14.2 dB.
Abstract: In this study, we proposed a deep learning algorithm (PATCHUNET) to suppress random noise and preserve the coherent seismic signal. The input data are divided into several patches, and each patch is encoded to extract the meaningful features. Following this, the extracted features are decompressed to retrieve the seismic signal. Skip connections are used between the encoder and decoder parts, allowing the proposed algorithm to extract high‐order features without losing important information. Besides, dropout layers are used as regularization layers. The dropout layers preserve the most meaningful features belonging to the seismic signal and discard the remaining features. The proposed algorithm is an unsupervised approach that does not require prior information about the clean signal. The input patches are divided into 80% for training and 20% for testing. However, it is interesting to find that the proposed algorithm can be trained with only 30% of the input patches with an effective denoising performance. Four synthetic and four field examples are used to evaluate the proposed algorithm performance, and compared to the f−x deconvolution and the f−x singular spectrum analysis. The results indicate the ability of the proposed algorithm in attenuating the random noise and preserving the seismic signal effectively despite the existence of a large amount of random noise, for example, when the input signal‐to‐noise ratio is as low as −14.2 dB.

Journal ArticleDOI
TL;DR: In this article, a new denoising method for ship radiated noise based on Spearman variational mode decomposition (SVMD), spatial-dependence recurrence sample entropy (SdrSampEn), improved wavelet threshold denoising (IWTD), and Savitzky-Golay filter (SG) is proposed.
Abstract: Ship radiated noise denoising is the basis and premise of underwater acoustic signal processing. To obtain a better denoising effect, a new denoising method for ship radiated noise based on Spearman variational mode decomposition (SVMD), spatial-dependence recurrence sample entropy (SdrSampEn), improved wavelet threshold denoising (IWTD) and Savitzky-Golay filter (SG) is proposed. Firstly, SVMD is proposed; ship radiated noise is decomposed into a series of intrinsic mode functions (IMFs) by SVMD, and the SdrSampEn value of every IMF is computed. Then, according to the SdrSampEn value, these IMFs are divided into noise-dominated IMFs and real signal-dominated IMFs. Noise-dominated IMFs are denoised by IWTD, and real signal-dominated IMFs are denoised by SG. Finally, the processed IMFs are reconstructed, and the noise-reduced signal is acquired. The proposed method has three main advantages: (i) compared with empirical mode decomposition (EMD), variational mode decomposition (VMD), as a new non-recursive decomposition algorithm, overcomes the defect of mode mixing; (ii) the proposed SVMD method overcomes the problem that VMD needs to preset the number of decomposition levels K; (iii) real signal-dominated IMFs are also denoised, which improves the signal-to-noise ratio (SNR) by 2 dB to 4 dB. The denoising experiments with the Lorenz signal and the Chen signal show that the proposed method can improve the SNR by 8 dB to 13 dB. Applying the proposed method to denoise ship radiated noise from the official website of the National Park Service (https://www.nps.gov/glba/learn/nature/soundclips.htm), the results show that the proposed method makes the chaotic attractor phase waveform clearer and smoother, and can effectively restrain marine environmental noise in ship radiated noise.

Journal ArticleDOI
TL;DR: The flexible filter design and superior noise reduction abilities of the IWPT and the passband denoising ability of the ISVD are organically combined to form the enhanced singular value decomposition (E-SVD) method, which is verified by the analysis of simulated data and actual cases of rolling bearings.
Abstract: To address two shortcomings of singular value decomposition (SVD), namely the determination of the reconstruction order and the poor noise reduction ability, an enhanced SVD is introduced in this article. The core ideas are as follows: first, an efficient method to determine the reconstruction order of SVD based on the relative-change rate of the singular envelope kurtosis is presented, forming the improved SVD (ISVD). Then, a method to select the optimal node of the wavelet packet transform (WPT) by the criterion of maximum envelope kurtosis is presented, forming the improved WPT (IWPT). The flexible filter design and superior noise reduction abilities of the IWPT and the passband denoising ability of the ISVD are organically combined to form the enhanced singular value decomposition (E-SVD) method. In addition, an indicator is introduced to evaluate the performance of the results. First, the reconstructed signal is obtained by performing ISVD on the original signal. Second, IWPT is executed on the reconstructed signal to obtain the optimal node. Finally, the filtered signal is combined with the envelope power spectrum to extract the bearing fault characteristic frequency. The method's validity and superiority are verified by the analysis of simulated data and actual cases of rolling bearings.

Proceedings ArticleDOI
Xiangyu Chen1, Yihao Liu1, Zhengwen Zhang1, Yu Qiao1, Chao Dong1 
01 Jun 2021
TL;DR: This work proposes a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction with denoising and dequantization, which achieves the state-of-the-art performance in quantitative comparisons and visual quality.
Abstract: Most consumer-grade digital cameras can only capture a limited range of luminance in real-world scenes due to sensor constraints. Besides, noise and quantization errors are often introduced in the imaging process. In order to obtain high dynamic range (HDR) images with excellent visual quality, the most common solution is to combine multiple images with different exposures. However, it is not always feasible to obtain multiple images of the same scene and most HDR reconstruction methods ignore the noise and quantization loss. In this work, we propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction with denoising and dequantization. The network consists of a UNet-style base network to make full use of the hierarchical multi-scale information, a condition network to perform pattern-specific modulation and a weighting network for selectively retaining information. Moreover, we propose a Tanh_L1 loss function to balance the impact of over-exposed values and well-exposed values on the network learning. Our method achieves the state-of-the-art performance in quantitative comparisons and visual quality. The proposed HDRUNet model won second place in the single-frame track of the NTIRE 2021 High Dynamic Range Challenge. The code is available at https://github.com/chxy95/HDRUNet.

Journal ArticleDOI
TL;DR: Compared with the noise reduction results of some classic denoising algorithms, the adaptive CNN proposed in this article can more effectively attenuate the noise and reconstruct the seismic waveform.
Abstract: Because a high signal-to-noise ratio (SNR) is beneficial to the subsequent processing procedures, the noise attenuation is important. We propose an adaptive random noise attenuation framework based on convolutional neural networks (CNNs). The framework transforms the target function from effective signal learning to noise learning through residual learning, so as to improve the training efficiency. After sufficient training, the network transfers the learned seismic data features using a large synthetic data set to the testing of complex field data with unknown noise levels and, thus, attenuates the noise in an unsupervised way. Unsupervised noise reduction requires certain representativeness of the training data and a sufficient amount of training data sets. In the network architecture, we introduce residual learning and batch normalization (BN) to reduce the training parameters of the network, thereby shortening the time for feature learning. The activation function with leakage correction function can effectively retain negative information, and its combination with the double convolutional residual block can enhance the generalization ability and feature extraction performance of the network. In the test of synthetic data and complex field data with unknown noise levels, by comparing the noise reduction results of some classic denoising algorithms, the adaptive CNN proposed in this article can more effectively attenuate the noise and reconstruct the seismic waveform.

Journal ArticleDOI
TL;DR: In this article, a hybrid Discrete Wavelet Transform (DWT) and edge information removal based algorithm is proposed to estimate the strength of Gaussian noise in digital images, where the wavelet coefficients corresponding to spatial domain edges are excluded from noise estimate calculation.
Abstract: Noise type and strength estimation are important in many image processing applications like denoising, compression, video tracking, etc. There are many existing methods for estimation of the type of noise and its strength in digital images. These methods mostly rely on the transform or spatial domain information of images. We propose a hybrid Discrete Wavelet Transform (DWT) and edge information removal based algorithm to estimate the strength of Gaussian noise in digital images. The wavelet coefficients corresponding to spatial domain edges are excluded from the noise estimate calculation using a Sobel edge detector. The accuracy of the proposed algorithm is further increased using polynomial regression. Parseval’s theorem mathematically validates the proposed algorithm. The performance of the proposed algorithm is evaluated on a standard LIVE image dataset. Benchmarking results show that the proposed algorithm outperforms all other state-of-the-art algorithms by a large margin over a wide range of noise.
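
A bare-bones version of the idea above is sketched below: take the finest diagonal wavelet subband, mask out coefficients that coincide with strong Sobel edges, and apply a robust estimator to what remains. The MAD estimator and the 90th-percentile edge mask are stand-in assumptions; the paper's polynomial-regression refinement is omitted.

```python
import numpy as np
import pywt
from scipy import ndimage

def estimate_gaussian_sigma(image):
    """Estimate the Gaussian noise level from the diagonal wavelet subband,
    excluding coefficients that lie on strong spatial-domain edges. Sketch only."""
    img = image.astype(np.float64)
    _, (_, _, hh) = pywt.dwt2(img, 'db1')                 # finest diagonal detail subband
    gx = ndimage.sobel(img, axis=1)                       # Sobel gradients in the spatial domain
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)[::2, ::2][:hh.shape[0], :hh.shape[1]]   # rough alignment with subband grid
    mask = grad < np.percentile(grad, 90)                 # keep coefficients away from strong edges
    return np.median(np.abs(hh[mask])) / 0.6745           # robust MAD-based sigma estimate
```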

Journal ArticleDOI
TL;DR: A deep neural network based on graph-convolutional layers that can elegantly deal with the permutation-invariance problem encountered by learning-based point cloud processing methods is proposed and significantly outperforms state-of-the-art methods on a variety of metrics.
Abstract: Point clouds are an increasingly relevant geometric data type but they are often corrupted by noise and affected by the presence of outliers. We propose a deep learning method that can simultaneously denoise a point cloud and remove outliers in a single model. The core of the proposed method is a graph-convolutional neural network able to efficiently deal with the irregular domain and the permutation invariance problem typical of point clouds. The network is fully-convolutional and can build complex hierarchies of features by dynamically constructing neighborhood graphs from similarity among the high-dimensional feature representations of the points. The proposed approach outperforms state-of-the-art denoising methods showing robust performance in the challenging setup of high noise levels and in presence of structured noise.

Journal ArticleDOI
Xintong Dong1, Yue Li1
TL;DR: Experimental results have demonstrated that CADN can suppress most of the DAS noise and enhance the SNR of DAS seismic data; also, it can recover the effective signals completely, even the extremely weak effective signals reflected by deep layers.
Abstract: Distributed optical fiber acoustic sensing (DAS) is a new and rapidly developing detection technology in seismic exploration. Unfortunately, due to the weak energy of scattered optical signals and the inferior coupling between DAS cable and receiving interface, the seismic data received by DAS are often characterized by low signal-to-noise ratio (SNR); this low SNR is likely to affect some subsequent analysis, such as inversion, imaging, and interpretation. In addition, the noise caused by the inferior coupling is a new kind of noise not present in conventional seismic data. To enhance the SNR of DAS seismic data and suppress the DAS noise effectively, we propose a convolutional adversarial denoising network (CADN) based on the basic strategy of generative adversarial network (GAN) and the usage of a denoiser to replace the original generator in GAN. In CADN, the performance of the denoiser is significantly strengthened via its own mean square error (MSE) loss and the adversarial loss between it and the discriminator. To balance the two losses and thus ensure the optimization of the denoiser, we construct a novel loss function, where the optimal ratio of MSE and adversarial losses is determined by quantifying the denoising performance. Both real and synthetic examples are included to verify the denoising performance of CADN. Experimental results have demonstrated that CADN can suppress most of the DAS noise and enhance the SNR of DAS seismic data; also, it can recover the effective signals completely, even the extremely weak effective signals reflected by deep layers.
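
The denoiser objective described above (a pixel-wise MSE term plus a weighted adversarial term) can be written compactly as below; the non-saturating adversarial form and the default weight are assumptions, whereas the paper determines the ratio by quantifying the denoising performance.

```python
import torch
import torch.nn.functional as F

def combined_denoiser_loss(denoised, clean, disc_scores, adv_weight=0.01):
    """Sketch of an MSE + adversarial denoiser objective in the spirit of CADN.
    disc_scores: discriminator logits for the denoised outputs (assumed shape)."""
    mse = F.mse_loss(denoised, clean)
    adv = F.binary_cross_entropy_with_logits(disc_scores, torch.ones_like(disc_scores))
    return mse + adv_weight * adv          # the weighting of the two terms is an assumption
```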

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed an image fusion method based on three-layer decomposition and sparse representation, where the source image is first decomposed into the high-frequency and low-frequency components, and the sparse reconstruction error parameter is adaptively designed according to the noise level.
Abstract: Image fusion has received much attention in recent years. However, solving both noise-free image fusion and noise-perturbed image fusion problems remains a big challenge. To address the weak performance and low computational efficiency of current image fusion methods when dealing with noisy source images, an image fusion method based on three-layer decomposition and sparse representation is proposed in this paper. In view of the high-pass characteristics of noise, the source image is first decomposed into the high-frequency and low-frequency components, and the sparse reconstruction error parameter is adaptively designed according to the noise level, so as to realize the fusion and denoising of the high-frequency components simultaneously. To make full use of the details and energy in the low-frequency component, the structure–texture decomposition model is carried out and two fusion rules are carefully designed to fuse them. The fused image can then be reconstructed from the fused high-frequency, low-frequency structure and low-frequency texture layers. Experimental results demonstrate that the proposed method can effectively address the clean and noisy image fusion problems, and yield better performance than some state-of-the-art methods in terms of subjective visual and quantitative evaluations.

Journal ArticleDOI
TL;DR: An adaptive weighted symplectic geometry decomposition (AWSGD) method is proposed for noise reduction, which is adaptive without manual parameter setting and can avoid the defect of traditional noise reduction methods based on energy size.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a multidirectional LR modeling and spatial-spectral total variation (MLR-SSTV) model for removing HSI mixed noise.
Abstract: Conventional low-rank (LR)-based hyperspectral image (HSI) denoising models generally convert high-dimensional data into 2-D matrices or just treat this type of data as 3-D tensors. However, these pure LR or tensor low-rank (TLR)-based methods lack flexibility for considering different correlation information from different HSI directions, which leads to the loss of comprehensive structure information and inherent spatial–spectral relationship. To overcome these shortcomings, we propose a novel multidirectional LR modeling and spatial–spectral total variation (MLR-SSTV) model for removing HSI mixed noise. By incorporating the weighted nuclear norm, we obtain the weighted sum of weighted nuclear norm minimization (WSWNNM) and the weighted sum of weighted tensor nuclear norm minimization (WSWTNNM) to estimate the more accurate LR tensor, especially, to remove the dead-line noise better. Gaussian noise is further denoised and the local spatial–spectral smoothness is preserved effectively by SSTV regularization. We develop an efficient algorithm for solving the derived optimization based on the alternating direction method of multipliers (ADMM). Extensive experiments on both synthetic data and real data demonstrate the superior performance of the proposed MLR-SSTV model for HSI mixed noise removal.

Proceedings ArticleDOI
Andong Li1, Wenzhe Liu1, Xiaoxue Luo1, Chengshi Zheng1, Xiaodong Li1 
06 Jun 2021
TL;DR: In this paper, a denoising system for complicated speech applications is proposed, mainly comprising two pipelines, namely a two-stage network and a post-processing module; the two-stage network decouples the optimization problem w.r.t. magnitude and phase, i.e., only the magnitude is estimated in the first stage and both are further refined in the second stage.
Abstract: It remains a tough challenge to recover the speech signals contaminated by various noises under real acoustic environments. To this end, we propose a novel system for denoising in the complicated applications, which is mainly comprised of two pipelines, namely a two-stage network and a post-processing module. The first pipeline is proposed to decouple the optimization problem w.r.t. magnitude and phase, i.e., only the magnitude is estimated in the first stage and both of them are further refined in the second stage. The second pipeline aims to further suppress the remaining unnatural distorted noise, which is demonstrated to sufficiently improve the subjective quality. In the ICASSP 2021 Deep Noise Suppression (DNS) Challenge, our submitted system ranked top-1 for the real-time track 1 in terms of Mean Opinion Score (MOS) with ITU-T P.808 framework.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a collaborative attention network (COLA-Net) for image restoration, which combines local and non-local attention mechanisms to restore image content in the areas with complex textures and with highly repetitive details respectively.
Abstract: Local and non-local attention-based methods have been well studied in various image restoration tasks while leading to promising performance. However, most of the existing methods solely focus on one type of attention mechanism (local or non-local). Furthermore, by exploiting the self-similarity of natural images, existing pixel-wise non-local attention operations tend to give rise to deviations in the process of characterizing long-range dependence due to image degeneration. To overcome these problems, in this paper we propose a novel collaborative attention network (COLA-Net) for image restoration, as the first attempt to combine local and non-local attention mechanisms to restore image content in the areas with complex textures and with highly repetitive details respectively. In addition, an effective and robust patch-wise non-local attention model is developed to capture long-range feature correspondences through 3D patches. Extensive experiments on synthetic image denoising, real image denoising and compression artifact reduction tasks demonstrate that our proposed COLA-Net is able to achieve state-of-the-art performance in both peak signal-to-noise ratio and visual perception, while maintaining an attractive computational complexity.

Journal ArticleDOI
TL;DR: A novel semi-supervised learning based method is proposed that is suitable for retinal OCT images collected from different OCT devices and can achieve better performance even when using only half of the training data.
Abstract: Speckle noise is the main cause of poor optical coherence tomography (OCT) image quality. Convolutional neural networks (CNNs) have shown remarkable performance for speckle noise reduction. However, speckle noise denoising still faces great challenges because the deep learning-based methods need a large amount of labeled data whose acquisition is time-consuming or expensive. Besides, many CNN-based methods design networks with complex structures and large numbers of parameters to improve the denoising performance, which consume hardware resources severely and are prone to overfitting. To solve these problems, we propose a novel semi-supervised learning based method for speckle noise denoising in retinal OCT images. First, to improve the model’s ability to capture complex and sparse features in OCT images, and avoid the problem of a great increase of parameters, a novel capsule conditional generative adversarial network (Caps-cGAN) with a small number of parameters is proposed to construct the semi-supervised learning system. Then, to tackle the problem of retinal structure information loss in OCT images caused by the lack of detailed guidance during unsupervised learning, a novel joint semi-supervised loss function composed of unsupervised loss and supervised loss is proposed to train the model. Compared with other state-of-the-art methods, the proposed semi-supervised method is suitable for retinal OCT images collected from different OCT devices and can achieve better performance even when using only half of the training data.

Journal ArticleDOI
TL;DR: In this paper, a novel underwater acoustic signal denoising algorithm called AWMF+GDES is proposed, which combines the symmetric α-stable (SαS) distribution and normal distribution.
Abstract: Gaussian/non-Gaussian impulsive noises in the underwater acoustic (UWA) channel seriously impact the quality of underwater acoustic communication. Common denoising algorithms are based on the Gaussian noise model and are difficult to apply to the coexistence of Gaussian and non-Gaussian impulsive noises. Therefore, a new UWA noise model is described in this paper by combining the symmetric α-stable (SαS) distribution and the normal distribution. Furthermore, a novel underwater acoustic signal denoising algorithm called AWMF+GDES is proposed. First, the non-Gaussian impulsive noise is adaptively suppressed by the adaptive window median filter (AWMF). Second, an enhanced wavelet threshold optimization algorithm with a new threshold function is proposed to suppress the Gaussian noise. The optimal threshold parameters are obtained based on the good point set and dynamic elite group guidance combined simulated annealing selection artificial bee colony (GDES-ABC) algorithm. Numerical simulations demonstrate that the convergence speed and the convergence precision of the proposed GDES-ABC algorithm can be increased by 25% to 66% and 21% to 73%, respectively, compared with the existing algorithms. Finally, the experimental results verify the effectiveness of the proposed underwater acoustic signal denoising algorithm and demonstrate that both the proposed wavelet threshold optimization method based on GDES-ABC and the AWMF+GDES algorithm can obtain a higher output signal-to-noise ratio (SNR), a higher noise suppression ratio (NSR), and a smaller root mean square error (RMSE) compared with the other algorithms.
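
A greatly simplified two-step stand-in for the pipeline above is sketched below: a plain median filter in place of the adaptive-window median filter, followed by universal-threshold wavelet shrinkage in place of the GDES-ABC-optimized threshold function.

```python
import numpy as np
import pywt
from scipy.signal import medfilt

def suppress_mixed_noise(signal, kernel=5, wavelet='db4', level=4):
    """Suppress impulsive noise with a median filter, then residual Gaussian noise
    with wavelet soft thresholding. Fixed window size and universal threshold are
    simplifying assumptions standing in for AWMF and the optimized threshold."""
    stage1 = medfilt(signal, kernel_size=kernel)            # impulsive-noise suppression
    coeffs = pywt.wavedec(stage1, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level from finest detail band
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```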

Journal ArticleDOI
TL;DR: The architecture of deep Convolutional Networks (ConvNets) for seismic data denoising is investigated and a stopping criterion is designed for the data fitting process to obtain the latent clean seismic data automatically.
Abstract: Denoising is an indispensable step in seismic data processing. Deep-learning-based seismic data denoising has recently been attracting attention due to its outstanding performance. In this letter, we investigate the architecture of deep Convolutional Networks (ConvNets) for seismic data denoising. The untrained ConvNets serve as a generative network for a single seismic data profile with Gaussian noise. Starting with randomly initialized parameters, generative networks with various handcrafted architectures have different abilities to map the seismic data over iterations and can separate the Gaussian noise as residuals. To explore the ability of Gaussian noise separation, the depth, width, and skip connections, as the main components of the generative network, are assembled into various architectures to fit Gaussian noise, clean seismic data, and noisy seismic data, respectively. Then, the favorable network architecture, with high impedance (an ability to hinder data reconstruction) to noise and low impedance to seismic data, is adopted as the prior model for the seismic data denoising task. Furthermore, a stopping criterion is designed for the data fitting process to obtain the latent clean seismic data automatically. The proposed method does not need data sets for training, and it makes use of the network architecture as a prior. Extensive experiments on both synthetic and field data demonstrate the effectiveness of the selected ConvNet, and the advantages are evaluated by comparing the denoising results with f-x multi-channel singular spectrum analysis (MSSA) and a state-of-the-art unsupervised neural network (NN)-based method.
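
The fitting procedure described above is close in spirit to the deep-image-prior recipe, sketched below with a simple loss-plateau stopping rule standing in for the paper's criterion; `net` is any randomly initialized generative ConvNet, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fit_untrained_net(net, noisy, iters=3000, stop_window=50, tol=1e-5):
    """Fit a randomly initialized ConvNet to a single noisy profile and stop when
    the fitting loss plateaus, so the network reproduces the signal before it
    starts to model the Gaussian noise. Sketch only."""
    z = torch.randn_like(noisy)                   # fixed random input to the generator
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    history = []
    for it in range(iters):
        opt.zero_grad()
        loss = F.mse_loss(net(z), noisy)
        loss.backward()
        opt.step()
        history.append(loss.item())
        if it > stop_window and abs(history[-stop_window] - history[-1]) < tol:
            break                                  # plateau: further fitting mostly models noise
    return net(z).detach()                        # latent clean estimate
```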

Journal ArticleDOI
TL;DR: In this paper, the authors compared and quantified noise emissions between the historical and epidemic periods, and found that the reduction in noise levels observed at all monitoring stations coincides with the reduced shipping traffic.

Journal ArticleDOI
TL;DR: DeepInterpolation as discussed by the authors is a self-supervised deep learning-based denoising approach for calcium imaging, electrophysiology and functional magnetic resonance imaging (fMRI) data.
Abstract: Progress in many scientific disciplines is hindered by the presence of independent noise. Technologies for measuring neural activity (calcium imaging, extracellular electrophysiology and functional magnetic resonance imaging (fMRI)) operate in domains in which independent noise (shot noise and/or thermal noise) can overwhelm physiological signals. Here, we introduce DeepInterpolation, a general-purpose denoising algorithm that trains a spatiotemporal nonlinear interpolation model using only raw noisy samples. Applying DeepInterpolation to two-photon calcium imaging data yielded up to six times more neuronal segments than those computed from raw data with a 15-fold increase in the single-pixel signal-to-noise ratio (SNR), uncovering single-trial network dynamics that were previously obscured by noise. Extracellular electrophysiology recordings processed with DeepInterpolation yielded 25% more high-quality spiking units than those computed from raw data, while DeepInterpolation produced a 1.6-fold increase in the SNR of individual voxels in fMRI datasets. Denoising was attained without sacrificing spatial or temporal resolution and without access to ground truth training data. We anticipate that DeepInterpolation will provide similar benefits in other domains in which independent noise contaminates spatiotemporally structured datasets. DeepInterpolation is a self-supervised deep learning-based denoising approach for calcium imaging, electrophysiology and fMRI data. The approach increases the signal-to-noise ratio and allows extraction of more information from the processed data than from the raw data.
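
The self-supervised principle above can be sketched as a single training step: predict a frame from its temporal neighbors while withholding the frame itself, so only the reproducible signal (not the independent noise) can be learned. The context size, the L1 loss and the model signature are illustrative assumptions, not the released DeepInterpolation code.

```python
import torch
import torch.nn.functional as F

def interpolation_training_step(model, movie, optimizer, n_before=30, n_after=30):
    """One self-supervised step: predict the withheld center frame from its
    temporal context. movie: (T, H, W) tensor of raw noisy frames, with
    T > n_before + n_after (assumed)."""
    t = torch.randint(n_before, movie.shape[0] - n_after, (1,)).item()
    context = torch.cat([movie[t - n_before:t], movie[t + 1:t + 1 + n_after]], dim=0)
    target = movie[t:t + 1]
    optimizer.zero_grad()
    pred = model(context.unsqueeze(0))            # (1, n_before + n_after, H, W) -> (1, 1, H, W)
    loss = F.l1_loss(pred, target.unsqueeze(0))   # the withheld frame is the training target
    loss.backward()
    optimizer.step()
    return loss.item()
```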

Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors proposed a content-noise complementary learning (CNCL) strategy, in which two deep learning predictors are used to learn the respective content and noise of the image dataset complementarily.
Abstract: Medical imaging denoising faces great challenges, yet is in great demand. With its distinctive characteristics, medical imaging denoising in the image domain requires innovative deep learning strategies. In this study, we propose a simple yet effective strategy, the content-noise complementary learning (CNCL) strategy, in which two deep learning predictors are used to learn the respective content and noise of the image dataset complementarily. A medical image denoising pipeline based on the CNCL strategy is presented, and is implemented as a generative adversarial network, where various representative networks (including U-Net, DnCNN, and SRDenseNet) are investigated as the predictors. The performance of these implemented models has been validated on medical imaging datasets including CT, MR, and PET. The results show that this strategy outperforms state-of-the-art denoising algorithms in terms of visual quality and quantitative metrics, and the strategy demonstrates a robust generalization capability. These findings validate that this simple yet effective strategy demonstrates promising potential for medical image denoising tasks, which could exert a clinical impact in the future. Code is available at: https://github.com/gengmufeng/CNCL-denoising.
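
A minimal sketch of the complementary-learning idea follows: one predictor estimates the image content directly, the other estimates the noise, and the two complementary estimates are fused. The concatenation-based fusion network is an assumption for illustration; the paper implements the full pipeline as a generative adversarial network.

```python
import torch

def cncl_style_denoise(content_net, noise_net, fusion_net, noisy):
    """Content-noise complementary denoising sketch. All three networks are
    hypothetical placeholders, not the authors' released models."""
    content_est = content_net(noisy)              # direct content prediction
    noise_est = noisy - noise_net(noisy)          # content obtained via the predicted noise
    return fusion_net(torch.cat([content_est, noise_est], dim=1))   # fuse the two estimates
```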

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed approach can remove noise well and retain the fine signatures of the signal as much as possible, and that the Euclidean weight function and Minimax threshold can achieve the desired denoising capability when combined with the soft or hard threshold function.
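
For reference, the hard and soft threshold functions mentioned in this summary, together with a common closed-form approximation of the Minimax threshold (the form used in standard wavelet denoising toolboxes, stated here as an assumption), can be written as follows; the paper's Euclidean weight function and exact threshold selection are not reproduced.

```python
import numpy as np

def hard_threshold(coeffs, thr):
    """Hard thresholding: zero out coefficients below the threshold, keep the rest."""
    return coeffs * (np.abs(coeffs) >= thr)

def soft_threshold(coeffs, thr):
    """Soft thresholding: shrink surviving coefficients toward zero by thr."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def minimax_threshold(sigma, n):
    """Approximate minimax threshold for n samples with noise level sigma
    (closed-form approximation commonly used in wavelet toolboxes)."""
    return sigma * (0.3936 + 0.1829 * np.log2(n)) if n > 32 else 0.0
```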