
Showing papers on "Dark-frame subtraction published in 2022"


Journal ArticleDOI
TL;DR: In this article, a double low-rank (DLR) matrix decomposition method is proposed for HSI denoising and destriping, addressing the fact that existing low-rank-based methods cannot completely remove stripe noise when the stripe noise is no longer sparse.
Abstract: Hyperspectral images (HSIs) have a wealth of applications in many areas, due to their fine spectral discrimination ability. However, in the practical imaging process, HSIs are often degraded by a mixture of various types of noise, for example, Gaussian noise, impulse noise, dead pixels, dead lines, and stripe noise. Low-rank matrix decomposition theory has been widely used in HSI denoising, and has achieved competitive results by modeling the impulse noise, dead pixels, dead lines, and stripe noise as sparse components. However, the existing low-rank-based methods for HSI denoising cannot completely remove stripe noise when the stripe noise is no longer sparse. In this article, we extend the HSI observation model and propose a double low-rank (DLR) matrix decomposition method for HSI denoising and destriping. By simultaneously exploring the low-rank characteristic of the lexicographically ordered noise-free HSI and the low-rank structure of the stripe noise on each band of the HSI, the two low-rank constraints are formulated into one unified framework, to achieve separation of the noise-free HSI, stripe noise, and other mixed noise. The proposed DLR model is then solved by the augmented Lagrange multiplier (ALM) algorithm efficiently. Both simulation and real HSI data experiments were carried out to verify the superiority of the proposed DLR method.

12 citations
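The abstract describes the decomposition only at a high level, so the following is a minimal sketch of how such a double low-rank separation can be set up: singular value thresholding for the lexicographically ordered clean HSI, a per-band low-rank term for the stripes, and soft thresholding for the remaining sparse noise. This is a simplified alternating proximal scheme rather than the paper's ALM solver, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Soft thresholding: proximal operator of the L1 norm (sparse noise)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def double_lowrank_destripe(Y, tau_x=1.0, tau_s=0.5, tau_e=0.05, n_iter=30):
    """Toy alternating scheme (proximal updates, not a full ALM) splitting an
    HSI cube Y (rows x cols x bands) into a low-rank clean component X,
    band-wise low-rank stripes S, and a sparse residual E."""
    rows, cols, bands = Y.shape
    X = np.zeros_like(Y)
    S = np.zeros_like(Y)
    E = np.zeros_like(Y)
    for _ in range(n_iter):
        # Low-rank clean HSI: lexicographically ordered (pixels x bands) matrix.
        R = (Y - S - E).reshape(rows * cols, bands)
        X = svt(R, tau_x).reshape(rows, cols, bands)
        # Stripes: low-rank structure of each band viewed as a rows x cols image.
        for b in range(bands):
            S[:, :, b] = svt(Y[:, :, b] - X[:, :, b] - E[:, :, b], tau_s)
        # Remaining sparse noise (impulse noise, dead pixels, dead lines, etc.).
        E = soft(Y - X - S, tau_e)
    return X, S, E

# Usage sketch on a random cube (replace with a real hyperspectral image).
Y = np.random.rand(64, 64, 31)
X, S, E = double_lowrank_destripe(Y)
```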



Proceedings ArticleDOI
01 Jun 2022
TL;DR: Zhang et al. as mentioned in this paper combine noise modeling and noise estimation, proposing an innovative noise model estimation and noise synthesis pipeline for realistic noisy image generation: a noise estimation model with a fine-grained statistical noise model is learned in a contrastive manner, and the estimated noise parameters are then used to model a camera-specific noise distribution and to synthesize realistic noisy training data.
Abstract: Image denoising has achieved unprecedented progress as great efforts have been made to exploit effective deep denoisers. To improve denoising performance in the real world, two typical solutions are used in recent work: devising better noise models for the synthesis of more realistic training data, and estimating the noise level function to guide non-blind denoisers. In this work, we combine both noise modeling and estimation, and propose an innovative noise model estimation and noise synthesis pipeline for realistic noisy image generation. Specifically, our model learns a noise estimation model with a fine-grained statistical noise model in a contrastive manner. Then, we use the estimated noise parameters to model the camera-specific noise distribution and synthesize realistic noisy training data. Most strikingly, by calibrating the noise models of several sensors, our model can be extended to predict the noise of other cameras. In other words, we can estimate camera-specific noise models for unknown sensors with only testing images, without laborious calibration frames or paired noisy/clean data. The proposed pipeline endows deep denoisers with performance competitive with state-of-the-art real noise modeling methods.

2 citations
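The pipeline itself is not detailed in the abstract. As a hedged illustration, the sketch below synthesizes noisy training data with the common Poisson-Gaussian (shot plus read) parameterization once camera-specific parameters have been estimated; the parameter names and values are assumptions, and the paper's fine-grained statistical model likely contains additional components (row noise, quantization, etc.).

```python
import numpy as np

def synthesize_noisy_raw(clean, k=0.01, sigma_read=0.002, rng=None):
    """Sample a noisy raw image from a clean one under a Poisson-Gaussian model.

    clean      : clean raw image, normalized to [0, 1]
    k          : system gain relating signal level to shot-noise variance
    sigma_read : standard deviation of the signal-independent read noise
    """
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: Poisson in photo-electrons, scaled back to normalized units.
    electrons = rng.poisson(clean / k)
    shot = electrons * k
    # Read noise: additive Gaussian, independent of the signal.
    read = rng.normal(0.0, sigma_read, size=clean.shape)
    return np.clip(shot + read, 0.0, 1.0)

# Usage sketch: generate a noisy/clean training pair from any clean image.
clean = np.clip(np.random.rand(128, 128), 0, 1)
noisy = synthesize_noisy_raw(clean, k=0.01, sigma_read=0.002)
```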


Journal ArticleDOI
TL;DR: In this article, the spectral correlation in the noise is considered and a method for per-pixel noise estimation that is also able to deal with spectral correlation is proposed; the method makes no assumptions about the behavior of the underlying noise.
Abstract: Modeling of the underlying noise in a hyperspectral image reveals important information about the characteristics of the hyperspectral sensor and the image itself. While the focus in the literature has mostly been on the estimation of noise statistics, it is also of interest to estimate the actual noise present in each pixel, which can not only directly contribute to denoising of the image but can also aid other image processing algorithms exploiting such information. In this letter, we propose a novel method for per-pixel noise estimation that is also able to deal with spectral correlation in the noise. The method makes no assumptions on the behavior of the underlying noise. Simulation results show that the proposed method performs significantly better than the existing methods in cases where there is a correlation in the noise.

1 citation
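The letter's exact estimator is not spelled out in the abstract. The sketch below shows the classic regression-residual idea often used for per-pixel hyperspectral noise estimation: predict each band from its spectral neighbors and keep the residual as the per-pixel noise estimate. Handling spectrally correlated noise, as the paper does, would need extra steps (e.g., whitening the residuals), which are omitted here; the neighborhood choice is an assumption.

```python
import numpy as np

def per_pixel_noise_estimate(cube):
    """Estimate per-pixel noise of an HSI cube (rows x cols x bands) by
    regressing each band on its adjacent bands and keeping the residual."""
    rows, cols, bands = cube.shape
    Y = cube.reshape(-1, bands)            # pixels x bands
    noise = np.zeros_like(Y)
    for b in range(bands):
        neigh = [i for i in (b - 1, b + 1) if 0 <= i < bands]
        A = np.column_stack([Y[:, i] for i in neigh] + [np.ones(Y.shape[0])])
        coef, *_ = np.linalg.lstsq(A, Y[:, b], rcond=None)
        noise[:, b] = Y[:, b] - A @ coef   # residual = per-pixel noise estimate
    return noise.reshape(rows, cols, bands)

# Usage sketch on a synthetic cube with additive Gaussian noise.
clean = np.tile(np.linspace(0, 1, 31), (64 * 64, 1)).reshape(64, 64, 31)
noisy = clean + 0.01 * np.random.randn(*clean.shape)
est = per_pixel_noise_estimate(noisy)
```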


Journal ArticleDOI
TL;DR: In this article, the linear space from the focal point of the camera through each pixel to the existing object is divided into equally spaced grids, and the difference from each grid to the object surface is obtained from multiple tracked depth images, each of which contains noisy depth values at the corresponding image pixels.
Abstract: High-quality depth images are required for stable and accurate computer vision. Depth images captured by depth cameras tend to be noisy, incomplete, and of low resolution, so increasing the accuracy and resolution of depth images is desirable. We propose a method for reducing noise and holes in depth images pixel by pixel, and for increasing their resolution. For each pixel in the target image, the linear space from the focal point of the camera through that pixel to the existing object is divided into equally spaced grids. In each grid, the difference from the grid to the object surface is obtained from multiple tracked depth images, which contain noisy depth values at the respective image pixels. The coordinates of the correct object surface are then obtained by reducing the random depth noise, and the missing values are filled in. The resolution can also be increased by creating new pixels between existing pixels and then applying the same process used for noise reduction. Evaluation results demonstrated that the proposed method requires less GPU memory, reduces noise more accurately, especially around edges, and preserves more object detail than the conventional method. The super-resolution of the proposed method also produced high-resolution depth images with smoother and more accurate edges than the conventional methods.

1 citation
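The description above is dense, so here is a one-pixel sketch of the core idea under stated assumptions: the ray through a pixel is divided into equally spaced grid points, each tracked depth sample votes a truncated signed difference into every grid point, and the fused depth is read off where the averaged signed difference changes sign. The actual method operates on full images with camera tracking and super-resolution; this toy ignores those details, and all parameters are illustrative.

```python
import numpy as np

def fuse_depth_along_ray(depth_samples, z_min, z_max, n_grid=200, trunc=0.05):
    """Fuse noisy depth samples (metres) observed along one pixel ray.

    Each grid point accumulates the truncated signed distance to every sample;
    the fused depth is taken where the averaged signed distance crosses zero."""
    grid = np.linspace(z_min, z_max, n_grid)
    # Signed distance of each sample to each grid point, truncated to +-trunc.
    sd = np.clip(depth_samples[:, None] - grid[None, :], -trunc, trunc)
    f = sd.mean(axis=0)                      # averaged signed distance per grid point
    sign_change = np.where(np.diff(np.sign(f)) < 0)[0]
    if len(sign_change) == 0:
        return np.median(depth_samples)      # fallback if no crossing is found
    i = sign_change[0]
    # Linear interpolation between the two grid points bracketing the crossing.
    z0, z1, f0, f1 = grid[i], grid[i + 1], f[i], f[i + 1]
    return z0 + (0.0 - f0) * (z1 - z0) / (f1 - f0)

# Usage sketch: ten noisy observations of a surface at 1.50 m.
samples = 1.50 + 0.02 * np.random.randn(10)
print(fuse_depth_along_ray(samples, z_min=1.0, z_max=2.0))
```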


Journal ArticleDOI
TL;DR: An effective denoising filter is proposed that restores the image effectively, in terms of both quality and speed and with low complexity, at high noise densities, and that remains quantitatively and visually comparable to state-of-the-art algorithms as the noise intensity increases.
Abstract: Today, thanks to the rapid development of technology, the importance of digital images is increasing. However, sensor errors that may occur during acquisition, interruptions in the transmission of images, and errors in storage cause noise that degrades data quality. Salt-and-pepper noise, a common impulse noise, is one of the most well-known types of noise in digital images. This noise negatively affects the detailed analysis of the image. It is very important that pixels affected by noise are restored without loss of fine image detail, especially at high noise densities. Although many filtering algorithms have been proposed to remove this noise, the enhancement of images with high noise levels is still complex, inefficient, or requires very long runtimes. In this paper, we propose an effective denoising filter that can restore the image effectively in terms of quality and speed, with low complexity, at high noise density levels. In the experimental studies, we compare the denoising results of the proposed method with other state-of-the-art methods, and the proposed algorithm is quantitatively and visually comparable to these algorithms as the noise intensity increases.

1 citation
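The abstract does not detail the filter itself, so the sketch below shows a standard adaptive-median baseline of the kind such papers compare against: grow the window around each corrupted pixel until uncorrupted neighbours are found and take their median. It is offered only as an illustration of the general approach, not the authors' algorithm, and the window limit is an assumption.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Remove salt-and-pepper noise from an 8-bit grayscale image.

    Only pixels at the extreme values 0 or 255 are treated as corrupted; each
    is replaced by the median of the smallest surrounding window that contains
    at least one uncorrupted pixel."""
    out = img.copy()
    noisy = (img == 0) | (img == 255)
    for y, x in zip(*np.nonzero(noisy)):
        for r in range(1, max_window // 2 + 1):   # grow the window
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            good = win[(win > 0) & (win < 255)]
            if good.size:                         # uncorrupted neighbours found
                out[y, x] = np.median(good)
                break
    return out

# Usage sketch: corrupt a gradient image with 30% salt-and-pepper noise.
img = np.tile(np.arange(1, 255, dtype=np.uint8), (64, 1))[:, :64]
mask = np.random.rand(*img.shape) < 0.30
noisy = img.copy()
noisy[mask] = np.random.choice([0, 255], size=mask.sum()).astype(np.uint8)
restored = adaptive_median_filter(noisy)
```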


Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this article, the authors propose a new sRGB-domain noise model based on normalizing flows that is capable of learning the complex noise distribution found in sRGB images under various ISO levels.
Abstract: Noise modeling and reduction are fundamental tasks in low-level computer vision. They are particularly important for smartphone cameras relying on small sensors that exhibit visually noticeable noise. There has recently been renewed interest in using data-driven approaches to improve camera noise models via neural networks. These data-driven approaches target noise present in the raw-sensor image before it has been processed by the camera's image signal processor (ISP). Modeling noise in the RAW-rgb domain is useful for improving and testing the in-camera denoising algorithm; however, there are situations where the camera's ISP does not apply denoising or additional denoising is desired when the RAW-rgb domain image is no longer available. In such cases, the sensor noise propagates through the ISP to the final rendered image encoded in standard RGB (sRGB). The nonlinear steps in the ISP culminate in a significantly more complex noise distribution in the sRGB domain, and existing raw-domain noise models are unable to capture the sRGB noise distribution. We propose a new sRGB-domain noise model based on normalizing flows that is capable of learning the complex noise distribution found in sRGB images under various ISO levels. Our normalizing flows-based approach outperforms other models by a large margin in noise modeling and synthesis tasks. We also show that image denoisers trained on noisy images synthesized with our noise model outperform those trained with noise from baseline models.

1 citation
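To make the normalizing-flow idea concrete, here is a toy affine coupling layer in plain NumPy showing the mechanics the paper relies on: an invertible map between noise and a Gaussian latent, conditioned on the clean signal and ISO, so that sampling the latent and running the inverse produces noise. This is not the paper's architecture; the maps s and t are untrained random linear functions, the 4-dimensional noise vector is purely illustrative, and a real model stacks many trained coupling layers.

```python
import numpy as np

class AffineCoupling:
    """Toy affine coupling layer: y2 = x2 * exp(s(x1, c)) + t(x1, c).

    s and t are fixed random linear maps here; in a real flow they are small
    neural networks trained by maximum likelihood."""
    def __init__(self, d_half, d_cond, rng):
        self.Ws = 0.1 * rng.standard_normal((d_half + d_cond, d_half))
        self.Wt = 0.1 * rng.standard_normal((d_half + d_cond, d_half))

    def _st(self, x1, cond):
        h = np.concatenate([x1, cond], axis=-1)
        return h @ self.Ws, h @ self.Wt

    def forward(self, x, cond):                # noise -> latent
        x1, x2 = np.split(x, 2, axis=-1)
        s, t = self._st(x1, cond)
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

    def inverse(self, y, cond):                # latent -> noise (sampling)
        y1, y2 = np.split(y, 2, axis=-1)
        s, t = self._st(y1, cond)
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

# Usage sketch: sample a 4-D toy noise vector conditioned on a clean RGB value
# and a normalized ISO level by pushing a Gaussian latent through the inverse.
rng = np.random.default_rng(0)
layer = AffineCoupling(d_half=2, d_cond=4, rng=rng)
clean_rgb, iso = np.array([0.4, 0.5, 0.3]), 0.8
cond = np.concatenate([clean_rgb, [iso]])
z = rng.standard_normal(4)                     # latent draw from N(0, I)
noise = layer.inverse(z, cond)                 # synthesized (toy) sRGB noise
```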


Journal ArticleDOI
TL;DR: In this article, a convolutional neural network (CNN) was proposed to estimate the noise variance in the case of additive colored Gaussian noise (ACGN) or for noise with other distributions.
Abstract: Noise parameter estimation is needed for many tasks in digital image processing. Many efficient algorithms for noise variance estimation have been proposed during the last two decades. However, most of those estimators are efficient only for the specific kind of noise for which they were designed. For example, methods for estimating the variance of additive white Gaussian noise (AWGN) fail in the case of additive colored Gaussian noise (ACGN) or for noise with other distributions. In this paper, a new fully blind method of noise level estimation is proposed. For a given image, a distorted image with a removed portion of pixels (around 10%) is generated. Then an inpainting (or impulse noise removal) method is used to recover the missing pixel values. The difference between the true and recovered values is used for a robust estimation of the noise level. The algorithm is applied at different image scales to estimate the noise spectrum. In the paper, we propose a convolutional neural network, PIXPNet, for effective prediction of the values of the missing pixels. A comparative analysis shows that the proposed PIXPNet provides the smallest error in recovered pixel values among all existing methods. The efficiency of the proposed approach in suppressing both AWGN and spatially correlated noise is demonstrated.
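The sketch below illustrates the measurement principle described above at a single scale: remove a random fraction of pixels, recover them by inpainting, and take a robust scale of the recovery error at the removed locations. OpenCV's Telea inpainting stands in for the paper's PIXPNet CNN, so the estimate here also absorbs inpainting error; the parameter values are assumptions.

```python
import numpy as np
import cv2

def blind_noise_std(img_u8, drop_frac=0.10, rng=None):
    """Blind noise-level estimate from a single 8-bit grayscale image:
    drop a fraction of pixels, inpaint them, and compute a robust scale
    (MAD) of the recovery error at the dropped locations."""
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(img_u8.shape[:2]) < drop_frac).astype(np.uint8)
    recovered = cv2.inpaint(img_u8, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    diff = recovered.astype(np.float64)[mask == 1] - img_u8.astype(np.float64)[mask == 1]
    # Median absolute deviation as a robust standard-deviation estimate.
    return 1.4826 * np.median(np.abs(diff - np.median(diff)))

# Usage sketch: smooth grayscale image plus AWGN with sigma = 10.
clean = cv2.GaussianBlur((np.random.rand(256, 256) * 255).astype(np.uint8), (21, 21), 5)
noisy = np.clip(clean + 10 * np.random.randn(256, 256), 0, 255).astype(np.uint8)
print(blind_noise_std(noisy))
```

Running the same estimator on downsampled copies of the image would give the multi-scale noise spectrum the abstract mentions.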

Journal ArticleDOI
TL;DR: Experimental results demonstrated that the proposed method outperformed other denoising algorithms in terms of noise detection and image restoration in the vast majority of the cases.
Abstract: Random-valued impulse noise removal from images is a challenging task in the field of image processing and computer vision. In this paper, an effective three-step noise removal method was proposed using local statistics of grayscale images. Unlike most existing denoising algorithms that assume the noise density is known, our method estimated the noise density in the first step. Based on the estimated noise density, a noise detector was implemented to detect corrupted pixels in the second step. Finally, a modified weighted mean filter was utilized to restore the detected noisy pixels while leaving the noise-free pixels unchanged. The noise removal performance of our method was compared with 10 well-known denoising algorithms. Experimental results demonstrated that our proposed method outperformed other denoising algorithms in terms of noise detection and image restoration in the vast majority of the cases. Keywords—Random-valued impulse noise; noise detection; image restoration; modified weighted mean filter
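As a hedged illustration of the restoration stage described above, the sketch below restores pixels flagged as noisy with a weighted mean of their noise-free neighbours while leaving noise-free pixels unchanged; the detection mask from the first two steps is assumed to be given, and the weighting scheme (inverse distance to the local median) is an assumption rather than the paper's exact filter.

```python
import numpy as np

def weighted_mean_restore(img, noisy_mask, radius=2, eps=1e-3):
    """Restore flagged pixels with a weighted mean of noise-free neighbours;
    weights decrease with distance from the local median so that outlying
    neighbours contribute less. Noise-free pixels are left unchanged."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(noisy_mask)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        win = img[y0:y1, x0:x1].astype(np.float64)
        good = win[~noisy_mask[y0:y1, x0:x1]]
        if good.size == 0:
            continue                               # no reliable neighbours, skip
        med = np.median(good)
        weights = 1.0 / (np.abs(good - med) + eps) # closer to the median = larger weight
        out[y, x] = np.sum(weights * good) / np.sum(weights)
    return out

# Usage sketch with a synthetic mask of random-valued impulse noise.
img = np.full((64, 64), 120.0)
mask = np.random.rand(64, 64) < 0.15
noisy = img.copy()
noisy[mask] = np.random.randint(0, 256, size=mask.sum())
restored = weighted_mean_restore(noisy, mask)
```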

Journal ArticleDOI
TL;DR: In this article, a combined method for noise reduction in scanned document images is proposed, in which speckle noise models the noise introduced during transmission, Gaussian noise models the thermal radiation of the scanning mechanism during scanning, and salt-and-pepper (impulse-valued) noise represents the aging phenomenon.
Abstract: Document digitization is becoming popular due to its enhanced portability, efficient storage, processability, and easy retrieval. Document images acquired through the scanning process contain additional noise. This noise is associated with the quality of the document paper, the typing machine or printer, or the scanner used during the scanning process. Aging, folded corners, stains, shadow-through, and bleed-through noise are also present. During digitization, these noises may be amplified and make the digital representation even noisier. Noise removal methods, techniques, or algorithms remove noise from digital images using image processing, image analysis, or filtering approaches. The transmission, scanning, and aging processes, individually or in combination, can introduce noise into images. Here, speckle noise is used to model the noise introduced during transmission, Gaussian noise to model the thermal radiation of the scanning mechanism during scanning, and salt-and-pepper (impulse-valued) noise to represent the aging phenomenon. To eliminate a given kind of noise, a particular noise removal technique uses a specific kind of filter. Based on the aforementioned noise types, a combined method for noise reduction in scanned document images is proposed. The result of the proposed method is evaluated in terms of the resultant image quality, using metrics such as Mean Square Error, Signal-to-Noise Ratio, Peak Signal-to-Noise Ratio, and the Structural Similarity Index.
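The paper's combined filter is not specified in the abstract, so the sketch below uses a generic stand-in pipeline for the three noise types it names (median filter for salt-and-pepper, Gaussian smoothing for sensor noise, a Lee-style filter for speckle) and scores the result with MSE, PSNR, and SSIM via scikit-image. The filter choice, order, and parameters are assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def lee_filter(img, size=5):
    """Simple Lee filter for multiplicative (speckle) noise."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = sq_mean - mean ** 2
    noise_var = var.mean()
    gain = var / (var + noise_var + 1e-12)
    return mean + gain * (img - mean)

def combined_denoise(img):
    """Chain of filters for scanned-document noise (order is an assumption):
    median for salt-and-pepper, Gaussian for sensor noise, Lee for speckle."""
    out = median_filter(img, size=3)
    out = gaussian_filter(out, sigma=0.8)
    return lee_filter(out, size=5)

def report_quality(clean, restored):
    mse = np.mean((clean - restored) ** 2)
    psnr = peak_signal_noise_ratio(clean, restored, data_range=1.0)
    ssim = structural_similarity(clean, restored, data_range=1.0)
    return mse, psnr, ssim

# Usage sketch on a synthetic "document" image in [0, 1].
clean = np.ones((128, 128)); clean[40:90, 30:100] = 0.1     # dark text block
noisy = clean * np.random.gamma(50, 1 / 50, clean.shape)    # speckle
noisy += np.random.normal(0, 0.05, clean.shape)             # Gaussian
sp = np.random.rand(*clean.shape); noisy[sp < 0.02] = 0; noisy[sp > 0.98] = 1
print(report_quality(clean, combined_denoise(np.clip(noisy, 0, 1))))
```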

Journal ArticleDOI
30 Jan 2022
TL;DR: In this paper, an adaptive hierarchical filter method is used to eliminate noise (dots) in X-ray images and to clarify the image shadows, which can otherwise only be seen clearly when the film is placed right in front of a light source.
Abstract: Salt-and-pepper noise is a form of noise that appears as black or white dots on an image; it is caused by errors in, or damage to, the image's pixels. X-ray imaging, or radiography, uses ionizing rays to form an image of the object being studied on film, and is generally used to see objects that are not transparent, for example, the inside of the human body. X-ray images are often unclear: the results are usually black and white and can only be seen clearly when placed right in front of a light source. The light source merely helps to see the shadow of the image; it does not improve image quality or reduce the noise (dots) in the image. For this reason, the authors apply the Adaptive-Hierarchical filter method to the X-ray results with the aim of eliminating noise (dots) in the X-ray image and clarifying the existing image shadows. The Adaptive-Hierarchical filter method uses two noise detection stages: one detects salt-and-pepper noise in medium and smooth areas, and the other detects noise in strong areas (edge detection of noise on the edges), using only pixels that have already been defined.

Journal ArticleDOI
TL;DR: A detailed outline of impulse noise and noise removal techniques is presented by looking at over a decade of research conducted to establish a fundamental understanding of the Boundary discriminative noise detector algorithm used in image denoising.
Abstract: Image denoising is an essential and complex activity that should be carried out before any other image processing, because it checks for errors within the image(s) and rectifies them. Among the ways to remove noise, the switching scheme is outstanding when compared to others: it first segregates the noisy pixels and then filters them. Boundary Discriminative Noise Detection (BDND) is an algorithm that uses the switching method and is well suited to impulse noise detection, and many works have presented enhancements of BDND for detecting noise in images. In this paper, we present a detailed outline of impulse noise and noise removal techniques by examining over a decade of research, to establish a fundamental understanding of the Boundary Discriminative Noise Detection algorithm used in image denoising. We analyzed 19 relevant papers found through Google Scholar, focusing on three aspects: the methods for detecting noisy pixels, the type(s) of noise, and the major challenges. We found that many image denoising methods still use BDND and that at least one algorithm has been developed yearly, except for 2017 to 2021, indicating that the algorithm remains significant in the field of image denoising. Furthermore, we wrap up the survey by highlighting some research challenges and offering a list of key recommendations to spur further research in this area.

Proceedings ArticleDOI
17 Jul 2022
TL;DR: In this article, a 2-stage sliding-window-based algorithm is proposed to automatically detect pixel noise in satellite images and localize the noisy pixels so that correction algorithms can be applied in a localized fashion.
Abstract: Noise correction and image enhancement are performed by the Data Processing Generation System (DPGS) in the National Remote Sensing Center (NRSC) satellite data production chain. Even after the noise correction algorithms, pixel noise is observed at checkout by the Product Quality Control (PQC) system. These cases are detected manually at PQC and an alert is raised at the DPGS to re-correct the image. This process is time-consuming due to the manual intervention and the application of the noise correction algorithm to the entire image. This manuscript proposes a novel 2-stage, sliding-window-based algorithm to automatically detect pixel noise in satellite images and localize the noisy pixels so that the correction algorithms can be applied in a localized fashion. This local noise correction also restores the overall SNR of the image better than global correction. The algorithm is realized and tested on data obtained from Indian Remote Sensing (IRS) satellites such as Cartosat-2S and Resourcesat-2/2A. The dataset is not open-sourced, and hence very minimal information is provided regarding the IRS data; however, we use Landsat-8 data to conduct some analyses of the algorithm's performance. The role of the patch size used during the detection of pixel noise in satellite data is also analyzed.
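As a hedged illustration of the two-stage idea described above, the sketch below first screens coarse windows by their high-frequency residual energy and then localizes individual noisy pixels inside flagged windows with a robust z-score against the local median. The window sizes and thresholds are assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_pixel_noise(img, coarse=64, fine=5, coarse_k=2.0, fine_k=6.0):
    """Two-stage sliding-window detection of isolated noisy pixels.

    Stage 1: flag coarse windows whose residual energy (image minus local
             median) is unusually high.
    Stage 2: inside flagged windows only, mark pixels whose robust z-score
             against the local median exceeds a threshold."""
    resid = img.astype(np.float64) - median_filter(img.astype(np.float64), size=fine)
    h, w = img.shape
    noisy = np.zeros((h, w), dtype=bool)

    # Stage 1: coarse screening by windowed residual energy.
    energies, coords = [], []
    for y in range(0, h, coarse):
        for x in range(0, w, coarse):
            energies.append(np.mean(resid[y:y + coarse, x:x + coarse] ** 2))
            coords.append((y, x))
    energies = np.array(energies)
    flagged = energies > energies.mean() + coarse_k * energies.std()

    # Stage 2: fine, per-pixel localization inside flagged windows only.
    for (y, x), is_flagged in zip(coords, flagged):
        if not is_flagged:
            continue
        block = resid[y:y + coarse, x:x + coarse]
        mad = 1.4826 * np.median(np.abs(block - np.median(block))) + 1e-9
        noisy[y:y + coarse, x:x + coarse] = np.abs(block) > fine_k * mad
    return noisy

# Usage sketch: smooth scene with a few hot pixels in one region.
img = np.random.normal(100, 2, (256, 256))
img[30:40, 30:40][np.random.rand(10, 10) < 0.05] += 500
mask = detect_pixel_noise(img)
```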

Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: The proposed method, DE-G, can accurately estimate additive noise, multiplicative noise, and impulsive noise from single-source images, and its capability in estimating multiple corruptions is demonstrated.
Abstract: Image noise is a variation of uneven pixel values that occurs randomly. A good estimation of image noise parameters is crucial in image noise modeling, image denoising, and image quality assessment. To the best of our knowledge, there is no single estimator that can predict all noise parameters for multiple noise types. The first contribution of our research was to design a noise data feature extractor that can effectively extract noise information from the image pair. The second contribution of our work leveraged other noise parameter estimation algorithms that can only predict one type of noise. Our proposed method, DE-G, can estimate additive noise, multiplicative noise, and impulsive noise from single-source images accurately. We also show the capability of the proposed method in estimating multiple corruptions.
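The abstract does not give the estimator's internals, so the sketch below is a hand-crafted residual-based feature extractor, not the paper's DE-G: given a clean/noisy pair it reads off a rough impulse probability, an additive sigma, and a multiplicative variance. The thresholds and the way the three components are disentangled are assumptions.

```python
import numpy as np

def estimate_noise_params(clean, noisy, impulse_thresh=0.3):
    """Rough noise-parameter estimates from a clean/noisy image pair in [0, 1].

    Returns (impulse probability, additive sigma, multiplicative variance)."""
    resid = noisy.astype(np.float64) - clean.astype(np.float64)

    # Impulse noise: pixels whose residual is implausibly large.
    impulse = np.abs(resid) > impulse_thresh
    p_impulse = impulse.mean()

    ok = ~impulse & (clean > 0.05)             # pixels used for the other estimates
    r = resid[ok]

    # Additive (signal-independent) sigma from the darkest usable pixels.
    dark = clean[ok] < np.quantile(clean[ok], 0.10)
    sigma_add = r[dark].std() if dark.any() else r.std()

    # Multiplicative variance from the excess residual energy vs. signal^2.
    excess = np.maximum(r ** 2 - sigma_add ** 2, 0.0)
    var_mult = excess.sum() / (clean[ok] ** 2).sum()
    return p_impulse, sigma_add, var_mult

# Usage sketch: synthesize mixed noise and recover its parameters.
clean = np.clip(np.random.rand(256, 256), 0, 1)
noisy = clean * (1 + np.sqrt(0.01) * np.random.randn(*clean.shape))   # multiplicative
noisy += 0.02 * np.random.randn(*clean.shape)                         # additive
imp = np.random.rand(*clean.shape) < 0.05                             # impulsive
noisy[imp] = np.random.rand(imp.sum())
print(estimate_noise_params(clean, np.clip(noisy, 0, 1)))
```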

Posted ContentDOI
01 Jun 2022
TL;DR: In this article, the authors propose a new sRGB-domain noise model based on normalizing flows that is capable of learning the complex noise distribution found in sRGB images under various ISO levels.
Abstract: Noise modeling and reduction are fundamental tasks in low-level computer vision. They are particularly important for smartphone cameras relying on small sensors that exhibit visually noticeable noise. There has recently been renewed interest in using data-driven approaches to improve camera noise models via neural networks. These data-driven approaches target noise present in the raw-sensor image before it has been processed by the camera's image signal processor (ISP). Modeling noise in the RAW-rgb domain is useful for improving and testing the in-camera denoising algorithm; however, there are situations where the camera's ISP does not apply denoising or additional denoising is desired when the RAW-rgb domain image is no longer available. In such cases, the sensor noise propagates through the ISP to the final rendered image encoded in standard RGB (sRGB). The nonlinear steps in the ISP culminate in a significantly more complex noise distribution in the sRGB domain, and existing raw-domain noise models are unable to capture the sRGB noise distribution. We propose a new sRGB-domain noise model based on normalizing flows that is capable of learning the complex noise distribution found in sRGB images under various ISO levels. Our normalizing flows-based approach outperforms other models by a large margin in noise modeling and synthesis tasks. We also show that image denoisers trained on noisy images synthesized with our noise model outperform those trained with noise from baseline models.

Journal ArticleDOI
27 Sep 2022-Sensors
TL;DR: The results show that the prediction method proposed in this paper can accomplish the prediction of gamma radiation image noise, which is beneficial to the elimination of image noise in this environment.
Abstract: The gamma radiation environment is one of the harshest operating environments for image acquisition systems, and the captured images are heavily noisy. In this paper, we improve the multi-frame difference method to suit the characteristics of this noise and add an edge detection algorithm to segment the noise regions and extract quantized noise information. A Gaussian mixture model of the gamma radiation noise is then established by statistically analyzing the amplitude and quantity information of the noise. The established model is combined with a random walk algorithm to generate noise and predict the image noise under different accumulated doses. Evaluated by objective similarity matching, there is no significant difference between the predicted image noise and the actual noise in subjective perception. The ratio of similarity-matched images in the sample, comparing the predicted noise to the actual noise, reaches 0.908. To further illustrate the spillover effect of this research, in the discussion section we used the predicted image noise as the training set input to a deep residual network for denoising. The network model was able to achieve a good denoising effect. The results show that the prediction method proposed in this paper can accomplish the prediction of gamma radiation image noise, which is beneficial to the elimination of image noise in this environment.
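To make the modeling-plus-synthesis loop concrete, here is a hedged sketch: fit a Gaussian mixture to extracted noise amplitudes and scatter sampled amplitudes along a 2-D random walk to mimic spatially clustered radiation events. The number of events per dose, the walk step size, and the mixture order are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_noise_amplitudes(amplitudes, n_components=3):
    """Fit a Gaussian mixture model to extracted 1-D noise amplitudes."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(amplitudes.reshape(-1, 1))
    return gmm

def synthesize_radiation_noise(shape, gmm, n_events, step=3, rng=None):
    """Place GMM-sampled noise amplitudes along a 2-D random walk,
    mimicking spatially clustered gamma-radiation noise events."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.zeros(shape)
    y, x = rng.integers(0, shape[0]), rng.integers(0, shape[1])
    amps, _ = gmm.sample(n_events)
    for a in amps.ravel():
        y = int(np.clip(y + rng.integers(-step, step + 1), 0, shape[0] - 1))
        x = int(np.clip(x + rng.integers(-step, step + 1), 0, shape[1] - 1))
        noise[y, x] += a
    return noise

# Usage sketch: amplitudes extracted beforehand (here drawn synthetically).
observed_amps = np.concatenate([np.random.normal(40, 5, 500),
                                np.random.normal(120, 15, 200)])
gmm = fit_noise_amplitudes(observed_amps)
noisy_overlay = synthesize_radiation_noise((256, 256), gmm, n_events=2000)
```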

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a dual-tree wavelet-based denoising algorithm that extracts the noise from the green channel and computes the standard deviation of the noise for acquired and interpolated pixels, respectively.

Journal ArticleDOI
20 Dec 2022-Desimal
TL;DR: In this paper, a new filtering method is proposed that improves the image by randomly exploring pixels and then collecting the surrounding pixel data around each processed pixel (the kernel); the kernel is enlarged if it contains no noise-free pixels.
Abstract: The digital image is one of the discoveries that play an important role in various aspects of modern human life. It is useful in many fields, including defense (military and non-military), security, health, education, and others. In practice, the image acquisition process often suffers from problems, both when capturing and when transmitting images. Among these problems is the appearance of noise, which degrades the information in the image and thus disrupts further image processing. One type of noise that damages digital images is salt-and-pepper noise, which randomly changes pixel values to 0 (black) or 255 (white). Researchers have proposed several methods to deal with this type of noise, including the median filter, adaptive mean filter, switching median filter, modified decision-based unsymmetric trimmed median filter, and different applied median filter. However, these methods suffer a decrease in performance when applied to images with high-intensity noise. Therefore, in this research, a new filtering method is proposed that improves the image by randomly exploring pixels and then collecting the surrounding pixel data around each processed pixel (the kernel). The kernel is enlarged if it contains no noise-free pixels. The damaged pixels are then replaced using the mean data centering statistic. Images enhanced using the proposed method have better quality than those produced by the previous methods, both quantitatively (SSIM and PSNR) and qualitatively.
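The following is a minimal sketch of the growing-kernel idea described above: visit corrupted pixels in random order, expand the kernel until noise-free neighbours exist, and replace the pixel with a statistic of those neighbours. A plain mean stands in for the paper's "mean data centering" statistic, and the maximum kernel radius is an assumption.

```python
import numpy as np

def growing_kernel_mean_filter(img, max_radius=10, rng=None):
    """Replace salt-and-pepper pixels (0 or 255) with the mean of noise-free
    pixels in the smallest surrounding kernel that contains any; the kernel
    is enlarged step by step when no noise-free neighbours are found."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.astype(np.float64).copy()
    ys, xs = np.nonzero((img == 0) | (img == 255))
    order = rng.permutation(len(ys))                 # random exploration order
    for y, x in zip(ys[order], xs[order]):
        for r in range(1, max_radius + 1):           # grow the kernel
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            good = win[(win > 0) & (win < 255)]
            if good.size:
                out[y, x] = good.mean()
                break
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage sketch: 70% salt-and-pepper corruption of a flat test image.
img = np.full((64, 64), 128, dtype=np.uint8)
mask = np.random.rand(64, 64) < 0.7
noisy = img.copy()
noisy[mask] = np.random.choice([0, 255], size=mask.sum()).astype(np.uint8)
restored = growing_kernel_mean_filter(noisy)
```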

Proceedings ArticleDOI
20 Oct 2022
TL;DR: In this paper, the authors combine a powerful set of fuzzy rules with the pattern-recognition features of a genetic algorithm to filter impulsive noise from the image while leaving the image features unfiltered.
Abstract: A digital image is a two-dimensional representation of an image in the form of a numerical matrix. In grayscale images, pixels are represented by an integer value between 0 and 255. One of the main problems encountered today is the appearance of noise in digital images, whose main sources are the image acquisition and transmission phases. There are many types of noise, and among the best known are Gaussian and impulsive noise. The main goals of applying filters are smoothing the image, removing noise, enhancement, and edge detection. In this paper, we propose combining a powerful set of fuzzy rules with the pattern-recognition features of a genetic algorithm to filter impulsive noise from the image while leaving the image features unfiltered, thus performing powerful noise filtering without any destruction of the image itself.

Journal ArticleDOI
TL;DR: In this article, the authors discuss speckle noise, the methods used to de-speckle a digital image, and the defects in de-speckling models that lead to the problem formulation, considering several factors that indicate the amount of speckle noise removed from the picture.
Abstract: A computerized image is a two-dimensional image with a limited set of digital values, called pixels (picture elements). Because of factors such as the system's physical attributes and the image-capturing instruments, noise develops in images. There are various sorts of noise, of which speckle noise, a type of multiplicative noise, is very hard to extract and remove. Many techniques for removing speckle noise from pictures have been developed. To de-speckle a picture, several factors are considered that indicate the amount of speckle noise removed from it. This paper discusses speckle noise, the methods used to de-speckle a digital image, and the defects in de-speckling models that lead to the problem formulation.

Proceedings ArticleDOI
06 Jul 2022
TL;DR: In this article, the authors propose a method of separating the target from the noise instead of removing the noise, and the results show that this separation is possible.
Abstract: In the field of image processing, it is often necessary to separate the background and foreground of an image for applications such as image editing, moving-object recognition, machine learning, and so on. In image recognition under low illumination, the random noise is intense relative to the intensity of the signal, so how to deal with the noise is an important problem. Therefore, in this research, we propose a method of separating the target and the noise instead of removing the noise. The results of the proposed method show that this separation of target and noise is possible.