An Efficient Image Denoising Approach for the Recovery of Impulse Noise
01 Sep 2017-Bulletin of Electrical Engineering and Informatics (Universitas Ahmad Dahlan)-Vol. 6, Iss: 3, pp 281-286
Citations
TL;DR: This article proposes a novel noise classification technique based on the QTSD (quadruple threshold statistical detection) filter, an improvement on the earlier TTSD (triple threshold statistical detection) filter.
Abstract: Motivated by the strong demand for contemporary noise-suppression algorithms, this article proposes a novel noise classification technique based on the QTSD filter, which improves on the TTSD filter. Four thresholds, one for each auxiliary condition, are incorporated into the proposed QTSD framework to address the limitations of the earlier classifier. Each pixel is modeled mathematically and tested against the first threshold to decide whether it is a noise or noise-free pixel. The pixel is then tested against the second threshold, derived from the normal distribution (mean and variance), and against the third threshold, derived from the quartile distribution (median). Finally, the pixel is tested against the fourth threshold, derived from the maximum or minimum value, to decide whether it is a noise-free or FIIN-corrupted pixel. For performance evaluation, extensive noisy test sets were synthesized from nine photographs under the FIIN distribution, and the proposed QTSD-based classifier was assessed on objective indicators (noise classification, non-noise classification, and overall classification correctness). The results show that the proposed technique achieves markedly higher correctness than earlier noise classification techniques.
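The cascade of threshold tests described in the abstract can be sketched as follows. This is a minimal illustration, not the published QTSD algorithm: the specific tests (extreme-value check, distance from the neighborhood mean in standard-deviation units, distance from the median, and a window min/max check) and the threshold value `t_var` are illustrative assumptions standing in for the paper's four thresholds.

```python
import numpy as np

def qtsd_like_detect(window, t_var=2.0):
    """Hedged sketch of a quadruple-threshold impulse detector.

    `window` is an odd-sized square neighborhood; the center pixel is
    flagged as noise only if it fails a cascade of four statistical
    tests. Threshold choices are illustrative, not the QTSD parameters.
    """
    c = window[window.shape[0] // 2, window.shape[1] // 2]
    neighbors = np.delete(window.ravel(), window.size // 2)

    # Test 1: extreme-value check (fixed-valued impulses hit 0 or 255).
    if c in (0, 255):
        # Test 2: distance from the neighborhood mean in std units.
        far_from_mean = abs(c - neighbors.mean()) > t_var * (neighbors.std() + 1e-9)
        # Test 3: distance from the neighborhood median (quartile statistic).
        far_from_median = abs(c - np.median(neighbors)) > t_var * (neighbors.std() + 1e-9)
        # Test 4: the center coincides with the window minimum or maximum.
        is_extreme = c == window.min() or c == window.max()
        return bool(far_from_mean and far_from_median and is_extreme)
    return False
```

A pixel passes through unflagged unless every test agrees it is an impulse, which mirrors the abstract's goal of separating noise from noise-free pixels before any filtering is applied.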
3 citations
Cites background or methods from "An Efficient Image Denoising Approa..."
TL;DR: Across many tested images, the proposed method detects noisy pixels efficiently; adding the proposed filter as a preliminary stage to many filters improves their final results.
Abstract: This paper proposes a new approach for restoring images distorted by fixed-valued impulse noise. The detection process is based on finding the probability of existence of the image pixel. Extensive investigations indicate that the probability of existence of a pixel in an original image is bounded and has a maximum limit. The tested pixel is judged as original if its probability of existence is less than the threshold boundary. In many tested images, the proposed method detects the noisy pixels efficiently. Moreover, this method is very fast, easy to implement, and has outstanding performance when compared with other well-known methods. Therefore, if the proposed filter is added as a preliminary stage to many filters, the final results will be improved.
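The probability-of-existence idea can be illustrated with a histogram test: fixed-valued impulses concentrate on a few gray levels, so any level whose relative frequency exceeds a bound is suspect. This is a simplified sketch, and the bound `p_max` is an illustrative assumption, not the paper's derived limit.

```python
import numpy as np

def histogram_spike_detect(img, p_max=0.05):
    """Flag pixels whose gray level occurs with probability above
    `p_max` as candidates for fixed-valued impulse noise.

    A hedged stand-in for the paper's probability-of-existence test;
    the threshold value is assumed, not taken from the publication.
    """
    # Relative frequency of each of the 256 gray levels.
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    noisy_levels = np.flatnonzero(hist > p_max)
    # Boolean mask of suspected pixels, same shape as the image.
    return np.isin(img, noisy_levels)
```

Because the test is a single histogram pass, it is cheap enough to run before any heavier filter, matching the abstract's suggestion of using it as a preliminary stage.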
3 citations
Cites methods from "An Efficient Image Denoising Approa..."
TL;DR: The proposed deblurring method based on the Wiener filter improved the quality of the iris pattern in blurry images and recorded the fastest execution time for improving the quality of the iris pattern compared to the other methods.
Abstract: Iris recognition uses iris features to verify and identify human identity. The iris has many advantages, such as stability over time, ease of use, and high recognition accuracy. However, poor-quality iris images can degrade the accuracy of an iris recognition system, which depends on the quality of the iris pattern captured during acquisition. The iris pattern quality can degrade due to a blurry image, which results from movement during image acquisition and poor camera resolution. Therefore, a deblurring method based on the Wiener filter was proposed to improve the quality of the iris pattern. This work is significant since the proposed method can enhance the quality of the iris pattern in blurry images. Based on the results, the proposed method improved the quality of the iris pattern in blurry images. Moreover, it recorded the fastest execution time for improving the quality of the iris pattern compared to the other methods.
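Wiener deblurring of the kind described above can be sketched in the frequency domain. The blur kernel `psf` and the noise-to-signal ratio `k` are assumptions for illustration; the cited paper's actual kernel estimate and parameters are not given here.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Hedged frequency-domain Wiener deconvolution sketch.

    `psf` is the assumed blur kernel (e.g. motion or defocus), padded
    internally to the image size; `k` approximates the noise-to-signal
    power ratio. Both are illustrative, not the paper's settings.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k), applied in the Fourier domain.
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

The regularizer `k` prevents division by near-zero frequencies of `H`, which is what distinguishes the Wiener filter from naive inverse filtering and keeps it fast: the whole restoration is three FFTs.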
3 citations
Cites methods from "An Efficient Image Denoising Approa..."
TL;DR: The proposed approach for automatic landmark identification in 3D cephalometry was capable of detecting 12 landmarks on 3D CBCT images, which can facilitate the use of 3D cephalometry by orthodontists.
Abstract: This study proposes a new contribution to solve the problem of automatic landmark detection in three-dimensional cephalometry. 3D images obtained from CBCT (cone beam computed tomography) equipment were used for automatic identification of twelve landmarks. The proposed method is based on local geometry and intensity criteria of skull structures. After the preprocessing and binarization step, the algorithm segments the skull into three structures using the geometry information of the nasal cavity and the intensity information of the teeth. Each targeted landmark was detected using local geometrical information of the volume of interest containing that landmark. The ICC and confidence interval (95% CI) for each direction were 0.91 (0.75 to 0.96) for the x-direction, 0.92 (0.83 to 0.97) for the y-direction, and 0.92 (0.79 to 0.97) for the z-direction. The mean detection error was calculated using the Euclidean distance between the 3D coordinates of manually and automatically detected landmarks. The overall mean error of the algorithm was 2.76 mm with a standard deviation of 1.43 mm. The proposed approach for automatic landmark identification in 3D cephalometry was capable of detecting 12 landmarks on 3D CBCT images, which can facilitate the use of 3D cephalometry by orthodontists.
1 citation
Cites methods from "An Efficient Image Denoising Approa..."
TL;DR: This paper discusses the implementation of a novel Gaussian smoothing filter with low-power approximate adders on a Field Programmable Gate Array (FPGA); the filter is applied to restore noisy images in the proposed system.
Abstract: Smoothing filters are essential for noise removal and image restoration. Gaussian filters are used in many digital image and video processing systems, so hardware implementation of the Gaussian filter is a reliable solution for real-time image processing applications. This paper discusses the implementation of a novel Gaussian smoothing filter with low-power approximate adders in a Field Programmable Gate Array (FPGA). The proposed Gaussian filter is applied to restore the noisy images in the proposed system. Original test images with 512x512 pixels were taken and divided into 4x4 blocks with 256x256 pixels. The proposed technique was applied and the performance metrics were measured for various simulation criteria. The proposed algorithm is also implemented using approximate adders, since approximate adders have been recognized as a reliable alternative for error-tolerant applications, trading accuracy for gains in circuit-level metrics such as power, area, and delay.
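A software reference for the Gaussian smoothing the FPGA design implements might look like the following. This is a plain floating-point sketch (no approximate adders); the kernel size and sigma are illustrative defaults, not the paper's hardware parameters.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sample a normalized 2-D Gaussian; size and sigma are assumed."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smooth(img, size=5, sigma=1.0):
    """Direct reference convolution with edge padding.

    A hardware version would replace the multiply-accumulate loop with
    fixed-point arithmetic and (as in the paper) approximate adders.
    """
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out
```

Such a golden model is the usual way to validate an FPGA filter pipeline: the hardware output is compared block by block against the floating-point reference to quantify the error introduced by the approximate adders.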
1 citation
References
TL;DR: A decision-based, signal-adaptive median filtering algorithm for removal of impulse noise, which achieves accurate noise detection and high SNR measures without smearing the fine details and edges in the image.
Abstract: We propose a decision-based, signal-adaptive median filtering algorithm for removal of impulse noise. Our algorithm achieves accurate noise detection and high SNR measures without smearing the fine details and edges in the image. The notion of homogeneity level is defined for pixel values based on their global and local statistical properties. The cooccurrence matrix technique is used to represent the correlations between a pixel and its neighbors, and to derive the upper and lower bound of the homogeneity level. Noise detection is performed at two stages: noise candidates are first selected using the homogeneity level, and then a refining process follows to eliminate false detections. The noise detection scheme does not use a quantitative decision measure, but uses qualitative structural information, and it is not subject to burdensome computations for optimization of the threshold values. Empirical results indicate that our scheme performs significantly better than other median filters, in terms of noise suppression and detail preservation.
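The decision-based structure described above, detect first, then filter only the flagged pixels, can be sketched as follows. The simple extreme-value test here is a stand-in for the paper's homogeneity-level detector, which uses co-occurrence statistics rather than raw values.

```python
import numpy as np

def decision_median(img):
    """Hedged sketch of decision-based median filtering.

    Only pixels flagged as impulse candidates (here, value 0 or 255, a
    simplified stand-in for the homogeneity-level test) are replaced by
    the median of their 3x3 neighborhood; all other pixels pass through
    untouched, preserving edges and fine detail.
    """
    out = img.astype(float).copy()
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] in (0, 255):
                out[y, x] = np.median(p[y:y + 3, x:x + 3])
    return out
```

The key design point, shared with the cited algorithm, is that filtering is conditional: an unconditional median filter would smear detail everywhere, whereas the decision step limits smoothing to detected impulses.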
279 citations
"An Efficient Image Denoising Approa..." refers background in this paper
TL;DR: A new algorithm developed especially for reducing all kinds of impulse noise: the fuzzy impulse noise detection and reduction method (FIDRM), which can also be applied to images containing a mixture of impulse noise and other types of noise.
Abstract: Removing or reducing impulse noise is a very active research area in image processing. In this paper we describe a new algorithm developed especially for reducing all kinds of impulse noise: the fuzzy impulse noise detection and reduction method (FIDRM). It can also be applied to images containing a mixture of impulse noise and other types of noise. The result is an image quasi without (or with very little) impulse noise, so that other filters can be used afterwards. This nonlinear filtering technique contains two separate steps: an impulse noise detection step and a reduction step that preserves edge sharpness. Based on the concept of fuzzy gradient values, our detection method constructs a fuzzy set of impulse noise. This fuzzy set is represented by a membership function used by the filtering method, which is a fuzzy averaging of neighboring pixels. Experimental results show that FIDRM provides a significant improvement over other existing filters. FIDRM is not only very fast, but also very effective at reducing low as well as very high levels of impulse noise.
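The two-step fuzzy idea can be sketched compactly: a noise degree in [0, 1] is derived from gradient magnitudes toward the eight neighbors, and each pixel is blended with its neighborhood average in proportion to that degree. The linear ramp between `a` and `b` is an assumed stand-in for FIDRM's actual membership function, and both constants are illustrative.

```python
import numpy as np

def fuzzy_impulse_reduce(img, a=20.0, b=80.0):
    """Hedged sketch of fuzzy impulse detection and reduction.

    A robust summary (median) of the 8 neighbor gradients is mapped
    through a linear ramp to a fuzzy noise degree mu; the output blends
    the original pixel with the neighborhood mean by mu. The membership
    ramp and its constants are assumptions, not FIDRM's definitions.
    """
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(h):
        for x in range(w):
            win = p[y:y + 3, x:x + 3]
            grads = np.abs(win - win[1, 1])
            g = np.median(grads)  # robust summary of the 8 gradients
            mu = np.clip((g - a) / (b - a), 0.0, 1.0)  # fuzzy noise degree
            avg = (win.sum() - win[1, 1]) / 8.0
            out[y, x] = (1 - mu) * win[1, 1] + mu * avg
    return out
```

Using the median of the gradients rather than the mean is what protects edges: a pixel on a genuine edge has large gradients toward only some neighbors, so its median gradient, and hence its noise degree, stays small.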
256 citations
"An Efficient Image Denoising Approa..." refers methods in this paper
TL;DR: This work develops two adaptive restoration techniques: one operates in light space, where the relationship between the incident light and light space values is linear, while the second uses the transformed noise model to operate in image space.
Abstract: In this work, we propose a denoising scheme to restore images degraded by CCD noise. The CCD noise model, measured in the space of incident light values (light space), is a combination of signal-independent and signal-dependent noise terms. This model becomes more complex in image brightness space (normal camera output) due to the nonlinearity of the camera response function that transforms incoming data from light space to image space. We develop two adaptive restoration techniques, both accounting for this nonlinearity. One operates in light space, where the relationship between the incident light and light space values is linear, while the second method uses the transformed noise model to operate in image space. Both techniques apply multiple adaptive filters and merge their outputs to give the final restored image. Experimental results suggest that light space denoising is more efficient, since it enables the design of a simpler filter implementation. Results are given for real images with synthetic noise added, and for images with real noise.
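The light-space noise model, a signal-independent term plus a signal-dependent term, can be sketched as a simple synthesis routine. The additive-variance form and the coefficients `a` and `b` are illustrative assumptions standing in for the paper's measured model.

```python
import numpy as np

def ccd_noise(light, a=1e-4, b=1e-3, rng=None):
    """Hedged sketch of a light-space CCD noise model.

    The per-pixel noise variance is the sum of a signal-independent
    floor `a` and a signal-dependent term `b * light`. Both
    coefficients are assumed values for illustration only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = np.sqrt(a + b * light)
    return light + rng.normal(0.0, 1.0, light.shape) * sigma
```

A model like this explains why the paper prefers denoising in light space: there the variance is an affine function of the signal, whereas pushing it through a nonlinear camera response function makes the image-space variance considerably harder to characterize.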
151 citations
"An Efficient Image Denoising Approa..." refers methods in this paper
TL;DR: The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
Abstract: We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
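The adaptation idea, pulling a generic external prior toward the statistics of the noisy image itself, can be shown in its single-Gaussian special case. The full method adapts a Gaussian mixture via EM from a hyper-prior; the conjugate mean update below is a deliberately simplified stand-in, with `lam` an assumed pseudo-count strength.

```python
import numpy as np

def adapt_gaussian_prior(mu0, lam, samples):
    """Single-component stand-in for EM adaptation of a patch prior.

    A generic external mean `mu0`, weighted by pseudo-count `lam`, is
    combined with the noisy image's own patch statistics. This is the
    standard conjugate Gaussian-mean update, not the paper's full
    GMM/EM derivation.
    """
    n = len(samples)
    return (lam * mu0 + np.sum(samples, axis=0)) / (lam + n)
```

As `n` grows, the adapted prior is dominated by the image's own statistics; with few samples it falls back on the generic external prior, which is exactly the internal/external trade-off the abstract describes being derived rigorously from a Bayesian hyper-prior.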
46 citations
TL;DR: A normalized dichromatic model is proposed for pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively.
Abstract: Specular reflection exists widely in photography and causes the recorded color to deviate from its true value; thus, fast and high-quality highlight removal from a single natural image is of great importance. Despite progress over the past decades in highlight removal, achieving wide applicability across the large diversity of natural scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an $L_{2}$ chromaticity definition and a corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former, illumination-orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination-parallel subspace, a property called the pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculations, and thus can remove highlights from high-resolution images quickly. Experiments show that this method achieves superior performance in various challenging cases.
38 citations