Proceedings ArticleDOI

Internal noise-induced contrast enhancement of dark images

TL;DR: The proposed contrast enhancement technique, based on scaling the internal noise of a dark image in the discrete cosine transform (DCT) domain, adopts local adaptive processing and significantly enhances image contrast and color information while ensuring good perceptual quality.
Abstract: A contrast enhancement technique using scaling of the internal noise of a dark image in the discrete cosine transform (DCT) domain is proposed in this paper. The mechanism of enhancement is attributed to noise-induced transition of DCT coefficients from a poor state to an enhanced state. This transition is effected by the internal noise present due to lack of sufficient illumination and can be modeled by a general bistable system exhibiting dynamic stochastic resonance. The proposed technique adopts local adaptive processing and significantly enhances image contrast and color information while ensuring good perceptual quality. When compared with existing enhancement techniques such as adaptive histogram equalization, gamma correction, single-scale retinex, multi-scale retinex, modified high-pass filtering, multi-contrast enhancement, multi-contrast enhancement with dynamic range compression, color enhancement by scaling, edge-preserving multi-scale decomposition and the automatic controls of a popular imaging tool, the proposed technique gives remarkable performance in terms of relative contrast enhancement, colorfulness and visual quality of the enhanced image.
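The mechanism described in the abstract can be illustrated at a very high level: each DCT coefficient is treated as the input to an over-damped bistable system and iterated so that weak, under-illuminated coefficients are pushed toward an enhanced state. The sketch below is only a rough illustration of that idea, not the authors' algorithm; the double-well parameters a and b, the step size, the fixed iteration count and the use of a single global DCT (rather than local, block-wise adaptive processing) are all assumptions.

```python
# Minimal sketch of DCT-domain dynamic stochastic resonance (DSR) enhancement.
# Not the published algorithm: parameters and stopping rule are illustrative only.
import numpy as np
from scipy.fftpack import dct, idct

def dsr_enhance(gray, a=2.0, b=0.02, dt=0.01, iterations=20):
    """Iteratively evolve DCT coefficients through a bistable (double-well) system."""
    # 2-D DCT of the normalized dark image (values assumed in [0, 1])
    coeffs = dct(dct(gray, axis=0, norm='ortho'), axis=1, norm='ortho')
    x = np.zeros_like(coeffs)
    for _ in range(iterations):
        # Discretized over-damped bistable dynamics: dx/dt = a*x - b*x^3 + input
        x = x + dt * (a * x - b * x**3 + coeffs)
    out = idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
    # Rescale to [0, 1] for display
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```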
Citations
Journal ArticleDOI
TL;DR: In this paper, two stochastic resonance (SR)-based techniques are introduced for the enhancement of very low-contrast images, and an expression is derived for the optimum noise standard deviation σoptimum that maximises the signal-to-noise ratio (SNR).
Abstract: Two stochastic resonance (SR)-based techniques are introduced for the enhancement of very low-contrast images. In the proposed SR-based image enhancement technique-1, an expression for the optimum threshold has been derived. Gaussian noise of increasing standard deviation is added iteratively to the low-contrast image until the quality of the enhanced image reaches a maximum. A quantitative parameter, the distribution separation measure (DSM), is used to measure the enhancement quality. In order to reduce the required number of iterations, in the second enhancement technique the authors derive an expression for the optimum noise standard deviation σoptimum that maximises the signal-to-noise ratio (SNR). Image enhancement is obtained by iterating only over a few noise standard deviations around σoptimum, which reduces the number of iterations drastically. Comparison with existing methods shows the superiority of the proposed method.
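A minimal sketch of the iterative noise-addition idea is given below. It assumes a hard-threshold nonlinearity as the SR "detector", a fixed sweep of noise levels, and uses the global standard deviation of the output as a stand-in for the paper's DSM quality measure; none of these choices are taken from the paper itself.

```python
# Hedged sketch: noise-aided (stochastic resonance style) enhancement of a
# low-contrast image in [0, 1]. Threshold, noise sweep and the quality proxy
# are assumptions, not the published technique.
import numpy as np

def sr_enhance(low_contrast, threshold=0.5,
               sigmas=np.linspace(0.01, 0.5, 25), trials=30):
    rng = np.random.default_rng(0)
    best_img, best_score = low_contrast, -np.inf
    for sigma in sigmas:
        # Average many noisy threshold responses; an intermediate noise level
        # maximizes how well weak structure crosses the threshold.
        acc = np.zeros_like(low_contrast, dtype=float)
        for _ in range(trials):
            noisy = low_contrast + rng.normal(0.0, sigma, low_contrast.shape)
            acc += (noisy > threshold).astype(float)
        candidate = acc / trials
        score = candidate.std()          # proxy for the DSM quality measure
        if score > best_score:
            best_img, best_score = candidate, score
    return best_img
```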

47 citations

Journal ArticleDOI
TL;DR: The proposed modified-neuron-model-based stochastic resonance approach, applied to the enhancement of T1-weighted, T2-weighted, fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted imaging (DWI) sequences of magnetic resonance imaging, performs well and has been found helpful for improved diagnosis from MR images.

42 citations

Journal ArticleDOI
TL;DR: Suitability of the proposed RGB YCbCr Processing method is validated by real-time implementation during the testing of the Autonomous Underwater Vehicle (AUV-150) developed indigenously by CSIR-CMERI.
Abstract: An RGB YCbCr Processing method (RYPro) is proposed for underwater images, which commonly suffer from low contrast and poor color quality. The degradation in image quality may be attributed to absorption and backscattering of light by suspended underwater particles. Moreover, as the depth increases, different colors are absorbed by the surrounding medium depending on their wavelengths. In particular, blue/green color is dominant in the underwater ambience, which is known as color cast. For further processing of the image, enhancement remains an essential preprocessing operation. Color equalization is a widely adopted approach for underwater image enhancement; traditional methods normally involve blind color equalization of the image under test. In the present work, the processing sequence of the proposed method includes noise removal using linear and non-linear filters followed by adaptive contrast correction in the RGB and YCbCr color planes. Performance of the proposed method is evaluated and compared with three benchmark methods, namely Gray World (GW), White Patch (WP) and Adobe Photoshop Equalization (APE), and a recently developed method entitled "Unsupervised Color Correction Method (UCM)". In view of its simplicity and computational ease, the proposed method is recommended for real-time applications. Its suitability is validated by real-time implementation during the testing of the Autonomous Underwater Vehicle (AUV-150) developed indigenously by CSIR-CMERI.
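For illustration, a toy pipeline in the spirit of the description above might look as follows; the filter choice, percentile stretching and BT.601 conversion constants are assumptions made for this sketch and are not claimed to match RYPro.

```python
# Hedged sketch: denoise, then contrast correction in RGB and on the YCbCr luma.
import numpy as np
from scipy.ndimage import median_filter

def stretch(channel, low=1, high=99):
    """Percentile-based contrast stretch to [0, 1]."""
    lo, hi = np.percentile(channel, [low, high])
    return np.clip((channel - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def rypro_like(rgb):                                     # rgb float in [0, 1]
    den = median_filter(rgb, size=(3, 3, 1))             # noise removal
    r, g, b = (stretch(den[..., i]) for i in range(3))   # per-channel RGB correction
    y = 0.299 * r + 0.587 * g + 0.114 * b                # BT.601 luma
    cb, cr = 0.564 * (b - y), 0.713 * (r - y)            # chroma
    y = stretch(y)                                        # luma contrast correction
    r2 = y + 1.403 * cr                                   # back to RGB
    b2 = y + 1.773 * cb
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
```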

31 citations


Cites background or methods from "Internal noise-induced contrast enh..."

  • ...It is reported [26] that the quality of an enhanced image may be termed as good if the value of PQM is close to 10....


  • ...Metric of RCEF signifies the dynamic range of histogram calculated using the global variance (σ²) and mean (μ) of enhanced and original images [26] as defined in (11)....


Journal ArticleDOI
TL;DR: Experimental results show that the proposed method perceptually enhances contrast in low-light images while successfully minimizing distortions and preserving details, and performs perceptual noise suppression using the JND model.
Abstract: Low-light images are seriously corrupted by noise due to the low signal-to-noise ratio. At low intensity, the just-noticeable difference (JND) is high, and thus the noise is not perceived well by human eyes. However, after contrast enhancement the noise becomes obvious and severe, because the JND decreases as intensity increases. Thus, contrast enhancement without considering human visual perception causes serious noise amplification in low-light images. In this paper, we propose perceptual enhancement of low-light images based on two-step noise suppression, designed around noise characteristics corresponding to human visual perception. First, we perform noise-aware contrast enhancement using a noise-level function. However, the increase of intensity caused by contrast enhancement reduces the JND at low intensity, which makes noise much more visible to human eyes. Second, we perceptually reduce noise in images while preserving details using a JND model, which represents noise visibility under contrast enhancement. We estimate the noise visibility from the intensity change using luminance adaptation. Also, we extract image details by contrast masking and visual regularity, because textural regions have higher visibility thresholds than smooth ones. Based on these human visual characteristics, we perform perceptual noise suppression using the JND model. Experimental results show that the proposed method perceptually enhances contrast in low-light images while successfully minimizing distortions and preserving details.
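As a rough illustration of how a JND model can gate noise suppression, the sketch below uses a Chou-Li-style luminance-adaptation threshold and a simple gamma brightening in place of the paper's noise-level-function pipeline; both choices, and the 0.3 attenuation factor, are assumptions for illustration only.

```python
# Hedged sketch: attenuate detail that falls below a luminance-adaptation JND
# threshold after brightening, since such detail is likely amplified noise.
import numpy as np
from scipy.ndimage import uniform_filter

def jnd_luminance(bg):
    """Approximate visibility threshold as a function of background luminance (0-255)."""
    low = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    high = 3.0 / 128.0 * (bg - 127.0) + 3.0
    return np.where(bg <= 127, low, high)

def enhance_with_jnd(gray):                  # gray: uint8 luminance image
    g = gray.astype(float)
    bright = 255.0 * (g / 255.0) ** 0.6      # stand-in contrast enhancement step
    base = uniform_filter(bright, size=5)    # local background estimate
    detail = bright - base
    jnd = jnd_luminance(base)
    # Keep supra-threshold detail, attenuate sub-threshold (invisible) detail
    detail = np.where(np.abs(detail) < jnd, 0.3 * detail, detail)
    return np.clip(base + detail, 0, 255).astype(np.uint8)
```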

28 citations


Cites background from "Internal noise-induced contrast enh..."

  • ...[26] performed adaptive stochastic resonance-based enhancement of perceptual quality in terms of color and contrast....


Proceedings ArticleDOI
01 Sep 2015
TL;DR: Experimental results show that the proposed denoising-enhancement-completion algorithm removes noise and enhances the contrast of low-light images more effectively than conventional algorithms.
Abstract: A robust contrast enhancement algorithm for noisy low-light images, called the denoising-enhancement-completion (DEC), is proposed in this work. We observe that noise components in low-light images degrade the performance of the contrast enhancement. Therefore, we first reduce noise components in an input image. Then, we compute the reliability weight for each pixel, by measuring the difference between the input image and the denoised image, and categorize each pixel into one of two classes: noise-free or noisy. We perform the selective histogram equalization to enhance the contrast of the noise-free pixels only. Finally, we restore missing values of the noisy pixels using the enhanced noise-free pixel values, by employing a low-rank matrix completion scheme. Experimental results show that the proposed DEC algorithm removes noise and enhances the contrast of low-light images more effectively than conventional algorithms.
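A compressed sketch of this flow is given below. It keeps the denoise / classify / selective-equalization structure, but the threshold value is arbitrary and the final low-rank matrix completion step is replaced by a Gaussian-weighted neighborhood fill purely to keep the example short; it is not the DEC algorithm itself.

```python
# Hedged sketch of a DEC-style pipeline on a uint8 luminance image.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def dec_like(gray, tau=10):
    g = gray.astype(float)
    den = median_filter(g, size=3)                      # 1) denoise
    noisy = np.abs(g - den) > tau                       # 2) reliability / classification
    # 3) histogram equalization built from noise-free pixels only
    clean_vals = den[~noisy].astype(np.uint8)
    hist = np.bincount(clean_vals, minlength=256).astype(float)
    cdf = np.cumsum(hist) / hist.sum()
    eq = 255.0 * cdf[np.clip(den, 0, 255).astype(np.uint8)]
    # 4) fill noisy pixels from enhanced noise-free neighbours
    #    (simple stand-in for the paper's low-rank matrix completion)
    weights = gaussian_filter((~noisy).astype(float), sigma=2) + 1e-6
    filled = gaussian_filter(eq * (~noisy), sigma=2) / weights
    out = np.where(noisy, filled, eq)
    return np.clip(out, 0, 255).astype(np.uint8)
```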

15 citations


Cites background from "Internal noise-induced contrast enh..."

  • ...[5] exploited noise in a low-light image to boost details and obtain an enhanced brighter image....


References
Book ChapterDOI

2,671 citations

Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition and defines a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency (a computational analog for human vision color constancy), and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.
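The center/surround structure described above translates fairly directly into code. The sketch below follows the usual multi-scale retinex with color restoration recipe; the scale set, equal weights, color-restoration constants and the percentile-based gain/offset are common choices, not necessarily those used in the paper.

```python
# Hedged sketch of multi-scale retinex with color restoration (MSRCR).
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(rgb, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    img = rgb.astype(float) + 1.0                     # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:                              # multi-scale center/surround
        surround = gaussian_filter(img, sigma=(sigma, sigma, 0))
        msr += (np.log(img) - np.log(surround)) / len(sigmas)
    # Color restoration: weight each channel by its share of the total intensity
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = msr * crf
    # Simple percentile gain/offset to map the result into displayable range
    lo, hi = np.percentile(out, [1, 99])
    return np.clip((out - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```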

2,395 citations

Journal ArticleDOI
TL;DR: A practical implementation of the retinex is defined without particular concern for its validity as a model for human lightness and color perception, and the trade-off between rendition and dynamic range compression that is governed by the surround space constant is described.
Abstract: The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a "canonical" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition.

1,674 citations


"Internal noise-induced contrast enh..." refers background in this paper

  • ...Index Terms— Contrast enhancement, dynamic stochastic resonance, bistable system, double well parameters...


Journal ArticleDOI
TL;DR: The system consists of a novel device for online palmprint image acquisition and an efficient algorithm for fast palmprint recognition, and a robust image coordinate system is defined to facilitate image alignment for feature extraction.
Abstract: Biometrics-based personal identification is regarded as an effective method for automatically recognizing, with a high confidence, a person's identity. This paper presents a new biometric approach to online personal identification using palmprint technology. In contrast to the existing methods, our online palmprint identification system employs low-resolution palmprint images to achieve effective personal identification. The system consists of two parts: a novel device for online palmprint image acquisition and an efficient algorithm for fast palmprint recognition. A robust image coordinate system is defined to facilitate image alignment for feature extraction. In addition, a 2D Gabor phase encoding scheme is proposed for palmprint feature extraction and representation. The experimental results demonstrate the feasibility of the proposed system.
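The Gabor phase encoding step mentioned in the abstract can be sketched roughly as follows; the single filter orientation and frequency, the kernel size, and the matching rule are illustrative assumptions rather than the parameters of the deployed system.

```python
# Hedged sketch: 2-D Gabor phase encoding of an aligned palmprint region and
# normalized Hamming-distance matching of the resulting bit codes.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=17, sigma=4.0, freq=0.1, theta=np.pi / 4):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)  # complex Gabor filter

def palm_code(roi):                                   # roi: float 2-D array
    resp = fftconvolve(roi, gabor_kernel(), mode='same')
    return resp.real >= 0, resp.imag >= 0             # 2 phase bits per pixel

def hamming_distance(code_a, code_b):
    bits_a = np.concatenate([c.ravel() for c in code_a])
    bits_b = np.concatenate([c.ravel() for c in code_b])
    return np.mean(bits_a != bits_b)                  # 0 means identical codes
```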

1,416 citations

Journal ArticleDOI
01 Aug 2008
TL;DR: This paper advocates the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction.
Abstract: Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large scale variations in intensity, and a residual detail layer capturing the smaller scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts. In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current base-detail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.
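The weighted least squares (WLS) operator at the heart of this decomposition can be sketched compactly: the base layer minimizes a data term plus gradient penalties whose weights shrink across strong edges of the log-luminance. Parameter values below follow common practice and are not claimed to match the paper exactly; the sketch is only practical for small images because it solves the sparse system directly.

```python
# Hedged sketch of WLS edge-preserving smoothing and base/detail decomposition.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_decompose(gray, lam=1.0, alpha=1.2, eps=1e-4):
    g = gray.astype(float)
    h, w = g.shape
    n = h * w
    log_l = np.log(g + eps)
    # Smoothness weights are small across strong gradients, so edges are preserved
    wx = 1.0 / (np.abs(np.diff(log_l, axis=1))**alpha + eps)   # shape (h, w-1)
    wy = 1.0 / (np.abs(np.diff(log_l, axis=0))**alpha + eps)   # shape (h-1, w)
    idx = np.arange(n).reshape(h, w)
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    vals = np.concatenate([wx.ravel(), wy.ravel()])
    W = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
    W = W + W.T                                                 # symmetric neighbour weights
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W         # weighted graph Laplacian
    A = sp.eye(n) + lam * L
    base = spsolve(A.tocsc(), g.ravel()).reshape(h, w)          # piecewise smooth base layer
    detail = g - base                                           # residual detail layer
    return base, detail
```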

1,381 citations