Journal ArticleDOI

Enhancement of dark and low-contrast images using dynamic stochastic resonance

TL;DR: The internal noise of an image has been utilised to produce a noise-induced transition of a dark image from a state of low contrast to that of high contrast.
Abstract: In this study, a dynamic stochastic resonance (DSR)-based technique in the spatial domain is proposed for the enhancement of dark and low-contrast images. Stochastic resonance (SR) is a phenomenon in which the performance of a system (here, a low-contrast image) can be improved by the addition of noise. In the proposed work, however, the internal noise of the image itself is utilised to produce a noise-induced transition of a dark image from a state of low contrast to one of high contrast. DSR is applied iteratively by correlating the bistable system parameters of a double-well potential with the intensity values of the low-contrast image. Optimum output is ensured by adaptive computation of performance metrics: the relative contrast enhancement factor (F), perceptual quality measures and a colour enhancement factor. When compared with existing enhancement techniques such as adaptive histogram equalisation, gamma correction, single-scale retinex, multi-scale retinex, modified high-pass filtering, edge-preserving multi-scale decomposition and the automatic controls of popular imaging tools, the proposed technique delivers significantly better contrast and colour enhancement as well as perceptual quality. A comparison with a spatial-domain SR-based technique is also illustrated.
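For intuition, the iterative DSR update described above can be sketched as follows. This is a minimal illustration assuming the commonly used discretized double-well dynamics; the bistable parameters a and b, the step size dt and the iteration count are illustrative placeholders, whereas the paper derives them adaptively from the image and its metrics.

```python
import numpy as np

def dsr_enhance(img, a=2.0, b=1.0, dt=0.01, n_iter=100):
    """Sketch of iterative dynamic stochastic resonance enhancement.

    Discretized double-well dynamics: x <- x + dt * (a*x - b*x**3 + input),
    where the low-contrast image itself acts as the noisy input signal.
    Parameter values here are illustrative, not the paper's adaptive choices.
    """
    x = np.zeros_like(img, dtype=np.float64)
    inp = img.astype(np.float64) / 255.0             # normalize intensities to [0, 1]
    for _ in range(n_iter):
        x = x + dt * (a * x - b * x**3 + inp)        # Euler update of the bistable system
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # rescale to [0, 1] for display
    return (255.0 * x).astype(np.uint8)
```

In the paper, iteration stops adaptively when the relative contrast enhancement factor F (together with the perceptual and colour metrics) peaks, rather than after a fixed count as in this sketch.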
Citations
Journal ArticleDOI
TL;DR: A low-intricacy technique for contrast enhancement is proposed, and its performance is exhibited against various versions of histogram-based enhancement techniques using three advanced image quality assessment metrics: the Universal Image Quality Index (UIQI), the Structural Similarity Index (SSIM), and the Feature Similarity Index (FSIM).
Abstract: Image contrast is an essential visual feature that determines whether an image is of good quality. In computed tomography (CT), captured images tend to be of low contrast, a prevalent artifact that reduces image quality and hampers the extraction of useful information. A common tactic for addressing this artifact is histogram-based processing. However, although such techniques may improve contrast in various grayscale imaging applications, the results are mostly unacceptable for CT images owing to various faults: noise amplification, excess brightness, and imperfect contrast. Therefore, an ameliorated version of contrast-limited adaptive histogram equalization (CLAHE) is introduced in this article to provide good brightness with decent contrast for CT images. The novel modification is an initial phase comprising a normalized gamma correction function, which adjusts the gamma of the processed image and avoids the excess brightness and imperfect contrast that basic CLAHE produces. The newly developed technique is tested on synthetic and real degraded low-contrast CT images, for which it produces results of markedly better quality. Moreover, a low-intricacy technique for contrast enhancement is proposed, and its performance is also exhibited against various versions of histogram-based enhancement techniques using three advanced image quality assessment metrics: the Universal Image Quality Index (UIQI), the Structural Similarity Index (SSIM), and the Feature Similarity Index (FSIM). Finally, the proposed technique provides acceptable results with no visible artifacts and outperforms all comparable techniques.
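A minimal sketch of the described two-phase pipeline, assuming OpenCV's CLAHE implementation; the fixed gamma here is a simple stand-in for the paper's normalized gamma correction function, and the clip limit and tile size are illustrative.

```python
import cv2
import numpy as np

def gamma_then_clahe(img_gray, gamma=0.8, clip_limit=2.0, tile=(8, 8)):
    """Gamma-correct a grayscale CT image, then apply CLAHE.

    A fixed-gamma stand-in for the paper's normalized gamma correction
    phase, followed by standard contrast-limited adaptive histogram
    equalization on the adjusted image.
    """
    norm = img_gray.astype(np.float64) / 255.0        # map to [0, 1]
    corrected = np.power(norm, gamma)                 # gamma correction
    corrected = (255.0 * corrected).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(corrected)                     # CLAHE on the gamma-adjusted image
```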

86 citations


Cites background or methods from "Enhancement of dark and low-contrast images using dynamic stochastic resonance"

  • ...These methods consist of log and power-law transformations [14,15]; low-pass, highpass, homomorphic filtering [3]; histogram equalization [16]; contrast stretching [17]; normalization [18]; and sigmoid function [19]....


  • ...As a result, it provides a better conception of murky images to enhance visual understanding and to enable precise interpretation [3]....


Journal ArticleDOI
TL;DR: An adaptive coupled-neuron method with multi-objective optimization is proposed to enhance the identification of incipient fault signatures in machinery, overcoming the drawback that blind noise suppression and cancellation techniques tend to remove weak useful signatures closely related to machine health states.
Abstract: Organisms can sense subtle changes in their environment, such as temperature, vibration and magnetic field, because a biological neural network, in which millions of neurons are interconnected by synapses, is able to utilize noise to amplify such subtle changes and then encode and transmit them to trigger the corresponding biological responses. This favorable use of noise can be improved by coupling two different neurons, like synapses in organisms, to enhance the identification of weak useful signatures. Inspired by this mechanism, we investigate the benefits of noise in coupled neurons for weak-signature identification and propose an adaptive coupled-neuron method with multi-objective optimization to enhance the identification of incipient fault signatures in machinery. It overcomes the drawback of blind noise suppression and cancellation techniques, which are prone to removing the weak useful signatures closely related to machine health states and cannot cancel strong background noise. In the proposed method, both the signal-to-noise ratio (SNR) and the residence-time distribution ratio serve as the multi-objective function, and genetic algorithms optimize the adjusting parameters of the coupled neurons and the rescaling factor simultaneously. Finally, two rolling element bearing experiments, a double-row bearing run-to-failure experiment and a high-speed train bearing experiment, were performed to demonstrate the feasibility and effectiveness of the proposed method for incipient mechanical fault diagnosis. The experimental results show that the proposed method not only enhances weak fault signature identification by coupling two different neurons but is also superior to filter-based methods.
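The coupled-neuron idea can be illustrated with two symmetrically coupled overdamped bistable units driven by the same noisy signal. This is a generic sketch, not the authors' exact model; a, b, the coupling strength k and the time step are the kind of quantities the paper tunes with a genetic algorithm against SNR and the residence-time distribution ratio.

```python
import numpy as np

def coupled_bistable(signal, a=1.0, b=1.0, k=0.5, dt=0.01):
    """Sketch of two coupled overdamped bistable 'neurons' driven by the
    same noisy 1-D input array; the coupling term lets each unit exploit
    the other's state, loosely mimicking synaptic interaction.

    a, b, k and dt are illustrative; in the paper they are optimized
    jointly (with a rescaling factor) by a genetic algorithm.
    """
    x = y = 0.0
    out = np.empty_like(signal, dtype=np.float64)
    for i, s in enumerate(signal):
        dx = a * x - b * x**3 + k * (y - x) + s
        dy = a * y - b * y**3 + k * (x - y) + s
        x, y = x + dt * dx, y + dt * dy   # Euler step for both units
        out[i] = x                        # read the response of one unit
    return out
```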

53 citations

Journal ArticleDOI
TL;DR: The results showed that the recognition rates for abnormal cervical epithelial cells were 92.7% and 93.2% when the C4.5 or logistic regression (LR) classifier was used individually, and significantly higher (95.642%) when the two-level cascade integrated classifier system was used.
Abstract: We proposed a method for automatic detection of cervical cancer cells in images captured from thin liquid-based cytology slides. We selected 20,000 cells in images derived from 120 different thin liquid-based cytology slides, including 5000 epithelial cells (2500 normal, 2500 abnormal), lymphoid cells, neutrophils, and junk cells. We first proposed 28 features, comprising 20 morphologic features and 8 texture features, based on the characteristics of each cell type. We then used a two-level cascade integration system of two classifiers to classify the cervical cells into normal and abnormal epithelial cells. The results showed that the recognition rates for abnormal cervical epithelial cells were 92.7% and 93.2%, respectively, when the C4.5 classifier or the logistic regression (LR) classifier was used individually, and significantly higher (95.642%) when our two-level cascade integrated classifier system was used. The false negative and false positive rates (both 1.44%) of the proposed automatic two-level cascade classification system are also much lower than those of traditional Pap smear review.
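A hedged sketch of such a two-level cascade, using scikit-learn's DecisionTreeClassifier as a stand-in for C4.5 (which scikit-learn does not provide) together with LogisticRegression; the confidence-gating rule, the threshold and the synthetic 28-dimensional features are illustrative assumptions, not necessarily the authors' integration scheme.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier  # stand-in for C4.5

def cascade_predict(tree, lr, X, threshold=0.9):
    """Two-level cascade: accept the tree's label when its class
    probability is confident, otherwise defer to logistic regression."""
    proba = tree.predict_proba(X)
    confident = proba.max(axis=1) >= threshold
    pred = tree.predict(X)
    pred[~confident] = lr.predict(X[~confident])  # second-level decision
    return pred

# Synthetic stand-in for the paper's 28 morphologic/texture features.
X, y = make_classification(n_samples=500, n_features=28, random_state=0)
tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)
labels = cascade_predict(tree, lr, X)
```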

52 citations


Cites methods from "Enhancement of dark and low-contrast images using dynamic stochastic resonance"

  • ...Therefore, it is necessary to do some preprocessing of the images before cell segmentation to assure the accuracy of the analysis [22, 23]....


Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm provides better visual quality than previous methods and a consistent power-saving ratio without flickering artifacts, even for video sequences; several optimization techniques enable real-time processing.
Abstract: This paper presents a power-constrained contrast enhancement algorithm for organic light-emitting diode (OLED) displays based on multiscale retinex (MSR). MSR, the key component of the proposed algorithm, consists of a power-controllable log operation and sub-bandwise gain control. First, we decompose an input image into MSRs of different sub-bands and compute a proper gain for each MSR. Second, we apply a coarse-to-fine power control mechanism that recomputes the MSRs and gains; this step iterates until the target power saving is accurately accomplished. For video sequences, the contrast levels of adjacent images are determined consistently using temporal coherence in order to avoid flickering artifacts. Finally, we present several optimization techniques for real-time processing. Experimental results show that the proposed algorithm provides better visual quality than previous methods, and a consistent power-saving ratio without flickering artifacts, even for video sequences.
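The sub-band MSR core can be sketched as follows, assuming Gaussian surrounds; the gains and the single power_scale knob are crude illustrative stand-ins for the paper's sub-bandwise gain control and coarse-to-fine power control mechanism.

```python
import cv2
import numpy as np

def msr_power_controlled(img_gray, sigmas=(15, 80, 250),
                         gains=(1.0, 1.0, 1.0), power_scale=0.9):
    """Sketch of sub-band multiscale retinex with a simple power knob.

    Each sub-band is log(I) - log(Gaussian_sigma * I); power_scale < 1
    crudely mimics the paper's power-controllable operation by dimming
    the reconstructed output. All values are illustrative.
    """
    I = img_gray.astype(np.float64) + 1.0             # avoid log(0)
    out = np.zeros_like(I)
    for sigma, g in zip(sigmas, gains):
        surround = cv2.GaussianBlur(I, (0, 0), sigma) # center/surround term
        out += g * (np.log(I) - np.log(surround))     # one sub-band MSR
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255.0 * power_scale * out).astype(np.uint8)
```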

48 citations

Journal ArticleDOI
TL;DR: A modified neuron-model-based stochastic resonance approach, applied to the enhancement of T1-weighted, T2-weighted, fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted imaging (DWI) magnetic resonance sequences, performs well and has been found helpful for better diagnosis from MR images.

42 citations

References
Journal ArticleDOI
TL;DR: In this paper, it was shown that a dynamical system subject to both periodic forcing and random perturbation may show a resonance (a peak in the power spectrum) which is absent when either the forcing or the perturbation is absent.
Abstract: It is shown that a dynamical system subject to both periodic forcing and random perturbation may show a resonance (peak in the power spectrum) which is absent when either the forcing or the perturbation is absent.
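For context, the bistable setting studied here is commonly written as an overdamped double-well system driven by a weak periodic signal plus noise; the following is the standard textbook form, not the paper's exact notation:

```latex
\frac{dx}{dt} = ax - bx^{3} + A\cos(\Omega t) + \sqrt{2D}\,\xi(t)
```

Here the drift derives from the double-well potential U(x) = -(a/2)x^2 + (b/4)x^4, xi(t) is white noise of intensity D, and the spectral peak at the forcing frequency is maximized at an intermediate, nonzero D; that cooperative maximum is the resonance.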

2,774 citations

Book ChapterDOI

2,671 citations

Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression, color consistency and lightness rendition, and defines a method of color restoration that corrects the poor color rendition produced on images violating the retinex's implicit gray-world assumption, at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency (a computational analog for human vision color constancy), and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression, color consistency and lightness rendition. This extension fails to produce good color rendition for a class of images that violate the gray-world assumption implicit in the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.
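The multiscale retinex with color restoration described above is commonly summarized by the following standard formulas from the retinex literature; the notation is illustrative, not copied from the paper:

```latex
R_i(x,y) = \sum_{k=1}^{K} w_k \Big[ \log I_i(x,y) - \log\big( F_k(x,y) * I_i(x,y) \big) \Big],
\qquad
R_i^{\mathrm{MSRCR}}(x,y) = C_i(x,y)\, R_i(x,y)
```

where I_i is the i-th color channel, F_k are Gaussian surround functions at K scales with weights w_k, and C_i(x,y) = beta * log[alpha * I_i(x,y) / sum_j I_j(x,y)] is the color restoration factor that compensates for gray-world violations.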

2,395 citations

Journal ArticleDOI
TL;DR: A practical implementation of the retinex is defined without particular concern for its validity as a model for human lightness and color perception, and the trade-off between rendition and dynamic range compression that is governed by the surround space constant is described.
Abstract: The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a "canonical" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition.

1,674 citations

Journal ArticleDOI
01 Aug 2008
TL;DR: This paper advocates the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction.
Abstract: Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large-scale variations in intensity, and a residual detail layer capturing the smaller-scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts. In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current base-detail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.
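The weighted least squares operator referred to here is conventionally posed as the minimization below, balancing fidelity to the input image g against spatially varying smoothness of the output u; the notation is illustrative:

```latex
\min_{u} \sum_{p} \left( \big(u_p - g_p\big)^2
+ \lambda \left[ a_{x,p}(g) \left( \frac{\partial u}{\partial x} \right)^{\!2}_{p}
+ a_{y,p}(g) \left( \frac{\partial u}{\partial y} \right)^{\!2}_{p} \right] \right)
```

The smoothness weights a_{x,p}, a_{y,p} shrink near strong gradients of g, so larger lambda produces progressively coarser base layers while edges survive; solving for u reduces to a sparse linear system.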

1,381 citations