
Contextual and Variational Contrast Enhancement

01 Dec 2011-IEEE Transactions on Image Processing (IEEE)-Vol. 20, Iss: 12, pp 3431-3441
TL;DR: An algorithm that enhances the contrast of an input image using interpixel contextual information is proposed; it produces enhanced images that are better than or comparable to those of four state-of-the-art algorithms.
Abstract: This paper proposes an algorithm that enhances the contrast of an input image using interpixel contextual information. The algorithm uses a 2-D histogram of the input image constructed using a mutual relationship between each pixel and its neighboring pixels. A smooth 2-D target histogram is obtained by minimizing the sum of Frobenius norms of the differences from the input histogram and the uniformly distributed histogram. The enhancement is achieved by mapping the diagonal elements of the input histogram to the diagonal elements of the target histogram. Experimental results show that the algorithm produces enhanced images that are better than or comparable to those of four state-of-the-art algorithms.

Summary (2 min read)

Introduction

  • This paper proposes an algorithm which enhances the contrast of an input image using inter-pixel contextual information.
  • Global histogram equalization does not always produce satisfactory enhancement for images with large spatial variation in contrast.
  • The enhancement process is based on the observation that contrast can be improved by increasing the grey-level differences between the pixels of an input image and their neighbours.
  • In the 2D histogram, for each grey-level in the input image, the distribution of other grey-levels in the neighbourhood of the corresponding pixel is computed.

A. Grey-Scale Image Enhancement

  • Fig. 1 shows the input image and its 2D histogram using a 7 × 7 neighbourhood.
  • The matrix multiplication in Eq. (9) results in a matrix which holds the differences between horizontally adjacent elements of the matrix H.
  • The target histogram is obtained by minimizing f(H) according to H_t = argmin_H f(H) (Eq. (12)). A closed-form solution is obtained by replacing the norm operation with the square of the Frobenius norm (also known as the Euclidean norm), which is defined as the square root of the sum of the absolute squares of its elements.

B. Colour Image Enhancement

  • One approach to extend the contrast enhancement to colour images is to apply the algorithm to their luminance component (Y) only and preserve the chrominance components.
  • Another is to multiply the chrominance values with the ratio of their input and output luminance values to preserve the hue.
  • The YUV colour space [1] is selected because the conversion between RGB and YUV colour spaces is linear, which considerably reduces the computational complexity of contrast enhancement in colour images.
  • Fig. 4 shows the enhancement of the Baboon colour image.
  • It shows that the contrast of the input image has been increased while the details of the input image are retained.
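The luminance-ratio idea above can be sketched as follows. This is a hedged illustration, not the paper's exact recipe: the BT.601 YUV coefficients, the clipping range, and the direction in which the ratio is applied are my assumptions.

```python
import numpy as np

def enhance_colour(rgb, enhance_luma):
    """Enhance only the luminance channel; scale both chrominance channels
    by the output/input luminance ratio (one common hue-preserving variant)."""
    # Linear RGB -> YUV (BT.601 coefficients); linearity keeps conversion cheap
    M = np.array([[0.299, 0.587, 0.114],
                  [-0.14713, -0.28886, 0.436],
                  [0.615, -0.51499, -0.10001]])
    yuv = rgb @ M.T
    y_in = yuv[..., 0]
    y_out = enhance_luma(y_in)
    ratio = np.where(y_in > 0, y_out / y_in, 1.0)
    yuv_out = np.stack([y_out,
                        yuv[..., 1] * ratio,
                        yuv[..., 2] * ratio], axis=-1)
    return yuv_out @ np.linalg.inv(M).T   # back to RGB

# Toy example: a single pixel, luminance boosted by a factor of 1.5
rgb = np.array([[[0.2, 0.4, 0.6]]])
out = enhance_colour(rgb, lambda y: np.clip(1.5 * y, 0.0, 1.0))
```

Because Y, U and V are all scaled by the same factor and the transform is linear, the RGB triple is scaled uniformly, which is exactly what preserves the hue.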

A. Dataset and Quantitative Measures

  • The authors use standard test images from the datasets in [19]–[21] to evaluate and compare CVC, both qualitatively and quantitatively, with their implementations of WTHE [9], FHSABP [15], the weighted histogram approximation of HMF [16], and CEBGA [17].
  • The tests of significance of the quantitative measures are performed on 500 natural images of the Berkeley dataset [21].
  • Nevertheless, in practice it is desirable to have both quantitative and subjective assessments.
  • For a contrast enhancement algorithm it is, at least, expected that EME (Y) > EME (X).
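As a rough illustration of such a measure, one common EME formulation (due to Agaian et al.) averages, over non-overlapping blocks, the log-ratio of the block's maximum to minimum grey-level. The block size, the epsilon guard, and this exact variant are my assumptions; the paper's definition may differ in detail.

```python
import numpy as np

def eme(img, block=8, eps=1e-4):
    """Measure of enhancement: mean over non-overlapping blocks of
    20*log10(Imax/Imin); eps guards flat blocks against division issues."""
    H, W = img.shape
    vals = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            b = img[i:i + block, j:j + block].astype(float)
            vals.append(20 * np.log10((b.max() + eps) / (b.min() + eps)))
    return float(np.mean(vals))

flat = np.full((16, 16), 100, dtype=np.uint8)          # no contrast
contrasty = np.tile(np.array([[10, 200],
                              [200, 10]], dtype=np.uint8), (8, 8))
e_flat = eme(flat)
e_contrasty = eme(contrasty)
```

A flat image scores essentially zero, while a high-contrast checkerboard scores high, matching the expectation EME(Y) > EME(X) for a successful enhancement.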

B. Qualitative Assessment

  • Some example contrast enhancement results for grey-scale images are shown in Fig. 5 and Fig. 6.
  • The input-to-output grey-level mapping functions resulting from the different algorithms are shown in Fig. 7(a)-(b).
  • CVC improves the overall contrast while preserving the image details.
  • This is confirmed by its mapping function being almost parallel to the no-change mapping function.
  • For the Fishingboat image [20] in Fig. 8(a), with a mean brightness value of 114, FHSABP darkens some areas of the sky, sea and dock, and brightens parts of the boat and dock.

C. Quantitative Assessment

  • Sets of MB, DE and EME are computed from the original and enhanced images.
  • The non-parametric two-sample Kolmogorov-Smirnov test [23] is used to reject one of the hypotheses.
  • In order to keep the visual correspondence between original and enhanced images in terms of brightness, the mean brightness values of the original and enhanced images should be proportional.
  • In order to test if an algorithm achieves DE preservation, the sets {DE(Xi)}∀i and {DE(Yi)}∀i resulting from the original and enhanced images, respectively, are used.
  • The resulting p-values are shown in Table I. At the 95% confidence level, none of the algorithms rejects H0.
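The testing procedure can be sketched as follows, with synthetic stand-ins for the per-image DE values (the actual values come from the 500 Berkeley images). The KS statistic is computed directly, and 1.358·sqrt(2/n) is the standard large-sample critical value at α = 0.05.

```python
import numpy as np

def ks_2sample_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, evaluated at all samples."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
n = 500
de_original = rng.normal(7.0, 0.3, size=n)               # hypothetical DE values
de_enhanced = de_original + rng.normal(0.0, 0.001, size=n)  # near-identical set
D = ks_2sample_stat(de_original, de_enhanced)
critical = 1.358 * np.sqrt(2.0 / n)     # alpha = 0.05 critical value
preserved = D < critical                # fail to reject H0 -> DE preserved
```

Here the enhanced DE values barely differ from the originals, so the test fails to reject H0, i.e. entropy is judged preserved.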

D. The Effect of Different Parameter Settings

  • Further improvement can be achieved by varying the parameters.
  • In order to see the effects of the parameters on the performance of the enhancement using the quantitative measures, two parameters are set to their default values while the other two parameters are varied.
  • This similarity lowers the value of AMBE and preserves the overall content, which results in a higher DE; however, it also lowers the EME, since there will not be much difference between the input and output images.
  • An increase in the value of β increases the contribution of the 2D uniformly distributed histogram Hu, thus the resultant image will have a higher contrast.
  • The plots for AMBE, DE, and EME suggest that CVC achieves better enhancement with a larger w × w local support.

E. The Effect of Contrast Enhancement on Object Recognition

  • Contrast enhancement is often applied as a pre-processing for object recognition.
  • To demonstrate the effects, face is selected as an object, and face recognition is performed on images of the ORL face database [24] enhanced by different methods.
  • Each training image is projected onto the eigenvectors, and a set of projection vectors is stored for each subject.
  • For each subject in the database, five images are used for training and the remaining five images as query images.
  • This indicates that not only CVC improves the contrast, it also preserves the overall content of the image.
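The projection-and-nearest-neighbour scheme described above can be sketched generically. Synthetic clusters stand in for the ORL faces, and plain PCA via SVD stands in for the paper's eigenvector computation; none of the names below come from the paper.

```python
import numpy as np

def fit_eigenfaces(train, n_components=4):
    """PCA on flattened images: return the mean and the top principal
    directions (generic eigenface sketch)."""
    mean = train.mean(axis=0)
    # SVD of the centered data yields the principal directions in vt's rows
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(faces, mean, components):
    return (faces - mean) @ components.T

rng = np.random.default_rng(0)
# Two hypothetical "subjects": noisy samples around two distinct templates
t1, t2 = rng.normal(size=64), rng.normal(size=64)
train = np.vstack([t1 + 0.1 * rng.normal(size=(5, 64)),
                   t2 + 0.1 * rng.normal(size=(5, 64))])
mean, comps = fit_eigenfaces(train)
gallery = project(train, mean, comps)           # stored projection vectors
query = project(t1 + 0.1 * rng.normal(size=(1, 64)), mean, comps)
# Nearest stored vector identifies the subject (indices 0-4 are subject 1)
nearest = int(np.argmin(np.linalg.norm(gallery - query, axis=1)))
```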

F. Computational Complexity

  • The computational complexities of the different algorithms except CEBGA are analysed for an input image of size H × W pixels with K distinct grey levels.
  • Using the tests of significance on 500 natural images from the Berkeley dataset, it is shown that CVC achieves brightness preservation, discrete entropy preservation, and contrast improvement at the 95% confidence level.


Author(s): Celik, T. and Tjahjadi, T.
Article Title: Contextual and Variational Contrast Enhancement
Year of publication: 2011
Link to published article:
http://dx.doi.org/10.1109/TIP.2011.2157513

Contextual and Variational Contrast Enhancement
Turgay Celik and Tardi Tjahjadi, Senior Member, IEEE
Abstract
This paper proposes an algorithm which enhances the contrast of an input image using inter-pixel contextual information.
The algorithm uses a two-dimensional (2D) histogram of the input image constructed using the mutual relationship between each
pixel and its neighbouring pixels. A smooth 2D target histogram is obtained by minimizing the sum of Frobenius norms of the
differences from the input histogram and the uniformly distributed histogram. The enhancement is achieved by mapping the
diagonal elements of the input histogram to the diagonal elements of the target histogram. Experimental results show that the
algorithm produces enhanced images that are better than or comparable to those of four state-of-the-art algorithms.
Index Terms
Contrast enhancement, histogram equalization, image quality enhancement, face recognition.
I. INTRODUCTION
Contrast enhancement is used to either increase the contrast of an image with low dynamic range or to bring out image
details that would otherwise be hidden [1]. The enhanced image looks subjectively better than the original image as the grey
level differences (i.e., the contrast) among objects and background are increased.
The conventional approach to enhance the contrast in an image is to manipulate the grey-level of individual pixels. Global
histogram equalization (GHE) [1] uses an input-to-output mapping derived from the cumulative distribution function (CDF) of
the image histogram. Although GHE utilizes the available grey scale of the image, it tends to over-enhance the image if there
are large peaks in the histogram, resulting in a harsh and noisy appearance of the enhanced image. It does not always produce
satisfactory enhancement for images with large spatial variation in contrast. Local histogram equalization (LHE) algorithms
have been developed, e.g., [2], [3], to address the aforementioned problems. These algorithms use a small window that slides
over every image pixel sequentially and the histogram of pixels within the current position of the window is equalized. LHE
sometimes over-enhances some portion of the image and any noise, and may produce undesirable checkerboard effects.
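The CDF-derived mapping that GHE uses can be sketched in a few lines. This is a minimal illustration, not code from the paper:

```python
import numpy as np

def global_histogram_equalization(img, levels=256):
    """Map each grey-level through the normalized CDF of the image histogram,
    rescaled to the full output range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size              # normalized CDF in [0, 1]
    mapping = np.round(cdf * (levels - 1)).astype(img.dtype)
    return mapping[img]                           # lookup-table application

# Example: a low-contrast image confined to grey-levels 100..130
rng = np.random.default_rng(0)
x = rng.integers(100, 131, size=(64, 64), dtype=np.uint8)
y = global_histogram_equalization(x)
```

The narrow input range is stretched across the full 0-255 scale, which is exactly the behaviour that over-enhances images with large histogram peaks.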
Other algorithms that focus on improving GHE [4]–[9] can achieve satisfactory contrast enhancement, but the variation in
the grey-level distribution may result in image degradation [10]. Dynamic histogram specification (DHS) [10] uses the desired
histogram, generated dynamically from the input image, to modify the input image histogram. In order to retain the features
in the input image histogram, DHS extracts the differential information from the input image histogram and incorporates
additional parameters to control the enhancement such as the image original and the resultant gain control values. However,
the degree of enhancement that can be achieved is not significant. In order to address the artefacts due to over-enhancement
and saturation of grey levels of GHE, the original image histogram is modified by weighting and thresholding before the
histogram equalization in [9]. The weighting and thresholding are performed by clamping the original image histogram at an
This work was supported by Warwick University Vice Chancellor Scholarship.
Turgay Celik and Tardi Tjahjadi are with the School of Engineering, University of Warwick, Coventry, CV4 7AL, United Kingdom. e-mail: Turgay Celik
(celikturgay@gmail.com), Tardi Tjahjadi (t.tjahjadi@warwick.ac.uk).

upper threshold and at a lower threshold, and transforming all the values between these thresholds using a normalized power
law function with an index. We refer to the algorithm as weighted thresholded histogram equalization (WTHE). WTHE provides
satisfactory enhancement with a carefully selected default parameter setting.
One group of algorithms decompose an input image into different subbands so as to modify, globally or locally, the magnitude
of the desired frequency components of the image data using multiscale analysis [11]–[14]. These algorithms enable the
simultaneous global and local contrast enhancement by transforming the appropriate subbands and in the appropriate scales.
For example, the centre-surround Retinex algorithm [11] achieves lightness and colour constancy in images. However, the
enhanced image may include “halo” artefacts, especially along boundaries between large uniform regions. A “greying out” can
also occur resulting in the image of the scene tending to middle grey.
Optimisation methods have also been used for contrast enhancement. Convex optimisation is used in flattest histogram
specification with accurate brightness preservation (FHSABP) [15] to transform the input image histogram into the flattest
histogram, subject to a mean brightness constraint. This is followed by applying an exact histogram specification algorithm
to preserve the image brightness. FHSABP behaves very similarly to GHE when the grey levels of the input image are equally
distributed. Since it is designed to preserve the average brightness, FHSABP may produce low-contrast results when the average
brightness is either too low or too high. Contrast enhancement in histogram modification framework (HMF) [16] minimizes a
cost function to compute a target histogram. The cost function is composed of penalty terms of minimum histogram deviation
from the original and uniform histograms, and histogram smoothness. Furthermore, the edge information is embedded into the
cost function to weight pixels around region boundaries to address noise and black/white stretching [16]. In order to design a
parameter-free contrast enhancement algorithm, a genetic algorithm (GA) is employed in [17] to find a target histogram which
maximizes a contrast measure based on edge information. We refer to this algorithm as contrast enhancement based on GA
(CEBGA). CEBGA suffers from the drawbacks of GA-based algorithms, namely dependency on initialization and convergence
to a local optimum. Furthermore, the convergence time is proportional to the number of distinct grey levels in the input image.
All the above approaches use a 1-dimensional (1D) histogram. Other than HMF [16], they do not take into account the
contextual information content in the image. HMF [16] uses the image edge information to weight the 1D histogram.
We propose a contextual and variational contrast enhancement algorithm (CVC) to improve the visual quality of input images
as follows. Images with low-contrast are improved in terms of an increase in dynamic range. Images with sufficiently high
contrast are also improved but not as much. The colour quality is improved in terms of colour consistency, higher contrast
between foreground and background objects, a larger dynamic range, and more visible image details. The enhancement process
is based on the observation that contrast can be improved by increasing the grey-level differences between the pixels of an
input image and their neighbours. Furthermore, for the purpose of image equalization, grey-level differences should be equally
distributed over the entire input image. To realise these observations, a 2D histogram of the input image is constructed and
modified with an a priori probability which assigns higher probability to high grey-level differences, and vice versa. In the 2D
histogram, for each grey-level in the input image, the distribution of other grey-levels in the neighbourhood of the corresponding
pixel is computed. A smooth 2D target histogram is obtained by minimizing the sum of Frobenius norms of the differences
from the 2D input histogram, and the 2D uniformly distributed histogram. The contrast enhancement is achieved by mapping
the diagonal elements of the 2D input histogram to the diagonal elements of the 2D target histogram.
The paper is organized as follows. Section II presents the proposed CVC. Section III presents the subjective and quantitative
comparisons of CVC with four state-of-the-art enhancement techniques. Section IV concludes the paper.

Fig. 1. The input image (a) and its 2D histogram (b) using a 7 × 7 neighbourhood. For display purposes, h_x(m, n) is shown in logarithmic scale.
II. PROPOSED ALGORITHM (CVC)
A. Grey-Scale Image Enhancement
Consider an input image, X = {x(i, j) | 1 ≤ i ≤ H, 1 ≤ j ≤ W}, of size H × W pixels, where x(i, j) ∈ [0, Z⁺], and
assume that X has a dynamic range of [x_d, x_u], where x(i, j) ∈ [x_d, x_u]. The objective of CVC is to generate an enhanced
image, Y = {y(i, j) | 1 ≤ i ≤ H, 1 ≤ j ≤ W}, which has a better visual quality than X. The dynamic range of Y can be
stretched or compressed into the interval [y_d, y_u], where y(i, j) ∈ [y_d, y_u], y_d < y_u and {y_d, y_u} ∈ [0, Z⁺]. In this work, the
enhanced image utilizes the entire dynamic range, e.g., for an 8-bit image y_d = 0 and y_u = 2⁸ − 1 = 255.
Let X̄ = {x_1, x_2, . . . , x_K} be the sorted set of all possible K grey-levels that can occur in an input image X, where
x_1 < x_2 < . . . < x_K, and K = 256 for an 8-bit image. The 2D histogram of the input image X is computed as

H_x = { h_x(m, n) | 1 ≤ m ≤ K, 1 ≤ n ≤ K },   (1)

where h_x(m, n) ∈ [0, Z⁺] is the number of occurrences of the nth grey-level (x_n) in the neighbourhood of the mth grey-level
(x_m). Different types of neighbourhood can be employed; however, for a typical implementation of CVC a w × w neighbourhood
around each pixel is considered. For example, Fig. 1 shows the input image and its 2D histogram using a 7 × 7 neighbourhood.
The image has more bright regions than dark regions, thus its histogram has larger values located at higher grey-values. In
homogeneous regions, the neighbours of each pixel have very similar grey-levels, which results in higher peaks at diagonal or
near-diagonal elements of the histogram.
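A direct, unoptimized sketch of the 2D histogram of Eq. (1) follows. Border handling by clipping, and counting the centre pixel in its own neighbourhood, are my assumptions; the paper does not spell these details out here.

```python
import numpy as np

def histogram_2d(img, w=7, K=256):
    """h[m, n]: number of occurrences of grey-level n inside the w-by-w
    neighbourhood of every pixel whose own grey-level is m (Eq. (1))."""
    H, W = img.shape
    r = w // 2
    h = np.zeros((K, K), dtype=np.int64)
    for i in range(H):
        for j in range(W):
            m = img[i, j]
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            h[m] += np.bincount(patch.ravel(), minlength=K)
    return h

# Tiny 3-level example with a 3x3 neighbourhood
x = np.array([[0, 0, 1],
              [0, 2, 1],
              [1, 1, 2]], dtype=np.uint8)
h2d = histogram_2d(x, w=3, K=3)
```

Since pixel q lies in p's window exactly when p lies in q's, the resulting matrix is symmetric, and homogeneous regions pile counts onto the diagonal, as the text describes.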
For an improved contrast there should be larger grey-level differences between the pixel under consideration and its
neighbours. Thus, the 2D histogram is modified according to

h_x(m, n) = h_x(m, n) · h_p(x_m, x_n)   (2)

and

h_p(x_m, x_n) = (|x_m − x_n| + 1) / (x_K − x_1 + 1),   (3)

where h_p(x_m, x_n) ∈ [0, 1] assigns a weight to the occurrences of (x_m, x_n) which is proportional to the modulus of the
grey-level difference between x_m and x_n. The 2D histogram shown in Fig. 1(b) is updated as shown in Fig. 2(b) using the
h_p(x_m, x_n) shown in Fig. 2(a). It is clear from Fig. 2(a) that h_p(x_m, x_n) assigns higher weights to the components according
to their distance from the diagonal elements. Thus, h_p(x_m, x_n) enhances larger differences, which results in greater contrast
in the overall image.
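Eqs. (2)-(3) amount to an element-wise reweighting of the 2D histogram, which can be sketched as follows (the random matrix is only a stand-in for an actual h_x):

```python
import numpy as np

K = 256
x_levels = np.arange(K)   # x_1 .. x_K for an 8-bit image (0 .. 255)

# Eq. (3): weight grows with the grey-level difference |x_m - x_n|
h_p = (np.abs(x_levels[:, None] - x_levels[None, :]) + 1) \
      / (x_levels[-1] - x_levels[0] + 1)

# Eq. (2): element-wise modulation of the 2D histogram
rng = np.random.default_rng(0)
h_x = rng.integers(0, 100, size=(K, K)).astype(float)  # stand-in histogram
h_x_weighted = h_x * h_p
```

Diagonal entries (equal grey-levels) get the minimum weight 1/K, while the corners (maximal difference) get weight 1, so large inter-pixel differences are emphasized.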

Fig. 2. Updating the 2D histogram shown in Fig. 1(b): (a) h_p(x_m, x_n) computed using Eq. (3); and (b) updated 2D histogram h_x(m, n) using Eq. (2). For display purposes, h_x(m, n) is shown in logarithmic scale.
The updated 2D histogram H_x is normalized according to

h_x(m, n) = h_x(m, n) / ( Σ_{i=1}^{K} Σ_{j=1}^{K} h_x(i, j) )   (4)

to give a CDF

P_x = { P_x(m) = Σ_{i=1}^{m} Σ_{j=1}^{m} h_x(i, j) | m = 1, . . . , K }.   (5)
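Eqs. (4)-(5) can be checked numerically; note that P_x(m) sums the top-left m-by-m block of the normalized histogram (a random matrix stands in for the updated histogram, and a small K keeps the example readable):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8
h = rng.random((K, K))          # stand-in for the updated 2D histogram

h_norm = h / h.sum()            # Eq. (4): normalize so all entries sum to 1

# Eq. (5): P_x(m) is the mass of the top-left m-by-m block
P = np.array([h_norm[:m, :m].sum() for m in range(1, K + 1)])
```

Because each step enlarges the summed block, P is non-decreasing and reaches 1 at m = K, as a CDF must.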
Let Ȳ = {y_1, y_2, . . . , y_K} be the sorted set of all possible K grey-levels that can occur in the output image Y, where y_1 < y_2 <
. . . < y_K. In order to map the elements of X̄ to the elements of Ȳ, it is necessary to determine a K × K target histogram
H_t and its CDF P_t. In order to equally enhance every possible occurrence of grey-levels of the input image pixels and their
neighbours, H_t can be selected as a 2D uniformly distributed histogram

H_u = { h_u(m, n) = 1/K² | 1 ≤ m ≤ K, 1 ≤ n ≤ K }.   (6)

However, such a selection does not consider the contribution of the 2D input histogram. Instead, H_t should have a minimum
distance from the input histogram, i.e.,

H_t = argmin_H ‖H − H_x‖,   (7)

where ‖·‖ computes the norm. Motivated by the maximum entropy principle, H_t should also have a minimum distance from
the uniformly distributed histogram, i.e.,

H_t = argmin_H ‖H − H_u‖.   (8)

Furthermore, in order to satisfy a smooth mapping, H_t should have minimum deviations between its components, i.e.,

H_t = argmin_H ‖HD‖,   (9)
where D ∈ R^{K×K} is a K × K bidiagonal difference matrix

D = | d −d  0  · · ·  0  0  0 |
    | 0   d −d  · · ·  0  0  0 |
    | ⋮   ⋮   ⋮   ⋱    ⋮  ⋮  ⋮ |
    | 0   0   0  · · ·  0  d −d |
    | 0   0   0  · · ·  0  0  d |,   (10)

where d is a constant which is set to 1. The matrix multiplication in Eq. (9) results in a matrix which holds the differences
between horizontally adjacent elements of the matrix H. The vertical elements can also be considered; however, the enhancement
result would not change significantly.
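The difference matrix of Eq. (10) and the horizontal differences it produces can be checked numerically (d = 1 and a small K for illustration; the sign placement follows the reconstructed Eq. (10)):

```python
import numpy as np

K = 4
d = 1.0
# Eq. (10): d on the main diagonal, -d on the superdiagonal
D = d * np.eye(K) - d * np.eye(K, k=1)

H = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for a histogram
HD = H @ D   # column j (j > 0) holds d * (H[:, j] - H[:, j-1])
```

With each row of H increasing in steps of 1, every column of HD beyond the first equals 1, confirming that HD measures deviations between horizontally adjacent components — the smoothness term of Eq. (9).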

Citations
Journal ArticleDOI
TL;DR: Experiments on a number of challenging low-light images are present to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-arts in terms of enhancement quality and efficiency.
Abstract: When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts in terms of enhancement quality and efficiency.

1,364 citations


Cites methods from "Contextual and Variational Contrast..."

  • ...Further, variational methods aim to improve the HE performance by imposing different regularization terms on the histogram....


Journal ArticleDOI
TL;DR: An automatic transformation technique that improves the brightness of dimmed images via the gamma correction and probability distribution of luminance pixels and uses temporal information regarding the differences between each frame to reduce computational complexity is presented.
Abstract: This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images via the gamma correction and probability distribution of luminance pixels. To enhance video, the proposed image-enhancement method uses temporal information regarding the differences between each frame to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods.

795 citations


Cites background or methods from "Contextual and Variational Contrast..."

  • ...The Contextual and Variational Contrast (CVC) enhancement method is more effective at showing the visual quality of the image, because it directly constructs an a priori probability, which further represents details of the image [17]....
  • ...To facilitate searching neighborhoods via the CVC method, the radius has been fixed to 3 [17]....
  • ...Quantitative evaluation of contrast enhancement is not an easy task, because an acceptable criterion by which to quantify the improved perception has yet to be proposed [1], [17], [18]....
  • ...This section summarizes the experimental results produced by nine HM and HM-based methods, including THE, BBHE [8], DSIHE [9], RSIHE [12], RSWHE [13], DCRGC [15], AWMHE [16], and CVC [17], along with our proposed AGCWD method....
  • ...To alleviate the previously discussed problem, the two-dimensional (2-D) histogram is used to generate contextual and variational information in the image [17], while the Gaussian Mixture Model (GMM) can also be used to compensate for the gray-level distribution of the image [18]....

Journal ArticleDOI
TL;DR: This paper proposes to use the convolutional neural network (CNN) to train a SICE enhancer, and builds a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-Exposure sequences with 4,413 images.
Abstract: Due to the poor lighting condition and limited dynamic range of digital imaging devices, the recorded images are often under-/over-exposed and with low contrast. Most of previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail in revealing image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this paper, we propose to use the convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate the contrast enhanced images for each sequence, and subjective experiments are conducted to screen the best quality one as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposure image. Experimental results demonstrate the advantages of our method over existing SICE methods with a significant margin.

632 citations


Cites background or methods from "Contextual and Variational Contrast..."

  • ...histogram-based methods (CVC [5] and AGCWD [6]), Retinex-based methods (NEP [8], SRIE [3] and LIME [1]) and Li’s method [59]....
  • ...The codes of [1], [3], [8], and [59] are from the original authors, and [5], [6] are from a contrast enhancement toolbox....
  • ...Histogram-based methods [4], [5] have been widely used because of their simplicity in enhancing low-contrast images....

Journal ArticleDOI
TL;DR: A fusion-based method for enhancing various weakly illuminated images that requires only one input to obtain the enhanced image and represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image.
Abstract: We propose a straightforward and efficient fusion-based method for enhancing weakly illuminated images that uses several mature image processing techniques. First, we employ an illumination estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the first decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions such as backlighting, non-uniform illumination and nighttime can be enhanced. Highlights: A fusion-based method for enhancing various weakly illuminated images is proposed. The proposed method requires only one input to obtain the enhanced image. Different mature image processing techniques can be blended in our framework. Our method has an efficient computation time for practical applications.

464 citations


Cites background or methods from "Contextual and Variational Contrast..."

  • ...In this experiment, we use 40 different kinds of weakly illumination images and enhance them using [1,10,16,20,22,23] and our proposed method....
  • ...Input images and other six methods [1,10,16,20,22,23] correspond to 0....
  • ...Meanwhile, CVC [10] generates obvious over-enhanced results when the brightness darkens....
  • ...For example, in [10] contextual and variational contrast enhancement (CVC) is performed by a histogram mapping that emphasizes large gray-level differences....
  • ...The proposed method requires a slightly longer running time than LDR [1] and GUM [23], while significantly less time than CVC [10], NEPA [20] and GOLW [22]....

Journal ArticleDOI
TL;DR: A novel contrast enhancement algorithm based on the layered difference representation of 2D histograms is proposed, which enhances images efficiently in terms of both objective quality and subjective quality.
Abstract: A novel contrast enhancement algorithm based on the layered difference representation of 2D histograms is proposed in this paper. We attempt to enhance image contrast by amplifying the gray-level differences between adjacent pixels. To this end, we obtain the 2D histogram h(k, k+l) from an input image, which counts the pairs of adjacent pixels with gray-levels k and k+l, and represent the gray-level differences in a tree-like layered structure. Then, we formulate a constrained optimization problem based on the observation that the gray-level differences, occurring more frequently in the input image, should be more emphasized in the output image. We first solve the optimization problem to derive the transformation function at each layer. We then combine the transformation functions at all layers into the unified transformation function, which is used to map input gray-levels to output gray-levels. Experimental results demonstrate that the proposed algorithm enhances images efficiently in terms of both objective quality and subjective quality.

445 citations


Cites background or methods from "Contextual and Variational Contrast..."

  • ...The layered difference representation (LDR): the transformation function x = [x0, x1, · · · , x255]T and the difference variables dlk’s are shown in a tree-like pyramidal structure....
  • ...We then reconstruct the transformation function x from d and transform the input image to the output image....

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Journal ArticleDOI
TL;DR: This paper investigates two fundamental problems in computer vision: contour detection and image segmentation and presents state-of-the-art algorithms for both of these tasks.
Abstract: This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.

5,068 citations


"Contextual and Variational Contrast..." refers background or methods in this paper

  • ...The input-to-output gray-level mapping functions that resulted from different algorithms are shown in Fig....


  • ...However, for a contrast-enhancement algorithm it is, at least, expected that ....


Journal ArticleDOI
TL;DR: A new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation that is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction.
Abstract: In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.
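The core of 2DPCA as described above, building the image covariance (scatter) matrix directly from the 2D image matrices and projecting each image onto its leading eigenvectors, can be sketched compactly. The function name and return convention are assumptions for illustration:

```python
import numpy as np

def twod_pca(images, d):
    """images: array of shape (M, m, n).
    Returns the top-d projection axes (n, d) derived from the image
    covariance matrix, and the feature matrices of shape (M, m, d)."""
    A = np.asarray(images, dtype=float)
    Abar = A.mean(axis=0)                      # mean image
    B = A - Abar
    # image covariance matrix: (1/M) * sum_k B_k^T B_k, built directly
    # from the 2D matrices, with no vectorization step
    G = np.einsum('kij,kil->jl', B, B) / len(A)
    evals, evecs = np.linalg.eigh(G)           # ascending eigenvalues
    X = evecs[:, ::-1][:, :d]                  # top-d eigenvectors
    return X, A @ X                            # axes and projected features
```

Because each image stays an m-by-n matrix, G is only n-by-n, which is the source of the computational advantage over vectorized PCA that the abstract reports.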

3,439 citations

Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition and defines a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.
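The multiscale retinex described above averages single-scale center/surround outputs, log(I) minus the log of a Gaussian-blurred surround, over several scales. The sketch below uses equal weights, a plain NumPy separable blur, and illustrative sigma values; none of these are claimed to match the paper's exact settings, and the color-restoration step is omitted.

```python
import numpy as np

def _gauss_blur(img, sigma):
    """Separable Gaussian blur with edge padding (a stand-in for the
    surround convolution; a real pipeline would use an imaging library)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    blur1d = lambda c: np.convolve(np.pad(c, r, mode='edge'), k, 'valid')
    img = np.apply_along_axis(blur1d, 0, img)   # blur down columns
    return np.apply_along_axis(blur1d, 1, img)  # then along rows

def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1e-6):
    """Equal-weight MSR on one channel: mean over scales of
    log(I) - log(I convolved with a Gaussian surround of scale sigma)."""
    I = np.asarray(channel, dtype=float) + eps  # avoid log(0)
    out = np.zeros_like(I)
    for s in sigmas:
        out += np.log(I) - np.log(_gauss_blur(I, s))
    return out / len(sigmas)
```

On a perfectly flat region the surround equals the center, so the output is zero, reflecting that retinex responds to local ratios rather than absolute intensity.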

2,395 citations

Journal ArticleDOI
TL;DR: Data Analysis by Resampling is a useful and clear introduction to resampling that would make an ambitious second course in statistics or a good third or later course and is quite well suited for self-study by an individual with just a few previous statistics courses.
Abstract: described and related to one another and to the different resampling methods is also notable. This is especially useful for the book’s target audience, for whom such concepts may not yet have taken root. On the computational side, the book may be a little less satisfying. Step-by-step computational algorithms are at some times inefficient and at other times cryptic so that an individual with little programming experience might have difficulty applying them. This problem is substantially offset by the presence of numerous detailed examples solved using existing software, providing readers roughly equal exposure to S-PLUS, SC, and Resampling Stats. Unfortunately, these examples often require large, complex programs, demonstrating as much as anything a need for better resampling software. On the whole, Data Analysis by Resampling is a useful and clear introduction to resampling. It would make an ambitious second course in statistics or a good third or later course. It is quite well suited for self-study by an individual with just a few previous statistics courses. Although it would be miscast as a graduate-level textbook or as a research reference—for one thing it lacks a thorough bibliography to make up for its surface treatment of many of the topics it covers—it is a very nice book for any reader seeking an introductory book on resampling.

1,840 citations


"Contextual and Variational Contrast..." refers background in this paper

  • ...Its performance is similar to WTHE and HMF, except at lower gray levels, where it provides brighter output, and at higher gray levels, where it provides darker output....
