
Showing papers on "Histogram equalization published in 1991"


Journal ArticleDOI
TL;DR: The authors develop algorithms for the design of hierarchical tree-structured color palettes incorporating performance criteria that reflect subjective evaluations of image quality; the algorithms produce higher-quality displayed images and require fewer computations than previously proposed methods.
Abstract: The authors develop algorithms for the design of hierarchical tree structured color palettes incorporating performance criteria which reflect subjective evaluations of image quality. Tree structured color palettes greatly reduce the computational requirements of the palette design and pixel mapping tasks, while allowing colors to be properly allocated to densely populated areas of the color space. The algorithms produce higher-quality displayed images and require fewer computations than previously proposed methods. Error diffusion techniques are commonly used for displaying images which have been quantized to very few levels. Problems related to the application of error diffusion techniques to the display of color images are discussed. A modified error diffusion technique is shown to be easily implemented using the tree structured color palettes developed earlier.

543 citations


Journal Article
TL;DR: This work applies the vector quantization algorithm proposed by Equitz to the problem of efficiently selecting colors for a limited image palette and incorporates activity measures both at the initial quantization step and at the merging step so that quantization is fine in smooth regions and coarse in active regions.

71 citations


Journal ArticleDOI
TL;DR: The exponential hull, a variation of the upper convex hull, is defined for a histogram and its properties allow it to be used in criteria for choosing the number and locations of thresholds for gray-level image segmentation from the image intensity histogram.

45 citations


Patent
09 Jul 1991
TL;DR: In this paper, the authors proposed a histogram projection system which automatically optimizes, tracks changes in luminance and adjusts in real time the display of wide dynamic range imagery from IR cameras.
Abstract: A histogram projection system which automatically optimizes, tracks changes in luminance and adjusts in real time the display of wide dynamic range imagery from IR cameras. It is computationally simpler than and offers markedly superior results to the standard available technique for this purpose, histogram equalization. The new technique assigns display dynamic range equally to each occupied intensity level in the raw data in contrast to the old procedure which assigns dynamic range in proportion to the number of pixels at given levels. Less shot noise and greater resolution of image detail for smaller objects or targets are the main improvements from the new algorithm. By the expedient of undersampling the image pixels in carrying out the histogram processing, one can in effect gradually increase the degree of dynamic range assigned to majority or background pixel levels, thereby enhancing the contrast in background regions when desired.
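The core idea of the patent, assigning an equal share of the display range to each occupied raw intensity level rather than weighting by pixel count, can be sketched in a few lines. This is a minimal illustration under our own assumptions: the function name `histogram_projection`, the integer rank-to-range mapping, and the toy 12-bit data are ours, not taken from the patent, and undersampling of the image pixels is omitted.

```python
import numpy as np

def histogram_projection(raw, out_levels=256):
    """Map each occupied raw intensity level to an equal share of the
    display range (histogram equalization would instead weight each
    level by the number of pixels it contains)."""
    occupied = np.unique(raw)              # occupied intensity levels, sorted
    ranks = np.arange(occupied.size)       # rank of each occupied level
    # spread the ranks evenly over the display range
    display = (ranks * (out_levels - 1)) // max(occupied.size - 1, 1)
    lut = dict(zip(occupied.tolist(), display.tolist()))
    return np.vectorize(lut.get)(raw)

# toy 12-bit raw data occupying only four levels -> full 8-bit spread
raw = np.array([[100, 100, 2000], [2000, 3000, 4095]])
print(histogram_projection(raw))
```

Because a level occupied by a single pixel receives the same display share as a level occupied by thousands, small objects and targets are not crowded out by large uniform backgrounds, which is the behaviour the abstract claims over standard histogram equalization.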

25 citations


Proceedings ArticleDOI
13 Oct 1991
TL;DR: A one-step histogram specification method is presented to overcome some weaknesses of the traditional method and reduce the contouring effect which was caused by the traditional two-step method.
Abstract: In the traditional algorithm, two steps are needed to construct a mapping function from an original histogram to an arbitrarily specified histogram. One of the problems observed with the traditional histogram specification technique is the contouring effect in the general image. A one-step histogram specification method is presented to overcome some weaknesses of the traditional method. First, the cause of the contouring effect in the traditional two-step histogram transformation method is analyzed. Then, the one-step histogram specification algorithm is developed to avoid this problem. From the analysis and experimental results, it can be seen that the proposed one-step histogram specification has reduced the contouring effect which was caused by the traditional two-step method. The advantage of the presented approach is to minimize the local errors between the desired histogram and the resulting histogram.
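For reference, the traditional two-step method that this paper analyzes maps the source histogram through its cumulative distribution and then inverts the target's cumulative distribution. The sketch below shows only that baseline under our own assumptions (function name, nearest-CDF inversion via `searchsorted`); the paper's one-step variant is not reproduced here.

```python
import numpy as np

def specify_histogram(img, target_hist):
    """Traditional two-step histogram specification:
    step 1 equalizes the source via its CDF, step 2 inverts the
    target CDF by finding the closest-reaching target level."""
    levels = len(target_hist)
    src_hist = np.bincount(img.ravel(), minlength=levels)
    src_cdf = np.cumsum(src_hist) / img.size
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    # for each source level, first target level whose CDF reaches it
    mapping = np.searchsorted(tgt_cdf, src_cdf)
    return mapping.clip(0, levels - 1)[img]

img = np.array([[0, 0, 1], [1, 2, 3]])
print(specify_histogram(img, [1, 1, 1, 1]))   # specify a uniform target
```

The quantized, many-to-one composition of the two lookup tables is what produces the contouring the paper attributes to this scheme.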

21 citations


Proceedings ArticleDOI
08 Apr 1991
TL;DR: Histogram equalization is performed on the codebook of a tree-structured vector quantizer and encoding with the resulting codebook performs both compression and contrast enhancement.
Abstract: Histogram equalization is performed on the codebook of a tree-structured vector quantizer. Encoding with the resulting codebook performs both compression and contrast enhancement. It is also possible to perform intensity windowing on the codebook, or a combination of intensity windowing and histogram equalization so that these need not be separate post-processing steps.

18 citations


Book ChapterDOI
07 Jul 1991
TL;DR: Two new methods based on the use of local structural information, in particular edge strengths, in defining contextual regions are presented and discussed, namely edge-affected unsharp masking followed by contrast-limited adaptive histogram equalization (AHE), and diffusive histograms equalization, a variant of AHE in which weighted contextual areas are calculated by edge- affected diffusion.
Abstract: Contrast enhancement is a fundamental step in the display of digital images. The end result of display is the perceived brightness occurring in the human observer; design of effective contrast enhancement mappings therefore requires understanding of human brightness perception. Recent advances in this area have emphasized the importance of image structure in determining our perception of brightnesses, and consequently contrast enhancement methods which attempt to use structural information are being widely investigated. In this paper we present two promising methods we feel are strong competitors to presently-used techniques. We begin with a survey of contrast enhancement techniques for use with medical images. Classical adaptive algorithms use one or more statistics of the intensity distribution of local image areas to compute the displayed pixel values. More recently, techniques which attempt to take direct account of local structural information have been developed. The use of this structural information, in particular edge strengths, in defining contextual regions seems especially important. Two new methods based on this idea are presented and discussed, namely edge-affected unsharp masking followed by contrast-limited adaptive histogram equalization (AHE), and diffusive histogram equalization, a variant of AHE in which weighted contextual regions are calculated by edge-affected diffusion. Results on typical medical images are given.
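The classical pointwise adaptive histogram equalization (AHE) that both of the paper's new methods build on can be sketched directly: each pixel is remapped to its rank within a square context region centred on it. This is our own minimal baseline sketch; the function name is ours, and the paper's edge-affected weighting of the contextual region and the contrast limiting are deliberately not reproduced.

```python
import numpy as np

def adaptive_hist_eq(img, radius=1, out_levels=256):
    """Pointwise adaptive histogram equalization: remap each pixel to
    its rank among the pixels of a square context window around it.
    (Edge-affected / diffusive weighting of the window, as in the
    paper's methods, is omitted from this sketch.)"""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.int64)
    for i in range(h):
        for j in range(w):
            win = img[max(i - radius, 0):i + radius + 1,
                      max(j - radius, 0):j + radius + 1]
            rank = np.count_nonzero(win < img[i, j])  # pixels darker than centre
            out[i, j] = rank * (out_levels - 1) // max(win.size - 1, 1)
    return out
```

The fixed square window is exactly what the paper's methods replace: when the window straddles a strong edge, unrelated intensities contaminate the local histogram, hence the interest in defining contextual regions from edge strengths instead.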

16 citations


Proceedings ArticleDOI
14 Apr 1991
TL;DR: An approach for the compression of color images with limited palette size that does not require color quantization of the decoded image is presented and significantly reduces the decoder computational complexity.
Abstract: An approach for the compression of color images with limited palette size that does not require color quantization of the decoded image is presented. The technique restricts the pixels of the decoded image to take values only in the original palette. Thus, the decoded image can be readily displayed without having to be quantized. Results obtained with a typical image are included to compare a conventional coding scheme to the proposed one. For comparable quality and bit rates, the proposed technique significantly reduces the decoder computational complexity.

12 citations


Patent
19 Aug 1991
TL;DR: In this article, a translation point is determined either by selecting the approximate midpoint between the foreground information and the background information in each histogram as the translation point, or by selecting the statistical average of the histogram's color data distribution as the translation point.
Abstract: Replication of a two-color original image with foreground and background colors exchanged is effected on a received signal representing the color information of each successive pixel of a two-color original image by generating a histogram of the color data distribution for each color plane of the original image, determining a translation point within each histogram, and performing a histogram translation of the color plane image data about the translation point. The translation point may be determined by selecting the approximate midpoint between the foreground information and the background information in each histogram as the translation point, or by selecting the statistical average of each histogram as the translation point. Once the translation point is determined, a new color data value for each pixel in a color plane is selected by subtracting the pixel's original color data value from twice the color data value of the translation point.
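The translation itself is a single arithmetic rule from the abstract: each new value is the pixel's original value subtracted from twice the translation point, i.e. a reflection new = 2*T - old about T. The sketch below uses the patent's "statistical average" option for T; the function name and the clipping of reflected values to an assumed 8-bit range are our own additions.

```python
import numpy as np

def swap_foreground_background(plane):
    """Histogram translation about a translation point T:
    new = 2*T - old reflects every value about T, exchanging the
    roles of foreground and background. T here is the statistical
    average of the plane (one of the patent's two options);
    clipping to [0, 255] is our assumption."""
    t = int(round(plane.mean()))       # translation point
    return (2 * t - plane).clip(0, 255)

# dark text (20) on a light background (220): the values swap roles
plane = np.array([20, 20, 220, 220, 220, 220])
print(swap_foreground_background(plane))
```

Note that when T is not the exact midpoint of the two clusters, the reflection pushes one cluster past the end of the range, so some clipping (or rescaling) is unavoidable in a fixed-depth color plane.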

9 citations


Journal ArticleDOI
TL;DR: A new technique is presented to address the problem of enhancement of significant features in relation to the irrelevant background information in the pre-processing stage of automatic pattern recognition systems, called background discriminant transformation (BDT).
Abstract: The success of automatic pattern recognition systems depends on the enhancement of significant features in relation to the irrelevant background information in the pre-processing stage. In images, the objects of relevance are generally enhanced by linear/non-linear stretching, histogram equalization and spatial filtering, which are all operations in a single band (univariate). In a multivariate space, linear transformations such as principal component analysis are very popular for this purpose. A simple rotation of axes, as in the principal component transformation, to the maximum variance direction is often insufficient to enhance objects camouflaged by the background. This is often due to the enhancement of the background together with the features of interest or non-background. In this paper a new technique is presented to address this problem. The images are modelled as having two classes, namely background and non-background. The technique, called background discriminant transformation (BDT)...

8 citations


Proceedings ArticleDOI
08 Dec 1991
TL;DR: A computer algorithm utilizing image enhancement methods to simplify the extraction of three-dimensional (3-D) volumes from ultrasound images has been developed; binary masks defining the object borders were created from the enhanced images and used to extract the object from the original data set so that the original backscatter texture was preserved.
Abstract: A computer algorithm utilizing image enhancement methods to simplify the extraction of three-dimensional (3-D) volumes from ultrasound images has been developed. The contrast of the image was increased by using histogram sliding and stretching and adaptive histogram equalization to facilitate interactive thresholding. Artifacts were removed while anatomic details were preserved through the use of mathematical morphologic filtering. Median filtering helped to remove remaining noise and smooth edges. Binary masks defining the object borders were created from the enhanced images and used to extract the object from the original data set so that the original backscatter texture was preserved. The method is demonstrated with images acquired in vitro and in vivo.

Journal Article
TL;DR: GelReader 1.0 is a microcomputer program designed to make precision, digital analysis of one-dimensional electrophoretic gels accessible to the molecular biology laboratory of modest means and strikes a balance between program autonomy and user intervention, in recognition of the variability in electrophoretic gel quality and users' analytical needs.
Abstract: We present GelReader 1.0, a microcomputer program designed to make precision, digital analysis of one-dimensional electrophoretic gels accessible to the molecular biology laboratory of modest means. Images of electrophoretic gels are digitized via a desktop flatbed scanner from instant photographs, autoradiograms or chromogenically stained blotting media. GelReader is then invoked to locate lanes and bands and generate a report of molecular weights of unknowns, based on specified sets of standards. Frequently used standards can be stored in the program. Lanes and bands can be added or removed, based upon users' subjective preferences. A unique lane histogram feature facilitates precise manual addition of bands missed by the software. Image enhancement features include palette manipulation, histogram equalization, shadowing and magnification. The user interface strikes a balance between program autonomy and user intervention, in recognition of the variability in electrophoretic gel quality and users' analytical needs.

Proceedings ArticleDOI
01 Nov 1991
TL;DR: By working with a family of contrast enhanced images the difficult task of selecting the single level of contrast enhancement appropriate for a particular image is avoided, increasing the usefulness of low contrast images.
Abstract: This paper describes how a family of images of varying levels of contrast can be efficiently calculated and displayed. We show that a contrast space defined in terms of histogram equalization can be specified in terms of two parameters: region size and histogram blurring level. Based on these observations, we describe one exact algorithm and two efficient algorithms for computing sequences of images within this contrast space. These precomputed images can be displayed and interactively explored to examine image features of interest. By working with a family of contrast enhanced images the difficult task of selecting the single level of contrast enhancement appropriate for a particular image is avoided, increasing the usefulness of low contrast images. © (1991) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.

Patent
Florent Raoul1
18 Dec 1991
TL;DR: In this article, local binary segmentation of a digital image I0 by histogram thresholding, characterised in that it comprises steps for constructing a cumulative histogram curve of the number Y of objects of the image I 0 by grey level.
Abstract: Local binary segmentation of a digital image I0 by histogram thresholding, characterised in that it comprises steps for constructing a cumulative histogram curve of the number Y of objects of the image I0 by grey level, this histogram being defined as the function which, at each grey level X, associates the number Y of objects contained in the binary image obtained by thresholding the image I0 at the said grey level X, the objects being counted as a function of a reference distance, and steps for determining the segmentation threshold automatically from this histogram. Application: detection of road traffic.

Proceedings ArticleDOI
01 Jun 1991
TL;DR: An experiment to compare the relevance of various image enhancement methods for improving the visibility of pathologies on digitized chest radiographs and recommends two or three acceptable transformations for each pathology.
Abstract: In this paper we report on an experiment to compare the relevance of various image enhancement methods for improving the visibility of pathologies on digitized chest radiographs. The five pathologies tested in our trial are pulmonary nodules, air bronchograms, paratracheal abnormalities, pneumothoraces, and interstitial lung diseases. The first three are examples of situations where focus is put on the shape, borders and content of the pathology, the next is an example of situations where the visualization of a subtle line is required, and the last one is an example of diffuse disease where the perceivability of details is important. Eight image enhancements were tested, including both pixel-based gray-level transformations such as windowing, statistical differencing, polynomial transform, histogram equalization and histogram hyperbolization, and spatial enhancements such as unsharp masking with different masks and a Sobel detector. For each pathology we recommend two or three acceptable transformations.


Journal Article
TL;DR: An interactive software package that performs general-purpose image processing operations and serves as a tool for the anatomical correlation of PET brain images is under development.
Abstract: An interactive software package, performing some useful general-purpose image processing operations and being used as a tool for the problem of the anatomical correlation of PET brain images, is under development. The software is developed as a comprehensive tool with a graphic user interface allowing the display of the processed images through the use of a variety of colormaps. The implemented routines perform a range of processing operations on the images: a) local image processing, i.e. smoothing and sharpening, contour extraction, interactive expansion, shrinking and thresholding of the gray scale, and histogram equalization; b) ROI handling, i.e. ROI drawing and computing, transformation of an image to a ROI, and ROI editing; c) additional operators, including frequency-space image processing such as the FFT.

01 Apr 1991
TL;DR: This work presents work in which histogram equalization is performed on the codebook of a tree-structured vector quantizer, and encoding with the resulting codebook performs both compression and contrast enhancement.
Abstract: Combining Vector Quantization and Histogram Equalization. Pamela C. Cosman (Information Systems Laboratory, Stanford University), Eve A. Riskin (Dept. of Electrical Engineering, University of Washington), Robert M. Gray (Information Systems Laboratory, Stanford University). Introduction: Histogram equalization is a contrast enhancement technique in which each pixel is remapped to an intensity proportional to its rank among surrounding pixels. Histogram equalization is a competitor of interactive intensity windowing, which is the established contrast enhancement technique for medical images. We present work in which histogram equalization is performed on the codebook of a tree-structured vector quantizer. Encoding with the resulting codebook performs both compression and contrast enhancement. It is also possible to perform intensity windowing on the codebook, or a combination of intensity windowing and histogram equalization, so that these need not be done as separate post-processing steps. Histogram Equalization: Histogram equalization refers to a set of contrast enhancement techniques which attempt to spread out the intensity levels occurring in an image over the full available range [2]. In global histogram equalization, one calculates the intensity histogram for the entire image and then remaps each pixel's intensity proportional to its rank among all the pixel intensities. In adaptive histogram equalization, the histogram is calculated only for pixels in a context region, usually a square, and the remapping is done for the center pixel in the square. This can be called pointwise histogram equalization because, for each point in the image, the histogram for the
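The global histogram equalization described in this abstract, remapping each pixel's intensity in proportion to its rank among all pixel intensities, is conventionally computed from the cumulative histogram. The sketch below is our own illustration of that standard formulation (applied to pixels; the paper's contribution is to apply the same remapping to VQ codebook entries instead, which is not shown here):

```python
import numpy as np

def global_hist_eq(img, out_levels=256):
    """Global histogram equalization via the cumulative histogram:
    the CDF value of an intensity is (a normalization of) its rank
    among all pixel intensities, so indexing the normalized CDF
    remaps each pixel in proportion to its rank."""
    hist = np.bincount(img.ravel())
    cdf = np.cumsum(hist)
    # normalize ranks to the display range
    lut = ((cdf - cdf.min()) * (out_levels - 1)) // max(img.size - cdf.min(), 1)
    return lut[img]

img = np.array([[0, 0, 0, 0], [1, 1, 2, 3]])
print(global_hist_eq(img))
```

In the paper's setting the same lookup would be applied once to each codeword of the tree-structured codebook, so that decoding automatically yields the enhanced image.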

Proceedings ArticleDOI
01 Aug 1991
TL;DR: A suite of algorithms which match the neural response of the eye with the image-processing display by translating the raw 12-bit images into an enhanced 8-bit format for display are presented.
Abstract: With the advent of low-cost mid-range IR mosaic PtSi focal planes, there is an increasing need for accommodation of the human vision system in image exploitation workstation displays. This paper presents a suite of algorithms which match the neural response of the eye with the image-processing display by translating the raw 12-bit images into an enhanced 8-bit format for display. The tool box of translation algorithms includes histogram equalization, histogram projection, plateau projection, plateau equalization, modular projection, overlapping and non-overlapping zonal projection, sub-sampling projection, pseudocolor, and half gray scale/half pseudocolor. The operator/photointerpreter is presented a menu from which he may select an automatic mode which uses image statistics to enhance the image, or a manual mode optimized by the operator to his preference. The choice of the appropriate algorithm and operating mode compensates for the wide variance in IR gray scale and background clutter due to time of day, season, and atmospheric conditions. The tool box also includes standard image processing algorithms such as roam, zoom, sharpening, filtering, and convolution to manipulate and further enhance the translated images.

Proceedings ArticleDOI
01 Jul 1991
TL;DR: The study shows that features extracted using the spatial gray-level dependence method were the most discriminant, indicating that it is possible to achieve a semi-automatic analysis of autoradiographic images.
Abstract: The ability of four methods to perform automatic texture discrimination of three cellular organelles (nucleus, mitochondria and lipid droplets) from autoradiographic images is investigated. The four methods studied are the first-order statistics of the gray-level histogram, the gray-level difference method, the gray-level run length method, and the spatial gray-level dependence method. The influence of parameters like the number of features, the number of gray-level classes, the orientation and step size of the analysis, and the effect of preprocessing the images by histogram equalization and image reduction were also analyzed to optimize the performance of the methods. The nearest neighbor pattern recognition algorithm using the Mahalanobis distance was used to evaluate the performance of the methods. First, a training set of 30 samples per organelle was chosen to train the classifier and to select the best discriminant features. The probability of error was estimated with the leave-one-out method and the results are expressed in percentage of correct classifications. The study shows that features extracted using the spatial gray-level dependence method were the most discriminant. The best feature set was then applied to a test population of 734 cellular organelles to differentiate the three classes. Correct classifications occurred for 95% of cases, which indicates that it is possible to achieve a semi-automatic analysis of autoradiographic images.