Contrast limited adaptive histogram equalization
About: This article was published in Graphics Gems on 1994-08-01 and has received 2,671 citations to date. It focuses on the topics of adaptive histogram equalization and histogram matching.
01 Aug 2004
TL;DR: A system-level realization of CLAHE is proposed which is suitable for VLSI or FPGA implementation; the goal of this realization is to minimize latency without sacrificing precision.
Abstract: Acquired real-time image sequences, in their original form, may not have good viewing quality due to lack of proper lighting or inherent noise. For example, in X-ray imaging, when continuous exposure is used to obtain an image sequence or video, low-level exposure is usually administered until the region of interest is identified. In this case, and in many other similar situations, it is desirable to improve the image quality in real time. One method of particular interest, which is used extensively for the enhancement of still images, is Contrast Limited Adaptive Histogram Equalization (CLAHE), proposed and summarized in prior work. This approach is computationally intensive and is usually used for off-line image enhancement. Because of its performance, a hardware implementation of this algorithm for the enhancement of real-time image sequences is sought. In this paper, a system-level realization of CLAHE is proposed which is suitable for VLSI or FPGA implementation. The goal of this realization is to minimize latency without sacrificing precision.
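The contrast-limiting step at the heart of CLAHE can be sketched on a single contextual region. The following Python/NumPy function is an illustrative sketch, not the hardware realization the paper proposes; the function name, clip limit, and bin count are arbitrary choices. It clips the tile histogram, redistributes the excess uniformly, and equalizes through the resulting cumulative distribution:

```python
import numpy as np

def clahe_tile(tile, clip_limit=40, n_bins=256):
    """Equalize one contextual region with a clipped histogram.

    Clipping the histogram before building the CDF bounds the slope of
    the intensity mapping, which limits noise amplification in
    relatively homogeneous regions.
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    # Clip each bin and redistribute the excess uniformly over all bins.
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    # Cumulative distribution -> lookup table scaled to [0, n_bins - 1].
    cdf = np.cumsum(hist).astype(np.float64)
    lut = np.round((cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (n_bins - 1))
    return lut.astype(np.uint8)[tile]
```

A full CLAHE would apply this per tile and blend the per-tile mappings; a real-time realization must additionally pipeline the histogram, clip, and mapping stages, which is the subject of the paper.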
TL;DR: This paper constructs an Underwater Image Enhancement Benchmark (UIEB) including 950 real-world underwater images, 890 of which have the corresponding reference images and proposes an underwater image enhancement network (called Water-Net) trained on this benchmark as a baseline, which indicates the generalization of the proposed UIEB for training Convolutional Neural Networks (CNNs).
Abstract: Underwater image enhancement has been attracting much attention due to its significance in marine engineering and aquatic robotics. Numerous underwater image enhancement algorithms have been proposed in the last few years. However, these algorithms are mainly evaluated using either synthetic datasets or a few selected real-world images. It is thus unclear how these algorithms would perform on images acquired in the wild, and how we could gauge progress in the field. To bridge this gap, we present the first comprehensive perceptual study and analysis of underwater image enhancement using large-scale real-world images. In this paper, we construct an Underwater Image Enhancement Benchmark (UIEB) including 950 real-world underwater images, 890 of which have corresponding reference images. We treat the remaining 60 underwater images, for which satisfactory reference images could not be obtained, as challenging data. Using this dataset, we conduct a comprehensive study of state-of-the-art underwater image enhancement algorithms, both qualitatively and quantitatively. In addition, we propose an underwater image enhancement network (called Water-Net) trained on this benchmark as a baseline, which indicates the generalization ability of the proposed UIEB for training Convolutional Neural Networks (CNNs). The benchmark evaluations and the proposed Water-Net demonstrate the performance and limitations of state-of-the-art algorithms, shedding light on future research in underwater image enhancement. The dataset and code are available at https://li-chongyi.github.io/proj_benchmark.html.
TL;DR: ChromEMT enables the ultrastructure of individual chromatin chains, heterochromatin domains, and mitotic chromosomes to be resolved in serial slices and their 3D organization to be visualized as a continuum through large nuclear volumes in situ.
Abstract: INTRODUCTION In human cells, 2 m of DNA are compacted in the nucleus through assembly with histones and other proteins into chromatin structures, megabase three-dimensional (3D) domains, and chromosomes that determine the activity and inheritance of our genomes. The long-standing textbook model is that primary 11-nm DNA–core nucleosome polymers assemble into 30-nm fibers that further fold into 120-nm chromonema, 300- to 700-nm chromatids, and, ultimately, mitotic chromosomes. Further extrapolating from this model, silent heterochromatin is generally depicted as 30- and 120-nm fibers. The hierarchical folding model is based on the in vitro structures formed by purified DNA and nucleosomes and on chromatin fibers observed in permeabilized cells from which other components had been extracted. Unfortunately, there has been no method that enables DNA and chromatin ultrastructure to be visualized and reconstructed unambiguously through large 3D volumes of intact cells. Thus, a remaining question is, what are the local and global 3D chromatin structures in the nucleus that determine the compaction and function of the human genome in interphase cells and mitotic chromosomes? RATIONALE To visualize and reconstruct chromatin ultrastructure and 3D organization across multiple scales in the nucleus, we developed ChromEMT, which combines electron microscopy tomography (EMT) with a labeling method (ChromEM) that selectively enhances the contrast of DNA. This technique exploits a fluorescent dye that binds to DNA and, upon excitation, catalyzes the deposition of diaminobenzidine polymers on the surface, enabling chromatin to be visualized with OsO4 in EM. Advances in multitilt EMT allow us to reveal the chromatin ultrastructure and 3D packing of DNA in both human interphase cells and mitotic chromosomes.
RESULTS ChromEMT enables the ultrastructure of individual chromatin chains, heterochromatin domains, and mitotic chromosomes to be resolved in serial slices and their 3D organization to be visualized as a continuum through large nuclear volumes in situ. ChromEMT stains and detects 30-nm fibers in nuclei purified from hypotonically lysed chicken erythrocytes and treated with MgCl2. However, we do not observe higher-order fibers in human interphase and mitotic cells in situ. Instead, we show that DNA and nucleosomes assemble into disordered chains that have diameters between 5 and 24 nm, with different particle arrangements, densities, and structural conformations. Chromatin has a more extended curvilinear structure in interphase nuclei and collapses into compact loops and interacting arrays in mitotic chromosome scaffolds. To analyze chromatin packing, we create 3D grid maps of chromatin volume concentrations (CVCs) in situ. We find that interphase nuclei have subvolumes with CVCs ranging from 12 to 52% and distinct spatial distribution patterns, whereas mitotic chromosome subvolumes have CVCs >40%. CONCLUSION We conclude that chromatin is a flexible and disordered 5- to 24-nm-diameter granular chain that is packed together at different concentration densities in interphase nuclei and mitotic chromosomes. The overall primary structure of chromatin polymers does not change in mitotic chromosomes, which helps to explain the rapid dynamics of chromatin condensation and how epigenetic interactions and structures can be inherited through cell division. In contrast to rigid fibers that have longer fixed persistence lengths, disordered 5- to 24-nm-diameter chromatin chains are flexible and can bend at various lengths to achieve different levels of compaction and high packing densities.
The diversity of chromatin structures is exciting and provides a structural basis for how different combinations of DNA sequences, interactions, linker lengths, histone variants, and modifications can be integrated to fine-tune the function of genomic DNA in the nucleus to specify cell fate. Our data also suggest that the assembly of 3D domains in the nucleus with different chromatin concentrations, rather than higher-order folding, determines the global accessibility and activity of DNA.
TL;DR: Examination of transcriptional bursting in living Drosophila embryos shows that linked reporter genes exhibit coordinated bursting profiles when regulated by a shared enhancer, challenging conventional models of enhancer-promoter looping.
Abstract: Transcription is episodic, consisting of a series of discontinuous bursts. Using live-imaging methods and quantitative analysis, we examine transcriptional bursting in living Drosophila embryos. Different developmental enhancers positioned downstream of synthetic reporter genes produce transcriptional bursts with similar amplitudes and duration but generate very different bursting frequencies, with strong enhancers producing more bursts than weak enhancers. Insertion of an insulator reduces the number of bursts and the corresponding level of gene expression, suggesting that enhancer regulation of bursting frequency is a key parameter of gene control in development. We also show that linked reporter genes exhibit coordinated bursting profiles when regulated by a shared enhancer, challenging conventional models of enhancer-promoter looping.
15 Jun 2019
TL;DR: A new neural network for enhancing underexposed photos is presented, which introduces intermediate illumination to associate the input with the expected enhancement result, augmenting the network's capability to learn complex photographic adjustments from expert-retouched input/output image pairs.
Abstract: This paper presents a new neural network for enhancing underexposed photos. Instead of directly learning an image-to-image mapping as in previous work, we introduce intermediate illumination in our network to associate the input with the expected enhancement result, which augments the network's capability to learn complex photographic adjustments from expert-retouched input/output image pairs. Based on this model, we formulate a loss function that adopts constraints and priors on the illumination, prepare a new dataset of 3,000 underexposed image pairs, and train the network to effectively learn a rich variety of adjustments for diverse lighting conditions. By these means, our network is able to recover clear details, distinct contrast, and natural color in the enhancement results. We perform extensive experiments on the benchmark MIT-Adobe FiveK dataset and our new dataset, and show that our network is effective in dealing with previously challenging images.
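The idea of enhancing through an intermediate illumination map can be illustrated without the learned network. The sketch below is a handcrafted retinex-style stand-in, not the paper's model: it estimates illumination with a simple local mean and divides it out through a gamma curve. The function name, kernel size, `eps`, and `gamma` are all illustrative assumptions.

```python
import numpy as np

def enhance_via_illumination(image, ksize=15, eps=1e-3, gamma=0.6):
    """Retinex-style sketch of illumination-based enhancement.

    The paper's network learns the illumination map; here we merely
    approximate it with a local mean (box filter) and brighten dark
    regions by dividing by an attenuated illumination: out = I / L**gamma.
    """
    img = image.astype(np.float64) / 255.0
    pad = ksize // 2
    padded = np.pad(img, pad, mode='edge')
    # Naive box-filter estimate of the illumination map L.
    L = np.zeros_like(img)
    for dy in range(ksize):
        for dx in range(ksize):
            L += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    L /= ksize * ksize
    # gamma < 1 softens the division so bright areas are not blown out.
    out = img / np.maximum(L ** gamma, eps)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```

The learned version replaces the box filter with a network whose output is regularized by the illumination priors mentioned in the abstract.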
01 Sep 1987 - Computer Vision, Graphics, and Image Processing
TL;DR: It is concluded that clipped ahe should become a method of choice in medical imaging, and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.
Abstract: Adaptive histogram equalization (ahe) is a contrast enhancement method designed to be broadly applicable and having demonstrated effectiveness. However, slow speed and the overenhancement of noise it produces in relatively homogeneous regions are two problems. We report algorithms designed to overcome these and other concerns. These algorithms include interpolated ahe, to speed up the method on general purpose computers; a version of interpolated ahe designed to run in a few seconds on feedback processors; a version of full ahe designed to run in under one second on custom VLSI hardware; weighted ahe, designed to improve the quality of the result by emphasizing pixels' contribution to the histogram in relation to their nearness to the result pixel; and clipped ahe, designed to overcome the problem of overenhancement of noise contrast. We conclude that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.
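The interpolated ahe variant described above can be sketched directly: compute one equalization mapping per tile, then map each pixel by bilinearly blending the mappings of its four nearest tile centers, so no per-pixel histogram is ever built. This is an illustrative NumPy sketch (clipping omitted for brevity; the function name and tile count are arbitrary), not the feedback-processor or VLSI versions the paper reports.

```python
import numpy as np

def interpolated_ahe(img, tiles=4, n_bins=256):
    """Sketch of interpolated adaptive histogram equalization."""
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    # One lookup table per tile, from the tile's cumulative histogram.
    luts = np.zeros((tiles, tiles, n_bins))
    for ty in range(tiles):
        for tx in range(tiles):
            tile = img[ty*th:(ty+1)*th, tx*tw:(tx+1)*tw]
            hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
            cdf = np.cumsum(hist)
            luts[ty, tx] = cdf / max(cdf[-1], 1) * (n_bins - 1)
    # Each pixel's fractional position between neighboring tile centres.
    ys = np.clip((np.arange(h) - th / 2) / th, 0, tiles - 1 - 1e-9)
    xs = np.clip((np.arange(w) - tw / 2) / tw, 0, tiles - 1 - 1e-9)
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = ys - y0, xs - x0
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            a, b, v = y0[i], x0[j], img[i, j]
            top = (1 - fx[j]) * luts[a, b, v] + fx[j] * luts[a, b + 1, v]
            bot = (1 - fx[j]) * luts[a + 1, b, v] + fx[j] * luts[a + 1, b + 1, v]
            out[i, j] = (1 - fy[i]) * top + fy[i] * bot
    return out.astype(np.uint8)
```

The blend costs four table lookups and three interpolations per pixel, which is what makes the method fast enough for routine display on general-purpose hardware.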
TL;DR: By working with a family of contrast-enhanced images, the difficult task of selecting the single level of contrast enhancement appropriate for a particular image is avoided, increasing the usefulness of low-contrast images.
Abstract: This paper describes how a family of images of varying levels of contrast can be efficiently calculated and displayed. We show that a contrast space defined in terms of histogram equalization can be specified in terms of two parameters: region size and histogram blurring level. Based on these observations, we describe one exact algorithm and two efficient algorithms for computing sequences of images within this contrast space. These precomputed images can be displayed and interactively explored to examine image features of interest. By working with a family of contrast enhanced images the difficult task of selecting the single level of contrast enhancement appropriate for a particular image is avoided, increasing the usefulness of low contrast images.
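The histogram-blurring axis of this contrast space can be sketched globally: smoothing the histogram with wider Gaussians flattens it, and a flatter histogram yields a gentler, closer-to-identity equalization mapping. The code below is an illustrative assumption of this mechanism, not the paper's exact algorithms; applying it per region at several region sizes would supply the second parameter.

```python
import numpy as np

def equalize_with_blur(img, sigma, n_bins=256):
    """Global histogram equalization with a Gaussian-blurred histogram.

    sigma = 0 gives ordinary equalization; larger sigma spreads the
    histogram mass, pulling the mapping toward the identity and thus
    lowering the contrast-enhancement level.
    """
    hist, _ = np.histogram(img, bins=n_bins, range=(0, n_bins))
    hist = hist.astype(np.float64)
    if sigma > 0:
        # Truncated Gaussian kernel, kept shorter than the histogram.
        radius = min(int(3 * sigma), (n_bins - 1) // 2)
        x = np.arange(-radius, radius + 1)
        kernel = np.exp(-x**2 / (2 * sigma**2))
        kernel /= kernel.sum()
        hist = np.convolve(hist, kernel, mode='same')
    cdf = np.cumsum(hist)
    lut = np.round(cdf / cdf[-1] * (n_bins - 1)).astype(np.uint8)
    return lut[img]

# A family of enhancement levels from one image (blur level as the dial):
# family = [equalize_with_blur(img, s) for s in (0, 4, 16, 64)]
```

Precomputing such a family lets a viewer scroll interactively through contrast levels, which is the mode of exploration the paper describes.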
TL;DR: A new method, unsharp masking followed by contrast limited adaptive histogram equalization, appears to overcome the edge degradation of earlier portal film enhancement while retaining low contrast detail, allowing portal films to be read more accurately.
Abstract: We report on the results of a 3-year project which had as its goal the development of methods to enhance radiation portal films to improve their readability. We had previously reported on a portal film enhancement technique, contrast limited adaptive histogram equalization, which could enhance low contrast detail but degraded sharply contrasted edges. A new method, unsharp masking followed by contrast limited adaptive histogram equalization, now appears to overcome this problem. A clinical trial to test whether enhanced portal films could be read more accurately than standard ones was undertaken. The trial involved 12 readers from two institutions doing 276 readings. In this trial the enhanced films were judged to be of higher quality than the non-enhanced films (p < .001) and were read more accurately (p = .026). The usefulness and difficulties of routinely performing portal film enhancement in a busy radiation therapy department are discussed.
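The unsharp-masking stage of the two-step pipeline is straightforward to sketch: subtract a blurred copy from the image and add the difference back, which amplifies edges before CLAHE redistributes local contrast. This is an illustrative sketch with a box blur standing in for the smoothing filter; the function name, `ksize`, and `amount` are assumptions, not the paper's settings.

```python
import numpy as np

def unsharp_mask(img, ksize=5, amount=1.0):
    """Unsharp masking: out = img + amount * (img - blur(img))."""
    f = img.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(f, pad, mode='edge')
    # Box blur as a simple stand-in for the smoothing filter.
    blur = np.zeros_like(f)
    for dy in range(ksize):
        for dx in range(ksize):
            blur += padded[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    blur /= ksize * ksize
    sharp = f + amount * (f - blur)
    return np.clip(sharp, 0, 255).astype(np.uint8)
```

Running CLAHE on `unsharp_mask(img)` rather than on `img` is the paper's fix: the pre-sharpened edges survive the histogram-based contrast redistribution.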
07 Jul 1991
TL;DR: Two new methods based on the use of local structural information, in particular edge strengths, in defining contextual regions are presented and discussed, namely edge-affected unsharp masking followed by contrast-limited adaptive histogram equalization (AHE), and diffusive histogram equalization, a variant of AHE in which weighted contextual regions are calculated by edge-affected diffusion.
Abstract: Contrast enhancement is a fundamental step in the display of digital images. The end result of display is the perceived brightness occurring in the human observer; design of effective contrast enhancement mappings therefore requires understanding of human brightness perception. Recent advances in this area have emphasized the importance of image structure in determining our perception of brightnesses, and consequently contrast enhancement methods which attempt to use structural information are being widely investigated. In this paper we present two promising methods we feel are strong competitors to presently-used techniques. We begin with a survey of contrast enhancement techniques for use with medical images. Classical adaptive algorithms use one or more statistics of the intensity distribution of local image areas to compute the displayed pixel values. More recently, techniques which attempt to take direct account of local structural information have been developed. The use of this structural information, in particular edge strengths, in defining contextual regions seems especially important. Two new methods based on this idea are presented and discussed, namely edge-affected unsharp masking followed by contrast-limited adaptive histogram equalization (AHE), and diffusive histogram equalization, a variant of AHE in which weighted contextual regions are calculated by edge-affected diffusion. Results on typical medical images are given.