Journal ArticleDOI

Digital Color Imaging

TL;DR: A survey of digital color imaging: fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology, and present-day color recording and reproduction systems are reviewed along with the common mathematical models used to represent them.
Abstract: This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
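The vector-space formulation the survey opens with is easy to make concrete: a color stimulus is a sampled spectrum, and a trichromatic sensor responds with three inner products. In this sketch the wavelength grid, matching functions, and test spectrum are synthetic placeholders, not CIE data:

```python
import numpy as np

# Vector-space notation: a stimulus is a sampled spectrum f (N wavelengths);
# a trichromatic sensor with sensitivity matrix S (N x 3) records t = S^T f.
N = 31
wl = np.linspace(400.0, 700.0, N)            # nm, illustrative grid

def gauss(mu, sigma):
    """Synthetic Gaussian 'matching function' (placeholder, not CIE data)."""
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

S = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)], axis=1)  # N x 3
f = gauss(520, 60)                           # a smooth test spectrum
t = S.T @ f                                  # tristimulus 3-vector

# Metamerism falls out of the linear algebra: any spectrum f + d with d in
# the null space of S^T produces the same sensor response as f.
d = np.linalg.svd(S.T, full_matrices=True)[2][3]   # one null-space direction
```

Two physically different spectra that differ by such a null-space direction are metamers for this sensor.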
Citations
Proceedings ArticleDOI
24 Nov 2003
TL;DR: The authors present the color segmentation approach which is based on a nonparametric pyramid of watersheds, with a comparative study of different color gradients.
Abstract: The paper deals with the use of the various pieces of color information for segmenting color images and sequences with mathematical morphology operators. It is divided into four parts. The first concerns the choice of a color space suitable for morphological processing. The choice of a connection which induces a specific segmentation is then discussed. The authors then present the color segmentation approach, which is based on a nonparametric pyramid of watersheds, with a comparative study of different color gradients. Another multiscale color segmentation algorithm is then introduced, relying on the merging of chromatic-achromatic partitions ordered by the saturation component.
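One common way to obtain a scalar input for a color watershed, sketched here in plain NumPy, is a per-channel morphological gradient (3x3 dilation minus erosion) followed by a maximum over channels — a "marginal" color gradient. This is an illustrative choice; the paper compares several color gradients:

```python
import numpy as np

def morph_gradient(channel):
    """3x3 morphological gradient (dilation - erosion) of a 2D array,
    computed with edge padding and plain array shifts."""
    h, w = channel.shape
    p = np.pad(channel, 1, mode="edge")
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return stack.max(axis=0) - stack.min(axis=0)

def color_gradient(img):
    """Marginal color gradient: pointwise max of the per-channel gradients."""
    return np.max([morph_gradient(img[..., c]) for c in range(img.shape[-1])],
                  axis=0)

# Two flat color regions -> the gradient is nonzero only along the boundary,
# which is exactly where a watershed would place the contour.
img = np.zeros((8, 8, 3))
img[:, 4:, 0] = 1.0
g = color_gradient(img)
```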

70 citations

Journal ArticleDOI
07 Aug 2002
TL;DR: This paper reviews the sensor correlation algorithm for illuminant classification, discusses four changes that improve the algorithm's estimation accuracy and broaden its applicability, and develops three-dimensional classification algorithms using all three color channels.
Abstract: This paper describes practical algorithms and experimental results concerning illuminant classification. Specifically, we review the sensor correlation algorithm for illuminant classification and we discuss four changes that improve the algorithm's estimation accuracy and broaden its applicability. First, we space the classification illuminants evenly along the reciprocal scale of color temperature, called "mired," rather than the original color-temperature scale. This improves the perceptual uniformity of the illuminant classification set. Second, we calculate correlation values between the image color gamut and the reference illuminant gamut, rather than between the image pixels and the illuminant gamuts. This change makes the algorithm more reliable. Third, we introduce a new image scaling operation to adjust for overall intensity differences between images. Fourth, we develop three-dimensional classification algorithms using all three color channels and compare them with the original two algorithms from the viewpoint of accuracy and computational efficiency. The image processing algorithms incorporating these changes are evaluated using a real image database with calibrated scene illuminants.
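The mired scale mentioned in the first change is simply the reciprocal of correlated color temperature, mired = 10^6 / CCT; spacing classification illuminants evenly in mired rather than in kelvin is what yields the more perceptually uniform set. A small sketch (the endpoint CCTs and set size here are illustrative, not the paper's actual classification set):

```python
def to_mired(cct_kelvin):
    """Micro-reciprocal degrees: mired = 1e6 / CCT."""
    return 1e6 / cct_kelvin

def mired_spaced_illuminants(cct_lo, cct_hi, n):
    """n illuminant CCTs evenly spaced on the mired scale between the two
    endpoint color temperatures (note: low CCT means high mired)."""
    m_lo, m_hi = to_mired(cct_hi), to_mired(cct_lo)
    mireds = [m_lo + k * (m_hi - m_lo) / (n - 1) for k in range(n)]
    return [1e6 / m for m in mireds]

# Even mired steps of 100 between 10000 K and 2500 K give CCTs that are
# densely packed at the warm end, matching perceptual sensitivity.
ccts = mired_spaced_illuminants(2500.0, 10000.0, 4)
```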

69 citations

Journal ArticleDOI
TL;DR: In this paper, a closed-loop nonlinear scheme for precisely controlling the luminosity and correlated color temperature (CCT) of a bicolor adjustable light-emitting diode (LED) lamp was proposed.
Abstract: This paper proposes a closed-loop nonlinear scheme for precisely controlling the luminosity and correlated color temperature (CCT) of a bicolor adjustable light-emitting diode (LED) lamp. The main objective is to achieve a precise and fully independent dimming and CCT control of the light mixture emitted from a two-string LED lamp comprising warm-white and cool-white color LEDs, regardless of the operating conditions and throughout the long operating lifetime of the LED lamp. The proposed control method is formulated using the nonlinear empirical LED model of the bicolor white LED system. Experimental results show that with the proposed closed-loop nonlinear approach, both CCT and dimming control of the bicolor lamp are significantly more accurate and robust to ambient temperature variations, ambient light interference, and LED aging than the conventional linear approach used in existing products. The maximum error in luminous flux employing the proposed closed-loop nonlinear approach is 3%, compared with 20% using the closed-loop linear approach. The maximum deviation in CCT is only 1.78%, compared with 27.5% with its linear counterpart.
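The kind of linear-mixing baseline the paper improves on can be sketched as follows. The string CCTs, the additive-flux assumption, and linear interpolation on the mired scale are illustrative choices for the sketch, not the paper's empirical model:

```python
# Two-string (warm-white + cool-white) lamp, linear open-loop baseline.
# Assumed string CCTs; real lamps and the paper's setup may differ.
WARM_CCT, COOL_CCT = 3000.0, 6500.0   # kelvin

def mix_ratio(target_cct):
    """Fraction of total flux from the cool string so that the mixture's
    CCT, interpolated linearly on the mired scale, hits target_cct."""
    m_w, m_c, m_t = 1e6 / WARM_CCT, 1e6 / COOL_CCT, 1e6 / target_cct
    alpha = (m_w - m_t) / (m_w - m_c)
    return min(max(alpha, 0.0), 1.0)      # clamp to the achievable range

def string_fluxes(target_cct, total_flux):
    """Split total_flux (lumens) between warm and cool strings; dimming
    (total_flux) and CCT (the split) are controlled independently here."""
    a = mix_ratio(target_cct)
    return (1 - a) * total_flux, a * total_flux

warm, cool = string_fluxes(4500.0, 1000.0)
```

In practice this open-loop split drifts with temperature and LED aging, which is exactly the error source the paper's closed-loop nonlinear scheme corrects.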

68 citations

Journal ArticleDOI
TL;DR: The microchannel immunoassays reliably detected H. pylori and E. coli O157:H7 antigens with biotinylated polyclonal antibodies in quantities on the order of 10 ng, which provides a sensitivity of detection comparable to those of conventional dot blot assays.
Abstract: The light-scattering properties of submicroscopic metal particles ranging from 40 to 120 nm in diameter have recently been investigated. These particles scatter incident white light to generate monochromatic light, which can be seen either by the naked eye or by dark-field microscopy. The nanoparticles are well suited for detection in microchannel-based immunoassays. The goal of the present study was to detect Helicobacter pylori- and Escherichia coli O157:H7-specific antigens with biotinylated polyclonal antibodies. Gold particles (diameter, 80 nm) functionalized with a secondary antibiotin antibody were then used as the readout. A dark-field stereomicroscope was used for particle visualization in poly(dimethylsiloxane) microchannels. A colorimetric quantification scheme was developed for the detection of the visual color changes resulting from immune reactions in the microchannels. The microchannel immunoassays reliably detected H. pylori and E. coli O157:H7 antigens in quantities on the order of 10 ng, which provides a sensitivity of detection comparable to those of conventional dot blot assays. In addition, the nanoparticles within the microchannels can be stored for at least 8 months without a loss of signal intensity. This strategy provides a means for the detection of nanoparticles in microchannels without the use of sophisticated equipment. In addition, the approach has the potential for use for further miniaturization of immunoassays and can be used for long-term archiving of immunoassays.

66 citations

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed demosaicking framework exhibits excellent performance in terms of the commonly used objective criteria and at the same time it produces demosaicked images with impressive visual quality.
Abstract: A new demosaicking framework for single-sensor imaging devices operating on a Bayer color filter array (CFA) is introduced and analyzed. An efficient data-adaptive filtering concept in conjunction with refined spectral models constitutes the base for the proposed framework. Using different forms of the function mapping the aggregated absolute differences among the CFA inputs to the edge-sensing weighting coefficients, the framework allows the design of fully automated demosaicking solutions suitable for common digital imaging apparatus; alternatively, the proposed solutions can be used to support PC-based demosaicking of the raw CFA images. Thus, the framework can be seen as a universal tool satisfying the needs of end users for i) the instant access and visualization of the captured images, and ii) the interactive processing of the raw sensor data. Moreover, the proposed framework is relatively easy to implement in either software or hardware. Experimental results indicate that the proposed framework exhibits excellent performance in terms of the commonly used objective criteria, and at the same time it produces demosaicked images with impressive visual quality.
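The edge-sensing idea at the core of such frameworks can be illustrated with a toy weighting function: neighbors separated from the interpolation site by a large absolute difference get a small weight, so interpolation follows edges rather than crossing them. The mapping used here, w = 1/(1 + |difference|), is a schematic stand-in; the paper's actual weighting function and spectral-model correction are more refined:

```python
import numpy as np

def green_at_center(cfa, r, c):
    """Edge-sensing estimate of the missing green value at a red/blue Bayer
    site (r, c), from the four green neighbors. Illustrative only."""
    up, down = cfa[r - 1, c], cfa[r + 1, c]
    left, right = cfa[r, c - 1], cfa[r, c + 1]
    neighbors = np.array([up, down, left, right])
    # Aggregated absolute differences along each direction -> weights:
    # a strong horizontal difference down-weights the horizontal pair.
    diffs = np.array([abs(up - down)] * 2 + [abs(left - right)] * 2)
    w = 1.0 / (1.0 + diffs)
    return float(np.sum(w * neighbors) / np.sum(w))

# A vertical edge: the horizontal pair disagrees (0 vs 9) and is suppressed,
# so the estimate stays close to the values along the edge.
cfa = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]], dtype=float)
g = green_at_center(cfa, 1, 1)
```

A plain (unweighted) average of the four neighbors would give 2.25 here and visibly blur the edge.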

64 citations

References
01 Jan 1967
TL;DR: The k-means algorithm as introduced in this paper partitions an N-dimensional population into k sets on the basis of a sample; the k-means concept generalizes the ordinary sample mean, and the procedure is shown to give partitions which are reasonably efficient in the sense of within-class variance.
Abstract: The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, \cdots, S_k} is a partition of E^N, and u_i, i = 1, 2, \cdots, k, is the conditional mean of p over the set S_i, then W^2(S) = \sum_{i=1}^{k} \int_{S_i} |z - u_i|^2 \, dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4.
The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
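A minimal sketch of the procedure, using batch updates (MacQueen's original updates the means sequentially, one sample at a time) together with the within-class variance W^2(S) that the resulting partitions tend to keep low:

```python
import numpy as np

def k_means(x, k, iters=20):
    """Batch k-means on an (n, d) sample: alternate nearest-mean assignment
    with recomputing each mean as the conditional mean over its set S_i."""
    idx = np.linspace(0, len(x) - 1, k).astype(int)   # crude deterministic init
    means = x[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
        for i in range(k):
            if np.any(labels == i):
                means[i] = x[labels == i].mean(axis=0)
    return means, labels

def within_class_variance(x, means, labels):
    """W^2(S): sum over classes of squared distances to the class mean."""
    return float(sum(((x[labels == i] - m) ** 2).sum()
                     for i, m in enumerate(means)))

# Two well-separated Gaussian clumps: the recovered means land on the clumps.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
means, labels = k_means(x, 2)
```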

24,320 citations

Journal ArticleDOI
S. P. Lloyd
TL;DR: In this article, the authors derived necessary conditions for any finite number of quanta and associated quantization intervals of an optimum finite quantization scheme to achieve minimum average quantization noise power.
Abstract: It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta becomes large. The optimum quantization schemes for 2^{b} quanta, b = 1, 2, \cdots, 7, are given numerically for Gaussian and for Laplacian distributions of signal amplitudes.
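The two necessary conditions derived in the paper — each quantum is the centroid of its quantization interval, and each interval boundary is the midpoint of adjacent quanta — can be iterated directly on an empirical sample. A sketch for a Gaussian source with 2^b = 4 quanta (the sample-based iteration is a standard adaptation, not the paper's own numerical procedure):

```python
import numpy as np

def lloyd_max(samples, b, iters=50):
    """Iterate Lloyd's necessary conditions on a 1-D sample:
    (1) each interval boundary is the midpoint of adjacent quanta;
    (2) each quantum is the mean (centroid) of the samples in its interval.
    Returns 2^b quanta locally minimizing average quantization noise power."""
    n = 2 ** b
    quanta = np.quantile(samples, (np.arange(n) + 0.5) / n)  # spread-out init
    for _ in range(iters):
        edges = (quanta[:-1] + quanta[1:]) / 2               # condition (1)
        idx = np.searchsorted(edges, samples)
        quanta = np.array([samples[idx == i].mean() if np.any(idx == i)
                           else quanta[i] for i in range(n)])  # condition (2)
    return quanta

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 20000)
q = lloyd_max(x, 2)   # 4-level quantizer for a unit Gaussian source
```

For a unit Gaussian the known 4-level optimum is approximately ±0.4528 and ±1.510; the empirical iteration lands close to those values.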

11,872 citations

Journal ArticleDOI
TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
Abstract: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data. The basic properties of the algorithm are discussed and demonstrated by examples. Quite general distortion measures and long blocklengths are allowed, as exemplified by the design of parameter vector quantizers of ten-dimensional vectors arising in Linear Predictive Coded (LPC) speech compression with a complicated distortion measure arising in LPC analysis that does not depend only on the error vector.
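Specialized to squared-error distortion (the paper allows far more general distortion measures), the design algorithm can be sketched as a splitting procedure: start from the global centroid of the training sequence, double the codebook, and run generalized Lloyd iterations after each split. The splitting perturbation used here is one common choice, not necessarily the paper's:

```python
import numpy as np

def lbg(train, rate, iters=20, eps=0.01):
    """Codebook design from a training sequence by splitting, with
    squared-error distortion. Doubles the codebook `rate` times, so the
    final codebook has 2**rate codevectors."""
    codebook = train.mean(axis=0, keepdims=True)        # global centroid
    for _ in range(rate):
        # Split each codevector into a perturbed pair (assumes codevectors
        # are away from the origin; an additive perturbation is safer there).
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):                          # generalized Lloyd
            d = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
            labels = d.argmin(axis=1)                   # nearest-neighbor rule
            for i in range(len(codebook)):
                if np.any(labels == i):                 # centroid rule
                    codebook[i] = train[labels == i].mean(axis=0)
    return codebook

# Four tight 2-D clusters: a 2-bit codebook recovers the four cluster centers.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(m, 0.05, (200, 2)) for m in (0.0, 1.0, 2.0, 3.0)])
cb = lbg(train, rate=2)
```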

7,935 citations

Book
01 Jan 1991
TL;DR: A textbook treatment of quantization and signal compression, covering random processes, sampling, linear prediction, scalar and predictive quantization, bit allocation, transform and entropy coding, and vector quantization in its constrained, predictive, finite-state, adaptive, and variable-rate forms.
Abstract (table of contents):
1 Introduction: 1.1 Signals, Coding, and Compression; 1.2 Optimality; 1.3 How to Use this Book; 1.4 Related Reading
Part I: Basic Tools
2 Random Processes and Linear Systems: 2.1 Introduction; 2.2 Probability; 2.3 Random Variables and Vectors; 2.4 Random Processes; 2.5 Expectation; 2.6 Linear Systems; 2.7 Stationary and Ergodic Properties; 2.8 Useful Processes; 2.9 Problems
3 Sampling: 3.1 Introduction; 3.2 Periodic Sampling; 3.3 Noise in Sampling; 3.4 Practical Sampling Schemes; 3.5 Sampling Jitter; 3.6 Multidimensional Sampling; 3.7 Problems
4 Linear Prediction: 4.1 Introduction; 4.2 Elementary Estimation Theory; 4.3 Finite-Memory Linear Prediction; 4.4 Forward and Backward Prediction; 4.5 The Levinson-Durbin Algorithm; 4.6 Linear Predictor Design from Empirical Data; 4.7 Minimum Delay Property; 4.8 Predictability and Determinism; 4.9 Infinite Memory Linear Prediction; 4.10 Simulation of Random Processes; 4.11 Problems
Part II: Scalar Coding
5 Scalar Quantization I: 5.1 Introduction; 5.2 Structure of a Quantizer; 5.3 Measuring Quantizer Performance; 5.4 The Uniform Quantizer; 5.5 Nonuniform Quantization and Companding; 5.6 High Resolution: General Case; 5.7 Problems
6 Scalar Quantization II: 6.1 Introduction; 6.2 Conditions for Optimality; 6.3 High Resolution Optimal Companding; 6.4 Quantizer Design Algorithms; 6.5 Implementation; 6.6 Problems
7 Predictive Quantization: 7.1 Introduction; 7.2 Difference Quantization; 7.3 Closed-Loop Predictive Quantization; 7.4 Delta Modulation; 7.5 Problems
8 Bit Allocation and Transform Coding: 8.1 Introduction; 8.2 The Problem of Bit Allocation; 8.3 Optimal Bit Allocation Results; 8.4 Integer Constrained Allocation Techniques; 8.5 Transform Coding; 8.6 Karhunen-Loeve Transform; 8.7 Performance Gain of Transform Coding; 8.8 Other Transforms; 8.9 Sub-band Coding; 8.10 Problems
9 Entropy Coding: 9.1 Introduction; 9.2 Variable-Length Scalar Noiseless Coding; 9.3 Prefix Codes; 9.4 Huffman Coding; 9.5 Vector Entropy Coding; 9.6 Arithmetic Coding; 9.7 Universal and Adaptive Entropy Coding; 9.8 Ziv-Lempel Coding; 9.9 Quantization and Entropy Coding; 9.10 Problems
Part III: Vector Coding
10 Vector Quantization I: 10.1 Introduction; 10.2 Structural Properties and Characterization; 10.3 Measuring Vector Quantizer Performance; 10.4 Nearest Neighbor Quantizers; 10.5 Lattice Vector Quantizers; 10.6 High Resolution Distortion Approximations; 10.7 Problems
11 Vector Quantization II: 11.1 Introduction; 11.2 Optimality Conditions for VQ; 11.3 Vector Quantizer Design; 11.4 Design Examples; 11.5 Problems
12 Constrained Vector Quantization: 12.1 Introduction; 12.2 Complexity and Storage Limitations; 12.3 Structurally Constrained VQ; 12.4 Tree-Structured VQ; 12.5 Classified VQ; 12.6 Transform VQ; 12.7 Product Code Techniques; 12.8 Partitioned VQ; 12.9 Mean-Removed VQ; 12.10 Shape-Gain VQ; 12.11 Multistage VQ; 12.12 Constrained Storage VQ; 12.13 Hierarchical and Multiresolution VQ; 12.14 Nonlinear Interpolative VQ; 12.15 Lattice Codebook VQ; 12.16 Fast Nearest Neighbor Encoding; 12.17 Problems
13 Predictive Vector Quantization: 13.1 Introduction; 13.2 Predictive Vector Quantization; 13.3 Vector Linear Prediction; 13.4 Predictor Design from Empirical Data; 13.5 Nonlinear Vector Prediction; 13.6 Design Examples; 13.7 Problems
14 Finite-State Vector Quantization: 14.1 Recursive Vector Quantizers; 14.2 Finite-State Vector Quantizers; 14.3 Labeled-States and Labeled-Transitions; 14.4 Encoder/Decoder Design; 14.5 Next-State Function Design; 14.6 Design Examples; 14.7 Problems
15 Tree and Trellis Encoding: 15.1 Delayed Decision Encoder; 15.2 Tree and Trellis Coding; 15.3 Decoder Design; 15.4 Predictive Trellis Encoders; 15.5 Other Design Techniques; 15.6 Problems
16 Adaptive Vector Quantization: 16.1 Introduction; 16.2 Mean Adaptation; 16.3 Gain-Adaptive Vector Quantization; 16.4 Switched Codebook Adaptation; 16.5 Adaptive Bit Allocation; 16.6 Address VQ; 16.7 Progressive Code Vector Updating; 16.8 Adaptive Codebook Generation; 16.9 Vector Excitation Coding; 16.10 Problems
17 Variable Rate Vector Quantization: 17.1 Variable Rate Coding; 17.2 Variable Dimension VQ; 17.3 Alternative Approaches to Variable Rate VQ; 17.4 Pruned Tree-Structured VQ; 17.5 The Generalized BFOS Algorithm; 17.6 Pruned Tree-Structured VQ; 17.7 Entropy Coded VQ; 17.8 Greedy Tree Growing; 17.9 Design Examples; 17.10 Bit Allocation Revisited; 17.11 Design Algorithms; 17.12 Problems
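As a small taste of the book's predictive-quantization material, closed-loop predictive quantization can be sketched in a few lines: the encoder quantizes the prediction residual and predicts from the *reconstructed* signal, so the decoder can run the identical predictor and quantization errors do not accumulate. The first-order (previous-sample) predictor and uniform residual quantizer here are the simplest illustrative choices:

```python
import numpy as np

def dpcm(signal, step):
    """Closed-loop predictive quantization (DPCM) with a previous-sample
    predictor and a uniform residual quantizer of the given step size.
    Because the encoder predicts from its own reconstruction, the
    per-sample error stays bounded by step/2 instead of accumulating."""
    recon = np.zeros_like(signal, dtype=float)
    prev = 0.0
    for i, s in enumerate(signal):
        residual = s - prev                      # predict with last reconstruction
        q = step * np.round(residual / step)     # uniform quantizer
        recon[i] = prev + q                      # decoder-side reconstruction
        prev = recon[i]
    return recon

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 3 * t)
xh = dpcm(x, step=0.05)
err = np.max(np.abs(x - xh))
```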

7,015 citations

Journal ArticleDOI
TL;DR: The mathematics of a lightness scheme is described that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.
Abstract: Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.
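The flux-independence at the heart of the scheme can be shown with a toy version: lightness built from ratios of neighboring areas (here, log-differences relative to a reference area) is unchanged when the illumination scales every flux by a constant. This is a deliberately simplified illustration, not Land's full scheme, which pools many paths across the image:

```python
import numpy as np

def lightness(fluxes):
    """Relative lightness from flux ratios: log of each area's flux minus
    the log of the first (reference) area's flux. Any uniform illumination
    factor cancels in the difference."""
    logs = np.log(np.asarray(fluxes, dtype=float))
    return logs - logs[0]

reflectance = np.array([0.8, 0.4, 0.1])
dim = lightness(10.0 * reflectance)       # dim illumination
bright = lightness(500.0 * reflectance)   # 50x brighter illumination
```

The two lightness vectors are identical even though the fluxes differ by a factor of 50 — the scheme never needs to measure flux itself.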

3,480 citations