Journal ArticleDOI

Digital Color Imaging

TL;DR: This article surveys digital color imaging. Fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology; present-day color recording and reproduction devices are then reviewed along with the common mathematical models used to represent them.
Abstract: This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
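The vector-space view of color measurement that the survey builds on can be sketched numerically: a reflectance spectrum sampled at n wavelengths is a vector in R^n, and a device's three responses are inner products of that vector with the device's spectral sensitivity vectors. The sketch below uses synthetic Gaussian sensitivities chosen purely for illustration, not real colorimetric data.

```python
import numpy as np

# Vector-space sketch of color measurement: sampled spectra are vectors,
# sensor responses are inner products with sensitivity vectors.
n = 31                                    # 400..700 nm in 10 nm steps
wl = np.linspace(400.0, 700.0, n)

def gaussian(mu, sigma):
    """Smooth synthetic spectral curve centered at wavelength mu (nm)."""
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Columns of S are the device's three spectral sensitivities (synthetic,
# "red"/"green"/"blue"-like Gaussians -- an assumption, not CIE data).
S = np.stack([gaussian(600, 40),
              gaussian(550, 40),
              gaussian(450, 40)], axis=1)  # shape (n, 3)

r = 0.2 + 0.6 * gaussian(520, 60)          # a smooth test reflectance, in R^n
t = S.T @ r                                # 3-vector of device responses

print(t.shape)                             # → (3,)
```

In this notation, questions like device characterization and color correction become linear-algebra questions about the column space of S, which is why the survey adopts it.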
Citations
Journal ArticleDOI
TL;DR: This work demonstrates a convenient, versatile approach to dynamically fine-tuning emission in the full colour range from a new class of core-shell upconversion nanocrystals by adjusting the pulse width of infrared laser beams and suggests that the unprecedented colour tunability from these nanocrystals is governed by a non-steady-state upconversion process.
Abstract: Developing light-harvesting materials with tunable emission colours has always been at the forefront of colour display technologies. The variation in materials composition, phase and structure can provide a useful tool for producing a wide range of emission colours, but controlling the colour gamut in a material with a fixed composition remains a daunting challenge. Here, we demonstrate a convenient, versatile approach to dynamically fine-tuning emission in the full colour range from a new class of core-shell upconversion nanocrystals by adjusting the pulse width of infrared laser beams. Our mechanistic investigations suggest that the unprecedented colour tunability from these nanocrystals is governed by a non-steady-state upconversion process. These findings provide keen insights into controlling energy transfer in out-of-equilibrium optical processes, while offering the possibility for the construction of true three-dimensional, full-colour display systems with high spatial resolution and locally addressable colour gamut.

777 citations

Journal ArticleDOI
TL;DR: This article presents an overview of existing map processing techniques, bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.
Abstract: Maps depict natural and human-induced changes on earth at a fine resolution for large areas and over long periods of time. In addition, maps—especially historical maps—are often the only information source about the earth as surveyed using geodetic techniques. In order to preserve these unique documents, increasing numbers of digital map archives have been established, driven by advances in software and hardware technologies. Since the early 1980s, researchers from a variety of disciplines, including computer science and geography, have been working on computational methods for the extraction and recognition of geographic features from archived images of maps (digital map processing). The typical result from map processing is geographic information that can be used in spatial and spatiotemporal analyses in a Geographic Information System environment, which benefits numerous research fields in the spatial, social, environmental, and health sciences. However, map processing literature is spread across a broad range of disciplines in which maps are included as a special type of image. This article presents an overview of existing map processing techniques, with the goal of bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.

674 citations

Journal ArticleDOI
TL;DR: The proposed demosaicing algorithm estimates missing pixels by interpolating in the direction with fewer color artifacts, and the aliasing problem is addressed by applying filterbank techniques to 2-D directional interpolation.
Abstract: A cost-effective digital camera uses a single-image sensor, applying alternating patterns of red, green, and blue color filters to each pixel location. A way to reconstruct a full three-color representation of color images by estimating the missing pixel components in each color plane is called a demosaicing algorithm. This paper presents three inherent problems often associated with demosaicing algorithms that incorporate two-dimensional (2-D) directional interpolation: misguidance color artifacts, interpolation color artifacts, and aliasing. The level of misguidance color artifacts present in two images can be compared using metric neighborhood modeling. The proposed demosaicing algorithm estimates missing pixels by interpolating in the direction with fewer color artifacts. The aliasing problem is addressed by applying filterbank techniques to 2-D directional interpolation. The interpolation artifacts are reduced using a nonlinear iterative procedure. Experimental results using digital images confirm the effectiveness of this approach.
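The core idea of the direction-adaptive interpolation described above can be sketched as follows: at each non-green site of the Bayer mosaic, the green value is interpolated along whichever direction (horizontal or vertical) has the smaller gradient, since interpolating across an edge is what produces misguidance color artifacts. This is a minimal illustration assuming a GRBG pattern; the paper's full algorithm additionally uses artifact metrics, filterbank techniques, and iterative refinement.

```python
import numpy as np

def interpolate_green(mosaic):
    """Gradient-directed green interpolation on a GRBG Bayer mosaic.

    Minimal sketch: green sites are assumed where row and column parity
    match (GRBG). Border pixels are left untouched for simplicity.
    """
    mosaic = mosaic.astype(float)
    h, w = mosaic.shape
    green = mosaic.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y % 2 == 0) == (x % 2 == 0):   # green site: keep measured value
                continue
            dh = abs(mosaic[y, x - 1] - mosaic[y, x + 1])  # horizontal gradient
            dv = abs(mosaic[y - 1, x] - mosaic[y + 1, x])  # vertical gradient
            if dh <= dv:   # smoother horizontally -> interpolate horizontally
                green[y, x] = (mosaic[y, x - 1] + mosaic[y, x + 1]) / 2.0
            else:          # smoother vertically -> interpolate vertically
                green[y, x] = (mosaic[y - 1, x] + mosaic[y + 1, x]) / 2.0
    return green

flat = np.full((6, 6), 10.0)               # flat scene: no direction matters
print(interpolate_green(flat)[2, 2])       # → 10.0
```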

462 citations

Journal ArticleDOI
TL;DR: An overview of the image processing pipeline is presented, first from a signal processing perspective and later from an implementation perspective, along with the tradeoffs involved.
Abstract: Digital still color cameras (DSCs) have gained significant popularity in recent years, with projected sales in the order of 44 million units by the year 2005. Such an explosive demand calls for an understanding of the processing involved and the implementation issues, bearing in mind the otherwise difficult problems these cameras solve. This article presents an overview of the image processing pipeline, first from a signal processing perspective and later from an implementation perspective, along with the tradeoffs involved.

368 citations

Journal ArticleDOI
TL;DR: The proposed fully automated vector technique can be easily implemented in either hardware or software and incorporated into any existing microarray image analysis and gene expression tool.
Abstract: Vector processing operations use essential spectral and spatial information to remove noise and localize microarray spots. The proposed fully automated vector technique can be easily implemented in either hardware or software and incorporated into any existing microarray image analysis and gene expression tool.

348 citations

References
Journal ArticleDOI
TL;DR: Substantial improvements over single sample quantizing are attained with blocks of relatively short length, and the final selection of the optimal set of quantizers becomes a matter of a few simple trials.
Abstract: The paper analyzes a procedure for quantizing blocks of N correlated Gaussian random variables. A linear transformation (P) first converts the N dependent random variables into N independent random variables. These are then quantized, one at a time, in optimal fashion. The output of each quantizer is transmitted by a binary code. The total number of binary digits available for the block of N symbols is fixed. Finally, a second N × N linear transformation (R) constructs from the quantized values the best estimate (in a mean-square sense) of the original variables. It is shown that the best choice of R is R = P^{-1}, regardless of other considerations. If R = P^{-1}, the best choice for P is the transpose of the orthogonal matrix which diagonalizes the moment matrix of the original (correlated) random variables. An approximate expression is obtained for the manner in which the available binary digits should be assigned to the N quantized variables, i.e., the manner in which the number of levels for each quantizer should be chosen. The final selection of the optimal set of quantizers then becomes a matter of a few simple trials. A number of examples are worked out and substantial improvements over single sample quantizing are attained with blocks of relatively short length.
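The scheme the abstract describes — decorrelate the block with the orthogonal transform that diagonalizes its covariance (the Karhunen-Loève transform), quantize each coefficient independently, and reconstruct with R = P^{-1} = P^T — can be sketched as below. Uniform scalar quantizers stand in for the paper's optimally chosen ones, and the covariance is an assumed first-order exponential-decay model, both simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
# Assumed covariance of the correlated Gaussian source: exponential decay.
C = np.array([[0.9 ** abs(i - j) for j in range(N)] for i in range(N)])
x = rng.multivariate_normal(np.zeros(N), C, size=1000)  # 1000 blocks of N samples

# Columns of P are eigenvectors of C, so y = x P has diagonal covariance.
eigvals, P = np.linalg.eigh(C)
y = x @ P                             # decorrelated coefficients

step = 0.5
yq = step * np.round(y / step)        # uniform quantizer per coefficient
                                      # (the paper uses optimal quantizers)
xhat = yq @ P.T                       # reconstruct with R = P^{-1} = P^T

mse = np.mean((x - xhat) ** 2)        # ≈ step²/12 per sample for fine steps
print(mse)
```

The bit-allocation step the abstract mentions corresponds to choosing a different step size per coefficient according to its eigenvalue, which this sketch omits.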

431 citations

Journal ArticleDOI
TL;DR: The spectral reflectance curves of 433 chips in the Munsell Book of Color have been found to depend on only three components which account for 99.18% of the variance.
Abstract: The spectral reflectance curves of 433 chips in the Munsell Book of Color have been found to depend on only three components which account for 99.18% of the variance. It is suggested that this three-component dependency may be a characteristic of all organic pigments, including those in the retina, and thus explain the trichromatic nature of color vision.
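The analysis behind this result is a principal-component decomposition of a bank of reflectance spectra. The sketch below applies the same procedure (PCA via the SVD of mean-centered spectra) to synthetic smooth spectra — not the actual Munsell chips — to show how a handful of components can capture nearly all of the variance of smooth spectral curves.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400.0, 700.0, 31)

# Synthetic smooth spectra: random mixtures of three broad Gaussian bases
# plus a little noise. This low-dimensional construction is an assumption
# standing in for the measured Munsell reflectances.
basis = np.stack([np.exp(-0.5 * ((wl - mu) / 60.0) ** 2)
                  for mu in (450, 550, 650)])            # shape (3, 31)
spectra = (rng.uniform(0, 1, (200, 3)) @ basis
           + 0.01 * rng.normal(size=(200, 31)))          # 200 "chips"

# PCA via SVD of the mean-centered data matrix.
centered = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var = s ** 2                          # variance along each principal axis
explained = var[:3].sum() / var.sum() # fraction captured by 3 components

print(explained)                      # close to 1 for smooth spectra
```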

415 citations

Book
01 Jan 1991

394 citations

BookDOI
01 Dec 1990
TL;DR: Performance Bounds for Subband Coding, IIR Analysis/Synthesis Systems, and Medical Image Compression: Possible Applications of Subband Coding.
Abstract: 1 Performance Bounds for Subband Coding.- 2 Multirate Filter Banks for Subband Coding.- 3 IIR Analysis/Synthesis Systems.- 4 Subband Transforms.- 5 Subband Coding of Color Images.- 6 Subband Coding of Video Signals.- 7 Advanced Television Coding Using Exact Reconstruction Filter Banks.- 8 Medical Image Compression: Possible Applications of Subband Coding.

390 citations

Journal ArticleDOI
TL;DR: By simply adding some noise to the signal before it is quantized and subtracting the same noise at the receiver, the quantization steps can be broken up and the source rate reduced to three bits per sample.
Abstract: In order to transmit television pictures over a digital channel, it is necessary to send a binary code which represents the intensity level at each point in the picture. For good picture quality using standard PCM transmission, at least six bits are required at each sample point, since the eye is very sensitive to the small intensity steps introduced by quantization. However, by simply adding some noise to the signal before it is quantized and subtracting the same noise at the receiver, the quantization steps can be broken up and the source rate reduced to three bits per sample. Pseudo-random number generators can be synchronized at the transmitter and receiver to provide the identical "noise" which makes the process possible. Thus, with the addition of only a small amount of equipment, the efficiency of a PCM channel can be doubled.
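The technique — subtractive pseudo-random dither — can be sketched as below: transmitter and receiver generate identical "noise" from a shared seed, the transmitter adds it before coarse quantization, and the receiver subtracts it after decoding, so the structured contouring of coarse quantization becomes unstructured error bounded by half a quantization step. The level count, seed, and test signal are illustrative.

```python
import numpy as np

def transmit(signal, levels=8, seed=42):
    """Add seeded pseudo-noise, then quantize coarsely to integer codes."""
    step = 1.0 / levels
    noise = np.random.default_rng(seed).uniform(-step / 2, step / 2,
                                                signal.shape)
    return np.round((signal + noise) / step)     # coarse integer codes

def receive(codes, levels=8, seed=42):
    """Rebuild the sample values, then subtract the identical pseudo-noise."""
    step = 1.0 / levels
    noise = np.random.default_rng(seed).uniform(-step / 2, step / 2,
                                                codes.shape)
    return codes * step - noise                  # same seed -> same noise

ramp = np.linspace(0.0, 1.0, 100)                # smooth test signal
out = receive(transmit(ramp))
err = np.max(np.abs(out - ramp))
print(err)                                       # bounded by half a step
```

Because the subtracted noise cancels the added noise exactly, the residual error is just the quantizer's rounding error, now decorrelated from the signal — which is why the contours the eye objects to disappear.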

384 citations