
Showing papers presented at "Color Imaging Conference in 2002"


Proceedings Article
01 Jan 2002
TL;DR: This document describes the single set of revisions to the CIECAM97s model that make up the CIECAM02 color appearance model and provides an introduction to the model and a summary of its structure.
Abstract: The CIE Technical Committee 8-01, color appearance models for color management applications, has recently proposed a single set of revisions to the CIECAM97s color appearance model. This new model, called CIECAM02, is based on CIECAM97s but includes many revisions [1-4] and some simplifications. A partial list of revisions includes a linear chromatic adaptation transform, a new non-linear response compression function and modifications to the calculations for the perceptual attribute correlates. The format of this paper is an annotated description of the forward equations for the model.

Introduction. The CIECAM02 color appearance model builds upon the basic structure and form of the CIECAM97s [5,6] color appearance model. This document describes the single set of revisions to the CIECAM97s model that make up the CIECAM02 color appearance model. There were many, often conflicting, considerations, such as compatibility with CIECAM97s, prediction performance, computational complexity, invertibility and other factors. The format of this paper differs from previous papers introducing a color appearance model. Often a general description of the model is provided, then discussion about its performance, and finally the forward and inverse equations are listed separately in an appendix. Performance of the CIECAM02 model will be described elsewhere [7], and for brevity this paper will focus on the forward model. Specifically, this paper attempts to document the decisions that went into the design of CIECAM02. For a complete description of the forward and inverse equations, as well as usage guidelines, interested readers are urged to refer to the TC 8-01 web site [8] or to the CIE for the latest draft or final copy of the technical report. This paper is not intended to provide a definitive reference for implementing CIECAM02 but an introduction to the model and a summary of its structure.

Data Sets. The CIECAM02 model, like CIECAM97s, is based primarily on a set of corresponding colors experiments and a collection of color appearance experiments. The corresponding color data sets [9,10] were used for the optimization of the chromatic adaptation transform and the D factor. The LUTCHI color appearance data [11,12] was the basis for optimization of the perceptual attribute correlates. Other data sets and spaces were also considered. The NCS system was a reference for the e and hue fitting. The chroma scaling was also compared to the Munsell Book of Color. Finally, the saturation equation was based heavily on recent experimental data [13].

Summary of Forward Model. A color appearance model [14,15] provides a viewing-condition-specific means for transforming tristimulus values to or from perceptual attribute correlates. The two major pieces of this model are a chromatic adaptation transform and equations for computing correlates of perceptual attributes, such as brightness, lightness, chroma, saturation, colorfulness and hue. The chromatic adaptation transform takes into account changes in the chromaticity of the adopted white point. In addition, the luminance of the adopted white point can influence the degree to which an observer adapts to that white point. The degree of adaptation, or D factor, is therefore another aspect of the chromatic adaptation transform. Generally, between the chromatic adaptation transform and the computation of perceptual attribute correlates there is also a non-linear response compression. The chromatic adaptation transform and D factor were derived from experimental data from corresponding colors data sets. The non-linear response compression was derived from physiological data and other considerations. The perceptual attribute correlates were derived by comparing predictions to magnitude estimation experiments, such as various phases of the LUTCHI data, and to other data sets, such as the Munsell Book of Color. Finally, the entire structure of the model is generally constrained to be invertible in closed form and to take into account a sub-set of color appearance phenomena.

Viewing Condition Parameters. It is convenient to begin by computing viewing-condition-dependent constants. First the surround is selected and then values for F, c and Nc can be read from Table 1. For intermediate surrounds these values can be linearly interpolated [2].

Table 1. Viewing condition parameters for different surrounds

  Surround   F     c      Nc
  Average    1.0   0.69   1.0
  Dim        0.9   0.59   0.95
  Dark       0.8   0.525  0.8

The value of FL can be computed using equations 1 and 2, where LA is the luminance of the adapting field in cd/m^2. Note that this two-piece formula quickly goes to very small values at mesopic and scotopic levels, and while it may resemble a cube-root function there are considerable differences between this two-piece function and a cube root as the luminance of the adapting field gets very small.

  k = 1 / (5*LA + 1)                                          (1)

  FL = 0.2*k^4*(5*LA) + 0.1*(1 - k^4)^2*(5*LA)^(1/3)          (2)

The value n is a function of the luminance factor of the background and provides a very limited model of spatial color appearance. The value of n ranges from 0 for a background luminance factor of zero to 1 for a background luminance factor equal to the luminance factor of the adopted white point. The n value can then be used to compute Nbb, Ncb and z, which are then used during the computation of several of the perceptual attribute correlates. These calculations can be performed once for a given viewing condition.
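The viewing-condition constants above can be sketched directly from Table 1 and equations (1) and (2). The snippet below is a minimal illustration under those equations only, not a complete CIECAM02 implementation; the dictionary keys and function name are mine.

```python
# Minimal sketch of the CIECAM02 viewing-condition constants:
# Table 1 surround parameters plus equations (1) and (2).

SURROUND_PARAMS = {            # Table 1: (F, c, Nc) per surround
    "average": (1.0, 0.69, 1.0),
    "dim":     (0.9, 0.59, 0.95),
    "dark":    (0.8, 0.525, 0.8),
}

def luminance_level_adaptation(L_A):
    """FL from the adapting-field luminance L_A in cd/m^2 (Eqs. 1-2)."""
    k = 1.0 / (5.0 * L_A + 1.0)                                # Eq. (1)
    return (0.2 * k ** 4 * (5.0 * L_A)
            + 0.1 * (1.0 - k ** 4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0))  # Eq. (2)
```

For photopic adapting luminances the k**4 term is negligible, so FL behaves like the cube-root branch; for very small L_A the first term dominates and FL falls off much faster than a cube root, which is the behavior the paper notes.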

409 citations


Proceedings Article
01 Jan 2002
TL;DR: It is shown and described how, like the human visual system, the Foveon X3 sensor has an inherent luminance-chrominance behavior which results in higher image quality using fewer image pixels.
Abstract: In the two centuries of photography, there has been a wealth of invention and innovation aimed at capturing a realistic and pleasing full-color two-dimensional representation of a scene. In this paper, we look back at the historical milestones of color photography and bring into focus a fascinating parallelism between the evolution of chemical-based color imaging starting over a century ago, and the evolution of electronic photography which continues today. The second part of our paper is dedicated to a technical discussion of the new Foveon X3 multilayer color image sensor; what could be described as a new, more advanced species of camera sensor technology. The X3 technology is compared to other competing sensor technologies; we compare spectral sensitivities using one of many possible figures of merit. Finally, we show and describe how, like the human visual system, the Foveon X3 sensor has an inherent luminance-chrominance behavior which results in higher image quality using fewer image pixels.

118 citations


Proceedings Article
01 Jan 2002
TL;DR: The objectives in formulating iCAM were to simultaneously provide traditional color appearance capabilities, spatial vision attributes, and color difference metrics, in a model simple enough for practical applications.
Abstract: For over 20 years, color appearance models have evolved to the point of international standardization. These models are capable of predicting the appearance of spatially simple color stimuli under a wide variety of viewing conditions and have been applied to images by treating each pixel as an independent stimulus. It has more recently been recognized that revolutionary advances in color appearance modeling will require more rigorous treatment of spatial (and perhaps temporal) appearance phenomena. In addition, color appearance models are often more complex than warranted by the available visual data and by limitations in the accuracy and precision of practical viewing conditions. Lastly, issues of color difference measurement are typically treated separately from color appearance. Thus, the stage has been set for a new generation of color appearance models. This paper presents one such model, called iCAM, for image color appearance model. The objectives in formulating iCAM were to simultaneously provide traditional color appearance capabilities, spatial vision attributes, and color difference metrics, in a model simple enough for practical applications. The framework and initial implementation of the model are presented along with examples that illustrate its performance for chromatic adaptation, appearance scales, color difference, crispening, spreading, high-dynamic-range tone mapping, and image quality measurement. It is expected that the implementation of this model framework will be refined in the coming years as new data become available.

94 citations


Proceedings Article
01 Nov 2002
TL;DR: The Retinex Theory can be extended to perform yet another image processing task, that of removing shadows from images, by a simple modification to the original, path-based retinex computation such that it incorporates information about the location of shadow edges in an image.
Abstract: The Retinex Theory first introduced by Edwin Land forty years ago has been widely used for a range of applications. It was first introduced as a model of our own visual processing but has since been used to perform a range of image processing tasks including illuminant correction, dynamic range compression, and gamut mapping. In this paper we show how the theory can be extended to perform yet another image processing task: that of removing shadows from images. Our method is founded on a simple modification to the original, path-based retinex computation such that we incorporate information about the location of shadow edges in an image. We demonstrate that when the location of shadow edges is known the algorithm is able to remove shadows effectively. We also set forth a method for the automatic location of shadow edges which makes use of a 1-D illumination-invariant image. In this case the location of shadow edges is imperfect but we show that even so, the algorithm does a good job of removing the shadows.
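The modification described above can be sketched in one dimension: classic path-based retinex accumulates log intensity ratios along a path, discarding small ratios as smooth illumination gradients; the extension additionally discards ratios at known shadow-edge positions. The threshold value, function name and exact reset rule below are illustrative assumptions, not the paper's algorithm.

```python
import math

def path_retinex_1d(intensity, shadow_edges, threshold=0.02):
    """1-D sketch of path-based retinex lightness. Small log ratios are
    discarded as smooth illumination gradients (the classic rule); per
    the shadow-edge extension, ratios at known shadow-edge indices are
    discarded too, so sharp shadow steps no longer survive."""
    log_lightness = [0.0]
    for i in range(1, len(intensity)):
        ratio = math.log(intensity[i] / intensity[i - 1])
        if abs(ratio) < threshold or i in shadow_edges:
            ratio = 0.0   # treated as illumination, not reflectance
        log_lightness.append(log_lightness[-1] + ratio)
    return [math.exp(v) for v in log_lightness]

signal = [1.0, 1.0, 1.0, 0.4, 0.4, 0.4]   # a shadow step at index 3
with_edge = path_retinex_1d(signal, shadow_edges={3})   # shadow removed
without = path_retinex_1d(signal, shadow_edges=set())   # shadow kept
```

With the shadow edge supplied, the step is absorbed into the illumination estimate and the recovered lightness is uniform; without it, the sharp step exceeds the gradient threshold and survives as apparent reflectance.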

93 citations


Proceedings Article
01 Jan 2002
TL;DR: This paper describes the efforts to create a calibrated, portable high dynamic range imaging system, and discusses the general properties of seventy calibrated high dynamic range images of natural scenes in the database (http://pdc.stanford.edu/hdri).
Abstract: The ability to capture and render high dynamic range scenes limits the quality of current consumer and professional digital cameras. The absence of a well-calibrated high dynamic range color image database of natural scenes is an impediment to developing such rendering algorithms for digital photography. This paper describes our efforts to create such a database. First, we discuss how the image dynamic range is affected by three main components in the imaging pipeline: the optics, the sensor and the color transformation. Second, we describe a calibrated, portable high dynamic range imaging system. Third, we discuss the general properties of seventy calibrated high dynamic range images of natural scenes in the database (http://pdc.stanford.edu/hdri/). We recorded the calibrated RGB values and the spectral power distribution of illumination at different locations for each scene. The scene luminance ranges span two to six orders of magnitude. Within any scene, both the absolute level and the spectral composition of the illumination vary considerably. This suggests that future high dynamic range rendering algorithms need to account jointly for local color adaptation and local illumination level.

81 citations


Proceedings Article
01 Jan 2002
TL;DR: In this paper, a new method of display characterization is proposed which is applicable to the assessment of color reproduction of liquid-crystal displays (LCDs), considering both channel interaction and non-constancy of channel chromaticity.
Abstract: A color management system (CMS), such as an ICC profile or the sRGB space, has been proposed for color transformation and reproduction across media. In such a CMS, accurate colorimetric characterization of a display device plays a critical role in achieving device-independent color reproduction. In the case of a CRT, colorimetric characterization based on a GOG model is accurate enough for this purpose. However, there is no effective counterpart for liquid-crystal displays (LCDs), since the characterization of an LCD presents many difficulties, such as channel interaction and non-constancy of channel chromaticity. In this paper, a new method of display characterization is proposed which is applicable to the assessment of color reproduction of LCDs. The proposed method characterizes an electro-optical transfer function considering both channel interaction and non-constancy of channel chromaticity. Experimental results show that the proposed method is very effective in the colorimetry of LCDs.

62 citations


Proceedings Article
01 Jan 2002
TL;DR: It is demonstrated that a one-color-per-pixel image can be written as the sum of luminance and chrominance, and it is shown that the Bayer CFA is the optimal arrangement of three colors on a square grid.
Abstract: We propose a new method for color demosaicing based on a mathematical model of spatial multiplexing of color. We demonstrate that a one-color-per-pixel image can be written as the sum of luminance and chrominance. In the case of a regular arrangement of colors, such as with the Bayer color filter array (CFA), luminance and chrominance are well localized in the spatial frequency domain. Our algorithm is based on selecting the luminance and chrominance signals in the Fourier domain. This simple and efficient algorithm gives good results, comparable with the Bayesian approach to demosaicing. Additionally, this model allows us to demonstrate that the Bayer CFA is the optimal arrangement of three colors on a square grid. Visual artifacts of the reconstruction can be clearly explained as aliasing between luminance and chrominance. Finally, this framework also allows us to control the trade-off between algorithm efficiency and quality in an explicit manner.
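The multiplexing model can be illustrated with a toy 2x2 Bayer tile: the mosaic of a uniform color splits into a constant luminance (green weighted twice, since it appears twice per tile) and a residual chrominance with zero mean over the tile. The weights and names below follow from the tile itself and are only a sketch of the paper's frequency-domain model, not its demosaicing algorithm.

```python
# Toy illustration of spatial multiplexing: a one-color-per-pixel Bayer
# mosaic equals a full-resolution luminance plus a zero-mean, spatially
# modulated chrominance.

BAYER_TILE = [["G", "R"],
              ["B", "G"]]

def mosaic_cell(r, g, b):
    """Sample one uniform RGB color through the 2x2 Bayer tile."""
    value = {"R": r, "G": g, "B": b}
    return [[value[BAYER_TILE[y][x]] for x in range(2)] for y in range(2)]

def luminance(r, g, b):
    return (r + 2.0 * g + b) / 4.0   # green counted twice per tile

r, g, b = 0.8, 0.5, 0.2
cell = mosaic_cell(r, g, b)
L = luminance(r, g, b)
chroma = [[cell[y][x] - L for x in range(2)] for y in range(2)]
# The chrominance residual has zero mean over the tile, i.e. no energy
# at DC -- which is why luminance and chrominance separate in the
# Fourier domain and can be selected there.
```

Because the chrominance carries no low-frequency energy, it lives in modulated side-bands of the spectrum; demosaicing then amounts to selecting the baseband (luminance) and demodulating the side-bands, and reconstruction artifacts appear exactly where these bands alias.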

56 citations


Proceedings Article
01 Jan 2002
TL;DR: The results are consistent in that CIECAM02 performed as well as, or better than, CIECAM97s in almost all cases, there being a large improvement in the prediction of saturation.
Abstract: A new CIE color appearance model (CIECAM02) has been developed. This paper describes the three major drawbacks of the earlier CIECAM97s model, and shows how the new model performs in these color regions. In addition, both models were tested using available data groups. The results are consistent in that CIECAM02 performed as well as, or better than, CIECAM97s in almost all cases, there being a large improvement in the prediction of saturation. The CIECAM02 model can therefore be considered as a possible replacement for CIECAM97s for all image applications.

35 citations


Proceedings Article
01 Feb 2002
TL;DR: In this article, a color matching algorithm was proposed to obtain the color matching functions of any digital camera, with correct weights and white balance, from the relative scaling of their complete (3-D) spectral sensitivities obtained from real spectroradiometric data.
Abstract: We propose a new algorithm to obtain the color matching functions of any digital camera, with correct weights and white balance, from the relative scaling of their complete (3-D) spectral sensitivities obtained from real spectroradiometric data. Thanks to this algorithm, it is possible to predict the RGB digital levels of any digital camera in realistic illumination-scene environments (spatially non-uniform illumination field, variable chromaticity and a large dynamic range of luminance levels), opening the possibility of transforming any digital camera into a tele-colorimeter. The illumination-scene test was the Macbeth ColorChecker Chart under three different light sources (halogen, metal halide and daylight fluorescent lamps), provided by a non-standard light box. The results confirmed that it is possible to predict any RGB digital levels exclusively by varying the f-number of the camera zoom lens.
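The camera model underlying this kind of prediction can be sketched as follows: digital levels are inner products of each channel's spectral sensitivity with the scene spectral radiance. The sample spectra are hypothetical two-band toys, and the paper's f-number dependence is reduced here to a single gain factor; this is a sketch of the standard linear camera model, not the authors' algorithm.

```python
# Sketch of the linear camera model: predicted digital levels as inner
# products of per-channel spectral sensitivity with scene radiance,
# both sampled at the same wavelengths.

def predict_digital_levels(sensitivities, radiance, gain=1.0):
    """sensitivities: one list per channel; gain stands in for exposure
    factors such as the f-number setting (simplified assumption)."""
    return [gain * sum(s * e for s, e in zip(chan, radiance))
            for chan in sensitivities]

# Hypothetical two-sample spectra:
sens = [[1.0, 0.0],   # "R" channel
        [0.5, 0.5],   # "G" channel
        [0.0, 1.0]]   # "B" channel
rgb = predict_digital_levels(sens, [2.0, 3.0], gain=1.0)
```

In practice the sensitivities would be sampled densely across the visible range, and recovering them with correct absolute scaling and white balance is exactly what the paper's algorithm addresses.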

35 citations


Proceedings Article
01 Jan 2002
TL;DR: A new tool which combines the known techniques with the possibility of interactive gamut mapping is presented and will serve as an important pedagogical tool in the teaching of color engineering and can be used in the production of high quality color images in the future.
Abstract: Several tools and techniques for the visualization of color gamuts have been presented in the past. We present a short survey on the topic, and conclude that tools with the possibility for interactive color adjustment in some color space are almost absent. Therefore, a new tool which combines the known techniques with the possibility of interactive gamut mapping is presented along with suggestions for future work. The motivation for developing the new tool is threefold: First, it will serve as an important pedagogical tool in the teaching of color engineering. Secondly, we believe that the tool will prove helpful in research related to color reproduction. Finally, we hope that the tool can be used in the production of high quality color images in the future.

33 citations


Proceedings Article
01 Jan 2002
TL;DR: Comparative computation results show that the μ-factor is not a competent metric for the optimal design of camera spectral sensitivity functions, while UMG is able to pick out the optimum successfully; the ultimate optimal set was obtained by selecting the set with the highest μ-factor value from the sub-optimal collection obtained with UMG.
Abstract: To evaluate and optimally design spectral sensitivity functions for color input devices, a metric that incorporates practical, significant requirements is desired. The candidate metrics are Vora-Trussell's μ-factor, a metric based on geometrical difference, and the proposed Unified Measure of Goodness (UMG), which simultaneously considers imaging noise and its propagation, colorimetric reproduction accuracy and multi-illuminant color correction. A systematic approach is presented for searching for an optimal set of spectral sensitivity functions from among the complete combinations of the given filter components. Comparative computation results show that the μ-factor is not a competent metric for the optimal design of camera spectral sensitivity functions, while UMG is able to pick out the optimum successfully. Furthermore, the ultimate optimal set was obtained by selecting the set with the highest μ-factor value from the sub-optimal collection obtained with UMG. This hierarchical approach comprehensively considers the advantages of both quality metrics. The candidate optimal sets based on the given filter components are experimentally tested and presented at the end of the article.

Proceedings Article
01 Jan 2002
TL;DR: The results showed that a statistical difference between the peaks of preference of image quality may exist between cultures, but that the cultural difference observed is most likely not of practical significance for most applications.
Abstract: Observer preferences in the color reproduction of pictorial images have been a topic of debate for many years. Through a series of psychophysical experiments we are trying to better understand the differences and trends in observer preferences for pictorial images, determine if cultural biases on preference exist, and finally generate a set of preferred color-reproduced images for future experimentation and evaluation. The results showed that a statistical difference between the peaks of preference of image quality may exist between cultures, but that the cultural difference observed is most likely not of practical significance for most applications. The analysis of a second experiment showed that the intra-observer repeatability of an observer is about half of the variation between observers. Furthermore, the analysis demonstrated that preferences for images with faces have a much tighter range of preference in comparison to images without faces.

Proceedings Article
01 Nov 2002
TL;DR: The invariant image formed from an RGB image taken under light that can be approximated as Planckian solves the colour constancy problem at a single pixel; the improved invariant has a smaller entropy value because the invariant value is smoothed out across former shadow boundaries.
Abstract: The invariant image [1,2] formed from an RGB image taken under light that can be approximated as Planckian solves the colour constancy problem at a single pixel. The invariant is a very useful tool for possible use in a large number of computer vision problems, such as removal of shadows from images [3]. This image is formed by projecting log-log chromaticity coordinates into a 1-D direction determined by a calibration of the imaging camera. The invariant can be formed whether or not gamma correction is applied to images and thus can work for ordinary webcam images, for example, once a self-calibration is carried out [3]. As such, the invariant image is an important new mechanism for image understanding. Since the resulting greyscale image is approximately independent of illumination, it is impervious to lighting change and hence to the presence of shadows. However, in forming the invariant image, it can sometimes happen that shadows are not completely removed. Here, we consider the problem of simple matrixing of sensor values so that the resulting invariant image is improved. To do so, we consider the calibration images and apply an optimization routine for establishing a 3 x 3 matrix to apply to the sensors, prior to forming the invariant, with an eye to improving lighting invariance. We find that an optimization does indeed improve the invariant. The resulting image generally has a smaller entropy value because the invariant value is smoothed out across former shadow boundaries; thus the new invariant more smoothly captures the underlying intrinsic reflectance properties in the scene.
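The core projection described above can be sketched as follows: 2-D log-chromaticity coordinates are collapsed onto the direction orthogonal to the illuminant-variation direction. The angle theta below is a hypothetical placeholder for the camera-specific calibration, and the paper's optimized 3 x 3 sensor matrixing is not shown.

```python
import math

# Sketch of the 1-D illumination-invariant computation: project log-log
# chromaticity onto the direction orthogonal to the lighting direction.

def log_chromaticity(r, g, b):
    """G-normalised log-log chromaticity coordinates."""
    return math.log(r / g), math.log(b / g)

def invariant(x, y, theta):
    """Component of (x, y) orthogonal to the lighting direction theta."""
    return -x * math.sin(theta) + y * math.cos(theta)

theta = 0.7                     # hypothetical calibration angle
x, y = log_chromaticity(0.4, 0.5, 0.3)
# Under the Planckian model, a lighting change shifts log-chromaticity
# along (cos theta, sin theta); the projection cancels that shift:
t = 1.3
inv_before = invariant(x, y, theta)
inv_after = invariant(x + t * math.cos(theta), y + t * math.sin(theta), theta)
```

Because both a brightness scale and a Planckian temperature change only move a pixel along the calibrated direction, the projected value depends (approximately) on surface reflectance alone, which is why shadowed and lit versions of a surface map to the same invariant value.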



Proceedings Article
01 Jan 2002
TL;DR: A colorimetrically based metric is described that takes into account some aspects of the visual system and that, using information about the statistics of colour differences, about the original images and about changes to spatial characteristics, is able to give a close prediction of observer responses.
Abstract: This paper first presents a summary of a psychophysical experiment in which observers made judgements about the types of differences they perceived between originals and reproductions in a cross-media colour image reproduction. The results of the observer-reported visual data from that experiment are then compared with analogous metrics extracted from colorimetric data of the corresponding originals and reproductions. While there is good agreement in terms of the most general findings, more detailed results show significant differences between visual and colorimetrically based data. The paper then proceeds to describe a colorimetrically based metric that takes into account some aspects of the visual system and that, using information about the statistics of colour differences, about the original images and about changes to spatial characteristics, is able to give a close prediction of observer responses. The final metric is proposed for further testing as a means of predicting observer responses to image difference in colour reproduction as well as in other applications.


Proceedings Article
01 Jan 2002
TL;DR: The first three principal components explain about 99.8% of the cumulative variance of the spectral reflectances for each race and each face part, and for all races combined.
Abstract: Spectral reflectances of various parts of human faces from various ethnic races were measured as part of experiments on spectral imaging for human portraits. Principal components analysis (PCA) was applied to the spectral reflectances from the various races and a variety of face parts. The first three principal components explain about 99.8% of the cumulative variance of the spectral reflectances for each race and each face part, and for all races combined. Color differences of spectral reconstructions, based on different sets of principal components, were estimated for individual races, for all races, and for individual face parts. The results indicate that, when using three basis functions and under D50 illumination, the basis functions based only on spectra of Pacific-Asian subjects will provide the best overall color reproduction. However, from a spectral matching point of view, three basis functions based on all spectra will provide the best spectral reproduction with the minimum overall mean value of metameric indices. More analyses were applied to spectral reflectances of human facial skin from different sources and their corresponding spectral reconstructions based on different sets of principal components. These results provide practical suggestions for imaging, or spectral imaging, system design, especially imaging systems for human portraiture.
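The reconstruction step behind such comparisons can be sketched as a projection onto a small orthonormal basis: the approximated spectrum is the mean plus the weighted sum of the leading principal components. The 4-sample "spectra" and basis below are toy values for illustration, not measured skin reflectances or actual PCA output.

```python
# Sketch of spectral reconstruction from a few principal components:
# r_hat = mean + sum_i ((r - mean) . b_i) * b_i,
# assuming the basis vectors b_i are orthonormal.

def reconstruct(r, mean, basis):
    centered = [ri - mi for ri, mi in zip(r, mean)]
    r_hat = list(mean)
    for b in basis:
        w = sum(c * bi for c, bi in zip(centered, b))  # projection weight
        r_hat = [h + w * bi for h, bi in zip(r_hat, b)]
    return r_hat

mean = [0.0, 0.0, 0.0, 0.0]
basis = [[0.5, 0.5, 0.5, 0.5],        # orthonormal toy basis
         [0.5, 0.5, -0.5, -0.5]]
approx = reconstruct([1.0, 2.0, 3.0, 4.0], mean, basis)
```

The choice the paper studies is exactly which training set (one race, all races, one face part) supplies the mean and basis vectors, since that determines both the colorimetric and the spectral reconstruction error.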

Proceedings Article
01 Nov 2002
TL;DR: An optimization technique is presented for finding hue-constant RGB sensors; sensors found through this optimization might be applicable in color engineering applications such as finding RGB sensors for color image encodings.
Abstract: We present an optimization technique to find hue-constant RGB sensors. The hue representation is based on a log-RGB opponent color space that is invariant to brightness and gamma. While the opponent space was not derived by modeling the visual response, the hue definition is similar to the ones found in CIELAB and IPT. Hue-constant RGB sensors found through this optimization might be applicable in color engineering applications such as finding RGB sensors for color image encodings.
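The kind of log-RGB opponent hue described above can be sketched as below. The particular opponent axes are an assumption for illustration (the paper does not specify them here), but they demonstrate the claimed invariance: a global brightness scale cancels in the opponent differences, and a gamma scales both axes equally, leaving the angle unchanged.

```python
import math

# Sketch of a hue angle in a log-RGB opponent space, invariant to a
# global brightness scale and to gamma. The axes o1 and o2 are
# illustrative assumptions, not the paper's exact definition.

def log_opponent_hue(r, g, b):
    lr, lg, lb = math.log(r), math.log(g), math.log(b)
    o1 = lr - lg                # red-green-like axis (scale cancels)
    o2 = lr + lg - 2.0 * lb     # yellow-blue-like axis (scale cancels)
    return math.atan2(o2, o1)   # any positive scaling of (o1, o2) cancels

h = log_opponent_hue(0.6, 0.3, 0.2)
s, gamma = 2.5, 2.2             # brightness scale and display gamma
h_transformed = log_opponent_hue((0.6 * s) ** gamma,
                                 (0.3 * s) ** gamma,
                                 (0.2 * s) ** gamma)
```

Since log((s*r)**gamma) = gamma*(log s + log r), the scale term log s cancels in both opponent axes (coefficients sum to zero) and gamma multiplies both axes, so atan2 returns the same angle.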


Proceedings Article
01 Jan 2002
TL;DR: In this paper, the modified s-CIELAB method was applied to investigate color errors produced by the RGB-stripe pixel structure, and the results showed that the pixel structure can degrade the image quality even for a viewing distance where the pixel structures is not visible because the color difference caused by the structure has a low-frequency component and the human vision can detect it.
Abstract: Most direct-view color displays have a sub-pixel structure similar to the RGB-stripe structure. The video data for R and B are produced on the condition that they are reproduced at the same position as G but they are displayed on the screen with 1/3 of a pixel separation from the corresponding G pixel. This pixel structure thus entails a convergence error of 1/3 of a pixel. The modified s-CIELAB method was applied to investigate color errors produced by the RGB-stripe pixel structure. The results show that the pixel structure can degrade the image quality even for a viewing distance where the pixel structure is not visible because the color difference caused by the structure has a low-frequency component and the human vision can detect it. The same method was applied to the convergence error and it was shown that the necessary convergence accuracy is around 1/4 of a pixel.

Proceedings Article
01 Jan 2002
TL;DR: The data show that the contrast-sensitivity functions for all chromatic directions are consistently low-pass irrespective of the average colour of the stimulus, and an interesting asymmetry is found in that sensitivity to yellow-blue contrast is reduced for blue gratings relative to yellow gratings.
Abstract: Measurements of contrast sensitivity are well established for modulations in the luminance, red-green and yellow-blue color directions in color space. Relatively less work has been carried out, however, to measure contrast sensitivity in other color directions, and the effect of the average color of the stimulus has been largely ignored. In this study we have measured conventional contrast-sensitivity functions for iso-luminant red-green and yellow-blue gratings and for gratings in two other color directions that we have nominally called lime-purple and cyan-orange (selected so that they bisect the red-green and yellow-blue directions in Boynton-MacLeod cone space). The measurements were repeated for modulations of cone contrast on chromatic fields (for example, we measured sensitivity to modulations in yellow-blue for gratings whose mean colors were either yellow, neutral or blue). Our data show that the contrast-sensitivity functions for all chromatic directions are consistently low-pass irrespective of the average colour of the stimulus. However, we find an interesting asymmetry in that sensitivity to yellow-blue contrast is reduced for blue gratings relative to yellow gratings. This asymmetry is not observed for the red-green colour direction, and asymmetries in the lime-cyan and purple-orange colour directions are consistent with an effect of S-cone adaptation. If sensitivity to chromatic contrast depends upon the mean color of the image, then sophisticated models of the contrast-sensitivity function (including, for example, parameters describing the local mean color of the stimulus or image) may be required.

Proceedings Article
01 Jan 2002
TL;DR: An electrophotographic simulation model is presented which estimates the microscopic structure of any printed toner layer from its input halftone bitmap, together with an extension to the Kubelka-Munk (KM) model which makes it possible to compute the halftone reflectance spectra from the estimated transmittance spectra.
Abstract: We present a prediction model for digital printers and more specifically for electrophotographic devices. On the one hand, we propose an electrophotographic simulation model which estimates the microscopic structure of any printed toner layer based on its input halftone bitmap. Applying the Bouguer-Beer-Lambert law, the obtained spatial toner arrangement yields the spectral transmittance distribution for non-light-scattering colors. On the other hand, we introduce an extension to the Kubelka-Munk (KM) model, which makes it possible to compute the halftone reflectance spectra from the estimated transmittance spectra. The extended KM model bridges the gap between the mathematical description of the optical point spread function of common office papers and the experimental results of simple reflectance measurements. With the combination of the models, we are capable of predicting the reflectance spectra of a printed monochrome wedge with a mean estimation error of less than CIELAB ΔE* = 1.
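The Bouguer-Beer-Lambert step can be sketched for a single non-scattering toner layer: transmittance decays exponentially with the product of absorptivity and layer thickness, so stacked layers multiply (optical depths add). The numeric values below are illustrative, not measured toner data.

```python
import math

# Sketch of the Bouguer-Beer-Lambert law for a non-scattering layer:
# T = exp(-absorptivity * thickness), per wavelength in practice.

def transmittance(absorptivity, thickness):
    return math.exp(-absorptivity * thickness)

# Stacked layers multiply, i.e. their optical depths add:
a = 2.0                                        # illustrative absorptivity
t_stacked = transmittance(a, 0.3) * transmittance(a, 0.5)
t_single = transmittance(a, 0.8)               # same total thickness
```

In the paper's pipeline this per-wavelength transmittance, computed from the simulated toner arrangement, is then fed into the extended Kubelka-Munk model to obtain halftone reflectance spectra.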