
Showing papers presented at "Color Imaging Conference in 2005"


Proceedings Article
01 Jan 2005
TL;DR: This paper demonstrates a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings, and extends the color range to encompass the visible gamut, enabling a new generation of display devices that are just beginning to enter the market.
Abstract: The transition from traditional 24-bit RGB to high dynamic range (HDR) images is hindered by excessively large file formats with no backwards compatibility. In this paper, we demonstrate a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings. A tone-mapped version of each HDR original is accompanied by restorative information carried in a subband of a standard output-referred image. This subband contains a compressed ratio image, which, when multiplied by the tone-mapped foreground, recovers the HDR original. The tone-mapped image data is also compressed, and the composite is delivered in a standard JPEG wrapper. To naive software, the image looks like any other, and displays as a tone-mapped version of the original. To HDR-enabled software, the foreground image is merely a tone-mapping suggestion, as the original pixel data are available by decoding the information in the subband. Our method further extends the color range to encompass the visible gamut, enabling a new generation of display devices that are just beginning to enter the market.
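The ratio-image idea above can be sketched in a few lines: divide the HDR original by its tone-mapped version, log-encode and quantize the ratio to 8 bits as a stand-in for the subband, then multiply it back in to recover the original. This is a minimal numerical sketch, not the paper's actual JPEG subband codec; the tone-mapping operator and quantization scheme here are illustrative assumptions.

```python
import numpy as np

def encode_hdr(hdr, tone_mapped, eps=1e-6):
    """Split an HDR image into a tone-mapped foreground plus an 8-bit
    log-domain ratio image (a stand-in for the paper's subband)."""
    ratio = np.log2((hdr + eps) / (tone_mapped + eps))
    lo, hi = ratio.min(), ratio.max()
    q = np.round((ratio - lo) / (hi - lo) * 255).astype(np.uint8)
    return tone_mapped, q, (lo, hi)

def decode_hdr(tone_mapped, q, bounds, eps=1e-6):
    """Recover the HDR original by multiplying the foreground by the
    dequantized ratio image."""
    lo, hi = bounds
    ratio = q.astype(float) / 255 * (hi - lo) + lo
    return (tone_mapped + eps) * np.exp2(ratio) - eps

hdr = np.random.rand(8, 8) * 1e4     # synthetic HDR luminance
tm = hdr / (1 + hdr)                 # simple global tone-mapping (assumption)
fg, sub, b = encode_hdr(hdr, tm)
rec = decode_hdr(fg, sub, b)
```

With 8 bits spanning roughly 13 stops of log ratio, the round trip stays within a couple of percent of the original on this synthetic data.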

168 citations


Proceedings ArticleDOI
17 Jan 2005
TL;DR: Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model.
Abstract: Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on the solid ink over which the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink.
Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. In the case of offset prints, the mean difference between predictions and measurements expressed in CIELAB CIE94 ΔE94 values is reduced at 100 lpi from 1.54 to 0.90 (accuracy improvement factor: 1.7) and at 150 lpi from 1.87 to 1.00 (accuracy improvement factor: 1.8). Similar improvements have been observed for a thermal transfer printer at 600 dpi, at lineatures of 50 and 75 lpi. In the case of an ink-jet printer at 600 dpi, the mean ΔE94 value is reduced at 75 lpi from 3.03 to 0.90 (accuracy improvement factor: 3.4) and at 100 lpi from 3.08 to 0.91 (accuracy improvement factor: 3.4).
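The Yule-Nielsen modified spectral Neugebauer prediction itself is compact. The sketch below shows a two-ink version with Demichel weights; the primary reflectances, coverages, and Yule-Nielsen factor n are illustrative placeholders. The paper's contribution, superposition-dependent effective coverages, would replace the nominal coverages fed in here.

```python
import numpy as np

def demichel_weights(c1, c2):
    """Demichel area coverages of the four Neugebauer primaries
    (paper, ink1, ink2, ink1+ink2) for two inks with coverages c1, c2."""
    return np.array([(1 - c1) * (1 - c2), c1 * (1 - c2),
                     (1 - c1) * c2, c1 * c2])

def ynsn_reflectance(coverages, primaries, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer prediction:
    R(lambda) = (sum_i w_i * R_i(lambda)^(1/n))^n."""
    w = demichel_weights(*coverages)
    return (w @ primaries ** (1.0 / n)) ** n

# illustrative primary reflectances at two wavelengths
# (rows: paper, ink1 solid, ink2 solid, ink1+ink2 solid)
primaries = np.array([[0.90, 0.85],
                      [0.30, 0.60],
                      [0.60, 0.20],
                      [0.20, 0.15]])
r = ynsn_reflectance((0.4, 0.7), primaries)
```

With zero coverage the prediction collapses to the paper reflectance, and with full coverage of both inks it collapses to the two-ink solid, which is a quick sanity check on the weights.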

81 citations


Proceedings ArticleDOI
17 Jan 2005
TL;DR: This work constructs a global color mapping function in order to transform the colors of a source image to match a target color distribution to any desired degree of accuracy and shows how the method can be applied to color histogram equalization as well as color transfer from an example image or a color palette.
Abstract: Histogram warping is a novel histogram specification technique for use in color image processing. As a general purpose tool for color correction, our technique constructs a global color mapping function in order to transform the colors of a source image to match a target color distribution to any desired degree of accuracy. To reduce the risk of color distortion, the transformation takes place in an image dependent color space, featuring perceptually uniform color axes with statistically independent chromatic components. Eliminating the coherence between the color axes enables the transformation to operate independently on each color axis. Deforming the source color distribution to reproduce the dominant color features of the target distribution, the histogram warping process is controlled by designating the color shifts and contrast adjustments for a set of key colors. Assisted by mode detection, matching quantiles establish the correspondence between the color distributions. Interpolation by monotonic splines serves to extend the mapping over the entire dynamic range without introducing artificial discontinuities into the resulting color density. We show how our method can be applied to color histogram equalization as well as color transfer from an example image or a color palette.
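The quantile-matching core of the method can be sketched on a single channel as follows. Piecewise-linear np.interp stands in for the paper's monotonic splines, and operating on one channel in isolation is a simplification of the image-dependent, decorrelated color space the paper constructs.

```python
import numpy as np

def quantile_mapping(source, target, n_quantiles=17):
    """Build a monotone mapping that sends source quantiles to target
    quantiles (a piecewise-linear stand-in for monotonic splines)."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    src_q = np.quantile(source, qs)
    tgt_q = np.quantile(target, qs)
    return lambda x: np.interp(x, src_q, tgt_q)

rng = np.random.default_rng(0)
src = rng.normal(0.3, 0.05, 10000).clip(0, 1)   # source channel values
tgt = rng.normal(0.6, 0.10, 10000).clip(0, 1)   # target distribution
warp = quantile_mapping(src, tgt)
out = warp(src)
```

Because the interpolant is monotone, no artificial discontinuities are introduced into the resulting density, which is the property the paper's spline construction is designed to preserve.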

50 citations


Proceedings ArticleDOI
17 Jan 2005
TL;DR: The results suggest that the polynomial model for reflectance recovery is superior in terms of accuracy to a standard linear transform and its generalisation performance is equivalent provided that some regularisation is employed.
Abstract: If digital cameras and scanners are to be used for colour measurement it is necessary to correct their device responses to device-independent colour co-ordinates, such as CIE tristimulus values. In order to do this it is sufficient to recover the underlying spectral reflectance functions from a scene at each pixel. Traditionally, linear methods are used to transform device responses to reflectance values. Recently, however, several non-linear methods have been applied to this problem, including generic methods such as neural networks, more novel approaches such as sub-manifold approximation and approaches based upon quadratic programming. In this paper we apply polynomial models to the recovery of reflectance. We perform a number of simulations with both tri-chromatic and multispectral imaging systems to determine their accuracy and generalisation performance. We find that, although higher order polynomials seem to be superior to linear methods in terms of accuracy, the generalisation performance for the two methods is approximately equivalent. This suggests that the advantage of polynomial models may only be seen when the training and test data are statistically similar. Furthermore, the experiments with multispectral systems suggest that the improvement using high order polynomials on training data is reduced when the number of sensors is increased.
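A polynomial regression from device responses to reflectance can be sketched as below on synthetic data; the sensor sensitivities and training set are random stand-ins. The test exhibits only the nested-model property that adding higher-order terms cannot worsen training error; the paper's caveat is precisely that this gain may not transfer to statistically dissimilar test data.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(responses, order=2):
    """Polynomial expansion of device responses: bias, linear terms,
    and all cross/square terms up to the given order."""
    n, k = responses.shape
    cols = [np.ones(n)]
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(k), d):
            cols.append(responses[:, list(idx)].prod(axis=1))
    return np.column_stack(cols)

# synthetic training data: 3-channel camera responses -> 31-band reflectances
rng = np.random.default_rng(0)
R = rng.random((200, 31))          # "true" reflectances
S = rng.random((31, 3))            # hypothetical sensor sensitivities
X = R @ S                          # noise-free camera responses

lin = poly_features(X, order=1)
quad = poly_features(X, order=2)
M_lin, *_ = np.linalg.lstsq(lin, R, rcond=None)
M_quad, *_ = np.linalg.lstsq(quad, R, rcond=None)
rmse_lin = np.sqrt(np.mean((lin @ M_lin - R) ** 2))
rmse_quad = np.sqrt(np.mean((quad @ M_quad - R) ** 2))
```

For three channels the second-order expansion has 10 features (1 bias + 3 linear + 6 quadratic), and since the linear features are a subset, the quadratic training RMSE can never exceed the linear one.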

47 citations


Proceedings Article
01 Jan 2005
TL;DR: An experimental investigation of the application of multispectral imaging in dermatology demonstrates the effectiveness of using spectral information for the color reproduction and quantitative analysis of skin disorders.
Abstract: This paper presents an experimental investigation of the application of multispectral imaging in dermatology. The focus areas of this work are as follows: a) improving the color reproduction accuracy of skin lesions, b) exploring the spectral features of skin diseases using the multispectral color enhancement technique, and c) multispectral image analysis aimed at supporting quantitative diagnosis. The experiment focused on inflammatory and immunologic diseases; the color of skin lesions associated with these diseases is believed to be difficult to reproduce with conventional imaging devices. In view of this fact, we demonstrate the effectiveness of using spectral information for the color reproduction and quantitative analysis of skin disorders.

42 citations



Proceedings ArticleDOI
17 Jan 2005
TL;DR: This work determines the relative importance of hue and intensity based on the saturation of an image pixel, motivated by the excitation of rod and cone cells in the retina, and effectively applies this method to the generation of a color histogram used for content-based image retrieval applications.
Abstract: In human retina, rod cells contribute to scotopic or dim-light vision and cone cells to photopic or bright-light vision. Excitation of the cone cells leads to the perception of color while that of rod cells helps in perception of various shades of gray. At low levels of illumination, only the rod cells are excited and gray shades are perceived. As the illumination level increases, more and more cone cells are excited and actual colors are perceived. The HSV model separates out the luminance component of a pixel color from its chrominance components, which is similar to the human perception of color. Hue represents pure colors, which are perceived by the excitation of cone cells. Saturation gives a measure of the degree by which a pure color is diluted by white light. Using the results of these analyses, we determine relative importance of hue and intensity based on the saturation of an image pixel. We effectively apply this method to the generation of a color histogram where each pixel contributes to gray color and true color. Using a weight function, percentage of true and gray color contributions of a pixel are measured. This information is used for content-based image retrieval applications.
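The weighted split of each pixel between a "true color" (hue) histogram and a "gray" (intensity) histogram might look like the sketch below. Using the saturation value directly as the weight is an assumption for illustration; the paper derives its own weight function from the rod/cone analysis.

```python
import numpy as np

def split_histogram(hue, sat, val, n_hue=16, n_gray=8):
    """Accumulate each pixel into a true-color hue histogram and a gray
    intensity histogram, weighted by saturation (w = sat is an
    illustrative assumption, not the paper's weight function)."""
    hue_hist = np.zeros(n_hue)
    gray_hist = np.zeros(n_gray)
    hbin = np.minimum((hue * n_hue).astype(int), n_hue - 1)
    gbin = np.minimum((val * n_gray).astype(int), n_gray - 1)
    np.add.at(hue_hist, hbin, sat)          # true-color contribution
    np.add.at(gray_hist, gbin, 1.0 - sat)   # gray contribution
    return hue_hist, gray_hist

rng = np.random.default_rng(1)
h, s, v = rng.random((3, 1000))   # synthetic HSV pixels in [0, 1)
ch, gh = split_histogram(h, s, v)
```

Each pixel contributes a total mass of one, split between the two histograms, so a fully saturated pixel counts entirely as true color and an unsaturated one entirely as gray.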

39 citations


Proceedings ArticleDOI
17 Jan 2005
TL;DR: A 16-band camera system designed to produce spectral images of ancient paintings is described, and spectral reflectance data were used to analyze a degraded area on an ancient painting.
Abstract: To preserve museum collections of works of art, these collections are often photographed for display in digital museums. However, conventional photography cannot capture the spectral characteristics of objects. In this paper, we describe a 16-band camera system designed to produce spectral images of ancient paintings. Results of color reproduction of captured images and results of spectral analysis of images of ancient paintings are also presented. The camera consists of a 2000×2000-pixel CCD, a rotational filter turret with 16 interference filters, and a PC-based image capturing and displaying unit. The camera's lens is interchangeable, enabling two or more different view sizes. Each band image of the camera can be focused independently, which reduces longitudinal chromatic aberration. A stroboscope is used for lighting, and the rotational filter turret and electrical shutter of the CCD are synchronized with it. An electric motor-driven photographic platform is used to enable photographing large objects in several shots. We evaluated the results of color estimation for an image taken by this camera using the GretagMacbeth ColorChecker 24-color chart. The average ΔEab was 2.09 (maximum ΔEab was 4.03). Spectral reflectance data were used to analyze a degraded area on an ancient painting.
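The ΔEab figures quoted above are CIE 1976 color differences, i.e. Euclidean distances in CIELAB, which can be computed as:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference ΔE*ab: Euclidean distance between
    two (L*, a*, b*) triples."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

# illustrative pair of Lab values (not data from the paper)
de = delta_e_ab([52.1, 10.0, -3.0], [50.0, 12.0, -1.0])
```

A ΔE*ab around 2, as the paper reports on average, is commonly taken as near the threshold of a just-noticeable difference for side-by-side viewing.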

37 citations


Proceedings Article
01 Jan 2005

36 citations


Proceedings ArticleDOI
17 Jan 2005
TL;DR: A new automatic color thresholding method based on wavelet denoising and color clustering with K-means is described for segmenting text information in camera-based images; discriminating between kinds of backgrounds is shown to give better results in terms of Precision and Recall.
Abstract: This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. A particular focus is given on stroke analysis to improve character segmentation, the step which follows color thresholding. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall. This separation of backgrounds is done with supervised classification. After tests with several classifiers (linear and quadratic discriminant analysis, K-nearest neighbors, neural networks and support vector machines), best results are given with a set of features based on properties of the gray-level histogram and by using a support vector machine.
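The clustering step can be illustrated with a minimal K-means on color vectors. The deterministic initialization and the synthetic two-cluster data (dark "text" against a light "background") are simplifications, and the wavelet denoising that precedes clustering in the paper is omitted.

```python
import numpy as np

def kmeans(pixels, k=2, iters=20):
    """Minimal Lloyd's K-means on color vectors. Initialization from
    evenly spaced data points is a simplistic choice for illustration."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    for _ in range(iters):
        # distances of every pixel to every center, then nearest-center labels
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# two well-separated synthetic clusters: dark text vs. light background
rng = np.random.default_rng(2)
dark = rng.normal(0.1, 0.02, (100, 3))
light = rng.normal(0.9, 0.02, (100, 3))
labels, centers = kmeans(np.vstack([dark, light]))
```

With k=2 this amounts to an automatic color threshold: the label map is the binarization that the stroke analysis then refines.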

34 citations


Proceedings Article
01 Jan 2005
TL;DR: This talk will focus on recent advances in Foveon's X3 sensor technology and new image processing concepts that have helped point the technology into the evolving digital camera market.
Abstract: Increasing competition and asymptotic image-quality improvements in point-and-shoot digital cameras have led to two late-blooming but profitable markets: the affordable digital SLR and inexpensive camera modules. In this talk I will describe this new digital camera landscape and how the color imaging conference has helped this industry evolve. The talk will focus on recent advances in Foveon's X3 sensor technology and new image processing concepts that have helped point the technology into the evolving digital camera market. I will discuss simulation work that we undertake at Foveon to help with the design of sensors and image processing algorithms, including the theory of how silicon can be used for color separation, sensor noise modeling, and a hyperspectral imaging model. Lastly, the advantages and importance of color resolution will be addressed.
Introduction
As the camera market has changed, Foveon has been working hard to follow the trend to higher quality and lower cost sensors. Through a rigorous program with our partners and in the design of our sensors, we have achieved significant improvements aimed at both high-end large-area camera sensors and lower-end markets.
Modeling of X3 silicon color separation
The mechanism of color separation used in the Foveon X3 sensors relies on the absorption of photons at different wavelengths and at different depths. The higher-energy photons, those at the blue end of the spectrum, are absorbed at the surface, whereas the lower-energy photons penetrate deeper into the silicon substrate before they are absorbed. The wavelength-dependent absorption coefficient of silicon, and the corresponding mean penetration depth, are plotted in Figure 1. In the Foveon X3 sensor, regions within the depth of the silicon are formed by transitions between different doping gradients and are used to separate the electron-hole pairs that are formed at different depths by this naturally occurring property of silicon.
The depths of these transitions are the key variables that determine the spectral sensitivities of such a device. In the talk I will show a demonstration of a Matlab model that computes quantum efficiency or spectral sensitivity curves of multiple layers at a chosen set of thicknesses. Figure 2 shows the theoretical spectral sensitivities computed by the model alongside actual sensitivity data from an early X3 sensor measured using a monochromator. In addition to the color separation model, we have been working on models of all of the assorted noise sources that contribute to the overall noise in the image capture process. From a long list of quantities that includes variables such as photodiode and sense-node capacitance, well depth, pixel size, fill factor, and read-out time, we can generate estimates of the noise levels that we expect to correspond with the quantum efficiency curves generated from the color separation model. We use these color and noise models, alongside a model of ISO speed and a model of metamerism index, to compare the tradeoffs between sensor design parameters and these two figures of merit.
Figure 1. Absorption coefficient and penetration depth in silicon vs. wavelength.
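The depth-dependent color separation described above follows from the Beer-Lambert law: the fraction of photons absorbed between two depths is a difference of two exponentials in the absorption coefficient. The coefficients and layer depths below are illustrative toy values, not measured silicon data.

```python
import numpy as np

def layer_absorption(alpha, d_top, d_bottom):
    """Fraction of incident photons absorbed between depths d_top and
    d_bottom for absorption coefficient alpha (Beer-Lambert law)."""
    return np.exp(-alpha * d_top) - np.exp(-alpha * d_bottom)

# toy absorption coefficients (1/um) for three wavelengths:
# blue absorbs shallow, red penetrates deep (illustrative values only)
alpha = np.array([2.0, 0.5, 0.1])           # ~blue, ~green, ~red

top = layer_absorption(alpha, 0.0, 0.6)     # shallow "blue" layer
mid = layer_absorption(alpha, 0.6, 2.0)     # middle "green" layer
deep = layer_absorption(alpha, 2.0, 10.0)   # deep "red" layer
```

Each layer's three absorption fractions form one row of a crude spectral-sensitivity matrix: the shallow layer responds most to the strongly absorbed (blue) light and the deep layer to the weakly absorbed (red) light, which is the separation mechanism the talk's Matlab model computes in full.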

Proceedings ArticleDOI
17 Jan 2005
TL;DR: In this paper, the mean, standard deviation and histogram distribution of a set of natural scene images are used as the target color properties for each color scheme, and the final grayscale image segments are obtained by using clustering and merging techniques.
Abstract: A natural color mapping method has been previously proposed that matches the statistical properties (mean and standard deviation) of night-vision (NV) imagery to those of a daylight color image (manually selected as the "target" color distribution). Thus the rendered NV image appears to resemble the target image in terms of colors. However, in this prior method the colored NV image may appear unnatural if the target image's "global" color statistics are too different from that of the night vision scene (e.g., it would appear to have too much green if much more foliage was contained in the target image). Consequently, a new "local coloring" method is presented in the current paper, and functions to render the NV image segment-by-segment by using a histogram matching technique. Specifically, a false-color image (source image) is formed by assigning multi-band NV images to three RGB (red, green and blue) channels. A nonlinear diffusion filter is then applied to the false-colored image to reduce the number of colors. The final grayscale image segments are obtained by using clustering and merging techniques. The statistical matching procedure is merged with the histogram matching procedure to assure that the source image more closely resembles the target image with respect to color. Instead of using a single target color image, the mean, standard deviation and histogram distribution of a set of natural scene images are used as the target color properties for each color scheme. Corresponding to the source region segments, the target color schemes are grouped by their scene contents (or colors) such as green plants, roads, ground/earth. In our experiments, five pairs of night-vision images were initially analyzed, and the images that were colored (segment-by-segment) by the proposed "local coloring" method are shown to be much more natural, realistic, and colorful when compared with those produced by the "global-coloring" method.
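The "global coloring" baseline that the paper improves on, matching each channel's mean and standard deviation to a target, can be sketched as below; the target statistics here are arbitrary illustrative numbers, and the paper's contribution is to apply such matching per segment with histogram matching rather than globally.

```python
import numpy as np

def match_statistics(source, target_mean, target_std):
    """Shift and scale each channel of the source image so its mean and
    standard deviation match the target's (the global-coloring baseline)."""
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1)) + 1e-8   # guard against flat channels
    return (source - src_mean) / src_std * target_std + target_mean

rng = np.random.default_rng(3)
false_color = rng.random((32, 32, 3))   # stand-in false-colored NV image
out = match_statistics(false_color,
                       np.array([0.4, 0.5, 0.3]),    # illustrative target means
                       np.array([0.1, 0.1, 0.05]))   # illustrative target stds
```

The failure mode the paper describes follows directly: a single global target mean drags every segment toward the same color statistics, which is why the segment-by-segment variant looks more natural.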

Proceedings ArticleDOI
17 Jan 2005
TL;DR: This article is an overview of color vision and color perception from an evolutionary and anthropological perspective, intended for an audience with no prior background in either of these fields of study.
Abstract: This article is an overview of color vision and color perception from an evolutionary and anthropological perspective. It is intended for an audience with no prior background in either of these fields of study. This is an effort to provide a general overview of some of the more recent significant works regarding color vision and perception, in an evolutionary framework, that is accessible to a general audience. Though it is intended to explain some of the general dynamics of a detailed and complex history, this cannot be considered an exhaustive overview, but a general description of some of the fundamental anthropological and evolutionary understandings of color vision and perception.

Proceedings ArticleDOI
17 Jan 2005
TL;DR: A new geometric method is proposed to compute color gamut boundaries quickly and with geometric accuracy for high-quality image-dependent gamut mapping in three dimensions.
Abstract: We propose a new geometric method to compute color gamut boundaries quickly and with geometric accuracy. The method is designed for high-quality image-dependent gamut mapping in three dimensions. For such a mapping, the gamut boundary must be constructed for every image individually, and we cannot rely on precomputed lookup tables. This can only be practical if the gamut boundary can be computed very fast. The proposed method is fast compared to other geometric methods, without sacrificing geometric accuracy of the computed boundary.

Proceedings ArticleDOI
17 Jan 2005
TL;DR: The spectral gamut mapping algorithm is applied to spectral data from the Macbeth Color Checker and test images, and initial results show that the amount of clipping increases with the number of dimensions used.
Abstract: A method is proposed for performing spectral gamut mapping, whereby spectral images can be altered to fit within an approximation of the spectral gamut of an output device. Principal component analysis (PCA) is performed on the spectral data, in order to reduce the dimensionality of the space in which the method is applied. The convex hull of the spectral device measurements in this space is computed, and the intersection between the gamut surface and a line from the center of the gamut towards the position of a given spectral reflectance curve is found. By moving the spectra that are outside the spectral gamut towards the center until the gamut is encountered, a spectral gamut mapping algorithm is defined. The spectral gamut is visualized by approximating the intersection of the gamut and a 2-dimensional plane. The resulting outline is shown along with the center of the gamut and the position of a spectral reflectance curve. The spectral gamut mapping algorithm is applied to spectral data from the Macbeth Color Checker and test images, and initial results show that the amount of clipping increases with the number of dimensions used.
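The map-toward-center step of the algorithm can be sketched as below. For brevity, an axis-aligned bounding box in PCA space stands in for the paper's convex hull of device measurements, so the intersection test is a simplifying assumption; the PCA projection itself follows the paper's dimensionality-reduction step.

```python
import numpy as np

def pca_basis(data, k=3):
    """Mean and first k principal components of spectral samples (rows)."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]

def clip_toward_center(point, center, lo, hi):
    """Move an out-of-gamut point along the line toward the gamut center
    until it lies inside. A bounding box [lo, hi] stands in for the
    paper's convex hull (a simplifying assumption)."""
    d = point - center
    t = 1.0
    for di, ci, l, h in zip(d, center, lo, hi):
        if di > 1e-12:
            t = min(t, (h - ci) / di)
        elif di < -1e-12:
            t = min(t, (l - ci) / di)
    return center + t * d

# project synthetic device spectra into a 3-D PCA space
rng = np.random.default_rng(4)
spectra = rng.random((50, 31))
mean, basis = pca_basis(spectra, k=3)
coords = (spectra - mean) @ basis.T
```

Points already inside the approximated gamut are left untouched (t stays at 1), which mirrors the paper's observation that only out-of-gamut spectra are clipped toward the center.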

Proceedings Article
01 Jan 2005
TL;DR: It is shown that the proposed approach can be successfully applied to images with texture by sowing a small number of color seeds; the color is estimated from the Euclidean distance and the luminance distance between each pixel to be colorized and the seeds.
Abstract: Colorization is a computerized process of adding color to a monochrome image. The authors have developed colorization algorithms which propagate colors from seeded color pixels. Since those algorithms are constructed on a region-growing approach, colorization fails where the luminance changes sharply, such as at edges and in texture. Although we developed, in previous work, a partitioning algorithm for preventing error propagation at edges, numerous color seeds were required for accurate colorization of an image with texture. This paper presents a new algorithm for colorizing texture by blending seeded colors. In our algorithm, the color is estimated from the Euclidean distance and the luminance distance between each pixel to be colorized and the seeds. It is shown that the proposed approach can be successfully applied to images with texture by sowing a small number of color seeds.
Introduction
Colorization is a computerized process that adds color to a black-and-white print, movie or TV program, supposedly invented by Wilson Markle. It was initially used in 1970 to add color to footage of the moon from the Apollo mission. The demand for adding color to monochrome images such as BW movies and BW photos has been increasing. For example, in the amusement field, many movies and video clips have been colorized by human labor, and many monochrome images have been distributed as vivid images. In other fields, such as archaeology dealing with historical monochrome data and security dealing with monochrome images from crime prevention cameras, we can easily imagine that colorization techniques are useful. A luminance value of a monochrome image can be calculated uniquely by a linear combination of RGB color components. However, recovering the RGB components from a luminance value is, conversely, an ill-posed problem, because several colors correspond to one luminance value.
Due to this ambiguity, human interaction usually plays a large role in the colorization process. The correspondence between a color and a luminance value is determined through common sense (green for grass, blue for the ocean) or by investigation. Even in the case of pseudo-coloring, where the mapping of luminance values to a set of RGB components is automatic, the choice of the color map is purely subjective. Although a few industrial software products exist, their algorithms are generally not available. However, operating those software products reveals that humans must meticulously hand-color each individual image subjectively. There also exist a few patents for colorization; however, those approaches depend on heavy human operation. Recently, simple colorization algorithms have been proposed by a few research groups. In 2002, one of the authors proposed a colorization algorithm in which a small number of color seeds are sown on a monochrome image and the remaining pixels are colorized by propagating the seeds' colors to adjacent pixels. The algorithm has been improved in Refs. 5-9. In the same year, Welsh et al. colorized a monochrome image by transferring color from a reference color image with stochastic matching. The concept of transferring color from one image to another was inspired by work in Ref. 11. In Welsh's method, a source image of the same kind as the monochrome image is prepared, and colorization is performed by color matching between the two pictures. Later, Levin et al. marked a monochrome image with some color scribbles, and adjacent pixels were colorized by formulating and solving an optimization problem. Those conventional algorithms are very simple and work well at an intuitive level, especially for images that can be segmented into a few large regions with the same chrominance components. However, it was difficult to perform accurate colorization for texture.
In order to obtain an accurate colorized result for a texture image, Horiuchi's algorithm requires many color seeds. In the case of Welsh's algorithm, a specific reference image is required, and Levin's algorithm requires many color scribbles. This study aims to develop a new colorization algorithm for monochrome images with texture. This paper is organized as follows: Section 2 presents our conventional algorithm and shows the problem for an image with texture. Section 3 presents the proposed colorization algorithm, and Section 4 demonstrates experiments. Finally, we conclude with a discussion in Section 5.
Conventional Colorization by Propagating Seeded Colors Independently
The most advanced colorization algorithm for still monochrome images in the authors' work is presented in Ref. 7. In this section, the conventional algorithm is explained briefly and its problem with texture is shown. Let I = (x, y) be a pixel in an input monochrome image and let S = {S_p = (x_p, y_p)}, p = 1, ..., P, be a set of color seeds, where P is the total number of seeds. The color seeds, which are strictly color pixels, are given manually as prior knowledge by a user: the positions of the seeds and their colors are determined by the user. Note that each color must be chosen so as to preserve the luminance of the original monochrome pixel. We present our method in CIELAB color space: each monochrome pixel I is transformed into the luminance signal L(I), and each color seed S_p is transformed into L(S_p), a(S_p), b(S_p). In Ref. 7, each pixel I is colorized as (L(I), a(f(I)), b(f(I))) in CIELAB color space, where the function f(·) selects the color seed with the minimum Euclidean distance:

    f(I) = argmin_p ||I − S_p||_2    (1)

where ||·||_2 denotes the Euclidean distance in the x-y image space.
Figure 1. A colorized result by the method in Ref. 7: (a) position of seven color seeds on the monochrome image; (b) colorized image.
Figure 1 shows an example of colorization using the algorithm in Ref. 7. Figure 1(a) shows an input monochrome image and the positions of the color seeds, expressed by red circles; each seed was sown at the center of its circle. In this example, seven seeds were sown on the monochrome image by the user. Figure 1(b) shows the colorized result, which is good. Figure 2 shows another example. The image consists of texture such as petals and trees. By sowing five color seeds as shown in Fig. 2(a), a failed colorization was obtained, as shown in Fig. 2(b). In order to obtain a more accurate result, the user has to sow color seeds for each small region in the texture. In actual applications, it is impossible to sow numerous seeds on each region. Reference 7 also proposed a partitioning algorithm to prevent error propagation at edges. However, it is difficult to determine a threshold for partitioning. Even if the user can set the partition, estimation failures will occur once the partition collapses. Moreover, the method produces visible artifacts of block distortion. In order to solve the problem, we propose a new colorization algorithm by blending seed colors in the next section.
Figure 2. A failure example of colorization by the method in Ref. 7: (a) position of five color seeds on the monochrome image; (b) colorized image.
Proposed Colorization by Blending Many Seed Colors
Decision of the Chrominance Components
In our algorithm, we use two properties of natural images. The first property is that pixels with similar luminance values should have similar colors; this property was used for solving the colorization problem in Levin's algorithm. The second property is that nearby pixels should have similar colors; this property was used in Horiuchi's algorithm. In the proposed method, we express these two properties by the distances NED and NLD as follows.
(NED: Normalized Euclidean distance) We define the first distance d_1(I, S_p) ∈ [0, 1] between I and S_p
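The blending idea this section builds toward, weighting every seed's chrominance by combined spatial and luminance proximity, might be sketched as follows. The reciprocal weight and its normalization are assumptions for illustration, not the paper's exact NED/NLD formulas.

```python
import numpy as np

def blend_colorize(lum, seeds):
    """Blend chrominance from all seeds, weighting each seed by spatial
    proximity (normalized Euclidean distance) and luminance proximity.
    seeds: list of (row, col, a*, b*) tuples."""
    h, w = lum.shape
    ys, xs = np.mgrid[0:h, 0:w]
    a = np.zeros_like(lum)
    b = np.zeros_like(lum)
    wsum = np.zeros_like(lum)
    diag = np.hypot(h, w)
    for (sy, sx, sa, sb) in seeds:
        ned = np.hypot(ys - sy, xs - sx) / diag   # spatial distance in [0, 1]
        nld = np.abs(lum - lum[sy, sx])           # luminance distance
        wt = 1.0 / (1e-6 + ned + nld)             # illustrative weight choice
        a += wt * sa
        b += wt * sb
        wsum += wt
    return a / wsum, b / wsum

# two flat luminance regions with one seed each (synthetic example)
lum = np.full((8, 16), 0.2)
lum[:, 8:] = 0.8
seeds = [(2, 2, 20.0, -10.0), (2, 13, -30.0, 5.0)]
a_ch, b_ch = blend_colorize(lum, seeds)
```

At a seed's own pixel both distances vanish, so its weight dominates and the blended chrominance reproduces the seed color, while pixels between seeds receive a smooth mixture rather than a hard nearest-seed assignment.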

Proceedings Article
01 Jan 2005
TL;DR: The simulation results show that incorporating pigment mapping into the matrix R method can recover the smoothness of the reflectance spectrum, and further improve spectral accuracy of spectral imaging.
Abstract: Spectral imaging has been widely developed over the last ten years for archiving cultural heritage. It can retrieve spectral reflectance of each scene pixel and provide the possibility to render images for any viewing condition. A new spectral reconstruction method, the matrix R method, can achieve high spectral and colorimetric accuracies simultaneously for a specific viewing condition. Although the matrix R method is very effective, the reconstructed reflectance spectrum is not smooth when compared with in situ spectrophotometry. The goal of this research was to smooth the spectrum and make it more accurate. One possible solution is to identify pigments and find their compositions for each pixel. After that, the reflectance spectrum can be modified based on two-constant Kubelka-Munk theory using the absorption and scattering coefficients of these pigments, weighted by their concentrations. The concentrations were optimized to best fit the spectral reflectance predicted by the matrix R method. As a preliminary experiment, it was assumed that a custom target was painted using several known pigments. The simulation results show that incorporating pigment mapping into the matrix R method can recover the smoothness of the reflectance spectrum, and further improve spectral accuracy of spectral imaging.
Introduction
Traditional colorimetric devices acquire only three samples, critically under-sampling spectral information and suffering from metamerism. Alternatively, spectral devices increase the number of samples and can reconstruct spectral information for each scene pixel. Retrieved spectral information can be used to render color images for any viewing condition. Spectral imaging has been widely developed over the last ten years for archiving cultural heritage at a number of institutes worldwide. Three spectral acquisition systems have been developed and tested in our laboratory.
Recently, the matrix R method was proposed and implemented for spectral imaging reconstruction. The method follows the Wyszecki hypothesis, whereby a spectrum can be decomposed into a fundamental stimulus and a metameric black. The spectral reflectance and tristimulus values were both calculated from multi-channel camera signals. The hybrid spectral reflectance was then generated by combining the fundamental stimulus and metameric black predicted from the tristimulus values and the spectral reflectance, respectively. This method achieved high spectral and colorimetric accuracies simultaneously for a certain viewing condition. The spectral accuracy of this method was mainly determined by the estimated spectral reflectance, which was calculated by multiplying the multi-channel camera signals with a transformation matrix. Each column of the transformation matrix can be estimated by a basis vector, and spectral reflectance can be represented as a linear combination of these basis vectors, weighted by the multi-channel camera signals. A transformation matrix for a six-channel virtual camera is shown in Figure 1. Due to the wavelike shape of the basis vectors, the predicted spectral reflectance for a white patch, for example, is not as flat as in situ spectrophotometry, as shown in Figure 2. The goal of this research was to smooth reflectance spectra and to further improve spectral accuracy.
Figure 1. The transformation matrix from six-channel camera signals to spectral reflectance factor.
Figure 2. Measured (solid) and predicted (dashed) spectral reflectance factors.
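The fundamental-stimulus/metameric-black decomposition underlying the matrix R method can be sketched with Cohen's projection matrix. This is an illustrative reading of the method, not the authors' exact implementation: A is assumed to be the illuminant-weighted colour-matching functions sampled at the same wavelengths as the reflectance estimates.

```python
import numpy as np

def cohen_matrix_R(A):
    """Cohen's Matrix R: the orthogonal projector onto the space
    spanned by the columns of A, the illuminant-weighted
    colour-matching functions sampled at n wavelengths (n x 3)."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

def hybrid_reflectance(r_colorimetric, r_camera, A):
    """Hybrid spectrum: fundamental stimulus taken from the
    colorimetrically accurate estimate, metameric black taken from
    the camera-based spectral estimate."""
    R = cohen_matrix_R(A)
    fundamental = R @ r_colorimetric                # fixes tristimulus values
    black = (np.eye(A.shape[0]) - R) @ r_camera     # visually invisible residual
    return fundamental + black
```

By construction the hybrid spectrum reproduces the tristimulus values of the colorimetric estimate exactly, while inheriting the metameric black (the spectral-shape residual) of the camera-based estimate.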

Proceedings ArticleDOI
17 Jan 2005
TL;DR: Examination of variation in average color for two-color halftoned images as a function of color-to-color misregistration distance shows that dot-on-dot/dot-off-dot color shifts were very high, while rotated dot screens exhibited very little color shift under the present idealized conditions.
Abstract: Color-to-color misregistration refers to misregistration between color separations in a printed or displayed image. Such misregistration in printed halftoned images can result in several image defects, a primary one being shifts in average color. The present paper examines the variation in average color for two-color halftoned images as a function of color-to-color misregistration distance. Dot-on-dot/dot-off-dot and rotated dot screen configurations are examined via simulation and supported by print measurements. The color and color shifts were calculated using a spectral Neugebauer model for the underlying simulations. As expected, dot-on-dot/dot-off-dot color shifts were very high, while rotated dot screens exhibited very little color shift under the present idealized conditions. The simulations also demonstrate that optical dot gain significantly reduces the color shifts seen in practice.
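A spectral Neugebauer prediction of the kind used in the simulations can be sketched as below. In-register halftones with statistically independent separations have the Demichel overlap areas; misregistration perturbs these effective areas, which is what drives the average-color shifts studied here. The function shape and the optional Yule-Nielsen exponent n for optical dot gain are illustrative assumptions.

```python
import numpy as np

def demichel_weights(a, b):
    """Demichel area coverages of the four Neugebauer primaries
    (paper, ink 1 only, ink 2 only, ink 1 + ink 2) for two
    statistically independent halftones with coverages a and b."""
    return np.array([(1 - a) * (1 - b), a * (1 - b), (1 - a) * b, a * b])

def neugebauer_reflectance(a, b, primaries, n=1.0):
    """Spectral Neugebauer prediction for a two-ink halftone.
    `primaries` is (4, n_wavelengths); n = 1 gives the plain model,
    n > 1 the Yule-Nielsen modification for optical dot gain."""
    w = demichel_weights(a, b)
    return (w @ np.asarray(primaries, dtype=float) ** (1.0 / n)) ** n
```

A misregistration simulation would replace `demichel_weights` with overlap areas measured from the shifted screen geometry.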


Proceedings Article
01 Jan 2005
TL;DR: In this paper, the authors present a new algorithm designed to have a good behavior concerning continuity and contrast conservation which also performs well in classical psychophysical tests, and they apply this new test for both, the quality check of existing as well as the development of new gamut mapping algorithms.
Abstract: The design of a gamut mapping algorithm (GMA) is always a compromise between preserving different competing aspects such as color, contrast and lightness. A natural requirement for a GMA is that the algorithmic treatment of this competition avoid any additional artefacts such as discontinuities or loss of contrast. In this paper several common gamut mapping algorithms are studied from this aspect, resulting in the observation that problems with geometric discontinuities are widespread. For the assessment of the phenomena induced by local mapping properties, an algorithmic test was developed and applied. This new test supports both the quality check of existing GMAs and the development of new ones. Finally, we present a first new algorithm designed to have good behavior concerning continuity and contrast conservation, which also performs well in classical psychophysical tests.
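The kind of continuity problem such an algorithmic test looks for can be probed numerically: map two nearby colors and compare the output separation to the input separation, since a ratio much larger than 1 flags a geometric discontinuity of the mapping. The sphere-shaped "gamut" and the radial clipping GMA below are toy assumptions for illustration only.

```python
import numpy as np

def clip_toward_gray(lab, center_L=50.0, gamut_radius=60.0):
    """Toy GMA: radially clip an (L, a, b) colour toward the point
    (center_L, 0, 0) until it lies inside a sphere-shaped 'gamut'.
    Purely illustrative; real gamuts are not spheres."""
    center = np.array([center_L, 0.0, 0.0])
    v = np.asarray(lab, dtype=float) - center
    d = np.linalg.norm(v)
    if d <= gamut_radius:
        return np.asarray(lab, dtype=float)
    return center + v * (gamut_radius / d)

def local_expansion(gma, lab, eps=1e-3):
    """Probe a GMA for discontinuities: ratio of output distance to
    input distance for a small perturbation.  Ratios >> 1 flag a
    geometric discontinuity of the mapping at this colour."""
    rng = np.random.default_rng(0)
    delta = rng.normal(size=3)
    delta *= eps / np.linalg.norm(delta)
    d_out = np.linalg.norm(gma(lab + delta) - gma(lab))
    return d_out / eps
```

Clipping toward a point inside a convex gamut is non-expansive (ratio at most 1); a mapping that switches strategy abruptly across a surface would show ratios far above 1 there.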

Proceedings Article
01 Jan 2005
TL;DR: Psychophysical measurements of noise adaptation in color image perception and mathematical prediction of the effect are described and the results illustrate the hypothesized pattern-dependent adaptation and its prediction through adaptation of a 2-D contrast sensitivity function in an image-appearance-model-based difference metric.
Abstract: Webster1 has proposed “that adaptation increases the salience of novel stimuli by partially discounting the ambient background.” This is an excellent, concise description of the purpose and function of chromatic adaptation in image reproduction applications. However, Webster was not limiting this proposal to just chromatic adaptation, but rather using it as a general description for all forms of perceptual adaptation. Demonstrations of adaptation to other properties of image displays such as motion, blur, and spatial frequency led the authors to ponder the question of whether observers might adapt to the noise structure in images to enhance the novel stimuli — the systematic image content. This paper describes psychophysical measurements of noise adaptation in color image perception and explores mathematical prediction of the effect. The results illustrate the hypothesized pattern-dependent adaptation and its prediction through adaptation of a 2-D contrast sensitivity function in an image-appearance-model-based difference metric.
Introduction
Spatial frequency adaptation has been recognized for over 30 years and used as evidence for the existence of spatial-frequency- and orientation-tuned mechanisms in the human visual system.2 Figure 1 is a typical demonstration of spatial frequency adaptation. After gazing at the bar on the left side of Fig. 1 for 15-30 s, the identical patterns on the right side appear to shift in spatial frequency in directions opposite the adapting stimuli. Webster and coworkers1,3,4 have expanded the exploration of spatial frequency adaptation to the study of adaptation to complex spatial stimuli such as image blur, face expression, and face recognition. Figure 2 recreates one of Webster’s demonstrations of blur adaptation. After gazing at the bar between the upper images for 15-30 s, the bottom two images, which are physically identical, will appear significantly different.
The image on the left will appear more blurred after adaptation to a sharp image while the image on the right will appear sharper after adaptation to a blurry image. This effect can also be seen in the form of simultaneous contrast, whereby an image will appear sharper if surrounded by blurry images. Webster’s observations led the authors to hypothesize that the human visual system might be capable of adapting to noise content in images, effectively enhancing the perception of image content while minimizing the perception of artifacts introduced by imaging systems. Quantitative knowledge of such adaptation effects is critical for the development of accurate image quality metrics.
Figure 1. Demonstration of spatial frequency adaptation.
Figure 2. Demonstration of adaptation to image blur.
A visual demonstration of noise adaptation in images is easily created as illustrated in Fig. 3. Adaptation to the images at the top will result in the lower-left image appearing noisier than the lower-right image despite being physically identical.
Figure 3. Demonstration of adaptation to image noise.
Webster and Mollon5 measured contrast adaptation in natural images, illustrating that the visual system does adapt to the range of color and lightness information in a scene. This adaptation could be considered similar to an automatic gamut mapping in the visual system. While these results suggest the possibility of adapting to the noise contrast in an image, they did not explicitly explore noise adaptation. Field and Brady6 describe an approach to perception based on the content of natural scenes that is easily extensible to the concept of adaptation to the noise in an image. Other researchers have explored related forms of adaptation, but not specifically image noise. Clifford and Weston7 studied adaptation to Glass patterns, essentially noise with some correlated structure. Anderson and Wilson8 described complex spatial frequency adaptation to identity elements in faces.
Artal et al.9 have shown that neural mechanisms, presumably long-term adaptation, are capable of compensating for optical aberrations in observers’ eyes. Finally, Durgin et al.10,11 have shown adaptation to natural and artificial texture. This, and related, work comes closest to measuring noise adaptation; however, texture adaptation is an examination of noise adaptation in the absence of other content. The current work aims to examine the perception of the remaining image content after noise adaptation.
Experimental
The experiment began with the hypothesis that adaptation to spatially-structured noise would decrease the sensitivity (raise the threshold) of observers to similar noise within an image. Furthermore, it was hypothesized that adapting noise of one structure (e.g. vertically oriented) would have little, or no, effect on the sensitivity to noise of a completely different structure (e.g. horizontally oriented). A simple psychophysical experiment was designed and implemented to test these hypotheses. Observers were presented with images intermittently placed on an adapting background. Three types of adapting backgrounds were used (see Fig. 4): 2-D random, horizontal, and vertical white noise with uniform luminance distribution. Additionally, a uniform gray adapting background was used. Each adapting background was used with contrast levels of 9.4, 18.9, 28.1, and 37.5 percent (Fig. 4). The adapting backgrounds filled the experimental display, a carefully-characterized 23” Apple Cinema HD Display viewed at 1 meter. The display (1920x1200 pixels) subtended 28x17 degrees of visual field with an addressability of 68 pixels/degree. The maximum display luminance was 320 cd/m2 with a white point approximating CIE Illuminant D65. The adapting backgrounds were achromatic.
Figure 4. Adapting backgrounds ranging from uniform (left) to 37.5% contrast (right) for random, horizontal, and vertical white noise.
Visual sensitivity to each of the three types (random, horizontal, vertical) of noise was measured using the method of adjustment. These measurements were completed using 5 different images (Fig. 5) upon which the noise was added. These images include 4 pictorial scenes and a uniform gray (equal to the adapting background mean luminance, approximately middle gray, and 128 digital counts on a Macintosh display). The images were each 512x512 pixels, or 7.5x7.5 degrees of viewing angle.
Figure 5. Five images used for measurement of sensitivity to added noise (random, horizontal, and vertical).
The test images were presented together with an original image having no added noise. The images were presented for 1 s followed by 4 s in which only the adapting background was present. This cycle repeated while the observers adjusted the noise contrast of the right image until the noise was just identifiable. Specifically, observers were asked to adjust the noise contrast until they could just discriminate which of the three types of noise was being added to the image. These contrast discrimination thresholds (called visible contrast in the plotted results) were obtained for each combination of image content, background noise type, background noise contrast, and image noise type. There was a total of 195 threshold settings for a full experimental session. Observers could complete a session in about 2 hours. Once observers set the image noise level to the criterion contrast, they pressed a button and a new trial began. Trials were completely randomized in all experimental variables. Figure 6 shows an example stimulus configuration with vertical noise in the adapting background and horizontal noise (clearly above the threshold setting) in the test image. Two observers, MF and GJ, performed the experiment five times each to collect precise data on two observers and assess intra-observer variability.
An additional 10 observers completed the experiment once to verify the effect and estimate inter-observer variability. All observers had normal, or corrected-to-normal, visual acuity and normal color vision. Data for two observers were discarded since the available range of noise was not sufficient for them in multiple trials. Thus, the reported inter-observer data are for a total of 10 observers.
Figure 6. Example stimulus with the reference image on the left, test image with horizontal noise on the right, and adapting background with vertical noise.
Results
Figure 7 shows the visibility of random noise (observers MF and GJ) as a function of adapting background contrast averaged over all images for each adapting condition. Example 95% error bars are presented on one curve, the magnitude of which would be similar for the other data sets. While the error bars appear large relative to the adaptation effect, most of the variability is due to image-dependent changes in the threshold. Only about 1/3 of the error is associated with random noise (see Fig. 10). The adaptation effect is statistically significant for each viewing situation. The results show that, for both observers, random noise in the adapting field elevates the threshold for random noise in the image and the effect increases with adapting contrast. Horizontal and vertical adapting noise also elevate the thresholds, but to a lesser extent, as would be expected since those adapting stimuli only depress one dimension of the 2-D contrast sensitivity function. Observer GJ generally shows higher thresholds (possibly a criterion effect in the method of adjustment) and larger adaptation effects.
Figure 7. Random noise visibility for all adapting conditions (visible contrast vs. adapting contrast for observers MF and GJ under random, horizontal, and vertical adaptation).
Figure 8 shows similar results for the visibility of horizontal and vertical image noise. The results are consistent with the thresholds for vertical noise elevated when adapting to vertical noise and vice versa. There is no effect of horizontal noise adaptation on the visibility of vertical
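Adapting stimuli of the kind described (random, horizontal, and vertical white noise at a given contrast) can be generated as follows. The construction is an assumption: the paper does not specify how its noise fields were built, and "horizontal" is taken here to mean horizontal stripes (one random value per row), with contrast expressed as peak amplitude relative to a mid-gray level of 0.5.

```python
import numpy as np

def oriented_noise(shape, kind, contrast, seed=None):
    """Zero-mean noise field of the three adapting types.
    'random' is 2-D uniform white noise; 'horizontal' means
    horizontal stripes (one value per row); 'vertical' means
    vertical stripes (one value per column)."""
    rng = np.random.default_rng(seed)
    h, w = shape
    if kind == "random":
        n = rng.uniform(-1.0, 1.0, size=(h, w))
    elif kind == "horizontal":
        n = np.tile(rng.uniform(-1.0, 1.0, size=(h, 1)), (1, w))
    elif kind == "vertical":
        n = np.tile(rng.uniform(-1.0, 1.0, size=(1, w)), (h, 1))
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    return 0.5 * contrast * n   # amplitude relative to mid-gray 0.5
```

Adding such a field to a mid-gray or pictorial image reproduces the test-stimulus construction of the experiment in outline.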


Proceedings ArticleDOI
17 Jan 2005
TL;DR: A theoretical approach is introduced to design the optimal chromaticities for primaries with a given size of triangular color gamut in the xy-plane. Simulation results show that the optimal primaries for 85% of the NTSC area are similar to those of sRGB for red and blue, while the green primary is located between sRGB and NTSC.
Abstract: A theoretical approach is introduced to design the optimal chromaticities for the primaries of a display with a given size of triangular color gamut in the xy-plane. Optimal primaries are defined as a set of chromaticities of red, green and blue primaries, with a fixed white point, that best satisfies four criteria, i.e. gamut size, gamut shape, coverage of object colors and hue of the primaries, in the visually uniform color space CIECAM02. It is assumed that the optimal gamut should cover that of sRGB and have similar maximum chroma for each hue. The number of SOCS data located outside the gamut is used as a criterion to judge the coverage of object colors. The hues of the primaries are also set to be close to those of sRGB. The simulation results showed that the optimal primaries for 85% of the NTSC area are similar to those of sRGB for red and blue, while the green primary is located between sRGB and NTSC. For 100% of the NTSC area, the optimal chromaticities are located near those of NTSC for red and green and that of sRGB for blue.
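The "percentage of NTSC area" criterion reduces to comparing triangle areas in the xy-plane, which the shoelace formula gives directly. The chromaticity values below are the standard published NTSC (1953) and sRGB primaries, not values taken from this paper.

```python
def triangle_area(p1, p2, p3):
    """Area of the triangle spanned by three (x, y) chromaticity
    points (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

# Published CIE 1931 xy primaries (R, G, B)
NTSC = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

# sRGB covers roughly 71% of the NTSC triangle in the xy-plane
ntsc_ratio = triangle_area(*SRGB) / triangle_area(*NTSC)
```

An "85% of NTSC" design target therefore means a primary triangle whose xy area is 0.85 times the NTSC area above.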

Proceedings Article
01 Jan 2005
TL;DR: The latest advances in color displays in major application markets are reviewed and those developments which result in enhanced functional color performance and improved image quality are focused on.
Abstract: This paper was presented at the IS&T/SID CIC13 Conference in Scottsdale, AZ, November 7-11, 2005. Over the past twenty-five years a diversity of color display technologies has evolved to support a wide range of applications, including television receivers, computer monitors, cell phones, PDAs, automobile dashboards, aircraft instruments, and even in-car and in-flight entertainment systems. In all product categories consumers’ expectations for color display performance have grown at a rapid pace, driving the accelerated development of core display technologies, along with supporting color control algorithms and image processing methodology. In this keynote address I review the latest advances in color displays in major application markets and focus on those developments which result in enhanced functional color performance and improved image quality.

Proceedings Article
01 Jan 2005
TL;DR: An algorithm based on illuminant invariance theory to find shadow regions in a colour image by model the problem of finding shadows by a Markov Random Field using a new measure derived by which shadow edges can be locally identified.
Abstract: We design an algorithm based on illuminant invariance theory to find shadow regions in a colour image. Shadows are caused by a local change in both the colour and the intensity of illumination. Using both chromaticity and intensity cues, an illuminant discontinuity measure is derived by which shadow edges can be locally identified. We model the problem of finding shadows by a Markov Random Field using our new measure. A graph-cut optimization method is then applied to the MRF to find the globally optimal segmentation of shadows in an image. In previous work, a 2-d chromaticity colour invariant image was recovered from a greyscale 1-d invariant image by adding back light so as to match the chromaticity of bright pixels. Here, since we segment shadows, we can take a completely different approach and leave non-shadow pixels unchanged, while adding light to shadow pixels so as to match neighbouring non-shadow pixels. The results are much more convincing shadow-free images, and shadow segmentation is excellent.
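The greyscale 1-d illuminant-invariant image mentioned above is formed, in Finlayson-style invariance theory, by projecting log band-ratio chromaticities along a calibrated illuminant-variation direction. The sketch below assumes the camera-dependent calibration angle theta is already known; the axis convention is one common choice, not necessarily the authors' exact formulation.

```python
import numpy as np

def invariant_image(rgb, theta):
    """Greyscale 1-d illuminant-invariant image: project log
    band-ratio chromaticities onto the direction orthogonal to the
    illuminant-variation direction.  `theta` is the camera-dependent
    calibration angle (assumed known here)."""
    rgb = np.clip(np.asarray(rgb, dtype=float), 1e-6, None)
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])   # log(R/G)
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])   # log(B/G)
    return log_rg * np.cos(theta) + log_bg * np.sin(theta)
```

Any illuminant change that moves the log chromaticities along the orthogonal direction (-sin theta, cos theta) leaves this greyscale image unchanged, which is what makes shadow pixels comparable to their lit neighbours.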


Proceedings Article
02 Nov 2005
TL;DR: A vectorial approach is proposed based upon a joint analysis of a structure tensor and a so-called flow tensor, both computed from image derivatives.
Abstract: In most applications, optical flow is computed from the luminance Y-plane alone, and only a few methods address color optical flow. A brief analysis shows that these methods are either marginal approaches or dramatically time-consuming techniques. Here, we propose a vectorial approach based upon a joint analysis of a structure tensor and a so-called flow tensor, both computed from image derivatives.
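A colour structure tensor of the kind such a method builds on can be computed by summing, over the three channels, the outer products of the per-channel spatial gradients (the Di Zenzo construction). The sketch omits the usual Gaussian smoothing of the tensor components, and the companion flow tensor would add temporal derivatives in the same way.

```python
import numpy as np

def color_structure_tensor(img):
    """Di Zenzo-style colour structure tensor: per-pixel sum over the
    channels of the gradient outer products.  Returns the three
    distinct components (Jxx, Jxy, Jyy), each as an image."""
    img = np.asarray(img, dtype=float)
    Jxx = np.zeros(img.shape[:2])
    Jxy = np.zeros(img.shape[:2])
    Jyy = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        Iy, Ix = np.gradient(img[..., c])   # derivatives along rows, columns
        Jxx += Ix * Ix
        Jxy += Ix * Iy
        Jyy += Iy * Iy
    return Jxx, Jxy, Jyy
```

The eigenvalues of the 2x2 tensor at each pixel then give the local edge strength and orientation jointly over all channels, instead of from luminance alone.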

Proceedings Article
01 Jan 2005
TL;DR: A non-parametric method is presented, called the Wilcoxon signed-rank test, which can be used to evaluate performance without making any underlying assumption of the error distribution, and which can derive a new CAT that statistically significantly outperforms CAT02 at the 95% confidence level.
Abstract: The performance of many color science and imaging algorithms is evaluated based on mean errors. However, if these errors are not normally distributed, statistical evaluations based on the mean are not appropriate performance metrics. We present a non-parametric method, the Wilcoxon signed-rank test, which can be used to evaluate performance without making any underlying assumption about the error distribution. When applying the metric to the performance of chromatic adaptation transforms on corresponding color data, we can derive a new CAT that statistically significantly outperforms CAT02 at the 95% confidence level.
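For reference, the Wilcoxon signed-rank statistic for paired error samples can be computed as below. This minimal version drops zero differences and does not average tied ranks or compute a p-value; in practice scipy.stats.wilcoxon handles those details.

```python
import numpy as np

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired
    samples.  Zero differences are dropped; tied absolute differences
    are ranked in order rather than rank-averaged (a simplification)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d = d[d != 0.0]                                   # drop zero differences
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0   # ranks of |d|
    w_pos = ranks[d > 0].sum()                        # sum of positive ranks
    w_neg = ranks[d < 0].sum()                        # sum of negative ranks
    return min(w_pos, w_neg)
```

Because only the signs and ranks of the paired differences enter, the comparison is unaffected by heavy tails or skew in the error distributions, which is exactly the property the abstract argues for.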

Proceedings ArticleDOI
17 Jan 2005
TL;DR: Analysis of micrographs captured with an imaging colorimeter reveals the subpixel nature of color crosstalk in LCDs; a spatial subpixel correction algorithm based on a capacitive-coupling model improves color performance and is easier to implement and more accurate than a 3-D lookup table approach.
Abstract: The drive for larger size, higher spatial resolution, and wider aperture LCDs has been shown to increase the electrical crosstalk between electrodes in the driver circuit. This crosstalk leads to additivity errors in color LCDs. In this paper, the LCD color crosstalk was modeled using a capacitive coupling model and the crosstalk effect was analyzed with micrographs captured from an imaging colorimeter. The experimental results reveal the subpixel nature of color crosstalk: whenever any two neighboring subpixels are “on” at the same time, there is crosstalk from one subpixel to another, but whenever there is one “off” subpixel between two “on” subpixels, there is no crosstalk between the “on” subpixels. There is positive crosstalk from right to left across all three subpixels. Based on this crosstalk model, the crosstalk of an LCD was characterized and a spatial subpixel crosstalk correction algorithm was developed to improve the color performance of the LCD. The correction algorithm reduced crosstalk by a factor of 16. Compared to a 3-D lookup table approach, the new algorithm is easier to implement and more accurate in performance.
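A pre-compensation in the spirit of this correction algorithm can be sketched with a deliberately simplified coupling model: each "on" subpixel leaks a fixed fraction k of its drive into its left neighbour, but only when that neighbour is also on, matching the right-to-left, neighbours-both-on behaviour described above. The linear leak term and the value of k are assumptions; the paper's capacitive-coupling model is more detailed.

```python
import numpy as np

def correct_crosstalk(subpix, k=0.05):
    """Toy spatial subpixel crosstalk pre-compensation.  Assumed
    model: displayed value at i is drive[i] + k * drive[i+1] when
    both subpixels are on (right-to-left leakage), so we subtract
    the predicted leakage from the drive signal in advance."""
    s = np.asarray(subpix, dtype=float)
    out = s.copy()
    leak = k * s[1:] * (s[:-1] > 0)   # leakage into each left neighbour
    out[:-1] -= leak
    return np.clip(out, 0.0, None)
```

An "off" subpixel between two "on" ones receives no correction, consistent with the observation that it blocks the coupling.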

Proceedings ArticleDOI
17 Jan 2005
TL;DR: This paper proposes a color decomposition method for a multi-primary display using a 3-dimensional look-up-table (3D-LUT) in linearized LAB space, which guarantees color signal continuity and computational efficiency, and requires less memory.
Abstract: This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up-table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraints of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ and CIELAB. In this paper, linearized LAB space is used due to its linearity and additivity in color conversion. First, the proposed method constructs a 3D-LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For the image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, then the hue and chroma are computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point. Also, the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on the gamut boundary points and input point, the color signals for the input point are then obtained using the chroma ratio divided by the chroma of the gamut boundary point. In particular, for a change of hue, the neighboring boundary points are also employed. As a result, the proposed method guarantees color signal continuity and computational efficiency, and requires less memory.
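The chroma-ratio step of the decomposition can be sketched as follows: the control values for an in-gamut colour are blended between those of the equal-lightness grey-axis point and those of the equal-lightness, equal-hue gamut-boundary point, with the chroma ratio as the weight. The linear blend is an illustrative assumption consistent with the description above.

```python
import numpy as np

def interpolate_control_values(c_in, c_boundary, v_gray, v_boundary):
    """Multi-primary control values for an input colour with chroma
    c_in, given the control values v_gray for the equal-lightness
    grey-axis point and v_boundary for the equal-lightness,
    equal-hue gamut-boundary point (chroma c_boundary)."""
    t = np.clip(c_in / c_boundary, 0.0, 1.0)   # chroma ratio as blend weight
    return (1.0 - t) * np.asarray(v_gray, dtype=float) \
           + t * np.asarray(v_boundary, dtype=float)
```

Because the blend weight varies continuously with chroma, neighbouring input colours receive neighbouring control values, which is the continuity property the abstract claims.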