
Showing papers presented at "Color Imaging Conference in 1999"


Proceedings Article
01 Jan 1999
TL;DR: An attempt to integrate a wide variety of psycho-physical experiments into a computational model to calculate color appearance, and to apply this model to printing wide dynamic range, real-life scenes and finding the best reproduction of an image with limited printer gamut.
Abstract: This paper is an attempt to integrate a wide variety of psycho-physical experiments into a computational model to calculate color appearance. Having described the fundamentals of such a model, we turn to applying this model to printing wide dynamic range, real-life scenes and finding the best reproduction of an image with limited printer gamut. Key words: Mondrians, Retinex, Complex Images, Models of Vision. Introduction: In 1963 Edwin Land decided he wanted to take his work on vision in a different direction. He quipped that many critics thought that his conclusions about human vision were suspect, because the experiments were photographic, and that these critics thought that he (Land) could do anything with photographic film. He set out to do experiments using just papers, lights and a very good telephotometer. Red and White experiments had shown the importance of complex images, so Land started with displays using about 100 papers. He asked Lucretia Weed to make a display resembling Mondrian's painting in the Tate Gallery, London. As it turned out, Lucretia finished the display before we could find color photographs of that Mondrian painting. Years later, Hank Spekreijse, at a conference in Amsterdam, pointed out that his countryman Mondrian had never used high-chroma greens. So, although these experimental displays are higher in chroma and contrast than the Tate Mondrian painting, they are better visual test targets than the original because they have a much larger range of colors. Nigel Daw's experiments [1] with afterimages had convinced Land to avoid regular arrays of squares. Daw's experiment had two parts: First, he had observers make a strong color afterimage of an object by fixating at a point in a color image, say a square pillow, that formed a diamond afterimage on the retina.
Second, he asked observers to describe the afterimages as the observers moved their gaze to different fixation points on a black and white image of the same scene. When observers fixated on new points in the image, the mismatch of contours between the external scene and the internal afterimage inhibited the visibility of the afterimage. However, when the observers fixated on the original point registering the color afterimage in the black and white image of the pillow, the afterimage became more visible. For several minutes the observers could make the color afterimage of the pillow appear by looking at the original fixation point, and make it disappear by looking at another point in the image. The conclusion is that afterimages are inhibited by different contours on the current image. Hence, Land avoided the problem of afterimages by making each color in the Mondrian a different size or shape. Regular arrays of constant-size patches are subject to the problem that the color afterimage of the last square fits the contours of the new area of interest. Land set out to study the appearance of colors by varying the spectra, intensity and duration of illumination falling on a large variety of Mondrians.

211 citations


Proceedings Article
01 Jan 1999
TL;DR: The specifications and usage of standard RGB color spaces promoted today by standard bodies and/or the imaging industry are described and the digital image color workflow is examined with emphasis on when an RGB color space is appropriate, and when to apply color management by profile.
Abstract: This paper describes the specifications and usage of standard RGB color spaces promoted today by standard bodies and/or the imaging industry. As in the past, most of the new standard RGB color spaces were developed for specific imaging workflow and applications. They are used as interchange spaces to communicate color and/or as working spaces in imaging applications. Standard color spaces can facilitate color communication: if an image is in ‘knownRGB,’ the user, application, and/or device can unambiguously understand the color of the image, and further color manage from there if necessary. When applied correctly, a standard RGB space can minimize color space conversions in an imaging workflow, improve image reproducibility, and facilitate accountability. The digital image color workflow is examined with emphasis on when an RGB color space is appropriate, and when to apply color management by profile. An RGB space is “standard” because either it is defined in an official standards document (a de jure standard) or it is supported by commonly used tools (a de facto standard). Examples of standard RGB color spaces are ISO RGB, sRGB, ROMM RGB, Adobe RGB 98, Apple RGB, and video RGB spaces (NTSC, EBU, ITU-R BT.709). As there is no one RGB color space that is suitable for all imaging needs, factors to consider when choosing an RGB color space are discussed.
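As a concrete anchor for the transfer characteristics these standard spaces define, here is a minimal sketch of the sRGB encoding curve from IEC 61966-2-1 (piecewise: a linear segment near black, a 2.4-exponent power segment elsewhere). This is standard reference material, not code from the paper:

```python
def srgb_encode(linear):
    """Encode a linear-light value in [0, 1] with the sRGB transfer curve."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """Inverse of srgb_encode: nonlinear sRGB value back to linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

A working space like Adobe RGB 98 or ROMM RGB uses different primaries and a different transfer curve; this pair of functions only pins down the sRGB case.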

136 citations


Proceedings Article
01 Jan 1999
TL;DR: This work considers committee methods based on both linear and non-linear ways of combining the illumination estimates from an original set of color constancy algorithms; the committee estimates are always more accurate than those of any of the individual algorithms taken in isolation.
Abstract: We show how to achieve better illumination estimates for color constancy by combining the results of several existing algorithms. We consider committee methods based on both linear and non–linear ways of combining the illumination estimates from the original set of color constancy algorithms. Committees of grayworld, white patch and neural net methods are tested. The committee results are always more accurate than the estimates of any of the other algorithms taken in isolation.
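A minimal sketch of the committee idea, assuming a two-member linear committee of gray-world and max-RGB (white-patch) estimators with hypothetical equal weights; the paper also includes neural net members and non-linear combination rules:

```python
import numpy as np

def grayworld_estimate(img):
    # Gray-world: assume the average scene reflectance is achromatic,
    # so the mean RGB is proportional to the illuminant color.
    return img.reshape(-1, 3).mean(axis=0)

def whitepatch_estimate(img):
    # White-patch (max-RGB): assume the brightest response in each
    # channel reflects the illuminant.
    return img.reshape(-1, 3).max(axis=0)

def committee_estimate(img, weights=(0.5, 0.5)):
    # Linear committee: weighted average of the member estimates,
    # each normalised to a chromaticity (components summing to 1) first.
    estimates = [grayworld_estimate(img), whitepatch_estimate(img)]
    chroma = [e / e.sum() for e in estimates]
    combined = sum(w * c for w, c in zip(weights, chroma))
    return combined / combined.sum()
```

In practice the weights would be fitted on training images so that the committee minimizes illuminant-estimation error, which is where the accuracy gain over any single member comes from.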

88 citations


Proceedings ArticleDOI
21 Dec 1999
TL;DR: In this paper, a review of the basic halftoning algorithms is presented, organized by the nature of the appearance of the resulting patterns, including white noise, recursive tessellation, the classical screen, and blue noise.
Abstract: Digital halftoning remains an active area of research with a plethora of new and enhanced methods. While several fine overviews exist, the purpose of this paper is to review retrospectively the basic classes of techniques. Halftoning algorithms are presented by the nature of the appearance of the resulting patterns, including white noise, recursive tessellation, the classical screen, and blue noise. The metric of radially averaged power spectra is reviewed, and special attention is paid to frequency domain characteristics. The paper concludes with a look at the components that comprise a complete image rendering system, in particular when the number of output levels is not restricted to be a power of 2. A very efficient means of multilevel dithering is presented based on scaling ordered-dither arrays. The case of real-time video rendering is considered, where the YUV-to-RGB conversion is incorporated in the dithering system. Example illustrations are included for each of the techniques described. © (1999) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
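The multilevel idea, scaling an ordered-dither array to an arbitrary number of output levels, can be sketched as follows. The 4x4 Bayer (recursive-tessellation) matrix and the normalization are standard; the paper's exact formulation may differ:

```python
import numpy as np

# 4x4 Bayer threshold matrix, values 0..15.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def multilevel_dither(img, levels):
    """Quantize img (floats in 0..1) to `levels` output levels by applying
    the dither thresholds within each inter-level step, so the same matrix
    drives any number of levels (not just powers of 2)."""
    h, w = img.shape
    step = 1.0 / (levels - 1)              # distance between output levels
    yi, xi = np.indices((h, w))
    threshold = (BAYER4[yi % 4, xi % 4] + 0.5) / 16.0   # normalised thresholds
    base = np.floor(img / step)            # lower bounding output level
    frac = img / step - base               # position within the step
    out = (base + (frac > threshold)) * step
    return np.clip(out, 0.0, 1.0)
```

Averaged over a tile, the dithered output preserves the mean tone of the input even though each pixel takes one of only `levels` values.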

73 citations


Proceedings Article
01 Dec 1999
TL;DR: This work extends previous work on using a neural network for illumination, or white-point, estimation from the case of calibrated images to that of uncalibrated images of unknown origin, and shows that the chromaticity of the ambient illumination can be estimated with an average CIE Lab error of 5 ΔE.
Abstract: Color images often must be color balanced to remove unwanted color casts. We extend previous work on using a neural network for illumination, or white-point, estimation from the case of calibrated images to that of uncalibrated images of unknown origin. The results show that the chromaticity of the ambient illumination can be estimated with an average CIE Lab error of 5∆E. Comparisons are made to the grayworld and white patch methods.
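A rough sketch of the gray-world baseline the neural network is compared against, plus the diagonal (von Kries) balancing step that removes a cast once a white point is estimated. Function names and the clipping step are illustrative, not from the paper:

```python
import numpy as np

def estimate_whitepoint_grayworld(img):
    # Gray-world baseline: per-channel means approximate the illuminant RGB.
    return img.reshape(-1, 3).mean(axis=0)

def color_balance(img, whitepoint):
    # Diagonal (von Kries) correction: scale each channel so the
    # estimated white point maps to neutral gray.
    gains = whitepoint.mean() / whitepoint
    return np.clip(img * gains, 0.0, 1.0)
```

With a neural-network estimator, only `estimate_whitepoint_grayworld` would be replaced; the balancing step is the same.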

60 citations


Proceedings Article
01 Jan 1999
TL;DR: Metadata-only record; no abstract available.
Abstract: Keywords: LTS1. Reference: LTS-CONF-1999-024.

37 citations


Proceedings ArticleDOI
21 Dec 1999
TL;DR: A spectral characterization of the acquisition system taking into account the acquisition noise is performed and the spectral reflectance of each pixel of the imaged surface is estimated by inverting the model using a Principal Eigenvector approach.
Abstract: In this article we describe the experimental setup of a multispectral image acquisition system consisting of a professional monochrome CCD camera and a tunable filter in which the spectral transmittance can be controlled electronically. We have performed a spectral characterization of the acquisition system taking into account the acquisition noise. To convert the camera output signals to device-independent data, two main approaches are proposed and evaluated. One consists in applying regression methods to convert from the K camera outputs to a device-independent color space such as CIEXYZ or CIELAB. Another method is based on a spectral model of the acquisition system. By inverting the model using a Principal Eigenvector approach, we estimate the spectral reflectance of each pixel of the imaged surface.
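The first (regression) approach can be sketched as a least-squares fit from the K camera outputs to tristimulus targets measured on training patches. The toy example below uses 2 channels and an exact linear relationship purely for illustration:

```python
import numpy as np

def fit_regression(camera_responses, xyz_targets):
    """Least-squares K x 3 matrix mapping K camera channels to CIEXYZ,
    fitted from training patches (rows: patches)."""
    M, *_ = np.linalg.lstsq(camera_responses, xyz_targets, rcond=None)
    return M

def apply_regression(camera_responses, M):
    # Device-independent values for new measurements.
    return camera_responses @ M
```

Real characterization data are noisy, so the fitted matrix only approximates the mapping; polynomial terms are often added to handle camera nonlinearity.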

37 citations


Proceedings Article
01 Jan 1999
TL;DR: In this paper, an algorithm is described to select a set of six inks from a given ink database for a spectral-based six-color printing system which minimizes metamerism.
Abstract: An algorithm is described to select a set of six inks from a given ink database for a spectral-based six-color printing system which minimizes metamerism. Since there are C(n, 6) ink combinations for an ink database containing n inks, the number of ink combinations becomes enormous when n is large. Obviously infeasible ink combinations can be removed analytically: vector correlation analysis is first employed to eliminate them. Utilizing the statistical primaries of a given spectral image requiring reproduction as the basis information, the ink-selection algorithm then searches through the ink database to obtain a few highly correlated inks for each statistical primary. Second, the ink-selection algorithm estimates the colorimetric and spectral accuracy of candidate ink sets in an empirically derived color mixing space which approximates the color formation of a halftone printing process. The candidate ink set with the highest spectral accuracy is designated as the optimal ink set for a six-color printing process, thereby achieving the least metameric color reproduction.
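The combinatorial growth the abstract refers to is easy to verify: a 20-ink database already yields 38,760 six-ink combinations, and a 50-ink database over 15 million, which is why infeasible combinations must be pruned before any exhaustive evaluation.

```python
from math import comb

def ink_combinations(n, k=6):
    # Number of ways to choose k inks from an n-ink database: C(n, k).
    return comb(n, k)

# Growth of the search space with database size:
# ink_combinations(20) -> 38760
# ink_combinations(50) -> 15890700
```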

33 citations


Proceedings ArticleDOI
21 Dec 1999
TL;DR: The paper discusses concepts envisaged for color correction in image capturing devices with respect to fundamental requirements on color analysis and studies and ideas on multispectral color technology and on how this technology might be introduced in future imaging and color management systems.
Abstract: Color management systems are being introduced worldwide to improve the color quality of digital image capture and device independent electronic color image reproduction. To be able to supply device independent color data at interfaces in imaging systems, device dependent color correction is required. The paper discusses concepts envisaged for color correction in image capturing devices with respect to fundamental requirements on color analysis. The common image capturing technology is based on the use of three color channels. Main points of the discussion are the shortcomings of this technology in analyzing metameric colors correctly, and the question of whether this will be an essential point for future imaging technology. Further parts of the paper cover the alternative multispectral technology. Multispectral cameras delivering the complete spectrum of color stimuli at each pixel of an image are available in the laboratory. This technology offers a solution to the problem of metameric color analysis and offers the flexibility to match different illuminants as well; yet, the amount of additional effort is large. The paper summarizes studies and ideas on multispectral color technology and on how this technology might be introduced in future imaging and color management systems.

32 citations


Proceedings Article
01 Jan 1999
TL;DR: Experiments were performed to test a set of general-purpose gamut-mapping functions, which utilized contrast-preserving scaling functions, and showed that vast improvements were obtained when linear lightness and chroma rescaling functions were replaced with contrast-preserving lightness and chroma rescaling functions.
Abstract: Experiments were performed to test a set of general-purpose gamut-mapping functions. These gamut-mapping algorithms utilized contrast-preserving scaling functions. These algorithms were tested against the GCUSP gamut-mapping algorithm proposed by Morovic and Luo, which was shown to have very good universal gamut-mapping characteristics. The results of these experiments showed that vast improvements were obtained when linear lightness and chroma rescaling functions were replaced with contrast-preserving lightness and chroma rescaling functions. For these experiments, the gamut mapping consisted of sigmoidal lightness-remapping functions followed by either knee or sigmoid-like chromatic compression functions.
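A sketch of what a sigmoidal lightness-remapping function might look like: a sigmoid that compresses the tonal extremes into the destination range while preserving mid-tone contrast, rather than a flat linear rescale. The slope and ranges below are illustrative parameters, not values from the paper:

```python
import numpy as np

def sigmoid_lightness_remap(L, src_range=(0.0, 100.0),
                            dst_range=(5.0, 95.0), slope=4.0):
    """Contrast-preserving lightness rescaling (illustrative parameters)."""
    lo, hi = src_range
    t = (L - lo) / (hi - lo)                  # normalise source lightness to 0..1
    s = 1.0 / (1.0 + np.exp(-slope * (2.0 * t - 1.0)))
    s0 = 1.0 / (1.0 + np.exp(slope))          # sigmoid value at t = 0
    s1 = 1.0 / (1.0 + np.exp(-slope))         # sigmoid value at t = 1
    s = (s - s0) / (s1 - s0)                  # rescale so endpoints map exactly
    dlo, dhi = dst_range
    return dlo + s * (dhi - dlo)
```

Because the curve is steeper in the mid-tones than a linear rescale, mid-tone contrast is held while shadows and highlights absorb the compression.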

30 citations


Proceedings Article
01 Jan 1999
TL;DR: A new contrast-enhancing algorithm was found to give more favorable reproductions than several gamut-mapping techniques described in the literature, and these algorithms can sometimes result in undesirable artifacts for certain images, including contouring and loss of shadow detail.
Abstract: Six techniques for mapping the colors of an image into the gamut of printable colors were compared. Six pictorial scenes were used in two psychophysical experiments, one to test accurate reproduction and one to test preferred reproduction. A new contrast-enhancing algorithm was found to give more favorable reproductions than several gamut-mapping techniques described in the literature. This algorithm performs luminance compression by applying an inverted power function to images in a linear RGB color space. Remaining out-of-gamut pixels are clipped to the gamut surface in the direction of a central point on the neutral axis. Other algorithms that performed well were those that clip out-of-gamut colors to the surface of the gamut, and do not affect colors within the gamut. These algorithms can sometimes result in undesirable artifacts for certain images, including contouring and loss of shadow detail. However, observers did not object to the loss of shadow detail if the colorfulness of the image was maintained or increased. Also, the matching experiment (original present) and the preference experiment gave quite different results. Clipping algorithms did well in the matching experiment, while contrast-boosting algorithms did best in the preference experiment. The preferred techniques did well in both experiments.
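The clipping strategy described, moving out-of-gamut colors toward a central point on the neutral axis until they reach the gamut surface, can be sketched with a simple bisection against an arbitrary gamut predicate. The predicate and tolerance here are placeholders:

```python
import numpy as np

def clip_toward_neutral(color, in_gamut, center, tol=1e-4):
    """Move an out-of-gamut color along the straight line toward a central
    neutral-axis point until it just enters the gamut (bisection search).
    `in_gamut` is any predicate defining the printable gamut; the center
    itself is assumed to be in gamut."""
    color = np.asarray(color, float)
    center = np.asarray(center, float)
    if in_gamut(color):
        return color                 # in-gamut colors are left untouched
    lo, hi = 0.0, 1.0                # fraction of the way toward the center
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_gamut(color + mid * (center - color)):
            hi = mid
        else:
            lo = mid
    return color + hi * (center - color)
```

Because the search direction always aims at the neutral axis, hue is roughly preserved while lightness and chroma are traded off together.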

Proceedings Article
01 Dec 1999
TL;DR: A simple method is introduced based on direct measurements for characterizing fluorescent surfaces that has low error and avoids the need to develop a comprehensive and accurate physical model, and several modern color constancy algorithms are modified and extended to address fluorescence.
Abstract: Fluorescent surfaces are common in the modern world, but they present problems for machine color constancy because fluorescent reflection typically violates the assumptions needed by most algorithms. The complexity of fluorescent reflection is likely one of the reasons why fluorescent surfaces have escaped the attention of computational color constancy researchers. In this paper we take some initial steps to rectify this omission. We begin by introducing a simple method for characterizing fluorescent surfaces. It is based on direct measurements, and thus has low error and avoids the need to develop a comprehensive and accurate physical model. We then modify and extend several modern color constancy algorithms to address fluorescence. The algorithms considered are CRULE and derivatives [1-4], Color by Correlation [5], and neural net methods [6-8]. Adding fluorescence to Color by Correlation and neural net methods is relatively straightforward, but CRULE requires modification so that its complete reliance on diagonal models can be relaxed. We present results for both synthetic and real image data for fluorescent-capable versions of CRULE and Color by Correlation, and we compare the results with the standard versions of these and other algorithms.

Proceedings ArticleDOI
21 Dec 1999
TL;DR: A novel color correction technique that uses adaptive segmentation techniques to identify the presence of undesired color casts, estimate their chromatic strength, and alter the image's near-neutral color regions to compensate for the cast.
Abstract: Digital Still Camera images often have undesired color casts due to unusual illuminant sources. This paper describes a novel color correction technique that uses adaptive segmentation techniques to identify the presence of such casts, estimate their chromatic strength, and alter the image's near-neutral color regions to compensate for the cast. The segmentation method identifies most major objects in the scene and their average color.

Proceedings ArticleDOI
21 Dec 1999
TL;DR: It has been found that, for a given amount of ink commanded by the printer, a printed image with ink penetration has higher reflectance than one without ink penetration, and that the range of color reproduction (color gamut) of the printed image is consequently reduced by ink penetration.
Abstract: A theoretical approach describing the effect of ink penetration on the reflectance of the printed image is presented in this paper. Three different models with respect to the density of penetrating ink, constant, linear and exponential distribution, are studied. In addition to the constant model, whose differential equations of light propagation can be solved analytically, series solutions corresponding to the linear and exponential models have been worked out. Generally good convergence of the series expansions has been found from simulation. It has been found that, for a given amount of ink commanded by the printer, the printed image with ink penetration has higher reflectance than that without ink penetration. Consequently, the range of color reproduction (color gamut) of the printed image is reduced due to ink penetration.

Proceedings Article
01 Jan 1999
TL;DR: An application of the classification algorithm to the problem of rendering a color image acquired under one illumination under a second illuminant, with a different color temperature, using the ratio of R, G, and B sensor responses under different illuminants is considered.
Abstract: Knowledge of the full illuminant spectral power distribution is useful for many imaging applications. In most applications, however, accurate estimation is impossible because very few color measurements are made. In many of these cases, however, a great deal is known about the potential set of illuminants. In these cases, classification of scene illumination, rather than estimation of the full spectral power distribution of the illumination, is appropriate and useful. We analyze illuminant classification algorithms designed to group images by illuminant color temperature. To classify the illumination color temperature, a version of the correlation method suggested by Finlayson and colleagues is used. The original algorithm uses chromaticity coordinates, and thus does not use the fact that bright image regions contain more information about the illuminant than dark regions. Using calibrated images with known illuminants, we find that the original correlation method can be improved by using a scaled version of the red and blue sensor responses. When applied to these quantities, the algorithm is more sensitive to differences in illuminant color temperature. Then, we consider an application of the classification algorithm to the problem of rendering a color image acquired under one illuminant under a second illuminant with a different color temperature. This algorithm uses the ratio of R, G, and B sensor responses under different illuminants. The proposed method is applied to an image database of real scenes.
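A much-simplified sketch of classification by color temperature: take the brightest pixels (per the paper's observation that bright regions carry more illuminant information than dark ones) and match their red/blue response ratio against a candidate illuminant list. The candidate table below is entirely hypothetical; real entries would come from a calibrated camera:

```python
import numpy as np

# Hypothetical candidates: (correlated color temperature in K,
# typical R/B sensor-response ratio under that illuminant).
CANDIDATES = [(2850, 2.2), (4100, 1.4), (6500, 1.0), (10000, 0.7)]

def classify_cct(img, bright_fraction=0.1):
    """Pick the candidate color temperature whose R/B ratio best matches
    that of the brightest image pixels."""
    pix = img.reshape(-1, 3)
    luma = pix.sum(axis=1)
    n = max(1, int(len(pix) * bright_fraction))
    bright = pix[np.argsort(luma)[-n:]]          # brightest fraction of pixels
    rb = bright[:, 0].mean() / bright[:, 2].mean()
    return min(CANDIDATES, key=lambda c: abs(c[1] - rb))[0]
```

The full correlation method scores a whole chromaticity distribution against each illuminant rather than a single ratio, but the bright-pixel weighting shown is the ingredient the paper adds.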

Proceedings Article
01 Nov 1999
TL;DR: The role that metamers play in developing a new colour correction algorithm is examined in detail and it is demonstrated that the new method significantly outperforms traditional linear correction methods.
Abstract: At an early stage in almost all colour reproduction pipelines device RGBs are transformed to CIE XYZs. This transformation is called colour correction. Because the XYZ matching functions are not a linear combination of device spectral sensitivities there are some colours which look the same to a device but have quite different XYZ tristimuli. That such device metamerism exists is well known, yet the problem has not been adequately addressed in the colour correction literature. In this paper, we examine in detail the role that metamers play in developing a new colour correction algorithm. Our approach works in two stages. First, for a given RGB we characterise the set of all possible camera metamers. In the second stage this set is projected onto the XYZ colour matching functions. This results in a set of XYZs any one of which might be the correct answer for colour correction. Good colour correction results by choosing the middle of the set. We call the process of computing the set of metamers, projecting them to XYZs and performing selection, metamer constrained colour correction. Experiments demonstrate that our new method significantly outperforms traditional linear correction methods. For the particular case of saturated colours (these are among the most difficult to deal with) the error is halved on average; the maximum error is reduced by a factor of 4.
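The two-stage idea, characterize the set of reflectances the camera cannot distinguish, then project that set onto the XYZ matching functions and pick its middle, can be sketched with linear algebra. The sensitivities, the bound on the null-space coefficients, and the sampling below are all illustrative assumptions:

```python
import numpy as np

def metamer_constrained_xyz(rgb, S, M, span=0.5, samples=200, seed=0):
    """Sketch of metamer-constrained colour correction.
    S: 3 x N camera sensitivity matrix; M: 3 x N XYZ matching matrix.
    Camera metamers of rgb are r0 + B c, where r0 is one solution of
    S r = rgb and B spans the null space of S; project each onto M and
    return the middle of the resulting XYZ set."""
    r0 = np.linalg.pinv(S) @ rgb                 # one reflectance with S r = rgb
    _, _, Vt = np.linalg.svd(S)
    B = Vt[S.shape[0]:].T                        # null-space basis of S
    rng = np.random.default_rng(seed)
    coeffs = rng.uniform(-span, span, (samples, B.shape[1]))
    xyzs = np.array([M @ (r0 + B @ c) for c in coeffs])
    return 0.5 * (xyzs.min(axis=0) + xyzs.max(axis=0))   # middle of the set
```

The paper additionally constrains the metamers to be physically plausible reflectances (e.g. bounded between 0 and 1), which shrinks the XYZ set; the unconstrained sampling above only illustrates the geometry.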

Proceedings Article
01 Jan 1999
TL;DR: The results show a significant effect of choice of encoding scheme, the presence or absence of a legend, and an interaction between these two factors on the interpretability of multidimensional graphical images.
Abstract: The performance in judging values in a univariate map encoded using five different color scales was tested in eleven subjects. Digital elevation maps (DEMs) were encoded using: 1) an RGB gray scale (RGB), 2) a gray scale based on CIELAB L* (L*), 3) a L* scale with an added red hue component (Red L*), 4) an L* scale with continuous hue change (Spectral L*), and 5) a gray scale based on luminance (Luminance). Performance was tested using an Evaluation task and a Production task. For both tasks judgments were made both with and without legends for all five encoding schemes. The results show a significant effect of choice of encoding scheme, the presence or absence of a legend, and an interaction between these two factors. Performance with a legend was significantly better than without one. The Spectral L* scale led to the best performance while Luminance encoding was the worst. This experiment is a first step in using quantifiable psychophysical procedures to evaluate the effectiveness of different color encoding schemes on the interpretability of multidimensional graphical images.

Proceedings Article
01 Jan 1999
TL;DR: An analysis is presented of how the space in which principal component analysis is performed can affect the colorimetric and spectral accuracy of spectral reconstruction.
Abstract: An analysis is presented of how the space in which principal component analysis is performed can affect the colorimetric and spectral accuracy of spectral reconstruction. The spectral reconstruction is performed using digital counts given by a new concept of spectral image acquisition constituted by a trichromatic camera combined with absorption filters, instead of the traditional monochrome camera and a set of interference filters. The comparison of the spectral reconstruction performance in each space shows the advantages and disadvantages of using alternative spaces rather than reflectance.
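A sketch of PCA-based spectral reconstruction from camera digital counts, performed here in reflectance space (the paper's point is precisely that other analysis spaces can be substituted). All matrices below are toy stand-ins:

```python
import numpy as np

def pca_basis(reflectances, k=3):
    """Mean spectrum and first k principal components of a training set
    of reflectance spectra (one spectrum per row)."""
    mean = reflectances.mean(axis=0)
    _, _, Vt = np.linalg.svd(reflectances - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(camera_counts, S, mean, basis):
    """Estimate a spectrum from digital counts c = S r: solve for the k
    basis weights that best reproduce the counts, then rebuild r."""
    A = S @ basis.T                          # maps basis weights to counts
    w = np.linalg.lstsq(A, camera_counts - S @ mean, rcond=None)[0]
    return mean + basis.T @ w
```

Running PCA in a different space (e.g. after a nonlinear transform of reflectance) changes which reconstruction errors the low-dimensional basis tolerates, which is the trade-off the paper analyzes.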


Proceedings Article
01 Jan 1999
TL;DR: In this article, the authors introduced new models and mathematical formulations describing the light scattering and ink spreading phenomena, and computed the spectra of 100 real paper samples produced by two ink-jet printers with an average prediction error of about in CIELAB.
Abstract: This study introduces new models and mathematical formulations describing the light scattering and ink spreading phenomena. Based on these new theoretical tools, the spectra of 100 real paper samples produced by two ink-jet printers were computed with an average prediction error of about in CIELAB.

Proceedings Article
01 Dec 1999
TL;DR: This paper investigates a number of color constancy algorithms in the context of specular and nonspecular reflection, and proposes extensions to several variants of Forsyth's CRULE algorithm which make use of specularities if they exist, but do not rely on their presence.
Abstract: There is a growing trend in machine color constancy research to use only image chromaticity information, ignoring the magnitude of the image pixels. This is natural because the main purpose is often to estimate only the chromaticity of the illuminant. However, the magnitudes of the image pixels also carry information about the chromaticity of the illuminant. One such source of information is through image specularities. As is well known in the computational color constancy field, specularities from inhomogeneous materials (such as plastics and painted surfaces) can be used for color constancy. This assumes that the image contains specularities, that they can be identified, and that they do not saturate the camera sensors. These provisos make it important that color constancy algorithms which make use of specularities also perform well when they are absent. A further problem with using specularities is that the key assumption, namely that the specular component is the color of the illuminant, does not hold in the case of colored metals. In this paper we investigate a number of color constancy algorithms in the context of specular and nonspecular reflection. We then propose extensions to several variants of Forsyth's CRULE algorithm [1-4] which make use of specularities if they exist, but do not rely on their presence. In addition, our approach is easily extended to include colored metals, and is the first color constancy algorithm to deal with such surfaces. Finally, our method provides an estimate of the overall brightness, which chromaticity-based methods cannot do, and other RGB-based algorithms do poorly when specularities are present.

Proceedings ArticleDOI
21 Dec 1999
TL;DR: A novel halftoning approach that has embedded in it a model for the electrophotographic process is presented and results show good exploitation of pixel modulation and improvement over DBS with no printer model.
Abstract: A novel halftoning approach that has embedded in it a model for the electrophotographic process is presented. Models for the laser beam, exposure of the organic photo-conductor, and the resulting absorptance on the paper are embedded into the Direct Binary Search halftoning algorithm. The algorithm is applicable to any arbitrary pixel modulation scheme and is also highly portable between different electrophotographic print engines. Computational issues are addressed to make the approach viable. Results show good exploitation of pixel modulation and improvement over DBS with no printer model.

Proceedings Article
01 Jan 1999
TL;DR: The results suggest that the traditional concepts of linear luminance integration and equivalent background are satisfactory on average, however, results for individual observers show very striking, consistent, and significant trends with substantial inter-observer variability.
Abstract: A psychophysical experiment was carried out to examine the relationship between image contrast and overall perceived brightness. A second phase of the experiment looked at the relationship between the perceived brightness of variegated backgrounds and the simultaneous contrast effect produced by such backgrounds. These results have important ramifications for procedures used to calculate adapting chromaticities and luminances for image displays. The results suggest that the traditional concepts of linear luminance integration and equivalent background are satisfactory on average. However, results for individual observers show very striking, consistent, and significant trends with substantial inter-observer variability. These results help to reconcile differences between fundamental vision science experiments and practical experiences with color appearance models.

Proceedings Article
01 Jan 1999
TL;DR: For applications where colorimetric information is insufficient to characterize an input scene or document, multispectral image capture (i.e. more than three records) has been suggested; the resulting errors in the estimated object spectral reflectance factor and subsequent colorimetric transformation are addressed.
Abstract: For applications where colorimetric information is insufficient to characterize an input scene or document, multispectral image capture (i.e. more than three records) has been suggested [1-3]. Experimental cameras have been described, as have the results of signal processing to extract useful spectral and colorimetric information. Previous reports have addressed both system accuracy and precision, the latter as influenced by random pixel-to-pixel image noise. Another contributor to system precision is signal quantization. Statistics are computed for various levels of uniform and non-uniform quantization. The resulting errors in the estimated object spectral reflectance factor and subsequent colorimetric transformation are addressed. The comparison of these errors with those due to stochastic noise sources indicates that both are influenced by the image processing employed.
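The effect of uniform quantization on precision can be illustrated directly: for a signal exercising many levels, the quantization error is roughly uniform over one step, so its standard deviation approaches step/sqrt(12). A sketch, not the paper's statistics:

```python
import numpy as np

def quantize_uniform(x, bits):
    """Uniform quantization of signals in [0, 1] to 2**bits levels."""
    levels = 2 ** bits
    return np.round(x * (levels - 1)) / (levels - 1)

def quantization_error_stats(x, bits):
    # Mean and standard deviation of the quantization error; for a busy
    # signal the std tends to step / sqrt(12), with step = 1 / (2**bits - 1).
    e = quantize_uniform(x, bits) - x
    return e.mean(), e.std()
```

Non-uniform quantization changes the effective step size per signal level, which is why the paper computes statistics for both cases separately.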

Proceedings ArticleDOI
21 Dec 1999
TL;DR: Two methods of determining combinations of colorant amounts are proposed: the variable reduction method, and the division method, which employs sub-gamuts composed of appropriate sets of three or four colorants that combine to form the entire color gamut.
Abstract: The colorimetric characterization of printers using more than three colorants is discussed. In such printers, there is no unique combination of colorant amounts for the reproduction of a particular color. We categorize these printers as either black printers or hi-fi printers. Black printers use black (K) in addition to cyan (C), magenta (M), and yellow (Y). Hi-fi printers use saturated colorants such as red (R), green (G), and blue (B) in addition to CMYK colorants. We propose two methods of determining combinations of colorant amounts: the variable reduction method and the division method. The variable reduction method uses connecting functions to reduce the number of variables controlling colorant amounts. Although this method offers simplicity, it does not always utilize the entire color gamut. The division method employs sub-gamuts composed of appropriate sets of three or four colorants; these sub-gamuts are combined to form the entire color gamut. While the division method allows access to the entire color gamut, its boundaries tend to cause pseudo contours due to abrupt changes of colorant amount. To facilitate the use of the division method, we have developed a software tool and verified the algorithm involved using a hypothetical hi-fi printer in computer simulation.
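A toy illustration of the variable-reduction idea for a black (CMYK) printer: a connecting function derives K from the gray component of CMY, leaving only three free variables. The gray-component-replacement fraction is an invented parameter, not from the paper:

```python
def reduce_cmyk(c, m, y, gcr=0.8):
    """Simplified 'connecting function' in the spirit of the variable
    reduction method: derive K from the gray component of CMY so that a
    four-colorant printer is driven by three free variables.
    The gcr fraction (how much gray is replaced by black) is illustrative."""
    gray = min(c, m, y)          # the achromatic component of the CMY request
    k = gcr * gray               # replace part of it with black ink
    return c - k, m - k, y - k, k
```

With `gcr=1.0` the full gray component becomes black ink (maximum GCR); with `gcr=0.0` no black is used at all, which shows why a single connecting function cannot always reach the whole gamut.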



Proceedings ArticleDOI
Jon Yngve Hardeberg1
21 Dec 1999
TL;DR: In this article, the colorimetric faculties of a desktop scanner have been evaluated using several different desktop scanners and the results showed that the results were very good: mean CIELAB (Delta E*ab color errors as low as 1.4.
Abstract: To achieve high image quality throughout a digital imaging system, the first requirement is to ensure the quality of the device that captures real-world physical images to digital images, for example a desktop scanner. Several factors have influence on this quality: optical resolution, bit depth, spectral sensitivities, and acquisition noise, to mention a few. In this study we focus on the colorimetric faculties of the scanner, that is, the scanner's ability to deliver quantitative device-independent digital information about the colors of the original document. We propose methods to convert from the scanner's device-dependent RGB color space to the standard device-independent color space sRGB. The methods have been evaluated using several different desktop scanners. Our results are very good: mean CIELAB ΔE*ab color errors as low as 1.4. We further discuss advantages and disadvantages of a digital color imaging system using the sRGB space for image exchange, compared to using other color architectures.
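The quoted mean error uses the CIE 1976 color difference, which is simply Euclidean distance in CIELAB:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference Delta E*ab: Euclidean distance in CIELAB.
    Accepts single triplets or arrays of Lab values (last axis = L*, a*, b*)."""
    d = np.asarray(lab1, float) - np.asarray(lab2, float)
    return np.linalg.norm(d, axis=-1)
```

A mean ΔE*ab of 1.4 is near the commonly cited just-noticeable-difference region, which is why the paper characterizes its results as very good.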

Proceedings ArticleDOI
21 Dec 1999
TL;DR: A model based color halftoning method using the direct binary search (DBS) algorithm that exploits the differences in how human viewers respond to luminance and chrominance information, and uses the total squared error in a luminance/chrominance based space as a metric.
Abstract: In this paper, we develop a model based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information, and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones which demonstrate the efficacy of our method.
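The luminance/chrominance error metric can be sketched as follows, using the ITU-R BT.601 luma/chroma transform as a stand-in for the paper's actual opponent space, and invented channel weights:

```python
import numpy as np

# BT.601 RGB -> luma/chroma matrix, used here purely as a stand-in for
# the perceptual opponent space of the paper.
RGB2YCC = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])

def lum_chrom_error(original, halftone, w_lum=1.0, w_chrom=0.25):
    """Total squared error in a luminance/chrominance space, weighting
    luminance more heavily than chrominance because the eye resolves
    luminance detail better. Weights are illustrative."""
    d = (original - halftone) @ RGB2YCC.T
    return w_lum * np.sum(d[..., 0] ** 2) + w_chrom * np.sum(d[..., 1:] ** 2)
```

In a full DBS loop this metric would additionally be filtered by a human-visual-system model (different low-pass filters for the luminance and chrominance channels) before summing; the unfiltered version above only shows the channel weighting.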