
Showing papers presented at "Color Imaging Conference in 2010"


Proceedings Article
01 Jan 2010
TL;DR: The results presented here show that in fact MaxRGB works surprisingly well when tested on a new dataset of 105 high dynamic range images, and also better than previously reported when some simple pre-processing is applied to the images of the standard 321 image set.
Abstract: The poor performance of the MaxRGB illumination-estimation method is often used in the literature as a foil when promoting some new illumination-estimation method. However, the results presented here show that in fact MaxRGB works surprisingly well when tested on a new dataset of 105 high dynamic range images, and also better than previously reported when some simple pre-processing is applied to the images of the standard 321-image set [1]. The HDR images in the dataset for color constancy research were constructed in the standard way from multiple exposures of the same scene. The color of the scene illumination was determined by photographing an extra HDR image of the scene with 4 Gretag Macbeth mini Colorcheckers at 45 degrees relative to one another placed in it. With pre-processing, MaxRGB's performance is statistically equivalent to that of Color by Correlation [2] and statistically superior to that of the Greyedge [3] algorithm on the 321 set (null hypothesis rejected at the 5% significance level). It also performs as well as Greyedge on the HDR set. These results demonstrate that MaxRGB is far more effective than it has been reputed to be, so long as it is applied to image data that encodes the full dynamic range of the original scene.

Introduction
MaxRGB is an extremely simple method of estimating the chromaticity of the scene illumination for color constancy and automatic white balancing, based on the assumption that the triple of maxima obtained independently from each of the three color channels represents the color of the illumination. It is often used as a foil to demonstrate how much better some newly proposed algorithm performs in comparison. However, is its performance really as bad as it has been reported [1,3-5] to be?
Is it really any worse than the algorithms to which it is compared? The prevailing belief in the field about the inadequacy of MaxRGB is reflected in the following two quotations from two different anonymous reviewers criticizing a manuscript describing a different illumination-estimation proposal: "Almost no-one uses Max RGB in the field (or in commercial cameras). That this, rejected method, gives better performance than the (proposed) method is grounds alone for rejection." "The first and foremost thing that attracts attention is the remarkable performance of the Scale-by-Max (i.e. White-Patch) algorithm. This algorithm has the highest performance on two of the three data sets, which is quite remarkable by itself." (The paper's title was inspired by Charles Poynton, "The Rehabilitation of Gamma," Proc. of Human Vision and Electronic Imaging III, SPIE 3299, 232-249, 1998.) We hypothesize that there are two reasons why the effectiveness of MaxRGB may have been underestimated. One is that it is important not to apply MaxRGB naively as the simple maximum of each channel; rather, it is necessary to preprocess the image data somewhat before calculating the maximum, otherwise a single bad pixel or spurious noise will make the maximum incorrect. The second is that MaxRGB generally has been applied to 8-bit-per-channel, non-linear images, for which there is both significant tone-curve compression and clipping of high intensity values. To test the pre-processing hypothesis, the effects of pre-processing by median filtering, and of resizing by bilinear filtering, are compared to that of the common pre-processing, which simply discards pixels for which at least one channel is maximal (i.e., for n-bit images, when R = 2^n − 1 or G = 2^n − 1 or B = 2^n − 1). To test the dynamic-range hypothesis, a new HDR dataset for color constancy research has been constructed which consists of images of 105 scenes.
For each scene there are HDR (high dynamic range) images with and without Macbeth mini Colorchecker charts, from which the chromaticity of the scene illumination is measured. (Note that the scenes were not necessarily of high dynamic range; the term HDR is used here to mean simply that the full dynamic range of the scene is captured within the image.) This data set is now available on-line at www.cs.sfu.ca/~colour/data. MaxRGB is a special and extremely limited case of Retinex [6]. In particular, it corresponds to McCann99 Retinex [7] when the number of iterations is infinite, or to path-based Retinex [8] without thresholding but with infinite paths. Retinex and MaxRGB both depend on the assumption that either there is a white surface in the scene, or there are three separate surfaces reflecting maximally in the R, G and B sensitivity ranges. In practice, most digital still cameras are incapable of capturing the full dynamic range of a scene and use exposures and tone reproduction curves that clip or compress high digital counts. As a result, the maximum R, G and B digital counts from an image generally do not faithfully represent the corresponding maximum scene radiances. Barnard et al. [9] present some tests using artificial clipping of images that show the effect that lack of dynamic range can have on various illumination-estimation algorithms. To determine whether or not MaxRGB is really as poor as it is reported to be in comparison to other illumination-estimation algorithms, we compare the performance of several algorithms on the new image database. Tests described below show that MaxRGB performs as well on this new HDR data set as other representative and recently published algorithms. We also find that two simple pre-processing strategies lead to significant performance improvement. The results reported here extend those of an earlier study [10] in a number of ways: the size of the dataset
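As a concrete illustration, the preprocessed MaxRGB estimator described above (per-channel maximum after median filtering to suppress single bad pixels) can be sketched in a few lines. The filter size and the synthetic scene in the usage note are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import median_filter

def max_rgb(image, median_size=5):
    """Estimate illuminant chromaticity as the per-channel maximum,
    after median filtering each channel so that an isolated noisy
    pixel cannot dominate the maximum (one of the pre-processing
    strategies compared in the paper; size 5 is an assumption)."""
    filtered = np.stack(
        [median_filter(image[..., c], size=median_size) for c in range(3)],
        axis=-1)
    est = filtered.reshape(-1, 3).max(axis=0).astype(float)
    return est / est.sum()  # (r, g, b) chromaticity, sums to 1
```

For example, on a synthetic scene lit by a reddish illuminant that contains a white patch plus one corrupted pixel, the median filter removes the corrupted value and the estimate recovers the illuminant chromaticity.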

103 citations


Proceedings Article
01 Jan 2010
TL;DR: A surface triangulation of a set of points in the color space is computed using an alpha-shape, which is a generalization of a convex hull applicable also to nonconvex solids.
Abstract: This paper proposes a solution to the problem of finding the boundary of the gamut of a color printing device or of a color image. A surface triangulation of a set of points in the color space is computed using an alpha-shape, which is a generalization of a convex hull applicable also to nonconvex solids. The desired level of detail can be controlled by means of an alpha parameter. A method for selecting the suitable value of this parameter is proposed.
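The alpha-shape construction summarized above can be sketched with the standard recipe: tetrahedralize the points, keep only tetrahedra whose circumradius is below alpha, and return the triangles that belong to exactly one kept tetrahedron. This is a generic simplification of the idea, not the authors' implementation, and the alpha values in the usage note are arbitrary.

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def alpha_shape_boundary(points, alpha):
    """Boundary triangles of the 3-D alpha-shape of a point set.
    Large alpha reduces to the convex hull; small alpha carves out
    nonconvex detail (the level-of-detail control described above)."""
    points = np.asarray(points, float)
    tet = Delaunay(points)
    kept = []
    for simplex in tet.simplices:
        p = points[simplex]
        # Circumcenter c from |c - p_i|^2 = |c - p_0|^2, i = 1..3
        A = 2.0 * (p[1:] - p[0])
        b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
        try:
            c = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue  # degenerate (flat) tetrahedron
        if np.linalg.norm(c - p[0]) < alpha:
            kept.append(simplex)
    faces = Counter()
    for s in kept:
        for f in ((s[0], s[1], s[2]), (s[0], s[1], s[3]),
                  (s[0], s[2], s[3]), (s[1], s[2], s[3])):
            faces[tuple(sorted(f))] += 1
    # Faces shared by two kept tetrahedra are interior; keep the rest
    return [f for f, n in faces.items() if n == 1]
```

With a very large alpha the result coincides with a triangulated convex hull, which matches the description of the alpha-shape as a generalization of the convex hull.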

79 citations


Proceedings Article
01 Jan 2010
TL;DR: A framework to incorporate near-infrared (NIR) information into algorithms to better segment objects by isolating material boundaries from color and shadow edges by forming an intrinsic image from the R, G, B, and NIR channels based on a 4-sensor camera calibration model.
Abstract: We present a framework to incorporate near-infrared (NIR) information into algorithms to better segment objects by isolating material boundaries from color and shadow edges. Most segmentation algorithms assign individual regions to parts of the object that are colorized differently. Similarly, the presence of shadows and thus large changes in image intensities across objects can also result in mis-segmentation. We first form an intrinsic image from the R, G, B, and NIR channels based on a 4-sensor camera calibration model that is invariant to shadows. The regions obtained by the segmentation algorithms are thus only due to color and material changes and are independent of the illumination. Additionally, we also segment the NIR channel only. Near-infrared (NIR) image intensities are largely dependent on the chemistry of the material and have no general correlation with visible color information. Consequently, the NIR segmentation only highlights material and lighting changes. The union of both segmentations obtained from the intrinsic and NIR images results in image partitions that are only based on material changes and not on color or shadows. Experiments show that the proposed method provides good object-based segmentation results on diverse images.
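One plausible reading of "the union of both segmentations" above is the common refinement of the two partitions: pixels stay in the same region only if they agree in both the intrinsic-image and NIR segmentations. A minimal sketch of that combination step (the label encoding is an implementation choice of this sketch, not the paper's):

```python
import numpy as np

def combine_segmentations(seg_a, seg_b):
    """Combine two integer label maps into their common refinement:
    two pixels share a region in the output only if they share a
    region in BOTH inputs."""
    seg_a = np.asarray(seg_a)
    seg_b = np.asarray(seg_b)
    # Encode each (label_a, label_b) pair as a single integer...
    pair = seg_a.astype(np.int64) * (int(seg_b.max()) + 1) + seg_b
    # ...then relabel the distinct pairs to consecutive region ids.
    _, joint = np.unique(pair, return_inverse=True)
    return joint.reshape(seg_a.shape)
```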

32 citations


Proceedings Article
01 Dec 2010
TL;DR: This paper evaluated the performance of two probabilistic estimation algorithms for automatic assignment of CIELAB coordinates into an arbitrary number of color names, resulting in practical color naming models that can support natural language image segmentation, with a computational simplicity that makes them suitable for online applications.
Abstract: Extensive research in color naming and color categorization has been more focused on a small number of consensual color categories than towards the development of more subtle color identifications. The work we present in this paper describes an online color-naming model. In this context we evaluated the performance of two probabilistic estimation algorithms for automatic assignment of CIELAB coordinates into an arbitrary number of color names. The algorithms were tested on data gathered in a sophisticated online color naming experiment detailed elsewhere and summarized here. Our methodology resulted in practical color naming models that can support natural language image segmentation, with a computational simplicity that makes them suitable for online applications. © 2010 Society for Imaging Science and Technology.
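One simple way such a probabilistic assignment of CIELAB coordinates to color names could look is maximum a posteriori classification under per-name Gaussian densities. The actual categories and parameters in the paper come from its online naming experiment; the names, means, and spreads below are invented purely for illustration.

```python
import numpy as np

# Hypothetical color-name model in CIELAB: (mean L*, a*, b*; isotropic sigma).
NAMES = {
    "red":    (np.array([50.0, 65.0, 45.0]), 15.0),
    "green":  (np.array([55.0, -50.0, 45.0]), 15.0),
    "blue":   (np.array([35.0, 10.0, -55.0]), 15.0),
    "yellow": (np.array([90.0, -5.0, 85.0]), 15.0),
}

def name_color(lab, prior=None):
    """Assign a CIELAB triple to the most probable color name under
    isotropic Gaussian class densities (uniform prior unless given)."""
    lab = np.asarray(lab, float)
    best, best_lp = None, -np.inf
    for name, (mu, sigma) in NAMES.items():
        lp = -((lab - mu) ** 2).sum() / (2 * sigma ** 2)  # log-likelihood
        if prior:
            lp += np.log(prior[name])
        if lp > best_lp:
            best, best_lp = name, lp
    return best
```

The same structure extends to an arbitrary number of names, which is the point the abstract emphasizes.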

28 citations


Proceedings Article
01 Nov 2010
TL;DR: The results show that it is possible to effectively classify real, color-normal observers into a small number of categories, which in certain application contexts, can produce perceptibly better color matches for many observers compared to the matches predicted by the CIE 10° standard observer.
Abstract: The variability among color-normal observers poses a challenge to modern display colorimetry because of the peaky primaries of such displays. But such devices also hold the key to a future solution to this issue. In this paper, we present a method for deriving seven distinct colorimetric observer categories, and also a method for classifying individual observers as belonging to one of these seven categories. Five representative L, M and S cone fundamentals (a total of 125 combinations) were derived through a cluster analysis on the combined set of 47-observer data from the 1959 Stiles-Burch study and 61 color matching functions derived from the CIE 2006 model corresponding to the 20-80 age parameter range. From these, a reduced set of seven representative observers was derived through an iterative algorithm, using several predefined criteria on perceptual color differences (ΔE*00) with respect to the actual color matching functions of the 47 Stiles-Burch observers, computed for the 240 Colorchecker samples viewed under D65 illumination. Next, an observer classification method was implemented using two displays, one with broad-band primaries and the other with narrow-band primaries. In paired presentations on the two displays, eight color-matches corresponding to the CIE 10° standard observer and the seven observer categories were shown in random sequences. Thirty observers evaluated all eight versions of fifteen test colors. For the majority of the observers, only one or two categories consistently produced either acceptable or satisfactory matches for all colors. The CIE 10° standard observer was never selected as the most preferred category for any observer, and for six observers, it was rejected as an unacceptable match for more than 50% of the test colors.
The results show that it is possible to effectively classify real, color-normal observers into a small number of categories, which in certain application contexts, can produce perceptibly better color matches for many observers compared to the matches predicted by the CIE 10° standard observer.

26 citations


Proceedings Article
01 Jan 2010
TL;DR: Accounting for ink spreading considerably improves the prediction accuracy and requires only one additional measurement per subdomain, and ink spreading can also be characterized with red, green and blue sensor responses without decreasing the model reflectance prediction accuracy.
Abstract: We propose an extension of the cellular Yule-Nielsen spectral Neugebauer model accounting for ink spreading of each ink within each subdomain. Characterization of the ink spreading within a given subdomain is performed by fitting the mid-range weights of subdomain node reflectances with the goal of minimizing the sum of square differences between predicted and measured mid-range reflectances. We show that the mid-range weights within a subdomain can be either separately fitted on three halftones or jointly fitted on a single halftone. Accounting for ink spreading considerably improves the prediction accuracy and requires only one additional measurement per subdomain. These additional measurements do not necessarily require spectral measurements. Instead, ink spreading can also be characterized with red, green and blue sensor responses without decreasing the model reflectance prediction accuracy.
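The underlying Yule-Nielsen modified spectral Neugebauer prediction, on top of which the paper's cellular subdivision and ink-spreading fit are built, can be sketched as follows. The reflectance values in the usage note are made up; the paper's contribution (fitting mid-range weights per subdomain) is not reproduced here.

```python
import numpy as np

def ynsn_predict(weights, primaries, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer prediction:
    the halftone reflectance at each wavelength is the weighted
    average of the Neugebauer primary reflectances raised to 1/n,
    with the result raised back to the power n."""
    primaries = np.asarray(primaries, float)   # shape (num_primaries, bands)
    weights = np.asarray(weights, float)       # area coverages, sum to 1
    return (weights @ primaries ** (1.0 / n)) ** n
```

With weight 1 on a single primary the model returns that primary's reflectance, and for any coverage mix the prediction stays between the primaries' extremes, which is the expected behavior of this power-mean formulation.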

24 citations


Proceedings Article
01 Jan 2010
TL;DR: The ecological valence theory not only predicts average color preferences better than three alternative theories containing more free parameters, but it provides a plausible explanation of why color preferences exist and how they arise.
Abstract: Aesthetic response to color is an important aspect of human experience, but little is known about why people like some colors more than others. Previous research suggested explanations based on sensory physiology and color-emotions. In this chapter we propose an ecological valence theory based on the hypothesis that color preferences are caused by people’s average affective responses to color-associated objects. That is, people like colors that are strongly associated with objects they like (e.g. blues with clear skies and clean water) and dislike colors strongly associated with objects they dislike (e.g. browns with feces and rotten fruit). We report data that strongly support this claim: the ecological valence theory not only predicts average color preferences better than three alternative theories containing more free parameters, but it provides a plausible explanation of why color preferences exist and how they arise.

21 citations


Proceedings Article
01 Jan 2010
TL;DR: In this paper, a multiscale retinex algorithm using Gaussian filters was proposed to enhance the local contrast of a captured image using the ratio between the intensities of an arbitrary pixel in the captured image and its surrounding pixels.
Abstract: As the dynamic range of a digital camera is narrower than that of a real scene, the captured image requires a tone curve or contrast correction to reproduce the information in dark regions. Yet, when using a global correction method, such as histogram-based methods and gamma correction, an unintended contrast enhancement in bright regions can result. Thus, a multiscale retinex algorithm using Gaussian filters has previously been proposed to enhance the local contrast of a captured image using the ratio between the intensities of an arbitrary pixel in the captured image and its surrounding pixels. The intensity of the surrounding pixels is estimated using Gaussian filters and weights for each filter, and to obtain better results, these Gaussian filters and weights are adjusted in relation to the captured image. Nonetheless, this adjustment is currently a subjective process, as no method has yet been developed for optimizing the Gaussian filters and weights according to the captured image. Therefore, this article proposes local contrast enhancement based on an adaptive multiscale retinex using a Gaussian filter set adapted to the input image. First, the weight of the largest Gaussian filter is determined using the local contrast ratio from the intensity distribution of the input image. The other Gaussian filters and the weights for each Gaussian filter in the multiscale retinex are then determined using a visual contrast measure and the maximum color difference of the color patches in the Macbeth color checker. The visual contrast measure is obtained based on the product of the local standard deviation and the locally averaged luminance of the image. Meanwhile, to evaluate the halo artifacts generated in large uniform regions that abut to form a high-contrast edge, the artifacts are evaluated based on the maximum color difference between each color of the pixels in a patch of the Macbeth color checker and the averaged color in the CIELAB standard color space.
When considering the color difference for halo artifacts, the parameters for the Gaussian filters and the weights representing a higher visual contrast measure are determined using test images. In addition, to reduce the induced graying-out, the chroma of the resulting image is compensated by preserving the chroma ratio of the input image based on the maximum chroma values of the sRGB color gamut in the lightness–chroma plane. In experiments, the proposed method is shown to improve the local contrast and saturation in a natural way. © 2011 Society for Imaging Science and Technology. [DOI: 10.2352/J.ImagingSci.Technol.2011.55.4.040502]

INTRODUCTION
Human vision is a complicated automatic self-adapting system that is capable of seeing over 5 orders of magnitude simultaneously and can gradually adapt to natural world scenes with a high dynamic range of over 9 orders of magnitude. Thus, human vision can concurrently perceive details in both bright and dark regions. In contrast, current color imaging capture and display devices, such as digital cameras, cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), and organic light-emitting diodes (OLEDs), are unable to capture and represent a dynamic range of more than 100:1. This means that captured images suffer from poor scene detail and color reproduction in dark areas, especially in the case of a scene that contains both bright and dark areas. Nonetheless, despite the need to adjust the contrast of an image captured by a digital camera to represent the viewer's perception of the natural scene, this remains a difficult problem, insofar as the human visual system is extremely complex and current techniques are unable to replicate it completely.
As the sensitivity of the human eye changes locally according to the position of an object and the illuminant in the scene, a spatially adaptive method is required to overcome these limitations, which has led to the recent development of the single-scale retinex model, based on the retinex theory as a model of human visual perception. The single-scale retinex model utilizes the ratio of the lightness for a small central field in the region of interest to the average lightness over an extended field, where a Gaussian filter is generally used to obtain the average lightness. However, application of the single-scale retinex model introduces several problems, such as halos and graying-out, depending on the size of the Gaussian filter, which varies according to
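The single-scale and multiscale retinex computations described above can be sketched as follows. The surround scales and equal weights are the conventional fixed choices that the paper's adaptive method is meant to replace, so treat them as placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma):
    """Single-scale retinex: log-ratio of each pixel to its
    Gaussian-weighted surround. Small sigma emphasizes local
    contrast; large sigma preserves tonal rendition."""
    eps = 1e-6  # avoid log(0)
    surround = gaussian_filter(channel, sigma)
    return np.log(channel + eps) - np.log(surround + eps)

def multiscale_retinex(channel, sigmas=(15, 80, 250), weights=None):
    """Multiscale retinex: weighted sum of single-scale outputs.
    Fixed sigmas/weights here; choosing them adaptively per image
    is the paper's contribution and is not reproduced."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    return sum(w * single_scale_retinex(channel, s)
               for w, s in zip(weights, sigmas))
```

A sanity check on the formulation: a perfectly uniform image has a surround equal to the pixel value everywhere, so both outputs are zero, i.e. no contrast is invented where none exists.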

17 citations


Proceedings Article
01 Nov 2010
TL;DR: A numerical method to determine a transformation of a color space into a hue linear color space with a maximum degree of perceptual uniformity using a color-difference formula and the Hung and Berns data as a reference of constant perceived hue is proposed.
Abstract: We propose a numerical method to determine a transformation of a color space into a hue linear color space with a maximum degree of perceptual uniformity. In a first step, a transformation of the initial color space into a nearly perceptually uniform space is computed using multigrid optimization. In a second step, a hue correction is applied to the resulting color space while preserving the perceptual uniformity as far as possible. The two-stage transformation can be stored as a single lookup table for convenient usage in gamut mapping applications. We evaluated our approach on the CIELAB color space using the CIEDE2000 color-difference formula as a measure of perceptual uniformity and the Hung and Berns data as a reference of constant perceived hue. Our experiments show a mean disagreement of 5.0% and a STRESS index of 9.43 between CIEDE2000 color differences and Euclidean distances in the resulting hue linear color space. Comparisons with the hue linear IPT color space illustrate the performance of our method.

Introduction
Psychophysical experiments show that observers favor hue-preserving gamut mapping algorithms. Maintaining the perceived hue is therefore an important objective in gamut mapping [1]. Hue linear color spaces, in which the lines of constant hue are straight lines, allow simple access to constant hue curves. Another desirable property for gamut mapping is perceptual uniformity of the color space, meaning that Euclidean distances agree with perceived distances. This is important for adjusting the degree of compression or for preserving contrast ratios. A gamut representation in the perceptually non-uniform CIELAB color space may lead to contrast ratio changes if highly chromatic gamut regions and regions close to the gray axis are treated similarly. In addition, CIELAB is not hue linear, which is especially evident in the blue region (see Fig. 1) [2].
If a gamut mapping is performed in CIELAB, a hue correction of this region is strongly recommended [3, 4]. Other color spaces are especially designed to be hue linear, such as the IPT color space [5], but they exhibit a lack of perceptual uniformity. Color order systems, such as the Munsell system, are also designed to be hue linear, but they cover rather low chroma regions. Unfortunately, there are many indicators that a perceptually uniform color space does not exist [6, 7, 8, 9]. To find a space with optimal perceptual uniformity, Urban et al. proposed a method to transform non-Euclidean into Euclidean color spaces with minimal isometric disagreement [10, 11]. The resulting color spaces show a high degree of perceptual uniformity, provided that the underlying color-difference formulas accurately reflect perceived color differences. Constant hue curves [3, 4] plotted in these approximately perceptually uniform color spaces reveal a significant lack of hue linearity (as shown for the LAB2000 space in Fig. 1). As a consequence, these color spaces are not recommended for gamut mapping — unless colors are mapped along curved trajectories, which requires much greater computational effort. As already mentioned, a hue linear color space with a maximum degree of perceptual uniformity would be beneficial for gamut mapping applications. This requires the creation of a new color space that combines the local property of perceptual uniformity with the global property of hue linearity. Instead of fitting the parameters of analytical functions to visual data, a numerical transformation based on lookup tables is used in this paper. To illustrate the basic concept of our method, we create a transformation of the CIELAB color space using the CIEDE2000 [12] color-difference formula as a measure of perceptual uniformity and the Hung and Berns data [3, 4] as a reference of constant perceived hue. 
Other color spaces such as the CIECAM02 [13] space and other color-difference formulas such as CIE94 [14], CMC [15] or improved versions of these formulas [16, 17] can be used equivalently.

The Color Space Transformation
Our initial color space is perceptually non-uniform and not hue linear. We assume that a color-difference formula is defined on this space, and that its color-difference estimations accurately reflect perceived color differences. The proposed method is a two-stage transformation of the initial color space. The first transformation maps the color space to a Euclidean space (Euclidean metric) with minimal isometric (length-preserving) disagreement with respect to the color-difference formula. The second transformation maps the resulting color space to a hue linear space while keeping the disagreement small. These transformations can be combined into a single color lookup table for usage in gamut mapping algorithms. In this paper, we use CIELAB as our initial color space, because it is well known and used in many industrial standards. The CIEDE2000 color-difference formula is used to estimate perceived color differences in CIELAB. The transformations can be summarized as follows:

T00 : CIELAB → LAB2000 (Stage 1)
T00,HL : LAB2000 → LAB2000HL (Stage 2)   (1)

where LAB2000 [10] and LAB2000HL are Euclidean color spaces with minimal isometric disagreement with respect to CIEDE2000, and LAB2000HL is hue linear. The transformations can be turned into a single transformation by composition: T = T00,HL ∘ T00.

Stage 1: Perceptual Uniformity
The transformation of the CIELAB color space into a Euclidean space with respect to the CIEDE2000 color-difference formula has been described by Urban et al. [10]. We will therefore only sketch the method roughly. The color space transformation for CIELAB and CIEDE2000 is available online [18].
Because CIEDE2000 treats lightness differences independently of hue and chroma differences, the a∗b∗-plane is treated separately from L∗. The L∗ coordinate is transformed into the perceptually uniform lightness coordinate L∗00 by numerically integrating the CIEDE2000 formula along the lightness axis. The result is a one-dimensional lookup table. The a∗b∗-plane is transformed using a two-dimensional lookup table. This table is calculated using multigrid optimization, starting from two regular grids whose vertices cover the a∗b∗-plane. These grids are designed such that each mesh of a grid encloses exactly one vertex of the other grid. The distance between any two neighboring vertices does not exceed five CIELAB units, the threshold below which CIEDE2000 correlates well with perceived differences [19]. For each mesh, the CIEDE2000 differences are calculated between its four vertices and the enclosed vertex of the other grid. The resulting four color differences are stored and remain unchanged during the subsequent multigrid optimization. In every iteration of the optimization, the vertices of a grid are shifted based on the meshes of the other grid. The objective is to decrease the disagreement between the stored CIEDE2000 differences and the corresponding Euclidean distances. In the first iteration, the vertices of the first grid are shifted based on the meshes of the second grid. In the second iteration, the vertices of the second grid are shifted based on the meshes of the first grid. The optimization continues with alternating grids until the change between subsequent iterations is sufficiently small. The two-dimensional lookup table is then created by mapping the vertices of either starting grid to the vertices of the corresponding optimized grid. Intermediate points are computed using bilinear interpolation. Figure 1 shows a starting grid in CIELAB and the corresponding grid in the LAB2000 space resulting from the multigrid optimization (grids in gray).
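The one-dimensional lightness lookup table can be sketched by accumulating CIEDE2000 differences of small lightness steps along the L∗ axis. For two colors differing only in lightness, CIEDE2000 reduces to ΔL∗/S_L, which is what the sketch uses; the final rescaling of the table to [0, 100] is our assumption, not necessarily the authors' normalization.

```python
import numpy as np

def s_l(lbar):
    """CIEDE2000 lightness weighting function S_L at mean lightness lbar."""
    return 1.0 + 0.015 * (lbar - 50.0) ** 2 / np.sqrt(20.0 + (lbar - 50.0) ** 2)

def lightness_lut(steps=1000):
    """1-D lookup table L* -> L*00: accumulate the CIEDE2000 difference
    of small pure-lightness steps (dL / S_L) along the axis, then
    rescale so the output range is again [0, 100]."""
    L = np.linspace(0.0, 100.0, steps + 1)
    mid = 0.5 * (L[:-1] + L[1:])
    de = np.diff(L) / s_l(mid)            # CIEDE2000 of each small step
    L00 = np.concatenate([[0.0], np.cumsum(de)])
    return L, 100.0 * L00 / L00[-1]
```

Because S_L is symmetric about L∗ = 50, the rescaled table maps 50 to 50 while expanding midtone differences relative to the ends, where S_L compresses them.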
The resulting color space transformation T00 consists of a one-dimensional (lightness) lookup table and a two-dimensional lookup table:

15 citations


Proceedings Article
01 Jan 2010
TL;DR: The proposed method addresses HDR questions by replacing the power-function nonlinearities in CIELAB and IPT with a more physiologically plausible hyperbolic function – the Michaelis-Menten equation.
Abstract: Proposed method – hdr-CIELAB and hdr-IPT • Addressing HDR questions – Hard intercepts at zero luminance/lightness – Uncertain applicability for color brighter than diffuse white • Replacing the power-function nonlinearities in CILAB and IPT with a more physiologically plausible hyperbolic function – Michaelis-Menten equation

13 citations


Proceedings Article
01 Jan 2010
TL;DR: A simple numerical method for estimating the colorimetry of spot color overprints is described, which provides a simplified method that can be easily implemented and that is free of existing intellectual property.
Abstract: Although methods exist for predicting the color of a spot color ink when it overprints another ink, there is a need for a simplified model for the purpose of previewing that can be used during document creation. The proposed model characterizes each spot color individually and predicts the color of overprinting solids and halftones by linearly combining the reflectances of two colors. Each ink is defined separately in terms of its opacity, and relatively few measurements are required to predict the resulting color of the overprint. The model was evaluated for three different substrates. A 6-color test chart, containing a total of 4550 patches of different combinations of C, M, Y, K and two spot colors, was printed using the offset printing process. The overprint model was applied to predict the resulting colors. Model predictions were compared to the measured data in terms of ∆L*, ∆a*, ∆b* and CIEDE2000. Average CIEDE2000 values between the measured and predicted colors were found to be below 3 for both spot colors on all papers.

Introduction
Printing uses four process inks, CMYK. In some industries, additional inks are used, for example to print critical brand colors. These are known as spot colors or special inks. Colors printed by a combination of process inks and spot colored inks can be characterized by means of an ICC profile. However, it is not practical to print a profile target and generate an ICC profile for all inks in combination. This makes it difficult to predict the effect of overprinting spot colors onto process inks or other spot colors, whereas in a color reproduction workflow it is important to have a preview or proof of the anticipated color. Spectral printer models are available for characterizing the four-color printing system. These models try to depict the complex interaction between light, paper and ink.
Some of these models include: the Kubelka-Munk model, the Yule-Nielsen modified Neugebauer model [1], the Van De Capelle and Meireson model patented by EskoArtwork [2][3], and the Enhanced Yule-Nielsen modified Neugebauer (EYNSN) model, which accounts for ink spreading in different ink superposition conditions [4][5]. Although these models were not developed for spot colors, they can be used to predict the color of ink combinations involving spot colors. Some of these models require an extensive number of inputs and an optimization process. Furthermore, these are relatively complex models and they cannot be easily integrated into existing standards like ICC profiles and PDF/X. The YNSN model requires printing all combinations of primary inks; for a 6-ink printing system, it needs 64 color patches. The packaging industry uses a large number of spot colors. For example, one sample library contains 1114 spot colors, and it is not practical to print and measure all possible combinations of these inks. Here a simple numerical method for estimating the colorimetry of spot color overprints is described. This method does not necessarily generate a more accurate prediction than existing methods, but simply provides a simplified method that can be easily implemented and that is free of existing intellectual property.

Method
A 6-color test chart (Fig. 1) was designed to evaluate the model, which is described in the next section. The Basic set of the IT8.7/3 CMYK chart, consisting of 182 color patches, was selected to represent the background objects. Two spot colors, Pantone 157C and Pantone 330C, were printed in addition to Cyan, Magenta, Yellow and Black inks. On each of the 182 patches of CMYK, different combinations of spot colored inks (0%, 25%, 50%, 75% and 100%) were printed. Thus, for each CMYK basic color, there were 25 combinations, resulting in a total of 4550 patches.

Figure 1. Fraction of the 6-color test chart

To define each spot colored ink individually, an ink characterization chart was used.
This consists of ramps of ink from 0% to 100% printed over the substrate, over a grey backing (50% black ink) and over a black backing (solid black ink). Figure 2. Ink characterization chart for Spot Colour2 (Pantone 330C) Thus there were three similar ramps, each consisting of 11 steps, printed over white (i.e. substrate), grey and black backings. This is similar to Van De Capelle's method [2]. Spectral measurements of the ink characterization chart provide the reflectance and transparency characteristics of the ink. The printing process used in this study was offset printing on three different substrates: Mitsubishi Paper MYU Coat NEOS, Hokuetsu Paper Pearl Coat N and Nihon Paper Be-7. Toppan standard inks were used with the following sequence: K – C – M – Y – Spot Colour1 (PMS 157C) – Spot Colour2 (PMS 330C). The test charts and ink characterization charts were measured according to ISO 13655:2010 measurement condition M0 using an X-Rite SpectraScan. The ink characterization chart is used to calculate the coefficients of the model. Model predictions were compared to the measured tristimulus values of the color patches. The proposed model is compared to the following existing models: Kubelka-Munk with Saunderson correction, the YNSN model, and the Van De Capelle model. These models were applied to the above-mentioned 6-ink printing system. Proposed Overprint Model The overprint model is used to predict colors resulting from combinations of special inks, including solid overprints as well as halftone overprints. The assumption made in this method is that at each wavelength the reflectance factor of an overprint approximates the product of the reflectance factors of the two inks measured independently. When this reflectance product is modified by a scaling constant, the approximation is often a good prediction of the actual reflectance. 
Since XYZ is a linear transform of reflectance, the same approach can be adopted for XYZ tristimulus values. In the overprint model, each ink is characterized separately. This is done by printing a solid ink and its tints on three backings – plain substrate (white), grey and solid black. Spectral measurements of these patches are used to derive the coefficients of the model (scaling factors and constants) for each spot color ink by linear regression. Where a spot color is printed over another color, the first-printed underlying color is considered as a background object and the overprinted spot color as a foreground object. The overprint model assumes that a resulting color (Xr, Yr, Zr) is correlated to the product of the background color (Xb, Yb, Zb) and foreground color (Xf, Yf, Zf). The resulting color (Xr, Yr, Zr) is predicted as follows:

Xr = jx × (Xb × Xf) + kx
Yr = jy × (Yb × Yf) + ky
Zr = jz × (Zb × Zf) + kz    (1)

where [Xb Yb Zb] are the tristimulus values of the background color, [Xf Yf Zf] the tristimulus values of the foreground color, [jx jy jz] scaling factors and [kx ky kz] constants. As seen in Eq. (1), linear regression is used to model the relationship between the resulting color (Xr, Yr, Zr) and the products of the background and foreground colors. Scaling factors and constants are calculated from the ink characterization chart using Equation (1). Color patches on all three backings (white, grey and black) are measured and used as the resulting color (Xr, Yr, Zr) in Equation (1). The foreground color for each patch is obtained from the tints of the ink on the white backing. The background color for each patch is extracted from the black ink characterization chart. A least squares method was sufficient to derive the scaling factors (jx, jy, jz) and constants (kx, ky, kz). 
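The per-channel least-squares fit of Eq. (1), and its recursive application to stacked inks described below, can be sketched as follows. This is a minimal illustration of the published equations; the function names and the synthetic inputs are ours, not the paper's.

```python
import numpy as np

def fit_overprint_coeffs(bg, fg, measured):
    """Fit the scaling factor j and constant k of Eq. (1) for one
    tristimulus channel by ordinary least squares.
    bg, fg, measured: 1-D arrays of background, foreground and measured
    resulting values (e.g. all X values from the ink characterization chart)."""
    x = bg * fg                                # product term of Eq. (1)
    A = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
    (j, k), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return j, k

def predict_overprint(bg_xyz, fg_xyz, coeffs):
    """Apply Eq. (1) channel-wise: Xr = jx*(Xb*Xf) + kx, etc.
    coeffs is a list of three (j, k) pairs, one per channel."""
    return tuple(j * b * f + k
                 for (j, k), b, f in zip(coeffs, bg_xyz, fg_xyz))

def predict_stack(substrate_xyz, layers, coeffs_per_ink):
    """Recursive use of Eq. (1) for several inks printed on top of each
    other: everything already printed becomes the background for the
    next ink (illustrative helper, not from the paper)."""
    result = substrate_xyz
    for fg, coeffs in zip(layers, coeffs_per_ink):
        result = predict_overprint(result, fg, coeffs)
    return result
```

In practice the fit would be run once per channel and per spot ink from the characterization-chart measurements, and the fitted (j, k) pairs reused for every overprint prediction.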
In the case of a two-ink overprint (say 40% Spot1 and 30% Spot2), the background color is the first color printed on the substrate (40% Spot1) and the foreground color is the second color (30% Spot2) printed after the first. Both are obtained from the ink characterization charts (tints printed on the white backing); missing dot percentages are derived by interpolating the existing measurements. For a printing system with multiple spot inks printed on top of each other, Eq. 1 can be applied recursively to predict the resulting color of any given combination of inks. There is no need to measure overprints of different combinations of inks; we only have to characterize each ink individually. In a color managed workflow, the tristimulus values of the background color (made of C, M, Y, K ink combinations) can be calculated using the A2B tag of the output intent ICC profile. Tristimulus values of the foreground color (i.e. the spot color ink) can be derived from the measurements of the ink characterization chart (colors on the white backing). Tristimulus values of the resulting color (Xr, Yr, Zr) are predicted by applying Eq. 1. Finally the B2A tag of the ICC profile can be used to estimate the colorant percentages. Results Figure 3 shows the relationship between the measured colors (X, Y, Z) and the variable terms in Eq. 1. Variable term products, for example (Xb × Xf), are plotted against the measured values of X, Y and Z. This is for Nihon Paper Be-7 and combinations of CMYK (background color) + Spot Color1 (foreground color). Figure 3. Linear regression between the measured X, Y, Z and the variable terms in Equation (1) There is clearly a strong linear relationship, which means that the resulting color can be derived by a simple linear regression method. Table 1 shows the color difference values between the predicted and the measured colors for CMYK + Spot Color1 combinations on Nihon Paper Be-7. 
All CIEDE2000 values are below 3, and it can be seen that the ∆a* values contributed the most to the color difference values. Table 1. Overall accuracy of the overprint model for Spot Color1 Color difference results for CMYK + Spot Color2 combinations on the same substrate are given in Table 2. The accuracy achieved is similar to that for Spot Color1. Table 2. Overall accuracy of the overprint model for Spot Color2 A histogram of CIEDE2000 values is shown in Figure 4. This is for

Proceedings Article
08 Nov 2010
TL;DR: A mathematical formulation of the CIEDE2000 by the line element to derive a Riemannian metric tensor in a color space that gives Just Noticeable Difference (JND) ellipsoids in three dimensions and ellipses in two dimensions is presented.
Abstract: The CIELAB-based CIEDE2000 colour difference formula for measuring small to medium colour differences is the current standard formula; it incorporates a number of corrections for the non-uniformity of CIELAB space and also takes account of parametric factors. In this paper, we present a mathematical formulation of the CIEDE2000 as a line element in order to derive a Riemannian metric tensor in a colour space. The coefficients of this metric give Just Noticeable Difference (JND) ellipsoids in three dimensions and ellipses in two dimensions. We also show how this metric can be transformed between various colour spaces by means of the Jacobian matrix. Finally, the CIEDE2000 JND ellipses are plotted in the xy chromaticity diagram and compared to the observed BFD-P colour matching ellipses using the comparison method described in Pant and Farup (CGIV 2010).
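The Jacobian transformation of a metric tensor mentioned in the abstract follows the standard tensor rule, which can be sketched as below; the function names are ours, and the diagonal example metric is purely illustrative.

```python
import numpy as np

def transform_metric(g, J):
    """Transform a Riemannian metric tensor g between colour spaces.
    If the old coordinates x are a function of the new ones x', and J is
    the Jacobian dx/dx', the line element ds^2 = dx^T g dx becomes
    dx'^T (J^T g J) dx' in the new space."""
    return J.T @ g @ J

def jnd_ellipse_axes(g2):
    """Semi-axis lengths of the JND ellipse ds^2 = 1 for a 2x2 metric:
    axes are 1/sqrt(eigenvalue), oriented along the eigenvectors of g2."""
    w, v = np.linalg.eigh(g2)
    return 1.0 / np.sqrt(w), v
```

For instance, a pure scaling of the coordinate axes by factors 2 and 3 turns the identity metric into diag(4, 9), shrinking the JND ellipse axes to 1/2 and 1/3.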

Proceedings Article
01 Jan 2010
TL;DR: In the present paper a method – HANS – is proposed to gain access to all possible, printable patterns by specifying relative area coverages of a printing system’s Neugebauer primaries instead of only colorant amounts.
Abstract: Traditionally the choices made by color separation are expressed as amounts of each of the available colorants to use for each of the reproducible colors. Halftoning then deals with the spatial distribution of colorants, which also determines the nature of their overprinting. However, having a colorant space as the way for color separation to communicate with halftoning gives access only to some of the possible printed patterns that a given printing system is capable of, and therefore only to a reduced range of print attributes. In the present paper a method – HANS – is proposed to gain access to all possible printable patterns by specifying relative area coverages of a printing system's Neugebauer primaries instead of only colorant amounts. This results in delivering prints with better print attributes than were possible using existing methods, allowing for up to 34% less ink use while delivering a 10% greater color gamut on a test printing system using CMYKcm inks. Introduction Print is the result of a number of colorants of different colors being superimposed on top of a substrate. Since the majority of printing technologies allow for only a very small number of levels of ink to be deposited at a given location on a substrate, halftoning is used to obtain ink patterns that result in a given color when seen from an appropriate viewing distance. These halftone patterns also result in inks being deposited on top of or next to one another in a specific way, giving a color that relates non-linearly to the amounts of the inks used. How much of an ink to use is the result of color separation, where ink amounts are chosen for each printable color. 
This is preceded by color management, where a choice of color reproduction objective (e.g., accuracy or pleasingness) can be made, where differences between the color gamuts of the source content and the destination printing system are dealt with, and where a color characterization of a printing system is employed with the aim of accurately rendering the chosen color reproduction objective. Early color separation methods for three-ink printing systems, used since the late 19th century, involved the photomechanical construction of halftone patterns by filtering a projection of an original image through a set of color filters, each determining how much of a cyan, a magenta and a yellow ink to use, and then through a halftone screen, which resulted in the formation of dots of proportional sizes on the three printing plates. Here color separation filters determined ink amounts while halftone screens resulted in corresponding per-ink patterns, which were finally superimposed. The effectiveness of such methods was relatively limited given their very indirect control over the resulting printed patterns and therefore colors. Such control increased significantly when computational color reproduction was pioneered during the first half of the 20th century. Here Neugebauer's model of halftone color reproduction was key; in its simplest form it states that the color of a halftone pattern is the convex combination of the colors (i.e., CIE XYZs) of the Neugebauer Primaries (NPs) used in it. Here an NP is one of the possible ink overprints, with its convex weight being the relative area covered by it (Fig. 1). Figure 1. Relationship between print materials (top), resulting Neugebauer Primaries (center) for a three-colorant, bi-level printing system and an example of how colorant amounts and Neugebauer Primary area coverages relate in a halftone (bottom). 
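The simplest form of the Neugebauer model described above can be sketched in a few lines. As the model relating ink amounts to NP area coverages we use the classical Demichel equations (one common choice, not named in this text); function names are ours.

```python
from itertools import product

def demichel_weights(coverages):
    """Demichel equations: assuming statistically independent dot
    placement, the relative area of each Neugebauer primary is a
    product over inks of c (ink present) or 1 - c (ink absent)."""
    weights = {}
    for combo in product((0, 1), repeat=len(coverages)):
        w = 1.0
        for on, c in zip(combo, coverages):
            w *= c if on else 1.0 - c
        weights[combo] = w
    return weights

def neugebauer_xyz(coverages, np_xyz):
    """Neugebauer's model in its simplest form: the halftone's XYZ is
    the convex combination of the primaries' XYZs weighted by their
    area coverages. np_xyz maps each on/off ink combination to its
    measured XYZ triple."""
    w = demichel_weights(coverages)
    return tuple(sum(w[c] * np_xyz[c][i] for c in w) for i in range(3))
```

With one ink at 50% coverage, for example, the prediction is simply the midpoint of the paper's and the solid ink's XYZ.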
The Neugebauer model enabled much tighter control over a printing system and was used as follows: for each color to be reproduced, find the amounts of inks which, when halftoned using a given halftoning method, match that color. This involves having a model of the halftoning method, which for given ink amounts predicts corresponding NP area coverages. Having measured the NPs and using the Neugebauer model, a prediction can then be made of the resulting color from the NPs' colors and their area coverages. Using these two models in reverse, appropriate ink amounts can be obtained for in-gamut colors. Broadly the same principle is employed even in the most recent color separation approaches and halftoning techniques [10-15]. A key to the success of such color separation is the accuracy of the model used, and there have been numerous improvements here since the Neugebauer model's introduction in 1937. In the above approaches, color separation and halftoning communicate via an ink space where color separation determines amounts of inks to use for a given color and halftoning then constructs patterns that deliver them. However, only certain

Proceedings Article
01 Jan 2010
TL;DR: A new algorithm is presented that allows for an efficient calculation of Logvinenko's color descriptors for large data sets and a wide variety of illuminants.
Abstract: Recently Logvinenko introduced a new object-color space, establishing a complete color atlas that is invariant to illumination [2]. However, the existing implementation for calculating the proposed color descriptors is computationally expensive and does not work for all types of illuminants. A new algorithm is presented that allows for an efficient calculation of Logvinenko's color descriptors for large data sets and a wide variety of illuminants.

Proceedings Article
01 Jan 2010
TL;DR: The experimental results indicate that the Structural SIMilarity index by Wang et al. (2004) is the most suitable metric for measuring the sharpness quality attribute, and a set of suitable image quality metrics for each attribute is proposed.
Abstract: Image quality assessment is a difficult and complex task due to its subjectivity and dimensionality. Attempts have been made to make image quality assessment more objective, such as the introduction of image quality metrics. However, it has proven difficult to create an image quality metric correlated with perceived overall image quality. Because of this, and to reduce the dimensionality, quality attributes have been proposed to help link subjective and objective image quality. Recently, Pedersen et al. (CIC, 2009) proposed a set of meaningful quality attributes for the evaluation of color prints with the intention that they be used with image quality metrics. In this paper we evaluate image quality metrics for these quality attributes, and propose a set of suitable image quality metrics for each attribute. The experimental results indicate that the Structural SIMilarity index (SSIM) by Wang et al. (2004) is the most suitable metric for measuring the sharpness quality attribute. For the other quality attributes the results are not as conclusive. Introduction The printing industry is continuously moving forward as new products are introduced to the market. These products are becoming more and more affordable, and the technology is constantly improving. The need to assess quality has also increased, for example to verify that new technology advancements produce higher quality prints than the current technology. There are two main methods to assess Image Quality (IQ): subjective and objective. Subjective assessment is carried out by human observers. Objective assessment does not involve human observers, but rather measurement devices to obtain numerical values, or alternatively IQ metrics. These IQ metrics are usually developed to take into account the human visual system, and thus with the goal of being correlated with subjective assessment. 
Numerous IQ metrics have been proposed [1], but so far no one has succeeded in proposing an IQ metric fully correlated with subjective IQ [2–5], mostly because IQ is multi-dimensional and very complex. To reduce the complexity and dimensionality, Quality Attributes (QAs) have been used in the assessment of IQ. These QAs are terms of perception [6], such as sharpness and saturation. In earlier papers [7, 8] we proposed a set of six QAs for the evaluation of color prints:
• Color contains aspects such as hue, saturation, and color rendition, except lightness.
• Lightness ranges from "light" to "dark".
• Contrast can be described as the perceived magnitude of visually meaningful differences, global and local, in lightness and chromaticity, within the image.
• Sharpness is related to the clarity of details and definition of edges.
• Artifacts, like noise, contouring, and banding, contribute to degrading the quality of an image if detectable.
• The physical QA contains all physical parameters that affect quality, such as paper properties and gloss.
These QAs are referred to as the Color Printing Quality Attributes (CPQAs). We have created the CPQAs to help establish a link between subjective and objective evaluation. Our long term goal is to evaluate quality without involving human observers. In order to achieve this, starting from the CPQAs, we need to identify IQ metrics able to correctly measure each CPQA. Therefore, in this paper we investigate and evaluate IQ metrics in the context of CPQAs, with the goal of proposing suitable metrics for each of the CPQAs. To achieve our goal the first step is to identify relevant IQ metrics for each of the CPQAs. Then an experiment is set up to evaluate each of the CPQAs, where both naive and expert observers are included to ensure an extensive evaluation. Later, the results from the relevant metrics identified in the first step are compared against the results of the two observer groups. 
This enables us to refine the selection of IQ metrics for each CPQA, and to recommend a suitable set of IQ metrics able to measure each of the CPQAs. This paper is organized as follows: first we select the relevant metrics for the different CPQAs. Then the experimental setup is explained, and the printed images are prepared for the IQ metrics. We then evaluate the metrics before we conclude and propose future work. Selection of Image Quality Metrics for the Color Printing Quality Attributes Numerous IQ metrics have been proposed in the literature [1], and we have selected a sub-set of these, as shown in Table 1. The selection is based on the results from previous evaluations [2–4], the criteria on which the metrics were created, and their popularity. Since many of the IQ metrics are not created to evaluate all aspects of IQ, only the suitable metrics for each CPQA will be evaluated. Furthermore, for specific CPQAs we also evaluate parts of the metrics. For example, S-CIELAB combines the lightness and color differences to obtain an overall value. When suitable, we will evaluate these separately in addition to the full metric. Experimental setup In this paper, two experimental phases were carried out. In the first phase, 15 naive observers judged overall quality and the different CPQAs on a set of images. In the second phase, four expert observers judged the quality of a set of images and elaborated on different quality issues. We will give a brief introduction of the experimental setup; for more information see Pedersen et al. [18]. Table 1: Selected IQ metrics for the evaluation of CPQAs. [Matrix of the selected metrics against the CPQAs Sharpness, Color, Lightness, Contrast and Artifacts; individual entries not reproduced here.]

Proceedings Article
01 Jan 2010
TL;DR: This work indicates that existing formulae for gamma adjustment can also be related to the concept of entropy maximization, and investigates the user’s choice of gamma parameter by conducting double staircase psychophysical experiment on a wide range of monochrome images.
Abstract: Gamma adjustment is one of the simplest global tone reproduction operators. If an image is too bright or too dark, it can be made pleasing by applying a gamma greater than one (leading to a darker image) or less than one (leading to a brighter image) respectively. In recent theoretical work, the 'optimal' gamma in an information theoretic sense has been derived. The starting point of this paper is to ask: in adjusting gamma in images, do observers make a similar choice to the information theoretic optimum? Experimentally, we investigate the user's choice of gamma parameter by conducting a double staircase psychophysical experiment on a wide range of monochrome images. Two staircases begin with bright and dark images, respectively, to which gamma adjustments are made. The user progressively darkens and lightens the respective images until the staircases converge (we have the same image). The pilot experiment indicates that there is a linear relationship between the maximum entropy of an image and the gamma chosen in the experiment: our experiment provides prima facie evidence that observers adjust images to bring out information. Moreover, combining the entropy calculation with our regression line effectively provides an automatic algorithm for gamma adjustment. Finally, we also discuss the relationship between the chosen gamma and a modified non-linear masking operator and two versions of CIECAM, and find that all of these operators give similar trends, but slightly poorer fits, for predicting the gamma parameter. Put another way, our work indicates that existing formulae for gamma adjustment can also be related to the concept of entropy maximization. Introduction Gamma adjustment in the context of tone reproduction provides contrast adjustment. The simplest form of the operator is defined by the following power-law expression:
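Although the expression itself is cut off here, the operator the abstract describes is a power law on normalized intensities; a minimal sketch of it, together with the histogram-entropy idea the paper relates it to, is below. The brute-force search is our illustrative stand-in for the information-theoretic optimum, not the paper's derivation.

```python
import numpy as np

def apply_gamma(img, gamma):
    """Power-law tone operator on normalized intensities in [0, 1]:
    gamma > 1 darkens the image, gamma < 1 brightens it."""
    return np.clip(img, 0.0, 1.0) ** gamma

def entropy_bits(img, bins=256):
    """Shannon entropy of the intensity histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def max_entropy_gamma(img, gammas=np.linspace(0.2, 5.0, 49)):
    """Brute-force search for the entropy-maximizing gamma over a
    candidate grid (illustrative only)."""
    return max(gammas, key=lambda g: entropy_bits(apply_gamma(img, g)))
```

For a too-dark image the entropy-maximizing gamma comes out below one (brightening), which is the direction the abstract's argument predicts observers choose.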

Proceedings Article
01 Nov 2010
TL;DR: A simplified color prediction model for printed halftones is developed that is based only on the reflectances of the fulltone color and the paper and incorporates the PSF for modeling the ODG.
Abstract: Optical dot gain (ODG) plays an important role in predicting the color of printed halftones. Detailed knowledge of light scatter within the printing substrate might improve the accuracy of printer models and can reduce the number of training colors required to fit a model to a printing system. We propose an apparatus and method for measuring local anisotropic light scatter within graphic arts paper for predicting ODG. The setup is a modification of existing approaches for a more robust determination of the light's point spread function (PSF). To verify our approach we develop a simplified color prediction model for printed halftones that is based only on the reflectances of the fulltone color and the paper and incorporates the PSF for modeling the ODG. Our experiments show that the accuracy of the model in terms of color differences to the measured colors was improved by considering ODG. Introduction The reflectance spectrum of a print reproduction is a result of various factors including the spectral reflectance properties of inks and papers, the scattering behavior of incident light within the paper, as well as the printing process and halftone method used. Printing system properties such as the printer gamut or the optical dot gain (ODG) directly depend on these factors. In order to correctly control a printing process we need a mathematical model of the printer that accurately predicts spectral reflectances of the printout given a particular set of control values. A wide variety of models for predicting spectral reflectances of multi-ink prints can be found in the literature. Wyble and Berns [1] distinguish two general types of printer models: regression based models and first principle models. They state that most models used in practice are regression based. These models simulate the behavior of the system as a whole and are not necessarily based on physical principles. 
In general, test patches are printed and the model parameters are fitted to the measured reflectances. If one of the influencing factors, such as ink, paper or the printing process, is changed, new test patches have to be printed and the model parameters have to be fitted again. It is very difficult to calculate correction factors to transfer the printer model to a different setup. Furthermore, the number of test patches required for accurately fitting the model to a setup usually increases drastically with the number of inks. The frequently used cellular Yule-Nielsen spectral Neugebauer model (CYNSN) [6, 7, 8, 9] with x grid points requires x^k test patches, where k is the number of inks. Modeling a four-ink system utilizing five grid points results in 625 test patches. For a seven-ink system the number of test patches increases to 78,125. The measurement effort as well as the required resources in terms of consumables to print these test patches exceed any practical dimension (see table).

Expenses for fitting a printer model:
number of inks    number of test patches    area covered with test patches (5mm x 5mm)
4 (CMYK)          625                       0.0156 m2
7 (CMYKRGB)       78,125                    1.95 m2

In recent years printing with seven (CMYKRGB) and more inks became increasingly important, and new printers such as the Canon imagePROGRAF IPF6100 or HP Z3200 with up to 12 inks were introduced to the market. The described drawbacks of regression based models limit their applicability for those systems. In contrast, first principle models simulate the physical processes of the printing system. Even if we consider a printing system with more than four inks, we can assume that only a few test patches are required to fit a first principle model. In this case, the overall number of model parameters of the first principle model should be significantly smaller than the number of parameters of the regression based model (j + k ≪ m, see figure 1). Additionally, some of the results might be transferable to other printing setups. 
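The patch-count arithmetic behind the table above is simple to reproduce; the helper names below are ours.

```python
def cynsn_patch_count(grid_points, num_inks):
    """Test patches required by the cellular YNSN model: one patch per
    node of the sampling grid, i.e. grid_points ** num_inks."""
    return grid_points ** num_inks

def chart_area_m2(patches, patch_mm=5.0):
    """Total printed area if every patch is patch_mm x patch_mm."""
    return patches * (patch_mm / 1000.0) ** 2
```

With five grid points this reproduces the figures quoted in the text: 625 patches (0.0156 m2) for four inks and 78,125 patches (1.95 m2) for seven.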
If only the paper differs, none of the paper-independent parameters have to be changed. Hence, it is plausible that the effort for fitting a model to a setup can be reduced drastically using a first principle model. To better understand the concept of first principle models we need to look closer at the printing process: a raster image processor (RIP) calculates a digital halftone pattern from the printer control values (figure 1). The printer creates a physical image of this pattern on the paper (concept images in figure 1). Usually,


Proceedings Article
01 Jan 2010
TL;DR: This work develops an optimization to produce a matrix that best transforms the camera sensors such that color differences between erstwhile metamer pairs are maximized, under the new lights, to allow for greater flexibility than simply the human visual system.
Abstract: It was suggested [Bala CIC17] that metamerism could be exploited for watermarking applications by utilizing narrowband LED illuminant spectra for breaking apart metamer colors. It was noticed that, for metameric ink reflectances differing only by the K ink contribution, absolute differences between metamer pairs peaked around a few wavelengths: LEDs with those spectra were then used for displaying the watermarks. Here we investigate the idea of interposing a camera and a display system to make the effects produced more pronounced. We develop an optimization to produce a matrix that best transforms the camera sensors such that color differences between erstwhile metamer pairs are maximized, under the new lights. As well, we consider the problem of optimizing on the lighting itself in addition, leading to even more emphatic breaking apart of metamer pairs and thus more visible watermarks. Introduction In [1], Bala et al. examine whether radically changing the illuminant can successfully break metamerism sufficiently to be used in watermarking. There, printed inks metameric under illuminant D50 were used, with spectra for metameric pairs corresponding to the widest difference in K values. It was observed that subtracting these pairs of surface spectral reflectance functions always produced difference spectra that peaked in absolute value at roughly the same two spectral locations, about 518nm and 621nm. Therefore it was considered that illuminating with LED illumination near those spectral values would most increase RGB discriminability. That is, the idea would be to print a document using such metamer pairs for a hidden background and foreground that would be revealed under narrowband LED illumination. This would therefore complement the idea of using substrate fluorescent properties for hidden watermarks [2] by moving into the domain of visible light. 
The results in that work, while promising, are not as emphatic as desired, in that the separation between background color and foreground color was not convincingly large. Therefore, in this work we follow the same basic idea but interpose a camera, to allow for greater flexibility than simply the human visual system. As well, we subsequently apply what amounts to a sensor matrix transform (reminiscent of spectral sharpening [3]) specifically with the objective of maximizing the difference, under a new illuminant, of the difference between formerly metameric pairs. That is, suppose we decide to interpose a camera and a display, instead of simply using the eye and XYZ tristimulus values. Then is there a matrix transform, applied to the camera RGBs, which will best emphasize the difference between foreground and background ink? To answer this question, we develop an optimization generating a 3× 3 matrix M for linearly transforming the color space such that the difference between background and foreground colors is maximized, for metamer pairs observed under the new, LED lighting. As well, since the availability of LED light chromaticity is very broad [4], we also examine whether a different choice of LED lighting or combination of LEDs provides the most effective discriminability. It turns out that, indeed, a more general combination of LEDs is more effective than using the original two lights suggested in [1]. To do so, we include a vector of binary weights w in our optimization, where w selects whether or not to include narrowband LED spectral colors in the new illumination – i.e., we optimize on color space transform matrix M and simultaneously on the LED illumination to be used so as best to provide discriminability of watermarks visible only under LED lighting. In general, we would also like to optimize on designing ink reflectance spectra as well as illumination spectra, for this application, as in the optimization set out in [5] in another context. 
Nevertheless, even without further optimization on the inks themselves, one finds that it is already possible to do better with a camera than with the eye. Note that in the following we simply use RGB differences to drive the optimization, but one could certainly use perceptual color differences instead. As well, here we use only a small set of metamer pairs, in that the paper provides a proof of principle rather than an exhaustive solution to this general problem. Xerox metamers Here, we consider a set of 6 ink patches, divided into 3 metamer pairs. These 3 pairs have close to matching XYZ values under illuminant D50. Reflectance spectra are plotted in Fig. 1, along with the spectral differences between pair members (cf. [1]). These curves are indeed basically metamer pairs under D50: transforming to XYZ and then to CIELAB with normalizing illuminant D50, we obtain the following ∆E values between color signal pairs, given by metamer reflectances times D50:

Pair    ∆E
1–2     0.82
3–4     0.59
5–6     1.05

(And, with flat, equi-energy illumination applied, the reflectance pairs themselves have ∆E nearly zero.) Color space transform Fixed Lights In [1], difference curves were examined for metamer pairs (differing by maximum K value range), and it was pointed out that
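The shape of the matrix optimization described earlier can be illustrated with a deliberately simplified toy version. With only a norm constraint on M the maximizer is rank-1 (every row along the leading eigenvector of the difference scatter matrix), whereas the paper's objective is richer; this sketch, with our own function name, only shows the structure of the problem.

```python
import numpy as np

def best_difference_matrix(rgb_pairs):
    """Toy stand-in for the optimization: choose a 3x3 matrix M of unit
    Frobenius norm maximizing sum ||M (r1 - r2)||^2 over the metamer
    pairs' camera RGBs under the new light. Under this constraint alone
    the optimum stacks the leading eigenvector of the scatter matrix of
    the difference vectors into every row of M."""
    D = np.array([r1 - r2 for r1, r2 in rgb_pairs]).T  # 3 x n differences
    w, v = np.linalg.eigh(D @ D.T)                     # scatter spectrum
    e = v[:, -1]                                       # leading eigenvector
    return np.outer(np.ones(3) / np.sqrt(3.0), e)      # ||M||_F = 1
```

A practical version would add constraints (e.g. preserving non-metamer colors), which is where the paper's formulation departs from this sketch.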

Proceedings Article
01 Jan 2010
TL;DR: A novel approach for measuring color noise from natural images using a reduced-reference (RR) approach based on a reference camera, which was better at predicting subjective noise than the visual noise metric that is the state-of-the-art test target method for digital cameras.
Abstract: Noise is one fundamental quality attribute in digital cameras. Traditionally, noise has been measured from solid patches of artificial test targets. In image quality research, it has been difficult to find connections between test target measurements and subjective test data. In addition, image quality algorithms computed from natural images are not well correlated with subjective results. In this paper, we propose a novel approach for measuring color noise from natural images. With the proposed method, suitable surfaces for noise calculations are located in the scene using a reference camera image. It is then possible to use the same image files for subjective and objective measurements, and correlations are easier to find. The results show that the method is promising: its performance in predicting subjective noise was better than that of the visual noise metric, the state-of-the-art test target method for digital cameras. Introduction Digital cameras produce different noise types in images. For example, noise can be high-frequency achromatic noise, low-frequency red-green or yellow-blue color noise, or a combination of both. In this study, a new method to measure and characterize color noise directly from a natural image is described. The proposed method is based on a reference camera. The reference camera shoots a natural scene, and the appropriate areas are identified from the image for measurement purposes. The method has been developed for camera benchmarking studies, and requires that the images of the reference camera and the cameras to be benchmarked are produced from the same scene. The study of color noise in the literature can be divided into two distinct areas. In the first area, the goal is to describe the noise model and the weighting factors of its chrominance components. In these studies, noise level has often been measured from solid patches of specific test targets. For example, Kuang et al. [2] fit the parameters of the noise model based on empirical data. 
They also implemented a function incorporated in the noise model that described the effect of luminance level. In another study, Kelly and Keelan [3] described new weighting factors of the chrominance component for the signal-to-noise ratio calculation. In the second area of color noise study, the goal is to find the noise level or noisy areas in natural images for noise reduction purposes. Gheorghe et al. [5] proposed a method to reduce color noise in a natural image. Their method was based on a hybrid multi-scale spatial dual-tree adaptive wavelet filter in hue-saturation-value color space. Lee [4] proposed a method to detect color noise areas in natural images. His method was based on correlation between the R/G/B color channels. In addition, a noise metric for the luminance channel has been proposed [10]. These methods are based on the no-reference (NR) approach: the measurements are performed without the original noiseless images. The problem with using NR methods with digital cameras is that these methods are often sensitive to other image distortions. For example, NR noise metrics can interpret image details as noise energy. In addition, NR metrics are often highly image-content specific. The proposed method differs from the earlier methods discussed in the literature. The method is based on the reduced-reference (RR) approach. It utilizes information from a reference image, but it does not need the pixel-wise equivalence that the full-reference (FR) approach does. Pixel-wise comparison is not even possible when digital cameras are benchmarked: when images are produced from a given scene using different digital cameras, there is always rotation, scaling and 3D projection between the images. We can find an analogy between the test target method and the proposed method. With the test target method, the properties of the solid patches are known. With the proposed method, the areas suitable for measurements are located in the scene using a reference camera image. 
The selection is based on distortion. In this study, we describe how surfaces for noise calculations can be selected. In addition, we show how noise type can be characterized and noise level can be measured from these surfaces. The benefit of the proposed method compared to test target methods is that the same images can be used for subjective and objective measurements. It has been difficult to find correlations between test target computations and subjective test data such as mean opinion scores (MOS) using conventional image quality research methods. We believe that these relationships are easier to find if both measurements are made using the same natural images. Compared to NR methods, the benefit of the proposed method is that at least some features of the reference (noiseless) image and scene are known. With these features, the problems related to other image distortions and image content can be avoided. Method The proposed method is based on blocks that are located in the scene using a reference camera image. The block selection is based on three features: the chromatic energy, achromatic energy and brightness of the block. The chromatic energy of the blocks should be low. The blocks can have achromatic structural energy, but this structure should be composed more of random texture than of edges. There are two reasons why random texture in a scene can be beneficial for noise measurements. The first reason is that achromatic texture-like surfaces in scenes are sensitive to color noise in digital camera images. The second and more important reason is that texture-like surfaces present challenges for noise reduction algorithms in cameras. If the structure is edge-like, then a noise reduction method can easily filter the noise away from the smooth areas neighboring the edges. If the structure is a random texture, then it is difficult to separate the noise energy from the image structure energy using computational methods. 
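The three block-selection features named above (low chromatic energy, texture-like achromatic energy, mid-range brightness) can be sketched as follows; the specific feature formulas and thresholds are illustrative assumptions, not the authors' definitions:

```python
import numpy as np

def block_features(ycbcr, block, size=32):
    """Selection features for one block of a YCbCr image in [0, 1].

    `ycbcr` is an (H, W, 3) float array; `block` is the (row, col) of
    the block's top-left corner. Variance/mean are assumed stand-ins
    for the paper's energy and brightness features.
    """
    r, c = block
    y = ycbcr[r:r + size, c:c + size, 0]
    cb = ycbcr[r:r + size, c:c + size, 1]
    cr = ycbcr[r:r + size, c:c + size, 2]
    return {
        "chromatic_energy": cb.var() + cr.var(),  # should be low
        "achromatic_energy": y.var(),             # texture is allowed
        "brightness": y.mean(),                   # avoid clipping ends
    }

def select_blocks(ycbcr, candidates, size=32,
                  max_chroma=0.01, lo=0.15, hi=0.85):
    """Keep candidate blocks that are near-achromatic and mid-brightness."""
    keep = []
    for blk in candidates:
        f = block_features(ycbcr, blk, size)
        if f["chromatic_energy"] < max_chroma and lo < f["brightness"] < hi:
            keep.append(blk)
    return keep
```

A neutral mid-gray textured block passes these criteria, while a strongly chromatic block is rejected.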
In addition, the intensity of the selected blocks should not be too low or too high. If a block is too bright, then it becomes saturated in images produced by low-end cameras. If a block is too dark, then it is possible that a low-end camera does not detect its structural energy and that the camera's image processing software applies strong noise reduction to it. The method was applied in the YCbCr space. With an opponent color space, it is possible to separate achromatic information from chromatic information. The method operates on the principle that the control blocks are initially symmetrically located on the reference image (Figure 1a). The method then searches for new locations for the blocks in a limited neighborhood of the Cb and Cr channels by maximizing the homogeneity metric value. Figure 1b shows the blocks in their new locations for the Cb channel. The homogeneity metric used was the co-occurrence matrix energy feature COE of the blocks, calculated by Equation (1):

COE = Σ_i Σ_j p(i, j)²   (1)

where p(i, j) is the normalized gray-level co-occurrence matrix of the block.
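The co-occurrence energy feature can be computed as in the following sketch; the quantization to 16 gray levels and the horizontal single-pixel offset are implementation choices of this sketch, not parameters given in the text:

```python
import numpy as np

def cooccurrence_energy(block, levels=16):
    """Co-occurrence matrix energy (COE) of a grayscale block in [0, 1].

    Builds the normalized gray-level co-occurrence matrix p(i, j) for
    horizontally adjacent pixel pairs and returns sum_{i,j} p(i, j)^2.
    A homogeneous block concentrates the matrix in few cells (high
    energy); texture spreads the mass out and lowers the energy.
    """
    q = np.clip((block * levels).astype(int), 0, levels - 1)
    pairs_i = q[:, :-1].ravel()  # left pixel of each horizontal pair
    pairs_j = q[:, 1:].ravel()   # right pixel of each pair
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (pairs_i, pairs_j), 1.0)  # accumulate pair counts
    p = glcm / glcm.sum()                     # normalize to probabilities
    return float((p ** 2).sum())
```

A perfectly flat block yields COE = 1.0, the maximum; a random-textured block yields a value well below 1, which is why maximizing COE steers blocks toward homogeneous surfaces.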



Proceedings Article
01 Jan 2010
TL;DR: A new appearance-based HDR image splitting algorithm that incorporates the iCAM06 image appearance model for image enhancement is introduced, and the overall quality, color, and naturalness of the images produced by the new algorithm are found to be superior to those produced by the square root method.
Abstract: High dynamic range (HDR) displays that incorporate two optically-coupled image planes have recently been developed. The existence of such displays requires HDR image splitting algorithms that send appropriate signals to each plane to faithfully reproduce the appearance of the HDR input. In this paper we introduce a new appearance-based HDR image splitting algorithm that incorporates the iCAM06 image appearance model to do image enhancement. We compare its performance to the widely used luminance square root algorithm and report the results of image quality experiments that compare the two algorithms with respect to contrast, color, sharpness, naturalness, and overall quality. We find that the overall quality, color, and naturalness of the images produced by the new algorithm are superior to those produced by the square root method. The new algorithm provides a principled and effective approach for presenting HDR images on dual-imager HDR displays. Introduction Real-world scenes encompass a 10 log unit range of luminance levels, from below 0.001 cd/m2 to over 100,000 cd/m2 [1]. The 4 to 6 log unit (10,000 to 1,000,000 to 1) luminance dynamic range found in many scenes vastly exceeds the ranges that can be captured or reproduced by conventional imaging systems. For example, images captured by conventional digital cameras typically have dynamic ranges between 100:1 and 1000:1 around a level set by aperture and shutter speed. Conventional display systems are similar in that their output dynamic ranges are on the order of 100 to 1 with maximum luminance between 100 and 400 cd/m2 [2]. Over the past fifteen years, new technologies have been developed for overcoming this dynamic range bottleneck to produce image capture and display systems capable of recording and reproducing high dynamic range (HDR) images. High dynamic range image capture systems have been developed for both still images [3,4,5,6,7] and video recordings [8]. 
For displaying HDR images, several alternative systems have been developed, including both softcopy [9,10] and hardcopy [11,12] designs. HDR display systems typically reproduce high dynamic range images by using two standard dynamic range (SDR) imagers that are optically coupled. The basic principle is that one imager (such as a projector or LED array) provides spatially varying illumination for a second imager (for example a transmissive LCD or reflective print), allowing HDR image values to be reproduced. This dual image plane design requires that a given HDR input image be split into two complementary SDR components that drive the coupled systems. The widely used square root HDR splitting algorithm first converts an input HDR image to XYZ tristimulus values, then takes the square root of the Y channel and sends this achromatic signal to one image plane. A color signal, created by composing the square-rooted luminance with the corresponding X and Z channels, is sent to the other plane. Under ideal conditions, this approach will reproduce the original luminance range of the HDR input, but faithful color reproduction is not considered. To take a more principled approach to the HDR image splitting problem, we have developed a new algorithm based on the iCAM06 image appearance model [13]. The algorithm first uses iCAM06 to create an SDR color image that is sent to one plane, then calculates a luminance residual that is sent to the other plane to reproduce the HDR luminance range [14]. The goal of the algorithm is to create displayed HDR images that better reproduce the visual appearances of HDR scenes. In the following sections, we first describe prior work on HDR displays and the luminance square root HDR image splitting algorithm, we then describe our new appearance-based HDR image splitting algorithm, and report on a series of experiments designed to evaluate and compare the appearance of HDR images displayed using the two algorithms.
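The luminance square-root splitting described above can be sketched as follows, under the assumption that the two display planes combine multiplicatively; display-specific normalization of each plane's drive range is omitted:

```python
import numpy as np

def sqrt_split(xyz, eps=1e-6):
    """Luminance square-root splitting for a dual-plane HDR display.

    `xyz` is an (H, W, 3) array of XYZ tristimulus values. The back
    (achromatic) plane shows sqrt(Y); the front (color) plane shows the
    image divided by sqrt(Y), so the optical product of the two planes
    restores the original HDR values. Multiplicative coupling of the
    planes is an assumption of this sketch.
    """
    Y = xyz[..., 1]
    back = np.sqrt(np.maximum(Y, eps))   # achromatic plane: sqrt(Y)
    front = xyz / back[..., None]        # color plane: XYZ / sqrt(Y)
    return back, front
```

Each plane then spans only the square root of the original luminance range, which is what makes two SDR imagers sufficient.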



Proceedings Article
01 Jan 2010
TL;DR: An experimental validation of a proposed set of quality attributes for the evaluation of color prints shows a correspondence between the quality attributes and the criteria.
Abstract: Image quality assessment is a difficult and complex task. Quality attributes have been used in the evaluation of perceived image quality in an attempt to reduce the complexity and the dimensionality. Recently, Pedersen et al. (CIC, 2009) proposed a set of quality attributes for the evaluation of color prints. In this paper we perform an experimental validation of these quality attributes to ensure that the criteria on which they were selected are fulfilled. The results show a correspondence between the quality attributes and the criteria. The quality attributes are therefore considered a good starting point for describing overall image quality.

Proceedings Article
01 Jan 2010
TL;DR: The digital method is regarded as a good candidate for extended color perception studies, allowing more flexible test setups than tests using surface colors, and for evaluating perceived color differences as a scaling study in comparison with an existing print-based study.
Abstract: The use of LCD displays as a test platform for the evaluation of perceived color differences is examined. The setup and verification of an accurate color reproduction workflow is presented. As a first application we compare a monitor-based color difference test with a corresponding, already existing test based on printed samples. In view of the results, we regard the digital method as a good candidate for extended color perception studies, allowing more flexible test setups than tests using surface colors. Motivation Colorimetry as it is known nowadays is largely based on the understanding of human vision, which is studied through psychovisual tests. For quality control and development it is important to include the human visual response, which still today is an essential part of many projects. This demands visual evaluations in which observers judge hundreds of samples. Information technology provides the possibility of embedding such tests in a digital environment: with the use of a monitor, a more flexible test setup for visual judgments can be designed. Existing monitor-based studies have focused on colorimetric tolerances for real-world images [1] [2]; others have focused on evaluating color patches with regard to color difference formulae [3] and on threshold tolerances for CRT-generated stimuli, see [4]. While former CRT-display-based studies often evaluated perceived thresholds (JNDs), the aim of the current study is to explore the potential for evaluating perceived color differences as a scaling study in comparison to an actual user study. For this, we re-implement a recent study conducted by the Fogra Graphic Technology Research Association ('Fogra') [5], but this time using LCD displays instead of printed samples. The main challenge in accomplishing this is to design and control the digital workflow for displaying colors. 
The following sections outline the steps of the current work: in Technical Set-up we describe the general test setup, focusing on the challenges of displaying colors accurately on LCD displays and on the use of a web browser to display colors. Section Verification reports the technical accuracy achieved on the LCD displays. The section Application: Fogra Color Difference Test describes the test setup, which was adapted to enable a comparison of newly gathered LCD-based visual data with already existing print-based visual response data. The results of this comparison are found in Evaluation & Results. Section Discussion wraps up with ideas for further work on monitor-based color difference testing. Technical Set-up Within a controlled laboratory environment, following the CIE guidelines for viewing conditions [6], three high-end LCD displays were used for display and evaluation. All displays had to be characterized and profiled carefully to achieve the best possible accuracy for displaying CIELAB colors on screen. The visual test was realized in HTML and PHP. Color display values were computed directly from CIELAB to RGB within the PHP program by taking each monitor's specific primary values and gamma into account. A database was created comprising the CIELAB color reference values. Calibration To attain precise color values on the RGB LCD displays, the first step was to calibrate each monitor to six given target settings (see Table 1) and store this information in the form of a profile. The calibration was carried out with Eizo's own calibration software, ColorNavigator, which accesses the monitor-stored 10-bit or 12-bit look-up tables. Measurements for the calibration were made with a spectrophotometer (X-Rite Eye-One).

Table 1. Aimed target settings
Setting  Target value
Gamut    Monitor's native gamut size
WP       D50 (x = 0.34567, y = 0.35850)
Temp     5000 K
Bright.  120 cd/m2
Gamma    2.2
Min      Possible minimum

Three EIZO displays were used, one of model CG 220 and two of model CG 241, denoted Monitor 1, 2 and 3. The characteristics of the monitors vary from one to the next owing to differences in model type and age. Because of these characteristics, the achieved target values deviated slightly, see Table 2.

Table 2. Achieved target settings
Setting  Monitor 1        Monitor 2        Monitor 3
Gamut    native           native           native
WP       0.3456, 0.3585   0.3452, 0.3586   0.3453, 0.3585
Temp     5004 K           5020 K           5012 K
Bright.  114.8 cd/m2      121.5 cd/m2      120.4 cd/m2
Gamma    2.2              2.2              2.2
Min      0.33 cd/m2       0.27 cd/m2       0.16 cd/m2

Color accuracy in web browser We chose a web browser to display colors, since widespread and well-defined standards are available and many applications exist to verify the behavior of the workflow in many different setups. Implementation of the desired content is simple (we chose PHP as the programming language) and platform-independent. Displaying images and colors in a browser is based on the interaction of separate modules: the content, the application, the system, the monitor hardware and possibly other components. At each of these modules, a color conversion may or may not take place. Although many specifications are publicly available, it is hard to determine the precise process of color conversions within such a complex environment. The modules cannot be analyzed separately, as the resulting color can only be measured after the entire process is finished. Nonetheless, it is possible to draw conclusions about the behavior of the whole process by exchanging the modules or changing their behavior one by one. 
By extensive evaluation, it was possible to determine the standard workflow for the following modules:
• Colors: plain CSS background-colors, denoted by integer numbers in the range [0, 255]
• Markup language: HTML5 + CSS 2.1
• Browser: Safari 4.0.5
• System: Mac OS X 10.6.2 (Snow Leopard)
• Cable: DVI-D single link
• Monitor: EIZO with integrated graphics card

The standard workflow, as intended by ICC color management, requires writing sRGB colors to the webpage. The browser parses the values and assumes sRGB values, which are passed to the system. The system allows any RGB space to be set as the device space for the connected monitor. The system converts the sRGB colors into the device space, executed on the CPU or GPU. Finally, the monitor transforms these values into intensities, which result in the color visible on the screen, see Figure 1. The chosen browser considers colorspace definitions in images and passes this information to the system, which manages all color conversions. When no colorspace is given, a browser should assume sRGB as the default RGB space [7], which has been verified for the chosen browser. Accordingly, plain CSS colors do not define a color space and are therefore assumed to be in the sRGB color space. Since the values are interpreted as sRGB values within the standard workflow, only colors inside the sRGB gamut can be reproduced. The monitors, especially the Eizo models used here, are nonetheless capable of reproducing a larger gamut. We therefore defined a tweaked workflow to take advantage of the larger gamut, see Figure 1. In our tweaked workflow, the monitor hardware contains the uploaded correction values in the same way as in the standard workflow; the system profile, however, is set to sRGB. Again, any RGB value written in CSS will be assumed by the browser to be in sRGB space and passed to the system. 
As the system now has sRGB as the monitor profile, it will not alter the color values and will pass them directly to the monitor without any conversion. Therefore, device-RGB colors can be written directly in the browser, which gives us an additional degree of control and also allows us to use the full gamut of the monitor. In return, it is the responsibility of the implementation to compute the display values correctly. Computing and displaying the color values Using the calibration tool of the monitors, the RGB color space definitions shown in Table 2 as well as the three primary
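The direct CIELAB-to-device-RGB computation this workflow relies on can be sketched as follows (in Python rather than the PHP used in the study); the monitor primary matrix is an assumed measurement, not a value given here, and the standard CIELAB inversion and a 2.2 gamma encoding are used:

```python
import numpy as np

D50 = np.array([0.9642, 1.0, 0.8249])  # D50 reference white (XYZ, Y = 1)

def lab_to_xyz(L, a, b, white=D50):
    """Standard CIELAB -> XYZ inversion for the given reference white."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def finv(t):
        t = np.asarray(t, dtype=float)
        return np.where(t > 6.0 / 29.0,
                        t ** 3,
                        3 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0))
    return white * np.stack([finv(fx), finv(fy), finv(fz)])

def xyz_to_device_rgb(xyz, rgb_to_xyz, gamma=2.2):
    """XYZ -> 8-bit device RGB via the monitor's measured primary matrix.

    `rgb_to_xyz` is the 3x3 matrix built from the monitor's primaries
    (a hypothetical measurement); inverting it gives linear device RGB,
    which is then gamma-encoded to match the calibrated 2.2 target.
    """
    linear = np.linalg.solve(rgb_to_xyz, xyz)  # invert the primary matrix
    linear = np.clip(linear, 0.0, 1.0)         # clip out-of-gamut values
    return np.round(255.0 * linear ** (1.0 / gamma)).astype(int)
```

With the tweaked workflow, the resulting integers can be written as plain CSS colors and reach the monitor unconverted.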

Proceedings Article
01 Dec 2010
TL;DR: This paper proposes a set of computer graphic tools to facilitate color design in the traditional aesthetic design fields and employs a reflection model that covers the widest range of color appearances encountered by designers including solid colors, metallic colors, and the glossiness of these colors.
Abstract: Creative tools are proposed that allow color stylists to take advantage of their training in the art and design fields. A simple reflection model is employed that has the minimum number of free parameters required to design solid and metallic color finishes from conceptualization to fabrication. The parameters correspond to color specification terms familiar to designers such as face color, flop color, travel, and gloss. We demonstrate how the reflection model can also be used to develop effective interfaces for color stylists. We create a virtual mood board that allows direct selection of the reflection model parameters from pictures. We also develop an image based BRDF tweaker for adjusting color appearance directly on a 3D object. Introduction Industrial designers, interior designers, and architects all make extensive use of exemplars when they choose colors for a new product. They collect material samples from suppliers, they select illustrations from fashion magazines, and they take pictures of objects with interesting surface finishes. These exemplars are organized and displayed on “mood boards” (see Figure 1) that allow the designer to compare the colors and make final selections. The designer uses the mood board as a reference point when communicating their intentions to others and applying the design. Traditional hue, saturation, and brightness (HSB) color organization systems are used, after the fact, to provide a name and a specification for the final color. The existing computer graphic tools for color simulation and selection do not readily facilitate the color design process used in these traditional design fields. Computer graphic reflection models have been developed with the goal of accurately portraying subtle color appearance effects. These models have grown increasingly complex, and they include physical parameters for which aesthetic designers have no intuition. 
Numerous computer graphic HSB color systems have been proposed, but they are not able by themselves to convey the spatial aspects that separate one color appearance from another. In addition, in the traditional design process, exemplars and sketches are used more often than HSB color systems to ideate color. In this paper we propose a set of computer graphic tools to facilitate color design in the traditional aesthetic design fields. We employ a reflection model that covers the widest range of color appearances encountered by designers including solid colors, metallic colors, and the glossiness of these colors. The reflection model is constructed to have the minimum number of free parameters and to select these parameters so that they correspond to color specification terms familiar to designers such as face color, flop color, travel, and gloss. The model is defined in a way that allows it to be used as a manufacturing specification for the final color. Figure 1. Designers often use mood boards to propose the selection of colors and materials. They might start out collecting images from fashion magazines or other visually appealing artifacts like dried leaves or butterfly wings that can be pinned onto a mood board. As the designer’s vision develops, they rearrange and refine the images on the board. Finally, the designer uses the mood board when discussing their concepts with others and as a reference when implementing the design. The art director of a style and fashion magazine was invited to participate in the user study. She proposed “Orange Ball of Paris” that appeals to a young and fashionable audience. We also demonstrate how the reflection model can be used to develop effective interfaces for color designers. We create a virtual mood board that allows direct selection of the reflection model parameters from pictures. We also describe a novel color design interface for tweaking color and appearance directly on a 3D surface. 
Relevant work There is a variety of background work on providing artistic and design-oriented controls over computer graphic reflection models. Design galleries present renderings with variations of computer graphic parameters as an interface for tweaking scene parameters in [11]. Ngan [12] created image-driven navigation systems for specifying the parameters of various BRDF models. Pellacini created a psychophysically-based BRDF model that aided the specification of gloss parameters in [13]. Kautz [8] allows an artist to create a bitmap texture that represents the shape and color of a reflectance distribution. Khan introduced a system for image-space material editing in [9]; that paper focused on the rendering system that could apply new material designs to existing images. An intuitive and artistic interface for painting BRDFs was presented in [3]. The interface provides the ability to create a BRDF by positioning and manipulating highlights on a spherical canvas. A mapping between painted highlights and specular lobes was created for an extended Ward model. This software is constrained to a point light source and a spherical canvas. Sloan [19] developed a novel technique for capturing 3D non-photorealistic shading models from 2D artwork. The interface has a tool for selecting a region of a 2D source image to approximate an illuminated sphere that is later used to render 3D models. This allows for unconstrained lighting and shape, but does not extract pure BRDF information: the information collected convolves lighting with the BRDF, similar to a prefiltered environment map. Outside of our own work, there has been a variety of research on realistic rendering of optical phenomena relevant to the design of automotive paints. Rendering of interference and wave-based optical phenomena was covered in [4, 7, 20]. Image-based measurement techniques were applied to automotive paints in [6, 16]. 
Face/flop/travel/gloss reflection model This section describes the reflection model. First, the variables of the reflection model are defined, along with how they control the physical appearance of the resulting material. Since these variables cannot be directly related to graphics rendering, they are linked to a parametric representation that has previously been tied to a real-time rendering engine [17]. This reflection model is connected to the physical world through measurement tools and rapid color prototyping systems. Face/flop/travel/gloss form We describe a reflection model that has the following parameters: face, flop, travel, and gloss. Figure 2 gives a visualization of how the parameters of this reflection model correlate to the shape of the BRDF. “Face” is the color at 15◦ off the specular angle and is represented as a Lab tristimulus value. This angle establishes the directionally diffuse color while avoiding the specular highlight resulting from the first-surface coatings; face is intended to be the color of the diffuse portion of the material as close to a specular angle as possible. “Flop” is the color far from the specular angle and is also represented as a Lab tristimulus value; the particular angle that flop is associated with is the variable travel. Travel represents how color changes with viewing direction. If the face and flop colors are identical, the color is a solid color and travel does not alter the appearance of the material. Figure 2. A diagram illustrating how the four components of the metallic reflection model relate to the shape of the BRDF lobe. The incoming light L is at 45◦ from the surface normal N. The specular reflection direction is
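As a rough illustration of how three of the four parameters could drive a rendered color, the sketch below blends between the face and flop Lab colors as a function of aspecular angle, with travel controlling the transition rate; the blending function and the angle constants are hypothetical, not the model described here:

```python
import numpy as np

def metallic_color(aspecular_deg, face_lab, flop_lab, travel,
                   face_angle=15.0, flop_angle=110.0):
    """Illustrative face/flop/travel interpolation in Lab.

    Returns the Lab color at `aspecular_deg` degrees off the specular
    direction by blending from the face color (defined at 15 degrees
    off specular) toward the flop color. `travel` > 1 shifts the
    transition toward the specular direction, i.e. the color changes
    faster near specular. This is a sketch, not the paper's model.
    """
    face_lab = np.asarray(face_lab, dtype=float)
    flop_lab = np.asarray(flop_lab, dtype=float)
    t = (aspecular_deg - face_angle) / (flop_angle - face_angle)
    t = np.clip(t, 0.0, 1.0) ** (1.0 / travel)  # travel reshapes the blend
    return (1.0 - t) * face_lab + t * flop_lab
```

Note that when face and flop are identical the blend weight cancels out, matching the statement above that travel has no effect on a solid color.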

Proceedings Article
01 Jan 2010
TL;DR: The aim of this research is to reduce the amount of time required to compare the overall quality of ICC profiles and give a more thorough evaluation than is typically done by subjectively evaluating each aspect of the printer profile individually.
Abstract: The increased interest in color management has resulted in more options for users to choose between for their color management needs. Evaluating the quality of each of these color management packages is a challenging and time-consuming task. We propose an evaluation using image quality metrics to assess the quality of a printer profile. This will determine the best solution for a given set of objectives. The goal of this work is to create a thorough evaluation for a printer profile to determine the most appropriate profile without using observers. A printer profile has several aspects that can be evaluated separately: colorimetric accuracy, invertibility, grayscale reproduction, perceptual image quality, smoothness, and gamut mapping. In this paper we look for a solution for applying image quality metrics to evaluate the different aspects of ICC printer profiles. The aim of this research is to reduce the amount of time required to compare the overall quality of ICC profiles and to give a more thorough evaluation than is typically achieved by subjectively evaluating each aspect of the printer profile individually.