
# Color Imaging Conference

About: The Color Imaging Conference is an academic conference that publishes mainly in the areas of gamut and color imaging. Over its lifetime, the conference has published 1,857 papers, which have received 19,175 citations.

Topics: Gamut, Color image, Color space, RGB color model, Color balance

##### Papers published on a yearly basis

##### Papers


•

01 Nov 2004

TL;DR: It is shown that Max-RGB and Grey-World are two instantiations of the Minkowski norm, and that for a large calibrated dataset L6-norm colour constancy works best overall (the authors have improved the performance achieved by a simple normalization-based approach).

Abstract: Colour constancy is a central problem for any visual system performing a task which requires stable perception of the colour world. To solve the colour constancy problem we estimate the colour of the prevailing light and then, at a second stage, remove it. Two of the most commonly used simple techniques for estimating the colour of the light are the Grey-World and Max-RGB algorithms. In this paper we begin by observing that these two colour constancy computations will respectively return the right answer if the average scene colour is grey or the maximum is white (and, conversely, the degree of failure is proportional to the extent to which these assumptions fail to hold). We go on to ask the following question: "Would we perform better colour constancy by assuming the scene average is some shade of grey?". We give a mathematical answer to this question. Firstly, we show that Max-RGB and Grey-World are two instantiations of the Minkowski norm. Secondly, we show that for a large calibrated dataset L6-norm colour constancy works best overall (we have improved the performance achieved by a simple normalization-based approach). Surprisingly, we found performance to be similar to that of more elaborate algorithms.
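The Minkowski-norm family the abstract describes ("Shades of Gray") is compact enough to sketch. The sketch below is an illustration, not the authors' code: the function name, the unit-norm convention for the illuminant, and the von-Kries-style per-channel division are all my choices; only the p-norm idea (p = 1 is Grey-World, p → ∞ is Max-RGB, p = 6 was their best overall) comes from the paper.

```python
import numpy as np

def shades_of_gray(image, p=6):
    """Estimate the illuminant with a Minkowski p-norm average of the
    pixels, then divide it out (a von-Kries-style correction).

    p=1 reduces to Grey-World, p->infinity to Max-RGB; the paper
    reports p=6 working best on a large calibrated dataset.

    image: float array of shape (H, W, 3), values in [0, 1].
    Returns (corrected image, estimated illuminant direction).
    """
    flat = image.reshape(-1, 3)
    # Minkowski p-norm mean per channel.
    illum = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
    illum = illum / np.linalg.norm(illum)        # keep only the chromaticity
    corrected = image / (illum * np.sqrt(3))     # hypothetical scaling choice
    return np.clip(corrected, 0.0, 1.0), illum
```

On a uniformly lit scene the estimated direction is exactly the light's chromaticity, so the corrected image becomes achromatic; on real scenes the estimate only holds to the extent the shade-of-grey assumption does.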

507 citations

•

01 Jan 1996

TL;DR: The aim of this color space is to complement the current color management strategies by enabling a third method of handling color in the operating systems, device drivers and the Internet that utilizes a simple and robust device independent color definition.
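The simple, robust, device-independent color definition this 1996 TL;DR describes reads like the sRGB proposal. Assuming that is the color space in question, its published transfer function (a short linear segment near black, then a 2.4-exponent power law) can be sketched as follows; the breakpoint constants are the standard sRGB ones, not anything taken from this abstract.

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB encoding: linear segment below the breakpoint,
    offset power law (exponent 1/2.4) above it."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def srgb_to_linear(y):
    """Exact inverse of the encoding above."""
    y = np.asarray(y, dtype=float)
    return np.where(y <= 0.04045,
                    y / 12.92,
                    np.power((y + 0.055) / 1.055, 2.4))
```

The linear toe avoids the infinite slope a pure power law would have at zero, which is what makes the definition robust for 8-bit device pipelines.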

507 citations

•

01 Jan 2002

TL;DR: This document describes the single set of revisions to the CIECAM97s model that make up the CIECAM02 color appearance model and provides an introduction to the model and a summary of its structure.

Abstract: The CIE Technical Committee 8-01, color appearance models for color management applications, has recently proposed a single set of revisions to the CIECAM97s color appearance model. This new model, called CIECAM02, is based on CIECAM97s but includes many revisions [1-4] and some simplifications. A partial list of revisions includes a linear chromatic adaptation transform, a new non-linear response compression function, and modifications to the calculations for the perceptual attribute correlates. The format of this paper is an annotated description of the forward equations for the model.

Introduction. The CIECAM02 color appearance model builds upon the basic structure and form of the CIECAM97s [5,6] color appearance model. This document describes the single set of revisions to the CIECAM97s model that make up the CIECAM02 color appearance model. There were many, often conflicting, considerations, such as compatibility with CIECAM97s, prediction performance, computational complexity, invertibility, and other factors. The format of this paper differs from previous papers introducing a color appearance model, in which a general description of the model is often provided, then a discussion of its performance, and finally the forward and inverse equations listed separately in an appendix. Performance of the CIECAM02 model will be described elsewhere [7], and for brevity this paper will focus on the forward model. Specifically, this paper attempts to document the decisions that went into the design of CIECAM02. For a complete description of the forward and inverse equations, as well as usage guidelines, interested readers are urged to refer to the TC 8-01 web site [8] or to the CIE for the latest draft or final copy of the technical report. This paper is not intended to provide a definitive reference for implementing CIECAM02, but an introduction to the model and a summary of its structure.

Data Sets. The CIECAM02 model, like CIECAM97s, is based primarily on a set of corresponding colors experiments and a collection of color appearance experiments. The corresponding color data sets [9,10] were used for the optimization of the chromatic adaptation transform and the D factor. The LUTCHI color appearance data [11,12] was the basis for optimization of the perceptual attribute correlates. Other data sets and spaces were also considered: the NCS system was a reference for the e and hue fitting, and the chroma scaling was also compared to the Munsell Book of Color. Finally, the saturation equation was based heavily on recent experimental data [13].

Summary of Forward Model. A color appearance model [14,15] provides a viewing-condition-specific means for transforming tristimulus values to or from perceptual attribute correlates. The two major pieces of this model are a chromatic adaptation transform and equations for computing correlates of perceptual attributes, such as brightness, lightness, chroma, saturation, colorfulness, and hue. The chromatic adaptation transform takes into account changes in the chromaticity of the adopted white point. In addition, the luminance of the adopted white point can influence the degree to which an observer adapts to that white point; the degree of adaptation, or D factor, is therefore another aspect of the chromatic adaptation transform. Generally, between the chromatic adaptation transform and the computation of perceptual attribute correlates there is also a non-linear response compression. The chromatic adaptation transform and D factor were derived from the corresponding colors data sets; the non-linear response compression was derived from physiological data and other considerations; and the perceptual attribute correlates were derived by comparing predictions to magnitude estimation experiments, such as various phases of the LUTCHI data, and other data sets, such as the Munsell Book of Color. Finally, the entire structure of the model is generally constrained to be invertible in closed form and to take into account a sub-set of color appearance phenomena.

Viewing Condition Parameters. It is convenient to begin by computing viewing-condition-dependent constants. First the surround is selected, and then values for F, c, and Nc can be read from Table 1. For intermediate surrounds these values can be linearly interpolated [2].

Table 1. Viewing condition parameters for different surrounds.

| Surround | F   | c     | Nc   |
|----------|-----|-------|------|
| Average  | 1.0 | 0.69  | 1.0  |
| Dim      | 0.9 | 0.59  | 0.95 |
| Dark     | 0.8 | 0.525 | 0.8  |

The value of FL can be computed using equations (1) and (2), where LA is the luminance of the adapting field in cd/m². Note that this two-piece formula quickly goes to very small values at mesopic and scotopic levels, and while it may resemble a cube-root function, there are considerable differences between the two-piece function and a cube root as the luminance of the adapting field gets very small.

$$k = \frac{1}{5L_A + 1} \tag{1}$$

$$F_L = 0.2\,k^4\,(5L_A) + 0.1\,(1 - k^4)^2\,(5L_A)^{1/3} \tag{2}$$

The value n is a function of the luminance factor of the background and provides a very limited model of spatial color appearance. The value of n ranges from 0 for a background luminance factor of zero to 1 for a background luminance factor equal to the luminance factor of the adopted white point. The n value can then be used to compute Nbb, Ncb, and z, which are then used during the computation of several of the perceptual attribute correlates. These calculations can be performed once for a given viewing condition.
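The viewing-condition step above (Table 1 plus equations (1) and (2)) is mechanical enough to sketch directly. This is only an illustration of that one step, not an implementation of CIECAM02; the function name and the string keys for the surrounds are my own choices.

```python
def viewing_parameters(L_A, surround="average"):
    """Viewing-condition constants for the CIECAM02 forward model:
    F, c, Nc looked up from Table 1, and the two-piece F_L function
    of the adapting-field luminance L_A (in cd/m^2), eqs. (1)-(2)."""
    table = {  # surround: (F, c, Nc), per Table 1 of the paper
        "average": (1.0, 0.69, 1.0),
        "dim":     (0.9, 0.59, 0.95),
        "dark":    (0.8, 0.525, 0.8),
    }
    F, c, Nc = table[surround]
    k = 1.0 / (5.0 * L_A + 1.0)                              # eq. (1)
    F_L = (0.2 * k**4 * (5.0 * L_A)
           + 0.1 * (1.0 - k**4)**2 * (5.0 * L_A)**(1.0 / 3.0))  # eq. (2)
    return F, c, Nc, F_L
```

For photopic luminances the k⁴ term vanishes and F_L tracks the cube-root branch, while at very low L_A both terms collapse toward zero, which is exactly the behavior the text warns distinguishes this function from a plain cube root.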

394 citations

•

01 Jan 2001

TL;DR: This course outlines recent advances in high-dynamic-range imaging, from capture to display, that remove this restriction, thereby enabling images to represent the color gamut and dynamic range of the original scene rather than the limited subspace imposed by current monitor technology.

Abstract: Current display devices can display only a limited range of contrast and colors, which is one of the main reasons that most image acquisition, processing, and display techniques use no more than eight bits per color channel. This course outlines recent advances in high-dynamic-range imaging, from capture to display, that remove this restriction, thereby enabling images to represent the color gamut and dynamic range of the original scene rather than the limited subspace imposed by current monitor technology. This hands-on course teaches how high-dynamic-range images can be captured, the file formats available to store them, and the algorithms required to prepare them for display on low-dynamic-range display devices. The trade-offs at each stage, from capture to display, are assessed, allowing attendees to make informed choices about data-capture techniques, file formats, and tone-reproduction operators. The course also covers recent advances in image-based lighting, in which HDR images can be used to illuminate CG objects and realistically integrate them into real-world scenes. Through practical examples taken from photography and the film industry, it shows the vast improvements in image fidelity afforded by high-dynamic-range imaging.
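The last stage the course describes, preparing an HDR image for a low-dynamic-range display, is a tone-reproduction operator. As one concrete example (my choice of operator, following Reinhard's well-known global form, not anything specific to these course notes), a minimal sketch:

```python
import numpy as np

def tone_map_global(hdr, key=0.18):
    """Global tone mapping: scale scene luminance by its geometric
    mean (the 'key' of the scene), then compress with L/(1+L).
    This follows Reinhard's global operator; `key` is the target
    middle-grey value, a conventional default.

    hdr: float array (H, W, 3) of scene-referred radiance values.
    Returns a display-referred image with luminance in [0, 1).
    """
    eps = 1e-6
    # Rec. 709 luma weights for the luminance channel.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # geometric mean luminance
    scaled = key * lum / log_avg                   # map scene key to `key`
    mapped = scaled / (1.0 + scaled)               # compress into [0, 1)
    # Rescale each channel by the luminance ratio to preserve hue.
    return hdr * (mapped / (lum + eps))[..., None]
```

The compressive L/(1+L) curve is what lets arbitrarily large scene radiances land inside the display's limited range while keeping mid-tones roughly linear, the trade-off the course asks attendees to weigh against local operators.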

294 citations

•

[...]

TL;DR: This work provides concise MATLAB™ implementations of two of the spatial Retinex techniques for lightness computation by pixel comparison, along with test results on several images and a discussion of the results.

Abstract: Many different descriptions of Retinex methods of lightness computation exist. We provide concise MATLAB™ implementations of two of the spatial techniques of making pixel comparisons. The code is presented, along with test results on several images and a discussion of the results. We also discuss the calibration of input images and the post-Retinex processing required to display the output images. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1636761)

285 citations