Author

Changjun Li

Bio: Changjun Li is an academic researcher from Liaoning University. The author has contributed to research in topics: Chromatic adaptation & Color difference. The author has an h-index of 14 and has co-authored 44 publications receiving 1173 citations. Previous affiliations of Changjun Li include Imperial Chemical Industries & University of Leeds.


Papers
Proceedings Article
01 Jan 2002
TL;DR: This document describes the single set of revisions to the CIECAM97s model that make up theCIECAM02 color appearance model and provides an introduction to the model and a summary of its structure.
Abstract: The CIE Technical Committee 8-01, colour appearance models for colour management applications, has recently proposed a single set of revisions to the CIECAM97s color appearance model. This new model, called CIECAM02, is based on CIECAM97s but includes many revisions [1-4] and some simplifications. A partial list of revisions includes a linear chromatic adaptation transform, a new non-linear response compression function, and modifications to the calculations for the perceptual attribute correlates. The format of this paper is an annotated description of the forward equations for the model.

Introduction. The CIECAM02 color appearance model builds upon the basic structure and form of the CIECAM97s [5,6] color appearance model. This document describes the single set of revisions to the CIECAM97s model that make up the CIECAM02 color appearance model. There were many, often conflicting, considerations, such as compatibility with CIECAM97s, prediction performance, computational complexity, invertibility, and other factors. The format of this paper differs from previous papers introducing a color appearance model, which often provide a general description of the model, then a discussion of its performance, and finally list the forward and inverse equations separately in an appendix. Performance of the CIECAM02 model will be described elsewhere [7], and for brevity this paper will focus on the forward model. Specifically, this paper attempts to document the decisions that went into the design of CIECAM02. For a complete description of the forward and inverse equations, as well as usage guidelines, interested readers are urged to refer to the TC 8-01 web site [8] or to the CIE for the latest draft or final copy of the technical report. This paper is not intended as a definitive reference for implementing CIECAM02 but as an introduction to the model and a summary of its structure.

Data Sets. The CIECAM02 model, like CIECAM97s, is based primarily on a set of corresponding-colors experiments and a collection of color appearance experiments. The corresponding-color data sets [9,10] were used for the optimization of the chromatic adaptation transform and the D factor. The LUTCHI color appearance data [11,12] was the basis for optimization of the perceptual attribute correlates. Other data sets and spaces were also considered: the NCS system was a reference for the eccentricity factor e and the hue fitting, and the chroma scaling was also compared to the Munsell Book of Color. Finally, the saturation equation was based heavily on recent experimental data [13].

Summary of Forward Model. A color appearance model [14,15] provides a viewing-condition-specific means for transforming tristimulus values to or from perceptual attribute correlates. The two major pieces of this model are a chromatic adaptation transform and equations for computing correlates of perceptual attributes, such as brightness, lightness, chroma, saturation, colorfulness, and hue. The chromatic adaptation transform takes into account changes in the chromaticity of the adopted white point. In addition, the luminance of the adopted white point can influence the degree to which an observer adapts to that white point; the degree of adaptation, or D factor, is therefore another aspect of the chromatic adaptation transform. Generally, between the chromatic adaptation transform and the computation of perceptual attribute correlates there is also a non-linear response compression. The chromatic adaptation transform and D factor were derived from experimental data in the corresponding-colors data sets. The non-linear response compression was derived from physiological data and other considerations. The perceptual attribute correlates were derived by comparing predictions to magnitude estimation experiments, such as various phases of the LUTCHI data, and other data sets, such as the Munsell Book of Color. Finally, the entire structure of the model is generally constrained to be invertible in closed form and to take into account a sub-set of color appearance phenomena.

Viewing Condition Parameters. It is convenient to begin by computing viewing-condition-dependent constants. First the surround is selected, and then values for F, c, and Nc can be read from Table 1. For intermediate surrounds these values can be linearly interpolated [2].

Table 1. Viewing condition parameters for different surrounds

Surround | F   | c     | Nc
Average  | 1.0 | 0.69  | 1.0
Dim      | 0.9 | 0.59  | 0.95
Dark     | 0.8 | 0.525 | 0.8

The value of FL can be computed using equations 1 and 2, where LA is the luminance of the adapting field in cd/m². Note that this two-piece formula quickly goes to very small values for mesopic and scotopic levels; while it may resemble a cube-root function, there are considerable differences between this two-piece function and a cube root as the luminance of the adapting field gets very small.

k = 1 / (5·LA + 1)    (1)

FL = 0.2·k^4·(5·LA) + 0.1·(1 − k^4)^2·(5·LA)^(1/3)    (2)

The value n is a function of the luminance factor of the background and provides a very limited model of spatial color appearance. The value of n ranges from 0 for a background luminance factor of zero to 1 for a background luminance factor equal to the luminance factor of the adopted white point. The n value can then be used to compute Nbb, Ncb, and z, which are then used during the computation of several of the perceptual attribute correlates. These calculations can be performed once for a given viewing condition.
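The viewing-condition setup described above can be sketched in a few lines. The surround table and the k/FL formulas follow the text directly; the expressions for n, Nbb, Ncb, and z are the standard CIECAM02 forms, which are an assumption here since the excerpt only names those quantities:

```python
import math

# Surround parameters from Table 1: F, c, Nc.
SURROUND = {
    "average": (1.0, 0.69, 1.0),
    "dim":     (0.9, 0.59, 0.95),
    "dark":    (0.8, 0.525, 0.8),
}

def viewing_parameters(L_A, Y_b, Y_w=100.0, surround="average"):
    """Viewing-condition constants for CIECAM02.

    L_A : adapting-field luminance in cd/m^2
    Y_b : luminance factor of the background
    Y_w : luminance factor of the adopted white (usually 100)
    """
    F, c, N_c = SURROUND[surround]
    k = 1.0 / (5.0 * L_A + 1.0)                        # eq. (1)
    F_L = (0.2 * k**4 * (5.0 * L_A)                    # eq. (2)
           + 0.1 * (1.0 - k**4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0))
    n = Y_b / Y_w                                      # background ratio, 0..1
    # Standard CIECAM02 expressions (assumed; not given in the excerpt):
    N_bb = N_cb = 0.725 * (1.0 / n) ** 0.2
    z = 1.48 + math.sqrt(n)
    return {"F": F, "c": c, "N_c": N_c, "k": k,
            "F_L": F_L, "n": n, "N_bb": N_bb, "N_cb": N_cb, "z": z}
```

For a common test condition of LA ≈ 318.31 cd/m² with a 20% background, this gives FL ≈ 1.17, n = 0.2, Nbb ≈ 1.0, and z ≈ 1.93, and needs to run only once per viewing condition, as the text notes.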

409 citations

Journal ArticleDOI
TL;DR: This article tests the performance of the CIE 2002 colour appearance model, CIECAM02, in predicting three types of colour discrimination data sets: large- and small-magnitude colour differences under daylight illuminants, and small-magnitude colour differences under illuminant A.
Abstract: Can a single colour model be used for all colorimetric applications? This article intends to answer that question. Colour appearance models have been developed to predict colour appearance under different viewing conditions. They are also capable of evaluating colour differences because of their embedded uniform colour spaces. This article first tests the performance of the CIE 2002 colour appearance model, CIECAM02, in predicting three types of colour discrimination data sets: large- and small-magnitude colour differences under daylight illuminants and small-magnitude colour differences under illuminant A. The results showed that CIECAM02 gave reasonable performance compared with the best available formulae and uniform colour spaces. It was further extended to give accurate predictions to all types of colour discrimination data. The results were very encouraging in that the CIECAM02 extensions performed second best among all the colour models tested and only slightly poorer than the models that were developed to fit a particular data set. One extension derived to fit all types of data can predict well for colour differences having a large range of difference magnitudes. © 2006 Wiley Periodicals, Inc. Col Res Appl, 31, 320–330, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.20227
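The extensions evaluated here compress the CIECAM02 lightness J and colourfulness M before taking a Euclidean distance. A minimal sketch of that structure, assuming the widely published CAM02-UCS coefficients (c1 = 0.007, c2 = 0.0228), which are not listed in the abstract above:

```python
import math

# CAM02-UCS-style colour difference on top of CIECAM02 correlates.
# c1, c2 are the commonly published CAM02-UCS values (an assumption here).
C1, C2 = 0.007, 0.0228

def to_ucs(J, M, h_deg):
    """Map CIECAM02 J, M, h to the (J', a', b') uniform space."""
    Jp = (1.0 + 100.0 * C1) * J / (1.0 + C1 * J)   # compressed lightness
    Mp = math.log(1.0 + C2 * M) / C2               # compressed colourfulness
    h = math.radians(h_deg)
    return Jp, Mp * math.cos(h), Mp * math.sin(h)

def delta_e(cam1, cam2):
    """Euclidean distance between two (J, M, h) triples in the UCS."""
    return math.dist(to_ucs(*cam1), to_ucs(*cam2))
```

The compression is what lets one space serve both small and large colour differences: J and M are expanded near the origin and compressed at high values before the Euclidean metric is applied.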

296 citations

Journal ArticleDOI
TL;DR: A simplified version of CMCCAT97 is described, which not only is significantly simpler and eliminates the problems of reversibility, but also gives a more accurate prediction to almost all experimental data sets than does the original transform.
Abstract: CMCCAT97 is a chromatic adaptation transform included in CIECAM97s, the CIE 1997 colour appearance model, for describing colour appearance under different viewing conditions, and is recommended by the Colour Measurement Committee of the Society of Dyers and Colourists for predicting the degree of colour inconstancy of surface colours. Among the many transforms tested, this transform gave the most accurate predictions for a number of experimental data sets. However, the structure of CMCCAT97 is considered complicated and causes problems when applications require the use of its reverse mode. This article describes a simplified version of CMCCAT97, called CMCCAT2000, which not only is significantly simpler and eliminates the problems of reversibility, but also gives more accurate predictions for almost all experimental data sets than does the original transform. © 2002 John Wiley & Sons, Inc. Col Res Appl, 27, 49–58, 2002
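Transforms in this family are linear von Kries-style scalings in a "sharpened" RGB space, which is what makes the reverse mode a simple matrix inverse. A sketch of that shared structure, using the CAT02 matrix from CIECAM02 purely for illustration (CMCCAT2000 defines its own, different matrix, not reproduced here):

```python
# Linear von Kries-style chromatic adaptation: XYZ -> sharpened RGB,
# per-channel gain toward the destination white, then back to XYZ.
# M below is the CAT02 matrix, used only as an illustrative stand-in.
M = [[ 0.7328, 0.4296, -0.1624],
     [-0.7036, 1.6975,  0.0061],
     [ 0.0030, 0.0136,  0.9834]]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def inverse_3x3(A):
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def adapt(xyz, xyz_src_white, xyz_dst_white, D=1.0):
    """Map a stimulus seen under the source white to the destination white.

    D is the degree of adaptation: 1 = complete, 0 = none.
    """
    rgb = mat_vec(M, xyz)
    rw_s = mat_vec(M, xyz_src_white)
    rw_d = mat_vec(M, xyz_dst_white)
    # Per-channel gains, blended with the identity by the D factor.
    rgb_c = [(D * (wd / ws) + 1.0 - D) * v
             for v, ws, wd in zip(rgb, rw_s, rw_d)]
    return mat_vec(inverse_3x3(M), rgb_c)
```

Because every step is linear, inverting the transform only requires inverting the gains and the matrix, which is exactly the reversibility property the simplified CMCCAT2000 restores.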

100 citations

Journal ArticleDOI
TL;DR: A new colour rendering index, CRI-CAM02UCS, is proposed. It predicts visual results more accurately than the CIE Ra and combines the two components necessary for predicting colour rendering into one metric: a chromatic adaptation transform and a uniform colour space based on the CIECAM02 model.
Abstract: A new colour rendering index, CRI-CAM02UCS, is proposed. It predicts visual results more accurately than the CIE Ra. It includes two components necessary for predicting colour rendering in one metric: a chromatic adaptation transform and a uniform colour space based on the CIE-recommended colour appearance model, CIECAM02. The new index gave the same ranks as those of CIE Ra for the six lamps tested, regardless of the sample sets used. It was also found that the methods based on the size of colour gamut did not agree with those based on the test-sample method. © 2011 Wiley Periodicals, Inc. Col Res Appl, 2012
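A test-sample index of this kind averages the colour shifts of a set of reflectance samples between the lamp under test and a reference illuminant, then maps the mean shift onto a 0-100 scale. A minimal sketch of that scoring step; the 4.6 factor is the one used by the classic CIE Ra and is illustrative only, since a CAM02-UCS-based index refits this scaling to its own colour-difference units:

```python
def rendering_index(delta_es, scale=4.6):
    """CRI-style score from per-sample colour differences.

    delta_es : colour difference of each test sample between the lamp
               under test and the reference illuminant, computed in
               some uniform colour space.
    scale    : 4.6 is the classic CIE Ra factor (illustrative here).
    """
    special = [100.0 - scale * de for de in delta_es]  # per-sample R_i
    return sum(special) / len(special)                 # general index

# A lamp whose samples barely shift scores near 100.
```

Replacing the colour space in which delta_es is computed (here, a CAM02-based uniform space instead of the obsolete U*V*W*) is precisely where the proposed index differs from CIE Ra.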

56 citations


Cited by
Journal ArticleDOI
TL;DR: A survey of many recent developments and state-of-the-art methods in computational color constancy, including a taxonomy that separates existing algorithms into three groups: static methods, gamut-based methods, and learning-based methods.
Abstract: Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed, and methods are separated into three groups: static methods, gamut-based methods, and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available datasets. Finally, various freely available methods, of which some are considered to be state of the art, are evaluated on two datasets.
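Of the three groups, static methods are the simplest: they estimate the illuminant from fixed image statistics with no training. A minimal sketch of one classic static method, the gray-world assumption (the average scene reflectance is achromatic, so per-channel means estimate the illuminant colour):

```python
# Gray-world colour constancy: assume the average scene reflectance is
# achromatic. The per-channel means then estimate the illuminant colour,
# and dividing them out removes the colour cast.

def gray_world(image):
    """image: list of (r, g, b) pixels with linear positive values."""
    n = len(image)
    means = [sum(px[c] for px in image) / n for c in range(3)]
    gray = sum(means) / 3.0                    # overall brightness level
    gains = [gray / m for m in means]          # illuminant correction
    return [tuple(px[c] * gains[c] for c in range(3)) for px in image]
```

After correction all three channel means are equal, i.e. the image average is gray; gamut-based and learning-based methods exist precisely because this assumption fails on scenes dominated by one colour.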

537 citations

Proceedings ArticleDOI
17 Jun 2003
TL;DR: This paper quantifies the colourfulness of natural images in order to perceptually qualify the effect that processing or coding has on colour; a metric fitted to the experimental results achieves a correlation of over 90% with the data.
Abstract: We want to integrate colourfulness in an image quality evaluation framework. This quality framework is meant to evaluate the perceptual impact of a compression algorithm or an error-prone communication channel on the quality of an image. The image might go through various enhancement or compression algorithms, resulting in a different -- but not necessarily worse -- image. In other words, we will measure quality but not fidelity to the original picture. While modern colour appearance models are able to predict the perception of colourfulness of simple patches on uniform backgrounds, there is no agreement on how to measure the overall colourfulness of a picture of a natural scene. We try to quantify the colourfulness of natural images to perceptually qualify the effect that processing or coding has on colour. We set up a psychophysical category scaling experiment and ask people to rate images using 7 categories of colourfulness. We then fit a metric to the results and obtain a correlation of over 90% with the experimental data. The metric is meant to be used in real time on video streams. We ignored any issues related to hue in this paper.
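The fitted metric is commonly given in terms of simple opponent-channel statistics, which is what makes it fast enough for video. A sketch of that widely cited form; the opponent definitions and the 0.3 weighting are from the published metric and are assumed here, since the abstract does not list them:

```python
import math

# Opponent-channel colourfulness: statistics of rg = R - G and
# yb = (R + G)/2 - B. The 0.3 weight is the published value (assumed).

def colourfulness(pixels):
    """pixels: list of (r, g, b) tuples on any common scale, e.g. 0-255."""
    rg = [r - g for r, g, _ in pixels]
    yb = [(r + g) / 2.0 - b for r, g, b in pixels]

    def mean_std(xs):
        m = sum(xs) / len(xs)
        return m, math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

    m_rg, s_rg = mean_std(rg)
    m_yb, s_yb = mean_std(yb)
    return (math.hypot(s_rg, s_yb)           # spread of opponent signals
            + 0.3 * math.hypot(m_rg, m_yb))  # plus their mean offset
```

Achromatic images score zero, and the score grows with both the variety and the saturation of the colours present, needing only per-frame means and standard deviations.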

511 citations

Book
21 Nov 2005
TL;DR: This landmark book is the first to describe HDRI technology in its entirety and covers a wide range of topics, from capture devices to tone reproduction and image-based lighting, leading to an unparalleled visual experience.
Abstract: This landmark book is the first to describe HDRI technology in its entirety and covers a wide range of topics, from capture devices to tone reproduction and image-based lighting. The techniques described enable you to produce images that have a dynamic range much closer to that found in the real world, leading to an unparalleled visual experience. As both an introduction to the field and an authoritative technical reference, it is essential to anyone working with images, whether in computer graphics, film, video, photography, or lighting design. New material includes chapters on High Dynamic Range Video Encoding, High Dynamic Range Image Encoding, and High Dynamic Range Display Devices. Written by the inventors and initial implementors of High Dynamic Range Imaging, the book covers the basic concepts (including just enough about human vision to explain why HDR images are necessary), image capture, image encoding, file formats, display techniques, tone mapping for lower dynamic range display, and the use of HDR images and calculations in 3D rendering. The range and depth of coverage suits the knowledgeable researcher as well as those who are just starting to learn about High Dynamic Range imaging. Table of Contents: Introduction; Light and Color; HDR Image Encodings; HDR Video Encodings; HDR Image and Video Capture; Display Devices; The Human Visual System and HDR Tone Mapping; Spatial Tone Reproduction; Frequency Domain and Gradient Domain Tone Reproduction; Inverse Tone Reproduction; Visible Difference Predictors; Image-Based Lighting.

417 citations

Journal ArticleDOI
01 Aug 2008
TL;DR: This work proposes a tone mapping operator that can minimize visible contrast distortions for a range of output devices, ranging from e-paper to HDR displays, and shows that the problem can be solved very efficiently by employing higher order image statistics and quadratic programming.
Abstract: We propose a tone mapping operator that can minimize visible contrast distortions for a range of output devices, ranging from e-paper to HDR displays. The operator weights contrast distortions according to their visibility predicted by the model of the human visual system. The distortions are minimized given a display model that enforces constraints on the solution. We show that the problem can be solved very efficiently by employing higher order image statistics and quadratic programming. Our tone mapping technique can adjust image or video content for optimum contrast visibility taking into account ambient illumination and display characteristics. We discuss the differences between our method and previous approaches to the tone mapping problem.
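The core idea, minimizing expected contrast distortion subject to the display's range, can be illustrated with a toy quadratic program: choose per-bin tone-curve slopes that stay close to 1 where the image's log-luminance histogram has mass, subject to the slopes summing to the display's log range. This is a deliberate simplification of the paper's method (an assumed formulation, not the authors' actual objective); with only the equality constraint it even has a closed form:

```python
# Toy display-adaptive tone mapping as a quadratic program:
#   minimise  sum_k p_k * (s_k - 1)^2
#   subject to  sum_k s_k = display_range,
# where s_k is the tone-curve slope over unit-width log-luminance bin k
# and p_k its histogram probability. The Lagrangian condition
# p_k * (s_k - 1) = mu/2 gives s_k = 1 + mu / p_k in closed form.

def tone_curve_slopes(p, display_range):
    """p: per-bin histogram probabilities; returns one slope per bin."""
    n = len(p)
    mu = (display_range - n) / sum(1.0 / pk for pk in p)
    s = [1.0 + mu / pk for pk in p]
    # A real solver (e.g. active-set QP, as the paper's efficiency claim
    # suggests) would also enforce s_k >= 0; here we simply clip.
    return [max(sk, 0.0) for sk in s]
```

For p = [0.7, 0.2, 0.1] and a 2-log-unit display against a 3-log-unit image, the busy first bin keeps most of its contrast (s ≈ 0.91) while the sparsest bin is compressed hardest (s ≈ 0.39), which is the qualitative behaviour the operator above formalizes with a visual-system distortion model.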

410 citations
