
Showing papers on "Color constancy" published in 1988


Journal ArticleDOI
TL;DR: The simulations indicate how configural image properties trigger interactions among spatially organized contrastive, boundary segmentation, and filling-in processes to generate emergent percepts, providing the first unified mechanistic explanation of 1-D and 2-D brightness phenomena.
Abstract: Computer simulations of a neural network model of 1-D and 2-D brightness phenomena are presented. The simulations indicate how configural image properties trigger interactions among spatially organized contrastive, boundary segmentation, and filling-in processes to generate emergent percepts. They provide the first unified mechanistic explanation of this set of phenomena, a number of which have received no previous mechanistic explanation. Network interactions between a Boundary Contour (BC) System and a Feature Contour (FC) System comprise the model. The BC System consists of a hierarchy of contrast-sensitive and orientationally tuned interactions, leading to a boundary segmentation. On and off geniculate cells and simple and complex cortical cells are modeled. Output signals from the BC System segmentation generate compartmental boundaries within the FC System. Contrast-sensitive inputs to the FC System generate a lateral filling-in of activation within FC System compartments. The filling-in process is defined by a nonlinear diffusion mechanism. Simulated phenomena include network responses to stimulus distributions that involve combinations of luminance steps, gradients, cusps, and corners of various sizes. These images include impossible staircases, bull’s-eyes, nested combinations of luminance profiles, and images viewed under nonuniform illumination conditions. Simulated phenomena include variants of brightness constancy, brightness contrast, brightness assimilation, the Craik-O’Brien-Cornsweet effect, the Koffka-Benussi ring, the Kanizsa-Minguzzi anomalous brightness differentiation, the Hermann grid, and a Land Mondrian viewed under constant and gradient illumination that cannot be explained by retinex theory.
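
The filling-in stage can be illustrated with a small numerical sketch: feature activity diffuses laterally but is blocked wherever the boundary segmentation places a boundary, so each enclosed compartment settles toward a uniform value while sharp steps marked by boundaries survive. This is only a 1-D simplification, not the authors' network; the function name, parameters, and test image below are assumptions.

    import numpy as np

    def fill_in(feature, boundary, n_iters=5000, rate=0.2):
        # Boundary-gated diffusive filling-in on a 1-D signal.
        # feature  : contrast-derived input (stand-in for FC System signals)
        # boundary : 1 where a boundary blocks diffusion between samples i and i+1
        v = feature.astype(float).copy()
        perm = 1.0 - boundary.astype(float)      # permeability: 0 across boundaries
        for _ in range(n_iters):
            flow = perm * (v[1:] - v[:-1])       # lateral spread, gated by boundaries
            v[:-1] += rate * flow
            v[1:] -= rate * flow
        return v

    # A luminance step plus a shallow illumination gradient; only the step gets a boundary.
    x = np.linspace(0.0, 1.0, 100)
    image = 0.3 + 0.4 * (x > 0.5) + 0.1 * x
    boundary = np.zeros(99)
    boundary[49] = 1.0
    filled = fill_in(image, boundary)
    print(filled[:3], filled[-3:])               # each compartment ends up nearly uniform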

391 citations


Journal ArticleDOI
TL;DR: Model calculations show that the behavior observed in bees is consistent with the retinex theory, i.e., an algorithm using long-range interactions is required to explain color constancy.
Abstract: A multicolored display was illuminated by 3 bands of wavelengths corresponding to the maxima of the spectral sensitivities of the 3 types of photoreceptors found in the bee retina. The intensity of each band could be varied individually. The light fluxes emitted by the colored areas of the multicolored display were determined quantitatively. Free-flying honeybees were trained with sugar solution to choose one of the colored areas. The illumination was then changed in such a way that the light fluxes formerly emitted by the training area were now measured on another area. When the trained bees were tested under those conditions, they still chose the training area. The relative positions of the colored areas were changed in order to exclude learning of position. It is concluded that color vision in bees is, in a certain range, independent of the spectral content of the illumination. Model calculations show that the behavior observed in bees is consistent with the retinex theory (Land, 1977), i.e., an algorithm using long-range interactions is required to explain color constancy.
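
The long-range interaction at the heart of such a retinex-style account can be sketched as a ratio computation: each receptor channel's signal from a patch is divided by that channel's response pooled over the whole display, which cancels any per-band rescaling of the illumination. The reflectance numbers and the pooling rule below are illustrative assumptions, not the model calculations reported in the paper.

    import numpy as np

    # Hypothetical reflectances of three display patches in the bee's three receptor bands.
    patches = np.array([[0.8, 0.2, 0.3],
                        [0.1, 0.7, 0.4],
                        [0.5, 0.5, 0.9]])

    def normalized_signals(illum):
        # Long-range normalization: divide each channel by its mean response
        # over the whole display, so per-band changes of the light cancel out.
        fluxes = patches * illum
        return fluxes / fluxes.mean(axis=0)

    print(normalized_signals(np.array([1.0, 1.0, 1.0])))   # training illumination
    print(normalized_signals(np.array([0.4, 1.5, 0.9])))   # changed illumination: same output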

134 citations


Proceedings ArticleDOI
05 Jun 1988
TL;DR: A model of surface reflectance recovery for near-Mondrian images is developed by extending the basic lightness approach to a vector model of illumination, reflectance, and receptors, using edge detection and derivatives to screen out slowly changing illumination from color scene images.
Abstract: The authors combine an alternate version of B.K.P. Horn's computation (1974) for recovering lightness from Land's retinex scheme, as modified by A. Blake (1985), with a finite-dimensional linear model for color images. The authors recover surface spectral reflectance from images of near-Mondrian scenes by using edge detection and derivatives to screen out slowly changing illumination from color scene images. They first develop a model for lightness recovery, based on Horn's, because a similar treatment is needed for the FDM case (a particular finite-dimensional linear model). Based on the equations for the one-dimensional case, they develop a model of surface reflectance recovery for near-Mondrian images by extending the basic approach to the vector model of illumination, reflectance, and receptors.
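
In one dimension the underlying lightness computation can be sketched as: take logarithms, differentiate, discard small derivatives attributable to slowly varying illumination, and reintegrate. The threshold, test signal, and function below are assumptions for illustration; the paper's contribution is the vector (FDM) extension of this idea, which is not reproduced here.

    import numpy as np

    def recover_lightness_1d(signal, threshold=0.05):
        # Land/Horn-style 1-D lightness recovery:
        # log -> derivative -> drop small (illumination) changes -> reintegrate.
        d = np.diff(np.log(signal))
        d[np.abs(d) < threshold] = 0.0
        lightness = np.concatenate(([0.0], np.cumsum(d)))
        return np.exp(lightness - lightness.max())   # scale so the brightest patch is 1

    # A Mondrian-like row: piecewise-constant reflectance under a smooth illumination ramp.
    x = np.linspace(0.0, 1.0, 200)
    reflectance = np.where(x < 0.5, 0.3, 0.8)
    illumination = 0.5 + 0.5 * x
    recovered = recover_lightness_1d(reflectance * illumination)
    print(recovered[0], recovered[-1])   # close to 0.375 and 1.0, i.e. the 0.3 : 0.8 reflectance ratio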

91 citations


Journal ArticleDOI
TL;DR: In order to study color constancy, the color appearance of the center of a center-surround paradigm was measured by using multiple-alternative forced-response matching, and it was determined that, if the ratios of R, G, and B of the center to R, G, and B of the surround remain constant as the illuminant changes, color constancy results.
Abstract: In order to study color constancy, the color appearance of the center of a center–surround paradigm was measured by using multiple-alternative forced-response matching. The center was presented with (1) no surround, (2) an adjacent chromatic surround, or (3) a chromatic surround separated from the center by an achromatic gap. The center and the surrounds were presented under various simulated illuminants ranging from illuminant A to illuminant D75. We found that when no surround is present, color constancy fails; however, when surrounds are present, some degree of color constancy is displayed. We also found that color constancy is poor when chromatic induction is minimal. In addition, it was determined that, if the ratios of R, G, and B of the center to R, G, and B of the surround remain constant as the illuminant changes, color constancy results. (R, G, and B correspond to the outputs of the retinal color mechanisms.)
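
The ratio condition in the last sentence is easy to state numerically: if an illuminant change multiplies the centre and surround responses in each retinal channel by the same factor, the centre-to-surround ratios of R, G, and B are untouched. The response values and scale factors below are invented solely to illustrate the arithmetic.

    import numpy as np

    # Hypothetical (R, G, B) mechanism responses to center and surround under illuminant A.
    center_A = np.array([12.0, 8.0, 3.0])
    surround_A = np.array([20.0, 10.0, 4.0])

    # An illuminant change scales each channel of both fields by the same factor.
    scale = np.array([0.6, 1.1, 1.8])
    center_D75, surround_D75 = center_A * scale, surround_A * scale

    print(center_A / surround_A)         # [0.6  0.8  0.75]
    print(center_D75 / surround_D75)     # identical ratios, so constancy is predicted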

79 citations


Proceedings ArticleDOI
05 Dec 1988
TL;DR: The colour constancy equation is derived and used to enumerate those properties of illuminant and surface reflectance required for colour constancy; the results indicate that good constancy requires that receptoral gain be controlled.
Abstract: By approaching colour constancy as a problem of predicting colour appearance, we derive the colour constancy equation, which we use to enumerate those properties of illuminant and surface reflectance required for colour constancy. We then use a physical realisability constraint on surface reflectances to construct the set of illuminants under which the image observed can have arisen. Two distinct algorithms arise from employing this constraint in conjunction with the colour constancy equation: the first corresponds to normalisation according to a coefficient rule; the second is considerably more complex, and allows a large number of parameters in the illuminant to be recovered. The simpler algorithm has been tested extensively on images of real Mondriaans, taken under different coloured lights, and displays good constancy. The results also indicate that good constancy requires that receptoral gain be controlled.
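
The simpler of the two algorithms amounts to a diagonal (coefficient-rule) normalisation: each receptor channel is rescaled by a single gain chosen from the image. The sketch below uses the channel maximum as that gain, which is only a stand-in for the realisability-constrained choice in the paper; the toy Mondriaan values are likewise assumptions.

    import numpy as np

    def coefficient_rule(responses):
        # Diagonal normalisation: one gain per channel, here the channel maximum
        # over the scene (a simple surrogate for the paper's constrained estimate).
        gains = responses.max(axis=0)
        return responses / gains

    # A toy Mondriaan (four patches, three channels) viewed under a reddish light.
    patches = np.array([[0.9, 0.2, 0.1],
                        [0.3, 0.8, 0.4],
                        [0.5, 0.5, 0.9],
                        [0.2, 0.3, 0.6]])
    reddish_light = np.array([1.4, 0.9, 0.6])
    print(coefficient_rule(patches * reddish_light))   # patches recovered up to a per-channel scale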

47 citations


Proceedings ArticleDOI
05 Dec 1988
TL;DR: In this paper, a finite-dimensional linear model of illumination and surface reflectance is employed to solve the color constancy problem, whereby the spectrum of the reflected light can be uniquely decomposed into a component due to the illuminant and another component due to the surface reflectance.
Abstract: Color constancy can be achieved by analyzing the chromatic aberration in an image. Chromatic aberration spatially separates light of different wavelengths and this allows the spectral power distribution of the light to be extracted. This is more information about the light than is registered by the cones of the human visual system or by a color television camera; and, using it, we show how color constancy, the separation of reflectance from illumination, can be achieved. As examples, we consider grey-level images of (a) a colored dot under unknown illumination, and (b) an edge between two differently colored regions under unknown illumination. Our first result is that in principle we can determine completely the spectral power distribution of the reflected light from the dot or, in the case of the color edge, the difference in the spectral power distributions of the light from the two regions. By employing a finite-dimensional linear model of illumination and surface reflectance, we obtain our second result, which is that the spectrum of the reflected light can be uniquely decomposed into a component due to the illuminant and another component due to the surface reflectance. This decomposition provides the complete spectral reflectance function, and hence color, of the surface as well as the spectral power distribution of the illuminant. Up to the limit of the accuracy of the finite-dimensional model, this effectively solves the color constancy problem.
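
Once the spectral power distribution of the reflected light is in hand, the second step can be sketched as a small fitting problem: express the measurement as the product of a low-dimensional illuminant expansion and a low-dimensional reflectance expansion and solve for both sets of coefficients. The polynomial and sinusoidal basis functions below are arbitrary assumptions standing in for whichever finite-dimensional model is adopted, and the decomposition is only determined up to a shared scale factor.

    import numpy as np
    from scipy.optimize import least_squares

    wl = np.linspace(400.0, 700.0, 61)               # wavelength samples (nm)
    t = (wl - 400.0) / 300.0
    E_basis = np.stack([np.ones_like(t), t, t ** 2])                              # illuminant basis (assumed)
    S_basis = np.stack([np.ones_like(t), np.sin(np.pi * t), np.cos(np.pi * t)])   # reflectance basis (assumed)

    def reflected(params):
        # Color signal = illuminant spectrum times surface reflectance.
        e, s = params[:3], params[3:]
        return (e @ E_basis) * (s @ S_basis)

    # Simulate a "measured" reflected spectrum, then recover both components by fitting.
    true_params = np.array([1.0, -0.4, 0.2, 0.6, 0.25, -0.1])
    measured = reflected(true_params)
    fit = least_squares(lambda p: reflected(p) - measured, x0=np.ones(6))
    print(np.round(fit.x, 3))   # reproduces the measurement; components fixed only up to a shared scale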

19 citations


Journal ArticleDOI
TL;DR: A physiological model of color constancy is described that is nonlinear and hence may add challenge to the concomitant design of color atlases.
Abstract: A strategy for cooperative illumination and reflectance design to produce color-stable environments includes reduction of metamerism and paramerism, followed by attempts at color constancy. Given a model of color constancy based on linear basis-function expansions of illuminant and reflectance spectra, all these goals could be served by designing reflectances and illuminants to inhabit certain compatible subspaces of spectral functions. Such linear models suggest two new indices of metamerism, one for designing illuminants and the other for designing reflectances. Implications are noted for designing color atlases. Finally, a physiological model of color constancy is described that is nonlinear and hence may add challenge to the concomitant design.
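
The subspace idea behind such designs can be made concrete with a projection: a candidate reflectance is fitted with the chosen basis functions, and the size of the residual indicates how far it falls outside the designed subspace. The three-term polynomial basis and the residual measure below are assumptions for illustration only; they are not the indices proposed in the paper.

    import numpy as np

    wl = np.linspace(400.0, 700.0, 31)
    t = (wl - 400.0) / 300.0
    B = np.stack([np.ones_like(t), t, t ** 2], axis=1)     # assumed reflectance basis (31 x 3)

    def subspace_residual(reflectance):
        # Fraction of the spectrum lying outside the model subspace:
        # small values mean the surface is well described by the linear model.
        coeffs, *_ = np.linalg.lstsq(B, reflectance, rcond=None)
        residual = reflectance - B @ coeffs
        return np.linalg.norm(residual) / np.linalg.norm(reflectance)

    smooth = 0.4 + 0.3 * t                                 # lies in the subspace
    spiky = smooth + 0.2 * np.sin(8 * np.pi * t)           # poorly represented
    print(subspace_residual(smooth), subspace_residual(spiky))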

4 citations



Book ChapterDOI
01 Jan 1988
TL;DR: It has long been known that the color of an object when it is part of a general scene will not change markedly with those considerable changes in the relative amounts of red, green, and blue light that characterize illumination from sunlight versus blue skylight versus gray daylight versus tungsten light versus fluorescent light.
Abstract: It is a cultural commonplace deriving from Newton that the color of an object in the world around us depends on the relative amounts of red, green, and blue light coming from the object to our eyes. In contradiction, it has long been known that the color of an object when it is part of a general scene will not change markedly with those considerable changes in the relative amounts of red, green, and blue light that characterize illumination from sunlight versus blue skylight versus gray daylight versus tungsten light versus fluorescent light. This contradiction is called color constancy. We need not examine the explanations of color constancy by Helmholtz and those who have followed him during the last century because, as the following experiments show, the paradox does not really exist: The color of an object is not determined by the composition of the light coming from the object.

3 citations




01 Jun 1988
TL;DR: In this thesis, the author applies CVIDS to machine vision by estimating albedo, illumination, and speed from the CVIDS output to Mondrian inputs; simulations suggest that the statistics of the estimates are stationary, despite nonstationary input photon noise.
Abstract: The Intensity Dependent Spread (IDS) operation (Cornsweet, 1985) was first introduced as a model of retinal spatial summation and human brightness perception. IDS is of interest in machine vision because it is a spatial bandpass filter which adapts locally to photon noise. IDS has recently been extended to spacetime as the Constant-Velocity IDS (CVIDS) operation. CVIDS performs spacetime adaptive bandpass filtering while remaining consistent with human spatial and temporal psychophysical data. This thesis applies CVIDS to machine vision by estimating albedo, illumination, and speed from the CVIDS output to Mondrian inputs. The following is shown: (1) Albedo, illumination, and speed can be estimated from the CVIDS response to one-dimensional step edges, assuming that albedo changes are abrupt relative to illumination changes. Simulations suggest that the statistics of the estimates are stationary, despite nonstationary input photon noise. (2) Two-dimensional optical flow can be completely constrained by applying two or more DOG filters with different sigmas to an image sequence, if at least two of the bandpass outputs have linearly independent gradients and obey the optical flow constraint equation at each point. IDS and CVIDS can be used the same way because they are bandpass filters, but they can also constrain Mondrian edge motion twice by separably encoding two frequency components in a single output image sequence. (3) IDS output to a two-greylevel input is parametrically equivalent to a log transform followed by DOG on the same input. IDS output is necessarily different from log-DOG output for more complex inputs because of the local adaptiveness of IDS. (4) Three variations on CVIDS are developed. An adaptive bandpass operation for input functions of time alone (f(t)) is derived from CVIDS. A spacetime bandpass operation exhibiting all CVIDS behavior except its adaptivity is also derived. Finally, CVIDS is extended to a vector operation allowing computation of speed, albedo, and illumination within one algorithm. (5) A new method of extending Land's Retinex theory to two dimensions is derived using line integration. Similarities between the method and CVIDS behavior are discussed.
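
Finding (3) refers to a log transform followed by difference-of-Gaussians filtering, which is easy to sketch and gives a feel for the bandpass behavior that IDS is compared against. The sketch below is that comparison operation, not IDS or CVIDS themselves, and the sigmas and test image are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def log_dog(signal, sigma_center=1.0, sigma_surround=3.0):
        # Log transform followed by a difference-of-Gaussians bandpass filter.
        log_s = np.log(signal)
        return gaussian_filter1d(log_s, sigma_center) - gaussian_filter1d(log_s, sigma_surround)

    # A two-greylevel (Mondrian-style) edge under a multiplicative illumination ramp.
    x = np.linspace(0.0, 1.0, 200)
    image = np.where(x < 0.5, 0.3, 0.7) * (0.5 + x)
    response = log_dog(image)
    print(response.argmax(), response.argmin())    # the strongest responses sit next to the edge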

Proceedings ArticleDOI
12 Feb 1988
TL;DR: In this paper, the spectral sensitivities of the R and G receptors show a large overlap, while that of the B receptors overlaps little with the other two, and the effect of unseen spectral differences between objects can reveal metamerism.
Abstract: Some results concerning lighting for human color vision can be generalized to robot color vision. These results depend mainly on the spectral sensitivities of the color channels, and their interaction with the spectral power distribution of the light. In humans, the spectral sensitivities of the R and G receptors show a large overlap, while that of the B receptors overlaps little with the other two. A color vision model that proves useful for lighting work---and which also models many features of human vision---is one in which the "opponent color" signals are T = R - G, and D = B - R. That is, a "red minus green" signal comes from the receptors with greatest spectral overlap, while a "blue minus yellow" signal comes from the two with the least overlap. Using this model, we find that many common light sources attenuate red-green contrasts, relative to daylight, while special lights can enhance red-green contrast slightly. When lighting changes cannot be avoided, the eye has some ability to compensate for them. In most models of "color constancy," only the light's color guides the eye's adjustment, so a lighting-induced loss of color contrast is not counteracted. Also, no constancy mechanism can overcome metamerism---the effect of unseen spectral differences between objects. However, we can calculate the extent to which a particular lighting change will reveal metamerism. I am not necessarily arguing for opponent processing within robots, but only presenting results based on opponent calculations.
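
The opponent bookkeeping above can be sketched directly: integrate toy receptor sensitivities against the light reflected from two surfaces, form T = R - G and D = B - R, and compare the red-green contrast between the surfaces under a broadband and a narrowband lamp. All sensitivities, lamp spectra, and surface reflectances below are invented for illustration; only the opponent arithmetic follows the text.

    import numpy as np

    wl = np.linspace(400.0, 700.0, 61)

    def gauss(mu, sd):
        # Toy spectral curve used for sensitivities, lamps, and reflectances alike.
        return np.exp(-0.5 * ((wl - mu) / sd) ** 2)

    R, G, B = gauss(600, 40), gauss(550, 40), gauss(450, 30)   # R and G overlap strongly

    def opponent_signals(reflectance, illuminant):
        light = reflectance * (illuminant / illuminant.sum())  # equate total lamp power
        r, g, b = (light * R).sum(), (light * G).sum(), (light * B).sum()
        return r - g, b - r                                    # T = R - G, D = B - R

    daylight = np.ones_like(wl)                 # flat, daylight-like spectrum
    narrowband = gauss(580, 15)                 # spiky lamp concentrated near 580 nm
    reddish = 0.3 + 0.5 * gauss(620, 50)        # two test surfaces
    greenish = 0.3 + 0.5 * gauss(530, 50)

    for lamp in (daylight, narrowband):
        T1, _ = opponent_signals(reddish, lamp)
        T2, _ = opponent_signals(greenish, lamp)
        print(abs(T1 - T2))                     # red-green contrast; smaller under the narrowband lamp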