
Showing papers by "Gui Yun Tian published in 1999"


Journal ArticleDOI
TL;DR: The main result of this paper is to demonstrate equivalence between color constancy and color invariant computation, derived empirically from color object recognition experiments.
Abstract: Color images depend on the color of the capture illuminant and on object reflectance. As such, image colors are not stable features for object recognition; however, stability is necessary since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: first, run a color constancy algorithm; second, apply an invariant normalization. In color constancy preprocessing the illuminant color is estimated and then, at a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g. RGBs might be divided by a local RGB average). In theory the color constancy approach is superior since it works in a scene-independent way: color invariant normalization can be calculated post-color constancy, but the converse is not true. However, in practice color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate equivalence between color constancy and color invariant computation. The equivalence is derived empirically from color object recognition experiments: colorful objects are imaged under several different colors of light. To remove the dependency due to illumination, these images are preprocessed using either a perfect color constancy algorithm or the comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated.
The import of this is that the perfect color constancy algorithm can determine the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization preprocessing, the color content is used as a cue for object recognition. Counter-intuitively, perfect color constancy does not support perfect recognition. In comparison, the color invariant normalization does deliver near-perfect recognition. That the color constancy approach fails implies that the effective scene illuminant differs from the measured illuminant. This explanation has merit since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene dependent then optimal color constancy processing is also scene dependent and so is equally a color invariant normalization.
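The comprehensive color image normalization referred to above alternates two simple steps, pixel-wise and channel-wise normalization, until a fixed point is reached; the fixed point is the same regardless of the illuminant color, which is what makes it an invariant. The sketch below is a minimal illustration of that idea; the function name, iteration limit, and tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def comprehensive_normalization(pixels, iters=200, eps=1e-6):
    """Sketch of iterative pixel/channel normalization.

    pixels: float array of shape (N, 3), one RGB triplet per pixel,
    all values strictly positive. Alternates two steps to a fixed point:
      1) divide each pixel by its R+G+B sum (removes intensity/shading),
      2) divide each channel by 3x its mean   (removes illuminant color).
    """
    x = np.asarray(pixels, dtype=float).copy()
    for _ in range(iters):
        prev = x.copy()
        # Step 1: row (pixel) normalization -- each pixel sums to 1.
        x = x / x.sum(axis=1, keepdims=True)
        # Step 2: column (channel) normalization -- each channel mean -> 1/3.
        x = x / (3.0 * x.mean(axis=0, keepdims=True))
        if np.abs(x - prev).max() < eps:
            break
    return x
```

Because a change of illuminant color acts (approximately) as an independent scaling of each channel, both the original and the re-lit image converge to the same normalized result.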

27 citations


Journal ArticleDOI
TL;DR: In this paper, a miniaturized displacement sensor for deep hole measurement is reported. By exploiting the induced eddy current effects detected by chip coils, the sensor generates a digital signal; the transducer uses two contact probes to transmit the displacement to a non-contact sensing element.
Abstract: A miniaturised displacement sensor for deep hole measurement is reported in this paper. By exploiting the induced eddy current effects detected by chip coils, the sensor generates a 'digital' signal. The sensor chip coil can be manufactured by processes similar to those used for manufacturing a printed circuit board (PCB), which allows it to be miniaturised. The paper elaborates on the construction and the mechanism by which the displacement is directly transferred to a frequency output. It also reports on the transducer, which uses two contact probes for transmitting the displacement to a non-contact sensing element. Experimental results demonstrate the stability, linearity, measurement range and accuracy of the sensor system.

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a transducer that uses two contact probes for transmitting the displacement to a non-contact sensing element, and demonstrate the stability, linearity, measurement range and accuracy of the sensor system.

7 citations


Proceedings ArticleDOI
16 Sep 1999
TL;DR: Optimal color constancy procedures are evaluated against color normalization; illumination is found to depend on both the light source and the characteristics of the scene, and the color invariant normalization also delivers near-perfect recognition.
Abstract: Colors recorded in an image depend on the color of the capture illuminant. As such, image colors are not stable features for object recognition, but we wish they were stable, since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Color constancy algorithms attempt to infer and remove the illuminant color through image analysis. Over the last two decades, various models for color constancy have been developed. Unfortunately, color constancy algorithms are still not good enough to support object recognition. In this paper, we evaluate optimal color constancy procedures against color normalization. Two perfect color constancy algorithms are described. One is perfect color constancy by the scene, which arrives at an estimate of the illuminant not through algorithmic inference but through measurement: the light source is measured using a spectroradiometer, assuming the reflectances of the object surfaces are known. The other is perfect color constancy by the illuminant, which arrives at an estimate of the illuminant through measurement, assuming the reflectances of the object surfaces are unknown. Instead of color constancy, color normalization normalizes color images in terms of their context to remove illumination. To remove the dependency due to illumination, images in a calibrated dataset are preprocessed using either color constancy or color invariant normalization. Two experiments are reported in the paper. In the first experiment, the optimal algorithms of perfect color constancy based on measurement were tested using a calibrated image dataset. In the second experiment, the performance of the optimal color constancy algorithms is compared with color invariant normalization. Unfortunately, measurement-driven color constancy by the illuminant does not support perfect recognition. However, color constancy preprocessing based on a scene-dependent 'effective illuminant' facilitates near-perfect recognition.
In comparison, the color invariant normalization also delivers near-perfect recognition. The failure of color constancy by the illuminant is understandable because the measured illuminant does not correspond to the actual effective illuminant. Rather, we found illumination to depend on both the light source and the characteristics of the scene.
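The abstracts above use "color content as a cue for recognition" without specifying the matcher; a common choice for this kind of experiment is color histogram matching via histogram intersection. The sketch below shows that idea under that assumption; bin count and value range are illustrative.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized 3D RGB histogram. img: array (..., 3), values in [0, 1)."""
    h, _ = np.histogramdd(img.reshape(-1, 3),
                          bins=(bins, bins, bins),
                          range=((0, 1), (0, 1), (0, 1)))
    return h / h.sum()  # distribution sums to 1

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; identical color distributions score 1.0."""
    return np.minimum(h1, h2).sum()
```

After color constancy or invariant-normalization preprocessing, an object is recognized by finding the database histogram with the highest intersection score against the query image's histogram.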

2 citations


Proceedings ArticleDOI
25 Feb 1999
TL;DR: Mapping image colours based on a measure of the illuminant results in good indexing; a mapping that depends on both scene content and illumination performs better still. This is surprising, since accepted doctrine holds that a change in illuminant should produce a systematic change in image colours that affects all images equally.
Abstract: Because the colours in an image convey a lot of information, almost all image database systems support colour content queries. Unfortunately, colour-based queries do not always return the images that were sought, even though there is a good colour match. Such failures are easily explained. We as human observers do not see raw image colours but rather make an interpretation of the colours in an image. Our interpretation allows us to decouple the intrinsic colour of the objects and surfaces captured in an image from the colour of the illumination. An indoor picture with a yellowish colour cast is interpreted as just that; we do not think that all the objects in the scene are more yellow than they usually are. In contrast, image database systems generally make no comparable interpretation. In this paper we set forth an experimental study that attempts to quantify the nature and magnitude of the illumination colour problem. We are interested in measuring how image colours depend on illumination and how this dependency might be removed. Our work is based on a small, but accurately calibrated, image database comprising 11 colourful objects imaged under 4 typical household lights. Because illumination colour impacts so dramatically on image colours, querying this dataset by colour content delivers very poor indexing. To improve indexing performance, the illumination bias in images needs to be removed. This is done by applying an appropriate mapping to the image colours (e.g. a reddish cast can be removed by reducing the redness at each pixel). We found that mapping image colours based on a measure of the illuminant results in good indexing. However, when the mapping depends on both scene content and illumination together, the indexing performance is better still.
This is a surprising result, since it is accepted doctrine that a change in illuminant should result in a systematic change in image colours, and that this change should affect all images equally (scene content should not add any useful information). Of course, if illumination colour depends on scene content then it will be difficult to measure, since the spectral statistics of the scene are also unknown. If measurement is difficult, estimation (using a colour constancy algorithm) must be more difficult still.
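The mapping described above, removing a reddish cast by reducing the redness at each pixel, is typically a diagonal (von Kries-style) correction: divide each channel by the corresponding component of the measured illuminant. A minimal sketch, with an illustrative function name and scaling convention:

```python
import numpy as np

def correct_illuminant(img, illuminant_rgb):
    """Remove an illuminant colour cast with a per-channel diagonal map.

    img: float array (..., 3); illuminant_rgb: measured (or estimated)
    RGB of the light source. Dividing each channel by the illuminant
    component maps the image toward its appearance under neutral light;
    e.g. a reddish light means dividing by a large R, reducing redness.
    """
    ill = np.asarray(illuminant_rgb, dtype=float)
    return img / ill  # broadcasts over the last (channel) axis
```

When the mapping instead depends on scene content as well as the light, as the paper's best-performing correction does, the diagonal entries would be derived from a scene-dependent "effective illuminant" rather than the measured light alone.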

2 citations