Author

Brian V. Funt

Bio: Brian V. Funt is an academic researcher at Simon Fraser University. He has contributed to research on the topics of color constancy and standard illuminants, has an h-index of 40, and has co-authored 177 publications receiving 7,750 citations. His previous affiliations include the University at Buffalo and the University of British Columbia.


Papers
Journal ArticleDOI
TL;DR: Tests of the new color-constant color-indexing algorithm show that it works very well even when the illumination varies spatially in intensity and color, circumventing the need for color-constancy preprocessing.
Abstract: Objects can be recognized on the basis of their color alone by color indexing, a technique developed by Swain and Ballard (1991) which involves matching color-space histograms. Color indexing fails, however, when the incident illumination varies either spatially or spectrally. Although this limitation might be overcome by preprocessing with a color constancy algorithm, we instead propose histogramming color ratios. Since the ratios of RGB color triples from neighboring locations are relatively insensitive to changes in the incident illumination, this circumvents the need for color constancy preprocessing. Results of tests with the new color-constant color-indexing algorithm on synthetic and real images show that it works very well even when the illumination varies spatially in its intensity and color.

670 citations
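The core trick is easy to sketch in code. Below is a rough illustration of the idea, not the authors' implementation: per-channel log ratios between neighboring pixels cancel a locally constant illuminant, so a histogram of those ratios is approximately illumination-invariant. The function names, bin count, and clipping range are hypothetical choices.

```python
import numpy as np

def ratio_histogram(image, bins=16):
    """Histogram log color ratios between horizontally adjacent pixels.

    Under a slowly varying illuminant, the per-channel ratio between
    neighboring pixels cancels the illuminant factor, so the histogram
    is approximately illumination-invariant (the core idea of
    color-constant color indexing)."""
    img = image.astype(np.float64) + 1e-6          # avoid log(0)
    log_ratios = np.log(img[:, 1:, :]) - np.log(img[:, :-1, :])
    # Clip to a fixed range so all images share the same bin edges.
    log_ratios = np.clip(log_ratios, -2.0, 2.0).reshape(-1, 3)
    hist, _ = np.histogramdd(log_ratios, bins=bins, range=[(-2, 2)] * 3)
    return hist / hist.sum()                       # normalize for matching

def histogram_intersection(h1, h2):
    """Swain-Ballard style match score: 1.0 means identical histograms."""
    return np.minimum(h1, h2).sum()

# Toy usage: the same scene under a redder illuminant still matches well,
# because the global illuminant factor cancels in the ratios.
rng = np.random.default_rng(0)
scene = rng.uniform(0.1, 1.0, size=(64, 64, 3))
reddish = scene * np.array([1.4, 1.0, 0.7])
print(histogram_intersection(ratio_histogram(scene),
                             ratio_histogram(reddish)))
```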

Journal ArticleDOI
TL;DR: Algorithm performance as a function of the number of surfaces in scenes generated from reflectance spectra, the relative effect on the algorithms of added specularities, and the effect of subsequent clipping of the data is considered.
Abstract: We introduce a context for testing computational color constancy, specify our approach to the implementation of a number of the leading algorithms, and report the results of three experiments using synthesized data. Experiments using synthesized data are important because the ground truth is known, possible confounds due to camera characterization and pre-processing are absent, and various factors affecting color constancy can be efficiently investigated because they can be manipulated individually and precisely. The algorithms chosen for close study include two gray world methods, a limiting case of a version of the Retinex method, a number of variants of Forsyth's (1990) gamut-mapping method, Cardei et al.'s (2000) neural net method, and Finlayson et al.'s color by correlation method (Finlayson et al. 1997, 2001; Hubel and Finlayson 2000). We investigate the ability of these algorithms to make estimates of three different color constancy quantities: the chromaticity of the scene illuminant, the overall magnitude of that illuminant, and a corrected, illumination invariant, image. We consider algorithm performance as a function of the number of surfaces in scenes generated from reflectance spectra, the relative effect on the algorithms of added specularities, and the effect of subsequent clipping of the data. All data are available on-line at http://www.cs.sfu.ca/~color/data, and implementations for most of the algorithms are also available (http://www.cs.sfu.ca/~color/code).

456 citations
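As a concrete anchor for the kind of algorithm under test, here is a minimal sketch of the simplest method in the study, gray world, paired with a diagonal (von Kries-style) correction. It is an illustrative toy, not the paper's test harness.

```python
import numpy as np

def gray_world_estimate(image):
    """Gray-world illuminant estimate: the scene average is assumed gray,
    so the mean RGB (up to scale) is taken as the illuminant color."""
    rgb_mean = image.reshape(-1, 3).mean(axis=0)
    return rgb_mean / np.linalg.norm(rgb_mean)     # unit-length RGB direction

def correct_image(image, illuminant_rgb):
    """Diagonal correction: scale each channel so the estimated
    illuminant maps to a neutral gray."""
    gains = illuminant_rgb.mean() / illuminant_rgb
    return image * gains

# Toy usage on a synthetic scene under a bluish illuminant.
rng = np.random.default_rng(1)
reflectances = rng.uniform(0, 1, size=(128, 128, 3))
bluish = np.array([0.6, 0.8, 1.2])
observed = reflectances * bluish
estimate = gray_world_estimate(observed)
print("estimated illuminant direction:", estimate)
print("true illuminant direction:     ", bluish / np.linalg.norm(bluish))
```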

Journal ArticleDOI
TL;DR: On real images, exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.
Abstract: For Part I, see ibid., vol. 11, no. 9, pp. 972-984 (2002). We test a number of the leading computational color constancy algorithms using a comprehensive set of images. These were of 33 different scenes under 11 different sources representative of common illumination conditions. The algorithms studied include two gray world methods, a version of the Retinex method, several variants of Forsyth's (1990) gamut-mapping method, Cardei et al.'s (2000) neural net method, and Finlayson et al.'s color by correlation method (Finlayson et al. 1997, 2001; Hubel and Finlayson 2000). We discuss a number of issues in applying color constancy ideas to image data, and study in depth the effect of different preprocessing strategies. We compare the performance of the algorithms on image data with their performance on synthesized data. All data used for this study are available online at http://www.cs.sfu.ca/~color/data, and implementations for most of the algorithms are also available (http://www.cs.sfu.ca/~color/code). Experiments with synthesized data (part one of this paper) suggested that the methods which emphasize the use of the input data statistics, specifically color by correlation and the neural net algorithm, are potentially the most effective at estimating the chromaticity of the scene illuminant. Unfortunately, we were unable to realize comparable performance on real images. Here exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.

400 citations
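Illuminant-estimation studies of this kind are conventionally scored by the angular error between the estimated and true illuminant RGB vectors. The sketch below shows that standard metric; the paper's exact error measures may differ in detail.

```python
import numpy as np

def angular_error_degrees(estimated_rgb, true_rgb):
    """Angle between estimated and true illuminant RGB vectors, in degrees.
    Insensitive to overall intensity, so it scores chromaticity accuracy."""
    e = np.asarray(estimated_rgb, dtype=float)
    t = np.asarray(true_rgb, dtype=float)
    cos = np.dot(e, t) / (np.linalg.norm(e) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: a slightly-off estimate of a tungsten-like illuminant.
print(angular_error_degrees([1.0, 0.75, 0.45], [1.0, 0.8, 0.5]))
```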

Journal ArticleDOI
TL;DR: In this paper, the spectral sharpening method is proposed to convert a given set of sensor sensitivity functions into a new set that will improve the performance of any color-constancy algorithm that is based on an independent adjustment of the sensor response channels.
Abstract: We develop sensor transformations, collectively called spectral sharpening, that convert a given set of sensor sensitivity functions into a new set that will improve the performance of any color-constancy algorithm that is based on an independent adjustment of the sensor response channels. Independent adjustment of multiplicative coefficients corresponds to the application of a diagonal-matrix transform (DMT) to the sensor response vector and is a common feature of many theories of color constancy, Land's retinex and von Kries adaptation in particular. We set forth three techniques for spectral sharpening. Sensor-based sharpening focuses on the production of new sensors as linear combinations of the given ones such that each new sensor has its spectral sensitivity concentrated as much as possible within a narrow band of wavelengths. Data-based sharpening, on the other hand, extracts new sensors by optimizing the ability of a DMT to account for a given illumination change by examining the sensor response vectors obtained from a set of surfaces under two different illuminants. Finally, in perfect sharpening, we demonstrate that, if illumination and surface reflectance are described by two- and three-parameter finite-dimensional models, there exists a unique optimal sharpening transform. All three sharpening methods yield similar results. When sharpened cone sensitivities are used as sensors, a DMT models illumination change extremely well. We present simulation results suggesting that in general nondiagonal transforms can do only marginally better. Our sharpening results correlate well with the psychophysical evidence of spectral sharpening in the human visual system.

350 citations
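A minimal sketch of the data-based variant, on synthetic stand-in data: fit the least-squares 3x3 map between sensor responses under two illuminants, then diagonalize it, so that in the eigenvector basis the illuminant change becomes a diagonal (von Kries-like) transform. The matrix values and normalization choices here are hypothetical, and real data-based sharpening would use measured camera responses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: responses of many surfaces under two illuminants.
responses_A = rng.uniform(0.05, 1.0, size=(200, 3))
M_true = np.array([[0.90, 0.15, 0.02],   # hypothetical illuminant-change
                   [0.10, 0.85, 0.05],   # matrix, deliberately
                   [0.02, 0.10, 1.10]])  # non-diagonal
responses_B = responses_A @ M_true.T

# Step 1: least-squares 3x3 map taking illuminant-A responses to B.
M, *_ = np.linalg.lstsq(responses_A, responses_B, rcond=None)
M = M.T

# Step 2: diagonalize M. In the eigenvector basis the illumination change
# is a pure diagonal transform: T @ M @ inv(T) = D.
eigvals, eigvecs = np.linalg.eig(M)
T = np.linalg.inv(eigvecs)               # the sharpening transform
D = T @ M @ np.linalg.inv(T)
# Eigenvalues are real for typical sensor data; .real guards against
# tiny numerical imaginary parts.
print(np.round(D.real, 6))               # ~diagonal by construction
```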

Journal ArticleDOI
TL;DR: This work provides concise MATLAB™ implementations of two of the spatial techniques for making pixel comparisons, along with test results on several images and a discussion of the results.
Abstract: Many different descriptions of Retinex methods of lightness computation exist. We provide concise MATLAB™ implementations of two of the spatial techniques for making pixel comparisons. The code is presented, along with test results on several images and a discussion of the results. We also discuss the calibration of input images and the post-Retinex processing required to display the output images. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1636761)

299 citations
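The McCann99 and Frankle-McCann implementations the paper provides are too long to reproduce here, but a toy single-scale, center/surround Retinex-style estimate conveys the flavor of comparing each pixel against its spatial context. This is a simplified Python stand-in, not the paper's MATLAB code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_retinex(channel, sigma=30.0):
    """Single-scale center/surround Retinex-style lightness estimate:
    each pixel's log value is compared against a blurred (surround)
    version of the image, suppressing slow illumination gradients."""
    log_img = np.log(channel + 1e-6)
    log_surround = np.log(gaussian_filter(channel, sigma) + 1e-6)
    out = log_img - log_surround
    # Rescale to [0, 1], a crude stand-in for the post-Retinex
    # display processing the paper discusses.
    return (out - out.min()) / (out.max() - out.min() + 1e-12)

# Toy usage: a reflectance ramp under a top-to-bottom light falloff.
x = np.linspace(0.2, 1.0, 256)
scene = np.tile(x, (256, 1))                  # reflectance pattern
illum = np.linspace(1.0, 0.3, 256)[:, None]   # illumination gradient
print(toy_retinex(scene * illum).shape)
```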


Cited by
Journal ArticleDOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.

46,906 citations
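In practice this method is most often used through OpenCV's SIFT implementation together with Lowe's ratio test. The sketch below assumes OpenCV 4.4 or later (where SIFT lives in the main module) and two placeholder image file names.

```python
import cv2

# Load two views of the same object (file names are placeholders).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbor matching with Lowe's ratio test: keep a match only if
# its best distance is clearly smaller than the second-best, which
# filters ambiguous features (the criterion proposed in the paper).
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} ratio-test matches")
```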

Journal ArticleDOI
TL;DR: The working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap are discussed, as well as aspects of system engineering: databases, system architecture, and evaluation.
Abstract: This paper presents a review of 200 references in content-based image retrieval. The paper starts by discussing the working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap. Subsequent sections discuss computational steps for image retrieval systems. Step one of the review is image processing for retrieval sorted by color, texture, and local geometry. Features for retrieval are discussed next, sorted by: accumulative and global features, salient points, object and shape features, signs, and structural combinations thereof. Similarity of pictures and objects in pictures is reviewed for each of the feature types, in close connection to the types and means of feedback the user of the systems is capable of giving by interaction. We briefly discuss aspects of system engineering: databases, system architecture, and evaluation. In the concluding section, we present our view on: the driving force of the field, the heritage from computer vision, the influence on computer vision, the role of similarity and of interaction, the need for databases, the problem of evaluation, and the role of the semantic gap.

6,447 citations

Book
05 Mar 2004
TL;DR: Bringing together all aspects of mobile robotics into one volume, Introduction to Autonomous Mobile Robots can serve as a textbook or a working tool for beginning practitioners.
Abstract: Mobile robots range from the Mars Pathfinder mission's teleoperated Sojourner to the cleaning robots in the Paris Metro. This text offers students and other interested readers an introduction to the fundamentals of mobile robotics, spanning the mechanical, motor, sensory, perceptual, and cognitive layers the field comprises. The text focuses on mobility itself, offering an overview of the mechanisms that allow a mobile robot to move through a real world environment to perform its tasks, including locomotion, sensing, localization, and motion planning. It synthesizes material from such fields as kinematics, control theory, signal analysis, computer vision, information theory, artificial intelligence, and probability theory. The book presents the techniques and technology that enable mobility in a series of interacting modules. Each chapter treats a different aspect of mobility, as the book moves from low-level to high-level details. It covers all aspects of mobile robotics, including software and hardware design considerations, related technologies, and algorithmic techniques. This second edition has been revised and updated throughout, with 130 pages of new material on such topics as locomotion, perception, localization, and planning and navigation. Problem sets have been added at the end of each chapter. Bringing together all aspects of mobile robotics into one volume, Introduction to Autonomous Mobile Robots can serve as a textbook or a working tool for beginning practitioners.

2,414 citations

Journal ArticleDOI
TL;DR: From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition and the usefulness of invariance is category-specific.
Abstract: Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.

2,071 citations
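The opponent color transform underlying the recommended OpponentSIFT descriptor is compact enough to state directly: O1 and O2 are chromatic opponent channels and O3 is the intensity channel, and SIFT is then computed over each channel. The sketch below shows just the channel transform.

```python
import numpy as np

def opponent_channels(rgb):
    """Opponent color transform used by OpponentSIFT: O1 and O2 carry
    chromatic information (invariant to equal shifts in all channels);
    O3 is the intensity channel."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    O1 = (R - G) / np.sqrt(2)
    O2 = (R + G - 2 * B) / np.sqrt(6)
    O3 = (R + G + B) / np.sqrt(3)
    return np.stack([O1, O2, O3], axis=-1)

# OpponentSIFT computes SIFT descriptors over each resulting channel;
# here we just apply the transform to a random image.
rng = np.random.default_rng(3)
print(opponent_channels(rng.uniform(0, 1, size=(4, 4, 3))).shape)
```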

Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, and clues relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations
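The geometric core of such schemes is perspective projection onto the focal plane. Below is a minimal sketch under standard pinhole assumptions, not necessarily the paper's exact formulation: a 3D point (X, Y, Z) projects to u = f*X/Z, v = f*Y/Z, and differentiating gives the image-plane motion induced by the point's 3D motion.

```python
import numpy as np

def project(point, f=1.0):
    """Pinhole projection onto the focal plane: (X, Y, Z) -> f*(X/Z, Y/Z)."""
    X, Y, Z = point
    return np.array([f * X / Z, f * Y / Z])

def image_velocity(point, velocity, f=1.0):
    """Image motion of a moving 3D point, by differentiating the
    projection: u = f*X/Z  =>  du/dt = f*(Xdot*Z - X*Zdot) / Z**2,
    and similarly for v."""
    X, Y, Z = point
    Xd, Yd, Zd = velocity
    return np.array([f * (Xd * Z - X * Zd) / Z**2,
                     f * (Yd * Z - Y * Zd) / Z**2])

# A point receding along the optical axis drifts toward the image center.
p = np.array([1.0, 0.5, 4.0])
v = np.array([0.0, 0.0, 1.0])   # pure motion away from the camera
print(project(p), image_velocity(p, v))
```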