
Showing papers by "Takeo Kanade" published in 1991


Journal Article•DOI•
TL;DR: In this paper, the effect of surface roughness on the three primary components of a reflectance model is analyzed in detail, and the conditions that determine the validity of the model are clearly stated.
Abstract: Reflectance models based on physical optics and geometrical optics are studied. Specifically, the authors consider the Beckmann-Spizzichino (physical optics) model and the Torrance-Sparrow (geometrical optics) model. These two models were chosen because they have been reported to fit experimental data well. Each model is described in detail, and the conditions that determine the validity of the model are clearly stated. By studying reflectance curves predicted by the two models, the authors propose a reflectance framework comprising three components: the diffuse lobe, the specular lobe, and the specular spike. The effects of surface roughness on the three primary components are analyzed in detail.
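The three-component framework lends itself to a compact numerical illustration. The sketch below is a minimal Python rendering of the idea, not the fitted models from the paper: the coefficients, the Gaussian form used for the specular lobe, and the narrow Gaussian standing in for the (ideally delta-like) specular spike are all illustrative assumptions.

```python
import numpy as np

def reflectance(theta_i, alpha, k_dl=0.6, k_sl=0.3, k_ss=0.1,
                sigma_alpha=0.1, spike_width=0.01):
    """Radiance as a sum of the framework's three components.

    theta_i : angle of incidence (radians)
    alpha   : angle between the viewing direction and the specular direction
    All coefficients are illustrative, not fitted to measurements.
    """
    diffuse_lobe = k_dl * np.cos(theta_i)                            # Lambertian-like
    specular_lobe = k_sl * np.exp(-alpha**2 / (2 * sigma_alpha**2))  # roughness-broadened
    # The spike is ideally a delta function at alpha = 0; a very narrow
    # Gaussian stands in for it here. As roughness grows, energy shifts
    # from the spike into the lobe.
    specular_spike = k_ss * np.exp(-alpha**2 / (2 * spike_width**2))
    return diffuse_lobe + specular_lobe + specular_spike
```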

737 citations



Proceedings Article•DOI•
03 Jun 1991
TL;DR: A stereo matching method is presented which uses multiple stereo pairs with various baselines to obtain precise depth estimates without suffering from ambiguity, and experimental results for stereo images are presented to demonstrate the effectiveness of the algorithm.
Abstract: A stereo matching method is presented which uses multiple stereo pairs with various baselines, generated by lateral displacement of a camera, to obtain precise depth estimates without suffering from ambiguity. Matching is performed by computing the sum of squared-difference (SSD) values. The SSD functions for individual stereo pairs are represented with respect to the inverse depth (rather than the disparity, as is usually done), and then are simply added to produce the sum of SSDs. This resulting function is called the SSSD-in-inverse-depth. The authors define a stereo algorithm based on the SSSD-in-inverse-depth and then present a mathematical analysis to show how the algorithm can remove ambiguity and increase precision. Experimental results for stereo images are presented to demonstrate the effectiveness of the algorithm.
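As a concrete illustration of the SSSD-in-inverse-depth, here is a hedged Python sketch for a single pixel; the window size, integer rounding of disparities, and omission of boundary handling are simplifications, and the images, baselines, and focal length are assumed inputs.

```python
import numpy as np

def sssd_inverse_depth(ref, others, baselines, focal, zetas, x, y, w=5):
    """Estimate inverse depth at pixel (x, y) by summing SSD curves.

    ref       : reference image (2-D array)
    others    : images from laterally displaced camera positions
    baselines : baseline length for each image in `others`
    focal     : focal length in pixels
    zetas     : candidate inverse depths (1/z)
    For baseline b and inverse depth zeta, the disparity is d = b*focal*zeta,
    so every pair's SSD curve is indexed by the same variable and can be added.
    """
    patch = ref[y-w:y+w+1, x-w:x+w+1].astype(np.float64)
    sssd = np.zeros(len(zetas))
    for img, b in zip(others, baselines):
        for k, zeta in enumerate(zetas):
            d = int(round(b * focal * zeta))         # disparity along the baseline
            cand = img[y-w:y+w+1, x+d-w:x+d+w+1].astype(np.float64)
            sssd[k] += np.sum((patch - cand) ** 2)   # SSD for this stereo pair
    return zetas[np.argmin(sssd)]                    # SSSD minimizer
```

Short baselines contribute broad, unambiguous minima; long baselines contribute sharp ones. Adding the curves in inverse depth keeps the unambiguous localization of the former and the precision of the latter.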

359 citations



Journal Article•DOI•
TL;DR: The Navlab project, which seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads, is discussed and three-dimensional perception using three types of terrain representation is examined.
Abstract: The Navlab project, which seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads, is discussed. The perception techniques developed for the Navlab include road-following techniques using color classification and neural nets. These are discussed with reference to three road-following systems: SCARF, YARF, and ALVINN. Three-dimensional perception using three types of terrain representation (obstacle maps, terrain feature maps, and high-resolution maps) is examined. It is noted that perception continues to be an obstacle in developing autonomous vehicles. This work is part of the Defense Advanced Research Projects Agency Strategic Computing Initiative.

124 citations


Proceedings Article•DOI•
26 Jun 1991
TL;DR: Algorithms are proposed for the solution of the robotic (hand-eye configuration) visual tracking and servoing problem, and sum-of-squared differences (SSD) optical flow is used to compute the vector of discrete displacements.
Abstract: Current robotic systems lack the flexibility of dynamic interaction with the environment. The use of sensors can make robotic systems more flexible. Among the different types of sensors, visual sensors play a critical role. This paper addresses some of the issues associated with the use of a visual sensor in the feedback loop. In particular, algorithms are proposed for the solution of the robotic (hand-eye configuration) visual tracking and servoing problem. We state the problem of robotic visual tracking as a problem of combining control with computer vision. We propose the use of sum-of-squared differences (SSD) optical flow for the computation of the vector of discrete displacements. These displacements are fed to an adaptive controller (self-tuning regulator) that drives the robotic system. Three different adaptive control schemes were implemented, and the results are presented in this paper.
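A minimal sketch of the SSD measurement step is given below, assuming a simple exhaustive search over integer displacements; the adaptive controller (self-tuning regulator) that consumes these measurements is not shown.

```python
import numpy as np

def ssd_displacement(prev, curr, x, y, w=8, search=10):
    """Discrete displacement of the feature at (x, y) between two frames,
    found by exhaustive SSD search. Boundary handling is omitted."""
    ref = prev[y-w:y+w+1, x-w:x+w+1].astype(np.float64)
    best, best_uv = np.inf, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            cand = curr[y+v-w:y+v+w+1, x+u-w:x+u+w+1].astype(np.float64)
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best:
                best, best_uv = ssd, (u, v)
    return best_uv   # the measured displacement fed back to the controller
```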

123 citations


Journal Article•DOI•
TL;DR: In this paper, an array of cells, each of which contains a photodiode and the analog signal-processing circuitry needed for light-stripe range finding, was fabricated through MOSIS in a 2-µm CMOS p-well, double-metal, double-poly process.
Abstract: The authors present experimental results from an array of cells, each of which contains a photodiode and the analog signal-processing circuitry needed for light-stripe range finding. Prototype circuits were fabricated through MOSIS in a 2-µm CMOS p-well, double-metal, double-poly process. This design builds on some of the ideas that have been developed for ICs that integrate signal-processing circuitry with photosensors. In the case of light-stripe range finding, the increase in cell complexity from sensing only to sensing and processing makes a modification of the operational principle of range finding practical, which in turn results in a dramatic improvement in performance. The IC array of photosensor and analog signal-processor cells acquires 1000 frames of light-stripe range data per second, two orders of magnitude faster than conventional light-stripe range-finding methods. The highly parallel range-finding algorithm used requires that the output of each photosensor site be continuously monitored. Prototype high-speed range-finding systems have been built using a 5×5 array and a 28×32 array of these sensing elements.

112 citations


Proceedings Article•DOI•
09 Apr 1991
TL;DR: An iterative stereo matching algorithm is presented which selects a window adaptively for each pixel, and produces the disparity estimate having the least uncertainty after evaluating both the intensity and the disparity variations within a window.
Abstract: An iterative stereo matching algorithm is presented which selects a window adaptively for each pixel. The selected window is optimal in the sense that it produces the disparity estimate having the least uncertainty after evaluating both the intensity and the disparity variations within a window. The algorithm employs a statistical model that represents uncertainty of disparity of points over the window; the uncertainty is assumed to increase with the distance of the point from the center point. The algorithm is completely local and does not include any global optimization. Also, the algorithm does not use any post-processing smoothing, but smooth surfaces are recovered as smooth while sharp disparity edges are retained. Experimental results have demonstrated a clear advantage of this algorithm over algorithms with a fixed-size window, for both synthetic and real images.
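The following Python sketch conveys the window-selection loop for one pixel. The uncertainty measure here (disparity variance in the window divided by the sharpness of the SSD curve) is a simplified proxy for the paper's statistical model, and `ssd_at` is an assumed helper returning the SSD curve for a given window size.

```python
import numpy as np

def adaptive_window_disparity(ssd_at, disparity, x, y, half_widths=(1, 2, 3, 5, 7)):
    """Pick the window whose disparity estimate has the lowest uncertainty proxy.

    ssd_at(x, y, w) -> 1-D SSD curve over candidate disparities for a
                       (2w+1) x (2w+1) window centred at (x, y)  [assumed helper]
    disparity       -> disparity map from the previous iteration
    """
    best = (np.inf, None, None)                        # (uncertainty, estimate, w)
    for w in half_widths:
        curve = ssd_at(x, y, w)
        d_hat = int(np.argmin(curve))
        sharpness = abs(np.gradient(np.gradient(curve))[d_hat]) + 1e-9
        local_var = disparity[y-w:y+w+1, x-w:x+w+1].var()
        uncertainty = local_var / sharpness            # flat curve or varied disparity -> high
        if uncertainty < best[0]:
            best = (uncertainty, d_hat, w)
    return best[1], best[2]                            # disparity estimate, chosen half-width
```

Iterating this per-pixel selection lets windows grow over smooth, low-texture regions while staying small near disparity edges, which is how smoothness is preserved without post-processing.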

100 citations


Proceedings Article•DOI•
07 Oct 1991
TL;DR: The authors show that a matrix of image measurements can be factored by singular value decomposition into the product of two matrices that represent shape and motion, respectively.
Abstract: Recovering scene geometry and camera motion from a sequence of images is an important problem in computer vision. If the scene geometry is specified by depth measurements, that is, by specifying distances between the camera and feature points in the scene, noise sensitivity worsens rapidly with increasing depth. The authors show that this difficulty can be overcome by computing scene geometry directly in terms of shape, that is, by computing the coordinates of feature points in the scene with respect to a world-centered system, without recovering camera-centered depth as an intermediate quantity. More specifically, the authors show that a matrix of image measurements can be factored by singular value decomposition into the product of two matrices that represent shape and motion, respectively. The results in this paper extend to three dimensions the solution the authors described in a previous paper for planar camera motion (ICCV, Osaka, Japan, 1990).
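The rank-3 factorization itself is compact enough to state in code. Below is a sketch using numpy; the metric upgrade (solving for the invertible 3×3 matrix that enforces orthonormal camera axes) is omitted.

```python
import numpy as np

def factor_shape_and_motion(W):
    """Factor a registered measurement matrix into motion and shape.

    W : 2F x P matrix of image coordinates (F frames, P feature points)
        with each frame's centroid already subtracted.
    Returns M (2F x 3) and S (3 x P) with W ~= M @ S; the factors are
    determined only up to an invertible 3x3 transform at this stage.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]    # best rank-3 approximation
    M = U3 * np.sqrt(s3)                        # camera motion (affine)
    S = np.sqrt(s3)[:, None] * Vt3              # shape in a world-centred frame
    return M, S
```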

80 citations


Proceedings Article•DOI•
09 Apr 1991
TL;DR: A very fast light-stripe rangefinder is presented, based on an IC array of photoreceptor and analog signal processor cells which acquires 1000 frames of range image per second, two orders of magnitude faster than currently available rangefinding methods.
Abstract: The authors present a very fast light-stripe rangefinder based on an IC array of photoreceptor and analog signal processor cells which acquires 1000 frames of range image per second, two orders of magnitude faster than currently available rangefinding methods. Unlike a conventional light-stripe rangefinder, which obtains a frame of range image by the step-and-repeat process of projecting a stripe and grabbing and analyzing a camera image, the VLSI sensor array of this rangefinder gathers range data in parallel as a scene is swept continuously by a moving stripe. Each cell continuously monitors the output of its photoreceptor, and detects and remembers the time at which it observed the peak incident light intensity during the sweep of the stripe. Prototype rangefinding systems have been built using a 28×32 array of these sensing elements.
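The per-cell behavior is easy to emulate in software. The sketch below stands in for the analog circuitry with an argmax over time; converting each peak time to a range value via the stripe's time-to-plane calibration is left out.

```python
import numpy as np

def peak_times(intensity):
    """Per-cell stripe-crossing times for a swept light stripe.

    intensity : T x H x W stack of photoreceptor outputs over one sweep.
    Each cell independently remembers when it saw its peak intensity;
    triangulating that time against the stripe's known position yields range.
    """
    return np.argmax(intensity, axis=0)   # H x W map of peak times
```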

79 citations


Journal Article•DOI•
TL;DR: EDDIE, the architecture for the Navlab mobile robot, which provides a toolkit for building specific systems quickly and easily, is described, along with the annotated maps used by EDDIE and the Navlab's road-following system, called the Autonomous Mail Vehicle.
Abstract: For pt.1 see ibid., p.31-42 (1991). A description is given of EDDIE, the architecture for the Navlab mobile robot which provides a toolkit for building specific systems quickly and easily. Included in the discussion are the annotated maps used by EDDIE and the Navlab's road-following system, called the Autonomous Mail Vehicle, which was built using EDDIE and its annotated maps as a basis. The contributions of the Navlab project and the lessons learned from it are examined.

Proceedings Article•DOI•
Eric Krotkov, John Bares, Takeo Kanade, T. Mitchell, Reid Simmons, Red Whittaker
19 Jun 1991
TL;DR: In this paper, a six-legged walking robot, called the Ambler, was constructed for the planetary rover project, whose goal is to prototype an autonomous mobile robot for planetary exploration.
Abstract: The goal of the planetary rover project is to prototype an autonomous mobile robot for planetary exploration. The authors have constructed a six-legged walking robot, called the Ambler, that features orthogonal legs, an overlapping gait, and a scanning laser rangefinder to model terrain. To enable the Ambler to walk over rugged terrain, they have combined perception, planning, and real-time control into a comprehensive robotic system.

Proceedings Article•DOI•
09 Apr 1991
TL;DR: Methods are presented for building high-level terrain descriptions, referred to as topographic maps, by extracting terrain features like peaks, pits, ridges, and ravines from the contour map, and new definitions for those topographic features based on the contours are developed.
Abstract: Methods are presented for building high-level terrain descriptions, referred to as topographic maps, by extracting terrain features like peaks, pits, ridges, and ravines from the contour map. The resulting topographic map contains the location and type of terrain features as well as the ground topography. The authors develop new definitions for those topographic features based on the contour map. They build a contour map from an elevation map and generate the connectivity tree of all regions separated by the contours. The authors use this connectivity tree, called a topographic change tree, to extract the topographic features. Experimental results on a digital elevation model support the definitions for topographic features and the approach.
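As a hedged sketch of the first step only, the Python below labels the connected regions of each superlevel set of the elevation map; the nesting of these regions across successive contour levels is what the topographic change tree records, and the paper's feature definitions (peaks, pits, ridges, ravines) are built on top of that tree.

```python
import numpy as np
from scipy import ndimage

def superlevel_regions(elev, levels):
    """Label connected regions above each contour level of an elevation map.

    Returns, for each level, the label image and region count; containment
    of regions between successive levels gives the connectivity tree.
    """
    out = []
    for level in levels:
        labels, n = ndimage.label(elev >= level)   # regions enclosed by this contour
        out.append((level, labels, n))
    return out
```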


Proceedings Article•DOI•
02 Jun 1991
TL;DR: A model-based object recognition system for specular objects is presented that is applicable to multiple objects simply by changing object and sensor models, demonstrating the flexibility of the proposed model-based approach.
Abstract: The authors present a model-based object recognition system for specular objects. Objects with specular surfaces present a problem for computer vision. Simulating object appearances using the sensor model and the object model allows the system to predict specular features and to analyze the detectability and reliability of each feature. The system generates a set of aspects of the object. By precompiling the aspects with the feature detectability and the feature reliability, the system prepares adaptable matching templates. At runtime, an input image is first classified into a few candidate aspects, and deformable template matching finds the best match among them. This method is applicable to multiple objects simply by changing the object and sensor models. Experimental results using two kinds of objects and sensors are presented: a TV image of a shiny object and a synthetic aperture radar (SAR) image of an airplane. The results show the flexibility of the proposed model-based approach.


Proceedings Article•DOI•
09 Apr 1991
TL;DR: An algorithm for recovering the shape and reflectance of Lambertian surfaces in the presence of interreflections enhances the performance and the utility of existing shape-from-intensity methods.
Abstract: An algorithm for recovering the shape and reflectance of Lambertian surfaces in the presence of interreflections is presented. The surfaces may be of arbitrary but continuous shape, and with possibly varying and unknown reflectance. The actual shape and reflectance are recovered from the pseudoshape and pseudoreflectance estimated by a local shape-from-intensity method (e.g., photometric stereo). Thus, the algorithm enhances the performance and the utility of existing shape-from-intensity methods. From the results reported, two observations can be made that are pertinent to machine vision: interreflections can cause vision algorithms to produce unacceptably erroneous results and hence should not be ignored; and at least some interreflection problems are tractable and solvable.
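A heavily simplified sketch of the recovery idea follows: treat the pseudo estimates as the fixed input, predict the interreflection they imply through a form-factor matrix, and peel it off iteratively. The form-factor matrix K is assumed given here (in practice it must be computed from the current shape estimate), and this fixed-point update illustrates the structure of the correction, not the authors' exact algorithm.

```python
import numpy as np

def correct_interreflections(pseudo_normals, pseudo_albedo, K, iters=10):
    """Iteratively correct photometric-stereo output for Lambertian interreflection.

    pseudo_normals : n x 3 unit normals estimated ignoring interreflection
    pseudo_albedo  : n albedos estimated ignoring interreflection
    K              : n x n form-factor matrix coupling the facets (assumed known)
    """
    Fp = pseudo_albedo[:, None] * pseudo_normals       # pseudo facet matrix
    F = Fp.copy()
    for _ in range(iters):
        rho = np.linalg.norm(F, axis=1)                # current albedo estimates
        F = Fp - (rho[:, None] / np.pi) * (K @ Fp)     # subtract predicted bounce light
    albedo = np.linalg.norm(F, axis=1)
    return F / albedo[:, None], albedo                 # corrected normals and albedos
```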

01 Dec 1991
TL;DR: Geometric modeling systems allow users to create, store, and manipulate models of three-dimensional (3-D) solid objects, but they have severe limitations when used for tasks such as model-based computer vision.
Abstract: Geometric modeling systems allow users to create, store, and manipulate models of three-dimensional (3-D) solid objects. These geometric modeling systems have found many applications in CAD/CAM and robotics. Graphic display capability which rivals photographic techniques allows realistic visualization of designs and simulations. Capabilities to compute spatial and physical properties of objects, such as mass-property calculation and static interference checking, are used in the design and analysis of mechanical parts and assemblies. Output from the geometric modelers can be used for automatic programming of NC machines and robots. These geometric modeling systems are powerful in many application domains, but they have severe limitations when used for tasks such as model-based computer vision. Among others: (1) there is no explicit symbolic representation of the two-dimensional information obtained by the projection of the 3-D model; the output image displayed on the screen is a set of pixel intensity values, with no knowledge of the logical grouping of points, lines, and polygons, and the relationship between 3-D and 2-D information is not maintained properly. (2) Most of the current 3-D geometric modeling systems are designed with a closed architecture, with a minimum of documentation describing the internal data structures; moreover, some of the data structures are packed into bit-fields, making understanding and modification difficult. (3) They run as stand-alone interactive systems and cannot easily be interfaced to other programs.

01 Dec 1991
TL;DR: The contract made significant progress across a broad front on the problems of computer vision for outdoor mobile robots: new algorithms were built in neural networks, range-data analysis, object recognition, and road finding; perception modules were integrated into new systems; and there were notable programmatic events.
Abstract: The contract made significant progress across a broad front on the problems of computer vision for outdoor mobile robots. New algorithms were built in neural networks, range data analysis, object recognition, and road finding. Perception modules were integrated into new systems (including on-road and off-road, notably on the new Navlab II vehicle), and there were notable programmatic events, ranging from generating two new thesis proposals to playing a major role in the 'Tiger Team', shaping the architecture for the new DARPA program in Unmanned Ground Vehicles. This report begins with a summary of the year's activities and accomplishments, in this chapter. Chapter 2, '3-D Landmark Recognition from Range Image', provides more detail on object recognition from multiple sensor locations. Chapter 3, 'Representation and Recovery of Road Geometry in YARF', discusses geometry issues in YARF, our symbolic road-tracking system. The last two chapters discuss systems issues that are important in providing cues and constraints for an active vision approach to robot driving. 'A Computational Model of Driving for Autonomous Vehicles', Chapter 4, introduces the complexities of reasoning for driving in traffic. The fifth and final chapter, 'Combining Artificial Neural Networks and Symbolic Processing for Autonomous Robot Guidance', shows how we combine neural nets with map data in a complete system.

01 Jan 1991
TL;DR: In this article, a simple robot, comprising two flexible links connected by a rotary joint with grippers at each end, was developed for locomotion and basic manipulation on the space station truss, and an experimental testbed was developed, including a 1/3-scale (1.67-meter modules) truss and a gravity compensation system to simulate a zero-gravity environment.
Abstract: Robots on the NASA space station have a potential range of applications, from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks, to performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3-scale (1.67-meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with a 2-degree-of-freedom wrist joint and gripper at each end. The grippers screw into threaded holes in the nodes of the space station truss, enabling the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and coordination of robot/astronaut and multiple-robot teams.