
Showing papers by Takeo Kanade published in 1987


Journal ArticleDOI
01 Jun 1987
TL;DR: A distributed architecture built around the CODGER knowledge database lets the Carnegie Mellon Navlab combine color vision for road following with 3-D vision for obstacle detection, enabling continuous driving on outdoor roads.
Abstract: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests using a mobile testbed that integrates perception and navigation capabilities that are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment. The resulting system is able to navigate continuously on roads while avoiding obstacles.
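The architecture described here is a blackboard-style pattern: perception and planning modules do not call each other directly, but post and query geometry-tagged data in a shared database. A minimal sketch of that pattern follows; the class and method names are hypothetical, and the real CODGER additionally performs geometric reasoning, such as transforming posted data between coordinate frames, and synchronizes the modules.

```python
# Minimal sketch of a CODGER-style shared knowledge database.
# Hypothetical API for illustration only.
from dataclasses import dataclass, field

@dataclass
class Token:
    kind: str    # e.g. "road_edge", "obstacle"
    data: dict   # geometry expressed in a named coordinate frame
    frame: str   # e.g. "vehicle", "world"

@dataclass
class Whiteboard:
    tokens: list = field(default_factory=list)

    def post(self, token: Token) -> None:
        """A perception module publishes a result for other modules."""
        self.tokens.append(token)

    def query(self, kind: str) -> list:
        """A planning module retrieves all tokens of a given kind."""
        return [t for t in self.tokens if t.kind == kind]

db = Whiteboard()
db.post(Token("obstacle", {"center": (4.0, 1.2), "radius": 0.5}, "vehicle"))
db.post(Token("road_edge", {"points": [(0, -1.5), (10, -1.4)]}, "vehicle"))
print(len(db.query("obstacle")))  # -> 1
```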

445 citations


01 Jan 1987
TL;DR: In this paper, the color of every pixel from an object can be described as a linear combination of the object color and the highlight color, and the intrinsic images may be a useful tool for a variety of algorithms in computer vision, such as stereo vision, motion analysis, shape from shading, and shape from highlights.
Abstract: Current methods for image segmentation are confused by artifacts such as highlights, because they are not based on any physical model of these phenomena. In this paper, we present an approach to color image understanding that accounts for color variations due to highlights and shading. Based on the physics of reflection by dielectric materials, such as plastic, we show that the color of every pixel from an object can be described as a linear combination of the object color and the highlight color. According to this model, all color pixels from one object form a planar cluster in the color space whose shape is determined by the object and highlight colors and by the object shape and illumination geometry. We present a method which exploits the color difference between object color and highlight color, as exhibited in the cluster shape, to separate the color of every pixel into a matte component and a highlight component. This generates two intrinsic images, one showing the scene without highlights, and the other one showing only the highlights. The intrinsic images may be a useful tool for a variety of algorithms in computer vision that cannot detect or analyze highlights, such as stereo vision, motion analysis, shape from shading, and shape from highlights. We have applied this method to real images in a laboratory environment, and we show these results and discuss some of the pragmatic issues endemic to precision color imaging.
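As a concrete reading of the model, the separation step can be sketched as a per-pixel least-squares projection onto the two color vectors. This assumes the object color and highlight color have already been estimated from the planar cluster (the harder part of the paper's method); the function name and array shapes are illustrative.

```python
# Sketch of the pixel decomposition implied by the dichromatic model:
# every color c is modeled as c = m_b * c_b + m_s * c_s, where c_b is
# the object (body) color and c_s the highlight color. The paper
# estimates c_b and c_s from the cluster shape; here they are given.
import numpy as np

def separate(image, c_body, c_highlight):
    """image: (H, W, 3) RGB; c_body, c_highlight: 3-vectors.
    Returns matte and highlight intrinsic images."""
    A = np.stack([np.asarray(c_body, float),
                  np.asarray(c_highlight, float)], axis=1)  # 3x2 basis
    pixels = image.reshape(-1, 3).T                         # 3xN
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)     # 2xN: m_b, m_s
    coeffs = np.clip(coeffs, 0.0, None)  # magnitudes are nonnegative
    matte = np.outer(A[:, 0], coeffs[0]).T.reshape(image.shape)
    highlight = np.outer(A[:, 1], coeffs[1]).T.reshape(image.shape)
    return matte, highlight
```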

138 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe three-wheeled vehicles that can move inside a pipe and adjust to the shape and size of the pipe, based on two hinged arms.
Abstract: This paper describes three-wheeled vehicles that can move inside a pipe and adjust to the shape and size of the pipe. We propose two types of vehicles: tractive and nontractive. Both types are based on two hinged arms. The tractive vehicle has a driving wheel at a hinge and two sphere bearings at the ends of the arms. The driving wheel rotates about the axis perpendicular to the plane in which the two arms move. The wheel can freely move sideways. The sphere bearings can move in all directions like ball casters. Since the stretch force of the arm against the pipe wall is generated mechanically by pulleys and a spring, the vehicle rests in the pipe by pressing the two arms in opposite directions where the diameter is biggest, and it moves according to the action of the driving wheel. The three wheels of the nontractive vehicle are sphere bearings. We analyze the shape geometry of the pipe to obtain stability conditions under which the vehicle can move, and we consider friction and gravity bringing the vehicle to...
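For intuition only, a drastically simplified planar version of the geometry can be written down: one arm of length L spans from the wheel contact on one wall to a bearing on the opposite wall a distance d away, and a spring torque about the hinge sets how hard the arm presses the wall. This is an invented toy model, not the stability analysis in the paper.

```python
# Toy planar model of one arm bracing across the pipe (illustrative
# simplification; the paper's analysis covers full stability conditions).
import math

def arm_geometry(L, d, spring_torque):
    """L: arm length; d: effective wall-to-wall distance;
    spring_torque: torque spreading the arm about the hinge.
    Returns (arm angle from the pipe diameter, wall-normal force)."""
    if L <= d:
        raise ValueError("arm must be longer than the span to brace")
    alpha = math.acos(d / L)  # arm angle measured from the diameter
    # Torque balance about the hinge: N * L * sin(alpha) = spring_torque,
    # so a nearly perpendicular arm (small alpha) presses very hard.
    normal_force = spring_torque / (L * math.sin(alpha))
    return alpha, normal_force

print(arm_geometry(L=0.5, d=0.4, spring_torque=2.0))
```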

51 citations


Book ChapterDOI
01 Jan 1987
TL;DR: The various components of the 3D Mosaic system are described, including stereo analysis, monocular analysis, and construction and modification of the scene model, which is intended for tasks such as matching, display generation, planning paths through the scene, and making other decisions about the scene environment.
Abstract: The 3D Mosaic system is a vision system that incrementally reconstructs complex 3D scenes from multiple images. The system encompasses several levels of the vision process, starting with images and ending with symbolic scene descriptions. This paper describes the various components of the system, including stereo analysis, monocular analysis, and constructing and modifying the scene model. In addition, the representation of the scene model is described. This model is intended for tasks such as matching, display generation, planning paths through the scene, and making other decisions about the scene environment. Examples showing how the system is used to interpret complex aerial photographs of urban scenes are presented. Each view of the scene, which may be either a single image or a stereo pair, undergoes analysis which results in a 3D wire-frame description that represents portions of edges and vertices of objects. The model is a surface-based description constructed from the wire frames. With each successive view, the model is incrementally updated and gradually becomes more accurate and complete. Task-specific knowledge, involving block-shaped objects in an urban scene, is used to extract the wire frames and construct and update the model.
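The incremental update can be pictured with a small sketch: each new view contributes wire-frame vertices, which either refine a nearby model vertex or extend the model. The data structure below is hypothetical and far simpler than the surface-based model the paper describes.

```python
# Minimal sketch of incremental model refinement (hypothetical data
# structure; the actual 3D Mosaic model is a richer surface-based
# description with faces, edges, and block-shaped object hypotheses).
import numpy as np

class SceneModel:
    def __init__(self, match_radius=1.0):
        self.vertices = []  # each entry: [position (3,), observation count]
        self.match_radius = match_radius

    def update(self, wireframe_vertices):
        """Merge one view's wire-frame vertices into the model: a vertex
        near an existing one refines it; otherwise it is added as new."""
        for v in wireframe_vertices:
            v = np.asarray(v, float)
            for entry in self.vertices:
                if np.linalg.norm(entry[0] - v) < self.match_radius:
                    entry[1] += 1
                    entry[0] += (v - entry[0]) / entry[1]  # running mean
                    break
            else:
                self.vertices.append([v, 1])

model = SceneModel()
model.update([(0, 0, 10), (5, 0, 10)])       # first view
model.update([(0.2, 0.1, 10.1), (9, 3, 12)]) # second view refines + extends
print(len(model.vertices))  # -> 3
```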

46 citations


Proceedings ArticleDOI
25 Feb 1987
TL;DR: The first system that uses the CMU Blackboard for scheduling, geometric transformations, and inter- and intra-machine communication has been completed; perception now uses adaptive color classification for road tracking and scanning laser rangefinder data for obstacle detection.
Abstract: Recent work on autonomous navigation at Carnegie Mellon spans the range from hardware improvements and computational speedups to new perception algorithms and systems issues. We have a new vehicle, the Navlab, that has room for onboard researchers and computers, and that carries a full suite of sensors. We have ported several of our algorithms to the Warp, an experimental supercomputer capable of performing 100 million floating point operations per second. Our perception now uses adaptive color classification for road tracking, and scanning laser rangefinder data for obstacle detection. We have completed the first system that uses the CMU Blackboard for scheduling, geometric transformations, and inter- and intra-machine communication.
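The abstract names adaptive color classification but not its details; the sketch below shows one standard way such a classifier can adapt, using a Gaussian road-color model that is refit from each frame's own road-labeled pixels. Everything here, from the Mahalanobis gate to the support threshold, is an assumption for illustration, not the Navlab's actual algorithm.

```python
# Sketch of adaptive color classification for road tracking: classify
# pixels against a Gaussian road-color model, then refit the model from
# the pixels just labeled road so it tracks lighting changes over time.
import numpy as np

def classify_and_adapt(image, mean, cov, threshold=9.0):
    """image: (H, W, 3). Returns (road mask, updated mean, updated cov)."""
    pixels = image.reshape(-1, 3).astype(float)
    diff = pixels - mean
    # Squared Mahalanobis distance of every pixel to the road model.
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    road = d2 < threshold
    if road.sum() > 100:  # refit only when enough support exists
        mean = pixels[road].mean(axis=0)
        cov = np.cov(pixels[road].T)
    return road.reshape(image.shape[:2]), mean, cov
```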

28 citations


01 Jan 1987
TL;DR: Algorithms for identifying parameters of an N degrees-of-freedom robotic manipulator are presented, and it is shown that the Newton-Euler model, which is nonlinear in the dynamic parameters, can be transformed into an equivalent modified model which is linear in the dynamic parameters.
Abstract: This paper presents algorithms for identifying parameters of an N degrees-of-freedom robotic manipulator. First, we outline the fundamental properties of the Newton-Euler formulation of robot dynamics from the viewpoint of parameter identification. We then show that the Newton-Euler model, which is nonlinear in the dynamic parameters, can be transformed into an equivalent modified model which is linear in the dynamic parameters. We develop both on-line and off-line parameter estimation procedures. To illustrate our approach, we identify the dynamic parameters of the cylindrical robot, and the three degree-of-freedom positioning system of the CMU Direct-Drive Arm II. The experimental implementation of our algorithm to estimate the dynamic parameters of the six degrees-of-freedom CMU DD Arm II is also presented.
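The practical payoff of the linear-in-parameters form is that, with tau = Y(q, qdot, qddot) @ phi, the unknown parameter vector phi drops out of a linear system that measured joint data can be stacked into. A minimal off-line sketch is below; constructing the regressor Y for a specific arm is the substance of the paper and is treated as given here (the toy usage substitutes a random matrix).

```python
# Off-line least-squares identification using the linear-in-parameters
# form tau = Y(q, qdot, qddot) @ phi. Y is treated as a given regressor.
import numpy as np

def estimate_parameters(regressors, torques):
    """regressors: list of (N, P) matrices Y_t, one per sample;
    torques: list of (N,) measured joint torque vectors tau_t.
    Returns the P-vector phi minimizing sum_t ||Y_t @ phi - tau_t||^2."""
    Y = np.vstack(regressors)      # (T*N, P) stacked regressor
    tau = np.concatenate(torques)  # (T*N,) stacked measurements
    phi, *_ = np.linalg.lstsq(Y, tau, rcond=None)
    return phi

# Toy usage with a random regressor standing in for the derived Y:
rng = np.random.default_rng(0)
Ys = [rng.standard_normal((6, 10)) for _ in range(50)]
phi_true = rng.standard_normal(10)
taus = [Y @ phi_true for Y in Ys]
print(np.allclose(estimate_parameters(Ys, taus), phi_true))  # True
```

An on-line variant would process the same equations recursively (e.g., recursive least squares) instead of stacking all samples at once.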

18 citations


Book ChapterDOI
01 Jan 1987
TL;DR: In this article, a noncontact multi-light source optical proximity sensor that can measure the distance, orientation, and curvature of a surface is developed; beams of light are sequentially focused from light-emitting diodes onto a target surface.
Abstract: We have developed a noncontact multi-light source optical proximity sensor that can measure the distance, orientation, and curvature of a surface. Beams of light are sequentially focused from light emitting diodes onto a target surface. An analog light sensor - a planar PIN diode - localizes the position of the resultant light spot in the field of view of the sensor. The 3-D locations of the light spots are then computed by triangulation. The distance, orientation, and curvature of the target surface are computed by fitting a surface to a set of data points on the surface.
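The last step, fitting a surface to the triangulated spot positions, can be sketched with a plane fit: the plane's normal gives the surface orientation and its offset gives the distance. Curvature would need a higher-order (e.g., quadric) fit; the code below is a generic least-squares plane fit, not the paper's specific procedure.

```python
# Least-squares plane fit to triangulated light-spot positions,
# recovering surface orientation and sensor-to-surface distance.
import numpy as np

def fit_plane(points):
    """points: (M, 3) triangulated spot positions, sensor at the origin.
    Returns (unit surface normal, perpendicular distance from sensor)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    distance = abs(normal @ centroid)  # perpendicular offset from origin
    return normal, distance

spots = np.array([[0.0, 0.0, 0.10], [0.02, 0.0, 0.10], [0.0, 0.02, 0.10]])
print(fit_plane(spots))  # normal ~ (0, 0, 1), distance ~ 0.10
```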

11 citations


01 May 1987
TL;DR: Describes progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1986, centered on guiding outdoor autonomous vehicles.
Abstract: This report describes progress in vision and navigation for outdoor mobile robots at the Carnegie Mellon Robotics Institute during 1986. This research was sponsored by DARPA as part of the Strategic Computing Initiative. Our work during 1986 culminated in two demonstration systems. The first system drives the Terregator, a desk-sized robot with six wheels, around the network of campus sidewalks. This system, named Sidewalk II, uses a video camera to follow sidewalks and a laser rangefinder to detect and avoid stairs. Sidewalk II makes extensive use of map data, for visual predictions and for path planning. The second system, Park Navigation, uses the Navlab, our new Chevrolet Van robot. The Park system concentrated on vision for following difficult roads, including curves, dirt and leaves, shadows, puddles, and both moving and fixed obstacles. We developed computer vision techniques for handling difficult roads, and built range finder programs for detecting and avoiding obstacles. Keywords: Autonomous navigation.

9 citations



01 Jul 1987
TL;DR: Researchers have designed and are constructing a Reconfigurable Modular Manipulator System (RMMS) whose bus network supports identification of modules, sensing of joint states, and commands to the joint actuators.
Abstract: Using manipulators with a fixed configuration for specific tasks is appropriate when the task requirements are known beforehand. However, in less predictable situations, such as an outdoor construction site or aboard a space station, a manipulator system requires a wide range of capabilities, probably beyond the limitations of a single, fixed-configuration manipulator. To fulfill this need, researchers have been working on a Reconfigurable Modular Manipulator System (RMMS). Researchers have designed and are constructing a prototype RMMS. The prototype currently consists of two joint modules and four link modules. The joints utilize a conventional harmonic drive and torque motor actuator, with a small servo amplifier included in the assembly. A brushless resolver is used to sense the joint position and velocity. For coupling the modules together, a standard electrical connector and V-band clamps for mechanical connection are used, although more sophisticated designs are under way for future versions. The joint design yields an output torque of up to 50 ft-lbf at joint speeds up to 1 radian/second. The resolver and associated electronics have resolutions of 0.0001 radians and absolute accuracies of plus or minus 0.001 radians. Manipulators configured from these prototype modules will have maximum reaches in the 0.5 to 2 meter range. The real-time RMMS controller consists of a Motorola 68020 single-board computer which will perform real-time servo control and path planning of the manipulator. This single-board computer communicates via shared memory with a SUN3 workstation, which serves as a software development system and robot programming environment. Researchers have designed a bus communication network to provide multiplexed communication between the joint modules and the computer controller. The bus supports identification of modules, sensing of joint states, and commands to the joint actuator. This network has sufficient bandwidth to allow servo sampling rates in excess of 500 Hz.
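The abstract specifies what the bus network carries, module identification, joint states, and actuator commands, but not its wire format. Purely as an illustration, a compact joint-state message could look like the following; the field layout is invented, not the RMMS protocol.

```python
# Hypothetical joint-state message for a modular-manipulator bus.
# Invented encoding: module id (uint16), position and velocity in
# radians (float32 each), little-endian; 10 bytes per message.
import struct

JOINT_STATE = struct.Struct("<Hff")

def encode_state(module_id, position, velocity):
    return JOINT_STATE.pack(module_id, position, velocity)

def decode_state(payload):
    return JOINT_STATE.unpack(payload)

msg = encode_state(3, 1.5708, 0.25)
module_id, pos, vel = decode_state(msg)
print(module_id, round(pos, 4), vel)  # -> 3 1.5708 0.25
```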

4 citations



01 Jan 1987
TL;DR: The parallel vision algorithm design and implementation project facilitates vision programming on parallel architectures, particularly low-level vision and robot vehicle control algorithms on the Carnegie Mellon Warp machine.
Abstract: The parallel vision algorithm design and implementation project was established to facilitate vision programming on parallel architectures, particularly low-level vision and robot vehicle control algorithms on the Carnegie Mellon Warp machine. To this end, we have (1) demonstrated the use of the Warp machine in several different algorithms; (2) developed a specialized programming language, called Apply, for low-level vision programming on parallel architectures in general, and Warp in particular; (3) used Warp as a research tool in vision, as opposed to using it only for research in parallel vision; (4) developed a significant library of low-level vision programs for use on Warp.
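Apply's key idea is that the programmer writes the computation for one output pixel as a function of a small input window, and the compiler maps that function across the image and across the Warp cells. The Python stand-in below illustrates the programming model only; it does not reproduce Apply's actual syntax.

```python
# Stand-in for the Apply programming model: the user supplies only a
# per-pixel rule over a small window; the harness sweeps it over the
# image (where Apply would parallelize the sweep across Warp cells).
import numpy as np

def apply_op(image, window, pixel_fn):
    """Run pixel_fn on each (window x window) neighborhood of image."""
    h = window // 2
    padded = np.pad(image, h, mode="edge")  # replicate border pixels
    out = np.empty_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = pixel_fn(padded[r:r + window, c:c + window])
    return out

# A 3x3 box filter expressed as a per-pixel rule:
blurred = apply_op(np.eye(8), 3, lambda w: w.mean())
print(blurred.shape)  # -> (8, 8)
```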
