Journal Article•DOI•

Vision and Navigation for the Carnegie-Mellon Navlab

01 Jun 1987 - Vol. 2, Iss. 1, pp. 521-556
TL;DR: A distributed architecture built around the CODGER knowledge database integrates color vision for road following with 3-D vision for obstacle detection, allowing the Navlab to drive continuously on outdoor roads while avoiding obstacles.
Abstract: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests using a mobile testbed that integrates perception and navigation capabilities that are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment. The resulting system is able to navigate continuously on roads while avoiding obstacles.
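
The abstract describes perception and navigation modules that exchange results through a central knowledge database rather than calling each other directly. The Python sketch below shows the general flavor of such a shared geometric database; it is a minimal illustration, not the actual CODGER interface, and all names (Token, SharedDatabase, post, query) and values are assumptions.

    # A minimal sketch, assuming a blackboard-style shared store; this is not
    # the original CODGER implementation, and every name below is illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Token:
        kind: str        # e.g. "road_edge" or "obstacle"
        geometry: dict   # geometric attributes posted by a perception module
        source: str      # name of the module that produced the token

    @dataclass
    class SharedDatabase:
        tokens: list = field(default_factory=list)

        def post(self, token: Token) -> None:
            self.tokens.append(token)

        def query(self, kind: str) -> list:
            return [t for t in self.tokens if t.kind == kind]

    db = SharedDatabase()
    # A color-vision module posts a road-edge estimate...
    db.post(Token("road_edge", {"offset_m": 0.4, "heading_rad": 0.02}, "color_vision"))
    # ...and a planner queries the database before choosing a steering command.
    road_edges = db.query("road_edge")
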
Citations
01 Jan 1995
TL;DR: It is claimed that the state of computer architecture has been a strong influence on models of thought in Artificial Intelligence over the last thirty years.
Abstract: Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.

1,796 citations

Proceedings Article•DOI•
01 Jan 1988
TL;DR: ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following that can effectively follow real roads under certain field conditions.
Abstract: ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
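
The abstract describes a three-layer back-propagation network that maps sensor images to a steering direction. The sketch below shows a network of that general shape in plain NumPy; the layer sizes, activation function, and output encoding are assumptions for illustration, not ALVINN's actual configuration, and no training loop is shown.

    # A minimal sketch, assuming a small fully connected input-hidden-output
    # network; sizes and encoding are illustrative, not ALVINN's real ones.
    import numpy as np

    rng = np.random.default_rng(0)

    n_input = 30 * 32   # flattened low-resolution camera image (assumed size)
    n_hidden = 29       # single hidden layer (assumed size)
    n_output = 45       # units encoding discretized steering directions (assumed)

    W1 = rng.normal(scale=0.01, size=(n_hidden, n_input))
    W2 = rng.normal(scale=0.01, size=(n_output, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def steering_direction(image_flat):
        hidden = sigmoid(W1 @ image_flat)
        output = sigmoid(W2 @ hidden)
        return int(np.argmax(output))   # index of the preferred steering direction

    direction = steering_direction(rng.random(n_input))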

1,784 citations

Journal Article•DOI•
TL;DR: The developments of the last 20 years in the area of vision for mobile robot navigation are surveyed and the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment are discussed.
Abstract: Surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment.

1,386 citations


Cites background from "Vision and Navigation for the Carnegie-Mellon Navlab"

  • ...Index Terms—Mobile robotics, navigation, computer vision, indoor navigation, outdoor navigation....


01 Dec 1991
TL;DR: In this paper, an automated intelligent vehicle/highway system (IVHS) is described, and a four-layer hierarchical control architecture is proposed to decompose the vehicle/highway control problem into more manageable units.
Abstract: Key features of one automated intelligent vehicle/highway system (IVHS) are outlined; it is shown how core driver decisions are improved, a basic IVHS control system architecture is proposed, and a design of some control subsystems is offered. Some experimental work is summarized. A system that promises a threefold increase in capacity is outlined, and a four-layer hierarchical control architecture that decomposes the control problem into more manageable units is proposed.
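
The abstract proposes a four-layer hierarchical control architecture that splits the vehicle/highway control problem into more manageable units. The Python sketch below illustrates such a layered decomposition; the layer names, interfaces, and returned values are assumptions chosen for illustration, not the paper's actual design.

    # A minimal sketch, assuming four layers from route assignment down to
    # continuous control; all names and values are illustrative placeholders.
    def network_layer(demand):
        # Highway-network level: assign a route.
        return {"route": ["ramp_3", "section_A", "exit_12"]}

    def link_layer(route_plan):
        # Highway-section level: choose target speed and lane.
        return {"target_speed_mps": 25.0, "target_lane": 2}

    def coordination_layer(link_targets):
        # Maneuver level: decide discrete actions such as lane keeping or merging.
        return {"maneuver": "keep_lane", **link_targets}

    def regulation_layer(maneuver_plan):
        # Vehicle level: turn the chosen maneuver into throttle/steering commands.
        return {"throttle": 0.3, "steering_rad": 0.0}

    command = regulation_layer(coordination_layer(link_layer(network_layer(None))))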

1,268 citations

Journal Article•DOI•
TL;DR: A review of recent vision-based on-road vehicle detection systems is presented, focusing on systems in which the camera is mounted on the vehicle rather than fixed, as in traffic/driveway monitoring systems.
Abstract: Developing on-board automotive driver assistance systems that aim to alert drivers about the driving environment and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed, such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.
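
The pipeline at the heart of this review is a two-step hypothesize-then-verify scheme, optionally coupled with tracking to exploit temporal continuity. The sketch below shows that control flow in Python; the cue-based hypothesis generator, the verifier, and the frame representation are all placeholder assumptions rather than any specific method from the survey.

    # A minimal sketch of a hypothesize/verify detection loop with a simple
    # tracking hook; boxes, cues, and the verifier are illustrative stand-ins.
    def generate_hypotheses(frame):
        # Cheap cues (edges, shadows, symmetry) would propose candidate boxes here.
        return [(100, 120, 40, 30), (220, 118, 45, 32)]   # (x, y, w, h)

    def verify_hypothesis(frame, box):
        # An appearance-based classifier would confirm or reject each candidate.
        return True

    def detect_vehicles(frame, prior_tracks):
        candidates = generate_hypotheses(frame) + prior_tracks   # temporal continuity
        return [box for box in candidates if verify_hypothesis(frame, box)]

    tracks = []
    for frame in ["frame_0", "frame_1", "frame_2"]:   # stand-in for a video stream
        tracks = detect_vehicles(frame, tracks)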

1,181 citations


Cites background from "Vision and Navigation for the Carnegie-Mellon Navlab"

  • ...Finally, our conclusions are given in Section 10....
