Author

Steve Goldberg

Bio: Steve Goldberg is an academic researcher from the California Institute of Technology. The author has contributed to research in the topics of Mars Exploration Program and visual odometry, has an h-index of 5, and has co-authored 6 publications that have received 889 citations. Previous affiliations of Steve Goldberg include the University of Southern California.

Papers
Proceedings ArticleDOI
09 Mar 2002
TL;DR: A radiation effects analysis is summarized that suggests commercial-grade processors are likely to be adequate for Mars surface missions, and the level of speedup that may accrue from using them instead of radiation-hardened parts is discussed.
Abstract: NASA's Mars Exploration Rover (MER) missions will land twin rovers on the surface of Mars in 2004. These rovers will have the ability to navigate safely through unknown and potentially hazardous terrain, using autonomous passive stereo vision to detect potential terrain hazards before driving into them. Unfortunately, the computational power of currently available radiation hardened processors limits the amount of distance (and therefore science) that can be safely achieved by any rover in a given time frame. We present overviews of our current rover vision and navigation systems, to provide context for the types of computation that are required to navigate safely. We also present baseline timing results that represent a lower bound in achievable performance (useful for systems engineering studies of future missions), and describe ways to improve that performance using commercial grade (as opposed to radiation hardened) processors. In particular, we document speedups to our stereo vision system that were achieved using the vectorized operations provided by Pentium MMX technology. Timing data were derived from implementations on several platforms: a prototype Mars rover with flight-like electronics (the Athena Software Development Model (SDM) rover), a RAD6000 computing platform (as will be used in the 2003 MER missions), and research platforms with commercial Pentium III and Sparc processors. Finally, we summarize the radiation effects analysis that suggests that commercial grade processors are likely to be adequate for Mars surface missions, and discuss the level of speedup that may accrue from using these instead of radiation hardened parts.
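The MMX speedups described above live in the dense inner loops of area-based stereo correlation. As a rough illustration only (not the flight code), the numpy sketch below shows winner-take-all sum-of-absolute-differences (SAD) block matching on rectified images; the window size, disparity range, and sentinel cost are illustrative assumptions, and the whole-plane array arithmetic is exactly the kind of regular computation that SIMD instruction sets such as MMX accelerate.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, win=7):
    """Winner-take-all SAD block matching on rectified grayscale images.

    Illustrative parameters, not the MER flight values.
    """
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    BIG = 1e6  # sentinel cost where no valid match exists at this shift
    cost = np.empty((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        diff = np.full((h, w), BIG, dtype=np.float32)
        # absolute difference against the right image shifted by disparity d
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # aggregate over a win x win correlation window (mean ~ scaled SAD)
        cost[d] = uniform_filter(diff, size=win)
    # per pixel, keep the disparity with the lowest aggregated cost
    return np.argmin(cost, axis=0)
```

Each disparity hypothesis is a handful of whole-image vector operations, which is why packing pixels into SIMD registers pays off in this kind of loop.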

428 citations

Journal ArticleDOI
TL;DR: The design and performance of the computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission are summarized; MER was a major step forward in the use of computer vision in space.
Abstract: Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played and will continue to play an important role in increasing autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms did stereo vision and visual odometry for rover navigation and feature tracking for horizontal velocity estimation for the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, which includes various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
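The MER-style visual odometry mentioned above tracks features in stereo, triangulates them to 3-D positions in consecutive frames, and solves for the rigid motion between the matched point sets. The sketch below shows only that core least-squares step, via the standard SVD (Procrustes) solution; the outlier rejection and covariance weighting of the actual flight algorithm are omitted, and the function name is our own.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t.

    P, Q: (N, 3) arrays of matched 3-D feature positions, e.g. stereo
    triangulations of the same features in the previous and current frames.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper solution so det(R) = +1 (rotation, not reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```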

185 citations

Journal ArticleDOI
TL;DR: A prototype urban robot is developed on a novel chassis with articulated tracks that enable stair climbing and scrambling over rubble; it features stereo vision-based obstacle avoidance, visual servoing to user-designated goals, and autonomous vision-guided stair climbing.

177 citations

Proceedings ArticleDOI
19 May 2008
TL;DR: This algorithm is a significant improvement over the algorithm developed for the Mars Exploration Rover Mission because it is at least four times more computationally efficient and it tracks significantly more features.
Abstract: Visual odometry can augment or replace wheel odometry when navigating in high-slip terrain, which is quite important for autonomous navigation on Mars. We present a computationally efficient and robust visual odometry algorithm developed for the Mars Science Laboratory mission. This algorithm is a significant improvement over the algorithm developed for the Mars Exploration Rover Mission because it is at least four times more computationally efficient and it tracks significantly more features. The core of the algorithm is an integrated motion estimation and stereo feature tracking loop that allows for feature recovery while guiding feature correlation search to minimize computation. Results on thousands of terrestrial and Martian stereo pairs show that the algorithm can operate with no initial motion estimate while still obtaining subpixel attitude estimation performance.
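To make the "guided correlation search" concrete: a current motion estimate transfers each 3-D feature into the new camera frame and projects it to a predicted pixel, so correlation only has to search a small window around that prediction instead of the whole image. A minimal sketch under assumed pinhole intrinsics; the names and window radius are illustrative, not MSL's.

```python
import numpy as np

def guided_search_window(X_prev, R, t, K, radius=8):
    """Predict a feature's pixel location in the new image and return the
    small window that correlation needs to search.

    X_prev : (3,) feature position in the previous camera frame
    R, t   : current motion estimate (previous frame -> current frame)
    K      : 3x3 pinhole intrinsics matrix
    radius : half-width of the search window in pixels (illustrative)
    """
    X_cur = R @ X_prev + t                 # transfer feature to current frame
    p = K @ X_cur                          # homogeneous pixel coordinates
    u, v = p[0] / p[2], p[1] / p[2]        # predicted pixel location
    return (u - radius, u + radius), (v - radius, v + radius)
```

Shrinking the search from the full image to a (2 * radius + 1)^2 window is where most of the claimed computational savings would come from.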

131 citations


Cited by
Proceedings Article
01 Jan 2004
TL;DR: A system that estimates the motion of a stereo head or a single moving camera from video input, operating in real time with low delay, with the motion estimates used for navigational purposes.
Abstract: We present a system that estimates the motion of a stereo head or a single moving camera based on video input. The system operates in real-time with low delay and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates what we call visual odometry, i.e. motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual odometry can also be used in conjunction with information from other sources such as GPS, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive and handheld platforms. We focus on results with an autonomous ground vehicle. We give examples of camera trajectories estimated purely from images over previously unseen distances and periods of time.
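The "geometric hypothesize-and-test architecture" is a RANSAC-style loop: repeatedly fit a candidate motion to a minimal sample of feature matches, score it by how many matches agree, and keep the best hypothesis. A schematic sketch; fit_model and residual are placeholders (in a real visual odometry system they would be something like a minimal relative-pose solver and a reprojection error), and only the loop structure is taken from the text.

```python
import random

def ransac(matches, fit_model, residual, sample_size, threshold, iters=500):
    """Generic hypothesize-and-test loop over feature matches.

    fit_model(sample) -> candidate model (or None on degenerate samples)
    residual(model, match) -> scalar error of one match under the model
    """
    best_model, best_inliers = None, []
    for _ in range(iters):
        sample = random.sample(matches, sample_size)        # hypothesize
        model = fit_model(sample)
        if model is None:
            continue
        inliers = [m for m in matches                       # test
                   if residual(model, m) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```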

1,786 citations

Book ChapterDOI
01 Jan 2017
TL;DR: A system for visual odometry and mapping using an RGB-D camera, applied to autonomous flight, that enables 3D flight in cluttered environments using only onboard sensor data.
Abstract: RGB-D cameras provide both a color image and per-pixel depth estimates. The richness of their data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on unreliable wireless links. We evaluate the effectiveness of our system for stabilizing and controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
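The per-pixel depth estimates are what make RGB-D systems convenient: every pixel back-projects to a 3-D point through the pinhole model, giving dense geometry for odometry and mapping without stereo matching. A minimal numpy sketch; the intrinsics parameters are assumptions for illustration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to an (N, 3) point cloud in the
    camera frame using the pinhole model; fx, fy, cx, cy are intrinsics."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]              # pixel row/column grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]              # drop pixels with no depth reading
```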

681 citations

Journal ArticleDOI
TL;DR: The outline of mapless navigation includes reactive techniques based on qualitative characteristic extraction, appearance-based localization, optical flow, feature tracking, ground plane detection/tracking, and more; the recent concept of visual sonar is also reviewed.
Abstract: Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties to the present day, which represent substantial progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation has in turn been subdivided into metric map-based navigation and topological map-based navigation. Our outline of mapless navigation includes reactive techniques based on qualitative characteristic extraction, appearance-based localization, optical flow, feature tracking, ground plane detection/tracking, and more. The recent concept of visual sonar is also reviewed.

649 citations

Journal ArticleDOI
TL;DR: D* Lite is introduced, a heuristic search method that determines the same paths and thus moves the robot in the same way as D* but is algorithmically different, and is at least as efficient as D*.
Abstract: Mobile robots often operate in domains that are only incompletely known, for example, when they have to move from given start coordinates to given goal coordinates in unknown terrain. In this case, they need to be able to replan quickly as their knowledge of the terrain changes. Stentz' Focussed Dynamic A* (D*) is a heuristic search method that repeatedly determines a shortest path from the current robot coordinates to the goal coordinates while the robot moves along the path. It is able to replan faster than planning from scratch since it modifies its previous search results locally. Consequently, it has been extensively used in mobile robotics. In this article, we introduce an alternative to D* that determines the same paths and thus moves the robot in the same way but is algorithmically different. D* Lite is simple, can be rigorously analyzed, is extendible in multiple ways, and is at least as efficient as D*. We believe that our results will make D*-like replanning methods even more popular and enable robotics researchers to adapt them to additional applications.
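As a rough illustration of the machinery the abstract describes, below is a minimal Python sketch of D* Lite's first search pass on a 4-connected unit-cost grid. It is a simplification under stated assumptions: the key modifier k_m and the edge-change repair loop of the full algorithm are omitted (update_vertex is the hook they would drive when terrain knowledge changes), and the grid representation and names are our own.

```python
import heapq

def dstar_lite(grid, start, goal):
    """First (static) search pass of D* Lite on a 4-connected grid.

    grid: 2-D list, 0 = free, 1 = blocked; unit edge costs. Searches
    backward from the goal, maintaining g (current cost-to-goal) and
    rhs (one-step lookahead) values so that later edge-cost changes
    could be repaired locally through update_vertex().
    """
    INF = float("inf")
    rows, cols = len(grid), len(grid[0])

    def neighbors(u):
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield (nr, nc)

    def h(a, b):  # Manhattan-distance heuristic toward the start
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    g, rhs = {}, {goal: 0}
    g_of = lambda s: g.get(s, INF)
    rhs_of = lambda s: rhs.get(s, INF)

    def key(s):  # lexicographic priority, as in the paper (k_m omitted)
        m = min(g_of(s), rhs_of(s))
        return (m + h(start, s), m)

    U = [(key(goal), goal)]  # priority queue; stale entries skipped lazily

    def update_vertex(u):
        if u != goal:
            rhs[u] = min((1 + g_of(s) for s in neighbors(u)), default=INF)
        if g_of(u) != rhs_of(u):
            heapq.heappush(U, (key(u), u))

    while U and (U[0][0] < key(start) or rhs_of(start) != g_of(start)):
        k, u = heapq.heappop(U)
        if k != key(u):                   # stale queue entry
            if g_of(u) != rhs_of(u):
                heapq.heappush(U, (key(u), u))
            continue
        if g_of(u) > rhs_of(u):           # overconsistent: settle u
            g[u] = rhs_of(u)
            for s in neighbors(u):
                update_vertex(s)
        else:                             # underconsistent: undo and repair
            g[u] = INF
            update_vertex(u)
            for s in neighbors(u):
                update_vertex(s)

    if rhs_of(start) == INF:
        return None                       # goal unreachable
    path, u = [start], start
    while u != goal:                      # descend the g-value gradient
        u = min(neighbors(u), key=lambda s: 1 + g_of(s))
        path.append(u)
    return path
```

The point of maintaining both g and rhs is that when an edge cost changes, only the vertices whose rhs values are invalidated re-enter the queue, which is what lets D* Lite repair a plan locally instead of searching from scratch.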

601 citations