Author

Mark Maimone

Bio: Mark Maimone is an academic researcher from the California Institute of Technology. The author has contributed to research in topics: Mars Exploration Program & Exploration of Mars. The author has an h-index of 32 and has co-authored 62 publications receiving 4,295 citations. Previous affiliations of Mark Maimone include the Jet Propulsion Laboratory & Carnegie Mellon University.


Papers
Journal ArticleDOI
TL;DR: The Visual Odometry algorithm is described, several driving strategies that rely on it (including Slip Checks, Keep-out Zones, and Wheel Dragging) are discussed, and its results from the first 2 years of operations on Mars are summarized.
Abstract: NASA's two Mars Exploration Rovers (MER) have successfully demonstrated a robotic Visual Odometry capability on another world for the first time. This provides each rover with accurate knowledge of its position, allowing it to autonomously detect and compensate for any unforeseen slip encountered during a drive. It has enabled the rovers to drive safely and more effectively in highly sloped and sandy terrains and has resulted in increased mission science return by reducing the number of days required to drive into interesting areas. The MER Visual Odometry system comprises onboard software for comparing stereo pairs taken by the pointable mast-mounted 45 deg FOV Navigation cameras (NAVCAMs). The system computes an update to the 6 degree of freedom rover pose (x, y, z, roll, pitch, yaw) by tracking the motion of autonomously selected terrain features between two pairs of 256×256 stereo images. It has demonstrated good performance with high rates of successful convergence (97% on Spirit, 95% on Opportunity), successfully detected slip ratios as high as 125%, and measured changes as small as 2 mm, even while driving on slopes as high as 31 deg. Visual Odometry was used over 14% of the first 10.7 km driven by both rovers. During the first 2 years of operations, Visual Odometry evolved from an “extra credit” capability into a critical vehicle safety system. In this paper we describe our Visual Odometry algorithm, discuss several driving strategies that rely on it (including Slip Checks, Keep-out Zones, and Wheel Dragging), and summarize its results from the first 2 years of operations on Mars. © 2006 Wiley Periodicals, Inc.

634 citations
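One use of the Visual Odometry capability described in the entry above is the Slip Check: the motion measured by Visual Odometry is compared against the motion commanded through wheel odometry, and the drive is halted if too much of the commanded motion was lost to slip. The Python sketch below illustrates only the idea; the function names, the 40% threshold, and the planar treatment of motion are illustrative assumptions, not the MER flight software.

```python
import numpy as np

def slip_ratio(commanded_delta, visual_delta):
    """Fraction of the commanded motion lost to slip (exceeds 1 if the rover slid backward)."""
    commanded = np.asarray(commanded_delta, dtype=float)
    achieved = np.asarray(visual_delta, dtype=float)
    d = float(np.linalg.norm(commanded))
    if d < 1e-6:
        return 0.0
    # Progress of the visually measured motion along the commanded direction.
    progress = float(np.dot(achieved, commanded)) / d
    return (d - progress) / d

def slip_check(commanded_delta, visual_delta, max_slip=0.40):
    """Return True if the drive step may continue, False if it should halt (threshold is made up)."""
    return slip_ratio(commanded_delta, visual_delta) <= max_slip

# Example: the rover was commanded 0.50 m forward but visual odometry measured only 0.20 m.
print(slip_check((0.50, 0.0), (0.20, 0.0)))  # False -> 60% slip, halt and let operators replan
```

Because the measured motion is projected onto the commanded direction, the ratio can exceed 100% when the vehicle slides backward, consistent with the slip ratios above 100% reported in the abstract.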

Proceedings ArticleDOI
09 Mar 2002
TL;DR: The radiation effects analysis suggesting that commercial grade processors are likely to be adequate for Mars surface missions is summarized, and the level of speedup that may accrue from using them instead of radiation hardened parts is discussed.
Abstract: NASA's Mars Exploration Rover (MER) missions will land twin rovers on the surface of Mars in 2004. These rovers will have the ability to navigate safely through unknown and potentially hazardous terrain, using autonomous passive stereo vision to detect potential terrain hazards before driving into them. Unfortunately, the computational power of currently available radiation hardened processors limits the distance (and therefore the science) that can be safely covered by any rover in a given time frame. We present overviews of our current rover vision and navigation systems, to provide context for the types of computation that are required to navigate safely. We also present baseline timing results that represent a lower bound in achievable performance (useful for systems engineering studies of future missions), and describe ways to improve that performance using commercial grade (as opposed to radiation hardened) processors. In particular, we document speedups to our stereo vision system that were achieved using the vectorized operations provided by Pentium MMX technology. Timing data were derived from implementations on several platforms: a prototype Mars rover with flight-like electronics (the Athena Software Development Model (SDM) rover), a RAD6000 computing platform (as will be used in the 2003 MER missions), and research platforms with commercial Pentium III and Sparc processors. Finally, we summarize the radiation effects analysis that suggests that commercial grade processors are likely to be adequate for Mars surface missions, and discuss the level of speedup that may accrue from using these instead of radiation hardened parts.

428 citations
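The stereo speedups discussed in the entry above came from vectorized (MMX) integer operations applied to the correlation step of the stereo pipeline. As a loose, hedged analogy only, the sketch below computes a winner-take-all disparity map with a vectorized sum-of-absolute-differences search in NumPy; the window size, disparity range, and overall structure are illustrative assumptions and not the rover's stereo implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=32, win=5):
    """Winner-take-all disparity map for a rectified grayscale stereo pair.

    left, right: (H, W) float arrays, rectified so matching points lie on the same row.
    Returns an (H, W) integer array of disparities in [0, max_disp).
    """
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # |L(x, y) - R(x - d, y)| computed for whole rows at once,
        # then averaged over a win x win window with a box filter.
        diff = np.abs(left[:, d:] - right[:, :w - d])
        costs[d, :, d:] = uniform_filter(diff, size=win)
    return np.argmin(costs, axis=0)
```

The point of the analogy is that the matching cost for each candidate disparity is evaluated over whole image rows at once rather than pixel by pixel, which is the same data-parallel idea that SIMD instructions such as MMX exploit.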

Proceedings ArticleDOI
10 Oct 2005
TL;DR: The visual odometry algorithm used on the Mars Exploration Rovers is described, and its results from the first year of operations on Mars are summarized.
Abstract: NASA's Mars Exploration Rovers (MER) were designed to traverse Viking Lander-I-style terrains: mostly flat, with many small non-obstacle rocks and occasional obstacles. During actual operations in such terrains, onboard position estimates derived solely from the onboard inertial measurement unit and wheel encoder-based odometry achieved well within the design goal of at most 10% error. However, MER vehicles were also driven along slippery slopes tilted as high as 31 degrees. In such conditions an additional capability was employed to maintain a sufficiently accurate onboard position estimate: visual odometry. The MER visual odometry system comprises onboard software for comparing stereo pairs taken by the pointable mast-mounted 45 degree FOV navigation cameras (NAV-CAMs). The system computes an update to the 6-DOF rover pose (x, y, z, roll, pitch, yaw) by tracking the motion of autonomously-selected "interesting" terrain features between two pairs of stereo images, in both 2D pixel and 3D world coordinates. A maximum likelihood estimator is applied to the computed 3D offsets to produce a final, corrected estimate of vehicle motion between the two pairs. In this paper we describe the visual odometry algorithm used on the Mars Exploration Rovers, and summarize its results from the first year of operations on Mars.

298 citations
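The abstract above describes tracking 3D feature positions between stereo pairs and applying a maximum likelihood estimator to the 3D offsets. As a simplified stand-in for that estimation step, the sketch below computes the least-squares rigid transform relating two matched sets of 3-D feature positions (the standard SVD/Procrustes solution), from which the rover's motion between the two stereo pairs follows; it ignores the per-feature covariances and outlier rejection that a maximum-likelihood formulation would include.

```python
import numpy as np

def rigid_transform_3d(p_before, p_after):
    """Least-squares rigid transform (R, t) such that R @ p_before[i] + t ~= p_after[i].

    p_before, p_after: (N, 3) arrays of the same terrain features seen before and after a motion step.
    """
    mu_b = p_before.mean(axis=0)
    mu_a = p_after.mean(axis=0)
    H = (p_before - mu_b).T @ (p_after - mu_a)   # 3x3 cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against a reflection
    R = Vt.T @ D @ U.T
    t = mu_a - R @ mu_b
    return R, t
```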

Journal ArticleDOI
TL;DR: A methodology for long-distance rover navigation that achieves both a high level of robustness and a low rate of error growth using robust estimation of ego-motion is described and implemented to run on-board a prototype Mars rover.

270 citations

Journal ArticleDOI
TL;DR: The scientific payloads of the Mars Exploration Rovers will include a stereo pair of panoramic cameras and a microscopic imager.
Abstract: The scientific payloads of the rovers will include a stereo pair of panoramic cameras and a microscopic imager.

205 citations


Cited by
Journal ArticleDOI
TL;DR: Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it.
Abstract: Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?

2,039 citations
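For readers unfamiliar with the "de-facto standard formulation" the survey above refers to, SLAM is typically posed as maximum-a-posteriori estimation solved via nonlinear least squares over the robot trajectory (and map), with factors for odometry and loop closures. The toy Python sketch below illustrates that structure using 2-D positions only; real systems optimize over SE(2)/SE(3) poses and exploit sparsity with dedicated solvers such as g2o or GTSAM, and all numbers here are made up.

```python
import numpy as np
from scipy.optimize import least_squares

# Relative 2-D motion measurements between consecutive poses (odometry) ...
odometry = [(0, 1, np.array([1.0, 0.0])),
            (1, 2, np.array([1.0, 0.1])),
            (2, 3, np.array([1.1, -0.1]))]
# ... plus one loop-closure measurement saying pose 3 observes pose 0 at (-3, 0).
loop_closures = [(3, 0, np.array([-3.0, 0.0]))]

def residuals(x):
    poses = x.reshape(-1, 2)
    res = [poses[0]]                              # prior factor pinning pose 0 at the origin
    for i, j, z in odometry + loop_closures:
        res.append((poses[j] - poses[i]) - z)     # measurement residual for each factor
    return np.concatenate(res)

x0 = np.zeros(4 * 2)                              # initial guess: all four poses at the origin
solution = least_squares(residuals, x0)
print(solution.x.reshape(-1, 2))                  # optimized 2-D poses after the loop closure
```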

Journal ArticleDOI
TL;DR: What is now the de-facto standard formulation for SLAM is presented, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers.
Abstract: Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?

1,828 citations

Proceedings Article
01 Jan 2004
TL;DR: A system that estimates the motion of a stereo head or a single moving camera from video input, operating in real time with low delay; the motion estimates are used for navigational purposes.
Abstract: We present a system that estimates the motion of a stereo head or a single moving camera based on video input. The system operates in real-time with low delay and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates what we call visual odometry, i.e. motion estimates from visual input alone. No prior knowledge of the scene or of the motion is necessary. The visual odometry can also be used in conjunction with information from other sources such as GPS, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive and handheld platforms. We focus on results with an autonomous ground vehicle. We give examples of camera trajectories estimated purely from images over previously unseen distances and periods of time.

1,786 citations
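The "geometric hypothesize-and-test architecture" mentioned in the abstract above is a RANSAC-style loop: fit a motion hypothesis to a minimal random sample of feature matches, score it by how many matches it explains, and keep the best hypothesis. The sketch below shows that pattern for the simplest possible motion model, a 2-D image translation; the system described above estimates full camera motion from minimal samples of point correspondences, which is considerably more involved. Function names and tolerances are illustrative.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=200, inlier_tol=2.0, seed=None):
    """Robustly estimate the 2-D translation mapping pts_a onto pts_b.

    pts_a, pts_b: (N, 2) arrays of matched feature locations, possibly containing outliers.
    Returns (translation, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_t = np.zeros(2)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))                 # minimal sample: a single match
        t = pts_b[i] - pts_a[i]                      # hypothesized translation
        err = np.linalg.norm(pts_b - (pts_a + t), axis=1)
        inliers = err < inlier_tol                   # score: how many matches it explains
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    if best_inliers.any():
        # Refine by refitting on all inliers of the best hypothesis.
        best_t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```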

Book
25 Jan 2008
TL;DR: The goal of this review is to present a unified treatment of HRI-related problems, to identify key themes, and discuss challenge problems that are likely to shape the field in the near future.
Abstract: Human-Robot Interaction (HRI) has recently received considerable attention in the academic community, in labs, in technology companies, and through the media. Because of this attention, it is desirable to present a survey of HRI to serve as a tutorial to people outside the field and to promote discussion of a unified vision of HRI within the field. The goal of this review is to present a unified treatment of HRI-related problems, to identify key themes, and discuss challenge problems that are likely to shape the field in the near future. Although the review follows a survey structure, the goal of presenting a coherent "story" of HRI means that there are necessarily some well-written, intriguing, and influential papers that are not referenced. Instead of trying to survey every paper, we describe the HRI story from multiple perspectives with an eye toward identifying themes that cross applications. The survey attempts to include papers that represent a fair cross section of the universities, government efforts, industry labs, and countries that contribute to HRI, and a cross section of the disciplines that contribute to the field, such as human factors, robotics, cognitive psychology, and design.

1,602 citations

Journal ArticleDOI
TL;DR: Visual odometry is the process of estimating the egomotion of an agent (e.g., vehicle, human, or robot) using only the input of single or multiple cameras attached to it; application domains include robotics, wearable computing, augmented reality, and automotive.
Abstract: Visual odometry (VO) is the process of estimating the egomotion of an agent (e.g., vehicle, human, or robot) using only the input of single or multiple cameras attached to it. Application domains include robotics, wearable computing, augmented reality, and automotive. The term VO was coined in 2004 by Nistér in his landmark paper. The term was chosen for its similarity to wheel odometry, which incrementally estimates the motion of a vehicle by integrating the number of turns of its wheels over time. Likewise, VO operates by incrementally estimating the pose of the vehicle through examination of the changes that motion induces on the images of its onboard cameras. For VO to work effectively, there should be sufficient illumination in the environment and a static scene with enough texture to allow apparent motion to be extracted. Furthermore, consecutive frames should be captured by ensuring that they have sufficient scene overlap.

1,371 citations
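As the abstract above notes, VO works by incrementally composing frame-to-frame motion estimates into a global pose, which is also why its error grows with distance traveled. A minimal sketch of that composition step, using planar SE(2) transforms and made-up per-frame motions, is given below.

```python
import numpy as np

def se2(dx, dy, dtheta):
    """Homogeneous 3x3 matrix for a planar relative motion (dx, dy, dtheta)."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

# Frame-to-frame relative motions as a VO front end might report them (made-up numbers).
relative_motions = [se2(0.5, 0.0, 0.0),
                    se2(0.5, 0.0, np.pi / 2),
                    se2(0.5, 0.0, 0.0)]

pose = np.eye(3)                  # global pose, starting at the origin
for T in relative_motions:
    pose = pose @ T               # compose; small per-step errors accumulate as drift
print(pose[:2, 2])                # current (x, y) position of the camera/vehicle
```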