Book Chapter

Estimating uncertain spatial relationships in robotics

01 Mar 1987, Vol. 4, pp. 850-850
TL;DR: A representation for spatial information, called the stochastic map, is introduced, together with procedures for building it, reading information from it, and revising it incrementally as new information is obtained, providing a general solution to the problem of estimating uncertain relative spatial relationships.
Abstract: In this paper, we describe a representation for spatial information, called the stochastic map, and associated procedures for building it, reading information from it, and revising it incrementally as new information is obtained. The map contains the estimates of relationships among objects in the map, and their uncertainties, given all the available information. The procedures provide a general solution to the problem of estimating uncertain relative spatial relationships. The estimates are probabilistic in nature, an advance over the previous, very conservative, worst-case approaches to the problem. Finally, the procedures are developed in the context of state-estimation and filtering theory, which provides a solid basis for numerous extensions.
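The stochastic map can be pictured as one stacked state vector holding every object's spatial variables together with a joint covariance matrix over all of them, from which any relative relationship and its uncertainty can be read. A minimal sketch, assuming a 2-D point-feature state and made-up class and variable names rather than the paper's notation:

```python
import numpy as np

# A minimal stochastic map: stacked 2-D feature positions plus a joint covariance.
# The point-feature state and all names are illustrative assumptions.
class StochasticMap:
    def __init__(self):
        self.x = np.zeros(0)          # stacked state estimates
        self.P = np.zeros((0, 0))     # joint covariance of all entries

    def add_object(self, mean, cov):
        """Append a new 2-D object with an initial estimate and uncertainty."""
        i = self.x.size
        self.x = np.concatenate([self.x, mean])
        P = np.zeros((i + 2, i + 2))
        P[:i, :i] = self.P
        P[i:, i:] = cov
        self.P = P
        return i                      # index of the new object's block

    def relative_estimate(self, i, j):
        """Read the estimated relationship x_j - x_i and its uncertainty."""
        d = self.x[j:j+2] - self.x[i:i+2]
        J = np.zeros((2, self.x.size))
        J[:, i:i+2] = -np.eye(2)
        J[:, j:j+2] = np.eye(2)
        return d, J @ self.P @ J.T    # first-order covariance of the relationship

m = StochasticMap()
a = m.add_object(np.array([0.0, 0.0]), 0.01 * np.eye(2))
b = m.add_object(np.array([2.0, 1.0]), 0.05 * np.eye(2))
print(m.relative_estimate(a, b))
```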


Citations
Book
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.
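As a concrete illustration of the bootstrap/particle filters surveyed here, the sketch below runs one predict-weight-resample cycle for an assumed 1-D random-walk state with a Gaussian sensor model; the model and all parameter values are illustrative, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter_step(particles, z, motion_std=0.5, obs_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through the (assumed) random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: likelihood of the observation z under a Gaussian sensor model.
    w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    w /= w.sum()
    # Resample: "survival of the fittest" -- draw particles in proportion to weight.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal(0.0, 2.0, size=1000)
for z in [0.4, 0.9, 1.3]:             # a short synthetic measurement sequence
    particles = bootstrap_filter_step(particles, z)
print(particles.mean(), particles.std())
```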

6,574 citations

01 Jan 2002
TL;DR: This thesis will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
Abstract: Dynamic Bayesian Networks: Representation, Inference and Learning, by Kevin Patrick Murphy, Doctor of Philosophy in Computer Science, University of California, Berkeley (Professor Stuart Russell, Chair). Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
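The HMM mentioned above is the simplest DBN, and exact filtering in it is the classical forward recursion; a minimal sketch with made-up transition and emission matrices:

```python
import numpy as np

def hmm_filter(prior, A, B, obs):
    """Exact filtering in an HMM (the simplest DBN): predict with A, correct with B."""
    alpha = prior.copy()
    for o in obs:
        alpha = B[:, o] * (A.T @ alpha)   # one forward-recursion step
        alpha /= alpha.sum()              # normalize to a proper belief state
    return alpha

A = np.array([[0.9, 0.1], [0.2, 0.8]])    # assumed state-transition matrix
B = np.array([[0.8, 0.2], [0.3, 0.7]])    # assumed emission matrix (rows: states, cols: symbols)
prior = np.array([0.5, 0.5])
print(hmm_filter(prior, A, B, obs=[0, 0, 1]))
```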

2,757 citations

Proceedings Article
09 May 2011
TL;DR: g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions, is presented; while being general, it offers performance comparable to implementations of state-of-the-art approaches for the specific problems.
Abstract: Many popular problems in robotics and computer vision, including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA), can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems, and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that, while being general, g2o offers performance comparable to implementations of state-of-the-art approaches for the specific problems.
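To make the graph-based least-squares formulation concrete, here is a toy 1-D pose graph solved by Gauss-Newton, with error terms on edges, accumulated normal equations, and the first pose anchored to fix the gauge; this is an illustrative toy with the same structure g2o generalizes, not g2o's API:

```python
import numpy as np

# Toy 1-D pose graph: nodes are scalar poses, edges are relative measurements.
edges = [(0, 1, 1.0), (1, 2, 1.1), (0, 2, 2.0)]   # (i, j, measured x_j - x_i)
x = np.array([0.0, 0.0, 0.0])                     # initial guess

for _ in range(10):                               # Gauss-Newton iterations
    H = np.zeros((3, 3))
    b = np.zeros(3)
    for i, j, z in edges:
        e = (x[j] - x[i]) - z                     # edge error
        J = np.zeros(3)
        J[i], J[j] = -1.0, 1.0                    # Jacobian of the error w.r.t. the state
        H += np.outer(J, J)                       # information matrix (unit edge weights assumed)
        b += J * e
    H[0, 0] += 1e6                                # gauge constraint: anchor the first pose
    x += np.linalg.solve(H, -b)                   # solve the (sparse) normal equations

print(x)   # approximately [0, 0.97, 2.03] -- the least-squares compromise
```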

2,192 citations


Cites methods from "Estimating uncertain spatial relati..."

  • ...The ⊞ operator applies the increment Δx_i to x_i by using the standard motion composition operator ⊕ (see [25]) after converting the increment to the same representation as the state variable:...

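The motion composition operator ⊕ referenced in the quote above is the planar pose compounding of Smith, Self, and Cheeseman; a minimal sketch of it and of a "⊞"-style incremental update (function names are illustrative, not g2o's; angle normalization is omitted):

```python
import numpy as np

def oplus(a, b):
    """Motion composition a ⊕ b for planar poses (x, y, theta)."""
    x, y, t = a
    dx, dy, dt = b
    return np.array([x + dx * np.cos(t) - dy * np.sin(t),
                     y + dx * np.sin(t) + dy * np.cos(t),
                     t + dt])

def boxplus(pose, delta):
    """Apply a small increment to a pose by motion composition (a '⊞'-style update)."""
    return oplus(pose, delta)

pose = np.array([1.0, 2.0, np.pi / 2])
print(boxplus(pose, np.array([0.1, 0.0, 0.05])))   # move 0.1 m forward, turn slightly
```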

Proceedings Article
18 Jul 1999
TL;DR: Monte Carlo Localization is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success; it yields improved accuracy while requiring an order of magnitude less computation than previous approaches.
Abstract: This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation "where needed." The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement.
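A minimal sketch of one MCL cycle for a planar (x, y, θ) pose: sample from an assumed odometry motion model, weight each particle by an assumed range-to-landmark sensor model, and resample. The adaptive sample-size logic of the paper is omitted, and all model parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mcl_update(particles, odom, z, landmark, odom_std=(0.05, 0.05, 0.02), z_std=0.2):
    """One MCL cycle: sample the motion model, weight by the sensor model, resample."""
    # Motion update: compound the noisy odometry (dx, dy, dtheta) onto each particle.
    d = odom + rng.normal(0.0, odom_std, size=particles.shape)
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * d[:, 0] - s * d[:, 1]
    particles[:, 1] += s * d[:, 0] + c * d[:, 1]
    particles[:, 2] += d[:, 2]
    # Measurement update: weight by the likelihood of a range reading to a known landmark.
    expected = np.hypot(landmark[0] - particles[:, 0], landmark[1] - particles[:, 1])
    w = np.exp(-0.5 * ((z - expected) / z_std) ** 2)
    w /= w.sum()
    # Resample proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.uniform([-1, -1, -np.pi], [1, 1, np.pi], size=(500, 3))
particles = mcl_update(particles, odom=np.array([0.2, 0.0, 0.0]), z=2.3, landmark=(2.5, 0.0))
print(particles.mean(axis=0))
```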

1,206 citations


Cites methods from "Estimating uncertain spatial relati..."

  • ...The idea of probabilistic state estimation goes back to Kalman filters (Gelb 1974; Smith, Self, & Cheeseman 1990), which use multivariate Gaussians to represent the robot’s belief....


Journal Article
TL;DR: An introductory description of the graph-based SLAM problem is provided, and a state-of-the-art solution that is based on least-squares error minimization and exploits the structure of the SLAM problem during optimization is discussed.
Abstract: Being able to build a map of the environment and to simultaneously localize within this map is an essential skill for mobile robots navigating in unknown environments in absence of external referencing systems such as GPS. This so-called simultaneous localization and mapping (SLAM) problem has been one of the most popular research topics in mobile robotics for the last two decades and efficient approaches for solving this task have been proposed. One intuitive way of formulating SLAM is to use a graph whose nodes correspond to the poses of the robot at different points in time and whose edges represent constraints between the poses. The latter are obtained from observations of the environment or from movement actions carried out by the robot. Once such a graph is constructed, the map can be computed by finding the spatial configuration of the nodes that is mostly consistent with the measurements modeled by the edges. In this paper, we provide an introductory description to the graph-based SLAM problem. Furthermore, we discuss a state-of-the-art solution that is based on least-squares error minimization and exploits the structure of the SLAM problems during optimization. The goal of this tutorial is to enable the reader to implement the proposed methods from scratch.
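In symbols, the graph-based SLAM problem described above is the nonlinear least-squares program below, written in the usual tutorial notation (e_ij is the error of edge (i, j), Ω_ij its information matrix, and J_ij the Jacobian of e_ij at the current estimate):

```latex
% Graph-based SLAM as nonlinear least squares.
\mathbf{x}^{*} \;=\; \operatorname*{arg\,min}_{\mathbf{x}} \; F(\mathbf{x}),
\qquad
F(\mathbf{x}) \;=\; \sum_{(i,j)} \mathbf{e}_{ij}(\mathbf{x}_i,\mathbf{x}_j)^{\top}\,
\boldsymbol{\Omega}_{ij}\,\mathbf{e}_{ij}(\mathbf{x}_i,\mathbf{x}_j)

% One Gauss-Newton step: build the sparse normal equations from the linearized
% edge errors, solve for the increment, and apply it to the current estimate.
\mathbf{H}\,\Delta\mathbf{x}^{*} = -\mathbf{b},
\qquad
\mathbf{H} = \sum_{(i,j)} \mathbf{J}_{ij}^{\top}\boldsymbol{\Omega}_{ij}\mathbf{J}_{ij},
\quad
\mathbf{b} = \sum_{(i,j)} \mathbf{J}_{ij}^{\top}\boldsymbol{\Omega}_{ij}\mathbf{e}_{ij},
\qquad
\mathbf{x} \leftarrow \mathbf{x} \boxplus \Delta\mathbf{x}^{*}
```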

1,103 citations

References
Book
01 Jan 1965
TL;DR: This book covers the meaning of probability, the axioms of probability, the concept of a random variable, and stochastic processes, including Markov chains and queueing theory.
Abstract: Part 1, Probability and Random Variables: 1. The Meaning of Probability; 2. The Axioms of Probability; 3. Repeated Trials; 4. The Concept of a Random Variable; 5. Functions of One Random Variable; 6. Two Random Variables; 7. Sequences of Random Variables; 8. Statistics. Part 2, Stochastic Processes: 9. General Concepts; 10. Random Walk and Other Applications; 11. Spectral Representation; 12. Spectral Estimation; 13. Mean Square Estimation; 14. Entropy; 15. Markov Chains; 16. Markov Processes and Queueing Theory.

13,886 citations

Book
01 Jan 2002
TL;DR: This book discusses the meaning of probability, the axioms of probability, repeated trials, the concept of a random variable, and stochastic processes.
Abstract: Part 1, Probability and Random Variables: 1. The Meaning of Probability; 2. The Axioms of Probability; 3. Repeated Trials; 4. The Concept of a Random Variable; 5. Functions of One Random Variable; 6. Two Random Variables; 7. Sequences of Random Variables; 8. Statistics. Part 2, Stochastic Processes: 9. General Concepts; 10. Random Walk and Other Applications; 11. Spectral Representation; 12. Spectral Estimation; 13. Mean Square Estimation; 14. Entropy; 15. Markov Chains; 16. Markov Processes and Queueing Theory.

12,407 citations

Book
01 Jan 1974
TL;DR: This is the first book on optimal estimation to place its major emphasis on practical applications, treating the subject more from an engineering than a mathematical orientation; the theory and practice of optimal estimation are presented.
Abstract: This is the first book on optimal estimation that places its major emphasis on practical applications, treating the subject more from an engineering than a mathematical orientation. Even so, theoretical and mathematical concepts are introduced and developed sufficiently to make the book a self-contained source of instruction for readers without prior knowledge of the basic principles of the field. The work is the product of the technical staff of The Analytic Sciences Corporation (TASC), an organization whose success has resulted largely from its applications of optimal estimation techniques to a wide variety of real situations involving large-scale systems. Arthur Gelb writes in the Foreword that "It is our intent throughout to provide a simple and interesting picture of the central issues underlying modern estimation theory and practice. Heuristic, rather than theoretically elegant, arguments are used extensively, with emphasis on physical insights and key questions of practical importance." Numerous illustrative examples, many based on actual applications, have been interspersed throughout the text to lead the student to a concrete understanding of the theoretical material. The inclusion of problems with "built-in" answers at the end of each of the nine chapters further enhances the self-study potential of the text. After a brief historical prelude, the book introduces the mathematics underlying random process theory and state-space characterization of linear dynamic systems. The theory and practice of optimal estimation is then presented, including filtering, smoothing, and prediction. Both linear and non-linear systems, and continuous- and discrete-time cases, are covered in considerable detail. New results are described concerning the application of covariance analysis to non-linear systems and the connection between observers and optimal estimators. The final chapters treat such practical and often pivotal issues as suboptimal structure and computer loading considerations. This book is an outgrowth of a course given by TASC at a number of US Government facilities. Virtually all of the members of the TASC technical staff have, at one time and in one way or another, contributed to the material contained in the work.
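Since the book's subject is optimal filtering, a minimal linear Kalman filter predict/update cycle is sketched below; the constant-velocity model and all matrix values are generic illustrations, not taken from the book:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict through the linear dynamics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z.
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity example with illustrative values.
F = np.array([[1.0, 1.0], [0.0, 1.0]])        # state transition (position, velocity)
Q = 0.01 * np.eye(2)                          # process noise
H = np.array([[1.0, 0.0]])                    # measure position only
R = np.array([[0.25]])                        # measurement noise
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([1.2]), F=F, Q=Q, H=H, R=R)
print(x, P)
```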

6,015 citations


"Estimating uncertain spatial relati..." refers background or methods in this paper

  • ...Given the sensor model, the conditional estimates of the sensor values and their uncertainties, and an actual sensor measurement, we can update the state estimate using the Kalman Filter equations [Gelb, 1984] given below, and described in the next section:...


  • ...The error in the estimation due to the non-linearities in h can be greatly reduced by iteration, using the Iterated Extended Kalman Filter equations [Gelb, 1984]:...


  • ...A continuous dynamics model can be developed given a particular robot, and the above equations can be reformulated as functions of time (see [Gelb, 1984])....


  • ...If the functions f in (17) and h in (15) are linear in the state vector variables, then the partial derivative matrices F and H are simply constants, and the update formulae (16) with (17), (15), and (18), represent the Kalman Filter [Gelb, 1984]....

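The iterated extended Kalman filter update cited in the quotes above relinearizes the measurement function about each refined estimate; a minimal sketch with a toy range measurement and made-up values, not the paper's equations verbatim:

```python
import numpy as np

def iterated_ekf_update(x, P, z, h, H_jac, R, iters=5):
    """Iterated EKF measurement update: relinearize h about the refined estimate each pass."""
    x_i = x.copy()
    for _ in range(iters):
        H = H_jac(x_i)                                   # Jacobian of h at the current iterate
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        # Iterated update: keep the prior mean x in the correction term.
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    H = H_jac(x_i)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_i, P_new

# Toy example: a range measurement to the origin from a 2-D position (illustrative).
h = lambda x: np.array([np.hypot(x[0], x[1])])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
x, P = np.array([1.0, 1.0]), 0.5 * np.eye(2)
print(iterated_ekf_update(x, P, z=np.array([1.6]), h=h, H_jac=H_jac, R=np.array([[0.01]])))
```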

01 Jan 1987
TL;DR: The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships among objects, as well as estimating the uncertainty associated with the relationships.

1,421 citations
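The compounding this reference makes practical can be sketched to first order: the composed relationship's covariance is propagated through the Jacobians of the composition. A minimal planar (three degree-of-freedom) version with illustrative values, assuming the two relationships are independent:

```python
import numpy as np

def compound(ab, bc):
    """Compound two planar relationships a->b and b->c into a->c (x, y, theta)."""
    x, y, t = ab
    dx, dy, dt = bc
    return np.array([x + dx * np.cos(t) - dy * np.sin(t),
                     y + dx * np.sin(t) + dy * np.cos(t),
                     t + dt])

def compound_cov(ab, bc, P_ab, P_bc):
    """First-order covariance of the compounded relationship via the composition Jacobians."""
    x, y, t = ab
    dx, dy, dt = bc
    J1 = np.array([[1.0, 0.0, -dx * np.sin(t) - dy * np.cos(t)],   # d(a->c)/d(a->b)
                   [0.0, 1.0,  dx * np.cos(t) - dy * np.sin(t)],
                   [0.0, 0.0, 1.0]])
    J2 = np.array([[np.cos(t), -np.sin(t), 0.0],                   # d(a->c)/d(b->c)
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
    return J1 @ P_ab @ J1.T + J2 @ P_bc @ J2.T   # assumes the two relationships are independent

ab = np.array([1.0, 0.0, np.pi / 4])
bc = np.array([1.0, 0.0, 0.0])
P = 0.01 * np.eye(3)
print(compound(ab, bc))
print(compound_cov(ab, bc, P, P))
```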