Editors: Sequential Monte Carlo Methods in Practice

About: The article was published on 2001-01-01 and is currently open access. It has received 1,215 citations to date. The article focuses on the topics: Dynamic Monte Carlo method & Monte Carlo method in statistical physics.
Citations
Journal ArticleDOI
TL;DR: The purpose of this introductory paper is to introduce the Monte Carlo method with an emphasis on probabilistic machine learning and to review the main building blocks of modern Markov chain Monte Carlo simulation.
Abstract: The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with an emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new and interesting research horizons.
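
For orientation, here is a minimal sketch of random-walk Metropolis-Hastings, the kind of basic MCMC building block such an introduction reviews. The function name, the Gaussian-mixture target, and the step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: a basic MCMC building block."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    logp = log_target(x)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # Symmetric Gaussian proposal centred on the current state.
        x_prop = x + step * rng.standard_normal()
        logp_prop = log_target(x_prop)
        # Accept with probability min(1, target(x') / target(x)).
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = x_prop, logp_prop
        samples[i] = x
    return samples

# Illustrative target: an unnormalised two-component Gaussian mixture.
def log_mixture(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

draws = random_walk_metropolis(log_mixture, x0=0.0)
print(draws.mean(), draws.std())
```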

2,579 citations

01 Jan 2003
TL;DR: This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map.
Abstract: Simultaneous Localization and Mapping (SLAM) is an essential capability for mobile robots exploring unknown environments. The Extended Kalman Filter (EKF) has served as the de-facto approach to SLAM for the last fifteen years. However, EKF-based SLAM algorithms suffer from two well-known shortcomings that complicate their application to large, real-world environments: quadratic complexity and sensitivity to failures in data association. I will present an alternative approach to SLAM that specifically addresses these two areas. This approach, called FastSLAM, factors the full SLAM posterior exactly into a product of a robot path posterior, and N landmark posteriors conditioned on the robot path estimate. This factored posterior can be approximated efficiently using a particle filter. The time required to incorporate an observation into FastSLAM scales logarithmically with the number of landmarks in the map. In addition to sampling over robot paths, FastSLAM can sample over potential data associations. Sampling over data associations enables FastSLAM to be used in environments with highly ambiguous landmark identities. This dissertation will describe the FastSLAM algorithm given both known and unknown data association. The performance of FastSLAM will be compared against the EKF on simulated and real-world data sets. Results will show that FastSLAM can produce accurate maps in extremely large environments, and in environments with substantial data association ambiguity. Finally, a convergence proof for FastSLAM in linear-Gaussian worlds will be presented.
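
The factorization described above lends itself to a Rao-Blackwellized particle filter: each particle carries a sampled pose plus one small EKF per landmark, conditioned on that particle's path. The sketch below illustrates that structure only; the planar motion model, range-only sensor, landmark initialization, and all parameter values are simplified stand-ins, not the dissertation's implementation.

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

def motion_model(pose, control):
    # Hypothetical noisy planar motion: pose and control are (dx, dy) vectors.
    return pose + control + 0.05 * rng.standard_normal(2)

def predicted_range(pose, landmark_mean):
    return np.linalg.norm(landmark_mean - pose)

class Particle:
    """One FastSLAM-style particle: a pose sample plus one small EKF per landmark."""
    def __init__(self):
        self.pose = np.zeros(2)
        self.landmarks = {}  # landmark id -> (mean, covariance)

def fastslam_step(particles, control, observations, meas_var=0.1):
    """One sample-weight-resample step over robot paths.

    `observations` maps landmark id -> measured range. Landmark estimates are
    updated with a per-particle EKF, conditioned on that particle's path.
    """
    weights = np.ones(len(particles))
    for i, p in enumerate(particles):
        p.pose = motion_model(p.pose, control)          # sample the path posterior
        for lm_id, z in observations.items():
            if lm_id not in p.landmarks:
                # Crude initialisation (range-only is underdetermined; a real
                # system would use a range-bearing sensor).
                p.landmarks[lm_id] = (p.pose + np.array([z, 0.0]), np.eye(2))
                continue
            mean, cov = p.landmarks[lm_id]
            z_hat = predicted_range(p.pose, mean)
            H = (mean - p.pose) / max(z_hat, 1e-9)      # Jacobian of range w.r.t. landmark
            S = H @ cov @ H + meas_var                  # innovation variance
            K = cov @ H / S                             # Kalman gain
            p.landmarks[lm_id] = (mean + K * (z - z_hat),
                                  cov - np.outer(K, H @ cov))
            weights[i] *= np.exp(-0.5 * (z - z_hat) ** 2 / S) / np.sqrt(2 * np.pi * S)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [copy.deepcopy(particles[j]) for j in idx]

particles = [Particle() for _ in range(100)]
particles = fastslam_step(particles, control=np.array([1.0, 0.0]),
                          observations={0: 3.0, 1: 5.5})
```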

2,358 citations


Cites methods from "Editors: Sequential Monte Carlo Met..."

  • ...The FastSLAM algorithm implements the path estimator p(s^t | z^t, u^t, n^t) using a modified particle filter [4]....

Journal ArticleDOI
TL;DR: A more robust algorithm called MixtureMCL is developed, which integrates two complementary ways of generating samples in Monte Carlo Localization, and is applied to mobile robots equipped with range finders.
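
A rough sketch of the mixture idea, assuming the usual MCL setup: most particles are propagated through the motion model and weighted by the measurement, while a small fraction are sampled from the measurement and weighted against the predicted belief. The stand-in models, the kernel-density approximation of the predicted belief, and the mixing rate phi are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixture_mcl_step(particles, control, z, motion_model, meas_likelihood,
                     sample_pose_from_measurement, phi=0.1):
    """One Mixture-MCL-style update (sketch).

    With probability 1 - phi a particle is handled as in plain MCL: pushed
    through the motion model and weighted by the measurement likelihood.
    With probability phi the dual is used: a pose is sampled from the
    measurement and weighted by the predicted belief, approximated here by a
    Gaussian kernel density over the propagated particle set.
    """
    n = len(particles)
    predicted = np.array([motion_model(p, control) for p in particles])

    new_particles, weights = [], []
    for _ in range(n):
        if rng.uniform() < phi:
            pose = sample_pose_from_measurement(z)
            d2 = np.sum((predicted - pose) ** 2, axis=1)
            w = np.mean(np.exp(-0.5 * d2 / 0.25))       # kernel bandwidth is arbitrary
        else:
            pose = predicted[rng.integers(n)]
            w = meas_likelihood(z, pose)
        new_particles.append(pose)
        weights.append(w)

    weights = np.asarray(weights)
    weights /= weights.sum()
    idx = rng.choice(n, size=n, p=weights)
    return [new_particles[j] for j in idx]

# Illustrative 2-D stand-ins; a real robot would use odometry and a range finder.
motion = lambda pose, u: pose + u + 0.05 * rng.standard_normal(2)
likelihood = lambda z, pose: np.exp(-0.5 * np.sum((z - pose) ** 2) / 0.1)
sample_from_z = lambda z: z + 0.3 * rng.standard_normal(2)

belief = [rng.standard_normal(2) for _ in range(200)]
belief = mixture_mcl_step(belief, np.array([0.2, 0.0]), np.array([1.0, 1.0]),
                          motion, likelihood, sample_from_z)
```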

1,945 citations


Cites background or methods from "Editors: Sequential Monte Carlo Met..."

  • ...Particle filters, as basic statistical tools, have become popular for tracking and position estimation in the last few years, as for example documented by a forthcoming book on this topic [18]....

  • ...In the statistical literature, it is known as particle filters [17, 18, 46, 58], and recently computer vision researchers have proposed the same algorithm under the name of condensation algorithm [33]....

  • ...Bayes filters have been applied to estimation problems for decades, and a recent interest in Monte Carlo approximations [18, 26] suggests that the probabilistic paradigm is well suited for a broad range of state estimation problems in noisy temporal domains....

Proceedings ArticleDOI
28 Jul 2002
TL;DR: FastSLAM, as discussed by the authors, is an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map.
Abstract: The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map. This algorithm is based on an exact factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM algorithm on both simulated and real-world data.
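
One way to obtain the logarithmic per-observation cost claimed above is to keep each particle's landmark estimates in a balanced, path-copying (persistent) binary tree: duplicating a particle during resampling shares the whole tree, and updating a single landmark copies only the O(log K) nodes on its root-to-leaf path. The toy sketch below illustrates that path-copying idea with placeholder payloads; it is not claimed to be the paper's exact data structure.

```python
class Node:
    """Node of a persistent binary tree over landmark indices."""
    __slots__ = ("left", "right", "estimate")
    def __init__(self, left=None, right=None, estimate=None):
        self.left, self.right, self.estimate = left, right, estimate

def update(node, index, estimate, lo, hi):
    """Return a new tree with landmark `index` set; only O(log K) nodes are new."""
    if lo == hi:
        return Node(estimate=estimate)
    node = node or Node()
    mid = (lo + hi) // 2
    if index <= mid:
        return Node(update(node.left, index, estimate, lo, mid), node.right)
    return Node(node.left, update(node.right, index, estimate, mid + 1, hi))

def lookup(node, index, lo, hi):
    while node is not None and lo != hi:
        mid = (lo + hi) // 2
        if index <= mid:
            node, hi = node.left, mid
        else:
            node, lo = node.right, mid + 1
    return node.estimate if node is not None else None

K = 50_000                                    # number of landmarks
tree_a = update(None, 123, ("mean", "cov"), 0, K - 1)
tree_b = update(tree_a, 9000, ("mean2", "cov2"), 0, K - 1)  # shares almost all of tree_a
print(lookup(tree_a, 123, 0, K - 1))          # -> ('mean', 'cov')
print(lookup(tree_b, 123, 0, K - 1))          # -> ('mean', 'cov'), still visible in tree_b
print(lookup(tree_a, 9000, 0, K - 1))         # -> None, tree_a is unchanged
```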

1,912 citations

Journal ArticleDOI
TL;DR: It is shown how efficient high-dimensional proposal distributions can be built by using sequential Monte Carlo methods, which not only improves on standard Markov chain Monte Carlo schemes but also makes Bayesian inference feasible for a large class of statistical models where it was not previously so.
Abstract: Summary. Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions. Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.
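
One concrete construction in this line of work is particle marginal Metropolis-Hastings: a particle filter supplies an estimate of the marginal likelihood, which is plugged into an otherwise standard Metropolis-Hastings chain over the parameters. The sketch below uses a toy AR(1)-plus-noise model, a flat prior, and arbitrary tuning constants; the function names and model are my own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_loglik(theta, y, n_particles=200):
    """Bootstrap particle filter for a toy AR(1)-plus-noise model
    (x_t = theta * x_{t-1} + v_t, y_t = x_t + e_t, unit noise variances).
    Returns an estimate of log p(y | theta); the implied likelihood
    estimate is unbiased, which is what the MH acceptance step relies on."""
    x = rng.standard_normal(n_particles)
    loglik = 0.0
    for y_t in y:
        x = theta * x + rng.standard_normal(n_particles)            # propagate
        logw = -0.5 * (y_t - x) ** 2 - 0.5 * np.log(2.0 * np.pi)    # observation weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                              # likelihood increment
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return loglik

def pmmh(y, n_iters=1000, step=0.1):
    """Particle marginal Metropolis-Hastings: the SMC likelihood estimate is
    plugged into a random-walk MH chain over theta (flat prior for simplicity)."""
    theta = 0.0
    ll = bootstrap_loglik(theta, y)
    chain = np.empty(n_iters)
    for i in range(n_iters):
        theta_prop = theta + step * rng.standard_normal()
        ll_prop = bootstrap_loglik(theta_prop, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = theta_prop, ll_prop
        chain[i] = theta
    return chain

# Simulate data from the toy model with theta = 0.7, then sample the posterior.
true_theta, T, x_t = 0.7, 50, 0.0
y = []
for _ in range(T):
    x_t = true_theta * x_t + rng.standard_normal()
    y.append(x_t + rng.standard_normal())
print(pmmh(np.array(y))[200:].mean())   # posterior mean after discarding burn-in
```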

1,869 citations