scispace - formally typeset
Author

Daphne Roller

Bio: Daphne Roller is an academic researcher from Stanford University. The author has contributed to research in the topics Subjective logic and Clutter, has an h-index of 3, and has co-authored 3 publications receiving 1,217 citations.

Papers
Proceedings Article
09 Aug 2003
TL;DR: This paper describes a modified version of FastSLAM that overcomes important deficiencies of the original algorithm, proves convergence of the new algorithm for linear SLAM problems, and provides real-world experimental results that illustrate an order-of-magnitude improvement in accuracy over the original FastSLAM algorithm.
Abstract: Proceedings of IJCAI 2003. In [15], Montemerlo et al. proposed an algorithm called FastSLAM as an efficient and robust solution to the simultaneous localization and mapping problem. This paper describes a modified version of FastSLAM which overcomes important deficiencies of the original algorithm. We prove convergence of this new algorithm for linear SLAM problems and provide real-world experimental results that illustrate an order of magnitude improvement in accuracy over the original FastSLAM algorithm.

1,079 citations

Journal ArticleDOI
TL;DR: This approach accounts for and infers indoor clutter without hand-labeling of clutter in the training set, which is often inaccurate, yet outperforms the state-of-the-art method of Hedau et al., which requires clutter labels.
Abstract: We address the problem of understanding an indoor scene from a single image in terms of recovering the room geometry (floor, ceiling, and walls) and furniture layout. A major challenge of this task arises from the fact that most indoor scenes are cluttered by furniture and decorations, whose appearances vary drastically across scenes and thus can hardly be modeled (or even hand-labeled) consistently. In this paper we tackle this problem by introducing latent variables to account for clutter, so that the observed image is jointly explained by the room and clutter layout. Model parameters are learned from a training set of images that are labeled only with the layout of the room geometry. Our approach takes indoor clutter into account and infers it without hand-labeling of the clutter in the training set, which is often inaccurate; yet it outperforms the state-of-the-art method of Hedau et al., which requires clutter labels. As a latent-variable-based method, our approach has the interesting feature that its latent variables correspond directly to a concrete visual concept (clutter in the room) and are thus interpretable.

96 citations
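
The abstract above describes learning with clutter as a latent variable from images labeled only with room geometry. As a rough illustration of that idea (not the paper's actual model or optimizer), the sketch below uses a latent structured perceptron: for each training image, the clutter hypothesis that best explains the labeled layout is inferred, and the weights are nudged toward that completion and away from the current best joint prediction. The feature function and the small discrete candidate sets are invented placeholders.

    # Hypothetical latent-structured-perceptron sketch; features and candidate
    # sets are stand-ins, not the paper's model.
    import numpy as np

    def phi(image, layout, clutter):
        # Placeholder joint features of (image, room layout, clutter mask).
        return np.array([np.sum(image * clutter),        # appearance inside clutter
                         np.sum(image * (1 - clutter)),  # appearance on room surfaces
                         float(layout)])                 # crude layout descriptor

    def train(images, layouts, layout_set, clutter_set, epochs=10, lr=0.1):
        w = np.zeros(3)
        for _ in range(epochs):
            for image, gold_layout in zip(images, layouts):
                # Latent completion: best clutter for the labeled room layout.
                c_star = max(clutter_set, key=lambda c: w @ phi(image, gold_layout, c))
                # Current joint prediction over layout AND clutter.
                l_hat, c_hat = max(((l, c) for l in layout_set for c in clutter_set),
                                   key=lambda lc: w @ phi(image, *lc))
                # Move toward the completed gold configuration, away from the prediction.
                w += lr * (phi(image, gold_layout, c_star) - phi(image, l_hat, c_hat))
        return w

    # Toy usage: 2x2 "images", two layout labels, two binary clutter masks.
    imgs = [np.ones((2, 2)), np.zeros((2, 2))]
    clutters = [np.zeros((2, 2)), np.eye(2)]
    print(train(imgs, [0, 1], layout_set=[0, 1], clutter_set=clutters))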

Proceedings Article
23 Aug 1997
TL;DR: An approach that takes a knowledge base in an expressive rule-based first-order language and learns the probabilistic parameters associated with those rules from data cases, and can handle data cases where many of the relevant aspects of the situation are unobserved.
Abstract: First-order logic is the traditional basis for knowledge representation languages. However, its applicability to many real-world tasks is limited by its inability to represent uncertainty. Bayesian belief networks, on the other hand, are inadequate for complex KR tasks due to the limited expressivity of the underlying (propositional) language. The need to incorporate uncertainty into an expressive language has led to a resurgence of work on first-order probabilistic logic. This paper addresses one of the main objections to the incorporation of probabilities into the language: "Where do the numbers come from?" We present an approach that takes a knowledge base in an expressive rule-based first-order language, and learns the probabilistic parameters associated with those rules from data cases. Our approach, which is based on algorithms for learning in traditional Bayesian networks, can handle data cases where many of the relevant aspects of the situation are unobserved. It is also capable of utilizing a rich variety of data cases, including instances with varying causal structure, and even involving a varying number of individuals. These features allow the approach to be used for a wide range of tasks, such as learning genetic propagation models or learning first-order STRIPS planning operators with uncertain effects.

92 citations
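
The abstract says the rule parameters are learned with techniques from Bayesian network learning even when parts of a data case are unobserved; the standard tool for that is expectation-maximization over expected counts. Below is a minimal, self-contained EM sketch for a toy two-node network X -> Y with X sometimes missing. The network, data, and variable names are illustrative assumptions, not the paper's representation.

    # Minimal EM sketch for the missing-data setting the abstract mentions:
    # estimate P(X) and P(Y | X) when X is sometimes unobserved.
    def em(data, iters=50):
        # data: list of (x, y) pairs with x in {0, 1, None} and y in {0, 1}
        p_x = 0.5                     # P(X = 1)
        p_y = {0: 0.5, 1: 0.5}        # P(Y = 1 | X = x)
        for _ in range(iters):
            # E-step: accumulate expected sufficient statistics.
            exp_x1 = 0.0
            counts = {0: [1e-9, 1e-9], 1: [1e-9, 1e-9]}   # [N(X=x), N(X=x, Y=1)]
            for x, y in data:
                if x is None:
                    # Posterior over the unobserved X given the observed Y.
                    lik1 = p_x * (p_y[1] if y else 1 - p_y[1])
                    lik0 = (1 - p_x) * (p_y[0] if y else 1 - p_y[0])
                    weights = {1: lik1 / (lik1 + lik0), 0: lik0 / (lik1 + lik0)}
                else:
                    weights = {x: 1.0}
                for xv, w in weights.items():
                    exp_x1 += w * xv
                    counts[xv][0] += w
                    counts[xv][1] += w * y
            # M-step: re-estimate the parameters from expected counts.
            p_x = exp_x1 / len(data)
            p_y = {xv: counts[xv][1] / counts[xv][0] for xv in (0, 1)}
        return p_x, p_y

    # Toy cases; the last two have X unobserved.
    print(em([(1, 1), (1, 1), (0, 0), (None, 1), (None, 0)]))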


Cited by
Proceedings ArticleDOI
13 Nov 2007
TL;DR: A system specifically designed to track a hand-held camera in a small AR workspace, with tracking and mapping processed in parallel threads on a dual-core computer; it produces detailed maps with thousands of landmarks which can be tracked at frame rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.
Abstract: This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.

4,091 citations
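
The key architectural point in the abstract is the split into a frame-rate tracking thread and a slower mapping thread that folds keyframes into the map with batch optimisation. The sketch below only mimics that producer/consumer structure; the pose tracker, keyframe test, and optimisation step are trivial placeholders.

    # Two-thread tracking/mapping sketch; map, pose and frame types are
    # hypothetical stand-ins for the real system's data structures.
    import queue, threading, time

    keyframes = queue.Queue()
    map_lock = threading.Lock()
    point_map = []                       # shared 3D point map

    def track_pose(frame, current_map):
        return len(current_map)          # placeholder "pose"

    def tracking_thread(frames):
        for frame in frames:
            with map_lock:
                pose = track_pose(frame, point_map)   # runs at frame rate
            if frame % 10 == 0:          # promote an occasional frame to keyframe
                keyframes.put((frame, pose))
        keyframes.put(None)              # signal shutdown

    def mapping_thread():
        while (item := keyframes.get()) is not None:
            frame, pose = item
            time.sleep(0.01)             # stands in for expensive batch optimisation
            with map_lock:
                point_map.append((frame, pose))   # new map entries from the keyframe

    t1 = threading.Thread(target=tracking_thread, args=(range(100),))
    t2 = threading.Thread(target=mapping_thread)
    t1.start(); t2.start(); t1.join(); t2.join()
    print(f"map contains {len(point_map)} keyframe-derived entries")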

Journal ArticleDOI
TL;DR: This paper describes the simultaneous localization and mapping (SLAM) problem and the essential methods for solving it, and summarizes key implementations and demonstrations of the method.
Abstract: This paper describes the simultaneous localization and mapping (SLAM) problem and the essential methods for solving it, and summarizes key implementations and demonstrations of the method. While there are still many practical issues to overcome, especially in more complex outdoor environments, the general SLAM method is now a well-understood and established part of robotics. A second part of the tutorial summarizes more recent work addressing some of the remaining issues in SLAM, including computation, feature representation, and data association.

3,760 citations
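
The estimation-theoretic formulation the tutorial surveys maintains one joint Gaussian over the robot pose and all landmark positions, updated by prediction and observation steps. The toy below keeps every model linear (a plain Kalman filter on 2-D positions) so it stays short and exact; real SLAM systems use nonlinear motion and observation models with an EKF or other estimators.

    # Linear-Gaussian toy SLAM: one joint state over robot and landmark positions.
    import numpy as np

    n_lm = 2                              # two 2-D landmarks
    dim = 2 + 2 * n_lm                    # [robot_xy, lm1_xy, lm2_xy]
    x = np.zeros(dim)                     # joint mean
    P = np.eye(dim) * 1e3                 # large initial uncertainty
    P[:2, :2] = np.eye(2) * 1e-3          # robot starts at a known origin
    Q, R = np.eye(2) * 0.1, np.eye(2) * 0.05   # assumed motion / measurement noise

    def predict(x, P, u):
        x = x.copy(); x[:2] += u          # robot moves by control u; landmarks static
        P = P.copy(); P[:2, :2] += Q      # only the robot block gains motion noise
        return x, P

    def update(x, P, i, z):
        # Observation: relative position of landmark i as seen from the robot.
        H = np.zeros((2, dim))
        H[:, 2 + 2 * i: 4 + 2 * i] = np.eye(2)
        H[:, :2] = -np.eye(2)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(dim) - K @ H) @ P
        return x, P

    x, P = predict(x, P, u=np.array([1.0, 0.0]))
    x, P = update(x, P, i=0, z=np.array([2.0, 1.0]))   # landmark 0 seen at (2, 1) relative
    print(np.round(x, 2))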

01 Jan 2003
TL;DR: This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map.
Abstract: Simultaneous Localization and Mapping (SLAM) is an essential capability for mobile robots exploring unknown environments. The Extended Kalman Filter (EKF) has served as the de facto approach to SLAM for the last fifteen years. However, EKF-based SLAM algorithms suffer from two well-known shortcomings that complicate their application to large, real-world environments: quadratic complexity and sensitivity to failures in data association. I will present an alternative approach to SLAM that specifically addresses these two areas. This approach, called FastSLAM, factors the full SLAM posterior exactly into a product of a robot path posterior and N landmark posteriors conditioned on the robot path estimate. This factored posterior can be approximated efficiently using a particle filter. The time required to incorporate an observation into FastSLAM scales logarithmically with the number of landmarks in the map. In addition to sampling over robot paths, FastSLAM can sample over potential data associations. Sampling over data associations enables FastSLAM to be used in environments with highly ambiguous landmark identities. This dissertation will describe the FastSLAM algorithm given both known and unknown data association. The performance of FastSLAM will be compared against the EKF on simulated and real-world data sets. Results will show that FastSLAM can produce accurate maps in extremely large environments, and in environments with substantial data association ambiguity. Finally, a convergence proof for FastSLAM in linear-Gaussian worlds will be presented.

2,358 citations
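
The factorization described above (a sampled robot path per particle, with each landmark tracked by its own small Gaussian conditioned on that path) can be sketched compactly in one dimension. The tree-sharing trick that gives the logarithmic update time and the data-association sampling are omitted; the motion/measurement models and noise values below are invented.

    # 1-D FastSLAM-style sketch: per-particle pose sample plus per-landmark
    # scalar Kalman filters. Models and noise values are illustrative only.
    import math, random

    MOTION_NOISE, MEAS_NOISE = 0.1, 0.05

    def fastslam_step(particles, control, observations):
        # particles: list of {"pose": float, "w": float, "lm": {id: (mean, var)}}
        for p in particles:
            # 1. Sample this particle's next pose from the motion model.
            p["pose"] += control + random.gauss(0.0, MOTION_NOISE)
            for lm_id, z in observations.items():   # z = landmark - robot + noise
                if lm_id not in p["lm"]:
                    p["lm"][lm_id] = (p["pose"] + z, MEAS_NOISE)   # initialise
                    continue
                # 2. Scalar Kalman update of this landmark, conditioned on the pose.
                mean, var = p["lm"][lm_id]
                innovation = z - (mean - p["pose"])
                s = var + MEAS_NOISE
                gain = var / s
                p["lm"][lm_id] = (mean + gain * innovation, (1 - gain) * var)
                # 3. Weight the particle by the measurement likelihood.
                p["w"] *= math.exp(-0.5 * innovation ** 2 / s) / math.sqrt(s)
        # 4. Resample particles in proportion to their weights.
        total = sum(p["w"] for p in particles)
        chosen = random.choices(particles, weights=[p["w"] / total for p in particles],
                                k=len(particles))
        return [{"pose": p["pose"], "w": 1.0, "lm": dict(p["lm"])} for p in chosen]

    particles = [{"pose": 0.0, "w": 1.0, "lm": {}} for _ in range(100)]
    particles = fastslam_step(particles, control=1.0, observations={0: 4.0})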

Journal ArticleDOI
TL;DR: In this article, the authors proposed an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation, which drastically decreases the uncertainty about the robot's pose in the prediction step of the filter.
Abstract: Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experiments carried out with real mobile robots in large-scale indoor as well as outdoor environments illustrate the advantages of our methods over previous approaches.

2,209 citations
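
One widely used way to "selectively carry out resampling", as the abstract puts it, is to resample only when the effective sample size of the normalised weights degenerates. The N/2 threshold below is a common convention, not necessarily the paper's exact criterion, and the improved observation-driven proposal is not reproduced here.

    # Selective resampling via effective sample size (N_eff); the N/2 threshold
    # is an assumed, commonly used choice.
    import random

    def effective_sample_size(weights):
        total = sum(weights)
        normalised = [w / total for w in weights]
        return 1.0 / sum(w * w for w in normalised)

    def maybe_resample(particles, weights):
        if effective_sample_size(weights) < len(particles) / 2.0:
            new = random.choices(particles, weights=weights, k=len(particles))
            return new, [1.0] * len(particles)       # weights reset after resampling
        return particles, weights                    # skip: avoids particle depletion

    parts, ws = ["p0", "p1", "p2", "p3"], [0.9, 0.05, 0.03, 0.02]
    parts, ws = maybe_resample(parts, ws)            # degenerate weights -> resample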

Book ChapterDOI
31 Jul 1999
TL;DR: This paper describes both parameter estimation and structure learning (the automatic induction of the dependency structure in a model) and shows how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets.
Abstract: A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning -- the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.

1,047 citations
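
The last point in the abstract, exploiting standard database retrieval for learning, comes down to the fact that the sufficient statistics for parameter estimation are grouped counts that the database engine can compute directly. The sketch below shows that for a single invented dependency (a student's grade depending on intelligence) using an in-memory SQLite table; the schema and data are made up, and structure learning is not shown.

    # Sufficient statistics for a PRM-style conditional probability via one
    # GROUP BY query; table and attributes are illustrative only.
    import sqlite3
    from collections import defaultdict

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE student (intelligence TEXT, grade TEXT)")
    db.executemany("INSERT INTO student VALUES (?, ?)",
                   [("high", "A"), ("high", "A"), ("high", "B"),
                    ("low", "B"), ("low", "C"), ("low", "C")])

    # One GROUP BY retrieves every count N(grade, intelligence) at once.
    counts = defaultdict(dict)
    for intel, grade, n in db.execute(
            "SELECT intelligence, grade, COUNT(*) FROM student "
            "GROUP BY intelligence, grade"):
        counts[intel][grade] = n

    # Maximum-likelihood estimate of P(grade | intelligence) from the counts.
    cpd = {intel: {g: n / sum(gs.values()) for g, n in gs.items()}
           for intel, gs in counts.items()}
    print(cpd)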