Journal ArticleDOI

Experience-based navigation for long-term localisation

Winston Churchill, +1 more
- 01 Dec 2013 - 
- Vol. 32, Iss: 14, pp 1645-1661
TLDR
Describes an approach that incrementally learns a model whose complexity varies naturally with the variation of scene appearance, using vision as the primary sensor throughout.
Abstract
This paper is about long-term navigation in environments whose appearance changes over time, suddenly or gradually. We describe, implement and validate an approach which allows us to incrementally learn a model whose complexity varies naturally in accordance with variation of scene appearance. It allows us to leverage the state of the art in pose estimation to build, over many runs, a world model of sufficient richness to allow simple localisation despite a large variation in conditions. As our robot repeatedly traverses its workspace, it accumulates distinct visual experiences that, in concert, implicitly represent the scene variation: each experience captures a visual mode. When operating in a previously visited area, we continually try to localise in these previous experiences while simultaneously running an independent vision-based pose estimation system. Failure to localise in a sufficient number of prior experiences indicates an insufficient model of the workspace and instigates the laying down of the live image sequence as a new distinct experience. In this way, over time we can capture the typical time-varying appearance of an environment, and the number of experiences required tends to a constant. Although we focus on vision as a primary sensor throughout, the ideas we present here are equally applicable to other sensor modalities. We demonstrate our approach working on a road vehicle operating over a 3-month period at different times of day, in different weather and lighting conditions. We present extensive results analysing different aspects of the system and approach, in total processing over 136,000 frames captured from 37 km of driving.
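The experience-accumulation loop the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names, the frame-matching test, and the success threshold are all assumptions.

```python
MIN_SUCCESSES = 1  # assumed threshold on successful prior localisations


class Experience:
    """One stored visual mode: a sequence of frames with associated poses."""

    def __init__(self):
        self.frames = []

    def add(self, frame, pose):
        self.frames.append((frame, pose))

    def matches(self, frame):
        # Placeholder appearance test; a real system would match visual features.
        return any(f == frame for f, _ in self.frames)


class VisualOdometry:
    """Stand-in for the independent live pose-estimation system."""

    def __init__(self):
        self._pose = 0

    def update(self, frame):
        self._pose += 1  # pretend each frame advances the pose

    def pose(self):
        return self._pose


def navigate(frames, experiences, vo):
    """Localise each live frame in the stored experiences; when too few
    localisations succeed, lay down the live sequence as a new experience."""
    recording = None
    for frame in frames:
        vo.update(frame)  # independent vision-based pose estimate keeps running
        successes = [e for e in experiences if e.matches(frame)]
        if len(successes) >= MIN_SUCCESSES:
            if recording is not None:  # model is sufficient again: close recording
                experiences.append(recording)
                recording = None
        else:
            if recording is None:  # insufficient model: start a new experience
                recording = Experience()
            recording.add(frame, vo.pose())
    if recording is not None:
        experiences.append(recording)
    return experiences
```

Run over a route once and a new experience is stored; run over the same route again under the same appearance and nothing is added, which reflects the paper's claim that the number of experiences tends to a constant once the typical appearance variation is captured.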


Citations
Journal ArticleDOI

Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age

TL;DR: Simultaneous localization and mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. Presents what is now the de facto standard formulation for SLAM, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers.
Journal ArticleDOI

Visual Place Recognition: A Survey

TL;DR: A survey of the visual place recognition research landscape is presented, introducing the concepts behind place recognition, how a “place” is defined in a robotics context, and the major components of a place recognition system.
Journal Article

SeqSLAM: visual route-based navigation for sunny summer days and stormy winter nights

TL;DR: A new approach to visual navigation under changing conditions, dubbed SeqSLAM, which removes the need for global matching performance from the vision front-end; instead it need only pick the best match within any short sequence of images.
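The core of sequence-based matching can be illustrated with a short sketch: rather than trusting any single image match, sum the image-difference scores along candidate alignments of the last few query frames and pick the lowest-cost one. This is a simplified illustration of the idea only (it assumes a precomputed difference matrix and a single constant velocity, with none of SeqSLAM's local contrast enhancement or velocity search).

```python
import numpy as np


def seq_match(diff, seq_len):
    """Pick the database position whose straight-line alignment with the
    last seq_len query frames has the lowest summed difference score.

    diff: (n_db, n_query) matrix of image-difference scores.
    Returns (best database start index, best summed score).
    """
    n_db, n_q = diff.shape
    assert n_q >= seq_len and n_db >= seq_len
    best_idx, best_score = None, np.inf
    for start in range(n_db - seq_len + 1):
        # Sum along the diagonal: database frame start+k vs query frame
        # (n_q - seq_len + k), i.e. a constant-velocity alignment.
        score = sum(diff[start + k, n_q - seq_len + k] for k in range(seq_len))
        if score < best_score:
            best_idx, best_score = start, score
    return best_idx, best_score
```

Even if individual frames match poorly, the summed score over a coherent sequence separates the true route segment from spurious single-image matches.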
Journal ArticleDOI

University of Michigan North Campus long-term vision and lidar dataset

TL;DR: This paper documents a large scale, long-term autonomy dataset for robotics research collected on the University of Michigan’s North Campus that consists of omnidirectional imagery, 3D lidar, planar lidars, GPS, and proprioceptive sensors for odometry collected using a Segway robot.
References
Book ChapterDOI

SURF: speeded up robust features

TL;DR: A novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Journal ArticleDOI

Speeded-Up Robust Features (SURF)

TL;DR: A novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Journal ArticleDOI

Simultaneous localization and mapping (SLAM): part II

TL;DR: This paper discusses the recursive Bayesian formulation of the simultaneous localization and mapping (SLAM) problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained.
Journal ArticleDOI

Faster and Better: A Machine Learning Approach to Corner Detection

TL;DR: A new heuristic for feature detection is presented and, using machine learning, a feature detector is derived from this which can fully process live PAL video using less than 5 percent of the available processing time.
Journal ArticleDOI

BRIEF: Computing a Local Binary Descriptor Very Fast

TL;DR: This paper shows that one can directly compute a binary descriptor, called BRIEF, on the basis of simple intensity-difference tests, and that it yields comparable recognition accuracy while running in a small fraction of the time required by competing descriptors.
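The intensity-difference idea behind BRIEF is compact enough to sketch: each bit of the descriptor records which of two patch locations is darker, and descriptors are compared with the fast Hamming distance. The sampling pattern and patch access below are illustrative assumptions (a real implementation smooths the patch and uses a fixed randomised pattern of point pairs).

```python
def brief_descriptor(patch, pairs):
    """Binary descriptor from simple intensity-difference tests: bit i is 1
    when the patch is darker at pairs[i][0] than at pairs[i][1].

    patch: 2D sequence of intensities indexed as patch[y][x].
    pairs: list of ((x1, y1), (x2, y2)) test locations.
    Returns the descriptor as an integer bit string.
    """
    bits = 0
    for i, ((x1, y1), (x2, y2)) in enumerate(pairs):
        if patch[y1][x1] < patch[y2][x2]:
            bits |= 1 << i
    return bits


def hamming(a, b):
    """Descriptors are matched by Hamming distance, which is just a popcount
    of the XOR and is why binary descriptors compare so quickly."""
    return bin(a ^ b).count("1")
```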