Proceedings ArticleDOI

Visual triage: A bag-of-words experience selector for long-term visual route following

TLDR
This paper introduces an algorithm that prioritizes the experiences most relevant to live operation, limiting the number of experiences required for localization, and demonstrates safe, vision-in-the-loop route following over a 31-hour period, despite appearance changes as different as night and day.
Abstract
Our work builds upon Visual Teach & Repeat 2 (VT&R2): a vision-in-the-loop autonomous navigation system that enables the rapid construction of route networks, safely built through operator-controlled driving. Added routes can be followed autonomously using visual localization. To enable long-term operation that is robust to appearance change, its Multi-Experience Localization (MEL) leverages many previously driven experiences when localizing to the manually taught network. While this multi-experience method is effective across appearance change, the computation becomes intractable as the number of experiences grows into the tens and hundreds. This paper introduces an algorithm that prioritizes the experiences most relevant to live operation, limiting the number of experiences required for localization. The proposed algorithm uses a visual Bag-of-Words description of the live view to select relevant experiences based on what the vehicle is seeing right now, without having to factor in all possible environmental influences on scene appearance. The system runs in the loop, in real time, does not require bootstrapping, can be applied to any point-feature MEL paradigm, and eliminates the need for visual training by using an online, local visual vocabulary. By picking a subset of experiences visually similar to the live view, we demonstrate safe, vision-in-the-loop route following over a 31-hour period, despite appearance changes as different as night and day.
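As a rough illustration of the selection idea described in the abstract, the sketch below ranks stored experiences by the similarity of their Bag-of-Words histograms to the live view and keeps only the top few for localization. It assumes precomputed histograms over a shared vocabulary and uses cosine similarity as the comparison measure; the function names and the scoring choice are illustrative, not taken from the paper.

```python
# Illustrative sketch of Bag-of-Words experience selection (not the authors'
# implementation). Each stored experience is assumed to have a precomputed
# BoW histogram over a shared local visual vocabulary.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two BoW histograms."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else 0.0

def select_experiences(live_bow, experience_bows, k=5):
    """Rank stored experiences by visual similarity to the live view
    and return the indices of the top-k most similar ones."""
    scores = [cosine_similarity(live_bow, e) for e in experience_bows]
    return np.argsort(scores)[::-1][:k]

# Usage: localize against only the k most relevant experiences.
rng = np.random.default_rng(0)
experiences = [rng.random(256) for _ in range(100)]  # 100 stored experiences
live = rng.random(256)                               # live-view BoW histogram
print(select_experiences(live, experiences, k=5))
```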


Citations
Proceedings ArticleDOI

Navigation without localisation: reliable teach and repeat based on the convergence theorem

TL;DR: A position error model is established for a robot that traverses a taught path by correcting only its heading, and a mathematical proof is outlined showing that this position error does not diverge over time.
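To give the flavour of such a convergence argument, here is a minimal sketch under an assumed first-order lateral-error model; the symbols e_k, alpha, d_k, and D are illustrative and not taken from the cited paper.

```latex
% Illustrative first-order lateral-error model (not the paper's derivation).
% e_k: lateral deviation from the taught path at step k;
% alpha in (0,1]: effective correction gain of the heading controller;
% d_k: per-step disturbance, bounded by |d_k| <= D.
\[
  e_{k+1} = (1-\alpha)\,e_k + d_k
  \quad\Longrightarrow\quad
  |e_k| \le (1-\alpha)^k\,|e_0| + \frac{D}{\alpha},
\]
% so the error contracts toward a bounded neighbourhood of the path
% instead of diverging, which is the flavour of the convergence result.
```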
Journal ArticleDOI

kRadar++: Coarse-to-Fine FMCW Scanning Radar Localisation.

TL;DR: It is shown that the recently available radar place recognition (RPR) and scan-matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VT&R) systems, which have been demonstrated robustly over the last decade.
Book ChapterDOI

I Can See for Miles and Miles: An Extended Field Test of Visual Teach and Repeat 2.0

TL;DR: Visual Teach and Repeat 2.0, a vision-based path-following system capable of safe, long-term navigation over large-scale networks of connected paths in unstructured outdoor environments, is described and validated experimentally through an eleven-day field test in an untended gravel pit in Sudbury, Canada.
Journal ArticleDOI

Selective memory: Recalling relevant experience for long‐term visual localization

TL;DR: It is shown that the combination of the novel methods presented in this paper enables full use of incredibly rich multi-experience maps, opening the door to robust long-term visual localization.
Posted Content

Navigation without localisation: reliable teach and repeat based on the convergence theorem

TL;DR: In this paper, the authors present a simple monocular teach-and-repeat navigation method, based on insights from their position-error model, that is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths.
References
Proceedings Article

Visual categorization with bags of keypoints

TL;DR: This bag-of-keypoints method is based on vector quantization of affine-invariant descriptors of image patches and is shown to be simple, computationally efficient, and intrinsically invariant.
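A minimal sketch of the bag-of-keypoints idea, assuming generic descriptor vectors in place of the paper's affine-invariant patch descriptors; the naive k-means vocabulary builder below is illustrative, not the authors' implementation.

```python
# Illustrative bag-of-keypoints pipeline: quantize local descriptors against
# a learned vocabulary and describe each image as a histogram of visual-word
# occurrences. Real systems would use e.g. SIFT descriptors.
import numpy as np

def build_vocabulary(descriptors, k=64, iters=20, seed=0):
    """Naive k-means over descriptor vectors; returns k cluster centres."""
    rng = np.random.default_rng(seed)
    centres = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest centre.
        d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):  # Recompute centres (keep old centre if empty).
            members = descriptors[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres

def bow_histogram(descriptors, centres):
    """Histogram of nearest-visual-word counts for one image."""
    d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(centres))

# Usage with random stand-in descriptors.
rng = np.random.default_rng(1)
vocab = build_vocabulary(rng.random((1000, 32)), k=64)
print(bow_histogram(rng.random((200, 32)), vocab)[:10])
```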
Proceedings ArticleDOI

Parallel Tracking and Mapping for Small AR Workspaces

TL;DR: A system specifically designed to track a hand-held camera in a small AR workspace is described; tracking and mapping are processed in parallel threads on a dual-core computer, producing detailed maps with thousands of landmarks that can be tracked at frame rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.
Journal ArticleDOI

FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance

TL;DR: A probabilistic approach to recognizing places based on their appearance is presented; it can determine that a new observation comes from a previously unseen place and so augment its map, making it particularly suitable for online loop-closure detection in mobile robotics.
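A much-simplified sketch of appearance-based place scoring, assuming conditionally independent visual words; FAB-MAP itself models word co-occurrence with a Chow-Liu tree, which is omitted here, and all names below are hypothetical.

```python
# Naive-Bayes sketch of appearance-only place likelihood (a simplification,
# not FAB-MAP's actual model).
import numpy as np

def log_likelihood(observation, place_word_probs, eps=1e-6):
    """log p(z | place) for a binary word-observation vector z, assuming
    conditionally independent words given the place."""
    p = np.clip(place_word_probs, eps, 1 - eps)
    return float(np.sum(observation * np.log(p)
                        + (1 - observation) * np.log(1 - p)))

def most_likely_place(observation, places):
    """Return the index of the place whose appearance model best explains z."""
    return int(np.argmax([log_likelihood(observation, p) for p in places]))

# Usage: score a simulated observation against ten place models.
rng = np.random.default_rng(2)
places = [rng.random(128) for _ in range(10)]    # per-place word probabilities
z = (rng.random(128) < places[3]).astype(float)  # observation from place 3
print(most_likely_place(z, places))
```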
Book ChapterDOI

Locally Optimized RANSAC

TL;DR: Locally optimized RANSAC makes no new assumptions about the data; on the contrary, it makes the standard RANSAC assumption valid by applying local optimization to the solution estimated from the random sample.
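A minimal sketch of the local-optimization step on a toy 2D line-fitting problem: whenever a new best minimal-sample model is found, it is refit on its full inlier set. This is illustrative, not the paper's implementation.

```python
# Sketch of locally optimized RANSAC for 2D line fitting.
import numpy as np

def fit_line(points):
    """Least-squares line y = m*x + b through the given points."""
    A = np.column_stack([points[:, 0], np.ones(len(points))])
    m, b = np.linalg.lstsq(A, points[:, 1], rcond=None)[0]
    return m, b

def inliers(points, model, tol=0.1):
    """Boolean mask of points within tol of the line."""
    m, b = model
    return np.abs(points[:, 1] - (m * points[:, 0] + b)) < tol

def lo_ransac(points, iters=100, tol=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_count = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), 2, replace=False)]
        model = fit_line(sample)
        mask = inliers(points, model, tol)
        if mask.sum() > best_count:
            # Local optimization: refit on all inliers of the sample model.
            model = fit_line(points[mask])
            mask = inliers(points, model, tol)
            best_model, best_count = model, mask.sum()
    return best_model, best_count

# Usage: noisy points on y = 2x + 1 with 20 gross outliers.
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 100)
pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.02, 100)])
pts[:20, 1] += rng.uniform(-2, 2, 20)  # corrupt 20 points
print(lo_ransac(pts))
```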
Journal ArticleDOI

Appearance-only SLAM at large scale with FAB-MAP 2.0

TL;DR: A new formulation of appearance-only SLAM suitable for very-large-scale place recognition is presented; it incorporates robustness against perceptual aliasing and substantially outperforms the standard term-frequency inverse-document-frequency (tf-idf) ranking measure.
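For reference, a minimal sketch of the standard tf-idf ranking baseline that this formulation is reported to outperform; the histogram shapes and names are illustrative.

```python
# Illustrative tf-idf weighting for BoW image retrieval (a sketch, not the
# paper's code).
import numpy as np

def tfidf_matrix(histograms):
    """Rows are per-image BoW histograms; returns tf-idf weighted rows."""
    H = np.asarray(histograms, dtype=float)
    tf = H / np.maximum(H.sum(axis=1, keepdims=True), 1)  # term frequency
    df = np.maximum((H > 0).sum(axis=0), 1)               # document frequency
    idf = np.log(len(H) / df)                             # inverse doc. freq.
    return tf * idf

def rank_by_similarity(query_hist, histograms):
    """Rank database images by cosine similarity of tf-idf vectors."""
    W = tfidf_matrix(np.vstack([query_hist, histograms]))
    q, db = W[0], W[1:]
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(sims)[::-1]

# Usage: querying the database with one of its own images.
rng = np.random.default_rng(4)
db = rng.integers(0, 5, size=(50, 100))   # 50 images, 100-word vocabulary
print(rank_by_similarity(db[7], db)[:3])  # image 7 should rank itself first
```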