Journal Article

Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning

17 May 2021 - Vol. 8, article 661199
TL;DR: In this paper, a heterogeneous-measurement-based framework is proposed for long-term place recognition, which retrieves query radar scans from existing lidar (Light Detection and Ranging) maps.
Abstract: Place recognition is critical for both offline mapping and online localization. However, current single-sensor-based place recognition remains challenging in adverse conditions. In this paper, a heterogeneous-measurement-based framework is proposed for long-term place recognition, which retrieves query radar scans from existing lidar (Light Detection and Ranging) maps. To achieve this, a deep neural network is built with joint training in the learning stage; then, in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition. To validate the effectiveness of the proposed method, we conducted tests and generalization experiments on multi-session public datasets and compared the proposed method with other competitive methods. The experimental results indicate that our model is able to perform multiple kinds of place recognition: lidar-to-lidar (L2L), radar-to-radar (R2R), and radar-to-lidar (R2L), while the learned model is trained only once. We also release the source code publicly: https://github.com/ZJUYH/radar-to-lidar-place-recognition.
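At test time, the heterogeneous retrieval described above reduces to nearest-neighbour search in the shared embedding space: a radar query embedding is compared against the embeddings of the lidar map. A minimal sketch, assuming the joint network has already produced fixed-length embedding vectors (all names and the toy data are illustrative, not taken from the released code):

```python
import numpy as np

def retrieve_place(query_embedding, map_embeddings):
    """Return (index, similarity) of the closest map embedding by cosine similarity."""
    q = query_embedding / np.linalg.norm(query_embedding)
    m = map_embeddings / np.linalg.norm(map_embeddings, axis=1, keepdims=True)
    similarities = m @ q
    best = int(np.argmax(similarities))
    return best, float(similarities[best])

# Toy example: four lidar map embeddings, one radar query near map entry 2.
rng = np.random.default_rng(0)
lidar_map = rng.normal(size=(4, 8))
radar_query = lidar_map[2] + 0.01 * rng.normal(size=8)
idx, sim = retrieve_place(radar_query, lidar_map)
```

Because radar and lidar share one embedding space after joint training, the same search serves L2L, R2R, and R2L retrieval.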


Citations
Journal Article
TL;DR: This work demonstrates the robustness and versatility of the method for various sensor configurations in cross-modality localization, achieving localization errors on-par with a prior supervised approach while requiring no pixel-wise aligned ground truth for supervision at training.
Abstract: Traditional approaches to outdoor vehicle localization assume a reliable, prior map is available, typically built using the same sensor suite as the on-board sensors used during localization. This ...

15 citations

Journal Article
19 Jan 2023-Energies
TL;DR: In this paper, the authors provide a systematic analysis of EV sustainability, digital technologies that enhance sustainability, their potential cybersecurity threats, and corresponding defenses, as well as the extent to which specific security controls can mitigate these threats, thereby allowing for a smooth transition toward secure and sustainable future smart cities.
Abstract: With the global energy crisis, increasing demand, and a national-level emphasis on electric vehicles (EVs), numerous innovations are being witnessed throughout the EV industry. EVs are equipped with sensors that maintain a sustainable environment for the betterment of society and enhance human sustainability. However, at the same time, as is the case for any new digital technology, they are susceptible to threats to security and privacy. Recent incidents demonstrate that these sensors have been misused for car and energy theft, financial fraud, and data compromise, and have caused severe health and safety problems, amongst other things. To the best of our knowledge, this paper provides the first systematic analysis of EV sustainability, digital technologies that enhance sustainability, their potential cybersecurity threats, and corresponding defenses. First, three robust taxonomies have been presented to identify the dangers that can affect long-term sustainability domains, including (1) life and well-being, (2) a safe environment, and (3) innovation and development. Second, this research measures the impact of cybersecurity threats on EVs and, correspondingly, on their sustainability goals. Third, it details the extent to which specific security controls can mitigate these threats, thereby allowing for a smooth transition toward secure and sustainable future smart cities.

6 citations

Journal Article
TL;DR: In this article, a cross-modality method for radar localisation on prior lidar maps is presented, which consists of two parts: first, bird's-eye-view radar images are transferred to fake lidar images by training a generative adversarial network offline.

3 citations

Journal Article
TL;DR: In this paper, the authors present a systematic literature review on long-term localization and mapping following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines; the review analyzes 142 works covering appearance invariance, modeling of environment dynamics, map size management, multi-session operation, and computational topics such as parallel computing and timing efficiency.
Abstract: Long-term operation of robots creates new challenges for Simultaneous Localization and Mapping (SLAM) algorithms. Long-term SLAM algorithms should adapt to recent changes while preserving older states when dealing with appearance variations (lighting, daytime, weather, or seasonal) or environment reconfiguration. When robots also operate over long periods and trajectory lengths, the map should readjust to environment changes but not grow indefinitely. The map size should depend only on updating the map with new information of interest, not on the operation time or trajectory length. Although several studies in the literature review SLAM algorithms, none focuses on the challenges associated with lifelong SLAM. Thus, this paper presents a systematic literature review on long-term localization and mapping following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. The review analyzes 142 works covering appearance invariance, modeling of environment dynamics, map size management, multi-session operation, and computational topics such as parallel computing and timing efficiency. The analysis also focuses on the experimental data and evaluation metrics commonly used to assess long-term autonomy. Moreover, an overview of the bibliographic data of the 142 records provides an analysis of keyword and authorship co-occurrence to identify, respectively, the terms most used in long-term SLAM and the research networks between authors. Future studies can update this paper thanks to the systematic methodology presented in the review and the public GitHub repository with all the documentation and scripts used during the review process.

2 citations

Journal Article
TL;DR: In this paper, the authors present an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments: outdoors, from urban to woodland, and indoors in warehouses and mines.
Abstract: This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments (outdoors, from urban to woodland, and indoors in warehouses and mines) without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on our previous state of the art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
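The robust loss mentioned above serves to down-weight outlier correspondences during registration. As a simplified illustration only (a Huber-style IRLS weight, not CFEAR's actual cost function; the threshold value is arbitrary):

```python
import numpy as np

def huber_weight(residual, delta=0.5):
    """IRLS weight for the Huber loss: quadratic near zero, linear beyond delta."""
    r = abs(residual)
    return 1.0 if r <= delta else delta / r

# One reweighting step: the 5.0 m residual (an outlier) is strongly down-weighted,
# so it barely influences the next least-squares solve.
residuals = np.array([0.1, 0.3, 5.0])
weights = np.array([huber_weight(r) for r in residuals])
```

Inlier residuals keep unit weight, while an outlier's influence decays as delta/r, which is what keeps a single bad correspondence from dragging the pose estimate.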

2 citations

References
Proceedings Article
06 Nov 2011
TL;DR: This paper proposes a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise, and demonstrates through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations.
Abstract: Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.
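The speed advantage comes from ORB's binary descriptors, which are compared with the Hamming distance (an XOR plus a popcount) rather than SIFT's floating-point distances. A pure-NumPy sketch of brute-force Hamming matching (illustrative only; in practice one would use OpenCV's `cv2.ORB_create` and `cv2.BFMatcher` with `NORM_HAMMING`):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(a ^ b).sum())

def match(desc1, desc2):
    """Brute-force nearest-neighbour matching under Hamming distance."""
    return [(i, min(range(len(desc2)), key=lambda j: hamming(d, desc2[j])))
            for i, d in enumerate(desc1)]

# Five random 256-bit descriptors; two queries copied from them, one with a bit flipped.
rng = np.random.default_rng(1)
d2 = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
d1 = d2[[3, 0]].copy()
d1[0, 0] ^= 1  # simulate one bit of noise
matches = match(d1, d2)  # → [(0, 3), (1, 0)]
```

Each comparison is a handful of word-sized XOR/popcount operations, which is why binary descriptors match orders of magnitude faster than 128-dimensional float vectors.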

8,702 citations

Proceedings Article
13 Jun 2010
TL;DR: This work proposes a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation, and shows how to jointly optimize the dimension reduction and the indexing algorithm.
Abstract: We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.
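The aggregation idea (later known as VLAD) sums, for each visual-word centroid, the residuals of the local descriptors assigned to it, then L2-normalises the concatenation. A toy sketch with a two-centroid codebook (illustrative, not the authors' implementation; the dimension reduction and indexing steps are omitted):

```python
import numpy as np

def vlad(descriptors, centroids):
    """Aggregate local descriptors into a VLAD vector: per-centroid residual sums."""
    k, d = centroids.shape
    # Assign each descriptor to its nearest centroid.
    assignments = np.argmin(
        np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2), axis=1)
    v = np.zeros((k, d))
    for c in range(k):
        members = descriptors[assignments == c]
        if len(members):
            v[c] = (members - centroids[c]).sum(axis=0)
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Two centroids, three local descriptors; the result is a 4-D unit vector.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
local_descs = np.array([[1.0, 0.0], [0.0, 1.0], [9.0, 10.0]])
v = vlad(local_descs, centroids)
```

The output dimension is fixed at k·d regardless of how many local descriptors the image has, which is what makes the representation compressible to a few bytes per image.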

2,782 citations

Journal Article
TL;DR: A probabilistic approach to the problem of recognizing places based on their appearance that can determine that a new observation comes from a previously unseen place, and so augment its map, and is particularly suitable for online loop closure detection in mobile robotics.
Abstract: This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization, but can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment—identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm complexity is linear in the number of places in the map, and is particularly suitable for online loop closure detection in mobile robotics.

1,582 citations

Journal Article
TL;DR: By frequently traversing the same route over the period of a year, this dataset enables the investigation of long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments.
Abstract: We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on av...

1,285 citations

Journal Article
TL;DR: A survey of the visual place recognition research landscape is presented, introducing the concepts behind place recognition, how a “place” is defined in a robotics context, and the major components of a place recognition system.
Abstract: Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines—particularly recognition in computer vision and animal navigation in neuroscience—have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition—the role of place recognition in the animal kingdom, how a “place” is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description.

933 citations