Open Access | Posted Content
LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion
Meet Shah, Zhiling Huang, Ankit Laddha, Matthew Langford, Blake Barber, Sidney Zhang, Carlos Vallespi-Gonzalez, Raquel Urtasun
TL;DR: LiRaNet is presented, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps and achieves state-of-the-art performance on multiple large-scale datasets.
Abstract:
In this paper, we present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps. Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous radial velocity measurements. However, there are factors that make the fusion of lidar and radar information challenging, such as the relatively low angular resolution of radar measurements, their sparsity and the lack of exact time synchronization with lidar. To overcome these challenges, we propose an efficient spatio-temporal radar feature extraction scheme which achieves state-of-the-art performance on multiple large-scale datasets. Further, by incorporating radar information, we show a 52% reduction in prediction error for objects with high acceleration and a 16% reduction in prediction error for objects at longer range.
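The "instantaneous radial velocity" the abstract refers to is the projection of an object's full velocity onto the sensor's line of sight — radar measures only this component, which is one reason fusing it with lidar requires care. A minimal sketch of that projection (the point coordinates and velocities below are illustrative, not from the paper):

```python
import math

def radial_velocity(px, py, vx, vy):
    """Project a point's 2D velocity (vx, vy) onto the line of sight
    from a sensor at the origin to the point (px, py)."""
    rng = math.hypot(px, py)  # range from sensor to point
    # Dot product of the velocity with the unit line-of-sight vector.
    return (px * vx + py * vy) / rng

# A vehicle at (10 m, 0 m) moving at (20 m/s, 5 m/s): only the
# x-component lies along the line of sight, so v_r = 20 m/s.
print(radial_velocity(10.0, 0.0, 20.0, 5.0))
```

Note that the tangential component is invisible to the radar, which is why a single radial measurement underdetermines the full velocity that a prediction model ultimately needs.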
Citations
Proceedings ArticleDOI
FIERY: Future Instance Prediction in Bird's-Eye View From Surround Monocular Cameras
Anthony Hu, Zak Murez, Nikhil Mohan, Sofia Dudas, Jeffrey Hawke, Vijay Badrinarayanan, Roberto Cipolla, Alex Kendall
TL;DR: FIERY is a probabilistic future prediction model in bird's-eye view from monocular cameras that predicts future instance segmentation and motion of dynamic agents that can be transformed into non-parametric future trajectories.
Proceedings ArticleDOI
Robust Multimodal Vehicle Detection in Foggy Weather Using Complementary Lidar and Radar Signals
TL;DR: In this article, a two-stage deep fusion detector is proposed, which first generates proposals from two sensors and then fuses region-wise features between multimodal sensor streams to improve final detection results.
Journal ArticleDOI
Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges
TL;DR: A big picture of the deep radar perception stack is provided, including signal processing, datasets, labelling, data augmentation, and downstream tasks such as depth and velocity estimation, object detection, and sensor fusion.
Proceedings ArticleDOI
MVFuseNet: Improving End-to-End Object Detection and Motion Forecasting through Multi-View Fusion of LiDAR Data
TL;DR: In this paper, a multi-view approach for joint object detection and motion forecasting from a temporal sequence of LiDAR data is proposed, which effectively utilizes both range view (RV) and bird's eye view (BEV) for spatio-temporal feature learning as part of a temporal fusion network.
Journal ArticleDOI
Millimeter Wave FMCW RADARs for Perception, Recognition and Localization in Automotive Applications: A Survey
TL;DR: Surveys algorithms and applications adapted or developed for millimeter wave FMCW radar sensors in automotive settings, accounting for their noisy and lower-density outputs compared to other sensing technologies.
References
Proceedings ArticleDOI
Focal Loss for Dense Object Detection
TL;DR: This paper proposes to address the extreme foreground-background class imbalance encountered during training of dense detectors by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples, and develops a novel Focal Loss, which focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.
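The reshaping described here is, in the paper's notation, FL(p_t) = -(1 - p_t)^γ log(p_t), which reduces to standard cross entropy at γ = 0. A minimal sketch for a single binary prediction (simplified; the paper also uses an α-balancing weight not shown here):

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: down-weights well-classified examples.

    p: predicted probability of the positive class; y: label in {0, 1}.
    With gamma = 0 this is exactly the cross-entropy loss.
    """
    p_t = p if y == 1 else 1.0 - p  # probability of the true class
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# A well-classified example (p_t = 0.9) is down-weighted ~100x
# relative to cross entropy at gamma = 2, while a hard example
# (p_t = 0.1) keeps most of its loss.
easy = focal_loss(0.9, 1)  # small
hard = focal_loss(0.1, 1)  # much larger
```

The (1 - p_t)^γ factor is what "prevents the vast number of easy negatives from overwhelming the detector": easy examples contribute nearly zero gradient, so training focuses on the sparse set of hard ones.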
Posted Content
nuScenes: A multimodal dataset for autonomous driving
Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, Oscar Beijbom
TL;DR: nuScenes as mentioned in this paper is the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view.
Proceedings ArticleDOI
Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks
TL;DR: A recurrent sequence-to-sequence model observes motion histories and predicts future behavior, using a novel pooling mechanism to aggregate information across people, and outperforms prior work in terms of accuracy, variety, collision avoidance, and computational complexity.
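The pooling mechanism summarized here compresses per-person features into one social context vector whose size does not depend on the number of people in the scene. A hedged sketch of that core idea (element-wise max-pooling; the paper's actual module embeds relative positions with an MLP first, which this simplified version omits):

```python
def social_pool(features):
    """Element-wise max-pool a list of per-agent feature vectors into
    one fixed-size context vector, independent of the agent count."""
    if not features:
        raise ValueError("need at least one agent")
    pooled = list(features[0])
    for feat in features[1:]:
        pooled = [max(a, b) for a, b in zip(pooled, feat)]
    return pooled

# Three agents with 4-dim features -> one 4-dim context vector.
context = social_pool([[0.1, 0.5, -0.2, 0.0],
                       [0.4, 0.2, 0.3, -1.0],
                       [0.0, 0.9, 0.1, 0.2]])
# context == [0.4, 0.9, 0.3, 0.2]
```

Because max is permutation-invariant, the pooled vector is the same regardless of agent ordering, which is what makes the mechanism scale to crowds of varying size.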