End-to-End Learning of Driving Models from Large-Scale Video Datasets
Huazhe Xu, Yang Gao, Fisher Yu, Trevor Darrell
pp. 3530–3538
TLDR
In this article, an end-to-end trainable architecture is proposed for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state.
Abstract
Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large-scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held-out sequences across diverse conditions.
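The abstract's core idea can be sketched in code: per-frame visual features (produced by an FCN in the paper) are concatenated with the previous vehicle state and fed through an LSTM, whose hidden state is mapped to a distribution over discrete future-motion bins. This is a minimal numpy sketch, not the authors' implementation; all dimensions (`D_VIS`, `D_STATE`, `D_HID`, `N_ACTIONS`), the random features standing in for FCN outputs, and the weight names are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def lstm_step(x, h, c, W):
    """One LSTM step; W packs the four gate weight matrices row-wise."""
    z = W["Wx"] @ x + W["Wh"] @ h + W["b"]
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    g = np.tanh(g)
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
D_VIS, D_STATE, D_HID, N_ACTIONS = 16, 4, 8, 6  # hypothetical sizes
D_IN = D_VIS + D_STATE
W = {
    "Wx":   rng.normal(scale=0.1, size=(4 * D_HID, D_IN)),
    "Wh":   rng.normal(scale=0.1, size=(4 * D_HID, D_HID)),
    "b":    np.zeros(4 * D_HID),
    "Wout": rng.normal(scale=0.1, size=(N_ACTIONS, D_HID)),
}

h, c = np.zeros(D_HID), np.zeros(D_HID)
for t in range(5):                          # a short clip of frames
    vis_feat = rng.normal(size=D_VIS)       # stand-in for FCN image features
    prev_state = rng.normal(size=D_STATE)   # e.g. previous speed / yaw rate
    h, c = lstm_step(np.concatenate([vis_feat, prev_state]), h, c, W)

# Distribution over discrete egomotion bins for the next step.
p_action = softmax(W["Wout"] @ h)
print(p_action)
```

In the paper the output is a distribution over future egomotion rather than a single deterministic command, which is why the final layer here produces a softmax rather than a scalar steering value.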
Citations
Posted Content
Learned Step Size Quantization
Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha
TL;DR: In this paper, the authors propose to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters.
Journal ArticleDOI
A Survey of End-to-End Driving: Architectures and Training Methods.
TL;DR: Takes a deeper look at the so-called end-to-end approaches for autonomous driving, in which the entire driving pipeline is replaced with a single neural network, and concludes with an architecture that combines the most promising elements of end-to-end autonomous driving systems.
Proceedings ArticleDOI
Bayesian Relational Memory for Semantic Visual Navigation
TL;DR: A new memory architecture, Bayesian Relational Memory (BRM), is introduced to improve the generalization ability for semantic visual navigation agents in unseen environments, where an agent is given a semantic target to navigate towards.
Proceedings ArticleDOI
End-to-end Multi-Modal Multi-Task Vehicle Control for Self-Driving Cars with Visual Perceptions
TL;DR: In this article, a multi-task learning framework was proposed to predict steering angle and speed control simultaneously in an end-to-end manner, where the steering angle alone is not sufficient for vehicle control.
Proceedings ArticleDOI
Accurate and Diverse Sampling of Sequences Based on a "Best of Many" Sample Objective
TL;DR: This work addresses challenges in a Gaussian Latent Variable model for sequence prediction with a "Best of Many" sample objective that leads to more accurate and more diverse predictions that better capture the true variations in real-world sequence data.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet classification performance is presented; it consists of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
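The two-player objective this summary describes reduces, for the discriminator, to binary cross-entropy on real versus generated samples. A minimal numpy sketch, under the assumption that D already outputs probabilities (`d_real`, `d_fake` are hypothetical names); the generator loss shown is the non-saturating variant commonly used in place of minimizing log(1 − D(G(z))) directly:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Binary cross-entropy form of the two-player GAN objective.

    d_real / d_fake: D's probability estimates on real and generated batches.
    """
    # D wants d_real -> 1 and d_fake -> 0.
    loss_d = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss: G wants d_fake -> 1.
    loss_g = -np.mean(np.log(d_fake + eps))
    return loss_d, loss_g

# An undecided discriminator (0.5 everywhere) is the theoretical equilibrium.
loss_d, loss_g = gan_losses(np.full(4, 0.5), np.full(4, 0.5))
print(loss_d, loss_g)
```

At the equilibrium point D(x) = 0.5, the discriminator loss equals 2·ln 2, matching the paper's analysis of the optimum of the value function.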
Journal ArticleDOI
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually since 2010, attracting participation from more than fifty institutions.
Proceedings ArticleDOI
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
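The "arbitrary size in, correspondingly-sized output" property comes from using only convolutions (no fixed-size fully-connected layers): the output spatial extent is a function of the input extent. A minimal numpy sketch with a single hand-written 'valid' convolution and a fixed smoothing kernel standing in for learned weights (a real FCN also uses upsampling layers to restore full resolution):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution: output is (H - kH + 1, W - kW + 1)."""
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

k = np.ones((3, 3)) / 9.0  # fixed kernel standing in for learned weights

# The same operator accepts any input size; the output size tracks it.
small = conv2d_valid(np.zeros((16, 16)), k)
large = conv2d_valid(np.zeros((64, 64)), k)
print(small.shape, large.shape)  # (14, 14) (62, 62)
```

A fully-connected layer would instead fix the input to one resolution, which is exactly the restriction the paper removes.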
Proceedings Article
Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
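The differentiability condition this summary mentions is usually met via the reparameterization trick: sampling z = μ + σ·ε with ε ~ N(0, I) keeps the sample differentiable with respect to μ and σ, and the KL term of the objective has a closed form for a diagonal Gaussian. A minimal numpy sketch of those two pieces (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, so the sample is differentiable w.r.t. mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.zeros(4), np.zeros(4)
z = reparameterize(mu, log_var)
print(kl_to_standard_normal(mu, log_var))  # 0.0: q already equals the prior
```

The KL term is zero exactly when the approximate posterior matches the prior, and grows as μ or σ drift away from it, which is the regularizing half of the evidence lower bound.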