Open Access Proceedings Article (DOI)

End-to-End Learning of Driving Models from Large-Scale Video Datasets

TLDR
In this article, an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state is proposed.
Abstract
Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large-scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held-out sequences across diverse conditions.
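As a rough illustration of the architecture described in the abstract, the sketch below pairs a fully convolutional frame encoder with an LSTM that fuses visual features and the previous vehicle state, outputting a distribution over discrete future egomotion actions. This is a minimal PyTorch sketch under assumed layer sizes, a hypothetical 4-way action space, and invented names (FCNLSTMPolicy and its parameters are not from the paper); the privileged segmentation side task is omitted.

```python
# Minimal sketch of an FCN-LSTM-style egomotion predictor (assumption: layer
# sizes, the 4-way action space, and all names are illustrative, not the
# authors' exact configuration).
import torch
import torch.nn as nn

class FCNLSTMPolicy(nn.Module):
    def __init__(self, num_actions=4, state_dim=2, hidden_dim=64):
        super().__init__()
        # Fully convolutional visual encoder applied to each monocular frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # LSTM fuses per-frame visual features with the previous vehicle state.
        self.lstm = nn.LSTM(64 + state_dim, hidden_dim, batch_first=True)
        # Distribution over discrete future egomotion actions.
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, frames, prev_state):
        # frames: (B, T, 3, H, W); prev_state: (B, T, state_dim)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(torch.cat([feats, prev_state], dim=-1))
        return self.head(out).log_softmax(dim=-1)  # log P(action | obs, state)

# Example: action distributions for a batch of 2 four-frame clips.
logits = FCNLSTMPolicy()(torch.randn(2, 4, 3, 96, 96), torch.randn(2, 4, 2))
print(logits.shape)  # torch.Size([2, 4, 4])
```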


Citations
Journal Article (DOI)

Weakly Supervised Reinforcement Learning for Autonomous Highway Driving via Virtual Safety Cages

TL;DR: In this article, a reinforcement learning-based approach to autonomous vehicle longitudinal control is presented, in which rule-based safety cages provide both enhanced safety for the vehicle and weak supervision to the reinforcement learning agent.
Journal Article (DOI)

Study of the influence of lexicon and language restrictions on computer assisted transcription of historical manuscripts

TL;DR: The assistive transcription system without lexicon or language restrictions reduces by more than 50% the human effort required to correct the transcriptions offered by the HTR system.
Posted Content

End to End Vehicle Lateral Control Using a Single Fisheye Camera

TL;DR: In this article, a method to generate augmented data and labels using only one short-range fisheye camera is presented and evaluated on real-world driving scenarios: open road and a custom test track with challenging obstacle avoidance and sharp turns.
Book Chapter (DOI)

FasterVideo: Efficient Online Joint Object Detection And Tracking

TL;DR: FasterVideo, as discussed by the authors, extends the Faster R-CNN detection framework to learn instance-level embeddings, which prove beneficial for data association and re-identification, while reaching the very high computational efficiency necessary for relevant applications.
Posted Content

RoCUS: Robot Controller Understanding via Sampling.

TL;DR: RoCUS, as discussed by the authors, is a Bayesian sampling-based method for finding situations that lead to trajectories exhibiting certain behaviors, yielding important insights into the controller that are easily missed in standard task-completion evaluations.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art image classification performance was achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
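A minimal PyTorch sketch of the architecture as summarized above: five convolutional layers, some followed by max-pooling, and three fully connected layers ending in a 1000-way softmax. Channel widths follow the original AlexNet, while local response normalization, dropout, and the two-GPU split are omitted for brevity.

```python
# Minimal AlexNet-like sketch (illustrative; not a faithful reimplementation).
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # 1000-way class scores; softmax gives probabilities
)

probs = alexnet_like(torch.randn(1, 3, 224, 224)).softmax(dim=-1)
print(probs.shape)  # torch.Size([1, 1000])
```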
Journal Article (DOI)

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
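In the paper's notation, this adversarial process corresponds to a two-player minimax game over a value function V(D, G), where D is trained to distinguish training data from samples of G while G is trained to fool D:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```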
Journal Article (DOI)

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC), as mentioned in this paper, is a benchmark in object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Proceedings Article (DOI)

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
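A minimal PyTorch sketch of the fully convolutional idea, assuming an illustrative two-layer backbone rather than the paper's VGG-based models: a 1x1 convolution replaces the fully connected classifier head, and bilinear upsampling returns the coarse score map to the input resolution, so inputs of arbitrary size yield correspondingly-sized dense outputs.

```python
# Tiny fully convolutional network sketch (layer sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 convolution replaces a fully connected classifier head.
        self.score = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        scores = self.score(self.backbone(x))            # coarse score map
        return F.interpolate(scores, size=x.shape[-2:],  # upsample to input size
                             mode="bilinear", align_corners=False)

# Works for any input size: output is (B, num_classes, H, W).
print(TinyFCN()(torch.randn(1, 3, 180, 240)).shape)
```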
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
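The objective maximized by that algorithm is the variational lower bound (ELBO) on the marginal likelihood, made amenable to stochastic gradient optimization by reparameterizing the latent variable:

```latex
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right),
\qquad z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon,\;\; \epsilon \sim \mathcal{N}(0, I)
```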