Proceedings ArticleDOI

Learning to fly by crashing

TLDR
This paper builds a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects to create one of the largest UAV crash datasets.
Abstract
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the largest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For supplementary video see:
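The abstract describes a self-supervised recipe: frames recorded just before a crash become negative examples, earlier frames from the same trajectory become positive examples, and a binary classifier learns whether it is safe to keep flying forward. The sketch below illustrates that labeling-and-training loop under stated assumptions (a hypothetical crash margin, a tiny stand-in network, and placeholder tensors); it is not the authors' released code.

```python
# Minimal sketch of the self-supervised crash-data labeling idea.
# Assumptions (not from the paper's code): frames of shape [3, 224, 224],
# a per-trajectory `margin` of frames near the crash treated as unsafe,
# and a small CNN standing in for the AlexNet-style binary classifier.
import torch
import torch.nn as nn

def label_trajectory(num_frames: int, margin: int = 30):
    """Frames within `margin` steps of the crash are negative (0), earlier frames positive (1)."""
    return [0 if t >= num_frames - margin else 1 for t in range(num_frames)]

class SafetyNet(nn.Module):
    """Tiny stand-in classifier: is it safe to keep flying forward?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # two classes: safe vs. not safe

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One training step on a labeled batch (images: [B, 3, H, W], labels: [B]).
model, loss_fn = SafetyNet(), nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 3, 224, 224)                    # placeholder crash-trajectory frames
labels = torch.tensor(label_trajectory(8, margin=3))    # last 3 frames labeled unsafe
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

At deployment time, a policy can compare the classifier's safety scores for candidate directions (for example, left, center, and right crops of the camera image, as the paper does) and steer toward the crop judged safest.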


Citations
Posted Content

Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges

TL;DR: Continual Learning (CL) is a machine learning paradigm in which the data distribution and learning objective change over time, or in which all of the training data and objective criteria are never available at once.
Proceedings ArticleDOI

Sim2Real Viewpoint Invariant Visual Servoing by Recurrent Control

TL;DR: In this article, a deep recurrent controller is trained to automatically determine which actions move the end-effector of a robotic arm to a desired object by using its memory of past movements, correcting mistakes and gradually moving closer to the target.
Journal ArticleDOI

Review of Deep Learning Methods in Robotic Grasp Detection

TL;DR: The current state of the art in applying deep learning methods to generalised robotic grasping is reviewed, and how each element of the deep learning approach has improved the overall performance of robotic grasp detection is discussed.
Journal ArticleDOI

Deep Drone Racing: From Simulation to Reality With Domain Randomization

TL;DR: This approach is the first to demonstrate zero-shot sim-to-real transfer on the task of agile drone flight and shows significant improvements over the state of the art.
Posted Content

Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

TL;DR: In this paper, a dueling-architecture-based deep double Q-network (D3QN) is proposed for obstacle avoidance using only monocular RGB vision; it can efficiently learn to avoid obstacles in a simulator even with very noisy depth information predicted from RGB images.
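As a hedged sketch of the two ingredients named in this entry, the snippet below shows a dueling Q-head (separate value and advantage streams recombined into Q-values) and a double-Q bootstrap target (actions selected by the online network, evaluated by the target network). Layer sizes and names are illustrative assumptions, not the D3QN authors' implementation.

```python
# Dueling double-Q sketch: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a),
# with the double-Q target using the online net to pick actions and the
# target net to score them. Shapes and sizes are illustrative only.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, num_actions: int, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.value = nn.Linear(feat_dim, 1)                 # V(s)
        self.advantage = nn.Linear(feat_dim, num_actions)   # A(s, a)

    def forward(self, x):
        h = self.encoder(x)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)          # Q(s, a)

def double_q_target(reward, next_obs, done, online, target, gamma=0.99):
    """Double-Q bootstrap: argmax from the online net, value from the target net."""
    with torch.no_grad():
        best = online(next_obs).argmax(dim=1, keepdim=True)
        q_next = target(next_obs).gather(1, best).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next       # done is a 0/1 float tensor
```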
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art classification performance on ImageNet.
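The architecture summarized here is AlexNet; assuming torchvision is available, a quick structural check (a sketch, not part of the paper) confirms the five convolutional layers, three fully-connected layers, and 1000-way output.

```python
# Structural check of the AlexNet architecture using torchvision's reference model.
import torch
from torchvision.models import alexnet

model = alexnet()                        # randomly initialized, 1000 output classes
x = torch.randn(1, 3, 224, 224)          # one ImageNet-sized RGB image
probs = torch.softmax(model(x), dim=1)   # final 1000-way softmax

num_conv = sum(isinstance(m, torch.nn.Conv2d) for m in model.modules())
num_fc = sum(isinstance(m, torch.nn.Linear) for m in model.modules())
print(num_conv, num_fc, tuple(probs.shape))   # expected: 5 3 (1, 1000)
```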
Journal ArticleDOI

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: R-CNN combines CNNs with bottom-up region proposals to localize and segment objects, improving mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 (a mAP of 53.3%); when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Proceedings ArticleDOI

Parallel Tracking and Mapping for Small AR Workspaces

TL;DR: A system specifically designed to track a hand-held camera in a small AR workspace, processed in parallel threads on a dual-core computer, that produces detailed maps with thousands of landmarks which can be tracked at frame-rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.