Proceedings ArticleDOI

Learning to fly by crashing

TLDR
This paper builds a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects to create one of the biggest UAV crash datasets.
Abstract
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts: however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation. But the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles including humans. For supplementary video see:
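As a rough, hypothetical sketch of how such self-supervised crash data can become a navigation policy: frames recorded well before a crash are treated as "free to fly", frames near the crash as "blocked", and the resulting classifier steers by comparing crops of the current view. The labelling margin, crop geometry, and tiny network below are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: label crash trajectories automatically, train a small
# binary classifier, and steer toward the crop with the highest "free" score.
import torch
import torch.nn as nn

class FreeSpaceNet(nn.Module):
    """Binary classifier: is this image patch safe to fly into?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, 2)   # logits: [blocked, free]

    def forward(self, x):
        return self.classifier(self.features(x))

def label_trajectory(frames, crash_index, margin=30):
    """Frames far from the crash are positives, frames near the crash negatives."""
    return [(frame, 1 if i < crash_index - margin else 0)
            for i, frame in enumerate(frames)]

def choose_direction(model, image):
    """Split the view into left/straight/right crops and fly toward the freest one."""
    w = image.shape[-1]
    crops = {"left": image[..., : w // 3],
             "straight": image[..., w // 3 : 2 * w // 3],
             "right": image[..., 2 * w // 3 :]}
    with torch.no_grad():
        probs = {d: torch.softmax(model(c.unsqueeze(0)), dim=1)[0, 1].item()
                 for d, c in crops.items()}
    return max(probs, key=probs.get)
```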


Citations
Journal ArticleDOI

Deep learning in medical imaging and radiation therapy.

TL;DR: The general principles of DL and convolutional neural networks are introduced, five major areas of application of DL in medical imaging and radiation therapy are surveyed, common themes are identified, methods for dataset expansion are discussed, and lessons learned, remaining challenges, and future directions are summarized.
Journal ArticleDOI

DroNet: Learning to Fly by Driving

TL;DR: The proposed DroNet is a convolutional neural network that can safely drive a drone through the streets of a city. It is trained from data collected by cars and bicycles, which, being already integrated into the urban environment, do not endanger other vehicles or pedestrians.
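For context, DroNet couples a steering prediction with a collision-probability prediction from a single forward-facing image. The fragment below is a simplified two-headed stand-in, not the published ResNet-based architecture; the layer sizes and grayscale input are assumptions.

```python
# Simplified two-headed network in the spirit of DroNet: one head regresses a
# steering angle, the other predicts a collision probability. Placeholder sizes.
import torch
import torch.nn as nn

class DroNetLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering = nn.Linear(64, 1)     # steering angle (regression)
        self.collision = nn.Linear(64, 1)    # collision logit (classification)

    def forward(self, grayscale_image):
        z = self.backbone(grayscale_image)
        return self.steering(z), torch.sigmoid(self.collision(z))
```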
Proceedings ArticleDOI

Learning to Drive in a Day

TL;DR: In this paper, the authors demonstrate the first application of deep reinforcement learning to autonomous driving using a single monocular image as input, and provide a general and easy-to-obtain reward: the distance travelled by the vehicle without the safety driver taking control.
Posted Content

Learning to Drive in a Day

TL;DR: This work demonstrates a new framework for autonomous driving that moves away from reliance on defined logical rules, mapping, and direct supervision, and provides a general and easy-to-obtain reward: the distance travelled by the vehicle without the safety driver taking control.
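A minimal sketch of that reward structure, assuming hypothetical environment and agent interfaces (the original work trains a model-free deep RL agent; here the only point is that reward is distance travelled and the episode ends on safety-driver takeover):

```python
# Toy episode loop with the "distance until intervention" reward described above.
# `env` and `agent` are hypothetical placeholders, not the paper's software stack.
def run_episode(env, agent, dt=0.1):
    obs = env.reset()                               # single monocular camera image
    total_distance, done = 0.0, False
    while not done:
        action = agent.act(obs)                     # steering and speed setpoint
        next_obs, info = env.step(action)
        reward = info["speed"] * dt                 # metres travelled this step
        done = info["safety_driver_intervened"]     # episode ends on takeover
        agent.observe(obs, action, reward, next_obs, done)
        obs = next_obs
        total_distance += reward
    return total_distance
```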
Posted Content

Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control.

TL;DR: It is demonstrated that visual MPC can generalize to never-before-seen objects, both rigid and deformable, and solve a range of user-defined object manipulation tasks using the same model.
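The planning loop behind visual MPC can be sketched as sampling candidate action sequences, rolling them through a learned video-prediction model, and executing the first action of the cheapest sequence. In the sketch below, `predict_video` and `pixel_distance_cost` are hypothetical stand-ins for the learned model and the goal-pixel cost, and plain random shooting replaces the more refined sampler (e.g. CEM) used in practice.

```python
# Sampling-based visual MPC sketch: score action sequences with a learned video
# predictor and a goal-pixel cost, then execute only the first action and replan.
import numpy as np

def plan_action(current_image, goal_pixel, predict_video, pixel_distance_cost,
                horizon=10, num_samples=200, action_dim=4, rng=None):
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(-1.0, 1.0, size=(num_samples, horizon, action_dim))
    costs = []
    for actions in candidates:
        predicted_frames = predict_video(current_image, actions)   # learned model
        costs.append(pixel_distance_cost(predicted_frames, goal_pixel))
    best = candidates[int(np.argmin(costs))]
    return best[0]   # MPC: execute the first action, then replan
```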
References
Proceedings ArticleDOI

Learning to push by grasping: Using multiple tasks for effective learning

TL;DR: In this paper, the authors show that models with multi-task learning tend to perform better than task-specific models trained with the same amount of data; for example, a deep network trained with 2.5k grasp and 2.5k push examples performs better on grasping than a network trained on 5k grasp examples.
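A shared visual trunk with separate grasp and push heads is the kind of multi-task architecture alluded to above; a hypothetical sketch follows, with layer sizes and output parameterisations as assumptions rather than the paper's network.

```python
# Shared-trunk multi-task sketch: both tasks reuse the same visual features,
# which is what lets push data improve grasping. Sizes are placeholders.
import torch.nn as nn

class GraspPushNet(nn.Module):
    def __init__(self, num_grasp_angles=18):
        super().__init__()
        self.trunk = nn.Sequential(                    # shared visual features
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.grasp_head = nn.Linear(64, num_grasp_angles)  # grasp-angle success logits
        self.push_head = nn.Linear(64, 3)                  # e.g. push direction/length

    def forward(self, image, task):
        z = self.trunk(image)
        return self.grasp_head(z) if task == "grasp" else self.push_head(z)
```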
Proceedings ArticleDOI

Planning 3-D Path Networks in Unstructured Environments

TL;DR: Using a priori aerial scans of forested environments, the method computes a network of free-space bubbles forming safe paths through environments cluttered with tree trunks, branches, and dense foliage.
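The free-space-bubble construction can be sketched as follows: each bubble's radius is the clearance to the nearest scan point, and overlapping bubbles are linked into a graph of safe corridors. The sampling of candidate centres and the safety margin below are assumptions, not the paper's exact procedure.

```python
# Free-space-bubble sketch: bubble radius = distance to nearest obstacle point
# minus a margin; overlapping bubbles form edges of a safe-corridor graph.
import numpy as np

def build_bubble_graph(sample_points, obstacle_points, safety_margin=0.2):
    obstacles = np.asarray(obstacle_points)            # (M, 3) scan points
    bubbles = []
    for p in np.asarray(sample_points):                # (N, 3) candidate centres
        radius = np.linalg.norm(obstacles - p, axis=1).min() - safety_margin
        if radius > 0:
            bubbles.append((p, radius))
    edges = []
    for i, (pi, ri) in enumerate(bubbles):
        for j, (pj, rj) in enumerate(bubbles[i + 1:], start=i + 1):
            if np.linalg.norm(pi - pj) < ri + rj:      # bubbles overlap: safe passage
                edges.append((i, j))
    return bubbles, edges
```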
Proceedings ArticleDOI

Aerial robot piloted in steep relief by optic flow sensors

TL;DR: The paper investigates how the ground-avoidance performance of the earlier OCTAVE robot can be enhanced to cope with steep relief by combining frontal and ventral optic flow (OF) sensors and by merging feedback and feedforward loops.
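The ventral part of such a scheme reduces to a feedback loop that holds the measured ventral optic flow (roughly forward speed divided by height) near a setpoint, so the vehicle climbs automatically over rising terrain. A toy version, with made-up gain and setpoint, is sketched below; it is not the OCTAVE controller.

```python
# Toy ventral optic-flow regulator: OF above the setpoint means the ground is
# too close, so the controller commands a climb proportional to the error.
def ventral_of_regulator(of_setpoint=2.0, gain=0.5):
    def control(measured_of):
        error = measured_of - of_setpoint   # OF too high -> ground too close
        return gain * error                 # commanded climb rate
    return control

controller = ventral_of_regulator()
print(controller(measured_of=3.1))          # positive -> climb away from the ground
```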
Proceedings ArticleDOI

Real-time path planning in a dynamic 3-D environment

TL;DR: A collision-free path for a robot with arbitrary motion in a dynamic 3-D environment is efficiently found using the real-time A* algorithm and a potential field generated from each occupied (black) cell of the octree.
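A much-simplified stand-in for that idea is sketched below: plain A* over a dense 3-D grid (rather than real-time A* over an octree), where each occupied cell adds a repulsive potential to the traversal cost so the search prefers paths with clearance. The grid, potential shape, and weights are assumptions.

```python
# Grid-based sketch: A* whose step cost includes a repulsive potential from
# occupied cells, trading path length against obstacle clearance.
import heapq
import numpy as np

def potential(cell, occupied_cells, strength=5.0):
    d = min(np.linalg.norm(np.subtract(cell, o)) for o in occupied_cells)
    return strength / (1.0 + d)              # higher cost close to obstacles

def astar(start, goal, occupied_cells, shape):
    occupied = set(map(tuple, occupied_cells))
    frontier = [(0.0, 0.0, start, [start])]   # (f = g + h, g, cell, path)
    visited = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        for d in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nxt = tuple(np.add(cell, d))
            if any(c < 0 or c >= s for c, s in zip(nxt, shape)) or nxt in occupied:
                continue
            g_next = g + 1.0 + potential(nxt, occupied_cells)
            h = float(np.linalg.norm(np.subtract(goal, nxt)))
            heapq.heappush(frontier, (g_next + h, g_next, nxt, path + [nxt]))
    return None

# example: astar((0,0,0), (3,3,0), occupied_cells=[(1,1,0), (2,2,0)], shape=(5,5,5))
```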
Proceedings ArticleDOI

Self-supervised monocular distance learning on a lightweight micro air vehicle

TL;DR: A self-supervised learning approach is proposed that combines a camera and a very small short-range proximity sensor to find the relation between the appearance of objects in camera images and their corresponding distances.
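A minimal sketch of that self-supervised setup, assuming a hypothetical network and training step: each camera image is paired with the proximity-sensor reading taken at the same moment, which serves as a free regression label with no human annotation.

```python
# Appearance-to-distance regression sketch: proximity readings label the images.
# The architecture and training details are placeholders, not the paper's code.
import torch
import torch.nn as nn

class AppearanceToDistance(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, image):
        return self.net(image)               # predicted distance in metres

def train_step(model, optimizer, images, proximity_readings):
    """Proximity-sensor readings act as free labels for the regression loss."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images).squeeze(1), proximity_readings)
    loss.backward()
    optimizer.step()
    return loss.item()

model = AppearanceToDistance()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```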