Proceedings ArticleDOI

Learning to fly by crashing

TLDR
This paper builds a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects to create one of the biggest UAV crash datasets.
Abstract
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts: however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation. But the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For supplementary video see:
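
The abstract describes the data collection and learning setup only at a high level. As a rough illustration, the sketch below shows one way crash trajectories could be turned into a self-supervised binary "safe vs. unsafe" classifier and a simple steering policy that compares left, center, and right crops of the current view. All names (CrashClassifier, label_trajectory, choose_action), the backbone, and the thresholds are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: self-supervised labels from crash trajectories and a
# crop-comparison steering policy. Names and constants are illustrative only.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF

class CrashClassifier(nn.Module):
    """Binary classifier: does this image patch look safe to fly toward?"""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # any small CNN backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x):            # x: (B, 3, H, W)
        return self.backbone(x)      # logits for [unsafe, safe]

def label_trajectory(frames, crash_margin=10):
    """Self-supervised labels: frames recorded far from the crash are 'safe' (1),
    the last `crash_margin` frames before impact are 'unsafe' (0)."""
    n = len(frames)
    return [1 if i < n - crash_margin else 0 for i in range(n)]

@torch.no_grad()
def choose_action(model, image):
    """Steer toward whichever crop (left / center / right) looks safest."""
    _, h, w = image.shape
    crops = [TF.crop(image, 0, 0, h, w // 3),           # left third
             TF.crop(image, 0, w // 3, h, w // 3),      # center third
             TF.crop(image, 0, 2 * w // 3, h, w // 3)]  # right third
    batch = torch.stack([TF.resize(c, [224, 224]) for c in crops])
    safe_prob = model(batch).softmax(dim=1)[:, 1]
    return ["left", "straight", "right"][safe_prob.argmax().item()]
```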


Citations
Posted Content

GONet: A Semi-Supervised Deep Learning Approach For Traversability Estimation

TL;DR: In this paper, a semi-supervised deep learning approach for traversability estimation from fisheye images is proposed, which is trained with many positive images of traversable places, but just a small set of negative images depicting blocked and unsafe areas.
Posted Content

Object-centric Forward Modeling for Model Predictive Control

TL;DR: An approach to learn an object-centric forward model that can be leveraged to search for action sequences that lead to desired goal configurations; in conjunction with a learned correction module, this allows for robust closed-loop execution.
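
The summary above frames planning as a search over action sequences under a learned forward model. The sketch below shows a generic random-shooting version of that idea; `forward_model` is an assumed stand-in for the learned object-centric model, and none of the names or settings come from the cited paper.

```python
# Hypothetical random-shooting MPC sketch around a learned forward model.
# `forward_model(state, action)` is assumed to predict the next state; nothing
# here reproduces the cited paper's implementation.
import numpy as np

def plan_first_action(forward_model, state, goal, horizon=10, n_samples=256,
                      action_dim=4, action_scale=1.0, seed=0):
    """Sample random action sequences, roll each through the learned model,
    and return the first action of the lowest-cost sequence (re-planned
    after every executed step, i.e. model predictive control)."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-action_scale, action_scale,
                             size=(n_samples, horizon, action_dim))
    best_cost, best_action = np.inf, None
    for seq in candidates:
        s = state
        for a in seq:
            s = forward_model(s, a)          # predicted next configuration
        cost = np.linalg.norm(s - goal)      # distance of final prediction to goal
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

# Toy usage with a linear stand-in for the learned model:
toy_model = lambda s, a: s + 0.1 * a[:3]
first_action = plan_first_action(toy_model, np.zeros(3), np.ones(3))
```
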
Journal ArticleDOI

Reward-driven U-Net training for obstacle avoidance drone

TL;DR: This study proposes a new framework in which a supervised segmentation network is trained, in a reward-driven manner, with labels produced by an actor-critic network; the U-Net-based network then infers the next moving direction from a sequence of input images.
Proceedings ArticleDOI

Self-training by Reinforcement Learning for Full-autonomous Drones of the Future*

TL;DR: This paper presents a drone concept with a full level of autonomy based on Deep Reinforcement Learning (DRL) and reports preliminary results for an environment that is a realistic flight simulator and an agent that is a quadcopter drone able to execute three actions.
Posted Content

Combining Optimal Control and Learning for Visual Navigation in Novel Environments.

TL;DR: This work couples model-based control with learning-based perception to produce a series of waypoints that guide the robot to the goal via a collision-free path, and demonstrates that the proposed approach reaches goal locations more reliably and efficiently in novel environments than purely geometric mapping-based or end-to-end learning-based alternatives.
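
As a very rough sketch of the "learned perception proposes waypoints, model-based control tracks them" split described above, the snippet below pairs a placeholder waypoint predictor with a simple tracking controller. Both functions are illustrative assumptions; the cited work's actual perception network and controller are not reproduced here.

```python
# Hypothetical sketch: a learned perception module proposes a waypoint and a
# model-based layer tracks it. Both functions below are illustrative stand-ins.
import numpy as np

def predict_waypoint(image, goal_xy):
    """Placeholder for a learned network mapping (image, relative goal) to a
    nearby collision-free waypoint; here it simply steps toward the goal."""
    direction = goal_xy / (np.linalg.norm(goal_xy) + 1e-8)
    return 1.5 * direction                     # waypoint 1.5 m toward the goal

def track_waypoint(pose_xyth, waypoint_xy, k_lin=0.5, k_ang=1.0):
    """Simple stand-in for the model-based tracking layer: a proportional
    controller returning (linear velocity, angular velocity)."""
    dx, dy = waypoint_xy - pose_xyth[:2]
    heading_error = np.arctan2(dy, dx) - pose_xyth[2]
    heading_error = (heading_error + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    v = k_lin * np.hypot(dx, dy) * np.cos(heading_error)
    w = k_ang * heading_error
    return v, w

# One control step: perceive, pick a waypoint, track it.
waypoint = predict_waypoint(image=None, goal_xy=np.array([4.0, 2.0]))
v, w = track_waypoint(np.zeros(3), waypoint)
```
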
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art classification performance on ImageNet.
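
The summary spells out the network shape; for concreteness, here is a minimal PyTorch sketch of an architecture with that shape: five convolutional layers, some followed by max-pooling, and three fully connected layers ending in 1000 class scores. The specific filter counts and sizes follow the commonly cited AlexNet configuration rather than this summary, and dropout, local response normalization, and the original two-GPU split are omitted.

```python
# Minimal sketch of an AlexNet-style network: five conv layers (some followed
# by max-pooling) and three fully connected layers with 1000 output classes.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # class scores; the 1000-way softmax is applied in the loss
)

logits = alexnet_like(torch.randn(1, 3, 227, 227))   # -> torch.Size([1, 1000])
```
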
Journal ArticleDOI

Human-level control through deep reinforcement learning

TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: R-CNN combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Posted Content

Rich feature hierarchies for accurate object detection and semantic segmentation

TL;DR: This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
Proceedings ArticleDOI

Parallel Tracking and Mapping for Small AR Workspaces

TL;DR: A system specifically designed to track a hand-held camera in a small AR workspace, processed in parallel threads on a dual-core computer, that produces detailed maps with thousands of landmarks which can be tracked at frame-rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.