Open Access Proceedings Article (DOI)

Learning modular neural network policies for multi-task and multi-robot transfer

TL;DR
The authors decompose neural network policies into task-specific and robot-specific modules, where the task-specific modules are shared across robots and the robot-specific modules are shared across all tasks on that robot, and exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations.
Abstract
Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.
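The decomposition described in the abstract can be sketched as composing a shared task-specific module with a shared robot-specific module. The sketch below uses toy linear maps and made-up dimensions purely for illustration; the paper's actual modules are trained neural networks, and all names here (`task_module`, `robot_module`, the weight shapes) are assumptions, not the paper's API:

```python
# Minimal sketch of mix-and-match module composition, assuming each module
# can be stood in for by a small linear map. Dimensions are illustrative.

def make_linear(rows, cols, scale=0.1):
    """Toy weight matrix standing in for a trained neural-network module."""
    return [[scale * (r + c) for c in range(cols)] for r in range(rows)]

def apply(weights, vec):
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

def task_module(task_weights, observation):
    # Task-specific module: maps a (possibly visual) observation to a
    # task-relevant intermediate representation, shared across robots.
    return apply(task_weights, observation)

def robot_module(robot_weights, intermediate):
    # Robot-specific module: maps that representation to motor commands,
    # shared across all tasks on this robot.
    return apply(robot_weights, intermediate)

def policy(task_weights, robot_weights, observation):
    return robot_module(robot_weights, task_module(task_weights, observation))

# Train modules on some robot-task pairs, then compose an unseen pair:
task_a, task_b = make_linear(4, 3), make_linear(4, 3, scale=0.2)
robot_1, robot_2 = make_linear(2, 4), make_linear(2, 4, scale=0.05)
obs = [1.0, 0.5, -0.5]
action = policy(task_b, robot_1, obs)  # combination never seen in training
```

Because the interface between modules is a shared intermediate representation, any task module can in principle be paired with any robot module, which is what enables the zero-shot robot-task combinations the paper evaluates.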


Citations
Journal Article (DOI)

Deep Convolutional Transfer Learning Network: A New Method for Intelligent Fault Diagnosis of Machines With Unlabeled Data

TL;DR: A new intelligent method named deep convolutional transfer learning network (DCTLN) is proposed, which facilitates the 1-D CNN to learn domain-invariant features by maximizing domain recognition errors and minimizing the probability distribution distance.
Posted Content

Multi-Task Learning with Deep Neural Networks: A Survey

TL;DR: An overview of multi-task learning methods for deep neural networks is given, with the aim of summarizing both the well-established and most recent directions within the field.
Book Chapter (DOI)

Dynamic Task Prioritization for Multitask Learning

TL;DR: This work proposes a notion of dynamic task prioritization to automatically prioritize more difficult tasks by adaptively adjusting the mixing weight of each task’s loss objective and outperforms existing multitask methods and demonstrates competitive results with modern single-task models on the COCO and MPII datasets.
Proceedings Article (DOI)

On-Demand Deep Model Compression for Mobile Devices: A Usage-Driven Model Selection Framework

TL;DR: A usage-driven selection framework is developed, referred to as AdaDeep, to automatically select a combination of compression techniques for a given DNN, that will lead to an optimal balance between user-specified performance goals and resource constraints.
Posted Content

Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning

TL;DR: In this article, a meta-controller learns to execute sequences of instructions after learning useful skills that solve subtasks, which is a step towards developing zero-shot task generalization capabilities in reinforcement learning.
References
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Posted Content

Playing Atari with Deep Reinforcement Learning

TL;DR: This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
Journal Article (DOI)

Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning

TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates.
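The gradient-following update this article analyzes can be illustrated for a single Bernoulli-logistic unit on a toy bandit problem. The environment, learning rate, and step count below are illustrative assumptions, not from the article; only the update rule (weight change proportional to reward times the score-function term `a - p`) follows Williams' REINFORCE:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reinforce_bandit(steps=2000, lr=0.5, seed=0):
    """REINFORCE for one Bernoulli-logistic unit on a two-action bandit."""
    rng = random.Random(seed)
    w = 0.0  # single weight; p = sigmoid(w) is the probability of action 1
    for _ in range(steps):
        p = sigmoid(w)
        a = 1 if rng.random() < p else 0
        r = 1.0 if a == 1 else 0.0  # toy reward: only action 1 pays off
        # REINFORCE update: dw = lr * r * d ln Pr(a) / dw = lr * r * (a - p).
        # No gradient of expected reward is computed explicitly; the sampled
        # update follows that gradient in expectation.
        w += lr * r * (a - p)
    return sigmoid(w)

p_final = reinforce_bandit()  # probability of the rewarded action after training
```

With only the rewarded action reinforced, the unit's probability of choosing it climbs toward 1, despite the algorithm never forming an explicit gradient estimate.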
Journal Article (DOI)

Multitask Learning

TL;DR: Multi-task Learning (MTL) as mentioned in this paper is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias.
Book Chapter (DOI)

Domain-adversarial training of neural networks

TL;DR: In this article, a new representation learning approach for domain adaptation is proposed, in which data at training and test time come from similar but different distributions, and features that cannot discriminate between the training (source) and test (target) domains are used to promote the emergence of features that are discriminative for the main learning task on the source domain.