Open Access · Posted Content
Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
TL;DR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
Abstract:
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
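The controller loop the abstract describes fits in a short sketch: an RNN autoregressively samples tokens that describe an architecture, the sampled network's validation accuracy is used as a REINFORCE reward, and a moving-average baseline reduces variance. The PyTorch sketch below is illustrative, not the paper's code; the vocabulary, step count, and the `toy_reward` stand-in (which in the paper would be the child network's validation accuracy) are assumptions.

```python
# Minimal sketch of an RL-trained architecture controller (illustrative).
import torch
import torch.nn as nn
from torch.distributions import Categorical

VOCAB = 8    # e.g. discretized choices per architectural decision (assumed)
STEPS = 12   # number of decisions per sampled architecture (assumed)
HIDDEN = 35

class Controller(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.cell = nn.LSTMCell(HIDDEN, HIDDEN)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def sample(self):
        """Autoregressively sample one architecture description."""
        h = torch.zeros(1, HIDDEN)
        c = torch.zeros(1, HIDDEN)
        tok = torch.zeros(1, dtype=torch.long)  # start token
        tokens, log_probs = [], []
        for _ in range(STEPS):
            h, c = self.cell(self.embed(tok), (h, c))
            dist = Categorical(logits=self.head(h))
            tok = dist.sample()
            tokens.append(tok.item())
            log_probs.append(dist.log_prob(tok))
        return tokens, torch.stack(log_probs).sum()

def toy_reward(arch_tokens):
    # Hypothetical stand-in: the paper instead trains the described child
    # network and returns its validation accuracy in [0, 1].
    return sum(t > VOCAB // 2 for t in arch_tokens) / STEPS

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=3.5e-4)
baseline = 0.0  # moving-average baseline to reduce gradient variance

for step in range(1000):
    arch, log_prob = controller.sample()
    reward = toy_reward(arch)
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(reward - baseline) * log_prob  # REINFORCE with baseline
    opt.zero_grad()
    loss.backward()
    opt.step()
```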
Citations
Journal Article · DOI
A survey on Image Data Augmentation for Deep Learning
TL;DR: This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation, a data-space solution to the problem of limited data.
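As a concrete illustration of the kind of data-space technique the survey covers, here is a small image-augmentation pipeline using torchvision; it is not code from the survey, and the specific transforms and parameters are arbitrary examples.

```python
# Illustrative image-augmentation pipeline (torchvision).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),                 # random shifts
    transforms.RandomHorizontalFlip(),                    # mirror images
    transforms.ColorJitter(brightness=0.2, contrast=0.2), # photometric noise
    transforms.ToTensor(),
])
```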
Book Chapter · DOI
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
TL;DR: ShuffleNet V2 proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs, and, based on a series of controlled experiments, derives several practical guidelines for efficient network design.
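The signature building block of the ShuffleNet family (whose composition the V2 guidelines concern) is the channel-shuffle operation; a minimal sketch, with toy tensor sizes as assumptions:

```python
# Minimal sketch of the channel-shuffle operation (illustrative).
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    n, c, h, w = x.shape
    # reshape into (groups, channels_per_group), transpose, flatten back
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))

x = torch.randn(1, 8, 4, 4)
assert channel_shuffle(x, groups=2).shape == x.shape
```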
Proceedings Article · DOI
Transformer-XL: Attentive Language Models beyond a Fixed-Length Context
TL;DR: This work proposes a novel neural architecture Transformer-XL that enables learning dependencies beyond a fixed length without disrupting temporal coherence, which consists of a segment-level recurrence mechanism and a novel positional encoding scheme.
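A sketch of the segment-level recurrence idea: hidden states from the previous segment are cached with gradients stopped and prepended as extra attention context for the next segment. The `layer` callable below is an assumed stand-in for one attention layer; a real implementation also needs the paper's relative positional encoding.

```python
# Sketch of segment-level recurrence (illustrative, per layer).
import torch

def forward_segment(layer, segment, memory):
    # memory: cached hidden states from the previous segment, or None
    context = segment if memory is None else torch.cat([memory, segment], dim=1)
    out = layer(segment, context)   # attend over cached memory + current segment
    new_memory = segment.detach()   # no backprop through the cache
    return out, new_memory

# toy "layer" just to exercise the plumbing (hypothetical)
toy_layer = lambda q, ctx: q + ctx.mean(dim=1, keepdim=True)
out1, mem = forward_segment(toy_layer, torch.randn(2, 16, 64), None)
out2, mem = forward_segment(toy_layer, torch.randn(2, 16, 64), mem)
```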
Journal Article · DOI
Deep Learning for Generic Object Detection: A Survey
Li Liu, Wanli Ouyang, Xiaogang Wang, Paul Fieguth, Jie Chen, Xinwang Liu, Matti Pietikäinen
TL;DR: A comprehensive survey of the recent achievements in this field brought about by deep learning techniques, covering many aspects of generic object detection: detection frameworks, object feature representation, object proposal generation, context modeling, training strategies, and evaluation metrics.
Journal Article · DOI
A brief survey of deep reinforcement learning
TL;DR: This survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor-critic (A3C), and highlight the unique advantages of deep neural networks, focusing on visual understanding via RL.
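For flavor, here is an illustrative fragment of the DQN update this survey covers: the temporal-difference target is computed with a frozen target network. All tensors (`s`, `a`, `r`, `s_next`, `done`) and both networks are assumed given; this is a textbook-style sketch, not code from the survey.

```python
# Illustrative DQN loss with a frozen target network.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, s, a, r, s_next, done, gamma=0.99):
    # Q(s, a) for the actions actually taken
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # TD target; `done` is a 0/1 float mask for terminal states
        target = r + gamma * target_net(s_next).max(1).values * (1 - done)
    return F.smooth_l1_loss(q, target)
```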
References
Journal Article · DOI
The Inference of Regular LISP Programs from Examples
TL;DR: A class of LISP programs analogous to finite-state automata is defined, and an algorithm is given for constructing such programs from examples of their input-output behavior.
Posted Content
Using the Output Embedding to Improve Language Models
Ofir Press, Lior Wolf
TL;DR: The authors show that weight tying can shrink neural translation models to less than half of their original size without harming performance, and propose a new method of regularizing the output embedding.
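The weight-tying trick this entry summarizes is compact enough to show: the output projection shares its weight matrix with the input embedding, which roughly halves the parameter count when the vocabulary dominates. A minimal PyTorch sketch (not the paper's code; sizes are illustrative):

```python
# Minimal sketch of input/output embedding weight tying.
import torch.nn as nn

class TiedLM(nn.Module):
    def __init__(self, vocab=10000, dim=650):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.Linear(dim, vocab, bias=False)
        self.decoder.weight = self.embed.weight  # tie the two matrices

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.decoder(h)  # logits over the vocabulary
```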
Posted Content
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Yarin Gal, Zoubin Ghahramani
TL;DR: This work applies a new variational-inference-based dropout technique to LSTM and GRU models, which outperforms existing techniques and, to the best of the authors' knowledge, improves on the single-model state of the art in language modelling on the Penn Treebank.
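The key mechanical idea is that one dropout mask is sampled per sequence and reused at every timestep, unlike standard dropout, which resamples per step. A minimal sketch (often called "locked" dropout; illustrative, not the paper's code):

```python
# Sketch of per-sequence ("locked") dropout for recurrent inputs.
import torch

def locked_dropout(x: torch.Tensor, p: float = 0.5, training: bool = True):
    # x: (batch, time, features); one mask shared across the time dimension
    if not training or p == 0:
        return x
    mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask
```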
Proceedings Article · DOI
Modeling systems with internal state using evolino
TL;DR: This work uses the general framework for sequence learning, EVOlution of recurrent systems with LINear Outputs (Evolino), to discover good RNN hidden node weights through evolution, while using linear regression to compute an optimal linear mapping from hidden state to output.
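A toy NumPy sketch of the Evolino split described here: recurrent weights are perturbed by evolution, while the hidden-to-output map is fit in closed form by linear regression, and fitness is the resulting training error. The task, sizes, and the simple (1+1) evolution scheme are all assumptions for illustration.

```python
# Toy sketch of Evolino-style evolution + linear-regression readout.
import numpy as np

rng = np.random.default_rng(0)
T, D_IN, D_H = 200, 1, 20
X = rng.standard_normal((T, D_IN))
y = np.sin(np.cumsum(X[:, 0]))  # toy target that requires internal state

def run_rnn(W_in, W_rec, X):
    h = np.zeros(D_H)
    H = np.empty((len(X), D_H))
    for t, x in enumerate(X):
        h = np.tanh(W_in @ x + W_rec @ h)
        H[t] = h
    return H

def fitness(params):
    W_in, W_rec = params
    H = run_rnn(W_in, W_rec, X)
    W_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # optimal linear readout
    return -np.mean((H @ W_out - y) ** 2)

best = (rng.standard_normal((D_H, D_IN)) * 0.5,
        rng.standard_normal((D_H, D_H)) * 0.1)
for gen in range(50):  # simple (1+1) evolution over recurrent weights
    child = tuple(w + 0.05 * rng.standard_normal(w.shape) for w in best)
    if fitness(child) > fitness(best):
        best = child
```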
Posted Content
Learning to learn by gradient descent by gradient descent
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
TL;DR: This work shows that the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest automatically; the learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained.
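A sketch of the learned-optimizer idea: a small LSTM maps each parameter's gradient to an update and is itself trained by backpropagating the optimizee's loss through a short unrolled trajectory. The quadratic toy task, the sizes, and the coordinatewise wiring below are illustrative assumptions, not the paper's setup.

```python
# Sketch of an LSTM optimizer trained on an unrolled optimizee (illustrative).
import torch
import torch.nn as nn

HIDDEN = 20

class LSTMOptimizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.LSTMCell(1, HIDDEN)  # one shared cell per coordinate
        self.out = nn.Linear(HIDDEN, 1)

    def forward(self, grad, state):
        h, c = self.cell(grad.reshape(-1, 1), state)
        return self.out(h).reshape(grad.shape), (h, c)

def unrolled_loss(opt_net, steps=20, dim=10):
    """Meta-loss: sum of optimizee losses along an unrolled trajectory."""
    target = torch.randn(dim)
    theta = torch.zeros(dim, requires_grad=True)
    state = (torch.zeros(dim, HIDDEN), torch.zeros(dim, HIDDEN))
    total = 0.0
    for _ in range(steps):
        loss = ((theta - target) ** 2).sum()  # toy quadratic optimizee
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        update, state = opt_net(grad, state)  # learned update from gradient
        theta = theta + update
        total = total + loss
    return total

opt_net = LSTMOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
for _ in range(100):
    meta_opt.zero_grad()
    unrolled_loss(opt_net).backward()  # backprop through the whole unroll
    meta_opt.step()
```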