Open Access · Posted Content
Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
TLDR
This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
Abstract
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65 percent, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
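To make the controller-training loop concrete, here is a minimal REINFORCE sketch over a toy search space. The per-layer independent logits, the `train_and_evaluate` stand-in, and all hyperparameters are illustrative assumptions; the paper's actual controller is an RNN that conditions each decision on the ones before it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy search space: at each of 4 layers, choose a filter size.
CHOICES = [1, 3, 5, 7]
NUM_LAYERS, NUM_CHOICES = 4, len(CHOICES)

# Controller parameters: independent logits per layer (a simplification
# of the paper's RNN controller).
logits = np.zeros((NUM_LAYERS, NUM_CHOICES))

def train_and_evaluate(arch):
    """Stand-in for training a child network and measuring validation
    accuracy. Hypothetical: rewards architectures that prefer 3x3 filters."""
    return float(np.mean([c == 3 for c in arch]))

baseline = 0.0  # exponential moving average of rewards
for step in range(200):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    actions = [rng.choice(NUM_CHOICES, p=p) for p in probs]
    arch = [CHOICES[a] for a in actions]
    reward = train_and_evaluate(arch)
    baseline = 0.95 * baseline + 0.05 * reward
    # REINFORCE for a softmax policy: grad log pi(a) * (R - b).
    for layer, a in enumerate(actions):
        grad = -probs[layer]
        grad[a] += 1.0
        logits[layer] += 0.1 * (reward - baseline) * grad

print("most likely architecture:", [CHOICES[i] for i in logits.argmax(axis=1)])
```

The moving-average baseline is the variance-reduction trick the paper uses so that the policy gradient rewards architectures only for beating the running average, not for the absolute accuracy level.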
Citations
Proceedings Article
Growing Efficient Deep Networks by Structured Continuous Sparsification
TL;DR: In this paper, the authors develop an approach that grows deep network architectures over the course of training, driven by a principled combination of accuracy and sparsity objectives; starting from a small, simple seed architecture, the method dynamically grows and prunes both layers and filters.
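A minimal sketch of coupling a task loss with a sparsity objective via per-filter gates, assuming PyTorch; the paper's structured continuous sparsification and its growth schedule are more involved than this illustration.

```python
import torch
import torch.nn as nn

# Each output filter has a trainable gate; an L1 penalty pushes gates
# toward zero, and filters whose gate collapses can be pruned.
class GatedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Parameter(torch.ones(out_ch))

    def forward(self, x):
        return self.conv(x) * self.gate.view(1, -1, 1, 1)

layer = GatedConv(3, 16)
x = torch.randn(8, 3, 32, 32)
target = torch.randn(8, 16, 32, 32)

opt = torch.optim.SGD(layer.parameters(), lr=0.1)
for _ in range(10):
    loss = nn.functional.mse_loss(layer(x), target) \
           + 1e-2 * layer.gate.abs().sum()  # sparsity objective
    opt.zero_grad()
    loss.backward()
    opt.step()

keep = layer.gate.abs() > 1e-2  # prune collapsed filters
print(f"filters kept after sparsification: {int(keep.sum())}/16")
```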
Proceedings Article
Neural Architecture Search Based on Particle Swarm Optimization
TL;DR: In this paper, a particle swarm optimization (PSO)-based neural architecture search algorithm is proposed that reduces the coupling between super-net nodes and achieves competitive speed compared to state-of-the-art models.
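For reference, a generic PSO loop over a toy continuous architecture encoding; the `fitness` stand-in and all coefficients are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(position):
    """Stand-in for validation accuracy of the architecture encoded by
    `position` (hypothetical: peak at an all-0.5 encoding)."""
    return -np.sum((position - 0.5) ** 2)

DIM, SWARM = 6, 12  # encoding length, particle count
pos = rng.random((SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
for _ in range(100):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best encoding found:", np.round(gbest, 2))
```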
Posted Content
Hydra: A Peer to Peer Distributed Training & Data Collection Framework.
Vaibhav Mathur, Karanbir Chahal, +1 more
TL;DR: Hydra is a decentralized distributed framework that tackles the problems of collecting diverse, unbiased data and of distributed training by harnessing the substantial idle compute of everyday electronic devices such as smartphones and desktop computers for training and data collection.
Posted Content
Multi-Objective DNN-based Precoder for MIMO Communications
Xinliang Zhang, Mojtaba Vaezi, +1 more
TL;DR: A unified deep neural network (DNN)-based precoder is introduced for two-user multiple-input multiple-output (MIMO) networks with five objectives: data transmission, energy harvesting, simultaneous wireless information and power transfer, physical layer (PHY) security, and multicasting.
Posted Content
Searching for Interaction Functions in Collaborative Filtering
TL;DR: This paper first designs an expressive search space for SIF by reviewing and generalizing existing CF approaches, proposes to represent that search space as a structured multi-layer perceptron, and designs a stochastic gradient descent algorithm that simultaneously updates both the architecture and the learning parameters.
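A hedged sketch of the "update both simultaneously" idea, using a softmax relaxation over a few candidate interaction functions; the paper's structured-MLP parameterization and its actual candidate set differ, and the data here is synthetic.

```python
import torch

# Candidate interaction functions between user and item embeddings.
CANDIDATES = [
    lambda u, v: u + v,
    lambda u, v: u * v,
    lambda u, v: torch.maximum(u, v),
]

emb_u = torch.randn(100, 8, requires_grad=True)           # user embeddings
emb_v = torch.randn(100, 8, requires_grad=True)           # item embeddings
alpha = torch.zeros(len(CANDIDATES), requires_grad=True)  # architecture logits
w_out = torch.randn(8, 1, requires_grad=True)

opt = torch.optim.SGD([emb_u, emb_v, alpha, w_out], lr=0.05)
users = torch.randint(0, 100, (256,))
items = torch.randint(0, 100, (256,))
ratings = torch.randn(256)  # toy targets

for _ in range(100):
    u, v = emb_u[users], emb_v[items]
    mix = torch.softmax(alpha, dim=0)
    h = sum(a * f(u, v) for a, f in zip(mix, CANDIDATES))
    pred = (h @ w_out).squeeze(-1)
    loss = torch.nn.functional.mse_loss(pred, ratings)
    opt.zero_grad()
    loss.backward()
    opt.step()  # one SGD step updates architecture and parameters together

print("interaction-function weights:", torch.softmax(alpha, 0).tolist())
```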
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; an ensemble of these residual nets won 1st place on the ILSVRC 2015 classification task.
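The core idea fits in a few lines; a minimal PyTorch sketch of a basic residual block, where the layers learn a residual F(x) and the identity shortcut carries x forward, so the output is F(x) + x.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # identity shortcut connection

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```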
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
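As a worked example of the update rule, a minimal numpy sketch with the bias-corrected first- and second-moment estimates; the decay rates and epsilon follow the paper's defaults, while the learning rate is tuned for the toy problem.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: EMAs of the gradient (m) and squared gradient (v),
    bias-corrected because both moment estimates start at zero."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: minimize f(x) = x^2, whose gradient is 2x.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # approaches 0
```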
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
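A sketch of the configuration pattern in PyTorch; the block and channel counts below illustrate the 3x3-only design rather than reproduce a full VGG-16. Two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters than a single 5x5 convolution, which is what lets the depth grow.

```python
import torch.nn as nn

# VGG-style stack: only 3x3 convolutions, channels doubling after each
# 2x2 max-pool.
def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
)
```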
Journal Article
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; convolutional networks trained with gradient-based learning are shown to synthesize complex decision surfaces that classify high-dimensional patterns with minimal preprocessing.
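The paper is best known for the LeNet-5 convolutional network it presents alongside the GTN framework; a minimal PyTorch sketch in that style (layer sizes simplified from the original), showing the alternating convolution and subsampling trained end to end by gradient descent.

```python
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),  # 10 digit classes
)
print(lenet(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```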
Proceedings Article
Histograms of oriented gradients for human detection
Navneet Dalal, Bill Triggs
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
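A simplified numpy sketch of the descriptor's first stage: per-cell histograms of gradient orientations weighted by gradient magnitude. The block normalization over overlapping cells, one of the stages whose influence the paper studies, is omitted here for brevity.

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Toy HOG front end: gradient-orientation histograms per cell.
    (The full descriptor also L2-normalizes histograms over overlapping
    blocks, which is what gives HOG its illumination robustness.)"""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = image.shape
    hist = np.zeros((h // cell, w // cell, bins))
    bin_idx = (ang / (180 / bins)).astype(int) % bins
    for i in range(h // cell * cell):
        for j in range(w // cell * cell):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist

image = np.random.rand(64, 128)  # typical detection-window size
print(hog_cell_histograms(image).shape)  # (8, 16, 9)
```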