Posted Content

Neural Architecture Search with Reinforcement Learning

Barret Zoph1, Quoc V. Le1
05 Nov 2016 - arXiv
TL;DR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
Abstract: Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65%, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
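The policy-gradient idea behind the controller can be sketched in a few lines. This is a toy stand-in, not the paper's RNN controller: the search space here is two filter-size decisions, the reward function is a hypothetical proxy for validation accuracy, and the policy is a plain per-decision softmax trained with REINFORCE and a moving-average baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's search space: pick one of three filter
# sizes for each of two layers (the real controller emits many more tokens).
CHOICES = [3, 5, 7]
logits = np.zeros((2, 3))          # controller parameters: one softmax per decision

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    picks, grads = [], []
    for layer in range(2):
        p = softmax(logits[layer])
        k = rng.choice(3, p=p)
        g = -p.copy()
        g[k] += 1.0                # gradient of log p(k) w.r.t. the logits
        picks.append(k)
        grads.append(g)
    return picks, grads

def reward(picks):
    # Hypothetical "validation accuracy": architectures closer to all-5x5
    # filters score higher. A real run would train each child network.
    return 1.0 - 0.1 * sum(abs(CHOICES[k] - 5) for k in picks)

baseline, lr = 0.0, 0.2
for _ in range(1000):
    picks, grads = sample_architecture()
    r = reward(picks)
    baseline = 0.9 * baseline + 0.1 * r             # moving-average baseline
    for layer in range(2):
        logits[layer] += lr * (r - baseline) * grads[layer]

best = [CHOICES[int(np.argmax(logits[l]))] for l in range(2)]
print(best)
```

Because the reward is highest for 5x5 filters at both positions, the controller's logits drift toward sampling that design; the baseline keeps the updates centered so suboptimal samples are actively pushed down.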
Citations
Posted Content
15 Feb 2020
TL;DR: This work presents FedNAS, a highly optimized framework for efficient federated NAS that fully exploits the key opportunity of insufficient model candidate re-training during the architecture search process, and incorporates three key optimizations: parallel candidates training on partial clients, early dropping candidates with inferior performance, and dynamic round numbers.
Abstract: To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data. However, training over decentralized data makes neural architecture design, already a hard problem, even more difficult. The difficulty is further amplified when designing and deploying different neural architectures for heterogeneous mobile platforms. In this work, we incorporate automatic neural architecture search into decentralized training, as a new DNN training paradigm called Federated Neural Architecture Search, namely federated NAS. To deal with the primary challenge of limited on-client computational and communication resources, we present FedNAS, a highly optimized framework for efficient federated NAS. FedNAS fully exploits the key opportunity that model candidates need not be fully re-trained during the architecture search process, and incorporates three key optimizations: parallel training of candidates on partial clients, early dropping of candidates with inferior performance, and dynamic round numbers. Tested on large-scale datasets and typical CNN architectures, FedNAS achieves model accuracy comparable to a state-of-the-art NAS algorithm that trains models with centralized data, and also reduces the client cost by up to two orders of magnitude compared to a straightforward design of federated NAS.
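The early-dropping optimization resembles successive halving: evaluate all candidates cheaply, keep the better half, repeat. A minimal sketch, with a noisy score function standing in for partial on-client training and hypothetical candidate names:

```python
import random

random.seed(1)

# Hypothetical candidate pool: each architecture has a latent "true accuracy"
# that one round of partial training reveals only noisily.
candidates = {f"arch_{i}": 0.5 + 0.05 * i for i in range(8)}

def partial_eval(true_acc):
    # Stand-in for a round of federated training and validation.
    return true_acc + random.uniform(-0.02, 0.02)

pool = list(candidates)
while len(pool) > 1:
    scored = sorted(pool, key=lambda c: partial_eval(candidates[c]), reverse=True)
    pool = scored[: len(pool) // 2]    # early-drop the inferior half each round

print(pool[0])
```

Dropping half the pool per round means total training cost grows roughly linearly in the number of candidates rather than linearly in candidates times full training rounds, which is where the claimed client-cost savings come from.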

7 citations


Cites background from "Neural Architecture Search with Rei..."

  • ...NAS is known to be computation-intensive (e.g., thousands of GPU-hrs [35]), given the large number of model candidates to be explored....


Proceedings ArticleDOI
01 Dec 2019
TL;DR: The reliability model for three-version machine learning architecture is constructed with a diversity measure defined as the intersection of error spaces in the sample space and a necessary condition under which three- version architecture achieves a higher system reliability than a single machine learning module is derived.
Abstract: The diversity of system components is one of the important contributing factors of reliable and secure software systems. In a software fault-tolerant system using diverse versions of software components, a component failure caused by defects or malicious attacks can be covered by other versions. Machine learning systems can also benefit from such a multi-version approach to improve the system reliability. Nevertheless, there are few studies addressing this issue. In this paper, we experimentally analyze how outputs of machine learning modules can be diversified by using different versions of machine learning algorithms, neural network architectures and perturbated input data. The experiments are conducted on image classification tasks of MNIST data set and Belgian Traffic Sign data set. Different neural network architectures, support vector machines and random forests are used for constructing diverse machine learning models. The diversity is characterized by the coverage of errors over the test samples. We observe that the different machine learning models have quite different error coverages that can be leveraged for system reliability design. Based on the experimental results, we construct the reliability model for three-version machine learning architecture with a diversity measure defined as the intersection of error spaces in the sample space. From the presented reliability model, we derive a necessary condition under which three-version architecture achieves a higher system reliability than a single machine learning module.
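The voter-based reliability argument can be made concrete with synthetic error indicators. A minimal sketch under the assumption of independent errors (real models' errors are correlated, which is exactly what the paper's intersection-based diversity measure captures):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Synthetic error indicators over a shared test set: True = version is wrong.
errs = [rng.random(n) < p for p in (0.08, 0.10, 0.09)]   # per-version error rates

# A 2-out-of-3 voter fails only on samples where at least two versions err,
# i.e. on the pairwise intersections of the error spaces.
system_wrong = sum(e.astype(int) for e in errs) >= 2
system_error = system_wrong.mean()

pairwise_overlap = (errs[0] & errs[1]).mean()   # one intersection-style measure
print(round(system_error, 4), round(pairwise_overlap, 4))
```

With independent errors around 9%, the voter's error rate is roughly 3 * 0.09^2 ≈ 2.4%, well below any single version; as the error-space intersections grow, that advantage shrinks, which is the necessary condition the paper derives.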

7 citations


Cites background from "Neural Architecture Search with Rei..."

  • ...Similar to hyper parameters, the best architecture for a specific problem is not known a priori, and hence architectural search techniques are actively investigated [7][8]....


Journal ArticleDOI
TL;DR: This article reviews common barriers to open-endedness in the evolution-inspired approach and how they are dealt with in the evolutionary case and shows how these problems map onto similar ones in the machine learning approach, and discusses how the same insights and solutions that alleviated those barriers in evolutionary approaches can be ported over.
Abstract: Natural evolution gives the impression of leading to an open-ended process of increasing diversity and complexity. If our goal is to produce such open-endedness artificially, this suggests an appro...

7 citations


Cites methods from "Neural Architecture Search with Rei..."

  • ...There are some exceptions, however: Methods such as NEAT [67] use evolutionary methods to allow network architectures to adapt in response to a problem, neural architecture search [78] uses reinforcement learning to learn a probabilistic policy for constructing new architectures, and adaptive neural trees [70] recursively and dynamically generate a neural network architecture on the fly as “they” learn....


Journal ArticleDOI
TL;DR: This article reviews the background of neural network architectures and their applications in imaging analysis, and explains the basic concepts of artificial intelligence and its uses in medical imaging.

7 citations

Journal ArticleDOI
TL;DR: The proposed algorithms are shown to yield a highly compact model while keeping the accuracy acceptable for application, and the existence of parameter redundancy in the over-parameterized network motivates the proposal of model compression by way of filter pruning and low rank approximation.
Abstract: Despite the great success achieved by convolutional neural networks (CNNs) in various image understanding tasks, it is still difficult for CNNs to be applied to vein recognition tasks due to the problems of insufficient training datasets, intra-class variations, and inter-class similarities. Besides, due to the essential requirement on the storage of millions of parameters for CNN, it is challenging to use a CNN for designing a vein-based embedded person identification system. In this paper, these two problems are addressed by learning a discriminative and compact vein recognition model. For the first problem, a hierarchical generative adversarial network (HGAN) consisting of a constrained CNN and a CycleGAN is proposed for data augmentation. Two similarity losses are defined for estimating the self-similarity and inter-class dissimilarity, and a CycleGAN model is properly trained with these two losses for better task-specific training sample generation. After obtaining a baseline vein recognition model fine-tuned on the augmented datasets, the existence of parameter redundancy in the over-parameterized network motivates the proposal of model compression by way of filter pruning and low rank approximation, thus making the compressed model more suitable for deployment on embedded systems. Through the vein recognition experiments with two different datasets and an additional palmprint recognition experiment, the proposed algorithms are shown to yield a highly compact model while keeping the accuracy acceptable for application.
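The two compression techniques named in the abstract, filter pruning and low-rank approximation, can both be sketched in a few lines of NumPy. The layer shapes and the L1-norm pruning criterion are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conv layer: 16 filters over 8 input channels with 3x3 kernels.
weights = rng.normal(size=(16, 8, 3, 3))

def prune_filters(w, keep_ratio=0.5):
    """Drop the filters with the smallest L1 norms (a common pruning criterion)."""
    norms = np.abs(w).sum(axis=(1, 2, 3))
    k = int(w.shape[0] * keep_ratio)
    keep = np.sort(np.argsort(norms)[-k:])     # indices of the strongest filters
    return w[keep]

pruned = prune_filters(weights)
print(pruned.shape)                            # half the filters removed

# Low-rank approximation: replace a dense weight matrix by a rank-r factorization.
W = rng.normal(size=(64, 64))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
W_approx = (U[:, :r] * s[:r]) @ Vt[:r]
# Storage drops from 64*64 = 4096 weights to 64*8 + 8 + 8*64 = 1032 numbers.
```

In practice the pruned and factorized layers are fine-tuned afterwards to recover accuracy, which is why the abstract can claim a compact model with acceptable accuracy.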

7 citations

References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
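The residual reformulation described in the abstract can be illustrated with a minimal dense-layer sketch (the paper uses convolutions; the layer sizes and weight scales here are arbitrary assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the layers learn the residual F, not the full mapping."""
    out = relu(x @ w1)       # first transformation (conv in the paper; dense here)
    out = out @ w2           # second transformation, no activation yet
    return relu(out + x)     # identity shortcut: gradients flow through '+ x'

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
# Near-zero weights make F(x) ~ 0, so the block is close to the identity.
# This is why stacking extra residual layers does not hurt optimization.
w1 = rng.normal(scale=1e-3, size=(16, 16))
w2 = rng.normal(scale=1e-3, size=(16, 16))
y = residual_block(x, w1, w2)
print(y.shape)
```

The point of the sketch: driving the residual to zero is easy (just shrink the weights), whereas driving a plain stack of layers toward an identity mapping is not, which is the paper's explanation for why very deep residual nets remain trainable.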

123,388 citations

Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
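The update rule summarized in the abstract, adaptive estimates of the first and second moments with bias correction, fits in one function. A minimal scalar sketch on a toy quadratic (the learning rate and iteration count are arbitrary choices for the demo):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: EMA moments of the gradient, bias-corrected for zero init."""
    m = b1 * m + (1 - b1) * grad            # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad**2         # second moment (uncentered variance)
    m_hat = m / (1 - b1**t)                 # bias correction (t starts at 1)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 from x = 3; the gradient is 2x.
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(round(x, 4))
```

Note the division by sqrt(v_hat): the effective step size is roughly invariant to the gradient's scale, which is the "invariant to diagonal rescaling" property the abstract mentions.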

111,197 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
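The core design argument, stacks of very small filters instead of single large ones, reduces to a short receptive-field and parameter calculation:

```python
# Receptive field of n stacked 3x3 conv layers (stride 1) grows as 2n + 1,
# so small filters stacked deep can match a large filter with fewer weights
# and more interleaved non-linearities.
def receptive_field(n_layers, k=3):
    return n_layers * (k - 1) + 1

print(receptive_field(2))          # two 3x3 layers cover a 5x5 window
print(receptive_field(3))          # three 3x3 layers cover a 7x7 window

# Weight count for C input and C output channels (biases ignored):
C = 64
three_3x3 = 3 * (3 * 3 * C * C)    # 110,592 weights
one_7x7 = 7 * 7 * C * C            # 200,704 weights
print(three_3x3, one_7x7)
```

Three 3x3 layers see the same 7x7 window as one 7x7 layer but use roughly half the parameters, which is how the 16-19 weight-layer configurations stay tractable.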

55,235 citations


"Neural Architecture Search with Rei..." refers methods in this paper

  • ...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....


Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
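The cell-level orientation histogram at the core of HOG can be sketched as follows. This is a bare-bones version: the block-level contrast normalization and overlapping descriptor blocks that the abstract identifies as important for good results are omitted, and the patch is a synthetic edge:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Magnitude-weighted orientation histogram for one HOG cell."""
    gy, gx = np.gradient(patch.astype(float))           # image gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())          # magnitude-weighted votes
    return hist / (np.linalg.norm(hist) + 1e-6)         # simple L2 normalization

# A vertical edge produces gradients along x, i.e. orientation near 0 degrees.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = hog_cell(patch)
print(int(np.argmax(h)))    # dominant bin 0: the 0-degree orientation
```

Full HOG concatenates such cell histograms over a dense grid and normalizes them within overlapping blocks; the resulting vector feeds the linear SVM detector described in the abstract.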

31,952 citations


"Neural Architecture Search with Rei..." refers methods in this paper

  • ...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....
