Posted Content

Neural Architecture Search with Reinforcement Learning

Barret Zoph, Quoc V. Le
05 Nov 2016 - arXiv: Learning
TL;DR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
Abstract: Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
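The controller described here can be sketched compactly: sample a discrete architecture description, train the resulting child network, and feed its validation accuracy back as a REINFORCE reward. The toy Python sketch below assumes a hypothetical two-decision search space and a stubbed-out reward; the paper's actual controller is an RNN emitting a much longer sequence of decisions.

```python
import numpy as np

# Toy search space: pick a filter size and a filter count per network.
# These option lists are illustrative, not the paper's exact space.
OPTIONS = {"filter_size": [3, 5, 7], "num_filters": [24, 36, 48]}

rng = np.random.default_rng(0)
# One logit vector per decision (a stand-in for the RNN controller's outputs).
logits = {k: np.zeros(len(v)) for k, v in OPTIONS.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    """Sample one architecture description and record the choices made."""
    arch, choices = {}, {}
    for key, values in OPTIONS.items():
        p = softmax(logits[key])
        i = rng.choice(len(values), p=p)
        arch[key], choices[key] = values[i], int(i)
    return arch, choices

def reward(arch):
    """Stub for 'train the child network, return validation accuracy'."""
    return 0.5 + 0.01 * arch["filter_size"]  # placeholder signal

lr, baseline = 0.1, 0.0
for step in range(200):
    arch, choices = sample_architecture()
    R = reward(arch)
    baseline = 0.9 * baseline + 0.1 * R        # moving-average baseline
    for key, i in choices.items():             # REINFORCE: (R - b) * grad log pi
        p = softmax(logits[key])
        grad = -p
        grad[i] += 1.0
        logits[key] += lr * (R - baseline) * grad
```

With the placeholder reward, the sampling distribution quickly concentrates on the largest filter size, which is exactly the "maximize expected accuracy" behavior, just on a fake signal.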
Citations
Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper proposes a novel AutoML strategy based on probabilistic grammatical evolution, which is evaluated on the health domain by facing the knowledge discovery challenge in Spanish text documents and achieves state-of-the-art results.
Abstract: The process of extracting knowledge from natural language text poses a complex problem that requires both a combination of machine learning techniques and proper feature selection. Recent advances in Automatic Machine Learning (AutoML) provide effective tools to explore large sets of algorithms, hyper-parameters and features to find out the most suitable combination of them. This paper proposes a novel AutoML strategy based on probabilistic grammatical evolution, which is evaluated on the health domain by facing the knowledge discovery challenge in Spanish text documents. Our approach achieves state-of-the-art results and provides interesting insights into the best combination of parameters and algorithms to use when dealing with this challenge. Source code is provided for the research community.
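The strategy can be pictured as sampling pipeline descriptions from a weighted grammar. The production rules and component names below are hypothetical placeholders, not the paper's actual grammar:

```python
import random

# Illustrative probabilistic grammar over ML pipelines: each non-terminal
# maps to a list of (production, weight) pairs. Hypothetical rules only.
GRAMMAR = {
    "pipeline":   [(["features", "classifier"], 1.0)],
    "features":   [(["tfidf"], 0.6), (["embeddings"], 0.4)],
    "classifier": [(["svm"], 0.5), (["logreg"], 0.3), (["mlp"], 0.2)],
}

def expand(symbol, rng):
    """Recursively expand a symbol, sampling productions by their weights."""
    if symbol not in GRAMMAR:  # terminal symbol
        return [symbol]
    productions, weights = zip(*GRAMMAR[symbol])
    chosen = rng.choices(productions, weights=weights, k=1)[0]
    result = []
    for s in chosen:
        result.extend(expand(s, rng))
    return result

rng = random.Random(0)
print(expand("pipeline", rng))  # e.g. ['tfidf', 'svm']
```

An evolutionary loop would then evaluate each sampled pipeline and shift the production weights toward the rules that produced the best-performing candidates.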

15 citations


Cites methods from "Neural Architecture Search with Reinforcement Learning"

  • ...In case of neural networks, a common approach is Neural Architecture Search (Zoph and Le, 2016) (NAS), where frameworks such as Auto-Keras are used (Jin et al., 2018)....


Proceedings ArticleDOI
03 May 2022
TL;DR: This paper designs LitePose, an efficient single-branch architecture for pose estimation, and introduces two simple approaches to enhance its capacity: a fusion deconv head and large kernel convs.
Abstract: Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to the high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge. We reveal that HRNet's high-resolution branches are redundant for models at the low-computation region via our gradual shrinking experiments. Removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance the capacity of LitePose, including fusion deconv head and large kernel conv. On mobile platforms, LitePose reduces the latency by up to 5.0× without sacrificing performance, compared with prior state-of-the-art efficient pose estimation models, pushing the frontier of real-time multi-person pose estimation on edge. Our code and pretrained models are released at https://github.com/mit-han-lab/litepose.
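The "large kernel conv" idea can be sketched as a depthwise convolution with a wide spatial extent followed by a pointwise projection; the channel count and kernel size below are illustrative, not LitePose's actual configuration:

```python
import torch
import torch.nn as nn

class LargeKernelDWBlock(nn.Module):
    """Depthwise large-kernel block in the spirit of the paper's large
    kernel conv; layer sizes here are placeholders, not the real model's."""
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels, bias=False)
        self.pw = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))

x = torch.randn(1, 32, 64, 64)
print(LargeKernelDWBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```

Because the wide kernel is depthwise, its cost grows linearly rather than quadratically with the channel count, which is what makes the large receptive field affordable on edge devices.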

15 citations

Posted Content
TL;DR: This work empirically investigates the main factors behind the weak ranking correlation between architectures under one-shot training and those under stand-alone complete training, and proposes NAO-V2 to alleviate these gaps.
Abstract: The ability to accurately rank candidate architectures is key to the performance of neural architecture search (NAS). One-shot NAS was proposed to cut the expense of conventional NAS, but it shows inferior performance and is not adequately stable. We find that the ranking correlation between architectures under one-shot training and those under stand-alone training is poor, which misleads the algorithm and prevents it from discovering better architectures. We conjecture that this is owing to the gaps between one-shot training and stand-alone complete training. In this work, we empirically investigate several main factors that lead to these gaps and thus the weak ranking correlation. We then propose NAO-V2 to alleviate the gaps, in which we: (1) increase the average number of updates for each individual architecture to an adequate level; (2) encourage more updates for large and complex architectures than for small and simple ones by sampling architectures in proportion to their model sizes; and (3) make the one-shot training of the supernet independent at each iteration. Comprehensive experiments verify that our proposed method is effective and robust, leading to a more stable search in which all of the top architectures perform well compared to baseline methods. The final discovered architecture shows significant improvements over baselines, with a test error rate of 2.60% on CIFAR-10 and top-1 accuracy of 74.4% on ImageNet under the mobile setting. Code and model checkpoints are publicly available at https://github.com/renqianluo/NAO_pytorch.
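Point (2) amounts to sampling candidates with probability proportional to their parameter counts. A minimal sketch, with a hypothetical candidate pool:

```python
import random

# Hypothetical candidate pool: (architecture id, parameter count).
candidates = [("arch_a", 1.2e6), ("arch_b", 3.5e6), ("arch_c", 8.9e6)]

def sample_proportional_to_size(pool, rng):
    """Pick an architecture with probability proportional to its model size,
    so large, complex candidates receive more supernet updates."""
    ids, sizes = zip(*pool)
    return rng.choices(ids, weights=sizes, k=1)[0]

rng = random.Random(0)
counts = {arch: 0 for arch, _ in candidates}
for _ in range(10000):
    counts[sample_proportional_to_size(candidates, rng)] += 1
print(counts)  # draws roughly in a 1.2 : 3.5 : 8.9 ratio
```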

15 citations


Cites background or methods from "Neural Architecture Search with Reinforcement Learning"


  • ...Zoph and Le (2016) firstly proposed to search neural network architectures with reinforcement learning (RL) and achieved better performances than human-designed architectures on CIFAR-10 and PTB....


  • ...Recent NAS works show impressive results and have been applied in many tasks, including image classification (Zoph and Le 2016; Zoph et al. 2018), object detection (Ghiasi, Lin, and Le 2019), super resolution (Chu et al. 2019), language modeling (Pham et al. 2018), neural machine translation (So,…...


  • ...Conventional NAS (Zoph and Le 2016; Zoph et al. 2018; Real et al. 2018) searches on proxy tasks to reduce resources, e.g., proxy dataset (part of dataset or smaller alternative dataset), proxy training (early stop) and proxy network (shallow and thin)....

Journal ArticleDOI
TL;DR: An AlexNet network with optimized parameters is proposed for face image recognition; a Taguchi method is used to select preliminary factors, and experiments are performed through orthogonal-table design.
Abstract: In general, a convolutional neural network (CNN) consists of one or more convolutional layers, pooling layers, and fully connected layers. Most designers adopt a trial-and-error method to select CNN parameters. In this study, an AlexNet network with optimized parameters is proposed for face image recognition. A Taguchi method is used for selecting preliminary factors, and experiments are performed through orthogonal-table design. The proposed method screens for the factors that significantly affect performance. Finally, experimental results show that the proposed Taguchi-based AlexNet network obtains average gender-recognition accuracies of 87.056% and 98.72% on the CIA and MORPH databases, respectively. In addition, the average accuracy of the proposed Taguchi-based AlexNet network is 1.576% and 3.47% higher than that of the original AlexNet network on the CIA and MORPH databases, respectively.
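Orthogonal-table design evaluates a small, balanced subset of factor combinations and estimates each factor's main effect from those few runs. The sketch below uses an L4(2^3) array with hypothetical factors and a stubbed trial, not the paper's actual factor set:

```python
import statistics

# L4 orthogonal array: each row gives a level index (0/1) for three factors.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
LEVELS = {"lr": [0.01, 0.001], "batch": [32, 64], "kernel": [3, 5]}

def run_trial(lr, batch, kernel):
    """Stub for 'train AlexNet with these settings, return accuracy'."""
    return 0.9 - 2.0 * lr + 0.0001 * batch - 0.001 * kernel  # placeholder

results = [run_trial(LEVELS["lr"][a], LEVELS["batch"][b], LEVELS["kernel"][c])
           for a, b, c in L4]

# Main effect of each factor: mean response at level 1 minus at level 0.
for j, name in enumerate(["lr", "batch", "kernel"]):
    lvl0 = statistics.mean(r for r, row in zip(results, L4) if row[j] == 0)
    lvl1 = statistics.mean(r for r, row in zip(results, L4) if row[j] == 1)
    print(f"{name}: effect = {lvl1 - lvl0:+.4f}")
```

Four runs stand in for the full 2^3 = 8 combinations; factors with small estimated effects can then be screened out before a finer search.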

15 citations


Cites background from "Neural Architecture Search with Reinforcement Learning"

  • ...Zoph and Le [19] proposed neural architecture search (NAS) with reinforcement learning....


Posted Content
TL;DR: Inspired by the limited computing patterns of embedded hardware, a novel network design mechanism is proposed that fixes the number of channels in a group convolution, instead of the existing practice of fixing the total number of groups.
Abstract: In this paper, we propose a novel network design mechanism for efficient embedded computing. Inspired by the limited computing patterns, we propose to fix the number of channels in a group convolution, instead of the existing practice of fixing the total number of groups. The resulting network, named Variable Group Convolutional Network (VarGNet), can be optimized more easily on the hardware side, due to the more unified computing schemes among the layers. Extensive experiments on various vision tasks, including classification, detection, pixel-wise parsing and face recognition, have demonstrated the practical value of our VarGNet.
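The mechanism is simple to sketch: derive the group count from the input width so that every group always processes the same number of channels. A minimal PyTorch sketch, assuming 8 channels per group (the constant itself is an illustrative choice):

```python
import torch
import torch.nn as nn

def var_group_conv(in_ch, out_ch, channels_per_group=8, kernel_size=3):
    """Group convolution with a fixed number of channels per group: the
    group count varies with layer width instead of being fixed globally."""
    assert in_ch % channels_per_group == 0
    return nn.Conv2d(in_ch, out_ch, kernel_size,
                     padding=kernel_size // 2,
                     groups=in_ch // channels_per_group, bias=False)

# A 32-channel layer gets 4 groups and a 64-channel layer gets 8 groups,
# but every group always reads exactly 8 input channels.
x = torch.randn(1, 32, 56, 56)
print(var_group_conv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Keeping the per-group channel count constant means every layer presents the same compute tile to the hardware, which is the "more unified computing scheme" the abstract refers to.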

15 citations


Cites background from "Neural Architecture Search with Reinforcement Learning"

  • ...Besides, neural architecture search (NAS) [53, 35, 37, 54, 28] is a promising direction for automatically designing lightweight CNNs....


  • ...ProxylessNAS: Direct neural architecture search on target task and hardware....


  • ...Our network, VarGNet, is complementary to existing platform-aware NAS methods, since the proposed variable group convolution is helpful for setting the search space in NAS methods....


  • ...More recently, platform-aware NAS methods have been proposed [4, 44, 10, 40] to search for specific networks that are efficient on certain hardware platforms....


References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting residual nets won first place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
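The residual reformulation is easy to see in the basic two-layer block: the stacked layers learn F(x) and the block outputs F(x) + x. The sketch below covers the identity-shortcut case only; downsampling blocks use a strided projection shortcut instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Basic residual block: layers learn a residual F(x) with reference
    to the input, and the block returns F(x) + x via an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # add the shortcut before the final ReLU

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```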

123,388 citations

Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
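The update rule is compact enough to write out directly. The sketch below follows the published algorithm (exponential moving averages of the gradient and its square, bias correction, then a rescaled step) on a toy one-dimensional objective:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates, bias
    correction, then a step scaled by the root of the second moment."""
    m = b1 * m + (1 - b1) * grad           # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias correction for warm-up
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(x) = x^2 starting from x = 5; the gradient is 2x.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # close to 0
```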

111,197 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
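The core design point, stacks of very small 3x3 filters, can be checked directly: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution while using fewer parameters and adding an extra non-linearity. A short PyTorch sketch:

```python
import torch.nn as nn

C = 64  # example channel width

# Two 3x3 convolutions: same 5x5 receptive field, two non-linearities.
stacked = nn.Sequential(
    nn.Conv2d(C, C, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(C, C, 3, padding=1), nn.ReLU(inplace=True),
)
single = nn.Conv2d(C, C, 5, padding=2)  # one 5x5 convolution

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(stacked), count(single))  # 73856 vs 102464 parameters
```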

55,235 citations


"Neural Architecture Search with Rei..." refers methods in this paper

  • ...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....


Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning methods for handwritten character recognition are reviewed, and graph transformer networks (GTNs) are introduced, which allow multi-module recognition systems to be trained globally using gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
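A LeNet-style network of the kind reviewed here, alternating convolution and subsampling before fully connected layers, can be sketched as follows. This is a simplified stand-in for 28x28 digit images, not the paper's exact LeNet-5 nor its GTN training machinery:

```python
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5, padding=2), nn.Tanh(), nn.AvgPool2d(2),  # 28x28 -> 14x14
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),            # conv to 10x10, pool to 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                          # 10 digit classes
)
print(lenet(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```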

42,067 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
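The descriptor grid is easy to reproduce with an off-the-shelf HOG implementation. The sketch below uses scikit-image (an assumption, not the authors' code) with the common 9-orientation, 8x8-cell, 2x2-block configuration on the classic 64x128 detection window:

```python
import numpy as np
from skimage.feature import hog  # scikit-image

window = np.random.rand(128, 64)  # stand-in for a grayscale 64x128 crop

descriptor = hog(window,
                 orientations=9,          # fine orientation binning
                 pixels_per_cell=(8, 8),  # relatively coarse spatial binning
                 cells_per_block=(2, 2),  # overlapping normalization blocks
                 block_norm='L2-Hys')     # local contrast normalization
print(descriptor.shape)  # (3780,) = 15*7 blocks * 2*2 cells * 9 bins
```

Each parameter in the call mirrors one of the factors the study found important: orientation binning, cell size, and block-level contrast normalization.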

31,952 citations


"Neural Architecture Search with Rei..." refers methods in this paper

  • ...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....
