Posted Content

Neural Architecture Search with Reinforcement Learning

Barret Zoph1, Quoc V. Le1
05 Nov 2016 - arXiv: Learning
TL;DR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
Abstract: Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
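The search procedure described above is essentially a policy-gradient loop: an RNN controller samples an architecture description token by token, a child network with that architecture is trained and scored on validation data, and the score is used as a reward to update the controller. Below is a minimal, hedged sketch of that loop in PyTorch; the decision slots, controller size, and the train_child_and_get_val_acc function are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Controller(nn.Module):
    """LSTM controller that emits one categorical choice per architecture slot."""
    def __init__(self, num_choices_per_step, hidden=64):
        super().__init__()
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.start = nn.Parameter(torch.zeros(1, hidden))  # learned start-of-sequence input
        self.heads = nn.ModuleList([nn.Linear(hidden, n) for n in num_choices_per_step])

    def sample(self):
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros(1, self.lstm.hidden_size)
        x, tokens, log_probs = self.start, [], []
        for head in self.heads:
            h, c = self.lstm(x, (h, c))
            dist = torch.distributions.Categorical(logits=head(h))
            tok = dist.sample()
            tokens.append(tok.item())
            log_probs.append(dist.log_prob(tok))
            x = h  # feed the hidden state forward as the next input
        return tokens, torch.stack(log_probs).sum()

def reinforce_step(controller, optimizer, baseline, train_child_and_get_val_acc):
    """One REINFORCE update: the reward is the sampled child's validation accuracy."""
    tokens, log_prob = controller.sample()
    reward = train_child_and_get_val_acc(tokens)   # placeholder: train child, return val accuracy
    baseline = 0.95 * baseline + 0.05 * reward     # moving-average baseline reduces variance
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline

# Usage sketch: 4 decision slots, e.g. (filter height, filter width, stride, number of filters).
controller = Controller(num_choices_per_step=[3, 3, 2, 4])
optimizer = torch.optim.Adam(controller.parameters(), lr=3e-4)
baseline = 0.0
# baseline = reinforce_step(controller, optimizer, baseline, my_train_fn)  # my_train_fn is user-supplied
```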
Citations
Proceedings ArticleDOI
Junning Liu, Xinju Li, Bo An, Zijie Xia, Xu Wang 
17 Oct 2022
TL;DR: A Multi-Faceted Hierarchical MTL model (MFH) that exploits the multidimensional task relations in large scale MTLs with a nested hierarchical tree structure that maximizes the shared learning through multi-facets of sharing and improves the performance with heterogeneous task tower design.
Abstract: There have been many studies on improving the efficiency of shared learning in Multi-Task Learning (MTL). Previous works focused on the "micro" sharing perspective for a small number of tasks, while in Recommender Systems (RS) and many other AI applications we often need to model a large number of tasks. For example, when using MTL to model various user behaviors in RS, if we differentiate new users and new items from old ones, the number of tasks increases exponentially with multidimensional relations. This work proposes a Multi-Faceted Hierarchical MTL model (MFH) that exploits the multidimensional task relations in large-scale MTL with a nested hierarchical tree structure. MFH maximizes shared learning through multiple facets of sharing and improves performance with a heterogeneous task-tower design. For the first time, MFH addresses the "macro" perspective of shared learning and defines a "switcher" structure to conceptualize the structures of macro shared learning. We evaluate MFH against SOTA models on a large industrial video platform with 10 billion samples and hundreds of millions of monthly active users. Results show that MFH significantly outperforms SOTA MTL models in both offline and online evaluations across all user groups, and is especially remarkable for new users, with an online increase of 9.1% in app time per user and 1.85% in next-day retention rate. MFH has been deployed in WeSee, Tencent News, QQ Little World, and Tencent Video, several Tencent products. MFH is especially beneficial to cold-start problems in RS, where new users and new items often suffer from a "local overfitting" phenomenon that we first formalize in this paper.
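The abstract above does not spell out MFH's internals, so the following is only a generic, hedged sketch of nested hierarchical sharing in PyTorch: a trunk shared by all tasks, a branch per task group (one facet of the hierarchy), and a small tower per leaf task. All layer sizes and the example task tree are made-up placeholders, not the MFH architecture.

```python
import torch
import torch.nn as nn

class HierarchicalMTL(nn.Module):
    """Generic nested sharing: shared trunk -> per-group branch -> per-task tower."""
    def __init__(self, in_dim, task_tree, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.branches = nn.ModuleDict()
        self.towers = nn.ModuleDict()
        for group, tasks in task_tree.items():
            self.branches[group] = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            for task in tasks:
                self.towers[f"{group}__{task}"] = nn.Linear(hidden, 1)

    def forward(self, x):
        shared = self.trunk(x)
        outputs = {}
        for name, tower in self.towers.items():
            group = name.split("__")[0]
            outputs[name] = tower(self.branches[group](shared))
        return outputs

# Hypothetical task tree: user behaviours crossed with a "new vs. old user" facet.
model = HierarchicalMTL(in_dim=32,
                        task_tree={"new_user": ["click", "watch_time"],
                                   "old_user": ["click", "watch_time"]})
preds = model(torch.randn(8, 32))   # dict of per-task predictions, each of shape (8, 1)
```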

2 citations

Journal ArticleDOI
TL;DR: In this paper, Zhang et al. propose a generative adversarial auto-augment network (GA3N) that enlarges the augmentation search space and improves classification accuracy by using GANs.

2 citations

Journal ArticleDOI
TL;DR: An X-learning NAS (XNAS) is proposed to automatically train a network's structure and parameters, reducing network complexity while retaining (and sometimes improving) performance.
Abstract: Deep learning has achieved great and broad breakthroughs in many real-world applications. In particular, the task of training the network parameters has been masterly handled by back-propagation learning. However, the pursuit of optimal network structures remains largely an art of trial and error. This prompts some urgency to explore an architecture engineering process, collectively known as Neural Architecture Search (NAS). In general, NAS is a design software system for automating the search for effective neural architectures. This article proposes an X-learning NAS (XNAS) to automatically train a network's structure and parameters. Our theoretical footing is built upon the subspace and correlation analyses between the input layer, hidden layer, and output layer. The design strategy hinges upon the underlying principle that the network should be coerced to learn how to structurally improve the input/output correlation successively (i.e., layer by layer). It embraces both Progressive NAS (PNAS) and Regressive NAS (RNAS). For unsupervised RNAS, Principal Component Analysis (PCA) is a classic tool for subspace analyses. By further incorporating a teacher's guidance, PCA can be extended to Regression Component Analysis (RCA) to facilitate supervised NAS design. This allows the machine to extract the components most critical to the targeted learning objective. We further extend the subspace analysis from multi-layer perceptrons to convolutional neural networks, via the introduction of Convolutional-PCA (CPCA) or, more simply, Deep-PCA (DPCA). The supervised variant of DPCA is named Deep-RCA (DRCA). The subspace analyses allow us to compute optimal eigenvectors (respectively, eigen-filters) and principal components (respectively, eigen-channels) for optimal NAS design of multi-layer perceptrons (respectively, convolutional neural networks). Based on this theoretical analysis, an X-learning paradigm is developed to jointly learn the structure and parameters of learning models. The objective is to reduce network complexity while retaining (and sometimes improving) performance. With carefully pre-selected baseline models, X-learning has shown great success in numerous classification-type and/or regression-type applications. We have applied X-learning to the ImageNet dataset for classification and to DIV2K for image enhancement. By applying X-learning to two types of baseline models, MobileNet and ResNet, both the low-power and high-performance application categories can be supported. Our simulations confirm that X-learning is by and large very competitive relative to state-of-the-art approaches.
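The central mechanism described above is subspace analysis on layer activations. The exact X-learning/DRCA procedure is not reproduced here; the snippet below is only a hedged illustration of the underlying intuition: keep just enough principal components of a layer's activations to retain most of their variance, and use that count as the layer's reduced width.

```python
import numpy as np

def suggested_width(activations, energy=0.99):
    """activations: (num_samples, num_units) hidden-layer outputs; returns a reduced unit count."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)       # singular values
    var_ratio = np.cumsum(s ** 2) / np.sum(s ** 2)      # cumulative explained variance
    return int(np.searchsorted(var_ratio, energy) + 1)  # smallest k reaching the energy target

# Toy check: activations that secretly live in a 20-dimensional subspace of 256 units.
rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 20)) @ rng.standard_normal((20, 256))
print(suggested_width(acts))   # ~20: the layer could be shrunk to roughly 20 units
```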

2 citations

Book ChapterDOI
29 Oct 2021
TL;DR: In this paper, Zhang et al. propose a Memory-Efficient Multi-Agent Neural Architecture Search (MEMA-NAS) framework for end-to-end object detection networks, introducing multi-agent learning to search the holistic architecture of the detection network.
Abstract: Object detection is a core computer vision task that aims to localize and classify the various objects in an image. With the development of convolutional neural networks, deep learning methods have been widely used for object detection, achieving promising performance compared to traditional methods. However, manually designing a well-performing detection network is inefficient: it consumes substantial hardware resources and time in trial and error, and it relies heavily on expert knowledge. To design network architectures efficiently, there has been growing interest in automating the process with Neural Architecture Search (NAS). In this paper, we propose a Memory-Efficient Multi-Agent Neural Architecture Search (MEMA-NAS) framework for end-to-end object detection networks. Specifically, we introduce multi-agent learning to search the holistic architecture of the detection network. This saves a large amount of GPU memory, allowing us to search the architecture of each module of the detection network simultaneously. To find a better trade-off between precision and computational cost, we add a resource constraint to our method. Search experiments on multiple datasets show that MEMA-NAS achieves state-of-the-art results in search efficiency and precision.
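The abstract mentions adding a resource constraint to balance precision against computational cost, but does not give its exact form. The following is a hedged, generic example of one common way to fold such a constraint into a search reward; the exponent and budget values are made-up illustrations, not MEMA-NAS settings.

```python
def constrained_reward(accuracy, cost, target_cost, weight=0.07):
    """Soft resource penalty: the reward shrinks when cost exceeds the target budget."""
    penalty = (target_cost / cost) ** weight if cost > target_cost else 1.0
    return accuracy * penalty

# E.g. a candidate detector at 0.41 mAP costing 220 GFLOPs against a 150 GFLOP budget.
print(constrained_reward(accuracy=0.41, cost=220e9, target_cost=150e9))
```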

2 citations

Posted Content
TL;DR: This paper outlines the challenge and describes the competition protocol, datasets, evaluation metric, starting kit, and baseline systems; every submitted solution must contain an adaptation routine that adapts the system to each new task.
Abstract: The AutoSpeech challenge calls for automated machine learning (AutoML) solutions that automate the process of applying machine learning to speech processing tasks. These tasks, which cover a large variety of domains, are shown to the automated system in a random order. Each time the tasks are switched, the new task is hinted at with its corresponding training set. Thus, every submitted solution should contain an adaptation routine which adapts the system to the new task. Compared to the first edition, the 2020 edition includes 1) more speech tasks, 2) noisier data in each task, and 3) a modified evaluation metric. This paper outlines the challenge and describes the competition protocol, datasets, evaluation metric, starting kit, and baseline systems.
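The protocol above presents tasks sequentially and requires each submission to include an adaptation routine that refits the system whenever tasks switch. The snippet below is a hedged, self-contained toy of that shape, not the real starting-kit API: each "task" is a small labelled numpy dataset and adaptation simply refits a nearest-centroid classifier.

```python
import numpy as np

class ToySolution:
    """Illustrative only: adapt() is the per-task adaptation routine, predict() is scored."""
    def adapt(self, train_x, train_y):
        self.classes = np.unique(train_y)
        self.centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in self.classes])

    def predict(self, test_x):
        dists = ((test_x[:, None, :] - self.centroids[None]) ** 2).sum(axis=-1)
        return self.classes[dists.argmin(axis=1)]

def make_task(rng, shift):
    """Toy binary task whose class means move with `shift` (stands in for a new domain)."""
    x = np.concatenate([rng.normal(shift, 1.0, (25, 8)), rng.normal(shift + 4.0, 1.0, (25, 8))])
    y = np.repeat([0, 1], 25)
    return x, y

rng = np.random.default_rng(0)
solution = ToySolution()
for x, y in [make_task(rng, s) for s in (0.0, 10.0)]:   # tasks arrive one after another
    solution.adapt(x, y)                                 # adaptation routine per task switch
    print((solution.predict(x) == y).mean())             # in-task accuracy
```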

2 citations


Cites background from "Neural Architecture Search with Reinforcement Learning"

  • ..., neural architecture search [13, 14], automated model selection [15, 16] and feature engineering [17, 18]....

    [...]

References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
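A residual block is simple enough to show concretely. The PyTorch sketch below implements the identity-shortcut form y = F(x) + x described above; it is a minimal illustration, omitting strided/projection shortcuts and the bottleneck variant used in the deeper models.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions whose output is added back to the block input."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut: F(x) + x

y = BasicResidualBlock(16)(torch.randn(1, 16, 32, 32))   # shape is preserved: (1, 16, 32, 32)
```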

123,388 citations

Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
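The update rule the abstract refers to is short enough to state directly. Below is a plain-numpy sketch of one Adam step with the paper's default hyper-parameters; real deep-learning frameworks ship tuned implementations of the same rule.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first moment estimate (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment estimate (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias correction for the running averages
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: minimise f(x) = x^2 starting from x = 5 (gradient is 2x).
theta, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)   # approaches 0
```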

111,197 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
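The design point above is building depth by stacking small 3x3 convolutions. A hedged PyTorch sketch of one VGG-style stage follows (two or three 3x3 convolutions followed by 2x2 max pooling); the 16- and 19-layer configurations repeat such stages with growing channel counts before a classifier head, which is omitted here.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs=2):
    """One stage: a stack of 3x3 convolutions followed by 2x2 max pooling."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

features = nn.Sequential(vgg_stage(3, 64), vgg_stage(64, 128), vgg_stage(128, 256, num_convs=3))
out = features(torch.randn(1, 3, 224, 224))   # spatial size halves per stage: 224 -> 112 -> 56 -> 28
```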

55,235 citations


"Neural Architecture Search with Rei..." refers methods in this paper

  • ...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....

    [...]

Journal ArticleDOI
01 Jan 1998
TL;DR: This paper reviews gradient-based learning methods for handwritten character recognition, shows that convolutional neural networks outperform other techniques on a standard digit recognition task, and introduces graph transformer networks (GTNs), which allow multi-module recognition systems to be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
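As a concrete illustration of the convolutional networks the paper champions for digit recognition, here is a hedged PyTorch sketch of a small LeNet-style classifier; the layer sizes are illustrative rather than the exact LeNet-5 configuration, and the graph transformer machinery for multi-module systems is not shown.

```python
import torch
import torch.nn as nn

class SmallDigitCNN(nn.Module):
    """Small convolution/pooling stack followed by a fully connected classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),   # 28x28 -> 24x24 -> 12x12
            nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),  # 12x12 -> 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.Tanh(), nn.Linear(120, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = SmallDigitCNN()(torch.randn(8, 1, 28, 28))   # shape (8, 10)
```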

42,067 citations

Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
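The pipeline described above (gradient computation, orientation binning per cell, and normalization) can be sketched compactly. The numpy snippet below is a simplified, hedged illustration: it bins unsigned gradient orientations per cell and normalizes each cell's histogram, whereas full HOG implementations (e.g., in OpenCV or scikit-image) normalize over overlapping blocks of cells, interpolate between bins, and apply L2-Hys clipping.

```python
import numpy as np

def simple_hog(img, cell=8, bins=9):
    """Per-cell orientation histograms of gradient magnitude for a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180           # unsigned orientation in [0, 180)
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = (a / (180 / bins)).astype(int) % bins   # hard binning, no interpolation
            np.add.at(hist[i, j], idx, m)
    hist /= np.linalg.norm(hist, axis=-1, keepdims=True) + 1e-6   # per-cell normalization only
    return hist.ravel()

feat = simple_hog(np.random.rand(64, 128))   # descriptor for one 64x128 detection window
```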

31,952 citations


"Neural Architecture Search with Rei..." refers methods in this paper

  • ...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....

    [...]