Neural Architecture Search with Reinforcement Learning
Citations
29 citations
Cites background from "Neural Architecture Search with Rei..."
... neural architectures is usually a manual and time-consuming process that relies heavily on experience and expertise. Recently, neural architecture search (NAS) has been proposed to address this issue [3,27]. Models designed by NAS have achieved performance close to, or even surpassing, the current state of the art designed by domain experts on several challenging tasks [4,14], demonstra...
[...]
..., ResNet [31] and DenseNet [32] proposed skip connections and dense connections, respectively, to create "branches" in the data flow of a neural network. Possibly inspired by these structures, Zoph et al. [3] proposed a search space that includes skip connections; this search space was quickly adopted by other works [4,8,10,12]. Another recent trend is to design a search space that covers only...
[...]
...mance is taken as the reward. Related literature. In general, RL-based approaches to NAS differ in (a) how the action space is designed and (b) how the action policy is updated. Zoph et al. [3] first applied policy gradient to update the policy and in their later work [4] switched to proximal policy optimization; Baker et al. [6] used Q-learning to update the action policy. There are also ...
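To make the Q-learning alternative concrete, the following is a minimal sketch in the spirit of MetaQNN [6]: a tabular agent picks one operation per layer with epsilon-greedy exploration and updates Q-values toward a terminal reward. The operation set, the layer count, and the `reward` function (a toy stand-in for validation accuracy of the trained child network) are all illustrative assumptions, not the cited method's actual configuration.

```python
# Toy tabular Q-learning for layer-wise architecture selection.
# State = layer index, action = operation choice, reward only at the end.
import random

OPS = ["conv", "pool", "fc"]   # hypothetical operation set
NUM_LAYERS = 3
ALPHA, EPS = 0.2, 0.2          # learning rate, exploration rate

# Q[s][a]: estimated value of choosing op a at layer s.
Q = [[0.0] * len(OPS) for _ in range(NUM_LAYERS)]

def reward(arch):
    """Toy stand-in for trained-child validation accuracy: favors 'conv'."""
    return arch.count(0) / NUM_LAYERS

random.seed(0)
for episode in range(500):
    arch = []
    for s in range(NUM_LAYERS):
        if random.random() < EPS:                      # explore
            a = random.randrange(len(OPS))
        else:                                          # exploit
            a = max(range(len(OPS)), key=lambda j: Q[s][j])
        arch.append(a)
    r = reward(arch)                                   # terminal reward only
    for s, a in enumerate(arch):
        Q[s][a] += ALPHA * (r - Q[s][a])               # move Q toward reward

best = [OPS[max(range(len(OPS)), key=lambda j: Q[s][j])]
        for s in range(NUM_LAYERS)]
```

Because the reward here is terminal and episodes are short, a single update toward `r` per visited state-action pair suffices; a real system would instead pay the (much larger) cost of training each sampled network to obtain `r`.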
[...]
...mparisons of Neural Architecture Search Approaches.

Single-Objective Neural Architecture Search

Approach         Search Space  Algorithm  Acceleration Techniques  Search Cost (GPU Days)  Additional Objectives
NAS [3]          Macro         RL         -                        22400                    -
NasNet [4]       Micro         RL         -                        1800                     -
Hierarchical [5] Micro         EA/RS      -                        300                      -
MetaQNN [6]      Macro         RL         -                        100                      -
GeNet [7]        Macro         EA         -                        17                       -
Large-Scale [8]  Macro         EA         Weight-Sharing           2500                     -
Amoeba [9]       Micro         E...
[...]
... search algorithms in the following sections. 2.2 Reinforcement-Learning-Based Approaches. Reinforcement-learning-based approaches have been the mainstream methods for NAS, especially after Zoph et al. [3] demonstrated impressive experimental results that outperform state-of-the-art models designed by domain experts. NAS formulated as reinforcement learning (RL). There are three fundamental elem...
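The RL formulation described above can be sketched end to end with a REINFORCE-style update: a policy samples an architecture (the action), the architecture is evaluated to obtain a reward, and the policy parameters are nudged toward higher-reward choices. This is a simplified sketch, not the RNN controller of Zoph et al. [3]; the operation set and `evaluate_architecture` are hypothetical placeholders for training-and-validating a child network.

```python
# Minimal REINFORCE sketch for NAS: independent softmax policy per layer,
# reward = placeholder "validation accuracy", moving-average baseline.
import math
import random

OPS = ["conv3x3", "conv5x5", "maxpool", "skip"]   # toy action space
NUM_LAYERS = 3
LR = 0.1

# Policy parameters: one logit per (layer, op).
logits = [[0.0] * len(OPS) for _ in range(NUM_LAYERS)]

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def sample_architecture():
    """Sample one op per layer from the current policy (the 'action')."""
    arch = []
    for layer in logits:
        p = softmax(layer)
        arch.append(random.choices(range(len(OPS)), weights=p)[0])
    return arch

def evaluate_architecture(arch):
    """Placeholder reward; in real NAS this is validation accuracy after
    training the sampled child network. Here a toy score favoring conv3x3."""
    return sum(1.0 for i in arch if OPS[i] == "conv3x3") / NUM_LAYERS

random.seed(0)
baseline = 0.0
for step in range(200):
    arch = sample_architecture()
    r = evaluate_architecture(arch)
    baseline = 0.9 * baseline + 0.1 * r        # variance-reducing baseline
    adv = r - baseline
    # REINFORCE: grad of log-prob of a softmax choice is 1{j=choice} - p_j.
    for layer, choice in zip(logits, arch):
        p = softmax(layer)
        for j in range(len(OPS)):
            grad = (1.0 if j == choice else 0.0) - p[j]
            layer[j] += LR * adv * grad

best = [OPS[max(range(len(OPS)), key=lambda j: layer[j])] for layer in logits]
```

The three fundamental RL elements map directly onto the code: the action is the sampled op sequence, the reward is the (placeholder) validation performance, and the policy is the set of per-layer softmax distributions being updated.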
[...]
28 citations
References
"Neural Architecture Search with Rei..." refers methods in this paper
...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....
[...]
"Neural Architecture Search with Rei..." refers methods in this paper
...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....
[...]