Neural Architecture Search with Reinforcement Learning
Citations
47 citations
Cites background or methods from "Neural Architecture Search with Reinforcement Learning"
...(2) Unlike in NAS, different leaf elements can occur at varying depths in GP. (3) NAS adds several constraints to the tree structure....
[...]
...(4) In NAS, inputs to the tree are used only once; in GP, the inputs can be used multiple times within a node....
[...]
...[26] used 800 GPUs for training multiple such solutions in parallel....
[...]
...As shown by recent research [26, 25], the recurrent node in itself can be considered a deep network....
[...]
...However, very recent studies on meta-learning methods such as neural architecture search and evolutionary optimization have shown that LSTM performance can be improved by complexifying it further [26, 8]....
[...]
46 citations
Cites methods from "Neural Architecture Search with Reinforcement Learning"
...NAS utilizes reinforcement learning [47, 46] and genetic algorithms [27, 42, 31] to search for transferable network blocks whose performance surpasses that of many manually designed architectures....
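The excerpt above describes RL-driven architecture search: a controller samples architectural choices, a reward (validation accuracy in practice) scores each sample, and policy-gradient updates steer the controller toward better choices. The following is a minimal, illustrative sketch of such a REINFORCE-style loop; the two-decision search space, the toy reward, and the hyperparameters are assumptions for demonstration, not the setup used in the cited papers.

```python
import math
import random

# Hypothetical search space: two independent decisions, chosen by index.
SEARCH_SPACE = {
    "filters": [32, 64, 128],
    "kernel": [3, 5, 7],
}

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_architecture(logits, rng):
    """Sample one option index per decision from the controller's policy."""
    arch = {}
    for key in SEARCH_SPACE:
        probs = softmax(logits[key])
        arch[key] = rng.choices(range(len(probs)), weights=probs)[0]
    return arch

def toy_reward(arch):
    """Stand-in for validation accuracy: peaks at 64 filters, kernel size 5."""
    return float(arch["filters"] == 1) + float(arch["kernel"] == 1)

def search(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Controller parameters: one logit vector per decision, initially uniform.
    logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}
    baseline = 0.0  # moving-average baseline to reduce gradient variance
    for _ in range(steps):
        arch = sample_architecture(logits, rng)
        reward = toy_reward(arch)
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        # REINFORCE: gradient of log-probability of the sampled choice.
        for key in SEARCH_SPACE:
            probs = softmax(logits[key])
            for i in range(len(probs)):
                grad = (1.0 if i == arch[key] else 0.0) - probs[i]
                logits[key][i] += lr * advantage * grad
    # Return the most probable option index for each decision.
    return {k: max(range(len(opts)), key=lambda i, k=k: logits[k][i])
            for k, opts in SEARCH_SPACE.items()}

best = search()
print(best)
```

In the real setting the reward is the validation accuracy of a trained child network, which is why the search is so expensive (cf. the 800-GPU excerpt above); here a cheap analytic reward makes the control loop itself visible.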
[...]
References
"Neural Architecture Search with Reinforcement Learning" refers to methods in this paper
...Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999) and HOG (Dalal & Triggs, 2005) to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), and ResNet (He et al., 2016a)....
[...]