Very Deep Convolutional Networks for Large-Scale Image Recognition
Citations
363 citations
Cites methods from "Very Deep Convolutional Networks fo..."
...However, VGG16 still has a better overall performance than AlexNet since it has a more regular network shape that shows better scalability for its uniform hardware design....
[...]
...Despite the fact that we have implemented entire AlexNet and VGG16 models on FPGAs, and fully connected layers can be converted into convolutional layers [10], in the remainder of this paper we focus on the systolic array architecture synthesis and optimization for convolutional layers....
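The excerpt above notes that fully connected layers can be converted into convolutional layers. As a hedged sketch of that standard equivalence (not the cited paper's implementation), the weights of an FC layer consuming a C×H×W feature map can be reshaped into convolution kernels of that same spatial extent; at the single valid position, the convolution reduces to the same dot product the FC layer computes. All shapes below are illustrative:

```python
import numpy as np

# Hypothetical shapes: a 2x2 feature map with 3 channels feeding a 5-unit FC layer.
C, H, W_SP, OUT = 3, 2, 2, 5
rng = np.random.default_rng(0)
fmap = rng.standard_normal((C, H, W_SP))
fc_weight = rng.standard_normal((OUT, C * H * W_SP))

# FC layer: flatten the feature map, then matrix-multiply.
fc_out = fc_weight @ fmap.reshape(-1)

# Equivalent convolution: reshape the FC weights into OUT kernels of shape
# (C, H, W_SP). With kernel size equal to the input size there is a single
# valid position, so the conv reduces to an elementwise product and sum.
conv_kernels = fc_weight.reshape(OUT, C, H, W_SP)
conv_out = np.array([(k * fmap).sum() for k in conv_kernels])

assert np.allclose(fc_out, conv_out)
```

The practical payoff is that a classifier trained with FC layers can then slide over larger inputs as a fully convolutional network, producing a spatial grid of class scores instead of a single vector.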
[...]
...We adopt two widely used real-life CNN models, AlexNet [18] and VGG16 [19], for evaluation....
[...]
...In addition, layer 1 of VGG16 has a lower performance than the other layers as well....
[...]
Cites background or methods from "Very Deep Convolutional Networks fo..."
...VGG team’s single model achieves top-1 error of 24.4% and top-5 error of 7.1% on validation set after the competition (Simonyan & Zisserman, 2014)....
[...]
...In ILSVRC 2014, the winner GoogLeNet (Szegedy et al., 2014) and the runner-up VGG team (Simonyan & Zisserman, 2014) both increased the depth of the network significantly, and achieved top-5 classification error 6.66% and 7.32%, respectively....
[...]
...VGG team achieves top-5 test set error of 6.8% using multiple models after the competition (Simonyan & Zisserman, 2014)....
[...]
...As listed in Table 2, one basic configuration has 16 layers and is similar to VGG’s work (Simonyan & Zisserman, 2014)....
[...]
...Besides the depth, GoogLeNet (Szegedy et al., 2014) and VGG (Simonyan & Zisserman, 2014) used multi-scale data to improve the accuracy....
[...]