MobileNetV2: Inverted Residuals and Linear Bottlenecks
Citations
5,709 citations
Additional excerpts
...For those detectors running on CPU platform, their backbone could be SqueezeNet [31], MobileNet [28, 66, 27, 74], or ShuffleNet [97, 53]....
[...]
...We present two options of real-time neural networks:
• For GPU we use a small number of groups (1–8) in convolutional layers: CSPResNeXt50 / CSPDarknet53
• For VPU we use grouped convolution, but we refrain from using Squeeze-and-Excitation (SE) blocks; specifically, this includes the following models: EfficientNet-lite / MixNet [76] / GhostNet [21] / MobileNetV3...
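The excerpt above motivates a small number of groups in convolutional layers. A minimal sketch of why grouping helps, assuming the standard parameter count for a grouped convolution (with g groups, each filter sees only C_in/g input channels, so weights and MACs shrink by a factor of g); the channel counts below are illustrative, not taken from the cited models:

```python
# Parameter count of a (grouped) 2-D convolution layer.
# With `groups` groups, each output filter convolves only c_in/groups
# input channels, so the weight tensor is g times smaller than the
# ungrouped layer's.
def conv_params(c_in, c_out, k, groups=1):
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

# Illustrative 3x3 layer, 256 -> 256 channels:
# groups=1 -> 589824 weights; groups=8 -> 73728 weights (8x fewer)
```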
[...]
Cites background or methods from "MobileNetV2: Inverted Residuals and..."
...The first one, referred to as R-ASPP, was proposed in [39]....
[...]
...In this subsection, we employ MobileNetV2 [39] and the proposed MobileNetV3 as network backbones for the task of mobile semantic segmentation....
[...]
...Lite R-ASPP, improving over R-ASPP, deploys the global-average pooling in a fashion similar to the Squeeze-and-Excitation module [20], in which we employ a large pooling kernel with a large stride (to save some computation) and only one 1×1 convolution in the module....
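The Lite R-ASPP branch described above can be sketched in one dimension, under stated assumptions: a strided average pool with a large kernel (a cheap stand-in for true global pooling), a single 1×1 convolution (reduced here to a scalar weight, since the toy example has one channel), a sigmoid, and nearest-neighbour upsampling back to the feature length to gate the features. All kernel sizes and weights below are illustrative, not MobileNetV3's actual values:

```python
import math

def avg_pool1d(x, kernel, stride):
    # Large-kernel, large-stride average pooling (saves computation
    # versus pooling at every position).
    out = []
    for start in range(0, len(x) - kernel + 1, stride):
        out.append(sum(x[start:start + kernel]) / kernel)
    return out

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lite_raspp_gate(features, kernel=4, stride=2, w=1.0, b=0.0):
    pooled = avg_pool1d(features, kernel, stride)
    # One "1x1 convolution" (a scalar affine map here) plus sigmoid,
    # as in a Squeeze-and-Excitation-style attention branch.
    att = [sigmoid(w * p + b) for p in pooled]
    # Nearest-neighbour upsample the attention back to the feature length.
    scale = len(features) / len(att)
    up = [att[min(int(i / scale), len(att) - 1)] for i in range(len(features))]
    # Gate the features with the attention weights.
    return [f * a for f, a in zip(features, up)]
```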
[...]
...Following MobileNetV2 [39], we attach the first layer of SSDLite to the last feature extractor layer that has an output stride of 16, and attach the second layer of SSDLite to the last feature extractor layer that has an output stride of 32....
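The attachment rule in the excerpt above (first SSDLite layer at the last feature with output stride 16, second at output stride 32) can be sketched by tracking the cumulative product of per-layer strides. The stride schedule below is an illustrative assumption, not MobileNetV2's exact one:

```python
def attachment_indices(layer_strides, targets=(16, 32)):
    """Return the index of the LAST layer at each target output stride.

    "Output stride" is the ratio of input resolution to feature-map
    resolution, i.e. the cumulative product of layer strides so far.
    """
    found = {}
    output_stride = 1
    for i, s in enumerate(layer_strides):
        output_stride *= s
        if output_stride in targets:
            found[output_stride] = i  # overwrite: keep the last such layer
    return found

# e.g. attachment_indices([2, 1, 2, 1, 2, 2, 1, 1, 2, 1]) -> {16: 7, 32: 9}
```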
[...]
...As can be seen in Figure 1, our models outperform the current state of the art such as MnasNet [43], ProxylessNas [5] and MobileNetV2 [39]....
[...]