
Menglong Zhu

Researcher at Google

Publications -  45
Citations -  40788

Menglong Zhu is an academic researcher at Google. The author has contributed to research on the topics of object detection and deep learning, has an h-index of 25, and has co-authored 45 publications receiving 25135 citations. Previous affiliations of Menglong Zhu include Mitsubishi Electric Research Laboratories and the University of Pennsylvania.

Papers
Posted Content

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

TL;DR: This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy, and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
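The MobileNets architecture summarized above is built on depthwise separable convolutions, and one of the two global hyper-parameters is a width multiplier that uniformly thins channel counts. The cost arithmetic behind that trade-off can be sketched as follows; the layer shapes here are illustrative, not taken from the paper:

```python
def standard_conv_cost(dk, m, n, df):
    """Mult-adds for a standard DK x DK convolution over M input and
    N output channels on a DF x DF feature map: DK*DK*M*N*DF*DF."""
    return dk * dk * m * n * df * df

def depthwise_separable_cost(dk, m, n, df):
    """Mult-adds for a depthwise DK x DK conv (DK*DK*M*DF*DF) followed
    by a pointwise 1x1 conv (M*N*DF*DF)."""
    return dk * dk * m * df * df + m * n * df * df

def with_width_multiplier(cost_fn, dk, m, n, df, alpha=1.0):
    """The width multiplier alpha thins input and output channels
    uniformly, shrinking cost roughly quadratically in alpha."""
    return cost_fn(dk, int(alpha * m), int(alpha * n), df)

if __name__ == "__main__":
    dk, m, n, df = 3, 64, 128, 56  # illustrative layer shape
    std = standard_conv_cost(dk, m, n, df)
    sep = depthwise_separable_cost(dk, m, n, df)
    print(f"standard:  {std:,} mult-adds")
    print(f"separable: {sep:,} mult-adds ({std / sep:.1f}x cheaper)")
```

The reduction factor is 1/N + 1/DK², so with a 3x3 kernel the separable form is roughly 8 to 9 times cheaper, which is the source of the latency savings the summary refers to.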
Proceedings ArticleDOI

MobileNetV2: Inverted Residuals and Linear Bottlenecks

TL;DR: MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers, and the intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity.
Posted Content

MobileNetV2: Inverted Residuals and Linear Bottlenecks

TL;DR: A new mobile architecture, MobileNetV2, is described that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes and allows decoupling of the input/output domains from the expressiveness of the transformation.
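The inverted residual block described in the two MobileNetV2 entries above (expand with a 1x1 convolution, filter with a depthwise convolution, then project back down through a linear bottleneck, with the shortcut between the thin ends) might be sketched like this; the shapes, weight layout, and function names are illustrative, not the paper's reference implementation:

```python
import numpy as np

def relu6(x):
    # ReLU6 is the non-linearity used in the MobileNet family.
    return np.clip(x, 0.0, 6.0)

def pointwise(x, w):
    # 1x1 conv = channel mixing only. x: (H, W, Cin), w: (Cin, Cout)
    return x @ w

def depthwise3x3(x, w):
    # Per-channel 3x3 conv, stride 1, 'same' zero padding.
    # x: (H, W, C), w: (3, 3, C)
    h, wd, _ = x.shape
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += padded[i:i + h, j:j + wd, :] * w[i, j]
    return out

def inverted_residual(x, w_expand, w_dw, w_project):
    # Expand -> depthwise -> linear projection; the shortcut connects
    # the thin bottlenecks when shapes match (the stride-1 case).
    y = relu6(pointwise(x, w_expand))
    y = relu6(depthwise3x3(y, w_dw))
    y = pointwise(y, w_project)  # no non-linearity: linear bottleneck
    return x + y if x.shape == y.shape else y
```

The "decoupling" the summary mentions shows up in the projection step: because the bottleneck is linear, the thin input/output representation is separate from the expressiveness of the wide expanded layer where the non-linear filtering happens.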
Proceedings ArticleDOI

Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors

TL;DR: A unified implementation of the Faster R-CNN, R-FCN and SSD systems is presented and the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures is traced out.
Proceedings ArticleDOI

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference

TL;DR: A quantization scheme is proposed that allows inference to be carried out using integer- only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware.
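The integer-only inference idea summarized above can be illustrated with a minimal affine-quantization sketch: reals are mapped to uint8 via a scale and zero point, accumulation happens entirely in int32, and the result is requantized. The function names and parameters here are assumptions for illustration; the paper's actual scheme applies the combined rescaling as a fixed-point integer multiply rather than the float multiply used below:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine map: real = scale * (q - zero_point); q stored as uint8.
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.int32) - zero_point)

def quantized_matmul(qa, a_params, qb, b_params, out_scale, out_zp):
    # Integer-only accumulation in int32, then requantize to uint8.
    sa, za = a_params
    sb, zb = b_params
    acc = (qa.astype(np.int32) - za) @ (qb.astype(np.int32) - zb)
    # In deployment this scale factor would be a fixed-point multiply,
    # keeping the whole pipeline in integer arithmetic.
    real = sa * sb * acc
    return quantize(real, out_scale, out_zp)
```

The key property is that the inner product itself touches only integers, which is what lets inference run efficiently on integer-only hardware; the quantization error stays bounded by the per-tensor scale.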