
Jianyuan Guo

Researcher at Peking University

Publications -  42
Citations -  2725

Jianyuan Guo is an academic researcher from Peking University. The author has contributed to research in topics including computer science and the Transformer (machine learning model). The author has an h-index of 13 and has co-authored 27 publications receiving 893 citations. Previous affiliations of Jianyuan Guo include Huawei.

Papers
Proceedings Article

GhostNet: More Features From Cheap Operations

Abstract: Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, from which the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
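The mechanism described above can be sketched in a few lines of PyTorch: a primary convolution produces the intrinsic feature maps, a cheap depthwise convolution generates the additional "ghost" maps, and the two are concatenated. This is a minimal illustration under assumed hyperparameters (ratio=2, 3x3 depthwise kernel), not the authors' reference implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Minimal sketch of a Ghost module: intrinsic maps from an ordinary
    convolution, ghost maps from a cheap depthwise convolution applied to
    the intrinsic maps. Assumes out_channels is divisible by `ratio`."""

    def __init__(self, in_channels, out_channels, ratio=2, dw_size=3):
        super().__init__()
        init_channels = out_channels // ratio           # intrinsic feature maps
        ghost_channels = out_channels - init_channels   # cheaply generated maps

        # Primary (expensive) convolution producing the intrinsic maps.
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise convolution over the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, kernel_size=dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(ghost_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        # Concatenate intrinsic and ghost maps to form the full output.
        return torch.cat([intrinsic, ghost], dim=1)


# Usage: drop-in replacement for a 64 -> 128 channel convolution layer.
if __name__ == "__main__":
    module = GhostModule(64, 128)
    y = module(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 128, 32, 32])
```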
Posted Content

GhostNet: More Features from Cheap Operations

TL;DR: A novel Ghost module is proposed that applies a series of cheap linear operations to a set of intrinsic feature maps in order to generate many ghost feature maps that fully reveal the information underlying the intrinsic features.
Proceedings Article

Beyond Human Parts: Dual Part-Aligned Representations for Person Re-Identification

TL;DR: P2Net applies a human parsing model to extract binary human part masks and a self-attention mechanism to capture soft latent (non-human) part masks, achieving state-of-the-art performance on three challenging benchmarks.
Journal Article

OCNet: Object Context for Semantic Segmentation

TL;DR: This paper proposes an efficient interlaced sparse self-attention scheme that models the dense relations between all pairs of pixels via the combination of two sparse relation matrices, and empirically shows the advantages of this approach with competitive performance on five challenging benchmarks.
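The idea behind the interlaced sparse scheme can be sketched as follows: dense pixel-to-pixel attention is approximated by a long-range attention over strided groups followed by a short-range attention over local windows, so each stage only builds a small relation matrix. This is a minimal sketch under assumed group sizes and with a plain dot-product attention block, not the authors' released code.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Plain scaled dot-product self-attention over spatial positions of a (B, C, H, W) tensor."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.key(x).flatten(2)                     # (B, C, HW)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)

class InterlacedSparseSelfAttention(nn.Module):
    """Sketch: approximate dense attention by long-range attention over strided
    groups, then short-range attention over local (gh x gw) windows.
    Assumes H and W are divisible by the group sizes."""
    def __init__(self, channels, gh=8, gw=8):
        super().__init__()
        self.gh, self.gw = gh, gw
        self.long_range = SelfAttention2d(channels)
        self.short_range = SelfAttention2d(channels)

    def forward(self, x):
        b, c, h, w = x.shape
        ph, pw = h // self.gh, w // self.gw

        # Long-range: group pixels sharing the same offset across local windows.
        x = x.reshape(b, c, ph, self.gh, pw, self.gw)
        x = x.permute(0, 3, 5, 1, 2, 4).reshape(b * self.gh * self.gw, c, ph, pw)
        x = self.long_range(x)
        x = x.reshape(b, self.gh, self.gw, c, ph, pw).permute(0, 3, 4, 1, 5, 2)
        x = x.reshape(b, c, h, w)

        # Short-range: attend within each local (gh x gw) window.
        x = x.reshape(b, c, ph, self.gh, pw, self.gw)
        x = x.permute(0, 2, 4, 1, 3, 5).reshape(b * ph * pw, c, self.gh, self.gw)
        x = self.short_range(x)
        x = x.reshape(b, ph, pw, c, self.gh, self.gw).permute(0, 3, 1, 4, 2, 5)
        return x.reshape(b, c, h, w)
```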
Proceedings Article

Attribute-Aware Attention Model for Fine-grained Representation Learning

TL;DR: A novel Attribute-Aware Attention Model is proposed that learns local attribute representations and a global category representation simultaneously in an end-to-end manner, yielding features that carry more intrinsic information for image recognition while suppressing noisy and irrelevant features.