
Zhuang Liu

Researcher at University of California, Berkeley

Publications: 53
Citations: 39,804

Zhuang Liu is an academic researcher at the University of California, Berkeley. The author has contributed to research topics including computer science and artificial neural networks, has an h-index of 25, and has co-authored 42 publications receiving 23,096 citations. Previous affiliations of Zhuang Liu include Tsinghua University and Intel.

Papers
Proceedings Article

Densely Connected Convolutional Networks

TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
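
A minimal PyTorch sketch of the dense connectivity pattern described above; the growth rate, layer count, and BN-ReLU-Conv ordering here are illustrative choices, not the paper's exact configuration:

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    # Each layer receives the concatenated feature maps of all
    # preceding layers, not just the previous layer's output.
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenation (rather than ResNet-style summation) is
            # what encourages feature reuse across layers.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

x = torch.randn(1, 16, 32, 32)
block = DenseBlock(in_channels=16, growth_rate=12, num_layers=4)
print(block(x).shape)  # (1, 64, 32, 32): 16 input + 4 layers x 12 channels
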
Proceedings Article

Learning Efficient Convolutional Networks through Network Slimming

TL;DR: The authors propose network slimming, a method that simultaneously reduces CNN model size, run-time memory footprint, and the number of computing operations without compromising accuracy.
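
The central mechanism in network slimming is an L1 sparsity penalty on batch-normalization scale factors (gamma), so that unimportant channels can be identified and pruned after training. A rough sketch, where the penalty weight lam and pruning threshold are illustrative values and the final step of rebuilding a narrower network is omitted:

import torch
import torch.nn as nn

def bn_l1_penalty(model, lam=1e-4):
    # Added to the task loss during training; drives the scale
    # factors of unimportant channels toward zero.
    penalty = torch.tensor(0.0)
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty

def channel_keep_masks(model, threshold=1e-2):
    # After training, channels whose scale factor fell below the
    # threshold can be pruned along with their associated weights.
    return {name: m.weight.abs() > threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}

During training, the total objective would then look like
loss = criterion(output, target) + bn_l1_penalty(model).
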
Book Chapter

Deep Networks with Stochastic Depth

TL;DR: Stochastic depth is a training procedure that enables the seemingly contradictory setup of training short networks while using deep networks at test time; it substantially reduces training time and significantly improves test error on almost all datasets used for evaluation.
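
A minimal PyTorch sketch of the idea for a single residual block; note that the paper drops blocks per mini-batch with survival probabilities that decay linearly with depth, whereas this sketch uses one fixed survival probability for simplicity:

import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    # Wraps a residual branch so that, during training, the whole
    # branch is skipped with probability 1 - p_survive, effectively
    # training a shorter network at each step.
    def __init__(self, branch, p_survive=0.8):
        super().__init__()
        self.branch = branch
        self.p_survive = p_survive

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.p_survive:
                return x + self.branch(x)
            return x  # branch dropped: identity shortcut only
        # At test time the full deep network is used, with the
        # branch scaled by its survival probability (its expectation).
        return x + self.p_survive * self.branch(x)
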
Proceedings Article

A ConvNet for the 2020s

TL;DR: This work gradually “modernizes” a standard ResNet toward the design of a vision Transformer, discovering along the way several key components that contribute to the performance difference, and arrives at a family of pure ConvNet models dubbed ConvNeXt.
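
A sketch of one “modernized” residual block in the spirit of ConvNeXt (large-kernel depthwise convolution, LayerNorm, and an inverted bottleneck with a single GELU); LayerScale and stochastic depth, which the released models also use, are omitted here for brevity:

import torch
import torch.nn as nn

class ConvNeXtStyleBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # 7x7 depthwise conv replaces the small convs of a ResNet block.
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)           # applied channels-last
        self.pwconv1 = nn.Linear(dim, 4 * dim)  # 1x1 conv as Linear (expand 4x)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)  # project back down

    def forward(self, x):                # x: (N, C, H, W)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)        # to (N, H, W, C) for norm/linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)        # back to (N, C, H, W)
        return residual + x

x = torch.randn(1, 96, 56, 56)
print(ConvNeXtStyleBlock(96)(x).shape)  # (1, 96, 56, 56)
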
Posted Content

Densely Connected Convolutional Networks

TL;DR: The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.