
Zhuang Liu

Researcher at University of California, Berkeley

Publications: 53
Citations: 39,804

Zhuang Liu is an academic researcher at the University of California, Berkeley. He has contributed to research in topics including computer science and artificial neural networks. He has an h-index of 25 and has co-authored 42 publications receiving 23,096 citations. His previous affiliations include Tsinghua University and Intel.

Papers
Proceedings Article

DSOD: Learning Deeply Supervised Object Detectors from Scratch

TL;DR: Deeply Supervised Object Detector (DSOD) is a framework that learns object detectors from scratch following the single-shot detection (SSD) framework; a key finding is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector.
Posted Content

Snapshot Ensembles: Train 1, get M for free

TL;DR: Snapshot Ensembles, as described in this paper, trains a single neural network that converges to several local minima along its optimization path, saving the model parameters at each minimum, by leveraging recent work on cyclic learning rate schedules.
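
A minimal sketch of this recipe, assuming a PyTorch `model`, `train_loader`, and `loss_fn` defined elsewhere; the function name and hyperparameters are illustrative, not the authors' reference implementation:

```python
import copy
import math
import torch

def train_snapshot_ensemble(model, train_loader, loss_fn,
                            num_cycles=5, epochs_per_cycle=40, lr_max=0.1):
    """Train one network, saving a snapshot at the end of each LR cycle."""
    snapshots = []
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_max, momentum=0.9)
    for _ in range(num_cycles):
        for epoch in range(epochs_per_cycle):
            # Cosine annealing within each cycle: the learning rate restarts
            # at lr_max and decays toward zero, pushing the model into a
            # local minimum before the next restart kicks it out again.
            lr = 0.5 * lr_max * (1 + math.cos(math.pi * epoch / epochs_per_cycle))
            for group in optimizer.param_groups:
                group["lr"] = lr
            for inputs, targets in train_loader:
                optimizer.zero_grad()
                loss_fn(model(inputs), targets).backward()
                optimizer.step()
        # One snapshot per cycle: "train 1, get M for free".
        snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots
```

At test time the M snapshots would be ensembled, typically by averaging their softmax outputs.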
Proceedings Article

Rethinking the Value of Network Pruning

TL;DR: The authors show that training a large, over-parameterized model is often not necessary to obtain an efficient final model, and that the learned "important" weights of the large model are typically not useful for the small pruned model; this suggests that, in some cases, pruning can be useful as an architecture search paradigm.
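
A rough sketch of the experimental protocol behind this finding, assuming magnitude-based channel pruning; `channels_to_keep` and the keep ratio are hypothetical illustrations, and the training loops are elided:

```python
import torch

def channels_to_keep(conv: torch.nn.Conv2d, keep_ratio: float = 0.5):
    # Rank a conv layer's output channels by the L1 norm of their filters
    # and keep the strongest fraction (a common magnitude-pruning criterion).
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    k = max(1, int(keep_ratio * norms.numel()))
    return torch.topk(norms, k).indices

# The comparison the paper draws:
# 1. Train the large model, prune it layer by layer (e.g. with the criterion
#    above), and record only the resulting per-layer channel counts.
# 2. Re-instantiate that smaller architecture with fresh random weights and
#    train it from scratch, instead of fine-tuning the inherited weights,
#    treating pruning as a form of architecture search.
```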
Journal Article

Convolutional Networks with Dense Connectivity

TL;DR: DenseNet, as discussed by the authors, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially improves parameter efficiency.
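
A minimal sketch of this dense-connectivity pattern in PyTorch; the module structure and sizes are illustrative, not the paper's reference implementation:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                # Each layer's input width grows with every preceding layer.
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate all earlier outputs along the channel dimension,
            # so every layer receives the feature maps of every predecessor.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```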
Posted Content

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts

TL;DR: This work turns a single unlabeled test sample into a self-supervised learning problem on which the model parameters are updated before making a prediction; this leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.
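
A minimal sketch of the test-time update, assuming the model is split into a shared `encoder`, a `main_head` for classification, and an `ssl_head` for the self-supervised task (the paper uses rotation prediction); all names and hyperparameters are illustrative:

```python
import copy
import torch
import torch.nn.functional as F

def rotations(x: torch.Tensor):
    # Build the four 90-degree rotations of an image batch, labelled 0-3,
    # as the self-supervised prediction task. Assumes square images so all
    # rotations share the same shape.
    rotated = torch.cat([torch.rot90(x, k, dims=(-2, -1)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return rotated, labels

def predict_with_ttt(encoder, main_head, ssl_head, x, lr=1e-3, steps=1):
    # Adapt a copy of the shared encoder on the single unlabeled test
    # sample by minimizing the rotation-prediction loss, then classify.
    encoder = copy.deepcopy(encoder)
    optimizer = torch.optim.SGD(encoder.parameters(), lr=lr)
    for _ in range(steps):
        inputs, labels = rotations(x)
        loss = F.cross_entropy(ssl_head(encoder(inputs)), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return main_head(encoder(x))
```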