Haoli Bai

Researcher at The Chinese University of Hong Kong

Publications -  28
Citations -  400

Haoli Bai is an academic researcher from The Chinese University of Hong Kong. The author has contributed to research in topics: Deep learning & Quantization (signal processing). The author has an h-index of 10 and has co-authored 28 publications receiving 246 citations. Previous affiliations of Haoli Bai include University of Electronic Science and Technology of China.

Papers
Journal ArticleDOI

Few Shot Network Compression via Cross Distillation

TL;DR: This paper proposes cross distillation, a novel layer-wise knowledge distillation approach that offers a general framework compatible with prevalent network compression techniques such as pruning and can significantly improve the student network's accuracy when only a few training instances are available.
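
As a rough illustration of the layer-wise distillation idea in this summary, the sketch below matches teacher and student intermediate features with a plain per-layer MSE loss in PyTorch; the small convolutional blocks, the 1x1 adapters, and the loss itself are illustrative assumptions rather than the paper's exact cross-distillation objective.

```python
# Minimal layer-wise feature distillation sketch (PyTorch).
# Illustrative only: plain per-layer MSE feature matching, not the exact
# cross-distillation objective from the paper.
import torch
import torch.nn as nn

def make_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

# Hypothetical teacher (wide) and student (narrow, e.g. pruned) networks
teacher = nn.ModuleList([make_block(3, 64), make_block(64, 64)])
student = nn.ModuleList([make_block(3, 32), make_block(32, 32)])
# 1x1 adapters map student features into the teacher's channel dimension
adapters = nn.ModuleList([nn.Conv2d(32, 64, 1), nn.Conv2d(32, 64, 1)])

def layerwise_distill_loss(x):
    """Sum of per-layer MSE losses between teacher and adapted student features."""
    loss, t, s = 0.0, x, x
    for t_blk, s_blk, adapt in zip(teacher, student, adapters):
        t = t_blk(t)                      # teacher features (targets, kept fixed)
        s = s_blk(s)                      # student features
        loss = loss + nn.functional.mse_loss(adapt(s), t.detach())
    return loss

# Few-shot setting: only a handful of (here unlabeled, random) training images
few_shot_batch = torch.randn(4, 3, 32, 32)
optimizer = torch.optim.SGD(
    list(student.parameters()) + list(adapters.parameters()), lr=0.01)
loss = layerwise_distill_loss(few_shot_batch)
loss.backward()
optimizer.step()
print(float(loss))
```
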
Journal ArticleDOI

DART: Domain-Adversarial Residual-Transfer networks for unsupervised cross-domain image classification.

TL;DR: This paper proposes a new unsupervised domain adaptation method named Domain-Adversarial Residual-Transfer (DART), which not only learns domain-invariant features via adversarial training, but also achieves robust domain-adaptive classification via a residual-transfer strategy, all in an end-to-end training framework.
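
For the adversarial-training half of this summary, here is a minimal gradient-reversal sketch in PyTorch; the feature extractor, label classifier, and domain discriminator are placeholder modules, and DART's residual-transfer strategy is not shown.

```python
# Minimal domain-adversarial training sketch with a gradient reversal layer.
# Placeholder networks; shows only the adversarial part, not DART's
# residual-transfer classifier.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate the gradient so the feature extractor opposes the discriminator
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(100, 64), nn.ReLU())
label_classifier = nn.Linear(64, 10)       # trained on labeled source data
domain_discriminator = nn.Linear(64, 2)    # source vs. target

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

src_x, src_y = torch.randn(8, 100), torch.randint(0, 10, (8,))
tgt_x = torch.randn(8, 100)                # unlabeled target-domain batch

feats = feature_extractor(torch.cat([src_x, tgt_x]))
domain_y = torch.cat([torch.zeros(8, dtype=torch.long),
                      torch.ones(8, dtype=torch.long)])

cls_loss = nn.functional.cross_entropy(label_classifier(feats[:8]), src_y)
# Gradient reversal pushes the features toward domain invariance.
dom_logits = domain_discriminator(GradReverse.apply(feats, 1.0))
dom_loss = nn.functional.cross_entropy(dom_logits, domain_y)

(cls_loss + dom_loss).backward()
optimizer.step()
```
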
Proceedings ArticleDOI

Neural Relational Topic Models for Scientific Article Analysis

TL;DR: A novel Bayesian deep generative model termed Neural Relational Topic Model (NRTM), composed of a Stacked Variational Auto-Encoder and a multilayer perceptron (MLP), which can effectively take advantage of the coherence between topic learning and citation recommendation and significantly outperforms the state-of-the-art methods on both tasks.
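
As a minimal structural sketch of the pairing this summary describes, the code below trains a variational auto-encoder over bag-of-words documents whose latent codes also feed an MLP link scorer; the dimensions, layers, and losses are hypothetical and do not follow the paper's exact NRTM architecture.

```python
# Structural sketch: a VAE over bag-of-words documents plus an MLP that
# scores citation links from the latent topic codes. Hypothetical sizes.
import torch
import torch.nn as nn

VOCAB, TOPICS = 2000, 50

encoder = nn.Sequential(nn.Linear(VOCAB, 256), nn.ReLU())
to_mu, to_logvar = nn.Linear(256, TOPICS), nn.Linear(256, TOPICS)
decoder = nn.Linear(TOPICS, VOCAB)                 # reconstructs word counts
link_mlp = nn.Sequential(nn.Linear(2 * TOPICS, 64), nn.ReLU(), nn.Linear(64, 1))

def encode(bow):
    h = encoder(bow)
    mu, logvar = to_mu(h), to_logvar(h)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    return z, mu, logvar

docs = torch.rand(16, VOCAB)                       # toy bag-of-words batch
z, mu, logvar = encode(docs)

# Multinomial-style reconstruction term plus standard Gaussian KL
log_probs = nn.functional.log_softmax(decoder(z), dim=1)
recon_loss = -(docs * log_probs).sum(dim=1).mean()
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()

# Citation (link) score between two documents from their topic codes
link_score = link_mlp(torch.cat([z[0], z[1]]).unsqueeze(0))
link_loss = nn.functional.binary_cross_entropy_with_logits(
    link_score, torch.ones(1, 1))

(recon_loss + kl + link_loss).backward()
```
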
Proceedings ArticleDOI

Structured Inference for Recurrent Hidden Semi-Markov Model

TL;DR: A structured and stochastic sequential neural network (SSNN), composed of a generative network and an inference network, which aims not only to capture long-term dependencies but also to model the uncertainty of the segmentation labels via semi-Markov models.
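
To illustrate the semi-Markov structure underlying this model, here is a toy NumPy sampler in which each latent state persists for an explicitly sampled duration before transitioning; the Gaussian emissions and Poisson durations are assumptions for illustration, and the SSNN's neural generative and inference networks are not shown.

```python
# Toy generative sketch of a hidden semi-Markov model in NumPy: each latent
# state is held for an explicitly sampled duration, which is the semi-Markov
# structure the paper builds on.
import numpy as np

rng = np.random.default_rng(0)
K = 3                                   # number of latent states
trans = np.full((K, K), 1.0 / K)        # state transition probabilities
means = np.array([-2.0, 0.0, 2.0])      # per-state emission means

def sample_hsmm(T=50):
    obs, labels, t = [], [], 0
    state = rng.integers(K)
    while t < T:
        duration = rng.poisson(5) + 1           # explicit duration distribution
        for _ in range(min(duration, T - t)):
            obs.append(rng.normal(means[state], 0.5))
            labels.append(state)
            t += 1
        state = rng.choice(K, p=trans[state])   # transition once the segment ends
    return np.array(obs), np.array(labels)

x, z = sample_hsmm()
print(x[:10], z[:10])
```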

PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks

TL;DR: An automated framework for model compression and acceleration, namely PocketFlow, which is an easy-to-use toolkit that integrates a series of model compression algorithms and embeds a hyper-parameter optimization module to automatically search for the optimal combination of hyper-parameters.
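
As a generic illustration of what such a framework automates, the hand-rolled sketch below applies magnitude pruning at several sparsity levels and keeps the best-scoring setting; this is not the PocketFlow API, and the model, data, and search grid are placeholders.

```python
# Generic sketch: apply a compression algorithm (simple magnitude pruning)
# under different hyper-parameter settings and keep the best result.
# NOT the PocketFlow API; everything here is a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

def prune_and_evaluate(sparsity):
    pruned = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    pruned.load_state_dict(model.state_dict())
    with torch.no_grad():
        for module in pruned.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                threshold = w.abs().flatten().quantile(sparsity)
                w.mul_((w.abs() > threshold).float())   # zero out small weights
    acc = (pruned(x).argmax(dim=1) == y).float().mean().item()
    return acc

# A toy "hyper-parameter optimization" loop: grid search over sparsity levels.
best = max((prune_and_evaluate(s), s) for s in (0.3, 0.5, 0.7, 0.9))
print("best accuracy %.3f at sparsity %.1f" % best)
```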