Ning Liu
Researcher at Northeastern University
Publications - 41
Citations - 1432
Ning Liu is an academic researcher from Northeastern University. The author has contributed to research in the topics of computer science and artificial neural networks. The author has an h-index of 15 and has co-authored 40 publications receiving 973 citations. Previous affiliations of Ning Liu include DiDi and Syracuse University.
Papers
Proceedings ArticleDOI
CirCNN: accelerating and compressing deep neural networks using block-circulant weight matrices
Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, Xiaolong Ma, Yipeng Zhang, Jian Tang, Qinru Qiu, Xue Lin, Bo Yuan +15 more
TL;DR: The CirCNN architecture is proposed: a universal DNN inference engine that can be implemented on various hardware/software platforms with a configurable network architecture (e.g., layer type, size, scales), using the FFT as the key computing kernel to ensure universal and small-footprint implementations.
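The FFT kernel works because a circulant weight matrix times a vector is a circular convolution, which the FFT computes in O(n log n) instead of O(n²). A minimal sketch of that identity (illustrative values; not CirCNN's actual block partitioning):

```python
import numpy as np

def circulant_matvec_fft(c, x):
    """Multiply the circulant matrix defined by first column c with vector x
    via the circular-convolution theorem: C @ x = ifft(fft(c) * fft(x)).
    Runs in O(n log n) instead of the O(n^2) direct product."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant(c):
    """Direct O(n^2) construction of the circulant matrix, for comparison."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([1.0, 2.0, 3.0, 4.0])   # first column of the weight block
x = np.array([0.5, -1.0, 2.0, 0.0])  # input activation slice
assert np.allclose(circulant(c) @ x, circulant_matvec_fft(c, x))
```

CirCNN applies this per block of the weight matrix, so only the first column of each block is stored, which is where the storage reduction comes from as well.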
Proceedings ArticleDOI
A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
TL;DR: The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with a large state space, is adopted to solve the global-tier problem, and the proposed framework can achieve the best trade-off between latency and power/energy consumption in a server cluster.
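The paper uses deep RL to handle the large state space; the underlying value-update idea can be illustrated with plain tabular Q-learning on a toy power-management problem (all states, dynamics, and reward weights below are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: states = server load buckets, actions = {low-power, high-perf}.
# Reward trades latency penalty against power cost (hypothetical numbers).
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    """Hypothetical environment: high-performance mode cuts the latency
    penalty but costs more power; load drifts by at most one bucket."""
    latency_penalty = (s + 1) * (0.2 if a == 1 else 1.0)
    power_cost = 2.0 if a == 1 else 0.5
    s_next = min(n_states - 1, max(0, s + int(rng.integers(-1, 2))))
    return s_next, -(latency_penalty + power_cost)

s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # Standard Q-learning temporal-difference update
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

After training, the learned policy picks the high-performance mode only when load is high enough for the latency penalty to outweigh the extra power, which is the latency/power trade-off the hierarchical framework optimizes at scale with a deep network instead of a table.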
Journal ArticleDOI
AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates.
TL;DR: This work proposes AutoCompress, an automatic structured pruning framework with the following key performance improvements: it effectively incorporates the combination of structured pruning schemes in the automatic process, and adopts the state-of-the-art ADMM-based structured weight pruning as the core algorithm.
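Structured pruning removes whole filters (output channels) rather than individual weights, so the compressed model needs no sparse-matrix support. A minimal sketch of the basic filter-level idea, ranking filters by L2 norm (this is only the generic mechanism; AutoCompress's ADMM optimization and automatic hyperparameter search are not shown):

```python
import numpy as np

def prune_filters(weights, keep_ratio):
    """Structured pruning sketch: zero out entire conv filters whose L2 norm
    falls below the cutoff, so whole output channels can be removed.
    weights has shape (out_channels, in_channels, kH, kW)."""
    norms = np.linalg.norm(weights.reshape(weights.shape[0], -1), axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.argsort(norms)[-n_keep:]            # strongest filters survive
    mask = np.zeros(weights.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weights * mask[:, None, None, None]  # zero the pruned filters
    return pruned, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3, 3, 3))          # hypothetical conv layer weights
Wp, mask = prune_filters(W, keep_ratio=0.25)
```

Because entire rows become exact zeros, the surviving network is a genuinely smaller dense model, which is why structured schemes reach the "ultra-high compression rates" in the title without special hardware.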
Proceedings ArticleDOI
CirCNN: Accelerating and Compressing Deep Neural Networks Using Block-Circulant Weight Matrices
Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, Xiaolong Ma, Yipeng Zhang, Jian Tang, Qinru Qiu, Xue Lin, Bo Yuan +15 more
TL;DR: CirCNN as discussed by the authors utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (in both inference and training) and the storage complexity from O(n²) to O(n log n) with negligible accuracy loss.
Proceedings ArticleDOI
VIBNN: Hardware Acceleration of Bayesian Neural Networks
TL;DR: VIBNN as mentioned in this paper is an FPGA-based hardware accelerator design for variational inference on BNNs, which can achieve a throughput of 321,543.4 Images/s and energy efficiency of up to 52,694.8 Images/J.