
Siyu Liao

Researcher at Rutgers University

Publications: 36
Citations: 878

Siyu Liao is an academic researcher at Rutgers University whose work focuses on artificial neural networks and deep learning. The author has an h-index of 10 and has co-authored 33 publications receiving 598 citations. Previous affiliations of Siyu Liao include Amazon.com and the University of Minnesota.

Papers
Proceedings Article

CirCNN: accelerating and compressing deep neural networks using block-circulant weight matrices

TL;DR: Proposes CirCNN, a universal DNN inference engine that can be implemented on various hardware/software platforms with a configurable network architecture (e.g., layer type, size, scale). The FFT serves as the key computing kernel, which enables universal and small-footprint implementations.
Proceedings Article

CirCNN: Accelerating and Compressing Deep Neural Networks Using Block-Circulant Weight Matrices

TL;DR: CirCNN utilizes Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (in both inference and training) and the storage complexity from O(n²) to O(n log n), with negligible accuracy loss.
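The block-circulant idea behind CirCNN can be sketched in a few lines: a circulant matrix is defined by its first column alone, and multiplying it by a vector reduces to element-wise products in the Fourier domain. The sketch below is an illustrative single-block example (the helper names are my own, not from the paper), not the paper's hardware implementation.

```python
import numpy as np

def circulant_matvec(c, x):
    """O(n log n) circulant matrix-vector product via FFT.

    The circulant matrix C has first column c; each later column is a
    cyclic shift of the previous one, so C @ x is a circular convolution:
    C @ x == ifft(fft(c) * fft(x)).
    """
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

def circulant_dense(c):
    """Dense O(n^2) reference: materialize the full circulant matrix."""
    n = len(c)
    return np.column_stack([np.roll(c, k) for k in range(n)])

c = np.array([1.0, 2.0, 3.0, 4.0])   # first column defines the whole matrix
x = np.array([0.5, -1.0, 2.0, 0.0])
fast = circulant_matvec(c, x)        # FFT path: O(n log n) time, O(n) storage
dense = circulant_dense(c) @ x       # dense path: O(n^2) time and storage
```

The storage saving is the same n-vs-n² ratio the TL;DR cites: only the length-n first column is stored instead of the full n×n block.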
Proceedings Article

PermDNN: efficient compressed DNN architecture with permuted diagonal matrices

TL;DR: In this article, the authors propose PermDNN, a novel approach to generating and executing hardware-friendly structured sparse DNN models using permuted diagonal matrices, which eliminates the drawbacks of indexing overhead, non-heuristic compression effects, and time-consuming retraining.
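A permuted diagonal matrix has exactly one nonzero entry per row and per column, so an n×n block can be stored with just 2n numbers and applied in O(n) time, with no per-element index lists. The sketch below shows that structure for a single block under my own naming; PermDNN itself composes such blocks into full layers with dedicated hardware support.

```python
import numpy as np

def permdiag_matvec(d, perm, x):
    """Apply a permuted diagonal matrix in O(n).

    Row i holds the single value d[i] at column perm[i], so
    (M @ x)[i] == d[i] * x[perm[i]] -- no indexing overhead beyond
    one gather, and only 2n stored numbers instead of n^2.
    """
    return d * x[perm]

def permdiag_dense(d, perm):
    """Dense reference: materialize the n x n matrix."""
    n = len(d)
    M = np.zeros((n, n))
    M[np.arange(n), perm] = d
    return M

d = np.array([2.0, -1.0, 0.5, 3.0])   # the n nonzero values
perm = np.array([2, 0, 3, 1])         # column position of each row's value
x = np.array([1.0, 2.0, 3.0, 4.0])
fast = permdiag_matvec(d, perm, x)
dense = permdiag_dense(d, perm) @ x
```

Because the sparsity pattern is fixed by construction rather than learned, no retraining-time pruning schedule is needed to maintain it.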
Proceedings Article

Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework

TL;DR: In this paper, a tensor decomposition-based model compression method using the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the optimization problem with constraints on tensor ranks.
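The rank-constrained projection at the heart of such ADMM loops can be illustrated in its simplest (matrix) special case: projecting a weight matrix onto the set of rank-r matrices via truncated SVD. This is a minimal sketch of the rank-truncation idea only, not the paper's full tensor-decomposition framework or its ADMM schedule.

```python
import numpy as np

def project_to_rank(W, r):
    """Best rank-r approximation of W (Eckart-Young) via truncated SVD.

    This kind of projection is the natural subproblem when ADMM enforces
    a rank constraint: keep the r largest singular values, drop the rest.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))      # stand-in for a layer's weight matrix
W_r = project_to_rank(W, 2)          # rank-2 compressed version
# Storage drops from 8*6 entries to the factors: 8*2 + 2 + 2*6 numbers.
```

For higher-order tensors the analogous step truncates a tensor decomposition (e.g., Tucker or tensor-train ranks) rather than singular values, which is the setting the paper's optimization framework targets.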