Xiaoliang Chen
Researcher at University of California, Irvine
Publications - 6
Citations - 31
Xiaoliang Chen is an academic researcher at the University of California, Irvine. The author has contributed to research on the topics of Computer science and Clock rate, has an h-index of 1, and has co-authored 3 publications receiving 19 citations.
Papers
Journal ArticleDOI
An Analog Neural Network Computing Engine Using CMOS-Compatible Charge-Trap-Transistor (CTT)
Yuan Du,Li Du,Xuefeng Gu,Jieqiong Du,X. Shawn Wang,Boyu Hu,Jiang Ming-Zhe,Xiaoliang Chen,Subramanian S. Iyer,Mau-Chung Frank Chang +9 more
TL;DR: An analog neural network computing engine based on a CMOS-compatible charge-trap transistor (CTT) is proposed, achieving performance comparable to state-of-the-art fully connected neural networks using 8-bit fixed-point resolution.
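The 8-bit fixed-point resolution mentioned above can be illustrated with a toy numeric sketch (not the paper's implementation): weights are quantized to signed 8-bit levels before a multiply-accumulate, as one column of an analog multiplier array might compute. Function names and the scale factor are assumptions for illustration.

```python
# Toy sketch: multiply-accumulate with 8-bit fixed-point weights,
# illustrating the quantized resolution the TL;DR mentions.

def quantize_8bit(w, scale=127):
    """Map a weight in [-1, 1] to a signed 8-bit fixed-point level."""
    return max(-128, min(127, round(w * scale)))

def analog_mac(weights, inputs, scale=127):
    """Multiply-accumulate with quantized weights, then rescale."""
    acc = sum(quantize_8bit(w, scale) * x for w, x in zip(weights, inputs))
    return acc / scale

weights = [0.5, -0.25, 1.0]
inputs = [1.0, 2.0, 3.0]
print(analog_mac(weights, inputs))  # prints 3.0
```

With only 256 weight levels, the result still matches the exact dot product 0.5 - 0.5 + 3.0 here; in general the quantization introduces a small error that the paper reports as tolerable for fully connected networks.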
Journal ArticleDOI
An Efficient High-Throughput Structured-Light Depth Engine
TL;DR: An efficient high-throughput depth engine is proposed to generate high-quality 3-D depth maps for speckle-pattern structured-light depth cameras, with a significant reduction in computational complexity compared with the sum-of-absolute-distance (SAD) method.
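For context, the SAD baseline that this depth engine improves upon can be sketched as simple block matching: for each reference block, pick the candidate with the lowest sum of absolute differences. This is a minimal illustration with assumed 1-D blocks, not the paper's pipeline.

```python
# Minimal sketch of SAD block matching, the baseline referenced above.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def best_match(ref_block, candidates):
    """Return the index of the candidate block with the lowest SAD."""
    scores = [sad(ref_block, c) for c in candidates]
    return min(range(len(scores)), key=scores.__getitem__)

ref = [10, 20, 30, 40]
cands = [[0, 0, 0, 0], [11, 19, 31, 39], [50, 50, 50, 50]]
print(best_match(ref, cands))  # prints 1
```

In a structured-light camera this search runs per pixel over many disparity candidates, which is why reducing its complexity matters for throughput.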
Posted Content
An Analog Neural Network Computing Engine using CMOS-Compatible Charge-Trap-Transistor (CTT)
Yuan Du,Li Du,Xuefeng Gu,Jieqiong Du,X. Shawn Wang,Boyu Hu,Jiang Ming-Zhe,Xiaoliang Chen,Su Jun-Jie,Subramanian S. Iyer,Mau-Chung Frank Chang +10 more
TL;DR: In this paper, an analog neural network computing engine based on a CMOS-compatible charge-trap transistor (CTT) is proposed, composed of a scalable CTT multiplier array and energy-efficient analog-digital interfaces.
Posted Content
Memory-Efficient CNN Accelerator Based on Interlayer Feature Map Compression
Zhuang Shao,Xiaoliang Chen,Li Du,Lei Chen,Yuan Du,Wei Zhuang,Huadong Wei,Chenjia Xie,Zhongfeng Wang +8 more
TL;DR: In this article, an efficient hardware accelerator with an interlayer feature compression technique is proposed to reduce the required on-chip memory size and off-chip access bandwidth by transforming the stored data into the frequency domain using a hardware-implemented 8×8 discrete cosine transform.
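The frequency-domain idea above can be sketched numerically: apply a 2-D DCT to an 8×8 feature-map tile and zero out small coefficients, so that smooth tiles need only a few stored values. This is an illustrative sketch only; the threshold value and function names are assumptions, not taken from the paper.

```python
import math

# Illustrative sketch: 2-D DCT of an 8x8 tile plus coefficient thresholding,
# the frequency-domain compression idea described in the TL;DR.

N = 8

def dct_2d(tile):
    """Naive 2-D DCT-II of an NxN tile (orthonormal scaling)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = sum(
                tile[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N)
            )
            out[u][v] = cu * cv * s
    return out

def compress(tile, threshold=1.0):
    """Zero coefficients below the threshold; report how many survive."""
    coeffs = dct_2d(tile)
    kept = [[c if abs(c) >= threshold else 0.0 for c in row] for row in coeffs]
    nonzero = sum(1 for row in kept for c in row if c != 0.0)
    return kept, nonzero

tile = [[float(x + y) for y in range(N)] for x in range(N)]
kept, nonzero = compress(tile)
print(nonzero)  # a smooth tile keeps only a handful of coefficients
```

Feature maps after ReLU tend to be smooth or sparse, which is what makes this kind of transform-then-threshold storage effective on chip.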
Proceedings ArticleDOI
Deep Neural Network Interlayer Feature Map Compression Based on Least-Squares Fitting
TL;DR: In this article, the feature maps are first divided into block groups and two base blocks are selected for each group; the fitting parameters are selectively stored in the on-chip memory according to the mean-squared error (MSE) results.
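The fitting step above can be sketched as a two-parameter least-squares problem: approximate a block as a weighted sum of its group's two base blocks and keep only the two coefficients when the MSE is low. This is a hypothetical sketch of the idea; the base blocks, names, and dimensions are assumptions for illustration.

```python
# Sketch: least-squares fit of block ~ a*base1 + b*base2 via the
# 2x2 normal equations, with the MSE the TL;DR says drives storage.

def fit_two_bases(block, base1, base2):
    """Solve the 2x2 normal equations for a, b and return (a, b, mse)."""
    s11 = sum(u * u for u in base1)
    s22 = sum(v * v for v in base2)
    s12 = sum(u * v for u, v in zip(base1, base2))
    r1 = sum(u * y for u, y in zip(base1, block))
    r2 = sum(v * y for v, y in zip(base2, block))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    mse = sum((y - a * u - b * v) ** 2
              for y, u, v in zip(block, base1, base2)) / len(block)
    return a, b, mse

base1 = [1.0, 1.0, 1.0, 1.0]   # constant base block
base2 = [0.0, 1.0, 2.0, 3.0]   # ramp base block
block = [2.0, 3.0, 4.0, 5.0]   # exactly 2*base1 + 1*base2
a, b, mse = fit_two_bases(block, base1, base2)
print(a, b, mse)  # prints 2.0 1.0 0.0
```

Storing two coefficients instead of a full block is the compression win; blocks whose MSE exceeds a threshold would be stored uncompressed instead.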