
Li Xiudong

Researcher at Tsinghua University

Publications: 14
Citations: 485

Li Xiudong is an academic researcher from Tsinghua University. The author has contributed to research on topics including convolutional neural networks and convolution. The author has an h-index of 7 and has co-authored 14 publications receiving 295 citations.

Papers
Journal ArticleDOI

A High Energy Efficient Reconfigurable Hybrid Neural Network Processor for Deep Learning Applications

TL;DR: Thinker is an energy-efficient reconfigurable hybrid-NN processor fabricated in 65-nm technology; it is designed to exploit data reuse and guarantee parallel data access, which improves computing throughput and energy efficiency.
Proceedings ArticleDOI

A 1.06-to-5.09 TOPS/W reconfigurable hybrid-neural-network processor for deep learning applications

TL;DR: An energy-efficient hybrid neural network (NN) processor is implemented in a 65nm technology; it has two 16×16 reconfigurable heterogeneous processing-element (PE) arrays designed to support on-demand partitioning and reconfiguration for parallel processing of different NNs.
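As a sanity check on figures like the 1.06-to-5.09 TOPS/W range above, TOPS/W converts directly to energy per operation: 1 TOPS/W is 10^12 operations per joule, so the energy per operation in picojoules is simply the reciprocal of the TOPS/W figure. A minimal sketch (the function name is ours, not from the paper):

```python
def pj_per_op(tops_per_watt):
    """Energy per operation in picojoules for a given TOPS/W figure.

    TOPS/W = 1e12 ops per joule, and 1 joule = 1e12 pJ, so the
    conversion is just a reciprocal: pJ/op = 1 / (TOPS/W).
    """
    return 1.0 / tops_per_watt

# At the paper's peak efficiency of 5.09 TOPS/W, each operation
# costs roughly 0.196 pJ; at 1.06 TOPS/W, roughly 0.943 pJ.
print(pj_per_op(5.09), pj_per_op(1.06))
```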
Proceedings ArticleDOI

A 141 µW, 2.46 pJ/Neuron Binarized Convolutional Neural Network Based Self-Learning Speech Recognition Processor in 28nm CMOS

TL;DR: An ultra-low-power speech recognition processor is implemented in 28 nm CMOS technology. It is based on an optimized binarized convolutional neural network (BCNN) and uses a tailored self-learning mechanism to learn the features of users and improve recognition accuracy on the fly.
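In a BCNN, weights and activations are constrained to ±1, so a dot product reduces to an XNOR followed by a popcount; this is the standard trick that makes binarized networks cheap in hardware. A generic illustration of that reduction (not code from the paper; names are hypothetical):

```python
def encode(vec):
    # Pack a ±1 vector into an integer: bit i is set iff vec[i] == +1.
    bits = 0
    for i, v in enumerate(vec):
        if v == 1:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, w_bits, n):
    # XNOR marks positions where the signs agree; the ±1 dot product
    # is then (agreements - disagreements) = 2 * matches - n.
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = [1, -1, 1, 1]
w = [1, 1, -1, 1]
print(xnor_popcount_dot(encode(a), encode(w), len(a)))
```

In hardware this replaces a multiply-accumulate array with XNOR gates and a popcount tree, which is where most of the energy saving comes from.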
Journal ArticleDOI

An Energy-Efficient Reconfigurable Processor for Binary- and Ternary-Weight Neural Networks With Flexible Data Bit Width

TL;DR: A reconfigurable processor in a 28-nm CMOS technology is designed to accelerate the inference of binary- and ternary-weight neural networks (BTNNs), and a joint optimization approach for convolutional layers is proposed to search for the optimal calculation pattern for each layer.
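With weights restricted to {-1, 0, +1}, a ternary-weight layer needs no multipliers at all: each weight either adds, subtracts, or skips its input. A hedged sketch of that idea (a generic illustration, not the processor's actual datapath):

```python
import numpy as np

def ternary_matvec(W, x):
    # W entries are in {-1, 0, +1}, so the matrix-vector product is
    # multiply-free: add inputs where the weight is +1, subtract where
    # it is -1, and skip where it is 0.
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i, row in enumerate(W):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

W = np.array([[1, 0, -1],
              [0, 1,  1]])
x = np.array([3, 4, 5])
print(ternary_matvec(W, x))
```

The zero weights also enable the kind of computation skipping that such processors exploit for energy savings, since a zero weight contributes nothing to the accumulation.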
Proceedings ArticleDOI

A 5.1pJ/Neuron, 127.3µs/Inference RNN-Based Speech Recognition Processor Using 16 Computing-in-Memory SRAM Macros in 65nm CMOS

TL;DR: A 65nm CMOS speech recognition processor, named Thinker-IM, employs 16 computing-in-memory SRAM (SRAM-CIM) macros for binarized recurrent neural network (RNN) computation, achieving neuron energy efficiency 2.8× better than the state of the art.