
Tianshi Chen

Researcher at Chinese Academy of Sciences

Publications: 129
Citations: 7,688

Tianshi Chen is an academic researcher at the Chinese Academy of Sciences. He has contributed to research topics including artificial neural networks and evolutionary algorithms. He has an h-index of 28 and has co-authored 124 publications receiving 6,394 citations. Previous affiliations of Tianshi Chen include the Center for Excellence in Education and the University of Science and Technology of China.

Papers
Proceedings Article

DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning

TL;DR: This study designs an accelerator for large-scale CNNs and DNNs, with special emphasis on the impact of memory on accelerator design, performance, and energy. It shows that it is possible to design a high-throughput accelerator, capable of performing 452 GOP/s, in a small footprint.
Proceedings Article

DaDianNao: A Machine-Learning Supercomputer

TL;DR: This article introduces a custom multi-chip machine-learning architecture, showing that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system.
Proceedings Article

ShiDianNao: shifting vision processing closer to the sensor

TL;DR: This paper proposes an accelerator that is 60x more energy efficient than the previous state-of-the-art neural network accelerator. Designed down to the layout at 65 nm, it has a modest footprint, consumes only 320 mW, and is still about 30x faster than high-end GPUs.
Proceedings Article

Cambricon-X: an accelerator for sparse neural networks

TL;DR: A novel accelerator, Cambricon-X, is proposed to exploit the sparsity and irregularity of NN models for increased efficiency. Experimental results show that this accelerator achieves, on average, a 7.23x speedup and 6.43x energy saving over the state-of-the-art NN accelerator.
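The efficiency gain from sparsity can be illustrated with a minimal sketch: store only the nonzero weights together with their positions, so the multiply-accumulate work scales with the number of nonzeros rather than the full weight vector. This step-index style encoding and the function names below are simplifications for illustration, not Cambricon-X's actual on-chip indexing scheme.

```python
def compress(weights):
    """Keep only nonzero weights, paired with their positions."""
    return [(i, w) for i, w in enumerate(weights) if w != 0.0]

def sparse_dot(compressed, inputs):
    """Multiply-accumulate over the nonzero weights only."""
    return sum(w * inputs[i] for i, w in compressed)

weights = [0.0, 0.5, 0.0, 0.0, -1.25, 0.0, 2.0, 0.0]
inputs  = [1.0, 2.0, 3.0, 4.0,  5.0,  6.0, 7.0, 8.0]

compressed = compress(weights)  # 3 nonzeros out of 8 weights
dense = sum(w * x for w, x in zip(weights, inputs))
assert sparse_dot(compressed, inputs) == dense  # same result, 3 MACs not 8
```

Here only 3 of 8 multiply-accumulates are performed; in a pruned network where most weights are zero, the savings are correspondingly larger, which is the effect the accelerator exploits in hardware.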
Journal ArticleDOI

Cambricon: an instruction set architecture for neural networks

TL;DR: This paper proposes a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques.
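The load-store idea can be sketched with a toy interpreter: data moves between memory and on-chip registers only via explicit load/store instructions, and compute instructions (here a single vector add) operate purely on register operands. The mnemonics and semantics below are invented for illustration and are not the actual Cambricon ISA.

```python
def run(program, memory):
    """Execute a tiny hypothetical load-store program over a flat memory."""
    regs = {}
    for op, *args in program:
        if op == "LOAD":      # LOAD dst, addr, length: memory -> register
            dst, addr, n = args
            regs[dst] = memory[addr:addr + n]
        elif op == "VADD":    # VADD dst, src1, src2: elementwise add of registers
            dst, a, b = args
            regs[dst] = [x + y for x, y in zip(regs[a], regs[b])]
        elif op == "STORE":   # STORE src, addr: register -> memory
            src, addr = args
            memory[addr:addr + len(regs[src])] = regs[src]
    return memory

mem = [1, 2, 3, 4, 0, 0]  # two 2-element input vectors, then output space
prog = [("LOAD", "v0", 0, 2), ("LOAD", "v1", 2, 2),
        ("VADD", "v2", "v0", "v1"), ("STORE", "v2", 4)]
run(prog, mem)  # mem[4:6] becomes [4, 6]
```

Separating data movement from computation in this way is what lets such an ISA express scalar, vector, and matrix operations uniformly while keeping operand traffic explicit.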