Yunji Chen
Researcher at Chinese Academy of Sciences
Publications - 136
Citations - 7574
Yunji Chen is an academic researcher at the Chinese Academy of Sciences. His research focuses on artificial neural networks and computer science. He has an h-index of 26, having co-authored 117 publications that have received 6,242 citations. Previous affiliations of Yunji Chen include the University of Science and Technology of China and the University of California, Santa Barbara.
Papers
Proceedings Article (DOI)
DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning
TL;DR: This study designs an accelerator for large-scale CNNs and DNNs, with special emphasis on the impact of memory on accelerator design, performance, and energy, and shows that it is possible to build a high-throughput accelerator, capable of 452 GOP/s, in a small footprint.
Proceedings Article (DOI)
DaDianNao: A Machine-Learning Supercomputer
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, Olivier Temam +10 more
TL;DR: This article introduces a custom multi-chip machine-learning architecture and shows that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU and reduce energy by 150.31x on average for a 64-chip system.
Proceedings Article (DOI)
ShiDianNao: shifting vision processing closer to the sensor
Zidong Du, Robert Fasthuber, Tianshi Chen, Paolo Ienne, Ling Li, Tao Luo, Xiaobing Feng, Yunji Chen, Olivier Temam +8 more
TL;DR: This paper proposes an accelerator that is 60x more energy efficient than the previous state-of-the-art neural network accelerator. Designed down to the layout at 65 nm, it has a modest footprint and consumes only 320 mW, yet is still about 30x faster than high-end GPUs.
Proceedings Article (DOI)
Cambricon-x: an accelerator for sparse neural networks
Shijin Zhang, Zidong Du, Lei Zhang, Huiying Lan, Shaoli Liu, Ling Li, Qi Guo, Tianshi Chen, Yunji Chen +8 more
TL;DR: A novel accelerator, Cambricon-X, is proposed to exploit the sparsity and irregularity of NN models for increased efficiency. Experimental results show that this accelerator achieves, on average, a 7.23x speedup and 6.43x energy saving over the state-of-the-art NN accelerator.
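The efficiency gain in a sparse accelerator comes from computing only with nonzero weights. A minimal software sketch of this zero-skipping principle (an illustration of the general idea using NumPy, not Cambricon-X's actual indexing hardware or data format):

```python
import numpy as np

def dense_mv(weights, x):
    # Baseline dense matrix-vector product: every multiply happens,
    # including those involving zero-valued weights.
    return weights @ x

def sparse_mv(weights, x):
    # Zero-skipping sketch: for each output neuron, gather the indices
    # of its nonzero weights (akin to an index-based sparse format),
    # then multiply-accumulate only over those entries.
    out = np.zeros(weights.shape[0])
    for row in range(weights.shape[0]):
        nz = np.nonzero(weights[row])[0]  # positions of nonzero weights
        out[row] = np.dot(weights[row, nz], x[nz])
    return out

# With highly sparse weights, the sparse path performs far fewer
# multiply-accumulates while producing the same result.
W = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])
x = np.array([1.0, 2.0, 3.0])
print(sparse_mv(W, x))  # matches dense_mv(W, x)
```

In hardware, the index gathering is done by dedicated indexing units feeding the multipliers, so that the datapath never sees the skipped zeros; this software version only mimics the arithmetic savings, not the bandwidth savings.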
Journal Article (DOI)
Cambricon: an instruction set architecture for neural networks
TL;DR: This paper proposes a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques.