scispace - formally typeset
Tien-Ju Yang

Researcher at Massachusetts Institute of Technology

Publications - 32
Citations - 6,868

Tien-Ju Yang is an academic researcher at the Massachusetts Institute of Technology. He has contributed to research on the topics of artificial neural networks and deep learning, has an h-index of 16, and has co-authored 30 publications receiving 4,279 citations. His previous affiliations include National Taiwan University.

Papers
Journal ArticleDOI

Efficient Processing of Deep Neural Networks: A Tutorial and Survey

TL;DR: The authors provide a comprehensive tutorial and survey of recent advances toward enabling efficient processing of DNNs. They discuss the various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of deep neural networks, either solely via hardware design changes or via joint hardware and DNN algorithm changes.
Posted Content

Efficient Processing of Deep Neural Networks: A Tutorial and Survey

TL;DR: The authors provide a comprehensive tutorial and survey of recent advances toward enabling efficient processing of DNNs, and discuss the various hardware platforms and architectures that support deep neural networks.
Journal ArticleDOI

Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

TL;DR: Eyeriss v2 is a DNN accelerator architecture designed for running compact and sparse DNNs. It can process sparse data directly in the compressed domain, for both weights and activations, and is therefore able to improve both processing speed and energy efficiency on sparse models.
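Processing sparse data "in the compressed domain" means operating directly on a compressed representation (nonzero values plus their coordinates) without first expanding back to dense form, so zero entries cost neither storage nor multiply-accumulate work. A minimal software sketch using a CSR-style format (the format choice here is illustrative only, not Eyeriss v2's actual on-chip encoding):

```python
# Sketch of compressed-domain sparse matrix-vector multiply:
# only nonzero weights are stored, and only nonzero products are
# computed, skipping the zeros entirely.

def to_csr(dense):
    """Compress a dense row-major matrix into CSR-style lists."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0:
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply-accumulate over the stored nonzeros only."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for i in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[i] * x[col_idx[i]]
        y.append(acc)
    return y

W = [[0, 2, 0],
     [1, 0, 3],
     [0, 0, 0]]
vals, cols, ptrs = to_csr(W)
y = csr_matvec(vals, cols, ptrs, [1.0, 1.0, 1.0])  # → [2.0, 4.0, 0.0]
```

Only 3 of the 9 entries of `W` are stored or multiplied; a hardware accelerator exploits the same property, but with dedicated logic for gathering operands.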
Proceedings ArticleDOI

Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning

TL;DR: The authors propose an energy-aware pruning algorithm for CNNs that directly uses the energy consumption of a CNN to guide the pruning process; the energy estimation methodology uses parameters extrapolated from actual hardware measurements.
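The core idea of energy-aware pruning (use an energy estimate, rather than parameter count alone, to decide what to prune first) can be illustrated with a simplified sketch. The energy model below is a made-up placeholder for illustration, not the hardware-measured model from the paper:

```python
# Simplified sketch of energy-aware pruning: rank layers by an
# (assumed, illustrative) energy estimate and prune the most
# energy-hungry layer first, using weight magnitude as the
# per-weight importance proxy.
import numpy as np

def estimate_energy(weights, activation_size):
    """Toy energy proxy: compute cost (nonzero weights x activation
    size) plus a data-movement term. The real method extrapolates
    these costs from actual hardware measurements."""
    macs = np.count_nonzero(weights) * activation_size
    data_movement = weights.size + activation_size
    return macs + data_movement

def prune_lowest_magnitude(weights, fraction):
    """Zero out the smallest-magnitude weights (a common proxy
    for the least important ones)."""
    k = int(weights.size * fraction)
    if k == 0:
        return weights
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
# Two hypothetical conv layers: (output channels, weights per channel).
layers = {"conv1": rng.normal(size=(64, 27)),
          "conv2": rng.normal(size=(128, 576))}
act_sizes = {"conv1": 112 * 112, "conv2": 56 * 56}

# Prune the layer with the highest estimated energy first.
target = max(layers, key=lambda n: estimate_energy(layers[n], act_sizes[n]))
layers[target] = prune_lowest_magnitude(layers[target], fraction=0.3)
```

After each pruning step the energy estimates would be recomputed, so the layer ranking can change as the network shrinks.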
Posted Content

Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

TL;DR: Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs, is presented. It introduces a highly flexible on-chip network that can adapt to the different amounts of data reuse and the bandwidth requirements of different data types, improving the utilization of the computation resources.