
Linghao Song

Researcher at Duke University

Publications - 43
Citations - 1938

Linghao Song is an academic researcher from Duke University. The author has contributed to research on topics including Speedup and Deep learning. The author has an h-index of 14 and has co-authored 40 publications receiving 1,156 citations. Previous affiliations of Linghao Song include the University of California, Los Angeles and the University of Pittsburgh.

Papers
Proceedings Article

PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning

TL;DR: PipeLayer is a ReRAM-based processing-in-memory (PIM) accelerator for CNNs that supports both training and testing. It proposes a highly parallel design based on the notions of parallelism granularity and weight replication, which enables highly pipelined execution of both training and testing without introducing the potential stalls of previous work.
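
The pipelining intuition in this summary can be illustrated with a toy throughput model (hypothetical numbers and function names, not the paper's actual analysis): replicating the weights of a slow layer lowers its effective initiation interval, so the bottleneck stage no longer stalls the rest of the pipeline.

```python
# Toy model (hypothetical, not from the paper): throughput of a
# layer-pipelined accelerator where a slow stage can be replicated.

def pipeline_throughput(stage_cycles, replication):
    """Images per cycle for a linear pipeline of layer stages.

    With r replicated copies of a stage's weights, r inputs stream
    through that stage concurrently, so its effective cycles-per-input
    drop by roughly a factor of r.
    """
    effective = [c / r for c, r in zip(stage_cycles, replication)]
    return 1.0 / max(effective)  # the slowest stage sets the pace

layers = [40, 10, 10]                          # cycles per input (made up)
print(pipeline_throughput(layers, [1, 1, 1]))  # unbalanced: 1/40 = 0.025
print(pipeline_throughput(layers, [4, 1, 1]))  # replicate stage 0: 1/10 = 0.1
```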
Proceedings Article

GraphR: Accelerating Graph Processing Using ReRAM

TL;DR: GraphR is the first ReRAM-based graph processing accelerator. It is built on the principle of near-data processing and explores the opportunity of performing massively parallel analog operations at low hardware and energy cost.
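
The "massively parallel analog operations" refer to the crossbar primitive: edge weights are programmed as cell conductances, vertex values are applied as voltages, and the summed column currents give a matrix-vector product in one analog step. Below is a small NumPy emulation of that idea driving a PageRank-style iteration; the graph, sizes, and constants are illustrative, not taken from the paper.

```python
import numpy as np

# NumPy emulation (illustrative only) of the crossbar primitive behind
# ReRAM graph accelerators: edge weights stored as conductances, vertex
# values applied as voltages, column currents summed by Kirchhoff's law,
# i.e. a matrix-vector product in one analog step.

G = np.array([[0, 1, 1, 0],        # made-up adjacency block for one crossbar
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

col_sums = G.sum(axis=0)                       # column-normalize for a
M = G / np.where(col_sums == 0, 1, col_sums)   # PageRank-style vertex program

rank = np.full(4, 0.25)
for _ in range(20):
    rank = 0.15 / 4 + 0.85 * (M @ rank)        # one "crossbar" op per step
print(rank)
```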
Journal Article

A Survey of Accelerator Architectures for Deep Neural Networks

TL;DR: Surveys architectures that support DNN execution, covering computing units, dataflow optimization, targeted network topologies, architectures built on emerging technologies, and accelerators for emerging applications.
Proceedings Article

A spiking neuromorphic design with resistive crossbar

TL;DR: This work proposes a spiking neuromorphic design built on resistive crossbar structures and implemented in IBM 130nm technology. It achieves more than 50% energy savings, while the average probability of failed recognition increases by only 1.46% and 5.99% in the feedforward and Hopfield implementations, respectively.
Proceedings Article

HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array

TL;DR: HyPar partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across a DNN accelerator array, and uses a hierarchical layer-wise dynamic programming method to search for the partition of each layer.
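
To make the search concrete, here is a minimal sketch of a layer-wise dynamic program in the same spirit: each layer picks one of two partitions (data or model parallelism), paying an intra-layer cost plus a transition cost that depends on the neighboring layer's choice. The cost tables are hypothetical placeholders, not HyPar's actual cost model.

```python
# Minimal sketch of a layer-wise dynamic program in the spirit of the
# summary above. Each layer chooses "data" or "model" parallelism; the
# cost tables below are hypothetical placeholders.

INTRA = {"data": [4, 6, 5], "model": [7, 3, 4]}    # per-layer cost by choice
TRANS = {("data", "data"): 0, ("model", "model"): 0,
         ("data", "model"): 2, ("model", "data"): 2}

def best_partition(n_layers):
    # dp[c] = min cost of layers 0..i when layer i uses choice c
    dp = {c: INTRA[c][0] for c in ("data", "model")}
    back = []                       # backpointers for plan recovery
    for i in range(1, n_layers):
        new_dp, choices = {}, {}
        for c in ("data", "model"):
            prev = min(("data", "model"),
                       key=lambda p: dp[p] + TRANS[(p, c)])
            new_dp[c] = dp[prev] + TRANS[(prev, c)] + INTRA[c][i]
            choices[c] = prev
        back.append(choices)
        dp = new_dp
    last = min(dp, key=dp.get)      # best choice for the final layer
    plan = [last]
    for choices in reversed(back):  # walk backpointers to the first layer
        plan.append(choices[plan[-1]])
    return dp[last], plan[::-1]

print(best_partition(3))            # -> (13, ['data', 'model', 'model'])
```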