
Huawei Li

Researcher at Chinese Academy of Sciences

Publications -  261
Citations -  2467

Huawei Li is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research on topics including automatic test pattern generation and fault coverage. The author has an h-index of 21 and has co-authored 233 publications receiving 1882 citations.

Papers
Proceedings ArticleDOI

DeepBurning: automatic generation of FPGA-based learning accelerators for the neural network family

TL;DR: A design automation tool that allows application developers to build learning accelerators from scratch, targeting their specific NN models with custom configurations and optimized performance, and that greatly simplifies the design flow of NN accelerators for machine learning and AI application developers.
Proceedings ArticleDOI

An abacus turn model for time/space-efficient reconfigurable routing

TL;DR: The abacus-turn-model (AbTM) is proposed for designing time/space-efficient reconfigurable wormhole routing algorithms, and its applicability and scalable performance in large-scale NoC applications are demonstrated.
Journal ArticleDOI

On Topology Reconfiguration for Defect-Tolerant NoC-Based Homogeneous Manycore Systems

TL;DR: In this paper, the authors propose to achieve fault tolerance by employing redundancy at the core level instead of at the micro-architecture level, which not only maximizes the performance of the on-chip communication scheme, but also provides a unified topology to operating system and application software running on the processor.
Posted Content

EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks

TL;DR: The proposed EnGN is designed to accelerate the three key stages of GNN propagation, which are abstracted as common computing patterns shared by typical GNNs. It introduces the ring-edge-reduce (RER) dataflow, which tames the poor locality of sparsely and randomly connected vertices, and the RER PE-array that implements the RER dataflow.
Journal ArticleDOI

EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks

TL;DR: EnGN, as discussed by the authors, proposes a specialized accelerator architecture to accelerate the three key stages of GNN propagation, which are abstracted as common computing patterns shared by typical GNNs. It uses a graph tiling strategy to fit large graphs into EnGN and makes good use of the hierarchical on-chip buffers through adaptive computation reordering and tile scheduling.