
Yinhe Han

Researcher at Chinese Academy of Sciences

Publications: 200
Citations: 2734

Yinhe Han is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in the topics of Computer science and Network on a chip, has an h-index of 20, and has co-authored 168 publications receiving 2078 citations. Previous affiliations of Yinhe Han include Huawei.

Papers
Proceedings Article

FlexFlow: A Flexible Dataflow Accelerator Architecture for Convolutional Neural Networks

TL;DR: This paper proposes a flexible dataflow architecture (FlexFlow) that can leverage the complementary effects among feature-map, neuron, and synapse parallelism to mitigate the mismatch between the parallel types supported by the computing engine and the dominant parallel types of CNN workloads.
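
To make the three parallelism dimensions concrete, here is a minimal Python sketch of a plain convolution loop nest, annotated with which loop corresponds to feature-map, neuron, and synapse parallelism. It only illustrates the terminology under assumed tensor shapes; it is not FlexFlow's dataflow or code.

# Illustrative sketch (not the FlexFlow implementation): a naive convolution
# loop nest annotated with the three parallelism dimensions the paper names.
import numpy as np

def conv_layer(ifmaps, weights):
    """ifmaps: (C_in, H, W); weights: (C_out, C_in, K, K) -> (C_out, H-K+1, W-K+1)."""
    c_in, h, w = ifmaps.shape
    c_out, _, k, _ = weights.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for m in range(c_out):                # feature-map parallelism: independent output maps
        for y in range(h - k + 1):        # neuron parallelism: independent output pixels
            for x in range(w - k + 1):
                acc = 0.0
                for c in range(c_in):     # synapse parallelism: partial products of one neuron
                    for i in range(k):
                        for j in range(k):
                            acc += ifmaps[c, y + i, x + j] * weights[m, c, i, j]
                out[m, y, x] = acc
    return out

An accelerator can unroll any of these loops in hardware; the mismatch the paper targets arises when the unrolled dimension is not the one a given layer has in abundance.
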
Proceedings Article

DeepBurning: automatic generation of FPGA-based learning accelerators for the neural network family

TL;DR: A design automation tool that allows application developers to build, from scratch, learning accelerators targeting their specific NN models with custom configurations and optimized performance, greatly simplifying the design flow of NN accelerators for machine learning and AI application developers.
Journal Article

RT3D: Real-Time 3-D Vehicle Detection in LiDAR Point Cloud for Autonomous Driving

TL;DR: A real-time three-dimensional (RT3D) vehicle detection method that uses only LiDAR point clouds to predict the location, orientation, and size of vehicles, and proposes a pose-sensitive feature map design that is strongly activated by the relative poses of vehicles, leading to high regression accuracy.
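
As a rough illustration of regressing location, orientation, and size, the sketch below uses a generic 7-parameter 3-D box encoding (centre, dimensions, yaw) relative to an anchor box. The encoding and the function name box_regression_targets are assumptions for illustration only; they are not taken from the RT3D paper.

# Minimal sketch, assuming a common 7-parameter 3-D box encoding
# (x, y, z, l, w, h, yaw); the exact regression targets in RT3D may differ.
import math

def box_regression_targets(anchor, gt):
    """anchor, gt: dicts with keys x, y, z, l, w, h, yaw (metres / radians)."""
    diag = math.hypot(anchor["l"], anchor["w"])          # normalise planar offsets
    return {
        "dx": (gt["x"] - anchor["x"]) / diag,
        "dy": (gt["y"] - anchor["y"]) / diag,
        "dz": (gt["z"] - anchor["z"]) / anchor["h"],
        "dl": math.log(gt["l"] / anchor["l"]),
        "dw": math.log(gt["w"] / anchor["w"]),
        "dh": math.log(gt["h"] / anchor["h"]),
        "dyaw": gt["yaw"] - anchor["yaw"],
    }
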
Proceedings Article

C-brain: a deep learning accelerator that tames the diversity of CNNs through adaptive data-level parallelization

TL;DR: This work proposes a novel deep learning accelerator that offers multiple types of data-level parallelism (inter-kernel, intra-kernel, and hybrid) and can adaptively switch among the three types and their corresponding data-tiling schemes, dynamically matching different networks or even different layers of a single network.
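
The sketch below shows, with a made-up heuristic and a hypothetical function pick_parallelism, how a per-layer choice among inter-kernel, intra-kernel, and hybrid parallelism could be driven by layer shape and processing-element count. It is only in the spirit of the idea, not C-Brain's actual selection logic or thresholds.

# Illustrative sketch only: a toy per-layer heuristic for choosing a
# data-level parallelisation mode, not C-Brain's real policy.
def pick_parallelism(num_kernels, kernel_size, pe_count):
    ops_per_kernel = kernel_size * kernel_size
    if num_kernels >= pe_count:
        return "inter-kernel"     # enough independent kernels to fill all PEs
    if ops_per_kernel >= pe_count:
        return "intra-kernel"     # few kernels, but each is large enough to split
    return "hybrid"               # split PEs both across and within kernels

# Example: a deep layer with many small kernels vs. an early layer with few large ones.
for layer in [dict(num_kernels=256, kernel_size=3), dict(num_kernels=16, kernel_size=11)]:
    print(layer, "->", pick_parallelism(layer["num_kernels"], layer["kernel_size"], pe_count=64))
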
Proceedings Article

An abacus turn model for time/space-efficient reconfigurable routing

TL;DR: The abacus turn model (AbTM) is proposed for designing time/space-efficient reconfigurable wormhole routing algorithms, and its applicability and scalable performance in large-scale NoC applications are demonstrated.
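
For background on turn-model routing in general, the sketch below encodes a set of prohibited turns on a 2-D mesh and checks each hop against it; a reconfigurable router would update that table at run time. The table shown is the classic west-first turn model, used only to illustrate the mechanism; it is not AbTM's actual turn set.

# Sketch of turn-model routing on a 2-D mesh with a table of prohibited turns.
# The table below is the classic west-first model, for illustration only.
PROHIBITED_TURNS = {("N", "W"), ("S", "W")}   # west-first: never turn into the west direction

def next_hops(cur, dst):
    """Return productive directions from cur=(x, y) toward dst=(x, y) as N/S/E/W labels."""
    (cx, cy), (dx, dy) = cur, dst
    dirs = []
    if dx > cx: dirs.append("E")
    if dx < cx: dirs.append("W")
    if dy > cy: dirs.append("N")
    if dy < cy: dirs.append("S")
    return dirs

def allowed(prev_dir, new_dir):
    """A hop is allowed unless the (incoming, outgoing) turn is prohibited."""
    return prev_dir is None or (prev_dir, new_dir) not in PROHIBITED_TURNS

# e.g. a packet travelling north may not then turn west under this table:
# allowed("N", "W") -> False
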