
Xiaobing Feng

Researcher at Chinese Academy of Sciences

Publications - 116
Citations - 2362

Xiaobing Feng is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in the topics Compiler and Speedup, has an h-index of 15, and has co-authored 106 publications receiving 1892 citations. Previous affiliations of Xiaobing Feng include Huawei.

Papers
Proceedings Article

ShiDianNao: shifting vision processing closer to the sensor

TL;DR: This paper proposes a vision-processing accelerator that is 60x more energy efficient than the previous state-of-the-art neural network accelerator. The design is carried down to the layout at 65 nm, has a modest footprint, consumes only 320 mW, and is still about 30x faster than high-end GPUs.
Proceedings Article

PuDianNao: A Polyvalent Machine Learning Accelerator

TL;DR: An ML accelerator called PuDianNao is presented, which accommodates seven representative ML techniques: k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. It can perform up to 1056 GOP/s while consuming only 596 mW.
Proceedings Article

Level by level: making flow- and context-sensitive pointer analysis scalable for millions of lines of code

TL;DR: The level-by-level algorithm, LevPA, gives rise to a precise and compact SSA representation for subsequent program analysis and optimization tasks, along with a flow- and context-sensitive MAY/MUST mod (modification) set and read set for each procedure.
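
For intuition, the following is a minimal, hypothetical sketch of the level-by-level idea: pointer variables are grouped by pointer level and resolved from the highest level downward, so that indirect assignments through already-resolved higher-level pointers can be handled precisely. It is an illustration only, not LevPA's actual algorithm; the statement encoding and all names are assumptions.

```python
# Illustrative sketch of level-by-level points-to resolution (not LevPA itself).
# Pointer variables are grouped by pointer level (int** -> 2, int* -> 1) and
# resolved from the highest level down, so an indirect store through a
# level-(k+1) pointer can be expanded exactly when level-k pointers are processed.
from collections import defaultdict

def solve(levels, direct, indirect):
    """levels:   {ptr: pointer level}
       direct:   [(p, x)]  meaning  p = &x
       indirect: [(q, x)]  meaning  *q = &x
       Returns {ptr: set of possible targets}."""
    pts = defaultdict(set)
    for lvl in sorted(set(levels.values()), reverse=True):  # highest level first
        for p, x in direct:
            if levels[p] == lvl:
                pts[p].add(x)
        for q, x in indirect:
            if levels[q] == lvl + 1:       # q was fully resolved one pass earlier
                for tgt in pts[q]:         # every pointer q may refer to
                    pts[tgt].add(x)
    return dict(pts)

# p: int**, q: int*;  p = &q;  *p = &x   ==>   q may point to x
print(solve({"p": 2, "q": 1}, direct=[("p", "q")], indirect=[("p", "x")]))
```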
Book Chapter

Software-hardware cooperative DRAM bank partitioning for chip multiprocessors

TL;DR: A new hardware/software cooperative DRAM bank partitioning method is presented that combines page coloring and XOR cache mapping, and the benefit potential of reducing inter-thread interference in chip multiprocessors is evaluated.
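
As a rough illustration of the page-coloring side of such a scheme (not the paper's actual address mapping or hardware support), an OS allocator can restrict each thread to physical pages whose DRAM bank index falls in that thread's bank partition, so threads never share banks. The page size, bank count, bit positions, and the simple XOR mapping below are all assumptions.

```python
# Simplified sketch of OS-level page coloring for DRAM bank partitioning.
# The address mapping is an assumed XOR bank-interleaving scheme, not the
# mapping used in the paper.
PAGE_SHIFT = 12            # 4 KB pages            (assumption)
BANK_BITS  = 3             # 8 DRAM banks          (assumption)
BANK_LOW   = 13            # low bank-index field  (assumption)
BANK_HIGH  = 20            # field XORed with it   (assumption)

def bank_color(phys_addr: int) -> int:
    """Bank index of a physical address under a simple XOR mapping."""
    low  = (phys_addr >> BANK_LOW)  & ((1 << BANK_BITS) - 1)
    high = (phys_addr >> BANK_HIGH) & ((1 << BANK_BITS) - 1)
    return low ^ high

def allocate_page(free_pages, allowed_banks):
    """Pick a free physical page frame whose bank color belongs to the
    partition assigned to the requesting thread."""
    for pfn in free_pages:
        if bank_color(pfn << PAGE_SHIFT) in allowed_banks:
            free_pages.remove(pfn)
            return pfn
    return None   # no page of a suitable color is available

# Thread 0 is restricted to banks {0,1,2,3}, thread 1 to {4,5,6,7}.
pool = list(range(1024))
print(allocate_page(pool, allowed_banks={0, 1, 2, 3}))
```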
Proceedings Article

Panthera: holistic memory management for big data processing over hybrid memories

TL;DR: Panthera, a semantics-aware, fully automated memory management technique for Big Data processing over hybrid memories, is proposed; it reduces energy by 32–52% at only a 1–9% execution-time overhead.
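
As a loose illustration of what data placement over hybrid memories can look like (this is not Panthera's static analysis or its garbage-collector integration), a policy might keep data that program semantics indicate is frequently accessed in DRAM and push cold data to NVM. The access-frequency hints, capacities, and names in the sketch are hypothetical.

```python
# Toy placement policy for a DRAM/NVM hybrid: fill DRAM with the most
# frequently accessed datasets first, spill the rest to NVM.
def place(datasets, dram_capacity):
    """datasets: [(name, size_bytes, expected_accesses)]  (hypothetical hints)
    Returns {name: "DRAM" | "NVM"}."""
    placement, used = {}, 0
    for name, size, accesses in sorted(datasets, key=lambda d: -d[2]):
        if used + size <= dram_capacity and accesses > 0:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVM"
    return placement

# e.g. a repeatedly scanned cached dataset goes to DRAM, a write-once log to NVM
print(place([("hot_rdd", 2 << 30, 50), ("cold_log", 8 << 30, 1)],
            dram_capacity=4 << 30))
```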