Xiaoyi Wang

Researcher at Beijing University of Technology

Publications: 19
Citations: 409

Xiaoyi Wang is an academic researcher at Beijing University of Technology. The author has contributed to research on topics including Speedup and the Finite difference method, has an h-index of 6, and has co-authored 19 publications receiving 277 citations. Previous affiliations of Xiaoyi Wang include Tsinghua University.

Papers
Journal Article

Type 2 diabetes mellitus prediction model based on data mining

TL;DR: A novel data-mining model for predicting type 2 diabetes mellitus (T2DM), built on a series of preprocessing procedures, is proposed and shown to be useful for realistic health management of diabetes.
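As a rough illustration of the kind of preprocess-then-mine pipeline such a model involves, here is a minimal sketch in Python with scikit-learn. The imputation, scaling, and random-forest stages are illustrative assumptions, since the summary does not specify the paper's actual preprocessing steps or classifier, and the data here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: synthetic clinical features and T2DM labels (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

# Preprocessing stages feeding a classifier, mirroring the preprocess-then-mine flow.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # fill missing clinical values
    ("scale", StandardScaler()),                    # normalize feature ranges
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
print(cross_val_score(model, X, y, cv=5).mean())    # estimate predictive accuracy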
Proceedings Article

GPU friendly fast Poisson solver for structured power grid network analysis

TL;DR: A novel simulation algorithm for large-scale structured power grid networks is proposed that formulates the traditional linear system as a special two-dimensional Poisson equation and solves it using analytical expressions based on the FFT.
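The core idea, diagonalizing a structured Laplacian-like system in a Fourier basis, can be sketched as follows. This minimal Python/NumPy version solves a periodic 2-D Poisson equation with the spectral Laplacian; the paper's solver targets power grid matrices on GPUs, so the boundary treatment and transform choice here are illustrative assumptions.

import numpy as np

def poisson_fft(f, h=1.0):
    """Solve laplacian(u) = f with periodic boundaries; f must have zero mean."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=h)    # angular wavenumbers in x
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=h)    # angular wavenumbers in y
    KX, KY = np.meshgrid(kx, ky)
    denom = -(KX**2 + KY**2)                    # Fourier symbol of the Laplacian
    denom[0, 0] = 1.0                           # avoid division by zero at the DC mode
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0                           # fix the free constant (zero-mean solution)
    return np.real(np.fft.ifft2(u_hat))

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u_exact = np.sin(X) * np.cos(Y)
f = -2 * u_exact                                # laplacian of sin(x)cos(y) = -2 sin(x)cos(y)
print(np.max(np.abs(poisson_fft(f, h=x[1] - x[0]) - u_exact)))  # error ~ 1e-15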
Proceedings Article

Physics-based electromigration modeling and assessment for multi-segment interconnects in power grid networks

TL;DR: The accuracy of the proposed transient analysis approach is validated against numerical analysis, and the resulting EM-aware full-chip power grid reliability analysis is demonstrated and compared with existing methods.
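Physics-based EM assessment in this line of work typically builds on Korhonen's stress-evolution equation; the summary does not state the paper's exact formulation, but a standard form is

\[
\frac{\partial \sigma}{\partial t}
  = \frac{\partial}{\partial x}\!\left[\kappa\left(\frac{\partial \sigma}{\partial x} + \frac{e Z \rho j}{\Omega}\right)\right],
\qquad
\kappa = \frac{D_a B \Omega}{k_B T}
\]

where \(\sigma\) is the hydrostatic stress, \(D_a\) the atomic diffusivity, \(B\) the effective bulk modulus, \(\Omega\) the atomic volume, \(eZ\) the effective charge, \(\rho\) the wire resistivity, and \(j\) the current density.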
Proceedings Article

Fast physics-based electromigration analysis for multi-branch interconnect trees

TL;DR: A new analysis method is proposed for the EM-induced hydrostatic stress evolution in multi-branch interconnect trees, which is the foundation of EM reliability assessment for large-scale on-chip interconnect networks such as power grid networks.
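For a single branch, the Korhonen-type stress evolution above can be integrated numerically as a point of reference. The sketch below is a plain explicit finite-difference solve in Python/NumPy with zero-atomic-flux boundaries; the grid and material parameters are illustrative assumptions, and the paper's method is analytical rather than this brute-force scheme.

import numpy as np

def em_stress_fd(n=101, L=5e-5, kappa=3e-14, G=1e12, steps=30000):
    """Explicit FTCS solve of d(sigma)/dt = d/dx[kappa*(d(sigma)/dx + G)].
    L: branch length [m]; kappa: stress diffusivity [m^2/s];
    G = eZ*rho*j/Omega: EM driving stress gradient [Pa/m]. Values are illustrative."""
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / kappa                           # respect the explicit stability limit
    sigma = np.zeros(n)                                # stress-free initial condition
    for _ in range(steps):
        flux = -kappa * (np.diff(sigma) / dx + G)      # atomic flux at interior faces
        flux = np.concatenate(([0.0], flux, [0.0]))    # blocked flux at both terminals
        sigma -= dt * np.diff(flux) / dx               # conservative update
    return sigma

sigma = em_stress_fd()
print(sigma[0], sigma[-1])   # tensile/compressive stress at the two ends, ~ +/- G*L/2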
Proceedings Article

Brief Industry Paper: Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms

TL;DR: In this article, a feature decomposition approach is proposed for memory-efficiency optimization of GNN inference, which can significantly reduce peak memory usage and mitigate out-of-memory (OOM) problems during GNN inference.
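The underlying trick can be sketched for a GCN-style layer computing (A @ X) @ W: decompose the input features into column slices and accumulate partial products, so only an n-by-slice intermediate is ever live instead of the full n-by-d aggregated matrix. The dense matrices, sizes, and slice width below are illustrative assumptions; the paper targets sparse graphs on memory-constrained edge devices.

import numpy as np

def gcn_layer_sliced(A, X, W, slice_width=64):
    """Compute (A @ X) @ W slice-by-slice over the input features, so the live
    intermediate is n x slice_width rather than the full n x d aggregation."""
    out = np.zeros((A.shape[0], W.shape[1]), dtype=X.dtype)
    for s in range(0, X.shape[1], slice_width):
        e = min(s + slice_width, X.shape[1])
        out += (A @ X[:, s:e]) @ W[s:e, :]   # accumulate this slice's contribution
    return out

# Stand-in graph: random adjacency and features (hypothetical sizes).
rng = np.random.default_rng(0)
A = (rng.random((500, 500)) < 0.02).astype(np.float32)
X = rng.normal(size=(500, 256)).astype(np.float32)
W = rng.normal(size=(256, 64)).astype(np.float32)
assert np.allclose(gcn_layer_sliced(A, X, W), (A @ X) @ W, atol=1e-3)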