
Yifan Sun

Researcher at College of William & Mary

Publications: 38
Citations: 460

Yifan Sun is an academic researcher at the College of William & Mary whose work spans topics including Computer science and Overhead (computing). The author has an h-index of 8 and has co-authored 28 publications receiving 227 citations. Previous affiliations of Yifan Sun include Northeastern University.

Papers
Posted Content

Summarizing CPU and GPU Design Trends with Product Data

TL;DR: The authors find that transistor scaling remains critical to keeping scaling laws such as Moore's Law valid, that architectural solutions have become increasingly important and will play a larger role in the future, and that GPUs consistently deliver higher performance than CPUs.
Proceedings Article

MGPUSim: enabling multi-GPU performance modeling and optimization

TL;DR: This work presents MGPUSim, a cycle-accurate, extensively validated multi-GPU simulator based on AMD's Graphics Core Next 3 (GCN3) instruction set architecture, and proposes the Locality API, an API extension that allows the GPU programmer to avoid the complexity of multi-GPU programming while precisely controlling data placement in multi-GPU memory.
Proceedings Article

Mapping the Landscape of COVID-19 Crisis Visualizations

TL;DR: In this paper, the authors analyze 668 COVID-19 visualizations and present a conceptual framework derived from that analysis, examining who uses what data to communicate what messages, in what form, and under what circumstances, in the context of crisis visualizations.
Proceedings Article

Profiling DNN Workloads on a Volta-based DGX-1 System

TL;DR: This work profiles and analyzes the training of five popular DNNs using 1, 2, 4, and 8 GPUs, breaking down training time across the forward/backward propagation (FP+BP) stage and the weight update (WU) stage to provide insight into the limiting factors of the training algorithm and to identify bottlenecks in the multi-GPU system architecture.