scispace - formally typeset

Cy Chan

Researcher at Lawrence Berkeley National Laboratory

Publications: 38
Citations: 1380

Cy Chan is an academic researcher at Lawrence Berkeley National Laboratory. The author has contributed to research topics including computer science and speedup, has an h-index of 14, and has co-authored 34 publications receiving 1187 citations. Previous affiliations of Cy Chan include the Massachusetts Institute of Technology.

Papers
Proceedings ArticleDOI

PetaBricks: a language and compiler for algorithmic choice

TL;DR: PetaBricks is presented, a new implicitly parallel language and compiler in which providing multiple implementations of multiple algorithms for a problem is the natural way of programming, making algorithmic choice a first-class construct of the language.
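PetaBricks is its own language; purely as an illustrative analogy in Python (the function names and the timing-based selection strategy below are this sketch's own, not PetaBricks constructs), the core idea of algorithmic choice can be shown as timing several interchangeable implementations and selecting the fastest for a representative input:

```python
import random
import time

def insertion_sort(xs):
    """Simple O(n^2) sort; typically fast on small inputs."""
    xs = list(xs)
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def merge_sort(xs):
    """O(n log n) sort; typically wins on larger inputs."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def autotune_choice(candidates, sample):
    """Time each candidate algorithm on a sample input; return the fastest."""
    best, best_t = None, float("inf")
    for fn in candidates:
        start = time.perf_counter()
        fn(sample)
        elapsed = time.perf_counter() - start
        if elapsed < best_t:
            best, best_t = fn, elapsed
    return best

sample = [random.random() for _ in range(2000)]
chosen = autotune_choice([insertion_sort, merge_sort], sample)
```

In PetaBricks this choice is expressed declaratively in the language and resolved by the compiler's autotuner rather than by hand-written timing code as above.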
Proceedings ArticleDOI

An auto-tuning framework for parallel multicore stencil computations

TL;DR: In this article, the authors present a stencil auto-tuning framework that significantly advances programmer productivity by automatically converting a straightforward sequential Fortran 95 stencil expression into tuned parallel implementations in Fortran, C, or CUDA.
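The framework described above starts from a straightforward sequential stencil expression. As a minimal sketch of what such a kernel looks like (a generic 3-point Jacobi-style stencil in Python; the function name and `alpha` parameter are this example's own, not the paper's Fortran 95 input), this is the kind of loop nest an auto-tuner would transform into parallel, blocked, or GPU variants:

```python
def stencil_3pt(u, iters=1, alpha=0.25):
    """Apply a 3-point Jacobi-style stencil:
    v[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1]),
    leaving the boundary points unchanged."""
    u = list(u)
    for _ in range(iters):
        v = list(u)  # Jacobi update: read old values, write new ones
        for i in range(1, len(u) - 1):
            v[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = v
    return u
```

An auto-tuning framework explores transformations of this loop (thread decomposition, cache blocking, vectorization) that preserve exactly this sequential semantics.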
Journal ArticleDOI

AMReX: a framework for block-structured adaptive mesh refinement

TL;DR: Author(s): Zhang, Weiqun; Almgren, Ann; Beckner, Vince; Bell, John; Blaschke, Johannes; Chan, Cy; Day, Marcus; Friesen, Brian; Gott, Kevin; Graves, Daniel; Katz, Max; Myers, Andrew; Nguyen, Tan; Nonaka, Andrew; Rosso, Michele; Williams, Samuel; Zingale, Michael
Proceedings ArticleDOI

Language and compiler support for auto-tuning variable-accuracy algorithms

TL;DR: Language extensions are proposed that expose trade-offs between time and accuracy to the compiler and a structured genetic tuning algorithm to search the space of candidate algorithms and accuracies in the presence of recursion and sub-calls to other variable accuracy code.
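The time/accuracy trade-off described above can be sketched generically (this toy Newton-iteration example and its function names are illustrative assumptions, not the paper's language extensions or genetic tuner): an iterative kernel exposes an iteration count, and a tuner searches for the cheapest setting that meets a requested accuracy target.

```python
def sqrt_iter(x, iters):
    """Newton's method for sqrt(x): more iterations -> higher accuracy."""
    g = max(x / 2.0, 1.0)  # crude initial guess
    for _ in range(iters):
        g = 0.5 * (g + x / g)
    return g

def tune_accuracy(x, target_err, max_iters=64):
    """Return the smallest iteration count whose result meets the
    accuracy target, i.e. the cheapest point on the time/accuracy curve."""
    for iters in range(1, max_iters + 1):
        if abs(sqrt_iter(x, iters) ** 2 - x) <= target_err:
            return iters
    return max_iters
```

The paper's approach generalizes this idea: accuracy requirements become part of the language, and a structured genetic search explores candidate algorithms and accuracy settings, including through recursion and calls into other variable-accuracy code.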
Proceedings ArticleDOI

SiblingRivalry: online autotuning through local competitions

TL;DR: SiblingRivalry is presented, a new model for always-on online autotuning that allows parallel programs to continuously adapt and optimize themselves for their environment, often outperforming the original algorithm running on the entire system.
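The "local competition" idea can be illustrated with a minimal sketch (the function names, the integer "configuration", and the mutation strategy below are this example's assumptions, not SiblingRivalry's actual mechanism): a champion configuration races a mutated challenger on each incoming request, and the faster one becomes the new champion.

```python
import random
import time

def race(config_a, config_b, workload, run):
    """Run both configurations on the same workload; return the faster one."""
    t0 = time.perf_counter()
    run(config_a, workload)
    ta = time.perf_counter() - t0
    t0 = time.perf_counter()
    run(config_b, workload)
    tb = time.perf_counter() - t0
    return config_a if ta <= tb else config_b

def online_autotune(initial, workloads, run, mutate):
    """Always-on tuning loop: for each request, race the current champion
    against a mutated challenger and keep the winner."""
    champion = initial
    for w in workloads:
        challenger = mutate(champion)
        champion = race(champion, challenger, w, run)
    return champion

# Toy usage: cost is proportional to distance from an (unknown) optimum of 5.
toy_run = lambda cfg, w: time.sleep(abs(cfg - 5) * 0.001)
toy_mutate = lambda cfg: cfg + random.choice([-1, 1])
tuned = online_autotune(8, range(30), toy_run, toy_mutate)
```

SiblingRivalry itself splits the machine's cores in half and runs both candidates simultaneously on the same request, so the user always receives the faster answer while tuning happens for free.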