Thomas E. Spelce
Researcher at Lawrence Livermore National Laboratory
Publications: 7
Citations: 111
Thomas E. Spelce is an academic researcher at Lawrence Livermore National Laboratory. He has contributed to research in topics including Xeon and InfiniBand, and has an h-index of 6, having co-authored 7 publications that have received 107 citations.
Papers
Proceedings Article
Beyond homogeneous decomposition: scaling long-range forces on Massively Parallel Systems
David F. Richards,Jim Glosli,Bor Chan,M. R. Dorr,Erik W. Draeger,Jean-Luc Fattebert,W. D. Krauss,Thomas E. Spelce,Frederick H. Streitz,Mike Surh,John A. Gunnels +10 more
TL;DR: The authors report an approach that creates a heterogeneous decomposition by partitioning effort according to the scaling properties of the component algorithms; this approach will allow many problems to scale across current and next-generation machines.
Journal Article
Performance evaluation of supercomputers using HPCC and IMB Benchmarks
Subhash Saini,Robert Ciotti,Brian T. N. Gunney,Thomas E. Spelce,Alice Koniges,Don Dossa,Panagiotis Adamidis,Rolf Rabenseifner,Sunil R. Tiyyagura,Matthias S. Mueller +9 more
TL;DR: The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers.
Proceedings Article
Performance evaluation of supercomputers using HPCC and IMB benchmarks
Subhash Saini,Robert Ciotti,Brian T. N. Gunney,Thomas E. Spelce,Alice Koniges,Don Dossa,Panagiotis Adamidis,Rolf Rabenseifner,Sunil R. Tiyyagura,Matthias S. Mueller,Rod Fatoohi +10 more
TL;DR: The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers.
Proceedings Article
Scaling physics and material science applications on a massively parallel Blue Gene/L system
George Almási,Gyan Bhanot,Alan Gara,Manish Gupta,James C. Sexton,Bob Walkup,Vasily V. Bulatov,Andrew W. Cook,Bronis R. de Supinski,James N. Glosli,Jeffrey Greenough,Francois Gygi,Alison Kubota,Steve Louis,Thomas E. Spelce,Frederick H. Streitz,Peter Williams,Robert K. Yates,Charles J. Archer,José E. Moreira,C. A. Rendleman +20 more
TL;DR: This study describes early experience with several physics and materials science applications on a 32,768-node Blue Gene/L system, recently installed at Lawrence Livermore National Laboratory; it represents the first proof point that MPI applications can effectively scale to over ten thousand processors.
Journal Article
BlueGene/L applications: Parallelism On a Massive Scale
Bronis R. de Supinski,Martin Schulz,Vasily V. Bulatov,William H. Cabot,Bor Chan,Andrew W. Cook,Erik W. Draeger,James N. Glosli,Jeffrey Greenough,Keith Henderson,Alison Kubota,Steve Louis,Brian J. Miller,Mehul Patel,Thomas E. Spelce,Frederick H. Streitz,Peter Williams,Robert K. Yates,Andy Yoo,George Almási,Gyan Bhanot,Alan Gara,John A. Gunnels,Manish Gupta,José E. Moreira,James C. Sexton,Bob Walkup,Charles J. Archer,Francois Gygi,Timothy C. Germann,Kai Kadau,Peter S. Lomdahl,C. A. Rendleman,Michael Welcome,William Clarence McLendon,Bruce Hendrickson,Franz Franchetti,Stefan Kral,Jürgen Lorenz,Christoph Überhuber,Edmond Chow,Ümit V. Çatalyürek +41 more
TL;DR: All applications show excellent scaling behavior, even at very large processor counts, with one code achieving a sustained performance of more than 100 Tflop/s, clearly demonstrating the success of the BG/L design.