Tatiana Shpeisman
Researcher at Intel
Publications - 86
Citations - 3152
Tatiana Shpeisman is an academic researcher at Intel whose work centers on transactional memory and compilers. She has an h-index of 31 and has co-authored 84 publications receiving 2,998 citations. Her previous affiliations include the University of Maryland, College Park and PARC.
Papers
Journal ArticleDOI
Safe nondeterminism in a deterministic-by-default parallel language
Robert L. Bocchino, Stephen T. Heumann, Nima Honarmand, Sarita V. Adve, Vikram Adve, Adam Welc, Tatiana Shpeisman, et al.
TL;DR: The paper presents a language together with a type and effect system that supports nondeterministic computations with a deterministic-by-default guarantee, and provides a static semantics, a dynamic semantics, and a complete proof of soundness for the language, both with and without the barrier-removal feature.
Journal ArticleDOI
Single global lock semantics in a weakly atomic STM
Vijay Menon, Steven Balensiefer, Tatiana Shpeisman, Ali-Reza Adl-Tabatabai, Richard L. Hudson, Bratin Saha, Adam Welc, et al.
TL;DR: A new weakly atomic Java STM implementation is described that provides single-global-lock semantics while permitting concurrent execution, but the paper shows that this guarantee comes at a significant performance cost.
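To make the guarantee concrete, here is a minimal sketch of what single-global-lock semantics promises (this is the reference semantics only, not the paper's STM, and `SglaSketch`, `GLOBAL_LOCK`, and `atomicIncrement` are hypothetical names): every atomic block must behave as if it were guarded by one program-wide lock, even though the STM actually runs transactions concurrently.

```java
// Reference semantics for single-global-lock atomicity (SGLA): an atomic
// block like `atomic { counter++; }` must be observationally equivalent to
// acquiring one global lock. A weakly atomic STM that provides SGLA may run
// transactions concurrently, but must produce the same outcomes as this.
public class SglaSketch {
    private static final Object GLOBAL_LOCK = new Object(); // the conceptual single lock
    private static int counter = 0;

    // What an atomic increment must appear to do under SGLA.
    static void atomicIncrement() {
        synchronized (GLOBAL_LOCK) {
            counter++;
        }
    }

    // Run 4 threads, each performing 1000 atomic increments.
    static int run() throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) atomicIncrement();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // Under SGLA semantics the result is exactly 4000; an SGLA-conforming
        // STM must yield the same observable result despite concurrency.
        System.out.println(run());
    }
}
```

The performance cost the paper reports comes from the barriers an STM needs so that its concurrent execution remains indistinguishable from this serialized reference.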
Proceedings ArticleDOI
Invyswell: a hybrid transactional memory for Haswell's restricted transactional memory
TL;DR: Invyswell, a novel hybrid transactional memory (HyTM), exploits the benefits and manages the limitations of Haswell's RTM; across all STAMP benchmarks it outperforms NOrec, a state-of-the-art STM, by 35%, Hybrid NOrec, NOrec's hybrid implementation, by 18%, and Haswell's hardware-only lock elision by 25%.
Proceedings ArticleDOI
Latte: a language, compiler, and runtime for elegant and efficient deep neural networks
Leonard Truong, Rajkishore Barik, Ehsan Totoni, Hai Liu, Chick Markley, Armando Fox, Tatiana Shpeisman, et al.
TL;DR: Latte, a domain-specific language for DNNs, provides a natural abstraction for specifying new layers without sacrificing performance, and achieves a 3-6x speedup over Caffe (C++/MKL) on three state-of-the-art ImageNet models executing on an Intel Xeon E5-2699 v3 x86 CPU.
Proceedings ArticleDOI
Dynamic optimization for efficient strong atomicity
TL;DR: Measurements on a set of transactional and non-transactional Java workloads demonstrate that the presented techniques reduce the overhead of strong atomicity, relative to an efficient weak-atomicity baseline, from a factor of 5x down to 10% or less.
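The idea behind the overhead, sketched below under stated assumptions (this is an illustration, not the paper's system; `StrongAtomicitySketch`, `TX_LOCK`, `barrierRead`, and `unbarrieredBump` are hypothetical names): strong atomicity requires every non-transactional access to a shared field to pass through a read/write barrier so it cannot interleave with a transaction, and the optimization is to elide those barriers for data that is never touched inside a transaction.

```java
// Illustrative model of strong atomicity via barriers. A field accessed
// inside transactions gets barriered non-transactional accesses (modeled
// here as synchronizing on the transaction lock, i.e. a one-access
// transaction); a field proven never-transactional keeps plain accesses.
public class StrongAtomicitySketch {
    private static final Object TX_LOCK = new Object();
    static int txShared = 0;      // accessed inside transactions -> needs barriers
    static int privateData = 0;   // never transactional -> barrier can be elided

    // An atomic block that updates txShared in two steps; the intermediate
    // state (txShared + 5) must never be visible outside the transaction.
    static void transaction() {
        synchronized (TX_LOCK) {
            txShared += 5;
            txShared += 5;
        }
    }

    // Strongly atomic non-transactional read: behaves like a tiny transaction,
    // so it can never observe the intermediate state above.
    static int barrierRead() {
        synchronized (TX_LOCK) {
            return txShared;
        }
    }

    // Optimized access: analysis showed privateData is never used inside a
    // transaction, so the barrier is removed and this is a plain store.
    static void unbarrieredBump() {
        privateData++;
    }

    public static void main(String[] args) {
        transaction();
        unbarrieredBump();
        System.out.println(barrierRead() + " " + privateData);
    }
}
```

The 5x-to-10% reduction the abstract cites corresponds to replacing barriered accesses (the `barrierRead` pattern) with plain ones (the `unbarrieredBump` pattern) wherever the dynamic analysis proves it safe.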