
Christopher B. Colohan

Researcher at Carnegie Mellon University

Publications - 15
Citations - 1067

Christopher B. Colohan is an academic researcher from Carnegie Mellon University. The author has contributed to research on topics including speculative multithreading and threads (computing). The author has an h-index of 10 and has co-authored 15 publications receiving 1053 citations. Previous affiliations of Christopher B. Colohan include Google.

Papers
Proceedings Article

A scalable approach to thread-level speculation

TL;DR: This paper proposes and evaluates a design for supporting TLS that seamlessly scales to any machine size because it is a straightforward extension of write-back invalidation-based cache coherence (which itself scales both up and down).
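
The core mechanism, buffering speculative state in the cache and catching dependence violations on the invalidation messages that write-back coherence already sends, can be approximated with a small software model. The C sketch below is an illustrative analogy only: the read_set array and the spec_load/older_store helpers are invented names, not the hardware design the paper evaluates.

/*
 * Minimal single-threaded model of TLS violation detection
 * (an analogy for the epoch/read-set idea, not the evaluated hardware).
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define N 8

static int  shared[N];    /* memory shared between epochs                  */
static bool read_set[N];  /* locations speculatively read by the younger,  */
                          /* more speculative epoch                        */

/* The younger epoch records every speculative load in its read set. */
static int spec_load(int idx) {
    read_set[idx] = true;
    return shared[idx];
}

/*
 * When an older epoch stores, the coherence protocol sends an invalidation;
 * if it hits a speculatively loaded location in a younger epoch, that epoch
 * has violated a true dependence and must restart.
 */
static bool older_store(int idx, int value) {
    shared[idx] = value;
    return read_set[idx];  /* true => violation in the younger epoch */
}

int main(void) {
    /* Younger epoch speculatively reads shared[3] too early... */
    int v = spec_load(3);
    printf("speculative load saw %d\n", v);

    /* ...then the older epoch writes it, exposing the dependence. */
    if (older_store(3, 42)) {
        printf("violation: younger epoch restarts and reloads %d\n", shared[3]);
        memset(read_set, 0, sizeof read_set);  /* squash speculative state */
    }
    return 0;
}

Because the violation check rides on ordinary invalidation traffic, the scheme inherits whatever scalability the underlying coherence protocol has, which is the scaling argument made in the abstract.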
Journal Article

The STAMPede approach to thread-level speculation

TL;DR: This article proposes and evaluates a design for supporting TLS that seamlessly scales both within a chip and beyond because it is a straightforward extension of write-back invalidation-based cache coherence (which itself scales both up and down).
Proceedings Article

Compiler optimization of scalar value communication between speculative threads

TL;DR: This paper presents and evaluates dataflow algorithms for three increasingly aggressive instruction scheduling techniques that reduce the critical forwarding path introduced by the synchronization required to forward scalar values between speculative threads in thread-level speculation (TLS).
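
The critical forwarding path is the stretch of an iteration between receiving a forwarded scalar from the previous thread and signalling it to the next one. The C sketch below shows, under assumptions, the before/after effect of hoisting the instructions that produce the forwarded value: wait_for_prev and signal_next are hypothetical primitives stubbed with a plain variable so the example compiles, not the paper's interface.

#include <stdio.h>

/* Hypothetical forwarding primitives (assumed names), stubbed so the sketch
 * compiles; real TLS hardware would implement them as wait/signal on a
 * value-forwarding queue between threads. */
static int  forwarded_p;
static int  wait_for_prev(void)   { return forwarded_p; }
static void signal_next(int p)    { forwarded_p = p; }
static void unrelated_work(int i) { printf("  unrelated work %d\n", i); }
static int  expensive_call(int i) { return i * i; }

/* Unscheduled iteration: signal_next() sits at the bottom, so the next
 * thread must wait for the entire body before it can use p. */
static void iteration_unscheduled(int i) {
    int p = wait_for_prev();
    unrelated_work(i);          /* does not touch p */
    int q = expensive_call(i);  /* does not touch p */
    p = p + 1;                  /* the only real update of p */
    signal_next(p);
    (void)q;
}

/* Scheduled iteration: the update of p and the signal are hoisted above
 * everything that does not depend on p, shrinking the critical forwarding
 * path to a single increment. */
static void iteration_scheduled(int i) {
    int p = wait_for_prev();
    p = p + 1;
    signal_next(p);
    unrelated_work(i);
    int q = expensive_call(i);
    (void)q;
}

int main(void) {
    printf("unscheduled:\n");
    for (int i = 0; i < 2; i++) iteration_unscheduled(i);
    printf("scheduled (same result, shorter forwarding path):\n");
    for (int i = 0; i < 2; i++) iteration_scheduled(i);
    printf("final forwarded value: %d\n", wait_for_prev());
    return 0;
}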
Proceedings Article

Improving value communication for thread-level speculation

TL;DR: This paper shows how to apply value prediction, dynamic synchronization and hardware instruction prioritization to improve value communication and hence performance in several SPECint benchmarks that have been automatically transformed by the compiler to exploit TLS.
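
Of these three techniques, value prediction is the simplest to sketch: rather than stalling until the producing thread forwards a value, the consumer guesses it and verifies the guess when the real value arrives. The last-value predictor below is a minimal C illustration with an assumed table size and indexing scheme, not the predictor evaluated in the paper.

#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 256

/* Predictor table indexed by (hashed) instruction address, holding the
 * last value observed for that instruction. */
static int64_t last_value[TABLE_SIZE];

static unsigned index_of(uintptr_t pc) {
    return (unsigned)((pc >> 2) % TABLE_SIZE);
}

/* Predict the value a dependent thread would otherwise wait for. */
static int64_t predict(uintptr_t pc) { return last_value[index_of(pc)]; }

/* On verification, record the actual value and report whether the
 * earlier prediction was correct. */
static int update(uintptr_t pc, int64_t actual) {
    unsigned i = index_of(pc);
    int correct = (last_value[i] == actual);
    last_value[i] = actual;
    return correct;
}

int main(void) {
    uintptr_t pc = 0x400123;          /* hypothetical PC of the consumer */
    int64_t stream[] = {7, 7, 7, 8};  /* values forwarded over time      */
    for (int i = 0; i < 4; i++) {
        int64_t guess = predict(pc);
        int ok = update(pc, stream[i]);
        printf("predicted %lld, actual %lld: %s\n",
               (long long)guess, (long long)stream[i],
               ok ? "hit (no stall)" : "miss (squash and re-execute)");
    }
    return 0;
}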
Proceedings Article

Compiler optimization of memory-resident value communication between speculative threads

TL;DR: This work proposes using the compiler to first identify frequently occurring memory-resident data dependences and then insert synchronization to communicate the corresponding values and preserve those dependences; the authors find that synchronizing these frequently occurring dependences significantly improves the efficiency of parallel execution.
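
"Frequently occurring" implies a profiling step: count how often each cross-thread store/load pair actually carries a value at run time, and synchronize only the hot pairs, leaving the rest to ordinary speculation and violation detection. The C sketch below illustrates that counting step with invented program counters and an arbitrary threshold; it is not the paper's instrumentation.

#include <stdio.h>

#define MAX_PAIRS 64

/* One observed cross-epoch dependence: a store at one PC feeding a load
 * at another PC in a later speculative thread. */
struct dep_count {
    unsigned long store_pc, load_pc;
    unsigned long count;
};

static struct dep_count table[MAX_PAIRS];
static int n_pairs;

/* Record one observed cross-epoch memory dependence. */
static void record_dep(unsigned long store_pc, unsigned long load_pc) {
    for (int i = 0; i < n_pairs; i++) {
        if (table[i].store_pc == store_pc && table[i].load_pc == load_pc) {
            table[i].count++;
            return;
        }
    }
    if (n_pairs < MAX_PAIRS)
        table[n_pairs++] = (struct dep_count){store_pc, load_pc, 1};
}

int main(void) {
    /* Pretend the instrumented run observed these dependences. */
    for (int i = 0; i < 950; i++) record_dep(0x401a10, 0x401b40); /* hot  */
    for (int i = 0; i < 3;   i++) record_dep(0x402c20, 0x402d00); /* rare */

    /* Dependences above the threshold get compiler-inserted signal/wait;
     * the rest stay speculative and rely on violation detection. */
    const unsigned long threshold = 100;
    for (int i = 0; i < n_pairs; i++)
        printf("store %#lx -> load %#lx: %lu times => %s\n",
               table[i].store_pc, table[i].load_pc, table[i].count,
               table[i].count >= threshold ? "synchronize" : "speculate");
    return 0;
}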