Proceedings ArticleDOI

PInTE: Probabilistic Induction of Theft Evictions

TL;DR
In this paper, the authors introduce Probabilistic Induction of Theft Evictions, or PInTE, which allows controllable contention induction via data movement toward eviction in the last-level cache replacement policy.
Abstract
Cache contention analysis remains complex without a controlled and lightweight method of inducing contention for shared resources. Prior art commonly leverages a second workload on an adjacent core to cause contention, where the workload is either real or tunable. Using a secondary workload comes with unique problems in simulation: real workloads are not controllable and require many workload combinations to measure a broad range of contention, while tunable workloads provide control but do not guarantee contention unless every cache set is filled with contending accesses. Moreover, running multiple workloads increases simulation runtime by 2.4× on average. We introduce Probabilistic Induction of Theft Evictions, or PInTE, which allows controllable contention induction by moving data toward eviction in the last-level cache replacement policy. PInTE provides configurable contention with 2.6× fewer experiments, 2.2× less average time, and 5.6× less total time for a set of SPEC CPU 2017 speed-based traces. Further, PInTE incurs −8.46% average relative error in performance when compared to real contention. Runtime and reuse behavior of workloads under PInTE contention approximate behavior under real contention, with information distances of 0.03 bits and 0.84 bits, respectively. Additionally, PInTE enables a first-of-its-kind contention sensitivity analysis of SPEC and case studies that evaluate the resilience of microarchitectural techniques under growing contention.
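The abstract describes the mechanism only at a high level. As a rough, hypothetical illustration of what probabilistic theft induction could look like, the C++ sketch below layers a coin-flip demotion step on an SRRIP-style cache set: with some probability, a randomly chosen resident line is moved to the eviction-imminent re-reference value, standing in for the evictions an adjacent workload would cause. The class name PinteSet, the parameter theft_probability, and the exact demotion rule are illustrative assumptions, not the authors' implementation.

#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical sketch of PInTE-style contention induction on one
// SRRIP-like LLC set; not the paper's code.
constexpr uint8_t RRPV_MAX = 3;  // 2-bit re-reference prediction values

struct Line {
    uint64_t tag = 0;
    bool valid = false;
    uint8_t rrpv = RRPV_MAX;
};

class PinteSet {
public:
    PinteSet(size_t ways, double theft_probability)
        : lines_(ways), theft_(theft_probability), rng_(42) {}

    // Access one block; returns true on a hit. Before the normal SRRIP
    // logic, probabilistically demote a random resident line toward
    // eviction, standing in for a co-running workload's cache pressure.
    bool access(uint64_t tag) {
        maybe_induce_theft();
        for (Line& l : lines_) {
            if (l.valid && l.tag == tag) {
                l.rrpv = 0;  // hit: predict near-immediate re-reference
                return true;
            }
        }
        insert(tag);  // miss: fill the victim way
        return false;
    }

private:
    void maybe_induce_theft() {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        if (coin(rng_) < theft_) {
            std::uniform_int_distribution<size_t> pick(0, lines_.size() - 1);
            Line& victim = lines_[pick(rng_)];
            if (victim.valid) victim.rrpv = RRPV_MAX;  // next eviction candidate
        }
    }

    void insert(uint64_t tag) {
        Line& v = find_victim();
        v.tag = tag;
        v.valid = true;
        v.rrpv = RRPV_MAX - 1;  // SRRIP "long" re-reference insertion
    }

    // Standard SRRIP victim search: age all lines until one reaches RRPV_MAX.
    Line& find_victim() {
        for (;;) {
            for (Line& l : lines_)
                if (!l.valid || l.rrpv == RRPV_MAX) return l;
            for (Line& l : lines_) ++l.rrpv;
        }
    }

    std::vector<Line> lines_;
    double theft_;
    std::mt19937_64 rng_;
};

int main() {
    PinteSet set(16, /*theft_probability=*/0.25);
    unsigned hot_hits = 0;
    const int accesses = 100000;
    for (int i = 0; i < accesses; ++i) {
        hot_hits += set.access(i % 12);           // hot working set of 12 blocks
        if (i % 8 == 0) set.access(1000000 + i);  // sporadic streaming fills
    }
    std::printf("hot hit rate under induced contention: %.2f%%\n",
                100.0 * hot_hits / accesses);
}

Inducing contention this way requires no second workload: the demotion only perturbs replacement state, so the simulator runs a single trace while still exercising the same eviction paths that real sharing would.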
