
Aamer Jaleel

Researcher at Nvidia

Publications - 95
Citations - 6121

Aamer Jaleel is an academic researcher at Nvidia. His research focuses on topics including Cache and Cache pollution. He has an h-index of 33 and has co-authored 89 publications receiving 5484 citations. Previous affiliations include Intel and the University of Maryland, College Park.

Papers
Proceedings Article

Adaptive insertion policies for high performance caching

TL;DR: A Dynamic Insertion Policy (DIP) is proposed that chooses between the Bimodal Insertion Policy (BIP) and the traditional LRU policy depending on which incurs fewer misses; DIP reduces the average MPKI of the baseline 1MB 16-way L2 cache by 21%, bridging two-thirds of the gap between LRU and OPT.
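
The core of DIP is set dueling: a handful of leader sets always use LRU insertion, another handful always use BIP insertion, and a single saturating counter steers the remaining follower sets toward whichever leader group is missing less. The sketch below illustrates only that selector logic; the class and method names are illustrative, and the 10-bit counter width is an assumption, not the paper's implementation.

    // Minimal sketch of DIP-style set dueling (illustrative names; a 10-bit
    // counter is assumed, not taken from the paper). Misses in the LRU-leader
    // sets push the selector toward BIP, misses in the BIP-leader sets push it
    // back toward LRU, and follower sets adopt the current winner.
    #include <iostream>

    enum class InsertPolicy { LRU, BIP };

    class DipSelector {
    public:
        explicit DipSelector(int bits = 10)
            : max_((1 << bits) - 1), psel_(max_ / 2) {}

        // A miss in an LRU-leader set is evidence against LRU insertion.
        void missInLruLeader() { if (psel_ < max_) ++psel_; }
        // A miss in a BIP-leader set is evidence against BIP insertion.
        void missInBipLeader() { if (psel_ > 0) --psel_; }

        // Follower sets use whichever policy's leaders currently miss less.
        InsertPolicy followerPolicy() const {
            return (psel_ > max_ / 2) ? InsertPolicy::BIP : InsertPolicy::LRU;
        }

    private:
        int max_;
        int psel_;
    };

    int main() {
        DipSelector dip;
        // Pretend the LRU-leader sets thrash while the BIP leaders do not.
        for (int i = 0; i < 600; ++i) dip.missInLruLeader();
        std::cout << (dip.followerPolicy() == InsertPolicy::BIP ? "BIP" : "LRU")
                  << " insertion chosen for follower sets\n";
    }

BIP itself inserts most incoming blocks at the LRU position and only occasionally at the MRU position, which is what gives the combined policy its thrash resistance.
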
Proceedings Article

High performance cache replacement using re-reference interval prediction (RRIP)

TL;DR: This paper proposes Static RRIP (SRRIP), which is scan-resistant, and Dynamic RRIP (DRRIP), which is both scan-resistant and thrash-resistant; both require only 2 bits per cache block and integrate easily into the existing LRU approximations found in modern processors.
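
RRIP replaces the LRU recency stack with a small re-reference prediction value (RRPV) per block. The sketch below shows the SRRIP case with the 2-bit RRPVs mentioned above: fills predict a long re-reference interval, hits predict a near-immediate one, and the victim is any block that has aged to the distant value. Struct and function names are illustrative, and DRRIP's dueling between SRRIP and a bimodal variant is omitted.

    // Minimal sketch of SRRIP victim selection with 2-bit re-reference
    // prediction values (RRPVs); names are illustrative, not the paper's design.
    #include <array>
    #include <cstddef>
    #include <iostream>

    constexpr int kMaxRrpv = 3;              // 2-bit counter: values 0..3

    struct Block { int rrpv = kMaxRrpv; };   // empty blocks start "distant"

    template <std::size_t Ways>
    std::size_t findVictim(std::array<Block, Ways>& set) {
        while (true) {
            for (std::size_t w = 0; w < Ways; ++w)
                if (set[w].rrpv == kMaxRrpv) return w;   // distant: evict this
            for (auto& b : set) ++b.rrpv;                // nobody distant: age all
        }
    }

    template <std::size_t Ways>
    void onFill(std::array<Block, Ways>& set, std::size_t way) {
        set[way].rrpv = kMaxRrpv - 1;   // predict "long", not "near": scan-resistant
    }

    template <std::size_t Ways>
    void onHit(std::array<Block, Ways>& set, std::size_t way) {
        set[way].rrpv = 0;              // re-referenced: predict reuse soon
    }

    int main() {
        std::array<Block, 4> set;       // one 4-way set
        std::size_t victim = findVictim(set);
        onFill(set, victim);
        onHit(set, victim);
        std::cout << "filled way " << victim
                  << ", rrpv now " << set[victim].rrpv << "\n";
    }
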
Journal Article

DRAMsim: a memory system simulator

TL;DR: DRAMsim is introduced: a highly configurable C-based memory system simulator that implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, and makes it easy to vary their parameters.
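
As an illustration of the kind of timing parameters such a simulator is configured with, the sketch below models a worst-case row-miss read as precharge + activate + column access. The struct, field names, and cycle counts are generic DDR-style placeholders, not DRAMsim's actual configuration format or API.

    // Illustrative DDR-style timing parameters and a worst-case row-miss read
    // latency; everything here is a generic placeholder, not DRAMsim's interface.
    #include <iostream>

    struct DramTiming {
        int tRP;    // precharge period (cycles)
        int tRCD;   // activate-to-read delay (cycles)
        int tCAS;   // column access (read) latency (cycles)
    };

    // A read that misses the open row must precharge, activate, then read.
    int rowMissReadLatency(const DramTiming& t) {
        return t.tRP + t.tRCD + t.tCAS;
    }

    int main() {
        DramTiming example{5, 5, 5};   // example timings only
        std::cout << "row-miss read latency: "
                  << rowMissReadLatency(example) << " cycles\n";
    }
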
Journal Article

Scheduling heterogeneous multi-cores through Performance Impact Estimation (PIE)

TL;DR: This paper proposes Performance Impact Estimation (PIE) as a mechanism to predict which workload-to-core mapping is likely to provide the best performance; PIE requires limited hardware support and improves system performance by an average of 5.5% over recent state-of-the-art scheduling proposals and by 8.7% over a sampling-based scheduling policy.
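
PIE's key idea is to estimate, while a workload runs on one core type, what its performance would be on the other core type, and to schedule accordingly. The toy sketch below captures only that shape: CPI is split into a compute and a memory component and scaled by assumed ILP/MLP ratios to estimate big-core CPI; the scaling factors and names are illustrative stand-ins, not the paper's estimation model.

    // Toy sketch of the PIE idea: estimate big-core CPI from components measured
    // on the small core, then pick the workload-to-core mapping with the higher
    // aggregate throughput. Ratios and names are assumptions for illustration.
    #include <iostream>

    struct CpiBreakdown {
        double compute;   // cycles per instruction spent executing
        double memory;    // cycles per instruction stalled on memory
    };

    // Hypothetical core-to-core scaling factors (not from the paper).
    constexpr double kIlpRatio = 0.6;   // big core shrinks the compute component
    constexpr double kMlpRatio = 0.8;   // big core hides more memory latency

    double estimateBigCoreCpi(const CpiBreakdown& onSmall) {
        return onSmall.compute * kIlpRatio + onSmall.memory * kMlpRatio;
    }

    int main() {
        CpiBreakdown a{1.2, 0.3};   // mostly compute-bound workload
        CpiBreakdown b{0.8, 2.5};   // mostly memory-bound workload

        double smallCpiA = a.compute + a.memory, bigCpiA = estimateBigCoreCpi(a);
        double smallCpiB = b.compute + b.memory, bigCpiB = estimateBigCoreCpi(b);

        // Compare aggregate throughput (sum of IPC) for the two possible mappings.
        double aOnBig = 1.0 / bigCpiA + 1.0 / smallCpiB;
        double bOnBig = 1.0 / bigCpiB + 1.0 / smallCpiA;
        std::cout << "put workload " << (aOnBig > bOnBig ? "A" : "B")
                  << " on the big core\n";
    }
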
Proceedings Article

Adaptive insertion policies for managing shared caches

TL;DR: This paper proposes Thread-Aware Dynamic Insertion Policy (TADIP), an adaptive insertion policy that takes into account the memory requirements of each concurrently executing application and provides performance benefits similar to doubling the size of an LRU-managed cache.
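
TADIP extends DIP's set dueling so that each co-running thread learns its own insertion policy for the shared cache. The sketch below keeps one DIP-style saturating counter per thread; the leader-set bookkeeping and the interaction between threads are simplified away, and all names and counter widths are illustrative.

    // Minimal sketch of the thread-aware extension: one DIP-style saturating
    // counter per hardware thread, so each co-running application independently
    // learns whether LRU or BIP insertion suits it in the shared cache.
    #include <array>
    #include <cstddef>
    #include <iostream>

    enum class InsertPolicy { LRU, BIP };

    struct ThreadSelector {
        int psel = 512;   // 10-bit saturating counter, starts at the midpoint
        void missInLruLeader() { if (psel < 1023) ++psel; }
        void missInBipLeader() { if (psel > 0) --psel; }
        InsertPolicy policy() const {
            return psel > 512 ? InsertPolicy::BIP : InsertPolicy::LRU;
        }
    };

    int main() {
        std::array<ThreadSelector, 4> perThread;   // four co-running applications
        // Thread 0 streams through the cache, so its LRU-leader sets keep missing.
        for (int i = 0; i < 600; ++i) perThread[0].missInLruLeader();
        for (std::size_t t = 0; t < perThread.size(); ++t)
            std::cout << "thread " << t << ": "
                      << (perThread[t].policy() == InsertPolicy::BIP ? "BIP" : "LRU")
                      << " insertion\n";
    }
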