
Ravishankar Iyer

Researcher at Intel

Publications: 116
Citations: 2826

Ravishankar Iyer is an academic researcher from Intel. The author has contributed to research topics including cache and cache pollution, has an h-index of 28, and has co-authored 112 publications receiving 2747 citations.

Papers
Proceedings Article

OWL: cooperative thread array aware scheduling techniques for improving GPGPU performance

TL;DR: This paper presents a coordinated CTA-aware scheduling policy that utilizes four schemes to minimize the impact of long memory latencies, and indicates that the proposed mechanism can provide a 33% average performance improvement over the commonly employed round-robin warp scheduling policy.
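The round-robin warp scheduling baseline mentioned above simply cycles through warps in a fixed order, issuing from the next ready one each cycle. A minimal sketch of that baseline (the class and method names here are illustrative, not from the paper or any GPU simulator):

```python
from collections import deque

class RoundRobinWarpScheduler:
    """Toy round-robin warp scheduler: issue from the next ready warp each cycle."""

    def __init__(self, warp_ids):
        self.queue = deque(warp_ids)

    def next_warp(self, ready):
        """Scan warps in rotation order; return the first ready one, or None."""
        for _ in range(len(self.queue)):
            warp = self.queue[0]
            self.queue.rotate(-1)  # move to the back regardless of readiness
            if warp in ready:
                return warp
        return None

sched = RoundRobinWarpScheduler([0, 1, 2, 3])
print(sched.next_warp({1, 2}))  # warp 0 is stalled, so warp 1 issues
```

Because this policy is oblivious to which CTA a warp belongs to, warps from many CTAs tend to reach their long-latency memory operations around the same time, which is exactly the behavior a CTA-aware policy tries to avoid.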
Proceedings Article

Cache revive: architecting volatile STT-RAM caches for enhanced performance in CMPs

TL;DR: This work formulates the relationship between retention time and write latency, and finds the optimal retention time for architecting an efficient STT-RAM cache hierarchy that overcomes STT-RAM's high write latency and write energy.
Proceedings Article

Orchestrated scheduling and prefetching for GPGPUs

TL;DR: This paper presents techniques that coordinate the thread scheduling and prefetching decisions in a General Purpose Graphics Processing Unit (GPGPU) architecture to better tolerate long memory latencies, and proposes a new prefetch-aware warp scheduling policy that overcomes problems with existing warp scheduling policies.
Proceedings Article

Communist, utilitarian, and capitalist cache policies on CMPs: caches as a shared resource

TL;DR: It is found that simple policies like LRU replacement and static uniform partitioning are not sufficient to provide near-optimal performance under any reasonable definition, indicating that some thread-aware cache resource allocation mechanism is required.
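LRU replacement, the baseline the paper finds insufficient for shared CMP caches, can be sketched with an ordered map (a generic textbook LRU, not the paper's simulation infrastructure; the capacity and interface are assumptions):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evict the least-recently-used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # key -> data, ordered oldest-first

    def access(self, key):
        """Return True on hit; on miss, insert and evict the LRU entry if full."""
        if key in self.lines:
            self.lines.move_to_end(key)  # mark as most recently used
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict least recently used
        self.lines[key] = None
        return False

cache = LRUCache(2)
cache.access("a"); cache.access("b"); cache.access("a")
cache.access("c")         # evicts "b", the least recently used line
print(cache.access("b"))  # False: "b" was evicted
```

Note that this policy is thread-unaware: on a shared cache, one thread's streaming accesses can evict another thread's working set, which is the motivation for the thread-aware allocation mechanisms the paper argues for.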
Proceedings Article

CHOP: Adaptive filter-based DRAM caching for CMP server platforms

TL;DR: Detailed simulations with server workloads show that filter-based DRAM caching techniques achieve, on average, over 30% performance improvement over previous solutions, orders of magnitude lower area overhead in tag space than cache-line-based DRAM caches, and significantly lower memory bandwidth consumption than page-granular DRAM caches.
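The general idea behind filter-based caching is to allocate a large-granularity cache entry only after an address has proven itself hot, so cold pages never consume tag space or fill bandwidth. A toy sketch of that idea (the threshold, structures, and names here are assumptions for illustration, not CHOP's actual design):

```python
from collections import Counter

class FilteredCache:
    """Toy filter-based cache: a page is admitted only after it has been
    touched `threshold` times, keeping cold pages out of the cache."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.counts = Counter()  # filter: per-page access counts
        self.cached = set()      # pages admitted into the cache

    def access(self, page):
        """Return True if the access hits in the cache."""
        if page in self.cached:
            return True
        self.counts[page] += 1
        if self.counts[page] >= self.threshold:
            self.cached.add(page)  # hot enough: admit into the cache
        return False

dram_cache = FilteredCache(threshold=3)
hits = [dram_cache.access(0x1000) for _ in range(4)]
print(hits)  # [False, False, False, True]: cached after three touches
```

The trade-off this sketch illustrates is that a small counting filter costs far less area than tagging every cache line, at the price of taking a few extra misses before a hot page starts hitting.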