Sai Prashanth Muralidhara

Researcher at Pennsylvania State University

Publications: 14
Citations: 413

Sai Prashanth Muralidhara is an academic researcher from Pennsylvania State University. The author has contributed to research topics including shared memory and caching. The author has an h-index of 6 and has co-authored 14 publications receiving 396 citations.

Papers
Proceedings Article

Reducing memory interference in multicore systems via application-aware memory channel partitioning

TL;DR: In this paper, the authors present an alternative approach to reduce inter-application interference in the memory system: application-aware memory channel partitioning (MCP), which maps the data of applications that are likely to severely interfere with each other to different memory channels.
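The channel-assignment idea behind MCP lends itself to a short illustration. The Python sketch below assigns applications to memory channels from a single profiled statistic (MPKI as a proxy for memory intensity); the threshold, the half-and-half channel split, and the round-robin placement are assumptions of this sketch, not the paper's exact classification algorithm.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    mpki: float  # last-level-cache misses per kilo-instruction, a memory-intensity proxy

def assign_channels(apps, num_channels, intensity_threshold=10.0):
    """Route memory-intensive applications to a different set of channels than
    light ones, so light applications do not queue behind heavy traffic.
    The threshold and the half/half channel split are illustrative only."""
    light = sorted((a for a in apps if a.mpki < intensity_threshold), key=lambda a: a.mpki)
    heavy = sorted((a for a in apps if a.mpki >= intensity_threshold), key=lambda a: a.mpki)

    split = max(1, num_channels // 2)
    light_channels = list(range(split))                           # e.g. channels 0..split-1
    heavy_channels = list(range(split, num_channels)) or light_channels

    mapping = {}
    for i, a in enumerate(light):
        mapping[a.name] = light_channels[i % len(light_channels)]
    for i, a in enumerate(heavy):
        mapping[a.name] = heavy_channels[i % len(heavy_channels)]
    return mapping

if __name__ == "__main__":
    workload = [App("mcf", 40.2), App("lbm", 31.5), App("gcc", 2.1), App("povray", 0.4)]
    print(assign_channels(workload, num_channels=4))
    # e.g. {'povray': 0, 'gcc': 1, 'lbm': 2, 'mcf': 3}
```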
Proceedings Article

Optimizing shared cache behavior of chip multiprocessors

TL;DR: The proposed data locality optimization scheme reduces inter-core conflict misses in the shared cache by 67% on average when both allocation and scheduling are used, and the resulting execution time improvements come very close to the optimal savings that a hypothetical ideal scheme could achieve.
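As an illustration of the allocation half of such a scheme, the sketch below spaces per-core buffers so they occupy disjoint regions of the shared cache's set-index space, which removes inter-core conflict misses between those buffers. The cache geometry, the one-buffer-per-core setup, and the assumption that base addresses can be chosen directly (in practice this would go through OS page coloring) are simplifications of this sketch, not the paper's mechanism.

```python
LINE_SIZE = 64      # bytes per cache line (assumed)
NUM_SETS = 8192     # sets in the shared last-level cache (assumed)

def set_index(addr):
    """Cache set an address maps to in a physically indexed, set-associative cache."""
    return (addr // LINE_SIZE) % NUM_SETS

def conflict_free_bases(buffer_sizes):
    """Choose base addresses so each core's buffer occupies its own slice of
    the set-index space; buffers of different cores then never compete for
    the same sets.  Assumes each buffer fits inside its slice and that the
    allocator (in practice, OS page coloring) can honor the chosen bases."""
    region = NUM_SETS // len(buffer_sizes)          # sets reserved per core
    bases = []
    for core, size in enumerate(buffer_sizes):
        assert size <= region * LINE_SIZE, "buffer larger than its set region"
        bases.append(core * region * LINE_SIZE)     # first address inside the slice
    return bases

if __name__ == "__main__":
    bases = conflict_free_bases([64 * 1024] * 4)    # four cores, 64 KiB each
    print([set_index(b) for b in bases])            # disjoint starting sets: [0, 2048, 4096, 6144]
```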
Proceedings Article

Intra-application cache partitioning

TL;DR: A dynamic, runtime-system-based cache partitioning scheme that divides the shared cache space among the individual threads of a single application, and shows that speeding up the critical-path thread in this way improves the application's overall performance in the long term.
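A minimal sketch of what one repartitioning step could look like, assuming a way-partitioned shared cache and a per-thread progress counter: every thread keeps a minimum number of ways and the thread that has made the least progress (the presumed critical path) receives the rest. The progress metric and the winner-takes-the-rest rule are assumptions of this sketch, not the paper's policy.

```python
def repartition_ways(progress, total_ways, min_ways=1):
    """One repartitioning step: every thread keeps `min_ways` ways of the
    shared cache, and all remaining ways go to the thread that has made the
    least progress so far (the presumed critical path).  In hardware this
    allocation would be enforced with per-thread way masks."""
    assert total_ways >= min_ways * len(progress), "not enough ways to partition"
    allocation = {tid: min_ways for tid in progress}
    critical = min(progress, key=progress.get)       # slowest thread = critical path
    allocation[critical] += total_ways - min_ways * len(progress)
    return allocation

if __name__ == "__main__":
    # Thread 2 has completed the fewest loop iterations, so it gets the extra ways.
    iterations_done = {0: 950, 1: 900, 2: 610, 3: 940}
    print(repartition_ways(iterations_done, total_ways=16))   # {0: 1, 1: 1, 2: 13, 3: 1}
```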
Proceedings Article

Dynamic thread and data mapping for NoC based CMPs

TL;DR: This work presents dynamic (runtime) thread and data mappings for NoC-based CMPs that reduce the distance between the core that requests data and the core whose local memory holds that data.
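The distance-reduction idea can be pictured as a placement problem on a 2D mesh. The Python sketch below greedily places each thread on the free core that minimizes its access-weighted hop distance to the tiles holding its data; the greedy ordering, the XY-hop cost model, and the `access_counts` input are assumptions of this sketch rather than the paper's algorithm.

```python
def hops(tile_a, tile_b, mesh_width):
    """Manhattan (XY-routing) hop distance between two tiles of a 2D mesh."""
    ax, ay = tile_a % mesh_width, tile_a // mesh_width
    bx, by = tile_b % mesh_width, tile_b // mesh_width
    return abs(ax - bx) + abs(ay - by)

def map_threads(access_counts, mesh_width, num_cores):
    """Greedy placement: handle the highest-traffic thread first and put each
    thread on the free core that minimizes its access-weighted hop distance
    to the tiles whose local memories hold its data.
    access_counts[t][tile] = number of accesses thread t makes to data in `tile`."""
    free = set(range(num_cores))
    placement = {}
    for t in sorted(access_counts, key=lambda t: -sum(access_counts[t].values())):
        best = min(free, key=lambda c: sum(n * hops(c, tile, mesh_width)
                                           for tile, n in access_counts[t].items()))
        placement[t] = best
        free.remove(best)
    return placement

if __name__ == "__main__":
    # 4x4 mesh: thread 0 mostly touches data on tile 5, thread 1 on tile 10.
    traffic = {0: {5: 800, 6: 100}, 1: {10: 900}, 2: {0: 50}}
    print(map_threads(traffic, mesh_width=4, num_cores=16))   # {0: 5, 1: 10, 2: 0}
```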
Proceedings Article

Profiler and compiler assisted adaptive I/O prefetching for shared storage caches

TL;DR: The experimental data show that while I/O prefetching brings benefits, its effectiveness drops significantly as the number of CPUs increases; the proposed scheme improves performance, on average, by 19.9%, 11.9%, and 10.3% when 8 CPUs are used.
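One way to picture adaptive prefetching is as a feedback loop on prefetch usefulness. The sketch below raises the prefetch degree when most previously prefetched blocks were actually consumed and backs off when they were wasted; the thresholds, step sizes, and re-evaluation interval are assumptions of this sketch, not the profiler- and compiler-assisted scheme of the paper.

```python
class AdaptivePrefetcher:
    """Feedback-driven prefetch throttling for a shared storage cache: the
    prefetch degree (blocks fetched ahead of the current read) grows when
    most earlier prefetches were actually used, and shrinks when they were
    wasted, so prefetched blocks do not pollute the cache as more CPUs
    compete for it."""

    def __init__(self, max_degree=8):
        self.degree = 1          # blocks to prefetch ahead of each demand read
        self.max_degree = max_degree
        self.issued = 0          # prefetches issued in the current interval
        self.used = 0            # prefetched blocks later hit by a demand read

    def record(self, prefetched_block_was_used):
        """Call once per consumed or evicted prefetched block."""
        self.issued += 1
        self.used += int(prefetched_block_was_used)

    def next_degree(self, interval=100):
        """Re-evaluate the degree once `interval` prefetches have been observed."""
        if self.issued < interval:
            return self.degree
        accuracy = self.used / self.issued
        if accuracy > 0.75:                       # prefetches mostly useful: go deeper
            self.degree = min(self.max_degree, self.degree * 2)
        elif accuracy < 0.40:                     # mostly wasted: back off
            self.degree = max(1, self.degree // 2)
        self.issued = self.used = 0
        return self.degree
```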